Although the list of uses for a 2009-era Mac Mini isn’t very long, running new-and-upcoming operating systems like Haiku on one would seem to be an interesting use case. This is what [The Phintage Collector] recently took a swing at, using both the 2024 Beta 5 release and a current nightly build. The focus was mostly on the 32-bit build, as this has binary compatibility with BeOS applications, but the 64-bit version of Haiku was of course also installed.
One of the main issues with these Mac systems is that they use EFI for the BIOS, so you’re condemned to either take your chances with the always glitchy CSM ‘classic BIOS’ mode, or to make Haiku and EFI get along. While for the 64-bit version of Haiku this wasn’t too much of a struggle, the 32-bit version ran into the problem that the 64-bit EFI BIOS really doesn’t like 32-bit software. After a while the 32-bit version of Haiku was thus abandoned for a later revisit.
With the 64-bit version a lot of things just work, though audio couldn’t be made to work even with a USB dongle, and there’s no hardware acceleration for graphics, so gaming isn’t really going to happen either. The positive thing here is probably that as a test system for 64-bit Haiku such a Mac Mini isn’t too crazy, it being just an Intel system with an Apple-flavor EFI BIOS.
If you’re into giving it a shot yourself, the video description page contains a lot of resources to consult.
Anthropic recently “hosted about 15 Christian leaders from Catholic and Protestant churches, academia, and the business world” for a two-day summit, reports the Washington Post:
Anthropic staff sought advice on how to steer Claude’s moral and spiritual development as the chatbot reacts to complex and unpredictable ethical queries, participants said. The wide-ranging discussions also covered how the chatbot should respond to users who are grieving loved ones and whether Claude could be considered a “child of God.”
“They’re growing something that they don’t fully know what it’s going to turn out as,” said Brendan McGuire, a Catholic priest based in Silicon Valley who has written about faith and technology, and participated in the discussions at Anthropic. “We’ve got to build in ethical thinking into the machine so it’s able to adapt dynamically.” Attendees also discussed how Claude should engage with users at risk of self-harm, and the right attitude for the chatbot to adopt toward its own potential demise, such as being shut off, said one participant, who spoke on the condition of anonymity to share details of the conversations…
Anthropic has been more vocal than most top tech firms about the potential risks of more powerful AI. Its leaders have suggested that tools like chatbots already raise profound philosophical and moral questions and may even show flickers of consciousness, a fringe idea in tech circles that critics say lacks evidence. The summit signals that Anthropic is willing to keep exploring ideas outside the Silicon Valley mainstream, even as it emerges as one of the most powerful players in the AI race due to Claude’s popularity with programmers, businesses, government agencies and the military…. Anthropic chief executive Dario Amodei has said he is open to the idea that Claude may already have some form of consciousness, and company leaders frequently talk about the need to give it a moral character…
Some Anthropic staff at the meeting “really don’t want to rule out the possibility that they are creating a creature to whom they owe some kind moral duty,” the participant said. Other company representatives present did not find that framework helpful, according to the participant. The discussions appeared to take a toll on some senior Anthropic staff, who became visibly emotional “about how this has all gone so far [and] how they can imagine this going,” the participant said. Anthropic is working to include more voices from different groups, including religious communities, to help shape its AI, a spokesperson told the Washington Post.
“Anthropic’s March summit with Christian leaders was billed as the first in a series of gatherings with representatives from different religious and philosophical traditions, said attendee Brian Patrick Green, a practicing Catholic who teaches AI and technology ethics at Santa Clara University.”
Most of today’s enterprise AI still operates within the boundaries of cloud datacenters.
It handles digital tasks like analysis or personalization well, but it struggles when intelligence needs to be applied in the physical world, where decisions need to be instant and IT infrastructure is shifting.
Models are therefore becoming smaller and more specialized, running on edge hardware and responding to constantly changing data streams.
Mohan Varthakavi
Vice president of software development, AI and edge at Couchbase.
Physical AI embeds intelligence directly into vehicles, warehouses, aircraft, retail spaces and industrial systems.
It’s designed for environments where connectivity drops, latency matters and operations cannot stop because a network link has failed.
As organizations deploy more sensors and edge devices, this model is becoming an operational requirement.
Data management is critical to the AI stack
Every physical AI application depends on access to consistent local data, regardless of network quality. Decisions draw on maps, sensor inputs, telemetry, contextual information and model states, all of which must remain available even when devices, vehicles or machines are disconnected from the cloud for hours.
This creates three core technical requirements. First, latency must approach zero. Even the shortest round trip to the cloud is too slow for millisecond-critical decisions. An autonomous vehicle detecting a sudden obstacle, a warehouse robot identifying a missing item or a smart manufacturing system responding to equipment changes cannot wait for a remote API response; the decisions must be made locally.
Second, data must remain available despite weak connectivity. Many operational environments have volatile connections, so physical AI systems must continue to function offline. This “offline-first” approach ensures that data storage, inference and decision logic remain operational even when cloud access is unavailable.
Third, the compute must be efficient. Edge hardware is inherently constrained, which means models must be small, specialized and optimized, often with hardware acceleration. Databases and the broader AI stack need to be lightweight, performant and resource efficient. In this architecture, the database is an integral part of the AI pipeline, delivering the data models required to make decisions at the source.
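The three requirements above boil down to one pattern: serve reads and writes locally, queue changes while disconnected, and reconcile when a link returns. A minimal sketch of that “offline-first” idea, purely illustrative and not any vendor’s actual API:

```python
# Minimal sketch of the offline-first pattern described above:
# reads and writes always hit a local store, and changes queue up
# for synchronization whenever connectivity returns. Class and
# method names here are hypothetical, for illustration only.

class EdgeStore:
    def __init__(self):
        self.local = {}        # always-available local data
        self.pending = []      # writes not yet synced to the cloud

    def put(self, key, value):
        # Writes succeed immediately, even with no connectivity.
        self.local[key] = value
        self.pending.append((key, value))

    def get(self, key):
        # Reads are served locally, so latency stays near zero.
        return self.local.get(key)

    def sync(self, cloud):
        # Called opportunistically when a network link is available:
        # flush queued writes, then pull down anything new.
        for key, value in self.pending:
            cloud[key] = value
        self.pending.clear()
        self.local.update(cloud)


# A device keeps operating offline, then reconciles on reconnect.
device, cloud = EdgeStore(), {}
device.put("shelf/42", "restock")      # offline write
assert device.get("shelf/42") == "restock"
device.sync(cloud)                     # connectivity returns
assert cloud["shelf/42"] == "restock"
```

Real systems add conflict resolution and durable storage on top, but the shape — local-first reads, queued writes, opportunistic sync — is the same.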
Why cloud-only AI breaks down outside controlled environments
Autonomous vehicles move through patchy mobile coverage. Warehouses experience RF interference. Aircraft and cruise ships operate for long periods with limited bandwidth. Even modern manufacturing sites regularly experience dead zones.
In these conditions, the assumption that AI can wait for a round trip to the cloud becomes a limiting factor. Physical AI relies on local processing and local data because that’s the only way to guarantee consistent, reliable operation.
How physical AI is already being deployed
In autonomous and connected vehicles, edge inference is essential. One self-driving car company, for example, generates large volumes of sensor data that must be processed immediately. Cloud dependency simply isn’t viable; even non-autonomous features rely on local storage and offline capability to function reliably.
Aviation shows many of the same constraints. Airlines want to improve crew workflows, maintenance, logistics and passenger experience with AI, but aircraft operate with intermittent connectivity. Data must be collected and stored locally, shared between onboard systems and synced efficiently when the aircraft reconnects.
Retail and logistics offer some of the most accessible examples. At Pepsi, edge devices in warehouses run vision models to analyze shelf stock and initiate replenishment automatically. The intelligence matters, but the practical challenge is managing data locally and syncing it reliably when connectivity allows.
Cruise lines face similar constraints. Operators need to support real-time transactions, personalization and on-board operations on vessels that may not have stable connectivity for days. Across these sectors, the pattern is consistent: AI works only when it operates where the data is generated.
Why so many AI proof-of-concepts struggle to scale
A recent MIT report found that only about 5% of AI pilot programs achieve rapid revenue acceleration; the vast majority stall, delivering little to no measurable impact on P&L. The reasons are well documented: Organizations expect immediate ROI. Teams underestimate the complexity of deploying and maintaining AI systems.
Architectures are built around cloud assumptions that don’t hold in real-world environments. The right data architecture doesn’t solve every challenge, but it does address one of the most common points of failure: the gap between lab conditions and operational reality.
Moving to a physical AI model requires designing systems around the actual behavior of physical environments: local processing for time-sensitive decisions, persistent local storage so devices function during outages, lightweight edge databases and optimized models that match hardware constraints, and efficient synchronization to ensure data consistency when connectivity returns. Getting this layer right determines whether AI systems can operate reliably at the edge.
The enterprise shift is already underway
Automotive, aviation, logistics, manufacturing and travel businesses are already adopting this model because their environments demand it. The cloud remains vital, but the assumption that every AI workload must be cloud-first doesn’t fit their requirements.
As more of the enterprise becomes instrumented and autonomous, AI will increasingly need to work at the point of action, not the point of aggregation. The organizations that recognize this early are the ones most likely to deploy AI systems that behave predictably, consistently and safely in the environments that matter.
Earlier, fixing a comment on Instagram meant deleting it and starting over. Now, Instagram allows users to edit their comments within 15 minutes. This feature works only for comments posted from your own account.
It’s quite a straightforward process. All you need to do is tap the ‘Edit’ button under your comment, modify the text, and then tap the blue check button. The 15-minute editing window starts as soon as the comment is posted.
Why This Update Matters
Though the update may seem insignificant and straightforward, it holds great importance. It helps users make modifications without having to delete their comments. It also allows them to improve or update what they wrote. Since comments can appear in different places, like Stories, this feature makes them more flexible and useful.
Meta continues to update its apps with new features. After bringing message editing earlier, it has now added comment editing on Instagram. The company is also testing other updates to enhance the overall user experience and make the platform easier to use.
This feature might look minor, but it makes a real difference. By allowing users to edit comments, Instagram makes the overall experience easier and more convenient.
For all the hype about data centers in space, there just aren’t very many GPUs up there. As that starts to change, the near-term business of orbital compute is starting to take shape.
The largest compute cluster currently in orbit was launched by Canada’s Kepler Communications in January, and boasts about 40 Nvidia Orin edge processors onboard 10 operational satellites, all linked together by laser communications links.
The company now has 18 customers, and announced its newest on Monday — Sophia Space, a startup that will test the software for its unique orbital computer onboard Kepler’s constellation.
Experts expect that we won’t see large-scale data centers like those envisioned by SpaceX or Blue Origin until the 2030s. The first step will be processing data that is collected in orbit to improve the capabilities of space-based sensors used by private companies and government agencies.
Kepler doesn’t see itself as a data center company, but as infrastructure for applications in space, CEO Mina Mitry tells TechCrunch. It wants to be a layer that provides network services for other satellites in space, or drones and aircraft in the sky below.
Sophia, on the other hand, is developing passively-cooled space computers that could solve one of the key challenges for large-scale data centers in orbit: keeping powerful processors from overheating without having to build and launch heavy, expensive active-cooling systems.
In the new partnership, Sophia will upload its proprietary operating system to one of Kepler’s satellites and attempt to deploy and configure it across six GPUs on two spacecraft. That sort of activity is table stakes in a terrestrial data center, but this is the first time it will be attempted in orbit. Making sure the software works in orbit will be a key de-risking exercise for Sophia ahead of its first planned satellite launch in late 2027.
For Kepler, the partnership helps prove the utility of its network. Right now, it is carrying and processing data uploaded from the ground, or collected by hosted payloads on its own spacecraft. But as the sector matures, the company expects to start linking up with third-party satellites to provide networking and processing services.
Mitry says satellite companies are now planning future assets around this model, pointing to the benefits of offloading processing for more power-hungry sensors, like synthetic aperture radar. The U.S. military is a key customer for that kind of work as it develops a new missile defense system predicated on satellites detecting and tracking threats. Kepler has already demonstrated a space-to-air laser link in a demo for the U.S. government.
That kind of edge processing — dealing with data where it is collected for faster responsiveness — is where orbital data centers will initially prove their value. That vision sets Sophia and Kepler apart from established space companies like SpaceX and Blue Origin, or startups like Starcloud and Aetherflux that are raising significant capital to focus on large-scale data centers with data center-style processors.
“Because we have the belief it’s more inference than training, we want more distributed GPUs that do inference, rather than one superpower GPU that has the training workload capacity,” Mitry told TechCrunch. “If this thing consumes kilowatts of power and you’re only running at 10% of the time, then that’s not super helpful. In our case, our GPUs are running 100% of the time.”
And once these technologies are proven in orbit, well, anything can happen. Sophia CEO Rob DeMillo points out that Wisconsin adopted a ban on data center construction last week, something some lawmakers in Congress are also pushing. Anything that limits data centers on Earth is, in their eyes, making the space-based alternative more attractive.
“There’s no more data centers in this country,” DeMillo mused. “It’s gonna get weird from here.”
The Masters 2026 reaches its conclusion today with Rory McIlroy atop the leaderboard with Cameron Young. It’s very tight with eight players separated by just four shots.
The best part? We’ve found a simple way to stream all the action live from Augusta National at no cost. Find out more below.
Watch the Masters 2026 for free
⛳️ Don’t miss the FINAL ROUND! Watch The Masters 2026 LIVE for FREE via Masters.com and the Masters App (iOS/Android).
🏆 It all comes down to Sunday at Augusta National and you can stream every moment without paying a penny. Alongside the main broadcast (as seen on CBS), you’ll also get:
👀 Final round Featured Groups
🌸 Amen Corner
🔥 Key holes like 15 & 16
🏌️‍♂️ Live shots from around the course
✈️ Abroad for the final day? No problem. If you’re a US viewer overseas, just switch on a VPN to access your home coverage on Masters.com and catch every clutch putt and leaderboard twist.
Use a VPN to watch any Masters 2026 stream
What if you’re outside America for the final round? This is where a VPN can help.
A VPN is a handy piece of software that can make your device appear to be back home, so you can unlock your usual service or subscription from anywhere. The best VPN right now? We recommend NordVPN – it does everything you want it to do at great speeds and a very reasonable price.
Why should I watch the Masters 2026 on Masters.com?
Not only are Masters.com and the Masters App the only places to stream The Masters 2026 for free, they also provide the full final round broadcast feed.
They also offer extensive additional coverage, including featured groups, the practice range, and Amen Corner, which is just a small sample of the extra live streams available.
Remember: If you’re traveling outside America, use NordVPN (75% off) to access your free Masters 2026 coverage.
Amazon Fire (Fire Tablets, Fire TV Stick, Fire TV Cube, Fire TVs)
Android TV (Sony, Philips, TCL, etc.; some models not fully supported)
Android (Mobile & Tablet) – Android 7.0 and above
Apple TV (tvOS – via AirPlay from iPhone/iPad/Mac)
Chromebook
Desktop PCs (Windows, macOS, Linux)
Google TV (Chromecast with Google TV, NVIDIA Shield)
iOS (iPhone & iPad) – iOS 14 and above
Kindle Fire
LG Smart TVs (2016–2024)
Mac (MacBook, iMac)
PlayStation (PS4, PS5)
Roku (Streaming Stick & Roku TVs)
Samsung Smart TVs (2017 and above)
Smart TVs (Hisense, Panasonic, Sharp, etc.)
Windows Tablets (Surface, etc.)
Xbox (One, Series X, Series S)
We test and review VPN services in the context of legal recreational uses. For example: 1. Accessing a service from another country (subject to the terms and conditions of that service). 2. Protecting your online security and strengthening your online privacy when abroad. We do not support or condone the illegal or malicious use of VPN services. Consuming pirated content that is paid-for is neither endorsed nor approved by Future Publishing.
Google has announced that end-to-end encryption (E2EE) for Gmail on Android and iOS is now rolling out for its enterprise users. Emails that require E2EE in Workspace can be composed and read within the Gmail app, so eligible users won’t need additional apps or portals.
The new feature expands Google’s client-side encryption (CSE) offering, a little more than a year after E2EE was introduced to Gmail on the web. According to a Google blog post, any encrypted message sent to a recipient who uses the Gmail app will appear in their inbox as any email thread would. If they don’t have the app, they’re still able to read and reply to the email in their browser securely, regardless of their email address.
Google says the new functionality “combines the highest level of privacy and data encryption with a user-friendly experience for all users, enabling simple encrypted email for all customers from small businesses to enterprises and public sector.” Of course, “all users” applies only to Enterprise Plus members here, with the millions of people who use Gmail as their personal email service currently unable to take advantage of the highest level of privacy and data protection.
In order for Gmail users to start using E2EE in the app, an admin must first enable Android and iOS clients in the CSE admin interface, which is available in the Admin Console. When sending an email, you have to click the lock icon and select additional encryption before sending. Attachments can then be added as normal.
E2EE is available straight away in the Rapid Release and Scheduled Release domains. Enterprise users will need the Assured Controls or Assured Controls Plus add-on, which provides businesses and organizations that handle sensitive data with extra security and compliance-related tools.
When it comes to smart glasses, Apple seems to be taking the road less traveled. While others have leaned on big-name eyewear brands to make their tech look fashionable, Apple appears ready to do what it does best: keep everything in-house and call it a day. Competitors have played it smart by teaming up with established eyewear giants. It makes sense. If you’re putting a camera on someone’s face, you might as well make sure it looks like something they’d already wear. Apple, however, doesn’t seem interested in that route. Instead of partnering with brands like Ray-Ban or Oakley, the company is reportedly building its own identity from scratch. Which is a bold move but also a very Apple move. This is the same company that turned wireless earbuds into a fashion statement and made smartwatches feel like personal accessories. If anyone believes it can pull off eyewear without outside help, it’s Apple.
From grand AR dreams to something more grounded
Interestingly, Apple’s current approach is a far cry from where it started. Years ago, the company had a far more ambitious plan for head-worn tech, juggling multiple ideas at once from AR-heavy devices to fully immersive headsets. The vision was futuristic, layered, and, in hindsight, a bit ahead of its time. Fast forward to today, and things look a lot more practical. Instead of jumping straight to full-blown augmented reality glasses, Apple is starting with something simpler: display-free smart glasses that prioritize everyday convenience over visual spectacle. The only product from its original roadmap to reach the market is the Apple Vision Pro. Everything else has either been reworked or pushed further down the timeline.
Apple’s upcoming glasses aren’t trying to plaster digital overlays in front of your eyes. There’s no built-in display here, which might sound like a limitation, but it’s actually the point. Instead, the glasses are expected to rely on cameras, audio, and tight integration with your iPhone to get things done. Of course, none of this works without a brain behind it. Apple is banking on a significantly improved Siri to tie the whole experience together. The idea is that the glasses can see what you’re looking at, understand the context, and offer relevant information or actions without you needing to ask much.
The Apple way, as always
By skipping partnerships with legacy eyewear brands, Apple is clearly betting on its own design language to carry the product. It wants these glasses to be instantly recognizable. It’s a risky move, sure. But if there’s one thing Apple rarely does, it’s share the spotlight.
So while Apple’s smart glasses may not come with a famous fashion label attached, that might be the whole point. This isn’t about borrowing credibility, it’s about creating it. And if Apple gets it right, you won’t be asking who made the frames — you’ll already know.
If you don’t want to bother with cutting your own cubes, then Klaris has you covered. The company has labored to make a kitchen countertop version of the classic Clinebell machine, and we loved it. Trouble is, it costs a hard-to-swallow $550. Now, however, the company has created a new, cheaper version called the Klaris Mini, which retails at a much more palatable $300. The small 8 x 8 x 8-inch box produces two ultra-clear, 2-inch cubes at a time, which can then be stored away in your freezer if you’re stocking up before a party.
Nice Ice
Now, these methods effectively banish those pesky bubbles, but what about replicating that glacial purity in your homegrown luxury ice? It turns out there are three solutions here, and all are easily attainable at home—the last one remarkably so.
“Glacial ice is quite pure because it comes from rainwater,” Salzmann says, adding that the way to get cleaner water at home is to remove the ionic impurities (dissolved inorganic salts, minerals, and metals that carry a positive or negative charge, such as calcium, sodium, chloride, and sulfates). Salzmann says a water filter will remove much of the tap water impurities and pollutants, or you can use what Salzmann recommends: deionized water—”the kind you would use for doing the ironing.”
Clear choice: is this the best water for luxury home ice?
Photograph: Crystal Geyser
But we’re not done yet. Now you have to get rid of the gases in your water that will increase the chances of cloudy ice. To do this, boil your water first to force out any gas, then freeze it before it can reabsorb any more.
There’s a nasty problem with drinking too much very pure water. “Within your cells in your body, there’s a lot of water—but there’s also lots of other stuff, dissolved materials that exert osmotic pressure on your cell membranes,” Salzmann says. “If the medium surrounding your cells does not have the same content of dissolved materials, there’s a different osmotic pressure. So, the bottom line is, if you drink too much deionized or super-pure water, it would be quite bad for you.” Now, we should point out here that you’d have to drink an awful lot of this pure water to run into trouble—a few ice cubes won’t come anywhere near close to causing you any problems.
Still, we can swerve even this last potential pitfall by turning to the sage advice of Kevin Clinebell, grandson of Virgil, who now runs the family company with his brother Scott.
“I’ve got a customer in Las Vegas that studied [using water filters], because Las Vegas has very poor water,” Clinebell says. “Their dissolved solids are something to the tune of 450 parts per million, whereas here [in Colorado], it’s 45 to 48 parts per million,” he says, adding that the system was also very inefficient. “So he started playing with different bottled waters—Fiji, Aquafina, all different types. The one that he found that was as close to perfect as you could get is Crystal Geyser bottled water. That one gave him the best results of any water that he’s ever tried, and he’s running reverse osmosis and everything else.”
The fastest road-legal electric vehicle is probably not what you’d expect. It pulls up to the light, and it’s just 112 inches long and 2,150 pounds. The little yellow box with its comically tall roof and friendly headlights is quite unassuming — but Jonny Smith has made it into a supercar-beating machine.
Smith purchased the Enfield 8000 despite its flood-damaged past, back when it still produced its original 8 horsepower and had a top speed of 40 miles per hour. He clearly had a vision: With two 9.0-inch DC motors, the Enfield 8000, originally designed in the 1970s, now has 800 horsepower and 1,200 lb-ft of torque, and will most definitely smoke you at the light.
Now, the Enfield 8000 can reach 60 mph in under 3 seconds, and 113 miles per hour in 6 seconds. Its quarter mile record is 9.86 seconds, reaching 121 mph. “Everyone said it would be undriveable,” said Smith, “but it has exceeded expectations. I wanted to do an old EV and found this. I liked it immediately because it was odd, British and unlikely.”
Can the Enfield 8000 actually beat a Lamborghini Aventador SVJ?
Jonny Smith’s Enfield 8000 reaches incredible speeds very, very fast. But can it actually beat supercars? How do the Enfield 8000’s motors compare to the Lamborghini Aventador SVJ‘s 6.5 L V12? Well, the Aventador SVJ can reach 60 mph in 2.5 to 2.8 seconds and 124 mph in 8.6 seconds. Its quarter mile is 10.3 seconds at 136.4 mph. I’m no numbers cruncher or drag strip judge, but by those figures the Enfield 8000 would beat the Aventador SVJ over the quarter mile by nearly half a second.
The Enfield 8000 could also take on a McLaren 720S, a Porsche 911 GT3, and a Ferrari LaFerrari. What an entertaining drag race that would be! Of course, there has been an ongoing debate regarding the importance of 0-60 times now that even family-oriented SUVs can take off in just a few seconds thanks to the rise of EVs. But the Enfield 8000 can even keep up with the Tesla Model S P85D, which can hit 60 mph in 3.2 seconds in “Insane” mode.
Getting La Grande Combinasion in Steal a Brainrot is a big deal. It’s not often that you find a Secret Brainrot that has a lot of characters and makes a lot of money. In this guide, we’ll talk about all the possible ways to get it without making things complicated.
The main reason La Grande Combinasion is special is that it is very powerful and hard to find. In the game, it costs about $1 billion, but it gives you $10 million every second, which makes it one of the best ways to make money. Its design is also very different from anything else. It looks strange but interesting because it’s a mix of different Brainrots.
Get La Grande Combinasion in Steal a Brainrot
La Grande Combinasion is hard to get, but you’re not limited to just one method. Compared to Los Combinasionas, you actually have better chances here.
1. Buy from Conveyor Belt
The conveyor belt method doesn’t involve any risk, which makes it the best option for many players. However, the biggest challenge is waiting. To increase your chances a bit, you can buy Server Luck for 249 Robux, but even then, it may still take time.
Save up $1 billion in cash.
Keep checking the conveyor belt regularly.
Buy it immediately when it appears.
2. Steal from Other Players
Stealing is the fastest way to get La Grande Combinasion, but you can’t just rush in. You need to watch the player for a bit and wait for the right moment when they’re not paying attention. If your timing is off, there’s a good chance you’ll get caught. So it’s better to plan things properly—know when to move and how you’ll get out quickly.
Keep changing servers until you find the Brainrot.
Watch the owner’s base and wait for a chance.
Use trap or stun items to stop them.
Take it and go back to your base right away.
3. Trade in Private Servers
It’s safer to trade on a private server than to steal from random players. The risk is lower because both sides agree ahead of time. But you should only do this with someone you trust, because mistakes or lying can make you lose your Brainrot.
Join a private server with a friend or someone you know.
Settle on a fair Brainrot trade.
Finish the trade by taking things from each other’s bases.
How to Protect La Grande Combinasion After Getting It?
After you get La Grande Combinasion, you really have to stay careful. It’s valuable, so people will try to take it from you. Make your base stronger, use traps, and don’t just leave it unattended. Even being away from the game for too long can cause problems if someone finds an easy way in.
If you’re trying to get it faster, keep hopping between servers instead of waiting in one place. Sometimes you’ll get lucky that way. Playing during updates can also help a bit. Server Luck is there, but don’t rely on it too much. Just be ready so that when you finally get it, you don’t lose it right away.