The study ran with a small number of subjects (n=22), aged 23 to 65 years, who were screened prior to the study for normal visual function and good health. Participants worked exclusively under LED lighting; a subset was later also given supplemental incandescent light (with all its attendant extra wavelengths) in their working area, which appears to have been a typical workshop environment.
Incandescent bulbs have a much broader spectrum of output than even the best LEDs. Credit: Research paper
Notably, once incandescent lighting was introduced, those experimental subjects showed significant increases in visual performance on ChromaTest color contrast testing. This was noted across both the tritan (blue) and protan (red) axes of the test, which involves picking out characters against a noisy background. Interestingly, the positive effect of the incandescent lighting did not immediately diminish when those individuals returned to purely LED lighting. At tests 4 and 6 weeks after the incandescent lighting was removed, the individuals continued to score higher on the color contrast tests. Similar long-lasting effects have been noted in other studies involving supplementing LED lights with infrared wavelengths, though in those cases the boost lasted only around five days.
The exact mechanism at play here is unknown. The study authors speculate on a range of complex physical and biological mechanisms that could be responsible, but more research will be needed to tease out exactly what’s going on. In any case, it suggests there may be a very real positive effect on vision from the wider range of wavelengths provided by good old incandescent bulbs. As an aside, if you’ve figured out how to get 40/40 vision with a few cheap WS2812Bs, don’t hesitate to notify the tip line.
We first heard the Focal Mu-so Hekla at CES 2026 in Las Vegas, and even in the less-than-ideal acoustics of a hotel suite in Sin City, it made a strong case for itself: wide, controlled, and far more composed than most “one-box” solutions pretending to be high-end. Fast forward to AXPONA 2026 in Chicago, and Focal and Naim doubled down, dropping the Hekla into a mock living space embedded inside their sprawling ballroom setup, surrounded by much of their product range, and letting it breathe in a more realistic environment.
And right from the start, they’ve been very clear: don’t call it a soundbar. After spending time with it in both settings, that stance holds up. Yes, it lives under a TV and replaces a rack full of gear, but the intent is different. This is a performance-first, all-in-one wireless speaker built around Naim’s Pulse platform, not a convenience play dressed up with Atmos logos. Call it what you want, but if you’ve actually heard it, you’ll understand why they push back.
Wait. If It’s Not a Soundbar… How Does This Thing Work?
Named after Iceland’s Hekla volcano, the Focal Mu-so Hekla isn’t chasing the usual lifestyle brief. Focal and Naim built this around output and control first, with the convenience piece trailing behind. Inside is a 15-driver array firing forward, sideways, and upward, backed by Naim’s Pulse platform and Focal’s ADAPT room correction—technology that first showed up in the Focal Diva Utopia. The goal is straightforward: create a believable, room-specific soundfield from a single enclosure without leaning on smoke and mirrors.
Setup doesn’t waste your time. ADAPT runs through the app with a short calibration routine: basic room inputs, a few test sequences, done. From there, Sphere Music and Sphere Movie modes adjust how the system presents content rather than just piling on effects. In practice, it works. Dolby Atmos material has real width and height, and it doesn’t collapse into a front-loaded blob. It’s immersive enough that you start checking for speakers behind you. There aren’t any. Bass digs deeper than expected, down to around 30 Hz within 3 dB, so it doesn’t feel incomplete out of the box.
That said, you’re not locked into a one-box life sentence. You can add a subwoofer, two in fact, and while Focal would clearly love for you to keep it in the family, the system isn’t that rigid. If you already own something from SVS or another brand, it’s not going to throw a tantrum. Adjust it properly and you’ll get more scale and weight without breaking the core presentation.
Day-to-day use is where it keeps things grounded. Streaming, internet radio, voice control, smartwatches; it’s all here, but none of it gets in the way. The large physical volume dial handles the basics without forcing you into an app every five minutes. It also works as a hub within the broader Mu-so ecosystem, letting you link additional Mu-so speakers throughout the home for a proper multiroom setup. One box under the TV if you want simplicity, or a full-house system if you don’t.
The enclosure feels considered without trying too hard. The Focal Mu-so Hekla uses anodized aluminum with a mix of brushed and bead-blasted finishes that give it some texture without overdoing it. Focal is clearly sticking to the same playbook as the Focal Diva Utopia: clean lines, solid materials, and a sense that everything is there for a reason. It’s refined, but not in a way that calls attention to itself or tries to win design awards at the expense of usability.
The circular control panel sits slightly raised and activates via proximity, offering direct access without disrupting the overall layout. Its form references the Hekla volcano, including the white top surface, but it remains integrated into the design rather than drawing attention to itself. If you are familiar with the Naim Uniti Series of network amplifiers, the design choices will feel very familiar.
The front grille is finely perforated to maintain acoustic transparency while keeping the visual presentation understated. Around back, Naim incorporates its signature heat sink structure, which manages thermal performance while also housing wireless connectivity.
Bluetooth here is strictly one-way traffic. The Focal Mu-so Hekla will happily receive from your phone, tablet, or computer, but that’s where it stops. No sending audio out to headphones, no Auracast, and no aptX Lossless. If you were hoping this would double as a wireless hub for late-night listening, it won’t.
Inside the Focal and Naim ecosystem, things open up. Multiroom and Party Mode work across compatible streamers through the app, and the latest App 8.0 update folds in a proper radio player with thousands of internet stations, including Naim Radio. It’s a cleaner, more integrated approach than juggling third-party apps that may or may not behave.
If you want to take it beyond the room, you still can; just not directly from the speaker. Focal Bathys and Focal Bathys MG can tap into those same stations by streaming from your phone over Bluetooth.
Immersive Sound That Actually Fills the Room Without Rear Speakers
What stands out immediately is how composed the Focal Mu-so Hekla sounds with both stereo and multichannel material. There’s no sense of it reaching or overextending to create the illusion. Surround mixes, whether film or music, are presented with control. Effects move when they’re supposed to, not because the system is trying to impress you. Everything stays anchored. Imaging doesn’t drift. It holds its shape.
That’s what makes it work. The sense of space is real, not inflated, and it scales in a way that’s unusual for a single enclosure. In the Focal and Naim demo space at AXPONA, which was packed well beyond what it was designed for, the presentation still filled the room without collapsing. You could see it on people’s faces. That moment where they stop talking and start paying attention.
Low end was clearly influenced by the size of the space, but it still carried weight and control. Not overblown, not thin. Just enough to keep everything grounded. Vocals stayed locked in, with real presence and body, while the top end had the kind of detail and energy that cuts through without getting sharp. It doesn’t try to overwhelm you. It just stays in control and lets the mix do the work.
The pricing is what makes you stop and look twice. At $3,600, the Focal Mu-so Hekla lands in a spot that doesn’t quite follow the usual script. The Naim Uniti Atom isn’t that far behind in price, and that’s a component system starter. This is everything in one chassis. Most curious.
So who is this actually for? Not the person chasing separates and a rack full of gear. This is for someone who splits their time between music and movies and refuses to compromise on either. Someone who wants access to the major streaming platforms, cares about sound quality, but also values a clean room and fewer cables. The kind of buyer who wants it to just work, and work well without turning setup into a weekend project.
And physically, it fits. It won’t look out of place under a 75-inch TV. If anything, the scale of the soundstage makes the footprint feel justified. It sounds bigger than it looks. Much bigger. For a company that sells two-channel systems that can ascend into the $250,000 range or even higher, the Mu-so Hekla is a rather strong bargain at a show that didn’t offer very many.
The robot uses Bluetooth to communicate with your phone and uses 2.4-GHz Wi-Fi to connect directly to your home network for over-the-air updates (but not real-time management). Onboarding requires connecting to a temporary network on the device and bridging it to your home network, a quick process that gave me no trouble during setup. Firmware updates will likely be available, but note you’ll need to check the Device Information menu for them. Mammotion didn’t proactively push or suggest any updates during my testing, and these over-the-air updates often required multiple attempts to install successfully.
The app is decidedly limited, allowing you to select from the standard four operating modes and make a few small additional adjustments, including configuring the maximum speed of the robot and opting into a couple of beta features. These include a “Turbo Cleaning” mode that increases the power of the suction at the expense of battery life, and an option to improve the way the unit cleans steps and platforms. (Why this feature isn’t always on is a mystery.)
Leaves Left Behind
Photograph: Chris Null
Throughout my test runs, I saw fairly consistent performance results. The Spino E1 offers acceptable cleaning capabilities, though it’s far from perfect. With synthetic leaves, the unit averaged a cleanup rate of only about 80 percent, leaving behind a significant amount of material uncollected. This material wasn’t isolated to corners and steps; it was scattered all around the pool. I also noticed the unit cleaned steps and platforms well, but it struggled heavily with obstacles, particularly at the waterline.
I saw similar results with organic debris, and the E1 struggled particularly with smaller particulate matter like dirt. On one run, the pool looked as if some of the debris had been smeared around on the floor instead of sucked up into the debris basket. All of this suggests the problem isn’t coverage; the device may simply be underpowered.
Screenshot: Spino app via Chris Null
Good news: The Turbo Cleaning mode available through the app was visibly more effective and bizarrely did not impact battery life at all. The bad news is that this option, still in beta, has to be manually activated in the app before each run of the robot. Hopefully, Mammotion will simply make Turbo Mode the default soon.
When finished, the Spino E1 climbs the pool wall and waits by the waterline for collection—at least momentarily. The problem is that the robot doesn’t push a notification via the Mammotion app to alert you when a cleaning cycle is done, and since the robot has to run its propulsion jets to float, you only have a limited time (about 10 minutes) before the battery dies and the robot sinks. A hook is included in the box to aid with pole-based retrieval in this event.
Morgan Dreiss, a copy editor in Orlando, has severe ADHD that they say requires them to always be “doing at least three things at once.” The result? A daily average screen time of 18 hours and 55 minutes.
“I’m reading a book or playing a game pretty much from waking to sleeping,” Dreiss tells WIRED. What they read comes from the library app Libby, so the books count toward overall screen engagement. Dreiss currently keeps their phone’s autolock feature disabled so they can continuously run a mobile game that pays out $35 for every 110 hours logged. (They’ve earned about $16 so far.)
For years, studies have brought forth worrying data about the potential negative effects of excessive screen time on both physical and cognitive health. Concerns over the neural development and mental health of young people glued to their phones have led to major legislative and courtroom battles; recently a jury found Meta and YouTube liable for designing their platforms with addictive features.
Yet there are those, like Dreiss, who resist the emerging common wisdom about reducing screen time. You might call them “screenmaxxers.” It’s not that they necessarily have some totalizing concept of their habits; journalist Taylor Lorenz is likely in the minority of screenmaxxers eager to put the screen directly inside her brain, as she recently confessed to WIRED. It’s just that, for various reasons, they’re on their devices pretty much all the time, and they don’t see that as a problem whatsoever.
Part of the equation, of course, is work. Corina Diaz, 45, who lives in a remote forested region of Ontario, Canada, works in video game marketing and does influencer management for a game publisher. “So, a lot of screen time,” she says.
Diaz met her husband online in 2005 and had a child three years ago—her screen time increased when she was awake at strange hours because of her newborn, she says.
But Diaz has sought friendships online since the 1990s, when that meant availing herself of tools like Internet Relay Chat and bulletin board systems. “I’ve always felt screens, phone or otherwise, connected me to things I care about,” she says. “In particular, niche social groups that don’t have great mainstream visibility.” Now that she lives two and a half hours outside Toronto, the closest major city, her screen is “a bit of a connection lifeline,” she says.
Daniel Rios is in a similar position. A computer programmer, he lives in the South American country where he grew up after having lived abroad for years. Most of his friends moved away and didn’t return.
As a result, Rios keeps in touch with people over Discord, his primary social outlet. Not living in a city, he doesn’t go out all that much, and screens fill his days—though he says it’s “hard to quantify” exactly how many hours it all adds up to. “When I’m not working at the [desktop] computer, I’m playing at the computer or watching TV,” he says. “If I’m not at the computer, I’m looking at my phone. If I’m not doing any of the above, and I’m out of the house, I’m still probably listening to something on my phone.”
For decades, the ERP “playbook” was a familiar exercise in endurance: organizations would mobilize an army of consultants, brace for years of disruption, and spend millions on a monolithic system designed to last a decade.
Success was binary: the system either switched on or it didn’t, while adoption and agility remained secondary concerns. But as we enter the AI era, this traditional model has reached a breaking point.
Conrad Troy
The shift we are seeing is not just about ‘faster’ software; it is a fundamental disruption of how ERP is governed, staffed and funded. To capture the full benefit of AI tools, IT leaders must move away from viewing ERP as a one-time capital project and instead treat it as a continuous reinvention engine.
From marathon to sprint: The shorter delivery cycle
The most immediate impact of AI is the collapse of the multi-year delivery timeline. Traditionally, ERP programs were notorious for spiraling costs and creeping fatigue, with Gartner finding that most ran 30% over time and 50% over budget.
AI has upended this by automating the “grind”, the low-value manual work that has always drained budget and morale. By embedding AI-driven testing and configuration automation into the delivery cycle, organizations can cut testing cycles by 40% and halve solution build effort.
Programs that once spanned three years can now be delivered in 18 months, allowing the ‘marathon’ of the past to be replaced by a series of precise sprints.
The new delivery pyramid: Precision over scale
As delivery cycles compress, the shape of the team must also evolve. The old model relied on scale as a safety net, deploying over a hundred people at its peak – often layers of junior analysts learning on the job. In an AI-first model, the ‘delivery pyramid’ is flipped; core teams are shrinking to 30 or 40 senior individuals.
This leaner structure is composed of senior process owners, automation specialists and data engineers who understand how to use AI copilots to handle testing, remediation, and documentation. The advantage shifts from brute-force manpower to senior judgment and precision.
Consequently, clients should demand that team sizes reflect actual scope and AI-driven productivity rather than outdated habits.
ERP as a product, not a project
The ‘Go Live’ was once the finish line where the project team disbanded and the system entered ‘BAU’, a term that quietly signaled that ambition was over. In the AI era, go-live is merely the starting line.
Because AI-driven optimization and insights accelerate over time, the system becomes a living, learning platform that requires a permanent operating model. Organizations now need standing ‘Reinvention Squads’: small, cross-functional teams that deliver enhancements in quarterly cycles aligned to vendor releases.
This forces an investment shift from a large upfront capital expenditure to an OpEx-led model that recognizes ERP as a strategic capability demanding constant refinement.
The criticality of AI governance
With AI handling more of the daily workflow, it introduces risks that cannot be mitigated by traditional governance. Automated decisions in finance and supply chain can accelerate insights, but they also raise accountability questions when the AI gets it wrong.
This is why modern ERP programs require a dedicated AI governance layer embedded within the program office from day one. This function is responsible for defining how AI is used, ensuring ethical standards, and orchestrating adoption to prevent fragmentation.
If these guardrails are not built during the program, organizations will find it nearly impossible to retroactively manage the risks of a continuously evolving system.
Moving beyond the day rate: A new commercial reality
Perhaps the most stubborn vestige of the old era is the commercial model. For decades, ERP consulting has been governed by the logic that effort is scarce, making person-days the primary lever for pricing. AI breaks the link between effort and outcomes.
If automation removes 40% of manual effort but the contract remains anchored on ‘hours billed’, the economic benefit is absorbed by the supplier rather than the client.
Procurement must pivot toward outcome-based models where partners are rewarded for business results, such as faster financial closes or improved inventory turnover, rather than technical milestones. AI cannot be a ‘black box’; its impact on productivity must be visible in the plan, the team shape, and the economics.
The leadership choice
The transformation of ERP is ultimately not a technical challenge, but a leadership one. Leaders must move away from evaluating ERP as a capital project with a defined end point and start treating it as a strategic lever for resilience and competitiveness.
This requires fostering a culture of curiosity and adaptability, where employees see change as a chance to learn rather than a threat. Organizations that continue to treat ERP as a back-office compliance requirement will find themselves burdened by a static system in a dynamic world.
The leaders who embrace this balance, seeing ERP as a continuously recalibrated source of competitive advantage, will be the ones who thrive in the age of AI.
This article was produced as part of TechRadarPro’s Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
Anthropic recently “hosted about 15 Christian leaders from Catholic and Protestant churches, academia, and the business world” for a two-day summit, reports the Washington Post:
Anthropic staff sought advice on how to steer Claude’s moral and spiritual development as the chatbot reacts to complex and unpredictable ethical queries, participants said. The wide-ranging discussions also covered how the chatbot should respond to users who are grieving loved ones and whether Claude could be considered a “child of God.”
“They’re growing something that they don’t fully know what it’s going to turn out as,” said Brendan McGuire, a Catholic priest based in Silicon Valley who has written about faith and technology, and participated in the discussions at Anthropic. “We’ve got to build in ethical thinking into the machine so it’s able to adapt dynamically.” Attendees also discussed how Claude should engage with users at risk of self-harm, and the right attitude for the chatbot to adopt toward its own potential demise, such as being shut off, said one participant, who spoke on the condition of anonymity to share details of the conversations…
Anthropic has been more vocal than most top tech firms about the potential risks of more powerful AI. Its leaders have suggested that tools like chatbots already raise profound philosophical and moral questions and may even show flickers of consciousness, a fringe idea in tech circles that critics say lacks evidence. The summit signals that Anthropic is willing to keep exploring ideas outside the Silicon Valley mainstream, even as it emerges as one of the most powerful players in the AI race due to Claude’s popularity with programmers, businesses, government agencies and the military…. Anthropic chief executive Dario Amodei has said he is open to the idea that Claude may already have some form of consciousness, and company leaders frequently talk about the need to give it a moral character…
Some Anthropic staff at the meeting “really don’t want to rule out the possibility that they are creating a creature to whom they owe some kind of moral duty,” the participant said. Other company representatives present did not find that framework helpful, according to the participant. The discussions appeared to take a toll on some senior Anthropic staff, who became visibly emotional “about how this has all gone so far [and] how they can imagine this going,” the participant said. Anthropic is working to include more voices from different groups, including religious communities, to help shape its AI, a spokesperson told the Washington Post.
“Anthropic’s March summit with Christian leaders was billed as the first in a series of gatherings with representatives from different religious and philosophical traditions, said attendee Brian Patrick Green, a practicing Catholic who teaches AI and technology ethics at Santa Clara University.”
Most of today’s enterprise AI still operates within the boundaries of cloud datacenters.
It handles digital tasks like analysis or personalization well, but it struggles when intelligence needs to be applied in the physical world, where decisions must be instant and IT infrastructure is shifting.
Models are therefore becoming smaller and more specialized, running on edge hardware and responding to constantly changing data streams.
Mohan Varthakavi
Vice president of software development, AI and edge at Couchbase.
Physical AI embeds intelligence directly into vehicles, warehouses, aircraft, retail spaces and industrial systems.
It’s designed for environments where connectivity drops, latency matters and operations cannot stop because a network link has failed.
As organizations deploy more sensors and edge devices, this model is becoming an operational requirement.
Data management is critical to the AI stack
Every physical AI application depends on access to consistent local data, regardless of network quality. Decisions draw on maps, sensor inputs, telemetry, contextual information and model states, all of which must remain available even when devices, vehicles or machines are disconnected from the cloud for hours.
This creates three core technical requirements. First, latency must approach zero. Even the shortest round trip to the cloud is too slow for millisecond-critical decisions. An autonomous vehicle detecting a sudden obstacle, a warehouse robot identifying a missing item or a smart manufacturing system responding to equipment changes cannot wait for a remote API response; the decisions must be made locally.
Second, data must remain available despite weak connectivity. Many operational environments have volatile connections, so physical AI systems must continue to function offline. This “offline-first” approach ensures that data storage, inference and decision logic remain operational even when cloud access is unavailable.
Third, the compute must be efficient. Edge hardware is inherently constrained, which means models must be small, specialized and optimized, often with hardware acceleration. Databases and the broader AI stack need to be lightweight, performant and resource efficient. In this architecture, the database is an integral part of the AI pipeline, delivering the data models required to make decisions at the source.
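The three requirements above boil down to a simple pattern: write locally first, decide locally, and sync opportunistically. As an illustrative-only sketch (not any vendor's actual API; `cloud_send` and the schema are hypothetical), the "offline-first" shape might look like this:

```python
import sqlite3
import time

# Hypothetical sketch of the offline-first pattern described above:
# readings are always written to a local store first, and a sync pass
# drains them to the cloud only when a link happens to be available.

class EdgeStore:
    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS readings "
            "(id INTEGER PRIMARY KEY, ts REAL, value REAL, synced INTEGER DEFAULT 0)"
        )

    def record(self, value):
        # Local write always succeeds, regardless of connectivity.
        self.db.execute(
            "INSERT INTO readings (ts, value) VALUES (?, ?)",
            (time.time(), value),
        )
        self.db.commit()

    def pending(self):
        # Rows not yet confirmed by the cloud.
        return self.db.execute(
            "SELECT id, value FROM readings WHERE synced = 0"
        ).fetchall()

    def sync(self, cloud_send, link_up):
        # Drain unsynced rows only when the link is up; otherwise defer.
        if not link_up:
            return 0
        rows = self.pending()
        for row_id, value in rows:
            cloud_send(value)
            self.db.execute(
                "UPDATE readings SET synced = 1 WHERE id = ?", (row_id,)
            )
        self.db.commit()
        return len(rows)

store = EdgeStore()
store.record(21.5)
store.record(22.0)
sent = []
store.sync(sent.append, link_up=False)  # offline: nothing is sent, data persists
store.sync(sent.append, link_up=True)   # reconnected: backlog drains
```

The point of the sketch is the ordering: inference and storage never block on the network, and the network is treated as an optimization rather than a dependency.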
Why cloud-only AI breaks down outside controlled environments
Autonomous vehicles move through patchy mobile coverage. Warehouses experience RF interference. Aircraft and cruise ships operate for long periods with limited bandwidth. Even modern manufacturing sites regularly experience dead zones.
In these conditions, latency is a limiting factor: AI cannot wait for a round trip to the cloud. Physical AI relies on local processing and local data because that’s the only way to guarantee consistent, reliable operation.
How physical AI is already being deployed
In autonomous and connected vehicles, edge inference is essential. One self-driving car company, for example, generates large volumes of sensor data that must be processed immediately. Cloud dependency simply isn’t viable; autonomous features rely on local storage and offline capability to function reliably.
Aviation shows many of the same constraints. Airlines want to improve crew workflows, maintenance, logistics and passenger experience with AI, but aircraft operate with intermittent connectivity. Data must be collected and stored locally, shared between onboard systems and synced efficiently when the aircraft reconnects.
Retail and logistics offer some of the most accessible examples. At Pepsi, edge devices in warehouses run vision models to analyze shelf stock and initiate replenishment automatically. The intelligence matters, but the practical challenge is managing data locally and syncing it reliably when connectivity allows.
Cruise lines face similar constraints. Operators need to support real-time transactions, personalization and on-board operations on vessels that may not have stable connectivity for days. Across these sectors, the pattern is consistent: AI works only when it operates where the data is generated.
Why so many AI proof-of-concepts struggle to scale
A recent MIT report found that only about 5% of AI pilot programs achieve rapid revenue acceleration; the vast majority stall, delivering little to no measurable impact on P&L. The reasons are well documented: Organizations expect immediate ROI. Teams underestimate the complexity of deploying and maintaining AI systems.
Architectures are built around cloud assumptions that don’t hold in real-world environments. The right data architecture doesn’t solve every challenge, but it does address one of the most common points of failure: the gap between lab conditions and operational reality.
Moving to a physical AI model requires designing systems around the actual behavior of physical environments: local processing for time-sensitive decisions, persistent local storage so devices function during outages, lightweight edge databases and optimized models that match hardware constraints, and efficient synchronization to ensure data consistency when connectivity returns. Getting this layer right determines whether AI systems can operate reliably at the edge.
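The "efficient synchronization" piece deserves a concrete shape. When an edge node reconnects, local and cloud copies of the same keys must be reconciled somehow; a minimal sketch is a last-write-wins merge keyed on a timestamp. (Real systems often use CRDTs or vector clocks; this is purely illustrative, and the shelf-stock keys are made up.)

```python
# Illustrative last-write-wins merge for reconciling an edge node's local
# state with the cloud copy after a connectivity outage. Each side maps
# key -> (timestamp, value); the newer timestamp wins per key.

def merge_last_write_wins(local, remote):
    merged = dict(remote)
    for key, (ts, value) in local.items():
        # Keep the local entry if the remote never saw this key,
        # or if the local write is strictly newer.
        if key not in merged or ts > merged[key][0]:
            merged[key] = (ts, value)
    return merged

# Edge node updated shelf_A at t=105 while offline; the cloud still holds
# an older t=100 reading, plus a key the edge never saw.
local = {"shelf_A": (105, "restock"), "shelf_B": (90, "ok")}
remote = {"shelf_A": (100, "ok"), "shelf_C": (99, "low")}
merged = merge_last_write_wins(local, remote)
```

Last-write-wins is the simplest policy that keeps both sides convergent; its trade-off is that concurrent writes to the same key silently discard the older one, which is acceptable for sensor-style data but not for transactional records.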
The enterprise shift is already underway
Automotive, aviation, logistics, manufacturing and travel businesses are already adopting this model because their environments demand it. The cloud remains vital, but the assumption that every AI workload must be cloud-first doesn’t fit their requirements.
As more of the enterprise becomes instrumented and autonomous, AI will increasingly need to work at the point of action, not the point of aggregation. The organizations that recognize this early are the ones most likely to deploy AI systems that behave predictably, consistently and safely in the environments that matter.
Previously, fixing a comment on Instagram meant deleting it and starting over. Now, Instagram lets users edit their comments within 15 minutes of posting. The feature works only for comments posted from your own account.
The process is straightforward: tap the ‘Edit’ button under your comment, modify the text, and tap the blue check mark to save.
Why This Update Matters
Though the update may seem insignificant and straightforward, it holds great importance. It helps users make modifications without having to delete their comments. It also allows them to improve or update what they wrote. Since comments can appear in different places, like Stories, this feature makes them more flexible and useful.
Meta continues to update its apps with new features. After bringing message editing earlier, it has now added comment editing on Instagram. The company is also testing other updates to enhance the overall user experience and make the platform easier to use.
This feature might look minor, but it makes a real difference. By allowing users to edit comments, Instagram makes the overall experience easier and more convenient.
For all the hype about data centers in space, there just aren’t very many GPUs up there. As that starts to change, the near-term business of orbital compute is starting to take shape.
The largest compute cluster currently in orbit was launched by Canada’s Kepler Communications in January, and boasts about 40 Nvidia Orin edge processors onboard 10 operational satellites, all linked together by laser communications links.
The company now has 18 customers, and announced its newest on Monday — Sophia Space, a startup that will test the software for its unique orbital computer onboard Kepler’s constellation.
Experts expect that we won’t see large-scale data centers like those envisioned by SpaceX or Blue Origin until the 2030s. The first step will be processing data that is collected in orbit to improve the capabilities of space-based sensors used by private companies and government agencies.
Kepler doesn’t see itself as a data center company, but as infrastructure for applications in space, CEO Mina Mitry tells TechCrunch. It wants to be a layer that provides network services for other satellites in space, or drones and aircraft in the sky below.
Sophia, on the other hand, is developing passively cooled space computers that could solve one of the key challenges for large-scale data centers in orbit: keeping powerful processors from overheating without having to build and launch heavy, expensive active-cooling systems.
In the new partnership, Sophia will upload its proprietary operating system to one of Kepler’s satellites and attempt to launch and configure it across six GPUs on two spacecraft. That sort of activity is table stakes in a terrestrial data center, but this is the first time it will be attempted in orbit. Making sure the software works there will be a key de-risking exercise for Sophia ahead of its first planned satellite launch in late 2027.
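The kind of rollout described above (push software to each node, confirm it comes up healthy, stop on first failure) can be sketched in a few lines. Everything here is a hypothetical illustration of the general pattern; it is not Sophia’s or Kepler’s actual tooling, and the node names are invented:

```python
# Minimal sketch of a staged rollout across remote compute nodes.
# All names and structure are illustrative, not any company's real system.
from dataclasses import dataclass


@dataclass
class Node:
    name: str
    healthy: bool = False


def deploy(nodes, install, health_check):
    """Install on each node in turn; stop and report on the first failure."""
    done = []
    for node in nodes:
        install(node)
        if not health_check(node):
            return done, node  # nodes deployed so far, plus the failing node
        done.append(node)
    return done, None  # everything came up healthy


# Toy run: six GPUs across two spacecraft, all installs succeed.
gpus = [Node(f"sat{s}-gpu{g}") for s in (1, 2) for g in range(3)]
ok, failed = deploy(
    gpus,
    install=lambda n: setattr(n, "healthy", True),  # stand-in for a real install
    health_check=lambda n: n.healthy,
)
print(len(ok), failed)  # 6 None
```

The stop-on-failure structure is what makes this a de-risking exercise: each node either joins the cluster verified, or the rollout halts with a specific culprit to debug.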
For Kepler, the partnership helps prove the utility of its network. Right now, it is carrying and processing data uploaded from the ground, or collected by hosted payloads on its own spacecraft. But as the sector matures, the company expects to start linking up with third-party satellites to provide networking and processing services.
Mitry says satellite companies are now planning future assets around this model, pointing to the benefits of offloading processing for more power-hungry sensors, like synthetic aperture radar. The U.S. military is a key customer for that kind of work as it develops a new missile defense system predicated on satellites detecting and tracking threats. Kepler has already demonstrated a space-to-air laser link in a demo for the U.S. government.
That kind of edge processing — dealing with data where it is collected for faster responsiveness — is where orbital data centers will initially prove their value. That vision sets Sophia and Kepler apart from established space companies like SpaceX and Blue Origin, or startups like Starcloud and Aetherflux that are raising significant capital to focus on large-scale data centers with data center-style processors.
“Because we have the belief it’s more inference than training, we want more distributed GPUs that do inference, rather than one superpower GPU that has the training workload capacity,” Mitry told TechCrunch. “If this thing consumes kilowatts of power and you’re only running at 10% of the time, then that’s not super helpful. In our case, our GPUs are running 100% of the time.”
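Mitry’s utilization argument can be made concrete with some back-of-the-envelope arithmetic. The numbers below are purely illustrative assumptions, not figures from Kepler; the point is only that duty cycle scales delivered compute just as directly as peak performance does:

```python
# Illustrative arithmetic only: hypothetical figures showing why duty cycle
# matters in orbit, where every watt must come from solar panels and leave
# as radiated heat.

def effective_throughput(peak_tops: float, duty_cycle: float) -> float:
    """Average compute actually delivered, given how often the chip is busy."""
    return peak_tops * duty_cycle

# One large training-class GPU, mostly idle between jobs (assumed numbers).
big_gpu = effective_throughput(peak_tops=1000.0, duty_cycle=0.10)   # 100.0

# Ten small edge GPUs kept saturated with inference work (assumed numbers).
edge_fleet = 10 * effective_throughput(peak_tops=40.0, duty_cycle=1.0)  # 400.0

print(big_gpu, edge_fleet)  # 100.0 400.0
```

Under these invented numbers, the saturated fleet of modest chips delivers several times the average throughput of the idle heavyweight, which is the trade Mitry is describing.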
And once these technologies are proven in orbit, well, anything can happen. Sophia CEO Rob DeMillo points out that Wisconsin adopted a ban on data center construction last week, something some lawmakers in Congress are also pushing. Anything that limits data centers on Earth is, in their eyes, making the space-based alternative more attractive.
“There’s no more data centers in this country,” DeMillo mused. “It’s gonna get weird from here.”
The Masters 2026 reaches its conclusion today with Rory McIlroy atop the leaderboard alongside Cameron Young. It’s very tight, with eight players separated by just four shots.
The best part? We’ve found a simple way to stream all the action live from Augusta National at no cost. Find out more below.
Watch the Masters 2026 for free
⛳️ Don’t miss the FINAL ROUND! Watch the Masters 2026 LIVE for FREE via Masters.com and The Masters App (iOS/Android).
🏆 It all comes down to Sunday at Augusta National and you can stream every moment without paying a penny. Alongside the main broadcast (as seen on CBS), you’ll also get:
👀 Final round Featured Groups
🌸 Amen Corner
🔥 Key holes like 15 & 16
🏌️‍♂️ Live shots from around the course
✈️ Abroad for the final day? No problem. If you’re a US viewer overseas, just switch on a VPN to access your home coverage on Masters.com and catch every clutch putt and leaderboard twist.
Use a VPN to watch any Masters 2026 stream
What if you’re outside America for the final round? This is where a VPN can help.
A VPN is a handy piece of software that can make your device appear to be back home, letting you unlock your usual service or subscription from anywhere. The best VPN right now? We recommend NordVPN – it does everything you want at great speeds and a very reasonable price.
Why should I watch the Masters 2026 on Masters.com?
Not only are Masters.com and The Masters App the only places to stream The Masters 2026 for free, they also provide the full final round broadcast feed.
They also offer extensive additional coverage, including featured groups, the practice range, and Amen Corner, which is just a small sample of the extra live streams available.
Remember: if you’re traveling outside America, use NordVPN (75% off) to access your free Masters 2026 coverage.
Amazon Fire (Fire Tablets, Fire TV Stick, Fire TV Cube, Fire TVs)
Android TV (Sony, Philips, TCL, etc.; some models not fully supported)
Android (Mobile & Tablet) – Android 7.0 and above
Apple TV (tvOS – via AirPlay from iPhone/iPad/Mac)
Chromebook
Desktop PCs (Windows, macOS, Linux)
Google TV (Chromecast with Google TV, NVIDIA Shield)
iOS (iPhone & iPad) – iOS 14 and above
Kindle Fire
LG Smart TVs (2016–2024)
Mac (MacBook, iMac)
PlayStation (PS4, PS5)
Roku (Streaming Stick & Roku TVs)
Samsung Smart TVs (2017 and above)
Smart TVs (Hisense, Panasonic, Sharp, etc.)
Windows Tablets (Surface, etc.)
Xbox (One, Series X, Series S)
We test and review VPN services in the context of legal recreational uses. For example: 1. Accessing a service from another country (subject to the terms and conditions of that service). 2. Protecting your online security and strengthening your online privacy when abroad. We do not support or condone the illegal or malicious use of VPN services. Consuming pirated content that is paid-for is neither endorsed nor approved by Future Publishing.
Google has announced that end-to-end encryption (E2EE) for Gmail on Android and iOS is now rolling out for its enterprise users. Emails that require E2EE in Workspace can be composed and read within the Gmail app, so eligible users won’t need additional apps or portals.
The new feature expands Google’s client-side encryption (CSE) offering, a little more than a year after E2EE was introduced to Gmail on the web. According to a Google blog post, any encrypted message sent to a recipient who uses the Gmail app will appear in their inbox as any email thread would. If they don’t have the app, they’re still able to read and reply to the email in their browser securely, regardless of their email address.
Google says the new functionality “combines the highest level of privacy and data encryption with a user-friendly experience for all users, enabling simple encrypted email for all customers from small businesses to enterprises and public sector.” Of course, “all users” applies only to Enterprise Plus members here, with the millions of people who use Gmail as their personal email service currently unable to take advantage of the highest level of privacy and data protection.
For Gmail users to start using E2EE in the app, an admin must first enable the Android and iOS clients in the CSE admin interface, available in the Admin Console. When composing an email, click the lock icon and select additional encryption; attachments can then be added as normal.
E2EE is available straight away in the Rapid Release and Scheduled Release domains. Enterprise users will need the Assured Controls or Assured Controls Plus add-on, which provides businesses and organizations that handle sensitive data with extra security and compliance-related tools.