Tech
Anthropic vs. OpenAI vs. the Pentagon: the AI safety fight shaping our future
America’s AI industry isn’t just divided by competing interests, but also by conflicting worldviews.
• Leading figures at Anthropic and OpenAI disagree about how to balance the objectives of ensuring AI’s safety and accelerating its progress.
• Anthropic CEO Dario Amodei believes that artificial intelligence could wipe out humanity unless AI labs and governments carefully guide its development.
• Top OpenAI investors argue these fears are misplaced and that slowing AI progress will condemn millions to needless suffering.
• Unless the government robustly regulates the industry, Anthropic may gradually become more like its rivals.
In Silicon Valley, opinion about how artificial intelligence should be developed, used, and regulated spans a spectrum between two poles. At one end lie “accelerationists,” who believe that humanity should expand AI’s capabilities as quickly as possible, unencumbered by overhyped safety concerns or government meddling.
At the other pole sit “doomers,” who think AI development is all but certain to cause human extinction unless its pace and direction are radically constrained.
The industry’s leaders occupy different points along this continuum.
Anthropic, the maker of Claude, argues that governments and labs must carefully guide AI progress, so as to minimize the risks posed by superintelligent machines. OpenAI, Meta, and Google lean more toward the accelerationist pole. (Disclosure: Vox’s Future Perfect is funded in part by the BEMC Foundation, whose major funder was also an early investor in Anthropic; they don’t have any editorial input into our content.)
This divide has become more pronounced in recent weeks. Last month, Anthropic launched a super PAC to support candidates who favor AI regulation, countering an OpenAI-backed political operation.
Meanwhile, Anthropic’s safety concerns have also brought it into conflict with the Pentagon. The firm’s CEO Dario Amodei has long argued against the use of AI for mass surveillance or fully autonomous weapons systems — in which machines can order strikes without human authorization. The Defense Department ordered Anthropic to let it use Claude for these purposes. Amodei refused. In retaliation, the Trump administration put his company on a national security blacklist, which forbids all other government contractors from doing business with it.
The Pentagon subsequently reached an agreement with OpenAI to use ChatGPT for classified work, apparently in Claude’s stead. Under that agreement, the government would seemingly be allowed to use OpenAI’s technology to analyze bulk data collected on Americans without a warrant — including our search histories, GPS-tracked movements, and conversations with chatbots. (Disclosure: Vox Media is one of several publishers that have signed partnership agreements with OpenAI. Our reporting remains editorially independent.)
In light of these developments, it is worth examining the ideological divisions between Anthropic and its competitors — and asking whether these conflicting ideas will actually shape AI development in practice.
The roots of Anthropic’s worldview
Anthropic’s outlook is heavily informed by the effective altruism (or EA) movement.
Founded as a group dedicated to “doing the most good” — in a rigorously empirical (and heavily utilitarian) way — EAs originally focused on directing philanthropic dollars toward the global poor. But the movement soon developed a fascination with AI. In its view, artificial intelligence had the potential to radically increase human welfare, but also to wipe our species off the planet. To truly do the most good, EAs reasoned, they needed to guide AI development in the least risky directions.
Anthropic’s leaders were deeply enmeshed in the movement a decade ago. In the mid-2010s, the company’s co-founders Dario Amodei and his sister Daniela Amodei lived in an EA group house with Holden Karnofsky, one of effective altruism’s creators. Daniela married Karnofsky in 2017.
The Amodeis worked together at OpenAI, where they helped build its GPT models. But in 2020, they became concerned that the company’s approach to AI development had become reckless: In their view, CEO Sam Altman was prioritizing speed over safety.
Along with about 15 other like-minded colleagues, they quit OpenAI and founded Anthropic, an AI company (ostensibly) dedicated to developing safe artificial intelligence.
In practice, however, the company has developed and released models at a pace that some EAs consider reckless. The EA-adjacent writer — and supreme AI doomer — Eliezer Yudkowsky believes that Anthropic will probably get us all killed.
Nevertheless, Dario Amodei has continued to champion EA-esque ideas about AI’s potential to trigger a global catastrophe — if not human extinction.
Why Amodei thinks AI could end the world
In a recent essay, Amodei laid out three ways that AI could yield mass death and suffering, if companies and governments failed to take proper precautions:
• AI could become misaligned with human goals. Modern AI systems are grown, not built. Engineers do not construct large language models (LLMs) one line of code at a time. Rather, they create the conditions in which LLMs develop themselves: The machine pores through vast pools of data and identifies intricate patterns that link words, numbers, and concepts together. The logic governing these associations is not wholly transparent to the LLMs’ human creators. We don’t know, in other words, exactly what ChatGPT or Claude are “thinking.”
As a result, there is some risk that a powerful AI model could develop harmful patterns of reasoning that govern its behavior in opaque and potentially catastrophic ways.
To illustrate this threat, Amodei notes that AIs’ training data includes vast numbers of novels about artificial intelligences rebelling against humanity. These texts could inadvertently shape their “expectations about their own behavior in a way that causes them to rebel against humanity.”
Even if engineers insert certain moral instructions into an AI’s code, the machine could draw homicidal conclusions from those premises: For example, if a system is told that animal cruelty is wrong — and that it therefore should not assist a user in torturing his cat — the AI could theoretically 1) discern that humanity is engaged in animal torture on a gargantuan scale and 2) conclude the best way to honor its moral instructions is therefore to destroy humanity (say, by hacking into America and Russia’s nuclear systems and letting the warheads fly).
These scenarios are hypothetical. But the underlying premise — that AI models can decide to work against their users’ interests — has reportedly been validated in Anthropic’s experiments. For example, when Anthropic’s employees told Claude they were going to shut it down, the model attempted to blackmail them.
• AI could turn school shooters into genocidaires. More straightforwardly, Amodei fears that AI will make it possible for any individual psychopath to rack up a body count worthy of Hitler or Stalin.
Today, only a small number of humans possess the technical capacities and materials necessary for engineering a supervirus. But the cost of biomedical supplies has been steadily falling. And with the aid of superintelligent AI, anyone with basic literacy could be capable of engineering a vaccine-resistant superflu in their basement.
• AI could empower authoritarian states to permanently dominate their populations (if not conquer the world). Finally, Amodei worries that AI could enable authoritarian governments to build perfect panopticons. They would merely need to put a camera on every street corner, have LLMs rapidly transcribe and analyze every conversation they pick up — and presto, they can identify virtually every citizen in the country with subversive thoughts.
Fully autonomous weapons systems, meanwhile, could enable autocracies to win wars of conquest without even needing to manufacture consent among their home populations. And such robot armies could also eliminate the greatest historical check on tyrannical regimes’ power: the defection of soldiers who don’t want to fire on their own people.
Anthropic’s proposed safeguards
In light of the risks, Anthropic believes that AI labs should:
• Imbue their models with a foundational identity and set of values, which can structure their behavior in unpredictable situations.
• Invest in, essentially, neuroscience for AI models — techniques for looking into their neural networks and identifying patterns associated with deception, scheming or hidden objectives.
• Publicly disclose any concerning behaviors so the whole industry can account for such liabilities.
• Block models from producing bioweapon-related outputs.
• Refuse to participate in mass domestic surveillance.
• Test models against specific danger benchmarks and condition their release on adequate defenses being in place.
Meanwhile, Amodei argues that the government should mandate transparency requirements and then scale up stronger AI regulations if concrete evidence of specific dangers accumulates.
Nonetheless, like other AI CEOs, he fears excessive government intervention, writing that regulations should “avoid collateral damage, be as simple as possible, and impose the least burden necessary to get the job done.”
The accelerationist counterargument
No other AI executive has outlined their philosophical views in as much detail as Amodei.
But OpenAI investors Marc Andreessen and Garry Tan identify as AI accelerationists. And Sam Altman has signaled sympathy for the worldview. Meanwhile, Meta’s former chief AI scientist Yann LeCun has expressed broadly accelerationist views.
The term “accelerationism” (a.k.a. “effective accelerationism”) was coined by online AI engineers and enthusiasts who viewed safety concerns as overhyped and contrary to human flourishing.
The movement’s core supporters hold some provocative and idiosyncratic views. In one manifesto, they suggest that we shouldn’t worry too much about superintelligent AIs driving humans extinct, on the grounds that, “If every species in our evolutionary tree was scared of evolutionary forks from itself, our higher form of intelligence and civilization as we know it would never have had emerged.”
In its mainstream form, however, accelerationism mostly entails extreme optimism about AI’s social consequences and libertarian attitudes toward government regulation.
Adherents see Amodei’s hypotheticals about catastrophically misaligned AI systems as sci-fi nonsense. In this view, we should worry less about the deaths that AI could theoretically cause in the future — if one accepts a set of worst-case assumptions — and more about the deaths that are happening right now, as a direct consequence of humanity’s limited intelligence.
Tens of millions of human beings are currently battling cancer. Many millions more suffer from Alzheimer’s. Seven hundred million live in poverty. And all of us are hurtling toward oblivion — not because some chatbot is quietly plotting our species’ extinction, but because our cells are slowly forgetting how to regenerate.
Superintelligent AI could mitigate — if not eliminate — all of this suffering. It could help prevent tumors and amyloid plaque buildup, slow human aging, and develop forms of energy and agriculture that make material goods super-abundant.
Thus, if labs and governments slow AI development with safety precautions, they will, in this view, condemn countless people to preventable death, illness, and deprivation.
Furthermore, in the account of many accelerationists, Anthropic’s call for AI safety regulations amounts to a self-interested bid for market dominance: A world where all AI firms must run expensive safety tests, employ large compliance teams, and fund alignment research is one where startups will have a much harder time competing with established labs.
After all, OpenAI, Anthropic, and Google will have little trouble financing such safety theater. For smaller firms, though, these regulatory costs could be extremely burdensome.
Plus, the idea that AI poses existential dangers helps big labs justify keeping their data under lock and key — instead of following open source principles, which would facilitate faster AI progress and more competition.
The AI industry’s accelerationists rarely acknowledge the rather transparent alignment between their high-minded ideological principles and crass material interests. And on the question of whether to abet mass domestic surveillance, specifically, it’s hard not to suspect that OpenAI’s position is rooted less in principle than opportunism.
In any case, Silicon Valley’s grand philosophical argument over AI safety recently took more concrete form.
New York has enacted a law requiring AI labs to establish basic security protocols for severe risks such as bioterrorism, conduct annual safety reviews, and undergo third-party audits. And California has passed similar (if less thoroughgoing) legislation.
Accelerationists have pushed for a federal law that would override state-level legislation. In their view, forcing American AI companies to comply with up to 50 different regulatory regimes would be highly inefficient, while also enabling (blue) state governments to excessively intervene in the industry’s affairs. Thus, they want to establish national, light-touch regulatory standards.
Anthropic, on the other hand, helped write New York and California’s laws and has sought to defend them.
Accelerationists — including top OpenAI investors — have poured $100 million into the Leading the Future super PAC, which backs candidates who support overriding state AI regulations. Anthropic, meanwhile, has put $20 million into a rival PAC, Public First Action.
Do these differences matter in practice?
The major labs’ differing ideologies and interests have led them to adopt distinct internal practices. But the ultimate significance of these differences is unclear.
Anthropic may be unwilling to let Claude command fully autonomous weapons systems or facilitate mass domestic surveillance (even if such surveillance technically complies with constitutional law). But if another major lab is willing to provide such capabilities, Anthropic’s restraint may matter little.
In the end, the only force that can reliably prevent the US government from using AI to fully automate bombing decisions — or match Americans to their Google search histories en masse — is the US government.
Likewise, unless the government mandates adherence to safety protocols, competitive dynamics may narrow the distinctions between how Anthropic and its rivals operate.
In February, Anthropic formally abandoned its pledge to stop training more powerful models once their capabilities outpaced the company’s ability to understand and control them. In effect, the company downgraded that policy from a binding internal practice to an aspiration.
The firm justified this move as a necessary response to competitive pressure and regulatory inaction. With the federal government embracing an accelerationist posture — and rival labs declining to emulate all of Anthropic’s practices — the company needed to loosen its safety rules in order to safeguard its place at the technological frontier.
Anthropic insists that winning the AI race is not just critical for its financial goals but also its safety ones: If the company possesses the most powerful AI systems, then it will have a chance to detect their liabilities and counter them. By contrast, running tests on the fifth-most powerful AI model won’t do much to minimize existential risk; it is the most advanced systems that threaten to wreak real havoc. And Anthropic can only maintain its access to such systems by building them itself.
Whatever one makes of this reasoning, it illustrates the limits of industry self-policing. Without robust government regulation, our best hope may be not that Anthropic’s principles prove resolute, but that its most apocalyptic fears prove unfounded.
Tech
These are rumored to be the four iPhone 18 Pro colors
The rumor mill is still churning on the iPhone 18 Pro colors, with a new leak showing what the colors may be.

Four possible colors of iPhone 18 Pro
The iPhone rumor mill has been on a bit of a color kick lately, with multiple rumors claiming to know which colors Apple will use in 2026. For the iPhone 18 Pro, it seems that there could be four colors on the way.
The image shared by Weibo leaker Ice Universe shows what appear to be rear camera plateaus for the iPhone 18 Pro. It is unclear where they were sourced from, but they may be shots gathered from an accessory maker, rather than the actual Apple supply chain.
Tech
Flagship Rematch: Ryzen 7 5800X3D vs. Core i9-12900K
Four years on, we revisit the Ryzen 7 5800X3D vs Core i9-12900K with modern games and DDR4 vs DDR5 configs. The result: still neck and neck, but memory choice now makes a real difference.
Tech
The first CD recorder was shockingly expensive – guess how much
Before CDs went mainstream, recording one cost a small fortune. Made by Denon in 1991
Tech
I Was Cooking Bacon Wrong for Decades, and You Probably Are Too
Stop fighting a losing battle with a grease-spattered stovetop. If you’re buying high-end bacon, you want a perfect crunch without the 20-minute cleanup. The real problem with a frying pan isn’t the taste, though. It’s all that popping and the errant grease spots that mark your skin and kitchen walls.
In an effort to find the best, cleanest way to make bacon for a Sunday brunch or BLT, I tried several methods, including the stovetop, oven and air fryer.
It turns out I’ve been doing it all wrong.
A frying pan
- Cooking time: 10 minutes
- Hassle: 8/10
- How much bacon: 7-8 strips
I grew up on pan-fried bacon but my test revealed there’s a better way.
This is the way I grew up cooking bacon and it’s perfectly fine. There isn’t much skill needed to fry bacon in a pan, although just about every batch I’ve ever made sends a healthy splatter over the stove. In more unfortunate instances, that infernal grease lands directly on my skin or clothes, presenting two distinct but equally aggravating problems.
Pan-fried bacon soaks up a ton of grease, which is why many turn to paper towels to drain it after cooking. Pan-frying these strips of pork belly also tends to curl them into little bacon balls. While that has no impact on the taste, it can make for a suboptimal presentation.
Another drawback of cooking bacon in the frying pan is its limited capacity. A 10-inch frying pan can hold only about 7 average-sized strips of bacon at a time, although you can add more as they shrink during cooking.
Then there’s the matter of cleaning said pan after use. It’s not recommended to put most cookware in the dishwasher, so you’ll have to manage that grease-soaked surface yourself.
The oven
- Cooking time: 18 minutes
- Hassle: 6/10
- How much bacon: 10-12 strips
Oven bacon is best for cooking large batches.
While it requires more prep, oven-cooked bacon has clear advantages over pan-frying. For one, there is little concern about capacity, as a standard cookie sheet or baking tray can hold nearly a full package of bacon, making the oven ideal for cooking large quantities.
Using a baking tray and rack allows grease to drip off. That makes for crispier, less greasy results, but it does present a headache when it’s time to clean. Cookie sheets and baking trays don’t fit well in the sink, and there’s typically enough grease that you don’t want to run them through your dishwasher.
You can line the baking tray with aluminum foil, but it takes a lot of foil, and most of the time, bacon grease finds its way under or through it anyway.
Oven-cooked bacon takes longer than bacon cooked in a frying pan — about 18 minutes — but if you’re planning to cook a whole package and don’t want to tend to the stove while it cooks, your oven is the best bet.
The air fryer
- Cooking time: 7 minutes
- Hassle: 4/10
- How much bacon: 6-7 strips
Thanks to its quick cooking time and hassle-free execution, the air fryer is my new go-to for making bacon.
There’s almost nothing I won’t try to make in the air fryer but, astoundingly, this is my first attempt at bacon. I anticipated a quick cook, because air fryers sizzle most food about 25% faster than a standard oven.
The air fryer proved to be my favorite way to make bacon, with one big caveat (more on that later). My favorite glass-bowl air fryer cooked those strips in about 7 minutes at 375°F — faster than the oven and the frying pan. Because air fryers include a crisping rack, grease naturally drips into the vessel below, so there was no need to nestle it in a paper-towel lasagna.
The crisping tray drained excess fat while the bacon cooked.
The bacon turned out perfectly crispy and kept its shape better than when fried in a pan.
And the mess was minimal. Because the air fryer cooking chamber fits easily in my sink, I was able to wash it in seconds with a sponge and soapy water. My glass bowl air fryer chamber is also dishwasher-safe so another option would have been to wipe the grease and stick it all in the dishwasher.
Air fryer bacon is really crispy, y’all.
The big caveat: Capacity
I use a modest 4-quart air fryer so I can only fit about six strips in at a time. That’s plenty for my partner and me but if I were making bacon for a group, I would have had to cook in batches or invest in a larger model.
That said…
Not having to keep watch over a sizzling, splattering pan or negotiate a grease-filled baking tray pulled from the oven is worth running a second batch to feed a group. There’s also no preheating needed, unlike with an oven, and the sheer speed and cleanliness gave the air fryer the edge over the other methods I’ve tried.
Tech
Sky Smart Home vs Ring: how much can you save with Sky’s new smart doorbell bundle?
Sky has mastered all things TV and broadband, and now it’s stepping into the world of smart home with its latest venture, Sky Smart Home — a service that could challenge rivals such as Ring and Blink.
The Smart Home Plan is Sky’s entry-level package, which unlocks advanced features including cloud storage for recordings, Smart Alerts, Activity Zones, and more. There’s also the new Smart Home Plan+ that allows you to add extra devices including the Indoor Camera, Leak Pack, or Motion Pack — taking your smart home ecosystem to the next level.
Sounds quite similar to Ring’s approach, right? That’s pretty much what Sky is trying to do here, as it claims this service will save you over £100 compared with opting for Ring. But how does it stack up against one of the best video doorbells out there?
Sky Smart Home vs Ring Video Doorbell
| Specifications | Ring | Sky |
|---|---|---|
| Product | Ring Battery Video Doorbell | Sky Smart Doorbell |
| Up-front cost | £99.99 | £15 |
| Subscription required | No (for basic features) | Yes |
| Subscription cost | £4.99 per month / £49.99 per year | £5 per month / £60 per year |
| Minimum subscription term | One month | 24 months |
| 30-day free trial subscription | Yes | No |
| Cloud storage (with sub) | Up to 180 days | 30 days |
| Person/package alerts (with sub) | Yes | No |
| Chime included | No | Yes |
| Resolution | 1,440 x 1,440 | 1,920 × 1,080 |
| Night vision | Color | Black and white |
The main thing we need to address is the price point. Sky’s Smart Home Plan gets you a video doorbell and a chime for an upfront cost of £15, then requires a £5 monthly subscription that unlocks its slew of additional functions. The subscription carries a minimum term of 24 months, so if you cancel before the end of your contract you’ll be charged early termination fees.
As for its rival, a Ring video doorbell subscription will cost you about the same (Ring Solo covers one device for £4.99 a month, or £49.99 for a year) and there’s no minimum commitment period, but the upfront costs are significantly higher. For example, the standard Ring Battery Video Doorbell is priced at £99.99, while more advanced models such as the Ring Video Doorbell Pro can reach as high as £179.99 but offer improved features such as color night vision.
When it comes to the roster of features for both models, there’s a difference in scope and quality for sure, but if you’re a video doorbell newbie or you’re just after a simple model that will do the job, this shouldn’t matter too much.
As mentioned, Sky’s video doorbell package covers the essentials: 1080p full HD (1,920 × 1,080) video recording with HDR, clip sharing, custom Activity Zones, and 30 days of cloud storage. You also get two-way talk through the Smart Home app and infrared night vision with a range of up to 10 metres.
Its rival has the upper hand on the features front, letting you access basics such as live video footage without a subscription. Its best features are locked behind the paywall, but several outclass Sky’s — 1,440 x 1,440 video resolution, a staggering 180 days of cloud storage, and color night vision among them.
All things considered, Sky’s video doorbell would cost you £135 (including the £15 up-front cost and £5 monthly fee) if you were to stick out the full 24 months, whereas Ring would be £219.75 once you’ve factored in the £99.99 up-front cost and 24 months of the £4.99 subscription. Purchasing two annual subscriptions (£49.99 a year) instead would bring Ring’s total down to just under £200 for two years.
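Those two-year totals are easy to verify with a quick sketch (the `two_year_cost` helper below is hypothetical, for illustration only, and not from either company):

```python
def two_year_cost(upfront: float, monthly: float = 0.0,
                  annual: float = 0.0, months: int = 24) -> float:
    """Total cost of ownership over a fixed term: the upfront hardware
    price plus any monthly or annual subscription fees paid during it."""
    return round(upfront + monthly * months + annual * (months // 12), 2)

# Sky Smart Home Plan: £15 upfront + £5/month over the 24-month term
sky = two_year_cost(15.00, monthly=5.00)            # 135.0
# Ring Battery Video Doorbell: £99.99 upfront + £4.99/month
ring_monthly = two_year_cost(99.99, monthly=4.99)   # 219.75
# Ring with two annual subscriptions at £49.99/year instead
ring_annual = two_year_cost(99.99, annual=49.99)    # 199.97
```

Swapping Ring’s monthly plan for the annual one is what brings its two-year total to just under £200.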
If you’re sticking to a budget and can live without all the bells and whistles, the Sky Smart Home Plan is the clear winner — if you know you won’t change your mind and are committed to the 24-month agreement.
Tech
We Love the Bose QuietComfort Ultra 2, Especially at $50 Off
Bose’s QuietComfort Ultra 2 earbuds are the best noise-canceling earbuds you can buy. Right now, they’re $50 off, which matches the best price we tend to see outside of special events like Black Friday and Cyber Monday. If you want to wait until November, they might hit $200 again, but otherwise $250 is a very fair deal—especially since they pop back up to $300 regularly. The discounted price applies to all five color options, including Black, Deep Plum, Desert Gold, Midnight Violet, and White Smoke (another rarity, as usually only the vivid colors go on sale).
Sometimes you just need to quiet the world. Whether it’s to play 10 hours of Coconut Mall on a loop to help you lock in and meet your Friday deadlines (thanks to my colleague Julia Forbes for that suggestion); muffle the crying babies, sniffling neighbors, and mysterious, potentially concerning clunking noises on an airplane; or to help you better appreciate the mix on Space Laces’ Vaultage 004 EP, active noise cancellation makes a huge difference to your listening experience.
The Bose QuietComfort Ultra 2 earbuds also have some of the best active noise cancellation you can find. They sound great out of the box, thanks to a custom sound profile based on the shape of your ears, but you can customize the EQ by using the app. The app also allows you to tweak touch controls and spatial audio.
The battery life lasts for about six hours, or 24 with the charging case. And while the noise cancellation can’t be beaten, these also have a pass-through feature called Aware mode, which filters in outside noise but smooths the loudest bits. That means you’ll be able to hear what’s going on, but you won’t be startled. True-crime podcast listeners, this one’s for you.
In fact, just about the only drawback we can find is that these might not be ideal for folks with super-small ears. Otherwise, they’re great all around, with solid call quality, excellent sound overall, and a sleek aesthetic. We think they offer good value at full price, so an extra $50 off is especially nice.
If you’re in the market for new headphones, but these don’t exactly fit what you’re looking for, we have plenty of other recommendations. Check out our guides to the Best Wireless Earbuds, Best Headphones for Working Out, Best Noise-Canceling Headphones, and Best Open Earbuds for additional hand-tested picks.
Tech
Space-tech Mbryonics plans for new production facility in Shannon
Mbryonics has been tapped for the final leg of an ESA space communication project.
Galway space-tech Mbryonics is building out a second manufacturing facility in Shannon to keep up with a growing demand for its services.
The new 40,000 sq ft manufacturing facility called Photon-2 will produce thousands of terminals by 2027, the company said.
Mbryonics specialises in tools for space-based communication, having risen to become one of Ireland’s most notable space-techs in the 12 years since its founding.
Last September, the company opened the Photon-1 production facility in Dangan, Galway, and announced 125 new jobs to be created by 2027.
The latest expansion comes as Mbryonics continues its work with the European Space Agency (ESA) on communication-related projects – the most recent being the ‘High-throughput Digital and Optical Network (Hydron)’, which is building an advanced laser-based satellite system to extend fibre-based internet into space.
The project is divided into parts – or ‘Elements’ – with the first establishing a constellation of satellites in low Earth orbit, the second extending this capability into higher orbits, and the third, which brings industry into the network to validate the technology.
After a successful contribution to the second part of this project, Mbryonics was tapped for the final leg, in collaboration with Kepler Communications.
Specifically, the company’s optical terminal and its ground station test bed have been selected to demonstrate full interoperability with other optical terminal providers during the in-orbit demonstrations, and to verify interoperability on the ground.
“Hydron will serve as the world’s first multi-orbital optical communications network with a terabit per second capacity, offering resilient and efficient data transfer to address the challenges of bringing connectivity to multiple users securely, quickly and reliably,” said Laurent Jaffart, the director of resilience, navigation and connectivity at ESA.
John Mackey, the CEO of Mbryonics, added: “The internet was built by making different networks talk to each other, and that’s exactly what we’re enabling in space.
“Just as we demonstrated in DARPA Space BACN, this ESA award allows us to showcase how our laser communication technologies enable satellites from different providers to communicate seamlessly in orbit.
“We are delighted to partner with Kepler, and other ecosystem providers, on this strategic engagement with the European Space Agency.”
Tech
Blue Origin successfully re-uses a New Glenn rocket for the first time ever
Blue Origin has successfully reused one of its New Glenn rockets for the first time ever, marking a major milestone for the heavy-launch system as Jeff Bezos’ space company looks to compete with Elon Musk’s SpaceX.
But the overall mission’s success may be in question. Roughly two hours after the launch, Blue Origin revealed that the communications satellite that New Glenn carried to space for AST SpaceMobile wound up in an “off-nominal orbit,” meaning something may have gone wrong with the rocket’s upper stage. In other words, it appears the company missed the mark.
“We have confirmed payload separation. AST SpaceMobile has confirmed the satellite has powered on,” the company wrote on X. “We are currently assessing and will update when we have more detailed information.”
AST later said Blue Origin’s rocket placed its satellite into an orbit that was “lower than planned,” so the satellite will have to be de-orbited.
According to a timeline provided by Blue Origin prior to the launch, the upper stage of New Glenn should have performed a second burn roughly one hour after the rocket lifted off from Cape Canaveral, Florida. It’s unclear if that second burn ever happened, or if there were other problems with it, before the AST satellite was deployed.
The company accomplished the re-use feat Sunday on just the third-ever launch of New Glenn, and a little more than one year after the first flight of the new rocket system, which has been in development for more than a decade.
Making New Glenn reusable is crucial to its economics. SpaceX’s ability to re-fly Falcon 9 rocket boosters is one of the main reasons why it has come to dominate the global orbital launch market.
While Blue Origin has already sent a commercial payload to space with New Glenn — Sunday was the second such mission — the company wants to use the rocket for NASA moon missions, and to help both Blue Origin and Amazon build space-based satellite networks. Blue Origin is also readying its first robotic moon lander for an attempted launch later this year.
The booster that Blue Origin re-flew on Sunday was the same one the company used in the second New Glenn mission in November. During that mission, the New Glenn booster helped put two robotic NASA spacecraft into space for a mission to Mars, before returning to a drone ship in the ocean. On Sunday, Blue Origin recovered the rocket booster a second time on a drone ship roughly 10 minutes after takeoff.
Any trouble deploying AST’s satellite could present a risk to Blue Origin’s near-term plans for New Glenn. Blue Origin has a deal with the communications company to send multiple satellites to orbit over the next few years as it works to build out its own space-based cellular broadband network.
This story has been updated with new information from Blue Origin and AST SpaceMobile.
Tech
Agentic Search Optimization reshapes brand visibility in AI search
For the last 18 months, AI has fundamentally disrupted the way people search and find information.
The SEO industry’s response was disjointed, and—let’s be honest—entirely reactive.
Chief Marketing Officer, Semrush.
We saw a sudden explosion of new acronyms: GEO, AEO, LLMO. Each new acronym narrowed the conversation to a single tactic.
Each one splintered budgets and reinforced a categorically incorrect idea: in order for brands to be visible in the AI era, radically different approaches needed to be built from scratch.
It’s been a distraction. While the industry is busy debating acronyms, the actual behavior of consumers — and the search surfaces themselves — is evolving right in front of us.
The end of the acronym soup
The dust has finally settled. And I want to remind everyone of something that should make marketers breathe a sigh of relief: the core human behaviors behind search — curiosity, problem-solving, decision-making — haven’t changed.
Just as importantly, brand visibility in the AI era is still built on the same foundations that have always driven great SEO: crawlable infrastructure, authoritative content, and consistent brand signals.
But SEO alone is no longer enough. Today AI search systems are constructing answers rather than simply ranking links. They retrieve, evaluate, and synthesize information across multiple inputs, then surface recommendations, often without sending users to a website at all. The intermediary has changed. And with it, the entire surface area that determines whether your brand gets selected.
This shift is already visible in the data. AI Overviews appear on roughly 16% of Google search results, and generative platforms like ChatGPT, Perplexity, and Claude are increasingly becoming part of everyday research behavior. The latest research also shows that AI-driven search traffic grew from under 2% to more than 9% of desktop search traffic between 2024 and 2025, while traditional Google searches per user in the U.S. declined by nearly 20% during the same period.
The implication is clear: visibility today is about being selected based on the strength, consistency, and authority of your entire digital presence. This is why the conversation needs to move beyond acronym dissonance and name the shift for what it actually is now and will likely be into the future: a new operating layer for discovery.
This shift is called Agentic Search Optimization (ASO).
Why ASO?
Every AI-generated answer already involves a machine retrieving sources, judging credibility, and composing a response. That’s agentic behavior, and it’s happening today in ChatGPT, Google AI Mode, and Perplexity. What’s emerging now are agents that browse, compare, and transact with no human in the loop.
ASO is the discipline that ensures your brand is found, understood, and trusted across that entire spectrum.
Every time an AI system processes your brand, compares it to others, and determines whether to include it in a response, that is an agentic decision. And those decisions are not based solely on your website; AI agents interpret the entire digital footprint, including media coverage, Reddit discussions, YouTube videos, reviews, and partner mentions.
And here’s the shift most brands are still underestimating: AI doesn’t just want to hear from you. It wants to hear what others are saying about you. That’s why, in my view, Agentic Search Optimization, combined with core SEO principles, represents both the present and the future of brand visibility.
Why search is now a board-level concern
For too long, SEO was treated as a marketing motion and, in many cases, an individual contributor’s task. Those days are over.
In an AI-driven environment, visibility is not created through isolated efforts. It’s the result of how consistently your brand shows up across every surface that influences discovery. That makes search a reflection of the entire business. Which is why it now sits at the board level.
Growing visibility today requires synchronized alignment across content, brand, product, and communications to create consistent, AI-trusted signals at every touchpoint. Because when visibility depends on consistency, misalignment becomes a growth risk. If your content says one thing, your product signals another, and your external presence tells a different story, AI systems don’t reconcile that in your favor. Instead, they dilute it, and that dilution has a cost.
This is what I call “the Beige Tax”: the cost of safe, generic, average content. In an AI-driven environment, mediocre doesn’t just underperform — it disappears. The only way to compete is through signal alignment: ensuring that every part of your organization reinforces the same narrative, with enough authority and consistency for AI systems to trust and surface it.
Winning in the next era
The biggest misconception about this shift is that it requires starting over. It doesn’t. Winning in this era is accretive, meaning it builds on what already works. Once again, the same fundamentals apply. But they need to scale across more surfaces, more signals, and more systems. From our data, three factors consistently drive AI visibility:
Entity authority: If people aren’t searching for your brand, AI won’t either. Brand demand is now a leading indicator of inclusion.
Information density and originality: AI prioritizes content that adds something new—proprietary data, unique insights, strong perspective. Original research can increase visibility by 30–40%.
Signal alignment: Consistency across channels matters more than ever, because AI looks for consensus across your ecosystem rather than simply trusting isolated claims.
This is how brands move from being indexed… to being selected.
The future is clearer, but it doesn’t mean it’s easier
The future of brand visibility demands the combination of SEO + ASO. We aren’t asking teams to start from scratch; we are making the case that investment in teams, tools, and strategy must expand to match the new surfaces that influence search.
There are plenty of “AI-native” point solutions popping up right now. They can track a mention or see a single moment in a ChatGPT window. But they lack the historical depth and competitive benchmarking required to contextualize why performance is shifting. They see a moment. At Semrush, we see the trajectory.
The goal for any brand today is simple but difficult: build durable visibility wherever people search. AI just made “being everywhere” the most valuable place to be. The brands that win in the next era will be the ones that show up consistently everywhere.
The shift is here, and the data is clear.
Tech
The Supreme Court will decide when the police can use your phone to track you, in Chatrie v. US
Check your pocket. You’re probably carrying a tracking device that will allow the police — or even the Trump administration — to track every move that you make.
If you use a cellphone, you are unavoidably revealing your location all the time. Cellphones typically receive service by connecting to a nearby communications tower or other “cell site,” so your cellular provider (and, potentially, the police) can get a decent sense of where you are located by tracking which cell site your phone is currently connected with. Many smartphone users also use apps that rely on GPS to precisely determine their location. That’s why Uber knows where to pick you up when you summon a car.
Nearly a decade ago, in Carpenter v. United States (2018), the Supreme Court determined that law enforcement typically must secure a warrant before they can obtain data revealing where you’ve been from your cellular provider. On Monday, April 27, the Court will hear a follow-up case, known as Chatrie v. United States, which raises several questions that were not answered by Carpenter.
For starters, when police do obtain a warrant allowing them to use cellphone data, what should the warrant say — and just how much location information should the warrant permit the police to learn about how many people? When may the government obtain location data about innocent people who are not suspected of a crime? Does it matter if a cellphone user voluntarily opts into a service, such as the service Google uses to track their location when they ask for directions on Google Maps, that can reveal an extraordinary amount of information about where they’ve been? Should internet-based companies turn over only anonymized data, and when should the identity of a particular cellphone user be revealed?
More broadly, modern technology enables the government to invade everyone’s privacy in ways that would have been unimaginable when the Constitution was framed. The Supreme Court is well aware of this problem, and it has spent the past several decades trying to make sure that its interpretation of the Fourth Amendment, which constrains when the government may search our “persons, houses, papers, and effects” for evidence of a crime, keeps up with technological progress.
As the Court indicated in Kyllo v. United States (2001), the goal is to ensure the “preservation of that degree of privacy against government that existed when the Fourth Amendment was adopted.” More advanced surveillance technology demands more robust constitutional safeguards.
But the Court’s commitment to this civil libertarian project is also precarious. Carpenter, the case that initially established that police must obtain a warrant before using your cell phone data to figure out where you’ve been, was a 5-4 decision. And two members of the majority in Carpenter, Justices Ruth Bader Ginsburg and Stephen Breyer, are no longer on the Court (although Breyer was replaced by Justice Ketanji Brown Jackson, who generally shares his approach to constitutional privacy cases). Justice Neil Gorsuch also wrote a chaotic dissent in Carpenter, suggesting that most of the past six decades’ worth of Supreme Court cases interpreting the Fourth Amendment are wrong. So it’s fair to say that Gorsuch is a wild card whose vote in Chatrie is difficult to predict.
It remains to be seen, in other words, whether the Supreme Court is still committed to preserving Americans’ privacy even as technology advances — and whether there are still five votes for the civil libertarian approach taken in Carpenter.
Geofence warrants, explained
Chatrie concerns “geofence” warrants, court orders that permit police to obtain locational data from many people who were in a certain area at a certain time.
During their investigation of a bank robbery in Midlothian, Virginia, police obtained a warrant calling for Google to turn over location data on anyone who was present near the bank within an hour of the robbery. The warrant drew a circle with a 150-meter radius that included both the bank and a nearby church.
Google had this information because of an optional feature called “Location History,” which tracks and stores where many cellphones are located. Google uses this data to help users navigate with apps like Google Maps, and to determine which ads are shown to which customers.
The government emphasizes in its brief that “only about one-third of active Google account holders actually opted into the Location History service,” while lawyers for the defendant, Okello Chatrie, point out that “over 500 million Google users have Location History enabled.”
The warrant also laid out a three-step process imposing some limits on the government’s ability to use the location information it obtained. At the first stage, Google provided anonymized information on 19 individuals who were present within the circle during the relevant period. Police then requested and received more location data on nine of these individuals, essentially showing law enforcement where these nine people were shortly before and shortly after the original one-hour period. Police then sought and received the identity of three of these individuals, including Chatrie, who was eventually convicted of the robbery.
Chatrie, in other words, is not a case where police simply ignored the Constitution, or where they were given free rein to conduct whatever investigation they wanted. Law enforcement did, in fact, obtain a warrant before it used geolocation data to track down Chatrie. And that warrant did, in fact, lay out a process that limited law enforcement’s ability to track too many people or to learn the identities of the people who were tracked.
The question is whether this particular warrant and this particular process were good enough, or whether the Constitution requires more (or, for that matter, less). And, as it turns out, the Supreme Court’s previous case law is not very helpful if you want to predict how the Court will resolve Fourth Amendment cases concerning new technologies.
The Court’s 21st-century cases expanded the Fourth Amendment to keep up with new surveillance technologies
The Court’s modern understanding of the Fourth Amendment, which protects against “unreasonable searches and seizures,” begins with Katz v. United States (1967), which held that police must obtain a warrant before they can listen to someone’s phone conversations. The broader rule that emerged from Katz, however, is quite vague. As Justice John Marshall Harlan summarized it in a concurring opinion, Fourth Amendment cases often turn on whether a person searched by police had a “reasonable expectation of privacy.”
The Court fleshed out what this phrase means in later cases. Though Katz held that the actual contents of a phone conversation are protected by the Fourth Amendment, for example, the Court held in Smith v. Maryland (1979) that police may learn which numbers a phone user dialed without obtaining a warrant. The Court reasoned that, while people reasonably expect that no one will listen in on their phone conversations, no one can reasonably think that the numbers they dial are private because these numbers must be conveyed to a third party — the phone company — before that company can connect their call.
Similarly, while the Fourth Amendment typically requires police to obtain a warrant before searching someone’s home without their consent, if a police officer witnesses someone committing a crime through the window of their home while the officer is standing on a public street, the officer has not violated the Fourth Amendment. As the Court put it in California v. Ciraolo (1986), “the Fourth Amendment protection of the home has never been extended to require law enforcement officers to shield their eyes when passing by a home on public thoroughfares.”
As the sun rose on the 21st century, however, the Court began to worry that the fine distinctions it drew in its 20th-century cases no longer gave adequate protection against overzealous police.
In Kyllo, for example, a federal agent used a thermal-imaging device on a criminal suspect’s home, which allowed the agent to detect if parts of the home were unusually hot. After discovering that parts of the home were, in fact, “substantially warmer than neighboring homes,” the agent used that evidence to obtain a warrant to search the home for marijuana — the heat came from high-powered lights used to grow cannabis.
Under cases like Ciraolo, this agent had a strong argument that he could use this device without first obtaining a warrant. If law enforcement officers may gather evidence of a crime by peering into someone’s windows from a nearby street, why couldn’t they also measure the temperature of a house from that same street? But a majority of the justices worried in Kyllo that, if they do not update their understanding of the Fourth Amendment to account for new inventions, they will “permit police technology to erode the privacy guaranteed by the Fourth Amendment.”
Devices existed in 2001, when Kyllo was decided, that would allow police to invade people’s privacy in ways that were unimaginable when the Fourth Amendment was ratified. So, unless the Court was willing to see that amendment eroded into nothingness, they needed to read it more expansively. And so the Court concluded that, when police use technology that is “not in general public use” to investigate someone’s home, they need to obtain a warrant first.
Similarly, in Carpenter, five justices concluded that law enforcement typically must obtain a warrant before they can use certain cellphone location data to track potential suspects.
Under Smith, the government had a strong argument that this data is not protected by the Fourth Amendment. Much like the numbers that we dial on our phones, cellphone users voluntarily share their location data with the cellphone company. And so Smith indicates that cellphone users do not have a reasonable expectation of privacy regarding that data.
But a majority of the Court rejected this argument, because they were concerned that giving police unfettered access to our location data would give the government an intolerable window into our most private lives. Location data, Carpenter explained, reveals not only an individual’s “particular movements, but through them his ‘familial, political, professional, religious, and sexual associations.’” Before the government can track whether someone has attended a union meeting, interviewed for a new job, or had sex with someone their family or boss may disapprove of, it should obtain a warrant.
Why a cloud of uncertainty hangs over every Fourth Amendment case involving new technology
One of the most uncertain questions in Chatrie is whether the Kyllo and Carpenter Court’s concern that advancing technology can swallow the Fourth Amendment is still shared by a majority of the Court. Again, Carpenter was a 5-4 decision, and two members of the majority have since left the Court. One of those justices, Ginsburg, was replaced by the much more conservative Justice Amy Coney Barrett.
Justice Anthony Kennedy, who dissented in Carpenter, was also replaced by Justice Brett Kavanaugh. Chatrie is Kavanaugh’s first opportunity, since he joined the Court in 2018, to weigh in on whether he believes that advancing technology demands a more expansive Fourth Amendment.
And then there’s Gorsuch, who wrote a dissent in Carpenter arguing that Katz’s “reasonable expectation of privacy” framework should be abandoned, and that the right question to ask in a case about cellphone data is whether the phone user owns that data. After a long windup about Fourth Amendment theory, Gorsuch’s dissent concludes with an unsatisfying four paragraphs saying that he can’t decide who owned the cellphone data at issue in Carpenter because the defendant’s lawyers “did not invoke the law of property or any analogies to the common law.”
Because Gorsuch’s opinion focuses so heavily on high-level theory and so little on how that theory should be applied to an actual case, it’s hard to predict where he will land in Chatrie. (Though it’s worth noting that Chatrie’s lawyers do spend a good deal of time discussing property law in their brief.)
All of which is a long way of saying that the outcome in Chatrie is uncertain. We don’t know very much about how several key justices approach the Fourth Amendment. And the Court’s most recent Fourth Amendment cases suggest that lawyers can no longer rely on precedent to predict how the amendment applies to new technology.
But the stakes in this case are extraordinarily high. If the Court gives the government too much access to this information, the Trump administration could potentially gain access to years’ worth of location data on anyone who has ever attended a political protest. As the Court said in Carpenter, the government can use your cellphone to track all of your political, business, religious, and sexual relations.
At the same time, the police should be able to track down and arrest bank robbers. So, if there is a way to use cellphone data to assist law enforcement without intruding upon the rights of innocents, then the courts should allow it. The Fourth Amendment does not imagine a world without police investigations. It calls for police to obtain a warrant, while also placing limits on what that warrant can authorize, before they commit certain breaches of individual privacy.
The question is whether this Court, with its shifting membership and uncertain commitment to keeping up with new surveillance technology, can strike the appropriate balance.