Anthropic and SpaceX have struck a deal to allow the company behind Claude to use xAI’s Colossus 1 data center. As a result, Anthropic is doubling the rate limits for paid users of Claude Code and loosening other limits, the company said.
The arrangement with SpaceX allows Anthropic to use “all the compute capacity” at SpaceX’s Colossus 1 data center, which will add more than 300 megawatts of new capacity “within the month.” Anthropic also recently struck deals with Amazon and Google. As part of the deal, Anthropic has also “expressed interest in partnering with SpaceX to develop multiple gigawatts of orbital AI compute capacity,” the companies said. SpaceX recently filed paperwork with the FCC indicating it wanted to launch a million satellites in order to create an orbital data center.
Paid users of Claude Code (which includes people on Pro, Max, Team and Enterprise plans) should see immediate benefits from the deal. Anthropic is doubling the five-hour rate limits for Claude Code. It’s also removing the “peak hours” restrictions for Pro and Max users, and increasing the API rate limits “considerably” for Claude Opus models.
Anthropic partnering with Elon Musk’s SpaceX (which now owns xAI) may seem like an unlikely move. Musk recently called Anthropic “misanthropic and evil,” and has long been critical of its CEO Dario Amodei. But in a post on X he said that he was “impressed” with the Anthropic team and that Claude will “probably” be good. “We reserve the right to reclaim the compute if their AI engages in actions that harm humanity,” he added in another post.
Most people who use AI daily, including me, have already found their preferences. I use a paid Claude subscription to help with my editorial chores (headline brainstorming, fine-tuning the tone, etc.), while Gemini is my go-to AI model for image generation and background research, especially when I want to go deep on a topic.
Maybe you use Perplexity for search or ChatGPT for code, and that’s absolutely fine, as it only boosts your productivity and gets you the right information faster. The problem is, the operating system on your phone or laptop isn’t aware of your choice, and until now, it didn’t even care, imposing its own choice of AI instead.
In the corporate world, we call this partnerships. Most Android manufacturers, including Samsung and OnePlus, have partnered with Google to integrate Gemini on their devices. Windows, on the other hand, gives you Copilot, whether you take it or leave it.
Apple, reportedly, is about to ask a question neither of them has bothered with, much less acted on: which AI do you actually want to use, and where?
The days of your phone choosing an AI for you are almost over
The moment you look at what Android and Windows are doing, the contrast gets stark fast.
When iOS 27 arrives, you should be able to head into Settings, assign your preferred third-party AI model to an Apple Intelligence feature (Writing Tools, Image Playground, etc.), and let iOS 27 take it from there. Siri gets the same treatment, letting you select the AI model that handles your requests on the backend.
The moment iOS 27 lands on my iPhone 17, I am going to select Claude as the AI model of my choice for Writing Tools and Gemini for Image Playground.
This isn’t just a regular choice, but a frictionless, system-level choice. You set it once, and every time you invoke the Apple Intelligence tool, your preferred AI model shows up. No switching of apps or copy-pasting prompts and outputs across windows required.
That’s what true model portability looks like in practice, and in my frank opinion, it’s a more coherent solution than anything Android or Windows has ever implemented.
On Android and Windows, my favorite AI is always one detour away
Let’s say that you’re using the AI-powered text processing tools on a Samsung Galaxy smartphone. Galaxy AI offers you quite a lot of them, including Writing Assist, Note Assist, and Call Assist, but all of them are powered by Google’s Gemini.
What if I want to use Claude in Samsung Messages to write a message? I have to exit the app, open Claude’s app or web version, paste the text, type in a prompt, copy the output, return to the Messages app, paste it, and send it. Want a different model to summarize your meeting notes in Samsung Notes? Same detour.
I know, it’s tiring to even read that. Imagine millions of users doing this every single day, just to use their preferred AI model. On Windows, Copilot is baked into Notepad and Paint, and there’s absolutely no other choice for everyday users.
The platforms hand you an AI ecosystem to ease your digital pain, but what they’ve actually built is a walled garden.
Apple isn’t winning the AI race: It’s building the track
Apple’s approach, on the other hand, treats your AI preference the way iOS already treats default browsers or email apps: as a user setting. That, in my opinion, is real democratization at work: the right to choose whichever AI is genuinely useful to me, not whichever scores highest on a benchmark.
As for how Extensions could actually materialize in practice: any AI company could opt in and add support through its App Store app, which would then become available as an engine inside Apple Intelligence. Once you install the app, it surfaces as an option inside Settings.
From there, you can route an Apple Intelligence tool to whichever model you trust most for that specific task. Apple’s in-house models stay intact, while the option to outsource a query to third-party models sits as a layer on top. We could also get a dedicated App Store section highlighting the compatible AI apps.
Here’s where it gets even more interesting for Apple as a business. So far, the company has been fairly criticized for lagging in the AI race. However, opening up its platform of over 2.5 billion active devices, including iPhones, iPads, and MacBooks, to Claude, Gemini, and whoever qualifies may turn that weakness into its most lucrative strength.
Remember, none of this is officially confirmed by Apple. But we’re pretty close to WWDC 2026, and that’s when Apple could officially announce its transition from an AI-first company to an AI-agnostic platform, one that profits from all of them.
At this point in the brand’s history, the Milwaukee catalog is vast and varied. There are simple hand and power tool essentials, specialized professional items, and even tools that have managed to win over some Milwaukee haters. Still, it hasn’t become one of the biggest and most relied-upon names in the tool game by remaining stagnant. Milwaukee is consistently adding new products to its already rich selection, and it appears that the summer of 2026 will be a big time for this. There are some new Milwaukee hand tools and organizers on their way down the pipeline, set to reach customers soon.
These May releases are just a warmup for what will be a rather eventful June for Milwaukee. Alongside a $169.97 variation on the aforementioned 6-piece tool set that comes with a Packout organizer, these are the other three new kits — only one of which features an accompanying organizer — you can expect to see released at the start of the summer.
Multiple 6-piece hand tool sets are due out in June
Further expanding its hand tool offerings, Milwaukee has a couple more 6-piece sets scheduled for release in June 2026. One is a kit of cushion grip screwdrivers of varying sizes and tip shapes — among the Milwaukee hand tools manufactured within the United States — along with a Packout storage case to hold them. This set will have a retail price of $89.97, with the screwdrivers covered by the Milwaukee Lifetime Guarantee and the Packout container protected by the Milwaukee Limited Lifetime Warranty.
Alongside this, there’s the more varied yet Packout-container-lacking 6-piece comfort grip cutting pliers, wire stripper, and cushion grip screwdrivers set. This set includes 9-inch lineman’s comfort grip pliers, 8-inch diagonal comfort grip cutting pliers, 8-inch long nose comfort grip pliers, an 8-20 AWG dipped grip wire stripper and cutter, a #2 Phillips 4-inch cushion grip screwdriver, and a 1/4-inch slotted 4-inch cushion grip screwdriver. Those in need will be able to get all of this for $129.97 once the set hits store shelves and online marketplaces, with the Lifetime Guarantee included.
Don’t forget the 2-piece comfort grip cutting pliers set
Cutting pliers are some of the most versatile hand tools on the market. They’re needed for all kinds of job duties and can cut material, strip wire, and more with ease. Even though Milwaukee already offers cutting pliers for sale in different sizes, shapes, and use cases, the brand intends to add more to its selection in summer 2026. Coming in June is the 2-piece comfort grip cutting pliers set, which will have a price tag of $74.97 and therefore could someday join the best Milwaukee tools for under $100.
Per the product description on the Milwaukee website, the two included plier types are the 9-inch lineman pliers and the 8-inch diagonal cutting pliers. Both are made of press-forged steel, they’re said to open and close smoothly without breaking in, and the lineman pliers specifically feature fish-tape pullers and reaming heads intended for 1/2-inch to 1-inch conduits. The set also comes with the aforementioned Milwaukee Lifetime Guarantee.
Evidently, Milwaukee has no intention of calling its hand tool and Packout organizer lineup complete. Surely this drop will turn out to be just a small part of the brand’s overall 2026 product release roadmap.
The Supreme Court has denied a request from Apple to pause a mandated return to District Court with Epic while it contends with its appeals. So, it faces a battle on two fronts after all.
The new decision is the latest defeat for Apple that has resulted from the 2020 lawsuit against Epic. Apple won the great majority of that case, yet it is still embroiled in legal battles over it.
On Monday, May 4, 2026, Apple asked the Supreme Court for a stay on a mandate that saw it required to meet with Epic Games in court to negotiate a new commission rate. According to Reuters, that request has now been denied by Justice Elena Kagan.
Previously, Judge Yvonne Gonzalez Rogers of the District Court had ruled that Apple was in contempt of an injunction that required Apple to end its anti-steering practices. As a result, since April 2025, Apple has been forced to take no commission on these external purchases.
Later, the Circuit Court ruled that Apple was indeed allowed to charge a commission on external purchases, but that it and Epic would have to decide on the rate in court. After some back and forth, Apple must now face the Supreme Court with its appeals and the District Court at the same time.
In the meantime, Apple will continue to take zero money when iPhone users purchase certain extra features by linking out of the App Store to developers’ sites. It hasn’t collected a commission on external purchases since the original injunction violation was filed in April 2025.
Apple vs Epic continues. Image source: Epic Games
Apple had hoped to have the lower-court side of the case paused while it prepared its appeals to the Supreme Court. The company had argued that:
A stay is now needed before Apple is forced to litigate its commission rate under an erroneous and prejudicial contempt label— in proceedings that could reshape the global app market— before this Court can consider whether to grant review.
Even though the stay wasn’t granted, there’s still some hope that Apple could get its appeals through the Supreme Court before the District Court arrives at any decision. If the Supreme Court agrees with Apple’s scope appeal, the company may only have to make changes for Epic and not for all developers.
If the Supreme Court also agrees that using the spirit of the law to call for an injunction violation isn’t allowed, the battle in the District Court won’t be required at all. It all hinges on which court moves faster at this point.
How Apple got here
It’s now six years since Epic Games deliberately provoked Apple into throwing its “Fortnite” game off the App Store, beginning a long legal battle. Although Apple won that battle overall, a single count in the case went in Epic’s favor.
That count concerned how Apple prevented app developers from directing users to alternative ways to pay, such as through special offers on their own websites. Apple was ordered to change this, and would claim that it did.
However, Epic Games has argued that Apple has flouted the spirit of the law. In April 2025, Judge Gonzalez Rogers agreed, and called Apple’s moves a “gross miscalculation” of what the court would accept.
In a world where a viral TikTok video can cause a brand to trend globally in mere hours, the traditional market research cycle — often spanning 12 weeks — is becoming a liability.
The lag between asking a survey question and getting answers from a wide (or targeted) pool of respondents has become a primary bottleneck, industry experts have observed. Fortune 500 decision-makers are forced to navigate volatile geopolitical and economic shifts with data that is frequently outdated by the time it reaches a slide deck.
Brox, a predictive human intelligence startup, recently announced a strategic funding round following a year in which it reported 10X revenue growth. Its proposition is as ambitious as it is technical: a “parallel universe” populated by 60,000 digital twins of real, living human beings, complete with their demographic profiles and consumer preferences, allowing enterprises to run unlimited experiments in hours rather than months.
“These digital twins are one-to-one replicas of actual, real individuals,” said Brox CEO Hamish Brocklebank in a recent video call interview with VentureBeat. “We recruit real people like a normal panel company does, pay them to interview them, and capture all the data around them — fully consent-driven.”
The company, currently a lean 14-person operation, is positioning itself as the antithesis of the “insane” research industry. By replacing statistical models with behavioral replicas, Brox aims to transform how the world’s largest banks and pharmaceutical giants anticipate human reactions to high-stakes global and market-shifting events, or narrow, targeted product releases and personnel news, and everything in between.
The kinds of surveys and specific questions that Brox asks its digital twins are completely open-ended and can be customized to fit any conceivable business customer’s use cases and goals.
According to Brocklebank, examples of survey questions include: “What happens if America invades Iran or Greenland? Will depositors at Bank of America put more money into their account or take more money out? Or, in pharmaceuticals, if RFK Jr. says something next week, will that make people more likely to take vaccines or less likely?”
Not synthetic people — AI copies of real ones
The core differentiator of Brox’s technology lies in the fidelity of its input data.
While many competitors in the “digital audience” space rely on purely synthetic identities — generic personas generated by large language models (LLMs) — Brocklebank argues that these methods inevitably produce “AI slop”.
Purely synthetic audiences often cluster around a tight distribution of answers, over-indexing for “correct” or “healthy” behaviors (such as eating broccoli) because of inherent biases in the underlying models.
Brox’s “Digital Twins” are instead one-to-one behavioral replicas of real individuals who have been recruited and interviewed with exhaustive depth. The process is intensive:
Deep Interviews: The company conducts hours of real and AI-driven interviews with each participant.
Psychological Depth: The data collection seeks to understand fundamental “decision drivers,” including upbringing, relationships, and even marital stability.
Data Density: For some twins, Brox maintains up to 300 pages of text data, representing what Brocklebank calls “the deepest per person data set that exists”.
To solve the “black box” problem common in AI, Brox utilizes a “reasoning chain” for its predictive outputs. When a digital twin predicts a reaction — such as how a $2 billion net-worth individual might respond to a specific interest rate hike — the model introspects and provides a step-by-step explanation for that decision.
This allows clients to understand not just what will happen, but the underlying psychology of why it is happening.
Scaling the “unscalable” interview
The product offering is currently live in the US, UK, Japan, and Turkey. Brox has successfully digitized specific, high-value cohorts that are traditionally difficult for researchers to access.
This includes a panel of “high-net-worth” individuals (those worth over $5 million) and specialized medical professionals like dermatologists; the panel even includes a multibillionaire.
However, the largest value for customers likely lies in the aggregate mass of individuals who can be polled en masse and/or segmented across demographics, especially those at medium and lower income levels, whose purchasing power and decision-making are more constrained.
One of the more unique aspects of the Brox platform is its incentive structure. To ensure twins remain up-to-date, real-world counterparts are re-contacted frequently.
For high-value individuals who are not motivated by small cash payments, Brox has issued Stock Appreciation Rights (SARs), essentially making these participants “investors” in the company’s success to ensure they continue to provide high-fidelity personal updates. The platform’s use cases currently focus on two primary sectors:
Pharmaceuticals: Predicting vaccine hesitancy or how physicians might react to new biologics based on shifting political climates.
Finance: Simulating how depositors at major banks might move funds in response to geopolitical events, such as conflicts in the Middle East.
As for why Brox goes to the trouble of interviewing and digitally cloning real people instead of just creating wholly fictitious, synthetic audience characters and personas using LLMs and other AI models, Brocklebank offered his perspective.
“You can create 10,000 truly synthetic digital twins, but the answers will still normalize into a very tight distribution, which is not realistic when you’re actually asking real people,” Brocklebank said.
By maintaining a pre-built audience of 60,000 twins, the company enables clients to bypass the recruitment phase of research. A large US bank or a global pharma giant can now “query” the digital population and receive a validated analysis in a matter of hours.
Pricing and accessibility
Unlike traditional research firms that charge on a per-project or per-respondent basis, Brox operates as a high-end Software-as-a-Service (SaaS) platform with enterprise-level commercial licensing. The company avoids the “seat” or “usage” limits that often hinder rapid experimentation within large organizations.
Pricing Tiers: Subscriptions are sold as blanket flat fees, starting at a minimum of $100,000 per year.
Top-Tier Contracts: For larger deployments involving multiple teams and global data access, contracts scale up to $1.5 million per year.
Usage Rights: Clients are granted unlimited usage during the contract period. This allows them to run thousands of simulations without worrying about incremental costs, encouraging a culture of “testing everything” before deployment.
From a legal and privacy standpoint, the digital twins are built on a “fully consent-driven” framework. While the twins can be traced back to real human data for internal validation, the platform is designed to provide aggregated behavioral insights that protect the anonymity of the participants while maintaining the predictive power of their digital replicas.
Rejecting the rise of Kalshi, Polymarket and ‘prediction markets’
The tech industry has recently seen a surge in valuations and interest in “prediction markets” like Polymarket and Kalshi, which allow users to bet on the outcomes of various global events.
However, the leadership at Brox maintains a distinct distance from these platforms, citing a “personal disdain” for betting markets from both a moral and intellectual perspective.
Brocklebank argues that while betting markets can predict outcomes (e.g., who wins an election), they offer zero utility for business decision-makers because they fail to provide the “why”.
Knowing there is a 60% chance of a certain candidate winning does not help a company adjust its consumer strategy; knowing why a specific cohort of depositors is feeling anxious does.
Investors including Scribble Ventures, Wonder Ventures, and Vela Partners have backed this “human-first” approach to AI, betting that the moat created by deep human data will prove more resilient than the commoditized models of synthetic data providers.
As Brox prepares for launches in the Middle East and APAC, the company is moving toward its ultimate goal: simulating the entire world as a “parallel universe” for risk-free decision-making.
Lock Noob got his hands on the NPX-002 from Works by Design and wanted to put its security claims to the test to see how well they held up. The lock’s designers created a travelling key system in which the key’s bow spins internal gears and the actual key blade moves into place deep inside the cylinder. The key only seats once it reaches the exact position, at which point the keyway seals off, leaving no room for standard picks or tension tools to reach the pins. To frustrate normal impressioning techniques, a plastic pin was inserted among the brass ones.
He began by taking a close look at the lock, shining a light through the clear anti-tamper cover to see the key within. A quick stream of solvent was sufficient to lift the seal without causing any damage. He pulled out the key, opened the lock once, and then disassembled the cylinder to inspect all of the moving parts. After that, he put everything back together and sealed the case with tape. Bypassing it was more of a curiosity for him, as the actual test would be to pick it honestly.
His next step was to impression the lock by clamping it in a vice and inserting a simple brass blank into the keyhole. He then took an old screwdriver, fitted a bespoke brass bit to it, and applied mild twisting pressure so that the pins left marks on the soft metal without harming the lock body. He rocked the blank back and forth, watching as tiny marks appeared where the pins pressed hardest. Each one told him exactly where to begin filing.
The grind settled into a regular rhythm: file a bit, turn again, examine the fresh marks with a magnifying glass, repeat. Position three refused to make a mark, which he discovered was due to the plastic pin, but he continued nevertheless, deepening the cuts where the brass made contact. He had been turning, filing, and inspecting for two hours before he realized it, the cuts growing deeper in some places and shallower in others, until the blank finally matched the lock’s unknown cut depths.
After two hours, the cylinder gave a gritty turn, and the lock finally opened. The plastic pin had chipped somewhat at the top, but the remainder of the mechanism remained intact. He disassembled the cylinder one last time and saw the damage with his own eyes. A new plastic pin was installed, and the lock was restored to its original functionality.
He attempted one final idea: wrapping aluminum foil around a skeleton key blank. The plan sounded fine on paper: the pins would simply press into the soft foil, leaving quick impressions. In practice, the pins dropped at an angle and hit a ledge before they touched the foil. No useful marks formed, and the foil method stayed on the shelf.
The expansion follows the organisation’s recent authorisation as an electronic money institution by the Central Bank of Ireland, providing a strategic gateway to support SMEs across the continent.
Hong Kong fintech Currenxie, which recently established an Ireland-based team, has announced plans to create 30 jobs in Dublin, which is the company’s European base of operations. The new roles will be in areas such as technology, operations, compliance, finance and client services.
The 30 jobs will be filled in the next two years and the expansion follows the company’s recent authorisation as an electronic money institution (EMI) by the Central Bank of Ireland, which Currenxie said has provided the company with a strategic gateway to support established SMEs across the continent.
Currenxie also stated that the move provides European finance leaders with the local payout and collection infrastructure needed to navigate complex global trade corridors, particularly between Europe and the Asia-Pacific region.
Commenting on the announcement, the Minister for Enterprise, Tourism and Employment Peter Burke, TD said: “I welcome Currenxie’s decision to establish its European operations in Dublin and their plans to create high quality jobs over the next two years.
“The company’s decision reflects confidence in Ireland as a stable, innovative and well-regulated location from which to serve European markets.”
Michael Lohan, the CEO of IDA Ireland added: “We are delighted to welcome Currenxie as it establishes itself in Ireland. This announcement reinforces Ireland’s position as a leading location for international financial services and fintech companies, offering a highly skilled talent pool, a strong and well-regulated financial ecosystem, and direct access to customers across Europe.”
2026 has seen a positive uptick in investment in Ireland’s fintech ecosystem. In April, Dublin-based start-up Audrey AI announced the close of a $1.8m pre-seed funding round. In early February, Dublin-based fintech company Circit secured $22m in growth equity funding to further scale its financial auditing and verification platform.
Don’t miss out on the knowledge you need to succeed. Sign up for the Daily Brief, Silicon Republic’s digest of need-to-know sci-tech news.
As artificial intelligence continues to advance, people are finding creative and useful ways to use it. One possibility: getting a good deal on a car. From finding a vehicle to negotiating with the dealer, AI could handle every step of the process, saving you time — and hopefully money. For example, ex-Apple engineer Mustafa Khan has created Negoshify, which can be downloaded on Claude or used as an app in ChatGPT. You can ask Negoshify to find a model around a certain price point in your area. It will show you available cars that match your inquiry within Negoshify’s dealership network, put down a refundable $500 reserve on the car, and even negotiate with a dealer to get you a better price.
Some car buyers simply chat directly with ChatGPT — no additional apps needed — to make smart purchases. You can ask ChatGPT to evaluate the cars you’re considering to find out which one is the right choice for you. ChatGPT can also coach you through the negotiation process and give you advice on how to respond to dealers. One Reddit user said she ran her messages through ChatGPT, and it told her how to reword them after it decided she was losing too much leverage. “I normally can’t negotiate to save my life, so this was amazing for me,” she said.
Should you be using ChatGPT to negotiate car deals for you?
Some buyers using ChatGPT have been impressed by its ability to gather information ahead of a negotiation, analyze dealer emails, and make sense of documents. There is no denying that an AI sidekick getting you a hassle-free deal on a car is a neat concept, but the technology may not be quite where it needs to be. There are a lot of things to remember if you plan to run your negotiation correspondence through ChatGPT.
First, it can sometimes be biased depending on how you shape your prompts, or even make up things completely. “If the model’s training data has incomplete, conflicting, or insufficient information for a given query, it could generate plausible but incorrect information to ‘fill in’ the gaps,” a software engineer told TechRadar. Any feedback you get from ChatGPT should still be researched and verified before using it in a negotiation.
Another thing to consider is the biases that dealers may have towards AI. You may not get a response to your inquiry if they can tell it’s been generated by an AI program. There are some common ChatGPT phrases and styling that may be a giveaway, so steer clear of simply copying and pasting what ChatGPT tells you to say and instead rewrite it in your own words. Or maybe just stick to doing your own negotiations.
Anthropic and Elon Musk’s SpaceX said on Wednesday that the two entities have signed an agreement for Anthropic to use computing resources from xAI’s data center in Memphis, Tennessee. It’s the latest tie-up in an industry that is scrambling to find enough computing capacity to run complex AI software. SpaceX and xAI were previously separate companies, but the two merged earlier this year. The combined entity, also owned by Musk, is called SpaceXAI.
Anthropic executives made the announcement on stage at the company’s annual developer conference in San Francisco. SpaceXAI also put out a blog post sharing more details about the deal, which will see Anthropic draw power from xAI’s Colossus 1 supercomputer.
The partnership comes at a pivotal time for SpaceXAI, which is seeking to go public as soon as next month. A relationship with a leading AI lab could bolster SpaceX’s credibility as it pitches investors on the potential gold mine in establishing more data centers, including in space.
Notably, SpaceXAI said in its blog post that Anthropic has “expressed interest” in partnering to develop “orbital AI compute capacity”—essentially, data centers in space. Having Anthropic as a potential customer could help SpaceXAI boost investor confidence that there will be buyers for its hugely expensive supercomputing project.
Earlier this year, Musk criticized Anthropic’s AI models in a post on X, calling the company’s policies “misanthropic” and “evil,” and alleging without evidence that the AI models showed racial and sexual biases.
Now, he’s significantly changed his tune, writing on X that he “spent a lot of time last week with senior members of the Anthropic team to understand what they do to ensure Claude is good for humanity and was impressed.”
Musk’s SpaceXAI first broke ground on Colossus 1 on a former Electrolux site in 2024. The company claims that it is one of the largest and fastest AI supercomputers in the world. It includes roughly 220,000 Nvidia GPUs, ranging from Nvidia’s H100 and H200 to its newer GB200 chips. The company likes to boast that it built the supercomputer in 122 days.
Emissions from gas turbines at the data center have prompted numerous complaints from local residents. Last month, environmental protestors demonstrated at a nearby SpaceX investor gathering ahead of the IPO.
Anthropic plans to use its newfound computing power to “directly improve capacity for Claude Pro and Claude Max subscribers,” both companies said. The deal will give Anthropic access to more than 300 megawatts of new capacity, or approximately 220,000 Nvidia GPUs. In recent months, software developers have increasingly complained about rate limits and service disruptions to Claude Code, due to the physical limits of available computing resources.
As more and more people use services like OpenAI’s Codex and Anthropic’s Claude Code—sometimes running coding programs and agentic tasks for hours on end—the services are being bogged down. According to Anthropic, the average developer is now spending at least 20 hours per week running Claude Code.
Yesterday, The Information reported that Anthropic had committed to spend $200 billion on Google’s AI cloud services and TPU chips. Anthropic has also used Amazon’s cloud computing services and AI chips since 2023. Last month, Anthropic committed “more than $100 billion over the next ten years to [Amazon] technologies.” According to The Information’s report, contracts with Anthropic and OpenAI now account for “more than half of the $2 trillion in backlogs at major cloud providers” like Amazon, Microsoft, and Google.
Paresh Dave and Zoe Schiffer contributed to this report.
Mac minis and Mac Studios now available in fewer configurations
AI and the associated memory shortage is to blame
With AI data center demand sucking up the world’s supply of RAM, consumers are feeling the effects: having already removed some configurations of the Mac Studio and Mac mini from its store last month, Apple has now reduced the available options even further.
As spotted by MacRumors, you can no longer buy Mac mini models with 32GB or 64GB of RAM, while the M3 Ultra Mac Studio with 256GB of RAM has also been taken off sale — so right now that particular computer is only available with 96GB of RAM.
Both the M3 Mac Studio and the M4 Max Mac Studio, meanwhile, are showing delivery estimates of 9-10 weeks. Even if the configuration you want can be purchased through the store, you might be waiting a long while for it.
As for Mac minis, you’re left with a 48GB of RAM option for the M4 Pro model, and 16GB or 24GB for the standard M4 version. The 256GB SSD storage option has been removed in recent weeks, too, raising the starting price.
‘Supply demand balance’
Apple boss Tim Cook has admitted constraints on supply (Image credit: Getty Images)
Outgoing Apple CEO Tim Cook has gone on record as saying “the Mac mini and Mac Studio may take several months to reach supply demand balance” — and Cook specifically mentioned AI and agentic tools as reasons why these computers are so in demand.
These Macs are being squeezed in two ways: not only are they ideal for running AI models and software, which increases their popularity, but that same demand for AI processing power is also significantly reducing stocks of memory to go inside these computers.
And there’s no sign of the situation getting any better in the short term. The biggest players in the business have been warning that it’s going to take a while before supply can catch up, which isn’t encouraging for availability and pricing going forward.
Reaction online has been understandably negative: “I’ve come to despise AI,” admits one Redditor, while other commenters want to see more done to limit the number of machines that can be bought at once, since bulk purchases are most commonly made for large, complex AI projects.
Last week the Trump FCC quietly announced that it was cooking up a new ban on any labs that have testing offices in China from testing electronic devices such as smartphones, cameras and computers for sale in the United States.
That’s going to create some major issues given that roughly 75% of all U.S.-bound electronics are currently tested in Chinese facilities. Many of these operations are owned by U.S. or European companies that have testing facilities in China because that’s where the lion’s share of technology is manufactured, so it’s simply more efficient to test evolving iterations of new products there.
That these companies have offices in China doesn’t inherently mean the testing labs are somehow all magically compromised and in dutiful service to the Chinese government, though that’s certainly the implication the xenophobic Trump administration is making (and has made before in previous, similar announcements).
One major problem outside of the raw logistics of it all: Carr’s planned cybersecurity fix would be significantly more expensive, driving up costs for everyone:
“27 of the affected facilities are Chinese subsidiaries of major Western testing firms, including Intertek, SGS, TUV Rheinland, and Bureau Veritas. Those companies operate labs in the U.S., Europe, and Taiwan that can absorb redirected work, but the shift won’t be seamless. Basic FCC certification testing runs between $400 and $1,300 at Chinese labs, compared with $3,000 to $4,000 at U.S. equivalents.”
Who is going to eat the difference in those costs? You are, of course. In addition to the higher costs from the AI boom, the tariffs, and Trump’s pointless war in Iran. Whatever companies lobbied Carr and Trump will do great. You probably won’t.
Given the terrible nature of smart IoT home security standards (more a byproduct of unregulated crony capitalism than China-based testing locations), having a more direct line of control over the testing of U.S.-bound hardware makes superficial sense.
But then you have to remember that this is Brendan Carr, who does nothing authentically in the public interest, and is likely just looking to drive more business to a handful of U.S. companies that lobbied for his attention. And you have to remember that these folks, as you saw when they talked about shifting smartphone production to the States, don’t actually know what the fuck they’re doing.
The other major problem: Trump and Carr’s rabid deregulatory, anti-governance zealotry on other fronts has repeatedly worked to undermine U.S. cybersecurity, making these sorts of fixes leaky and highly performative, even if they were to be successful (which they won’t be).
The Trump administration’s stacked courts are also making it extremely difficult to hold telecoms accountable for literally anything (see the Fifth Circuit’s recent reversal of a fine against AT&T for spying on customer movement), which also undermines consumer privacy and national security, and ensures zero real repercussions for companies that fail to secure their networks and sensitive data.
So, with one hand you have Carr claiming he’s “fixing cybersecurity” with stuff like this or his recent foreign router “ban” (which as we’ve noted is really a lazy extortion scheme), while with the other he’s doing everything in his power to ensure that domestic telecoms don’t really have anything even vaguely resembling meaningful privacy and security oversight.
Here’s where I’ll remind you that because the U.S. is too corrupt to pass even a basic modern privacy law, we also have a vast and largely unregulated data broker industry that hoovers up your every movement and online habit, then sells access to it to any random asshole (including foreign and domestic government intelligence agencies).
Here too, weird zealots like Trump and Carr have rolled back efforts to regulate data brokers or do anything about it. As authoritarian racists, they’re too blinded by personal self-enrichment and racism to have any genuine understanding of how any of this stuff actually works.
As with the TikTok “ban” (which basically involved shoveling ownership to Trump’s billionaire buddies), so much of this is heavily xenophobic, nationalistic, transactional, self-serving, and performatively detached from any actual reality. By the time the check comes due, guys like Carr and Trump will already be off to the next grift.