TL;DR
Rocket Lab’s Q1 revenue grew 64 per cent to a record 200 million dollars, its backlog reached 2.2 billion, and its stock hit a record high. The company sold more launches in the first quarter of 2026 than in the entire previous year. The only thing that has not launched is Neutron, the rocket the valuation depends on.
First-quarter revenue was 200.3 million dollars, up from 122.6 million a year earlier, beating analyst estimates that had already been raised twice in the past three months. Space systems, the division that builds satellites and spacecraft components, generated 136.7 million dollars. The launch business contributed 63.7 million. Both exceeded expectations. The stock rose 30 per cent in after-hours trading to a record high, valuing the company at approximately 45 billion dollars.
The financial results showed a company accelerating across every segment. Gross margin reached 38.2 per cent, up from the low thirties a year ago. The net loss narrowed to 45 million dollars from 60.6 million in the first quarter of 2025. Adjusted EBITDA loss was 11.8 million, a figure that suggests profitability is within reach if the revenue trajectory holds.
Rocket Lab signed 31 new Electron and HASTE launch contracts in the quarter, plus five contracts for Neutron, its medium-lift rocket that has not yet flown. The company announced its largest launch deal in history, a bulk purchase of Neutron and Electron flights from an undisclosed customer whose identity and order size the company declined to reveal.
The same day, Rocket Lab disclosed a 30 million dollar contract from Anduril Industries for three HASTE hypersonic test flights from its Virginia launch complex. The HASTE vehicle, a suborbital variant of Electron, serves as a testbed for hypersonic technologies at speeds exceeding Mach 5. Anduril is funding the flights with its own capital, not government money, a distinction that signals private-sector demand for hypersonic testing infrastructure that previously existed only within government programmes.
The 2.2 billion dollar backlog is the number that explains why investors added 10 billion dollars to the company’s market capitalisation in a single evening. A year ago, Rocket Lab’s backlog was approximately 1.1 billion. It has doubled in twelve months. The largest component is an 816 million dollar prime contract to build a missile defence constellation for the Space Development Agency, the satellite procurement arm of the Space Force.
Second-quarter guidance of 225 to 240 million dollars in revenue exceeded Wall Street’s estimate of 205 million by a margin wide enough to suggest that analysts had not fully accounted for the acceleration. CEO Peter Beck said the pipeline supports continued growth into the second half and beyond.
The company’s customer base spans government and commercial clients. It launches satellites for the National Reconnaissance Office, NASA, the Space Force, and allied militaries. It builds spacecraft components for constellations operated by companies including GlobalStar. It is developing the SDA’s Tranche 2 Transport Layer satellites. The breadth of the business is the argument for the valuation: Rocket Lab is not just a launch company, it is a vertically integrated space infrastructure provider.
Neutron is the medium-lift launch vehicle on which Rocket Lab’s ambitions depend. It is designed to carry 13,000 kilograms to low Earth orbit in a reusable configuration and 15,000 kilograms expendable. It is intended to compete for the constellation deployment, national security, and deep space missions that are currently served almost exclusively by SpaceX’s Falcon 9.
The rocket has not flown. Beck said first-flight hardware integration is underway, Archimedes engine qualification is progressing, and the second stage and reusable fairing systems are advancing. The debut launch is targeted for later this year. Rocket Lab has said “later this year” about Neutron before. The original target was late 2024. It slipped to mid-2025, then to 2026. Each delay has been accompanied by plausible technical explanations and continued investor patience.
The patience is partly justified by Electron’s track record. Dawn Aerospace, the New Zealand spaceplane company, has demonstrated that small nations can produce credible launch vehicles, but Rocket Lab has gone further than any non-American, non-SpaceX company in building a commercially successful orbital launch business. Electron has completed more than 60 missions with a success rate exceeding 95 per cent. It is the most frequently launched orbital small rocket in the world. The question is whether the engineering discipline that made Electron reliable can scale to a vehicle ten times larger.
SpaceX dominates the launch market with a cadence and cost structure that no competitor has matched. Falcon 9 launched more than 100 times in 2025. Rocket Lab launched 21 times. The gap is enormous. But the gap in market positioning is narrower than the gap in launch frequency suggests.
SpaceX’s backlog is dominated by its own Starlink constellation. Rocket Lab’s 2.2 billion dollar backlog is almost entirely third-party customers. The distinction matters because it means Rocket Lab’s revenue is diversified across dozens of government and commercial clients, while SpaceX’s launch revenue is heavily self-referential. For customers who want an alternative to SpaceX, or who need a launch provider that is not controlled by Elon Musk, Rocket Lab is increasingly the answer.
The race to put data centres, communications networks, and surveillance constellations in orbit is driving demand for launch capacity that exceeds what any single provider can supply. NATO is backing space and AI startups, the Space Development Agency is building a proliferated constellation architecture that requires hundreds of satellites, and commercial operators are scaling their own networks. The launch market is not zero-sum. There is more demand than there are rockets to serve it.
Peter Beck drew Rocket Lab’s logo on a napkin on a flight back to New Zealand in 2006. He had skipped university, taken an apprenticeship at a tools manufacturer, built a steam-powered rocket bicycle, and decided he was going to start a launch company. Twenty years later, the company he founded has a 45 billion dollar market capitalisation, a 2.2 billion dollar backlog, and contracts with the most sensitive national security programmes in the United States.
European defence technology alliances are forming between AI companies and military contractors, but Rocket Lab has built something rarer: a non-American company that the American defence establishment trusts with its most classified satellite programmes. The SDA constellation contract, the NRO missions, and the Anduril hypersonic flights all require security clearances and operational trust that take years to establish.
The stock’s 30 per cent surge reflects a market that believes the backlog will convert to revenue, the Neutron delays will end, and the defence and commercial pipelines will sustain growth rates above 50 per cent. Beck has delivered on every commitment except the one that matters most. Neutron’s first flight will determine whether Rocket Lab is a successful small-launch company with a large valuation or a full-spectrum space company that justifies one. The backlog says the customers are ready. The question is whether the rocket is.
Dario Amodei is not the kind of CEO who talks loosely about numbers. The Anthropic co-founder and chief executive, a former VP of research at OpenAI with a PhD in computational neuroscience from Princeton, has built a reputation for measured public statements — particularly around the financial performance of a company that, until recently, disclosed almost nothing about its business.
So when Amodei took the stage at Anthropic’s Code with Claude developer conference on Wednesday and offered a genuinely striking piece of financial candor, the room paid attention.
“We tried to plan very well for a world of 10x growth per year,” Amodei said during a fireside chat with Anthropic’s chief product officer, Ami Vora. “And yet we saw 80x. And so that is the reason we have had difficulties with compute.”
Anthropic had planned for tenfold growth. But revenue and usage increased 80-fold in the first quarter on an annualized basis, a rate Amodei described as “just crazy” and “too hard to handle.”
The number demands context. Annualized growth rates can overstate sustained performance — a single strong quarter, extrapolated across a full year, can paint a picture that doesn’t hold. Amodei knows this. But the underlying trajectory is not a mirage. Anthropic has crossed a $30 billion annualized revenue run rate, up sharply from roughly $9 billion at the end of 2025, and that growth is being driven largely by enterprise demand. The company’s revenue trajectory has been relentless: $87 million run rate in January 2024, $1 billion by December 2024, $9 billion by end of 2025, $14 billion in February 2026, $19 billion in March, and $30 billion in April.
For context: Salesforce took about 20 years to reach $30 billion in annual revenue. Anthropic did it in under three years from a standing start.
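The annualization caveat is worth making concrete. The sketch below uses the run-rate milestones quoted above (USD billions); the three-times-per-quarter multiple is an illustration consistent with the quoted 80x, not a disclosed Anthropic figure:

```python
def annualized_run_rate(monthly_revenue):
    """Extrapolate a single month's revenue to a full year."""
    return monthly_revenue * 12

# Run-rate milestones quoted in the article, in USD billions (annualized).
run_rates = {"2025-12": 9, "2026-02": 14, "2026-03": 19, "2026-04": 30}

# A quarter in which usage roughly triples annualizes to about 80x,
# because the quarterly multiple compounds across four quarters.
quarterly_multiple = 3.0
annualized_multiple = quarterly_multiple ** 4   # 3^4 = 81, i.e. "~80x"

# The same compounding is why a single strong quarter, extrapolated,
# can overstate sustained performance: the 9-to-30 jump from December
# to April implies an even larger annual multiple if naively compounded.
implied = (run_rates["2026-04"] / run_rates["2025-12"]) ** 4
```

This is exactly the dynamic the article flags: the extrapolated multiple is extremely sensitive to which base month and window you pick.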
The growth story at Anthropic is, to a remarkable degree, a single-product story. Claude Code, the company’s agentic AI coding tool launched publicly in mid-2025, has become the fastest-growing product in the company’s history — and, by several measures, one of the fastest-growing software products ever built.
Claude Code hit $1 billion in annualized revenue within six months of launch, and the growth hasn’t slowed down. By February 2026, the product was generating over $2.5 billion in run-rate revenue. The company also said Claude Code’s weekly active users had doubled since January 1 and that business subscriptions had quadrupled since the start of 2026.
The mechanics of the product are straightforward. Claude Code is not a chatbot that suggests snippets. It reads a codebase, plans a sequence of actions, executes them using real development tools, evaluates the result, and adjusts its approach. The developer sets the objective and retains control over what gets committed, but the execution loop runs independently. The average developer using Claude Code now spends 20 hours per week working with the tool.
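That plan-execute-evaluate cycle is the core of any agentic coding tool. The toy sketch below is purely illustrative: the function names, the hard-coded “patch,” and the trivial test runner are hypothetical stand-ins, since Claude Code’s actual internals are not public.

```python
def run_tests(source):
    """Stand-in for a real test suite: passes when add() is correct."""
    namespace = {}
    exec(source, namespace)
    return namespace["add"](2, 3) == 5

def agentic_loop(source, max_steps=5):
    """Toy plan-execute-evaluate loop: evaluate, then patch and retry."""
    for step in range(max_steps):
        if run_tests(source):          # evaluate: objective met?
            return source, step        # developer reviews before commit
        # Plan + execute: a real agent would ask the model for a patch;
        # this hard-coded fix only demonstrates the loop's shape.
        source = source.replace("a - b", "a + b")
    return source, max_steps

buggy = "def add(a, b):\n    return a - b\n"
fixed, steps = agentic_loop(buggy)
```

The point of the structure is that the loop, not the human, drives the iteration; the developer sets the objective (“make the tests pass”) and inspects the result.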
At Anthropic itself, the majority of code is now written by Claude Code. Engineers focus on architecture, product thinking, and continuous orchestration: managing multiple agents in parallel, giving direction, and making the decisions that shape what gets built.
That last point may be the most revealing detail Amodei disclosed at the conference: this is the first year Anthropic’s own internal pull requests have inflected upward due to Claude’s work on the company’s own codebase. The tool that Anthropic sells to developers is now a material contributor to Anthropic’s own engineering output. That creates a feedback loop that is almost impossible for competitors without a comparable product to replicate — the company is using its own product to build the next version of its own product.
The enterprise numbers tell the same story. The company now counts over 1,000 enterprise customers spending more than $1 million per year on Claude services, a figure that has doubled since February. Much of this increase has been fueled by a wave of corporate customers including Uber and Netflix.
Amodei framed the adoption curve in economic terms. “Software engineers are the ones who are fastest to adopt new technology,” he said on stage. “It’s a foreshadowing of how things are going to work across the economy, and how the economy is going to be transformed by AI.”
Hypergrowth creates its own category of problem. When demand outstrips supply by an order of magnitude, the constraint is not go-to-market strategy or product-market fit. The constraint is physics.
The company is growing so fast that its infrastructure has struggled to keep up, forcing Anthropic into what may be the most unexpected partnership in the current AI cycle. Amodei’s comments came hours after Anthropic announced a deal with Elon Musk’s xAI to use all of the compute capacity at that company’s Colossus 1 data center in Memphis, Tennessee. As part of the agreement, Anthropic will get access to more than 300 megawatts of capacity — over 220,000 Nvidia GPUs, including dense deployments of H100, H200, and next-generation GB200 accelerators.
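Those two capacity figures are roughly self-consistent, which is worth a quick sanity check (the per-GPU overhead split below is an inference, not a disclosed figure):

```python
megawatts = 300
gpus = 220_000

# All-in facility power per GPU, including CPUs, networking, and cooling.
watts_per_gpu = megawatts * 1_000_000 / gpus
print(round(watts_per_gpu))  # ≈ 1364 W per accelerator

# An H100 SXM board alone is rated around 700 W, so an all-in budget of
# roughly 1.4 kW per accelerator is a plausible facility-level number.
```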
The deal is remarkable for several reasons. Musk has been, until very recently, one of Anthropic’s most vocal critics. He has said Anthropic is “doomed to become the opposite of its name” and wrote in February that “Anthropic hates Western Civilization.” But on Wednesday, Musk changed his tune, saying he spent a lot of time with senior members of the Anthropic team over the past week and that he was “impressed.” “Everyone I met was highly competent and cared a great deal about doing the right thing. No one set off my evil detector,” Musk wrote.
The strategic logic on both sides is clear. xAI’s Colossus 1 ended up with capacity that Grok’s user base never grew into, while Anthropic needs compute immediately. Anthropic has been signing deals with Amazon, Google, Nvidia, and Microsoft for more compute capacity, but most of that isn’t expected to come online until late 2026 or early 2027. The SpaceX deal gives Anthropic a significant boost now — the key word being “now.”
As one industry watcher summarized the alignment: “Elon’s enemy is Sam. Dario’s enemy is Sam. Enemy of my enemy is a compute partner.”
Last month, Anthropic said demand for Claude has led to “inevitable strain on our infrastructure,” which has impacted “reliability and performance” for its users, particularly during peak hours. The company admitted in a postmortem from late April that three bugs had affected Claude Code since March 4, and that internal tests hadn’t caught them, leading to several weeks of degraded performance. Amodei said at the Code with Claude conference that the company is “working as quickly as possible to provide more” capacity and will “pass that compute on to you as soon as we can.”
The growth figures arrive at a moment when Anthropic’s valuation is itself becoming one of the defining financial stories of the AI era.
Anthropic has begun weighing a fresh funding round that would value the company at more than $900 billion, according to people familiar with the matter, potentially leapfrogging its longtime rival OpenAI as the world’s most valuable AI startup. The velocity of the escalation is difficult to overstate. From $61.5 billion in March 2025, to $183 billion by its Series F in September, to $380 billion in February, to, if the current discussions proceed, more than $900 billion in May. Anthropic’s shares were already trading at an implied $1 trillion valuation on secondary markets earlier this month.
Instead of cashing out, many existing investors are waiting to potentially exit during Anthropic’s anticipated IPO later this year. The company is raising what is likely to be its last private round before going public to fund its massive computing needs. Bloomberg has reported that the company is weighing an IPO as early as October 2026, with Goldman Sachs, JPMorgan, and Morgan Stanley already in early discussions.
Anthropic is also building out infrastructure on longer time horizons. Amazon has agreed to invest up to $25 billion in Anthropic, securing up to 5 gigawatts of compute capacity for training and deploying Claude models. Anthropic also secured 5 gigawatts of computing capacity as part of a separate deal with Google and Broadcom that will start to come online next year. The total commitment is staggering — tens of gigawatts of compute across three separate hardware ecosystems: Amazon’s Trainium chips, Google’s TPUs via Broadcom, and Nvidia GPUs through xAI and Microsoft Azure.
For perspective: Anthropic’s $30 billion run rate exceeds the trailing twelve-month revenues of all but approximately 130 S&P 500 companies. A company that was essentially pre-revenue in early 2024 now out-earns most of the Fortune 500.
That comparison comes with caveats. Private-market revenue run rate is not the same thing as audited GAAP revenue, gross margin, free cash flow, or public float. OpenAI has internally argued that Anthropic’s $30 billion figure is overstated by roughly $8 billion, pointing to questions about whether revenues from AWS and Google Cloud should be reported at gross value or net of the partner’s cut. The accounting question will ultimately be resolved when both companies file IPO prospectuses — but even on a net basis, Anthropic’s growth rate is unlike anything in enterprise software history.
The financial story — 80x growth, a near-trillion-dollar valuation, a scramble to secure enough GPUs to meet demand — is dramatic on its own terms. But Amodei used his time on stage to place it inside a larger thesis about where AI is headed.
He described a progression from single agents to multiple agents to what he called whole organizational intelligence — from “a team of smart people in a room” to “a country of geniuses in the data center.” The framing is deliberately expansive. What Anthropic is selling today is a coding tool. What Amodei is describing is a future in which entire categories of knowledge work are performed by fleets of AI agents operating in parallel, supervised by humans who define objectives and review outputs.
He reiterated a prediction he made roughly a year ago: that 2026 would see the first billion-dollar company run entirely by a single person. “Hasn’t quite happened yet,” he said. “But we’ve got seven more months.”
The company has also been navigating political headwinds. The Pentagon declared Anthropic a supply chain risk in March, blacklisting it from work with the military. The company has warned the designation could result in billions in lost revenue, with over one hundred enterprise customers reportedly expressing doubts about continuing their relationships.
And yet, as that dispute works its way through the legal system, Anthropic is only getting more popular. Amodei said this week that he is eventually hoping for “more normal” expansion.
There is a temptation, when covering a company growing at this rate, to let the numbers speak for themselves. They shouldn’t. Growth at 80x annualized is not a business plan — it’s an emergency. It means demand has outrun infrastructure, that customers want something the company cannot yet reliably deliver at scale, and that every week of constrained capacity is a week during which competitors can close the gap.
The investors funding Anthropic — including SoftBank, Amazon, Nvidia, Google, a16z, Lightspeed, and ICONIQ — are making a specific bet: that compute costs continue to fall per unit of intelligence, that revenue keeps compounding faster than burn, and that whoever owns the AI infrastructure layer in 2029 will generate returns that make the interim losses irrelevant.
Amodei’s candor at Code with Claude was not a victory lap. It was a diagnostic — an admission that his company is running faster than it can steer. He planned for a world of 10x growth and got 80x instead. Now he has seven months to prove that the infrastructure, the organization, and the vision can catch up to the demand. The country of geniuses in the data center is getting crowded. The question is whether anyone remembered to build enough rooms.
According to insider sources, Microsoft engineers are working on a new feature called “Low Latency Profile” (LLP) aimed at improving Windows 11’s performance in certain critical, system-wide tasks. The change is already present in recent preview builds distributed to Windows Insider participants, meaning enthusiast users can enable and test it…
The attack on the Trellix source code repository disclosed last week has been claimed by the RansomHouse threat group, which leaked a small set of images as proof of the intrusion.
Yesterday, the threat actor published screenshots on its data leak site indicating access to the cybersecurity company’s appliance management system. However, BleepingComputer could not confirm the authenticity of the data.
Trellix is an international cybersecurity firm with global Fortune 100 customers. In 2025, the company had more than 53,000 customers in 185 countries and 3,500 employees.
The company confirmed the breach in a statement on May 1st and said that it was investigating the incident. “Trellix recently identified unauthorized access to a portion of our source code repository. Upon learning of this matter, we immediately began working with leading forensic experts to resolve it,” stated Trellix.
“We have also notified law enforcement. Based on our investigation to date, we have found no evidence that our source code release or distribution process was affected, or that our source code has been exploited.”
At the time, BleepingComputer’s request for details went unanswered, and the company did not disclose any information about the perpetrators.
Following a new request for comments after RansomHouse’s disclosure, Trellix told BleepingComputer that it was “aware of claims of responsibility for the attack and are looking into it.”
According to the threat actor, the intrusion occurred on April 17 and resulted in data encryption.

RansomHouse is a cybercrime group that launched in 2022 as a data-extortion operation, listing victims on a dark web portal and leaking or selling data stolen from their corporate networks.
Over time, the threat actor added more advanced encryption utilities to their toolkit, such as ‘Mario,’ which performs a dual-encryption pass with two keys on target files, and ‘MrAgent,’ which automates the deployment of encryptors on VMware ESXi hypervisors.
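To make the “dual-encryption pass with two keys” idea concrete, here is a toy two-pass construction. The XOR keystream below is deliberately simplistic and insecure, and nothing about it reflects ‘Mario’s’ actual, undisclosed cipher; it only illustrates why a victim must recover both keys rather than one.

```python
import hashlib
import os

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudo-random byte stream from a key (toy construction, NOT secure)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def dual_encrypt(data: bytes, key1: bytes, key2: bytes) -> bytes:
    """Two independent passes: recovering the file requires BOTH keys."""
    pass1 = bytes(a ^ b for a, b in zip(data, keystream(key1, len(data))))
    return bytes(a ^ b for a, b in zip(pass1, keystream(key2, len(data))))

data = b"quarterly financials"
k1, k2 = os.urandom(16), os.urandom(16)
ciphertext = dual_encrypt(data, k1, k2)

# XOR passes are symmetric, so applying both passes again restores the
# plaintext; with only one correct key the output stays scrambled.
assert dual_encrypt(ciphertext, k1, k2) == data
```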
A recent high-profile case involving RansomHouse was that of Japanese e-commerce giant Askul Corporation, from which the threat group stole 740,000 customer records, among other sensitive information.
Trellix’s investigation is still underway, and the company previously promised to share more details once they become available.
EVERETT, Wash. — With just three years left on a hard deadline to prove its fusion approach works, Helion Energy is still wrestling with fundamental questions — and it’s building a new, smaller machine to help find answers faster.
Since launching more than a decade ago, Helion has built increasingly larger prototype devices to test and refine its fusion technology as it races to deliver a source of nearly limitless clean energy. But by 2028, Helion is contractually obligated to have a commercial facility producing energy from fusion reactions, essentially replicating the physics that power the sun.
So now it’s going small.
The company is building a downsized testbed device called “Tiny Merge,” a machine less than one-eighth the size of Polaris, its seventh-generation and final prototype. The decision reflects the reality that key issues remain that Helion’s larger, more expensive prototypes haven’t fully resolved. These concerns must be addressed before final designs for a power plant can be locked in.
“With this agile testbed, we will be able to test new ideas with much less energy and far fewer resource requirements, meaning we can iterate faster than we can on full-scale machines such as Polaris,” said Michael Hua, Helion’s senior director of radiation safety and nuclear science.
GeekWire got a sneak peek at Tiny Merge during a recent tour of the company’s sprawling R&D facility north of Seattle. Behind massive curtains in a cordoned-off section of the building sits the gleaming, tubular fusion device measuring roughly 8 feet long.
Running parallel to the machine are two rows of tall shelving — heavy-duty versions of what you’d find at a home improvement store — that will eventually hold hundreds of mini-fridge-sized capacitors to store power flowing into and out of the device. Helion plans to have Tiny Merge up and running by the end of the summer, leaving roughly two years to incorporate what it learns into final designs.
The stakes couldn’t be higher. Over in Eastern Washington, Helion has broken ground on Orion, a facility that it hopes will be the first to produce fusion energy at a commercial scale. It’s a feat no one has yet accomplished, though more than 45 companies are trying.
Helion has made the sector’s most aggressive timeline commitment through a deal with Microsoft to supply electricity from Orion for a data center development starting in 2028. Miss that deadline, and Helion faces financial penalties from Microsoft and partner Constellation.
The company is counting on Tiny Merge to help make that big bet pay off.
Fusion works by heating matter and compressing it into a plasma, a superheated state in which atoms are stripped of their electrons. In those extreme conditions, atomic nuclei collide, fuse and release energy. The process holds enormous promise for abundant clean power, but achieving it at scale remains a formidable scientific challenge.
The team’s first tests with Tiny Merge will focus on the formation and merging of plasma rings, said Manav Singh, Helion’s director of electrical engineering. The company has researched this with previous prototypes, Singh said, but new results have prompted further questions. “There’s a few much more deep investigations we want to do,” he added.
Helion and the broader fusion industry have made measurable progress in recent years, with devices hitting new records in temperature and pressure. Companies have poured significant funding into the pursuit, with Helion alone raising more than $1 billion from investors including OpenAI CEO Sam Altman.
But plenty of skeptics remain, arguing that grid-scale fusion energy is still many years away — if it ever arrives.
While Google is helping Apple upgrade its AI, the search giant may have taken a little too much of a liking to the Apple Intelligence name. A new leak shared by Mysticleaks on Telegram seems to show “Gemini Intelligence” inside Google’s software running on what looks like a Pixel smartphone.
For now, it is best to take the leak with a grain of salt until there is something more concrete. But if the video is accurate, Google could be preparing the feature for the Pixel 11 series, which is expected to launch around August 2026.
The irony is almost too rich. Apple Intelligence is Apple’s big bet on making Siri smarter, more personal, and actually useful in the AI age. And yet Apple has signed a multi-year partnership with Google to power next-gen Siri with Gemini models. So Google may simultaneously be fueling Apple Intelligence and launching Gemini Intelligence. That is either very efficient or very silly branding.

Google has already started expanding Gemini’s Personal Intelligence features. These allow Gemini to connect with apps like Gmail, Google Photos, YouTube, and Search to answer questions with a user’s own context. Instead of asking a generic chatbot for help, users can ask for information tied to their emails, photos, saved details, and activity across Google services.
Pixel phones have long been Google’s test bed for AI features, including call screening and AI-powered photo editing tools. If “Gemini Intelligence” is real, Pixel 11 would be the natural place to introduce it as a deeply integrated, phone-level AI layer. We just hope that the name gets a second pass. Assuming, of course, that there’s a name to pass on at all.
Started more than 50 years ago, data storage company Western Digital is one of the world’s largest hard disk drive manufacturers, and it also produces solid-state drives (SSDs) and flash memory devices. Western Digital makes all the essentials for home office and business digital storage. Whether you want to back up to cloud storage, carry a presentation to your next important meeting on a USB flash drive, or upgrade your home security surveillance storage, Western Digital has what you need, and we have promo codes to help you save.
One of the biggest issues of our modern life is how to responsibly recycle e-waste. That’s why Western Digital makes it easier to recycle your old, broken, or defunct electronics. With Western Digital’s Easy Recycle program, you can safely dispose of NAS systems and internal or external HDDs and SSDs. Plus, they recycle devices from any manufacturer—not just Western Digital products. And when you go green and recycle through their program, you’ll get a 15% off Western Digital promo code that counts towards your next purchase of $50 or more when you shop online at Western Digital.
Right now, you can save 10% on your first order when you sign up to receive emails from Western Digital. All you need to do is head to Western Digital’s promo page, where you’ll input your email to sign up for special offers, promotions, and that Western Digital promo code for 10% off. The code will be sent to your inbox where you can use it to save on tech essentials.
Western Digital has even more ways to save, with free standard shipping on eligible orders of $50 or more for non-members. Western Digital members receive free standard shipping on all eligible orders in the lower 48.
Western Digital has education discounts, where students and teachers can get up to 15% off purchases after verifying their status with Youth Discount. Once their identity is verified, they’ll get a voucher code sent to their inbox to use at checkout. Western Digital also has a 15% discount for seniors 55 years or older. Seniors just need to verify their status with Senior Discount. Once age is verified, folks will get a Western Digital promo code sent to their email to save.
It’s hard to know which digital storage system is right for you. In fact, we even made a handy guide on How to Back Up Your Digital Life, and we have a whole roundup of some of our favorite WIRED-tested external hard drives. In a similar vein, Western Digital created an FAQ webpage on how to choose the right storage drive for your needs, factoring in budget and data. A Western Digital hard drive is a budget-friendly option that delivers the capacity needed to store years of photos, videos, backups, workloads, and archives, while a Western Digital SSD offers fast, reliable responsiveness for demanding operating systems and active projects.
FlexiSpot continues to shine with strong options across a wide range of pricing tiers, which makes it difficult to pin any one of its chairs down as the premium offering or the budget option.
The C7 Morpher leans towards the higher end of mid-tier – expect to pay around $800 / £800 when not on sale. It’s a nice enough ergonomic office chair, but it blends in a bit more than some of the best office chairs I’ve reviewed at this price, looking not too dissimilar from other ‘serious’ and ‘professional’ seats.
Depending on what you want and your own design and styling preferences, this may be preferred. I know I prefer simple black or dark grey chairs, unless it’s an accent or statement piece, but that usually comes with elegance. Some people prefer fun colors to liven up their workspace, while others prefer a specific color to match what they already have (or to avoid clashing).
The C7 Morpher can fit that niche of looking nice and simple without looking cheap, but it’s still not going to be an elegant statement piece.
The FlexiSpot C7 Morpher is available for $800 from FlexiSpot.com and £800 from FlexiSpot.co.uk. However, at the time of review, it’s discounted in the US to $650, and FlexiSpot generally runs sales on all its office chairs – if you can wait a bit and watch the price, I’d suggest doing so.
Cost-wise, this is akin to the excellent Steelcase Series 2, sitting at the upper end of mid-tier (arguably, it’s broaching the premium price point). What it lacks in design style, it makes up for in comfort features.
The C7 Morpher arrived in a single, unassuming box that weighed just under 80 lbs. Once we started unboxing, we noticed that every piece was individually wrapped in foam to protect it in transit, something I appreciate; some chair companies skimp on this, and their chairs can arrive damaged.
With a single person, setup took about 30 minutes from unboxing to fully assembled, using the included T-handle Allen wrench — though we didn’t use the included gloves this time.
Upon first inspection, my team and I agreed that the materials for this chair are on the nicer end of the quality spectrum, especially for this price range. The wheel base and the arms are made from aluminum, while the chair frame is a durable plastic. The seat and back are covered in a comfortable yet durable fabric mesh that looks like it will last quite a while without showing wear and tear.
As I’ve mentioned in reviews of other chairs, I’m a big fan of mesh backs for the increased airflow, and because I naturally run a bit warmer than the average individual, I appreciated seeing one on this chair.
After years of leg rests becoming more and more common on chairs, I have noticed that I rarely end up actually using them.
However, that could very well just be a personal thing, as I don’t usually use these chairs for anything but work, and I try to be intentional with my posture when sitting. If you’re the kind of person who would use one, this is another chair with a built-in leg rest that slides underneath the seat when not in use.
Beyond the leg rest and materials, the other adjustability points are pretty standard for FlexiSpot chairs, which are still usually on the more adjustable side among ergonomic offerings for these office thrones.
I’ve put this chair to the test with a handful of my team members, some friends, some family, and several passersby who have taken an interest in my ever-growing chair collection.
While I haven’t tested the capacity up to 380 lb, I have tested the height range with individuals ranging from 5’7″ to 6’2″, and there seems to be wiggle room on both ends for comfortable seating in this chair.
If you plan to use this on a low-pile carpet or a hard floor, the standard casters will be good enough. However, if you want a smoother ride, or if you are on a rougher surface or longer carpet, you will want to upgrade the casters. If you’re interested, FlexiSpot offers this at an additional cost, or you can pick some up on Amazon.
If you’re looking for a simple chair that will still provide ergonomic comfort for all-day work without costing an arm and a leg, this is a chair worth considering. But if you want a more luxurious or elegant-looking chair that stands out a little more – especially at this price – then the C7 Morpher may not be the option for you.
For more office furniture, we’ve tested the best standing desks.
The Department of War has announced that it’s published “never-before-seen files” of unidentified anomalous phenomena (UAP) on a new government webpage, and plans to add material on a “rolling basis.” Some Pentagon UAP footage was declassified during President Donald Trump’s first term, but this new page appears to be the result of a February Truth Social post from Trump calling on the DOW and related agencies “to begin the process of identifying and releasing Government files related to alien and extraterrestrial life, unidentified aerial phenomena (UAP), and unidentified flying objects (UFOs).”
The webpage — war.gov/UFO — includes a carousel of images and files from the DOW, FBI, NASA and more, presented in a way that seems to intentionally lean into the conspiratorial nature of UFO fandom in general. You don’t have to spend long clicking through images and downloading PDFs to realize that there’s not much in the way of actual evidence of aliens, though. Whether or not the files are supposed to direct attention away from the other flailing projects of the second Trump administration — a disastrous war with Iran, for example — they’re much more interesting as an example of how a bureaucracy processes and catalogs unexplained phenomena than as a smoking gun that proves extraterrestrials have visited Earth.
Suspicion that the US government knows more about unidentified anomalous phenomena (the term that replaced unidentified aerial phenomena and UFOs) than it’s letting on has been around for decades, but formal research into the subject wasn’t confirmed until the Advanced Aerospace Threat Identification Program (AATIP) was revealed in 2017. AATIP was formed in 2007 to study UAP and disbanded in 2012, but its work has been carried on by other government groups and task forces, most recently the All-domain Anomaly Resolution Office, an organization currently working inside the DOW that contributed to this new release of files.
The videos of UAP shared during the first Trump administration were unexplained, but a government report ruled that they were not alien spacecraft. It’s not clear whether the new files released by the DOW will change anything, but they make for an interesting curio at the very least.
Intel could soon be making Apple’s chips. Image credit: Intel
Apple has reportedly reached a preliminary agreement that will see Intel become a chipmaking partner, helping reduce the company’s reliance on TSMC for Mac chips and more.
Reports had earlier suggested that the two companies were discussing a deal that would help Apple diversify its supply chain. Now, the pair are closer than ever to manufacturing some of Apple’s chips in the United States.
According to a Wall Street Journal report on Friday, the two companies have been in discussions about the project for more than a year, though significant progress has been made in recent months.
It’s currently unclear which Apple devices Intel will produce chips for, however. A report from late 2025 suggested that Apple intended for Intel to produce M-series chips destined for the Mac and iPad lineups.
The WSJ report notes that Apple’s decision to use Intel comes after President Trump personally advocated for the move. Trump has pushed for more U.S.-based manufacturing of Apple components, and the company has pledged to spend $400 million to make that happen.
The U.S. government previously signed a deal to effectively buy a $9 billion, 10% stake in Intel. Since then, it’s been keen for companies to use Intel wherever possible.
Apple joins Nvidia in giving Intel new business. Nvidia invested $5 billion in Intel in September 2025, with the latter building custom data center CPUs.
Apple currently relies heavily on TSMC to produce the chips for its iPhones, iPads, Macs, Apple Watches, and more. However, manufacturing capacity constraints and Apple’s reliance on a single chipmaker have caused issues of late.
Apple was caught flat-footed by the popularity of the MacBook Neo. A recent boom in Mac mini and Mac Studio popularity has also seen both products become increasingly difficult to buy.
As demand for high-performance chips continues to grow, deals like the one Apple appears to be signing with Intel become increasingly important. As the AI boom requires more silicon than ever, having two companies producing your chips is surely better than one.
Even before Trump’s push to bring manufacturing to the U.S., Apple was aware of its need to diversify its supply chain. As far back as the COVID pandemic, Apple found it relied too heavily on Chinese plants.
Since then, Apple has moved more and more manufacturing to other countries, reducing its reliance on China. Both India and Vietnam have been beneficiaries to date, with the U.S. following suit.
By Rich Perkins, Principal Sales Engineer, Prophet Security
Your security spend has roughly doubled in six years. Your time-to-investigate and respond hasn’t moved. Your CFO is asking why the security headcount keeps growing while the metrics that matter to the business don’t.
The architecture under your SOC is the reason. Not your team. Not your tooling investment. Not your hiring funnel. The operating model your program inherited assumed human-driven alert triage at the volume the business was producing five years ago, and the business stopped producing alerts at that volume a long time ago.
This is a piece about why hiring more analysts won’t close the gap, what changes when you fix the model instead, and the specific limitations and questions that should shape any AI SOC evaluation. It includes a four-question diagnostic you can run on your own program in the time it takes to finish a coffee.
Google Mandiant’s recent M-Trends reporting puts global median dwell time at 14 days. The same report found that in 2025 the “hand-off” window between initial access and transfer to a secondary threat group collapsed to just 22 seconds, down from the 8 hours reported in 2022. CrowdStrike’s 2026 Global Threat Report uncovered similar trends, with the average breakout time, from initial access to exfiltration, falling to 29 minutes.
IBM’s most recent Cost of a Data Breach research puts the average time to identify and contain a breach in 2025 at 241 days, with an average cost of $4.88 million. That’s a drop of roughly 14% from 2020, when the time to identify and contain a breach stood at 281 days. Those numbers have not improved at the pace security spending would suggest, despite that spending having roughly doubled in five years, nor have they kept up with the shortening “breakout” and “hand-off” windows.
This isn’t framed to scare defenders into chasing the next hype. It’s the operating reality. Money in, complexity in, but the curve from detection to investigation and containment barely moves.
SOC teams have already done the obvious efficiency moves. They tier severity. They auto-close known-benign alert classes. They suppress noisy detection rules. They tune. They route. That’s not the problem.
The problem is that even after all of that work, the volume that lands on humans for actual investigation still exceeds what humans can investigate at the depth required. We’ve written an entire ebook on how the SOC queue is the breach, which you can download here.
In the deployments I’ve worked across, the post-tiering volume that hits human triage typically lands in the 120 to 150 alerts per day range. At 20 minutes per investigation including documentation, that’s 40 to 50 analyst-hours daily. SOC teams of 5 to 10 analysts can cover the top of that range during business hours, leaving the rest of the queue for the next shift, the next day, or never.
That’s the gap that doesn’t close with more headcount. You can’t hire enough analysts to investigate 100% of post-tiering volume at the depth the work requires. You can hire your way to better coverage at the margins. You cannot hire your way to the model change.
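The capacity arithmetic above is easy to sanity-check. Here is a minimal sketch using the figures quoted in this piece; the six-analyst team size is an illustrative assumption, not a number from the article.

```python
# Back-of-envelope SOC capacity math using the figures quoted above.
# Team size below is an illustrative assumption.

ALERTS_PER_DAY = (120, 150)      # post-tiering volume hitting human triage
MINUTES_PER_INVESTIGATION = 20   # per alert, including documentation

for alerts in ALERTS_PER_DAY:
    hours_needed = alerts * MINUTES_PER_INVESTIGATION / 60
    print(f"{alerts} alerts/day -> {hours_needed:.0f} analyst-hours/day")

# A 6-analyst team on 8-hour business-hours shifts supplies 48 hours/day.
analysts = 6
hours_available = analysts * 8
coverage = hours_available / 50  # against the top-of-range 50 hours/day
print(f"{analysts} analysts cover {coverage:.0%} of peak daily demand")
```

Swap in your own post-tiering volume and per-investigation time to see where your program lands; anything under 100% coverage is queue that rolls to the next shift, the next day, or never.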
Most breaches don’t trigger a high-severity alert. Instead, the first signs appear in a low-severity alert that gets buried in a queue no human can clear.
This ebook from Prophet Security breaks down why the alert backlog is the actual attack surface, and what changes when AI investigates every alert.
Before going further, run these four questions on your program. Honestly. The answers map your SOC capacity blind spots more reliably than any vendor pitch will.
1. What percentage of alerts above your defined investigation threshold did your team actually investigate last quarter? If less than 90%, you have a coverage gap that’s hiding real risk. The gap exists because of how the work flows, not because anyone is dropping the ball. More headcount won’t close it.
2. How many detection rules has your team suppressed in the last 12 months without an engineering ticket to replace the coverage? Suppressing noisy rules is healthy tuning. Suppressing them without follow-up engineering to replace what they were watching is debt. Each undocumented suppression is an attack surface you’ve stopped watching, and the threats those rules were designed to catch don’t go away because you disabled them.
3. What was your senior analyst turnover last year, and how long did each replacement take to reach productive contribution? If turnover exceeds 15% or ramp exceeds 6 months, your bench is fragile. You’re one resignation away from operational impact. Tribal knowledge walking out the door is a single point of failure most programs don’t have a remediation plan for.
4. If alert volume doubled tomorrow, what’s the first thing your team would stop doing? The honest answer is the part of your program that’s already underwater. Whatever you’d cut first is what’s currently holding on by a thread. That’s where to focus the operating model conversation.
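For readers who want to run the diagnostic mechanically, the four questions above reduce to a rough self-assessment script. The thresholds come from the text; the function name and example answers are purely illustrative.

```python
# Rough self-assessment for the four diagnostic questions above.
# Thresholds are from the article; the example inputs are hypothetical.

def soc_diagnostic(pct_investigated, unreplaced_suppressions,
                   turnover_pct, ramp_months, would_drop_something):
    """Return the list of diagnostic flags your answers raise."""
    flags = []
    if pct_investigated < 90:
        flags.append("coverage gap: <90% of above-threshold alerts investigated")
    if unreplaced_suppressions > 0:
        flags.append("detection debt: suppressed rules with no replacement ticket")
    if turnover_pct > 15 or ramp_months > 6:
        flags.append("fragile bench: turnover >15% or ramp >6 months")
    if would_drop_something:
        flags.append("hidden backlog: something gets dropped if volume doubles")
    return flags

# Example answers for a hypothetical program:
flags = soc_diagnostic(pct_investigated=72, unreplaced_suppressions=9,
                       turnover_pct=20, ramp_months=8,
                       would_drop_something=True)
for f in flags:
    print("-", f)
if len(flags) >= 3:
    print("=> the conversation moves past hiring and into the operating model")
```

Three or more flags maps to the same conclusion the text draws: the architecture, not the headcount, is the constraint.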
If three or more of these answers concern you, the productive conversation moves past hiring and into a different question: whether the architecture under your team can carry the program you actually want to run.
The teams making real progress aren’t the ones hiring more analysts. They’re the ones changing what work humans are required to do at all.
JB Poindexter & Co, an 8,500-employee diversified manufacturer, deployed Prophet AI in 2025. In the first 60 days, they ran 4,407 investigations through the platform with a mean time to investigate under 4 minutes.
That’s 73 investigations per day at depth, against a Mandiant industry median dwell time measured in days. The deployment returned roughly 1,469 hours of analyst time to their team, equivalent to 6.3 analyst-years of investigation capacity at full annualization.
Their CISO, John Barrow, framed the outcome as “faster, more focused, and able to scale without adding immediate headcount.”
The operating model shift in that sentence is what matters. Not “we hired more people.” Not “we worked our existing people harder.” The work no longer required the same number of people.
Cabinetworks ran 3,200 alerts through Prophet AI in 33 days. Six escalated to a human. The unexpected outcome was a 90% reduction in SIEM costs, primarily from no longer needing to ingest and store raw EDR and identity telemetry that had been pulled into the SIEM purely for analyst pivot queries.
When the AI handles those pivots directly against source systems, that ingest tier becomes optional. The line item that gets cut isn’t the obvious one, and most teams don’t model that secondary saving when they evaluate AI SOC tools. They should. For programs running enterprise SIEM contracts in the seven-figure range, the secondary savings often exceed the cost of the AI platform itself.
A second outcome worth noting: when the queue clears, teams stop having to ignore low and medium severity alerts. Most SOCs quietly stop investigating those classes under capacity pressure, even when their security leadership knows the coverage gap matters. A medium-severity alert isn’t risky because it’s medium.
It’s risky because that’s where real attackers hide while your team is buried in critical-severity noise. Bringing the medium and low tiers back into investigation scope is the coverage shift most teams want and very few can resource.
One limitation worth stating plainly: every deployment requires two to four weeks of focused tuning before reaching steady state.
The piece a CISO is mentally writing while reading vendor content is the budget request. Where does this money come from?
Three patterns I’ve seen work, in order of CISO political difficulty.
Path one: Unapproved headcount budget. The cleanest funding path. The team has approved or pending headcount the program hasn’t filled, and the AI platform replaces the need to hire that role. Fully loaded cost for a Tier 2 analyst typically runs $180K to $300K depending on market and seniority, which sets the floor for what the AI platform needs to displace to make the math work.
The JB Poindexter pattern fits here. The “scaling without adding immediate headcount” framing is procurement language for “this is what we’re doing instead of approving the next hire.”
Path two: SIEM cost reduction. If your team is using the SIEM as an investigation pivot workspace (raw EDR telemetry, identity logs, network data), and the AI platform takes over those pivots, the SIEM ingest and storage tier becomes optional.
The Cabinetworks pattern. SIEM ingest savings depend heavily on volume but commonly run 30 to 60 percent of total SIEM spend when investigation telemetry is the main driver.
For programs running mid-six-figure or seven-figure SIEM contracts, this funding path can fully cover the AI platform with savings left over. Get your SIEM renewal cycle date before you start the evaluation, because the timing matters.
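A back-of-envelope sketch of how paths one and two combine into a budget envelope. The analyst-cost and savings ranges come from the text; the SIEM contract value is a hypothetical placeholder, not a real figure.

```python
# Rough funding math for paths one and two, using the ranges quoted above.
# The SIEM contract value is a hypothetical assumption.

tier2_fully_loaded = (180_000, 300_000)  # unfilled-headcount floor, per the text

siem_annual_spend = 1_200_000            # hypothetical seven-figure SIEM contract
savings_range = (0.30, 0.60)             # quoted share of spend tied to pivot telemetry

low = siem_annual_spend * savings_range[0]
high = siem_annual_spend * savings_range[1]
print(f"SIEM savings: ${low:,.0f} - ${high:,.0f}/yr")
print(f"Headcount displaced: ${tier2_fully_loaded[0]:,} - ${tier2_fully_loaded[1]:,}/yr")

# Combined, paths one and two set the budget envelope an AI platform
# has to fit inside for the math to work.
envelope_low = low + tier2_fully_loaded[0]
print(f"Year-one envelope (low end): ${envelope_low:,.0f}")
```

Plugging in your actual SIEM renewal number before the evaluation starts, as the text suggests, is what turns this from a slide into a funding plan.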
Path three: Tool displacement. The hardest political fight. Replacing an existing SOAR, an existing case management workflow, or an existing managed service. The savings vary too widely to generalize, but the displacement creates internal opposition from whoever owns the displaced tool. Plan for it as a 6-month change management project, not a procurement decision.
Most programs end up funding through a combination of paths one and two. Path three is a year-two conversation, not a year-one decision.
I’m pro AI SOC. I work for one. So when I tell you where it isn’t the right tool, take it seriously. Three categories where I’d recommend keeping humans in the lead.
Insider threat investigations where the signal lives in human context, not logs. AI does fine on the DLP-shaped insider threat work where the signal is in telemetry: unusual file movement, exfil to personal cloud, after-hours pulls of sensitive repos. Where it struggles is the harder subset where the deciding signal isn’t in any log.
The PIP that started Monday. The conversation a manager had two weeks ago. The contractor whose contract ends Friday. AI doesn’t have that context. Your humans do.
The right design splits the work cleanly: AI handles the telemetry layer, your team handles the human-context layer. Asking one tool to do both is where these investigations break down.
Novel TTPs with no analog in training data. AI investigation is fundamentally pattern-matching over historical examples. By definition, that’s weakest on attacks that don’t look like anything you’ve seen. Your senior threat hunters earn their keep on the alerts that don’t match anything in the catalog. Don’t outsource that work.
Highly regulated environments where data residency rules dictate where alert telemetry can live. If your compliance posture won’t let metadata leave a specific cloud or country, most AI SOC platforms (Prophet AI included) require real architecture work to fit. Some can’t fit at all. Don’t let any vendor wave that concern away with a slide.
If you’re evaluating an AI SOC tool, ask the vendor exactly where their tool fails. If they don’t have an answer ready, that’s the answer.
Three questions come up in almost every evaluation, and they deserve direct answers.
What happens when the AI gets it wrong? Prophet AI documents every step of every investigation. Every question asked, every query run, every piece of evidence pulled, the reasoning that led to the verdict. When a verdict is wrong, the chain of reasoning shows exactly where it went wrong, and your team can encode the correction back into Guidance so the same mistake doesn’t repeat.
That’s a different audit trail than the three-sentence case notes most analysts write under queue pressure today, and it matters more than vendor content typically acknowledges.
Regulators are starting to ask about AI-driven security decisions. Boards are asking about defensible documentation of what the SOC investigated and why. Post-incident reviews are easier to run when the evidence chain is complete by default. The audit trail isn’t a feature. It’s how you keep your seat at the table when the auditor or the board comes asking.
What happens to detection engineering? This is the question senior practitioners ask first, and it’s the right question. You might worry that if AI handles investigation, your team loses the natural feedback loop where analysts catch and tune noisy detections. The honest answer: that work moves explicitly upstream.
Instead of relying on manual triage to spot noise, detection engineering now uses the AI’s comprehensive investigation data as a massive feedback loop, shifting the focus from suppressing alerts to equipping the AI with better context.
To make that upstream work happen, detection engineering shifts from an emergent activity squeezed between alerts to a scheduled discipline owned by the senior analysts whose triage time the AI has freed up. Teams that fail to operationalize that shift see detection quality drift over time. Teams that operationalize it well see detection quality improve, because the engineering happens with intention and dedicated focus.
What does the buying committee look like? AI SOC platforms touch security operations, but the procurement conversation often pulls in IT (for integrations and identity), compliance (for data handling and audit posture), legal (for the data processing agreement and AI-specific contractual terms), and procurement (for vendor risk review).
Plan for that early. Programs that try to push AI SOC through as a security-team decision often hit a six-week delay when compliance discovers the data flow questions in week four. Programs that bring compliance and legal in at the start of the evaluation typically close in half the time.
One question vendor content almost never addresses directly, and CISOs care about it more than vendors realize: what happens to your program if the AI SOC vendor gets acquired, pivots, or fails? Three-year procurement cycles outlast a lot of vendor strategies.
Three things worth confirming with any AI SOC vendor before signing.
First, data portability: can you export your investigation history, Guidance configurations, and detection logic in a format that survives a vendor change?
Second, runbook independence: are the human-readable Guidance rules you encoded specific to this vendor, or do they document SOC logic your team could rebuild elsewhere?
Third, contractual continuity: what happens to service obligations, data handling, and support during an acquisition or wind-down event?
The third tends to separate the serious vendors from the rest. Most can answer the first two. Few have a clean answer to the third without significant pre-work, which is itself a signal worth noting during evaluation.
Prophet Security’s agentic AI SOC platform operationalizes expert analyst techniques at machine speed across all alert volumes, regardless of severity, to ensure a consistently clear triage queue and preemptively neutralize threats.
If your honest answers to the four diagnostic questions earlier in this piece concerned you, the next conversation isn’t whether AI SOC is the answer. It’s what your senior analysts would actually do with their Tuesday mornings if the triage queue weren’t running them.
That’s the operating model question. Whether you solve it with Prophet Security or someone else, the architecture is what needs to change. Hiring more analysts to triage at machine-generated volume is a strategy that worked in 2018. The math hasn’t worked since 2022.
The teams that change the architecture will get a different conversation with their board next year. The teams that don’t will get the same one they had last year, with a slightly higher number on the spend line and the same number on the time-to-detect line.
Pick the conversation you want to be having.
If your SOC is dealing with alert overload or long investigation times, we’d be happy to show you what Prophet AI looks like in practice. Request a demo or reach out directly to learn more.
Rich Perkins is a Principal Sales Engineer at Prophet Security. Reach him at rich.perkins@prophetsecurity.ai or connect on LinkedIn.
Sponsored and written by Prophet Security.