The D900 takes all the things that Topping is very good at and evolves them to their logical conclusion. This is truly state-of-the-art decoding and performance that very few brands can get anywhere near. This is the best device of its kind anywhere near the price.
Sounds incredible
Good connectivity
Very well made and attractive
No RCA outs
Can be a little reluctant to connect
Remote is a bit clunky
Key Features

Source: USB Audio, I2S, coax, optical, AES and Bluetooth

Audio quality: Supports PCM to 768kHz and DSD512

Connectivity: XLR outputs
Introduction
In the space of a few years, Topping has gone from being completely unheard of to a mainstay of affordable hi-fi.
From not much over £100, the company offers a range of deeply capable digital to analogue convertors and headphone amplifiers. They have an unerring habit of doing more for less than most of their key rivals and they have a determinedly loyal following as a result.
What you see here is different to almost anything that Topping has built before. Sure, it’s still a DAC (and just a DAC, I’ll come to that in due course) but the manner in which it does digital to analogue decoding is something pointedly different to almost anything else.
The overwhelming majority of devices on the market make use of off-the-shelf components from two producers: ESS and AKM. There is then a smattering of smaller concerns (Texas Instruments, Wolfson and Crystal) but the result is the same: the actual business of conversion is handled by a fixed piece of silicon.
The D900 joins a tiny number of devices where this isn’t the case. The manner in which it turns a digital signal into an analogue one is bespoke, designed to maximise the areas of performance that Topping feels are important. This is not without risk; Topping has a formidable reputation built on great implementations of ESS and AKM DACs.
The D900 is at once an argument, made to people who seem quite settled with what’s already available, that there might be a bit more to the business of decoding, and a step outside Topping’s own comfort zone of expertise. How does it fare?
Pricing
In the UK, the D900 is available from a selection of retailers for £1799. It can be ordered online from some authorised retailers and there should be no issue securing one from any location in the UK. In the USA the D900 is available for $1799, reflecting a larger market and different sales model. In Australia it is available for $3099.
It is possible at the time of writing to find online locations shipping the D900 direct from the Far East, usually with a reduction over the UK retail. These units will not have a UK warranty, however, so it would be best to be careful about buying this way.
Design
Solid and understated
Small but informative display
Remote control
Matching headphone amp
Some gremlins connecting up
The D900 is a three-quarter-width design at 330mm. It’s perfectly possible that, unboxing it from the (really well thought out) packaging, you might find it slightly underwhelming, but I suspect that feeling will pass pretty quickly.
The D900 arrives looking sober to the point of minimalist. I have to say I feel this is the right approach and I really like it. The D900 has a quiet seriousness to it that should sit in most systems very effectively. The standard of build is excellent and it whispers rather than shouts a level of quality. It is exclusively available in silver.
The main focal point on the front panel is a small display. This can show input and incoming sample rate information as well as settings menus and both an old fashioned output VU meter and more modern graphic equaliser style interface. The display isn’t terribly large and can’t be read at a huge distance but it’s useful to have when setting the D900 up.
There is a small but no less sturdy remote handset too. This has been a bit of a mixed bag in use; there have been points where it hasn’t been responsive at all, but it’s useful to have, particularly if you intend to use the D900 as a preamp in your system. The remote also combines with the display to simplify settings menu access, although the menu tree is not as intuitive as it could be.
As hinted at earlier, the D900 is a DAC and not a DAC headphone amp (and this is why the D900 is a ‘D’ and not a ‘DX’). If you want to go all in, Topping makes the entirely analogue A900 to partner the D900 and this is a formidable looking device with sockets for any occasion.
It does mean that the D900 isn’t as all-singing, all-dancing as some of its more affordable brethren, but it allows it to focus on a smaller range of tasks. How much of an issue this will be to you is almost certainly going to depend on what equipment you have kicking around already.
Getting the D900 up and running wasn’t completely straightforward. It would not connect at all to the Chord Electronics 2Go/2Yu streaming head unit over USB and fought me for some time to connect to the usually viceless Eversolo T8.
First it didn’t want to be seen and then, once it was, it proceeded to lock incorrectly, resulting in garbled, high speed sound. Once it was sorted, it stayed sorted but I had to put the effort in.
Specification
Wholly bespoke digital to analogue decoding
Wide selection of digital inputs…
…but slightly more limited outputs
On board EQ
Preamp functionality
The principal focus of the D900 is its decoding. This isn’t the first time Topping has implemented the system: the D90 III Discrete uses a simplified version, but the D900 takes it to its logical conclusion.
The system is called PSRM, which stands for Precision Stream Reconstruction Matrix. It is a ‘1-bit’ system (a notional ideal that dates back to the early days of CD: so long as the signal is handled correctly before it reaches the actual decoder, it offers the scope for excellent measured performance) and incorporates discrete 1-bit modules that convert the digital audio stream into an analogue voltage by turning each audio sample into a very fast train of 1-bit pulses.
The waveform is defined by the density of those pulses and shaped by an analogue reconstruction filter. The principle is the same as an off-the-shelf delta-sigma DAC, but Topping controls the entire process rather than buying in a chip that gets on with it. Where the D90 III Discrete had 16 of these modules, the D900 has 32.
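As an illustration of the principle only (a toy model in Python, not Topping’s actual PSRM circuitry), a first-order delta-sigma modulator shows how a waveform becomes a 1-bit pulse train whose density tracks the signal, and how a crude averaging filter then recovers it:

```python
import math

def delta_sigma_1bit(samples, oversample=64):
    """Turn samples in [-1, 1] into a +1/-1 pulse train (pulse-density coding)."""
    bits = []
    integrator = 0.0
    for s in samples:
        for _ in range(oversample):  # many 1-bit pulses per audio sample
            # First-order noise shaping: feed back the previous 1-bit decision
            integrator += s - (1.0 if integrator > 0 else -1.0)
            bits.append(1 if integrator > 0 else -1)
    return bits

def reconstruct(bits, oversample=64):
    """Crude stand-in for the analogue reconstruction filter: a block average."""
    return [sum(bits[i:i + oversample]) / oversample
            for i in range(0, len(bits), oversample)]

# Pulse density follows the waveform: a slow sine survives the round trip.
sine = [math.sin(2 * math.pi * n / 48) for n in range(48)]
out = reconstruct(delta_sigma_1bit(sine))
```

Real designs replace the block average with a proper analogue low-pass filter and run the modulator far faster; the D900 runs 32 such modules in parallel.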
These modules are fed by a bespoke, purely resistive voltage-reference power supply, with digital switching logic operating at the nanosecond level for maximum performance.
The business of turning this signal into a usable output is undertaken by a new, proprietary I/V conversion circuit composed of low distortion integrated op-amps and ultra-low-noise discrete components carefully selected after repeated testing.
If this all makes your eyes glaze over, you can focus on the fact that the claimed measurements (which the D900 has matched when independently measured) are state of the art.
This formidable hardware is fed by an extensive selection of inputs. There are seven wired connections: two optical, two coax, one AES, one USB (on both USB-C and USB-B connectors) and an I2S connection, a very high performance option derived from pro audio.
These are augmented by Topping’s excellent Bluetooth implementation. Sample rate handling via USB and I2S is PCM to 768kHz and DSD512, with the other connections having lower limits.
The situation with regard to outputs is a little less comprehensive, though. Output is exclusively via XLR, with both fixed and variable level connections fitted. Topping says it’s perfectly OK to use XLR-to-RCA adapters should you need to, but you’ll need to budget for those if that’s the way you want to go.
Something you do get is Topping Tune. This lets you adjust a ten-band EQ to tweak the Topping’s output to better suit your room. What’s quite interesting about this software is that Topping has elected to make it desktop software, adjusted on a screen you can actually see without squinting.
From there, adjustments are communicated over USB to the device itself. I’ve found Topping Tune a bit tricky to actually uninstall from a Mac but, if you own the D900 rather than have it turn up for review, this should be less of an issue.
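As a sketch of what a ten-band EQ adjustment amounts to (the band centres and the nearest-band lookup here are hypothetical; Topping doesn’t document Topping Tune’s internals):

```python
# Conceptual ten-band EQ curve. The band centres below are the common
# ISO-style octave centres, assumed for illustration only.
BAND_CENTRES_HZ = [31, 62, 125, 250, 500, 1000, 2000, 4000, 8000, 16000]

def eq_gain_db(freq_hz, band_gains_db):
    """Return the gain applied at freq_hz using the simplest possible
    scheme: look up the nearest band's gain."""
    nearest = min(range(len(BAND_CENTRES_HZ)),
                  key=lambda i: abs(BAND_CENTRES_HZ[i] - freq_hz))
    return band_gains_db[nearest]

# Example: tame a 60Hz room mode by cutting the 62Hz band 4dB.
gains = [0, -4, 0, 0, 0, 0, 0, 0, 0, 0]
```

A real EQ would use peaking filters with defined bandwidths rather than a nearest-band lookup, but the user-facing idea is the same: ten gains, one per band.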
In keeping with most Topping devices, the D900 has a volume control and can be used as a preamp. If you have no further interest in an analogue source, it can be used directly into a power amp or active speakers to streamline your system.
Performance
Truly outstanding levels of detail
Immaculate soundstage and three dimensionality
Surprisingly tolerant of poor recordings
Ensures you can’t hear the cleverness
Topping’s priority in its circuit design is low distortion and the best signal-to-noise ratio it can manage; in this case a claimed harmonic distortion below -140dB and a signal-to-noise ratio of 131dB.
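In linear terms, those decibel figures work out as follows (simple dB conversions, not a measurement of the actual hardware):

```python
def db_to_ratio(db):
    """Convert an amplitude level in dB to a linear ratio (20 dB per decade)."""
    return 10 ** (db / 20)

# 131dB SNR: the noise floor sits roughly 3.5 million times below the signal.
snr_linear = db_to_ratio(131)

# Distortion below -140dB: harmonic artefacts at one ten-millionth of the
# signal's level.
thd_linear = db_to_ratio(-140)
```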
This is great in an abstract sense but what does it mean? When you listen to the sublime Fink Meets the Royal Concertgebouw Orchestra on the D900, the effect is subtly but noticeably different to how it often sounds. The opening Berlin Sunrise builds from silence… but on the Topping it’s not silent. In the seven seconds before the orchestra actually starts, the D900 finds the tiniest rustling and stirring of 100-plus people getting ready to perform. It’s buried in the noise floor of the recording… but the D900 finds it.
It’s not simply about these incidental artefacts either. As the track builds and builds, there is a logic and order to the orchestra that makes it sound like a believable body of musicians. Different instruments play out from different sections and you can discern individual musicians rather than a single body of ‘strings’ or ‘brass’. It’s the difference between a reproduction and a performance, and the Topping excels at it.
It doesn’t have to be an orchestra either. Listen to the pounding and dramatic GO! on Santigold’s Master of My Make-Believe and the Topping doesn’t unpick the dense layers of production, but it ensures that the whole performance is that little bit more intelligible and orderly than it was before.
It does this with astonishing consistency too. A mid-seventies Trojan Records outing that sounds like it was saved to tape and then left at the bottom of the sea? Not a problem. Absolute perfection from Blue Note? Delivered as intended. The Topping doesn’t alter or even tweak what you hear; it simply delivers more of it.
What I have found most impressive is how well it handles less than perfect recordings. You can give the D900 II by Meat Puppets, a brilliantly entertaining and hugely influential album but one that is in no way, shape or form hi-fi, and the D900 does its work of opening it out and finding detail while the chaos and energy of the album are left intact.
This isn’t a ‘save for best’ style DAC; it’s a genuinely engaging and listenable device with everything you choose to play on it.
The single most important thing is that you can’t hear the technology at work when you listen to the D900. For some of you reading this, that might sound anticlimactic; why go to all the effort? But it reflects that the hardware is a means to an end rather than the end in itself.
It’s also worth noting that achieving this so early in the development of the technology is notable. Companies like Chord Electronics and dCS, which also use bespoke decoding, took rather longer to achieve the same feat, and it represents a considerable technical achievement on Topping’s part.
Should you buy it?
The Topping represents the state of the art in digital decoding and it does so at a price where almost everything else uses off-the-shelf decoding options. This is a taste of the truly exotic (a part of the digital market that has, at times, been in danger of pricing itself out of existence) at a cost that isn’t too crazy. It combines this with a useful and comprehensive spec too.
Some detail aspects of the D900 aren’t as easy to live with as some key rivals’. The slightly reluctant remote, the reluctance to connect first time and the absence of RCA connections make for a device that is fractionally more demanding than some rivals, one that might need a bit of extra work on your part to get up and running.
Final Thoughts
There is some mild but genuine jeopardy to Topping building the D900. There will always be a subset of people who feel it represents Topping somehow ‘selling out’: building something that, even if it does measure better, contradicts the affordable brilliance of what the company has been doing so far.
If it wasn’t actually better, it would have looked pointless; a device that wasn’t any improvement over its more conventional brethren. The fact that the company was willing to take the risk and build it should be commended.
How We Test
We test every DAC we review thoroughly over an extended period of time. We use industry standard tests to compare features properly. We’ll always tell you what we find.
We never, ever, accept money to review a product.
Find out more about how we test in our ethics policy.
Tested for several days
Tested with real world use
FAQs
Does the Topping D900 DAC support Bluetooth?
Yes, this model comes with built-in Bluetooth 5.1 support with LDAC streaming.
Full Specs

| Topping D900 DAC Review | |
| --- | --- |
| Manufacturer | – |
| Size (Dimensions) | 330 x 210 x 57mm |
| Release Date | 2025 |
| Connectivity | Bluetooth 5.1 |
| Audio Formats | Up to 32-bit/768kHz PCM, DSD512, LDAC Bluetooth, SBC, AAC, aptX, aptX Adaptive |
| Bluetooth | Yes |
| Inputs | USB-C, USB-B, two optical, two coaxial, AES, IIS-LVDS |
Meta is reportedly cutting about 10% of its workforce, or roughly 8,000 jobs, while closing thousands of open roles it had intended to fill. “We’re doing this as part of our continued effort to run the company more efficiently and to allow us to offset the other investments we’re making,” said Janelle Gale, Meta’s chief people officer. The company had almost 79,000 employees at the start of the year. Quartz reports: Meta CEO Mark Zuckerberg has poured resources into building out AI capabilities, directing spending toward model development, chatbot products, and the engineering talent to support them. Meta set its 2026 capital expenditure guidance at $115 billion to $135 billion, almost double the $72 billion it spent in 2025. Employees have been encouraged to use AI agents internally for tasks such as writing code.
The early disclosure, Gale explained, was prompted by the fact that information about the cuts had already made its way into press reports before the company was ready to announce. “I know this is unwelcome news and confirming this puts everyone in an uneasy state, but we feel this is the best path forward, given the circumstances,” she wrote.
According to the memo, severance for affected workers in the United States will cover 18 months of COBRA health insurance premiums, along with a base pay component of 16 weeks that increases by two weeks for each year of service. Departing employees will have access to job placement assistance and, where applicable, help navigating immigration status. Packages outside the U.S. will vary by country. Meta cut between 10% and 15% of its Reality Labs workforce in January, shut down several VR game studios, and shed about 700 positions across at least five divisions in March.
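The reported US severance formula is simple enough to express directly (illustrative arithmetic based only on the memo’s figures):

```python
def severance_weeks(years_of_service):
    """Reported Meta US formula: 16 weeks of base pay plus two weeks
    per year of service."""
    return 16 + 2 * years_of_service

# A five-year employee would receive 26 weeks of base pay,
# plus the 18 months of COBRA premiums on top.
```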
Microsoft’s gaming division is reverting to the Xbox name after operating as “Microsoft Gaming” since 2022. (Microsoft Photo)
Microsoft is changing the way it measures success in its Xbox business, focusing on daily active players rather than longer periods of time — a tighter measure that reflects the way the biggest social media platforms have evolved to gauge engagement and retention of users.
Xbox will also reevaluate its approach to game exclusivity, the timing of releases across platforms, and the use of AI, while looking for opportunities for strategic acquisitions.
And yes, it’s the Xbox business again, not “Microsoft Gaming,” the broader name the company adopted for the division internally around the time of its giant Activision Blizzard acquisition.
Those are some of the highlights from a memo that Xbox CEO Asha Sharma and Chief Content Officer Matt Booty sent to employees Thursday, laying out a strategic vision for the division about two months into their tenure in the roles.
The memo, titled “We Are Xbox,” opens with a blunt admission that players are frustrated, and frames Xbox as a challenger with work to do.
“From the beginning, Xbox was built by people willing to try things that others wouldn’t,” they write. “We placed a consumer bet inside an enterprise company because we believed gaming would define the living room, and we were at risk of missing it.”
Asha Sharma and Matt Booty, the new leadership team for Microsoft Gaming. (Microsoft Photo)
The memo comes amid financial pressure on the gaming business. Revenue fell 9% in the most recent holiday quarter to $5.96 billion, with Xbox content and services coming in below internal projections. Hardware sales dropped 32%.
Earlier this week, Sharma made her first major move, cutting the price of Game Pass Ultimate from $29.99 to $22.99 a month while removing new Call of Duty games from the day-one lineup — unwinding a bundle that had driven a 50% price hike last October.
Sony’s PlayStation remains comfortably ahead in the current console generation, and Nintendo’s Switch 2 has had a strong launch.
The memo references Microsoft’s own next-generation console, Project Helix, which it unveiled at GDC in March, saying the machine will “lead in performance and play your console and PC games.” Alpha hardware is expected to go to developers in 2027.
Sharma took over as CEO of Microsoft Gaming in February, replacing Phil Spencer, who retired after 38 years at the company. She had been running Microsoft’s CoreAI product organization and previously served as chief operating officer at Instacart and as a vice president at Meta.
That social media background may help explain the shift to daily active players as the internal “north star,” a metric that defined how Facebook and Instagram measured their own success.
Microsoft has said its gaming ecosystem has more than 500 million monthly active users across platforms and devices. It’s not clear if Microsoft will shift to daily users in its public reporting.
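The metric itself is straightforward: social platforms typically track daily over monthly actives as a “stickiness” ratio. A sketch with illustrative numbers (only the 500 million monthly figure comes from Microsoft; the daily figure is invented):

```python
def stickiness(daily_active, monthly_active):
    """DAU/MAU: the fraction of monthly users who show up on an average day."""
    return daily_active / monthly_active

# If, hypothetically, 100M of the reported 500M monthly actives played daily,
# stickiness would be 0.2: the average user plays about 6 days a month.
ratio = stickiness(100_000_000, 500_000_000)
```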
The memo closes with 10 operating principles for the division, including “earn every player,” “protect our art,” “stay rebellious,” and “clarity is kindness.” They conclude, “We’re here to do the most creative and courageous work of our lives, and that’s what we’ll do together.”
Microsoft reports earnings for the March quarter next week, including Xbox results.
Microsoft is planning to get rid of more US employees via its first voluntary buyout program, CNBC reports. The buyout program will reportedly be offered to US employees at “the senior director level and below whose years of employment and age add up to 70 or higher,” and could cover up to 7 percent of the company’s US workforce.
With around 125,000 employees in the US as of June 2025, that could mean up to 8,750 will be offered a paid exit when Microsoft begins its program in May. That’s a smaller figure than the 15,000 or so employees the company laid off in May and July of 2025, but still significant, particularly if the majority of employees do take the buyout.
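The reported eligibility rule and headcount maths are easy to sanity-check (a sketch of the criteria as CNBC describes them, not Microsoft’s actual HR logic):

```python
def eligible_for_buyout(age, years_of_service, senior_director_or_below=True):
    """Reported 'rule of 70': age plus years of employment must total 70
    or more, for staff at senior director level and below."""
    return senior_director_or_below and (age + years_of_service) >= 70

# Up to 7% of roughly 125,000 US employees could be offered a paid exit.
max_offers = round(125_000 * 0.07)  # 8,750
```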
“Our hope is that this program gives those eligible the choice to take that next step on their own terms, with generous company support,” Microsoft’s executive vice president and chief people officer Amy Coleman shared in a memo viewed by CNBC.
Engadget has contacted Microsoft to confirm the existence of the voluntary buyout program and other details CNBC reported. We’ll update this article if we hear back.
Microsoft used its 2025 layoffs to streamline layers of management and its video game business, but these new cuts may have a lot more to do with AI. Not necessarily because the company’s adoption of AI tools has made employees redundant, but rather because Microsoft continues to aggressively spend on AI infrastructure. The company said it spent $37.5 billion in capital expenditures during Q2 2026, much of which went toward data center buildout.
After months of rumors and reports that OpenAI was developing a new, more powerful AI large language model for use in ChatGPT and through its application programming interface (API), allegedly codenamed “Spud” internally, the company has today unveiled its latest offering under the more formal name GPT-5.5.
And to likely no one’s surprise, it’s hardly a “potato” in the disparaging sense of the word: GPT-5.5 retakes the lead for OpenAI in generally available LLMs, coming ahead of rivals Anthropic’s and Google’s latest public offerings, and even beating the private Anthropic Claude Mythos Preview model narrowly on one benchmark (essentially a statistical tie).
“It’s definitely our strongest model yet on coding, both measured by benchmarks and based on the feedback that we’ve gotten from trusted partners, as well as our own experience,” explained Amelia “Mia” Glaese, VP of Research at OpenAI, in a video call with journalists ahead of the launch earlier today.
OpenAI positions GPT-5.5 as a fundamental redesign of how intelligence interacts with a computer’s operating system and professional software stacks.
“What is really special about this model is how much more it can do with less guidance,” said OpenAI co-founder and president Greg Brockman on the same call. “It’s way more intuitive to use. It can look at an unclear problem and figure out what needs to happen next.”
Brockman proceeded to emphasize the areas in which users can expect to see gains from using GPT-5.5 compared to OpenAI’s prior state-of-the-art model, GPT-5.4, which remains available (for now) to users and enterprises at half the API cost of its new successor.
“It’s extremely good at coding,” Brockman said of GPT-5.5. “It’s also great at broader computer work, computer use, scientific research—these kinds of applications that are very intelligent bottlenecks.”
OpenAI CEO and co-founder Sam Altman also weighed in on the launch and the company’s philosophy in a post on X, writing, in part: “We want our users to have access to the best technology and for everyone to have equal opportunity.”
The model is available in two variants: GPT-5.5 and GPT-5.5 Pro, distinguished by the latter offering enhanced precision and specialized logic for handling the most rigorous cognitive demands.
While the standard version serves as the versatile flagship for general intelligence tasks, the Pro model is architected specifically for high-stakes environments such as legal research, data science, and advanced business analytics where accuracy is paramount. This premium tier provides noticeably more comprehensive and better-structured responses, supported by specialized latency optimizations that ensure high-quality performance during complex, multi-step workflows.
Unfortunately for third-party software developers, API access is not yet available for either GPT-5.5 or GPT-5.5 Pro, but it will be coming “very soon,” according to the company’s announcement blog post.
“API deployments require different safeguards and we are working closely with partners and customers on the safety and security requirements for serving it at scale,” OpenAI writes.
For the time being, GPT-5.5 is available only to paying subscribers on the ChatGPT Plus ($20 monthly), Pro ($100-$200 monthly), Business, and Enterprise tiers, with GPT-5.5 Pro access starting at the Pro tier and upwards.
A focus on agency
At the core of GPT-5.5 is a focus on “agentic” performance—specifically in coding, computer use, and scientific research.
Unlike its predecessors, which often required granular, step-by-step prompting to avoid “hallucinating” a path forward, GPT-5.5 is designed to handle messy, multi-part tasks autonomously.
It excels at researching online, debugging complex codebases, and moving between documents and spreadsheets without human intervention.
One of the most significant technical leaps is the model’s efficiency. While larger models typically suffer from increased latency, GPT-5.5 matches the per-token latency of the previous GPT-5.4 while delivering a higher level of intelligence.
This was achieved through a deep hardware-software co-design. OpenAI served GPT-5.5 on NVIDIA GB200 and GB300 NVL72 systems, utilizing custom heuristic algorithms—written by the AI itself—to partition and balance work across GPU cores.
This optimization reportedly increased token generation speeds by over 20%.

For high-stakes reasoning, the “GPT-5.5 Thinking” mode in ChatGPT provides smarter, more concise answers by allowing the model more internal “compute time” to verify its own assumptions before responding.
This capability is particularly visible in the model’s performance on “Expert-SWE,” an internal OpenAI benchmark for long-horizon coding tasks with a median human completion time of 20 hours. GPT-5.5 notably outperformed GPT-5.4 on this metric while using significantly fewer tokens.
Benchmarks show OpenAI has retaken the lead among the most powerful publicly available LLMs from Claude Opus 4.7 (though the unreleased Mythos still outperforms it in places)
The market for leading U.S.-made frontier models has become an increasingly tight race between OpenAI, Anthropic, and Google.
A week ago to the day, OpenAI rival Anthropic released Opus 4.7, its most powerful generally available model, taking over the leaderboard in terms of the number of third-party benchmarks on which it held the lead.
Yet today, GPT-5.5 has surpassed it and even Anthropic’s heavily restricted, more powerful model Claude Mythos Preview, albeit only on one benchmark, Terminal-Bench 2.0, which tests “a model’s ability to navigate and complete tasks in a sandboxed terminal environment.”
GPT-5.5 achieved 82.7% accuracy on Terminal-Bench 2.0, easily surpassing Opus 4.7 (69.4%) and narrowly beating the Mythos Preview (82.0%).
However, in multidisciplinary reasoning without tools, the landscape is more competitive. On Humanity’s Last Exam without tools, GPT-5.5 Pro scored 43.1%, trailing behind Opus 4.7 (46.9%) and Mythos Preview (56.8%).
| Benchmark | GPT-5.5 | Claude Opus 4.7 | Gemini 3.1 Pro | Mythos Preview* |
| --- | --- | --- | --- | --- |
| Terminal-Bench 2.0 | 82.7 | 69.4 | 68.5 | 82.0 |
| Expert-SWE (Internal) | 73.1 | — | — | — |
| GDPval (wins or ties) | 84.9 | 80.3 | 67.3 | — |
| OSWorld-Verified | 78.7 | 78.0 | — | 79.6 |
| Toolathlon | 55.6 | — | 48.8 | — |
| BrowseComp | 84.4 | 79.3 | 85.9 | 86.9 |
| FrontierMath Tier 1–3 | 51.7 | 43.8 | 36.9 | — |
| FrontierMath Tier 4 | 35.4 | 22.9 | 16.7 | — |
| CyberGym | 81.8 | 73.1 | — | 83.1 |
| Tau2-bench Telecom (original prompts) | 98.0 | — | — | — |
| OfficeQA Pro | 54.1 | 43.6 | 18.1 | — |
| Investment Banking Modeling Tasks (Internal) | 88.5 | — | — | — |
| MMMU Pro (no tools) | 81.2 | — | 80.5 | — |
| MMMU Pro (with tools) | 83.2 | — | — | — |
| GeneBench | 25.0 | — | — | — |
| BixBench | 80.5 | — | — | — |
| Capture-the-Flags challenge tasks (Internal) | 88.1 | — | — | — |
| ARC-AGI-2 (Verified) | 85.0 | 75.8 | 77.1 | — |
| SWE-bench Pro (Public) | 58.6 | 64.3 | 54.2 | 77.8 |
This suggests that while OpenAI is winning on “computer use” and “agency,” other models may still hold an edge in pure, zero-shot academic knowledge.
It is important to clarify that Mythos Preview is not a generally available product; Anthropic has classified it as a strategic defensive asset due to its high cybersecurity risks, restricting its access to a small, limited audience of trusted partners and government agencies.
Because Mythos is excluded from broad commercial use, the primary market competition remains between GPT-5.5, Gemini 3.1 Pro, and Claude Opus 4.7.
So when it comes to models that the general public can access, GPT-5.5 has retaken the crown for OpenAI, achieving the state-of-the-art across 14 benchmarks compared to 4 for Claude Opus 4.7 and 2 for Google Gemini 3.1 Pro.
It dominates in agentic computer use, economic knowledge work (GDPval), specialized cybersecurity (CyberGym), and complex mathematics (Frontier Math).
In comparison, Claude Opus 4.7 leads on software engineering and reasoning without tools, while Gemini 3.1 Pro leads in three categories, specifically excelling in academic reasoning and financial analysis.
Increased costs for users
The shift in intelligence comes with a significant price increase for API developers, according to material OpenAI shared ahead of the model’s public release.
OpenAI has effectively doubled the entry price of its flagship model compared to the previous generation, and the most cutting-edge variant, GPT-5.5 Pro, costs six times more again:
| Model | Input Price (per 1M tokens) | Output Price (per 1M tokens) |
| --- | --- | --- |
| GPT-5.4 | $2.50 | $15.00 |
| GPT-5.5 | $5.00 | $30.00 |
| GPT-5.5 Pro | $30.00 | $180.00 |
To mitigate these costs, OpenAI emphasizes that GPT-5.5 is more “token efficient,” meaning it uses fewer tokens to complete the same task compared to GPT-5.4.
For users requiring speed over depth, OpenAI also introduced a Fast mode in Codex, which generates tokens 1.5x faster but at a 2.5x price premium.
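Using the published per-token rates, the cost arithmetic works out as follows (the token counts here are invented purely for illustration):

```python
# Published API rates: (input, output) dollars per 1M tokens.
PRICES = {
    "gpt-5.4":     (2.50, 15.00),
    "gpt-5.5":     (5.00, 30.00),
    "gpt-5.5-pro": (30.00, 180.00),
}

def request_cost(model, input_tokens, output_tokens):
    """Dollar cost of one request at the listed per-million-token rates."""
    inp, outp = PRICES[model]
    return (input_tokens * inp + output_tokens * outp) / 1_000_000

# A hypothetical job with 100k tokens in and 20k out costs exactly
# double on GPT-5.5...
old = request_cost("gpt-5.4", 100_000, 20_000)  # $0.55
new = request_cost("gpt-5.5", 100_000, 20_000)  # $1.10
# ...so if GPT-5.5 really finishes the same task in half the tokens,
# the effective cost is a wash.
```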
The “mini” and “nano” tiers seen in the GPT-5.4 era (priced at $0.75 and $0.20 per 1M input tokens respectively) currently have no GPT-5.5 equivalent, though the company notes that GPT-5.5 is rolling out to all subscription tiers, including Plus, Pro, and Enterprise.
Licensing and the ‘cyber-permissive’ frontier
OpenAI’s approach to safety and licensing for GPT-5.5 introduces a novel concept: Trusted Access for Cyber. Because the model is now capable of identifying and patching advanced security vulnerabilities, OpenAI has implemented stricter “cyber-risk classifiers” for general users.
For legitimate security professionals, however, OpenAI is offering a specialized “cyber-permissive” license. This program allows verified defenders—those responsible for critical infrastructure like power grids or water supplies—to use models like GPT-5.4-Cyber or unrestricted versions of GPT-5.5 with fewer refusals for security-related prompts.
This dual-use framework acknowledges that while AI can accelerate cyber defense, it can also be weaponized. Under OpenAI’s Preparedness Framework, GPT-5.5 is classified as “High” risk for biological and cybersecurity capabilities.
To manage this, API deployments currently require different safeguards than the consumer-facing ChatGPT, and OpenAI is working with government partners to ensure these tools are used to strengthen—not undermine—digital resilience.
Initial reactions: losing access feels like having a ‘limb amputated’
The early feedback from power users and engineers suggests that GPT-5.5 has crossed a psychological threshold in AI utility. For developers, the model’s ability to maintain “conceptual clarity” across massive codebases is its standout feature.
“The first coding model I’ve used that has serious conceptual clarity,” noted Dan Shipper, CEO of Every.
Shipper tested the model by asking it to debug a complex system failure that had previously required a team of human engineers to rewrite; GPT-5.5 produced the same fix autonomously. Similarly, Pietro Schirano, CEO of MagicPath, described a “step change” in performance when the model successfully merged a branch with hundreds of refactor changes into a main branch in a single, 20-minute pass.
Perhaps the most visceral reaction came from an anonymous engineer at NVIDIA, who had early access to the model:
“Losing access to GPT-5.5 feels like I’ve had a limb amputated”.
This sentiment is echoed in the scientific community. Derya Unutmaz, a professor at the Jackson Laboratory for Genomic Medicine, used GPT-5.5 Pro to analyze a dataset of 28,000 genes, producing a report in minutes that would have normally taken his team months.
Brandon White, CEO of Axiom Bio, went further, stating that if OpenAI continues this pace, “the foundations of drug discovery will change by the end of the year”.
GPT-5.5 is more than an incremental update; it is a tool designed for a world where humans delegate entire workflows rather than single prompts. While the costs are higher and the safety guardrails tighter, the performance gains in agentic work suggest that AI is finally moving from the chat box and into the operating system.
Perhaps most astonishingly of all, the model is not even nearing the limits of scaling, whereby models are trained on more and more GPUs, according to researchers at the company.
“We actually still have headroom to train significantly smarter models than this,” said OpenAI chief scientist Jakub Pachocki.
The processor’s compute-in-memory architecture departs from the conventional separation between processing and storage. Traditional chips shuttle data back and forth between memory and compute units, a process that consumes both time and energy. Here, computation happens instead directly inside the NOR flash cells themselves, so models run in the same…
The headline spec is the Snapdragon X Elite processor, which Microsoft positions as faster than the MacBook Air M3 for everyday productivity tasks. It sits alongside 32GB of LPDDR5x RAM and a 1TB SSD, a combination that means you are unlikely to feel throttled whether you are running creative applications, video calls, or multiple browser sessions simultaneously.
That performance headroom matters more with this machine than with most, because the Snapdragon X Elite includes an NPU capable of running Copilot Plus features such as Recall, which lets you search your activity history using plain language rather than filing through folders and apps manually.
The 15-inch PixelSense Flow touchscreen produces a native resolution of 2736 by 1824 pixels with HDR support, which gives the display range and contrast that holds up well for anything from editing documents to watching video during a long commute or flight.
Battery life is rated at up to 22 hours based on local video playback, and the chassis weighs 1.66kg, so you are getting a machine that could genuinely replace a bag full of adapters and a portable charger for most travel days.
For someone who wants a large-screen Windows laptop with AI features built in at hardware level rather than bolted on through software, the Surface Laptop at this price represents a meaningful reduction on a machine that originally sat well above the £1,500 mark.
Our experts have tested and ranked the top portable computers across every category in our best laptops 2026 guide, and if you are buying for college or university, our best student laptops 2026 picks are worth a look before you decide.
Meta is planning to cut 10% of its workforce, amounting to 8,000 employees, according to a report from Bloomberg. Meta also will not hire for 6,000 roles that are currently open.
According to an internal memo sent to employees Thursday and viewed by Bloomberg, Meta told staff that the cuts will begin on May 20. Reuters had earlier reported on Meta’s plans for sweeping layoffs.
TechCrunch has reached out to Meta for comment.
“We’re doing this as part of our continued effort to run the company more efficiently and to allow us to offset the other investments we’re making,” chief people officer Janelle Gale told employees, according to the memo. “This is not an easy tradeoff and it will mean letting go of people who have made meaningful contributions to Meta during their time here.”
Meta spent tens of billions on its metaverse efforts, which largely failed. The company has also had to make major investments in its AI efforts in order to keep up with competitors in the space — earlier this month, it debuted a completely overhauled AI product called Muse Spark.
Hackers have compromised Docker images, VSCode and Open VSX extensions for the Checkmarx KICS analysis tool to harvest sensitive data from developer environments.
KICS, short for Keeping Infrastructure as Code Secure, is a free, open-source scanner that helps developers identify security vulnerabilities in source code, dependencies, and configuration files.
The tool is typically run locally via CLI or Docker, and processes sensitive infrastructure configs that often contain credentials, tokens, and internal architecture details.
Dependency security company Socket investigated the incident after receiving an alert from Docker about malicious images pushed to the official checkmarx/kics Docker Hub repository.
The investigation revealed that the compromise extended beyond the trojanized KICS Docker image to VS Code and Open VSX extensions that downloaded a hidden ‘MCP addon’ feature designed to fetch the secret-stealing malware.
Socket found that the ‘MCP addon’ feature downloaded “a multi-stage credential theft and propagation component,” saved as mcpAddon.js, from a hardcoded GitHub URL.
According to the researchers, the malware targets precisely the data processed by KICS, including GitHub tokens, cloud (AWS, Azure, Google Cloud) credentials, npm tokens, SSH keys, Claude configs, and environment variables.
The malware then encrypts the stolen data and exfiltrates it to audit.checkmarx[.]cx, a domain designed to impersonate legitimate Checkmarx infrastructure. Moreover, public GitHub repositories are automatically created for data exfiltration.
Automatically created GitHub repositories Source: Socket
It is important to clarify that Docker tags were temporarily repointed to a malicious digest, so the impact depends on when they were pulled. The dangerous timeframe for the DockerHub KICS image was from 2026-04-22 14:17:59 UTC to 2026-04-22 15:41:31 UTC.
Affected tags have now been restored to their legitimate image digests, and the fake v2.1.21 tag was deleted entirely.
Developers who have downloaded the above should consider their secrets compromised, rotate them as soon as possible, and rebuild their environments from a known safe point.
While the TeamPCP hackers, responsible for the massive Trivy and LiteLLM supply-chain compromise, claimed the attack publicly, the researchers could not find sufficient evidence beyond pattern-based correlations to confidently attribute it.
BleepingComputer has reached out to Checkmarx, an application security testing company, for a statement, but a comment wasn’t immediately available.
Meanwhile, the company published a security bulletin about the incident, assuring users that all malicious artifacts have been removed, and their exposed credentials were revoked and rotated.
The firm is currently investigating with help from external experts and has promised to provide more information as it becomes available.
Users of the compromised tool are recommended to block access to ‘checkmarx[.]cx => 91[.]195[.]240[.]123’ and ‘audit.checkmarx[.]cx => 94[.]154[.]172[.]43,’ use pinned SHAs, revert to known safe versions, and rotate secrets and credentials if compromise is suspected or confirmed.
The latest safe versions of the compromised projects are: DockerHub KICS v2.1.20, Checkmarx ast-github-action v2.3.36, Checkmarx VS Code extensions v2.64.0, and Checkmarx Developer Assist extension v1.18.0.
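The "use pinned SHAs" advice matters here because the attackers repointed mutable tags rather than publishing new ones. A minimal sketch of a pinning check is below; this is not official Checkmarx or Socket tooling, and the digest value is a placeholder, not the real KICS image digest.

```python
import re

# Sketch only: flag Docker image references that use a mutable tag
# instead of a pinned sha256 digest. A tag like :v2.1.20 can be silently
# repointed to a malicious image, as happened in this incident; a digest
# reference (name@sha256:<64 hex chars>) cannot.

DIGEST_RE = re.compile(r"@sha256:[0-9a-f]{64}$")

def is_pinned(image_ref: str) -> bool:
    """True if the image reference is pinned to a sha256 digest."""
    return bool(DIGEST_RE.search(image_ref))

refs = [
    "checkmarx/kics:v2.1.20",             # mutable tag: can be repointed
    "checkmarx/kics@sha256:" + "a" * 64,  # placeholder digest: immutable
]
unpinned = [r for r in refs if not is_pinned(r)]
```

Running a check like this over Dockerfiles and CI configs makes tag-repointing attacks fail loudly instead of silently.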
AI chained four zero-days into one exploit that bypassed both renderer and OS sandboxes. A wave of new exploits is coming.
Two weeks ago, Anthropic announced that its new model, Claude Mythos Preview, can autonomously find and weaponize software vulnerabilities, turning them into working exploits without expert guidance. These were vulnerabilities in key software like operating systems and internet infrastructure that thousands of software developers working on those systems failed to find. This capability will have major security implications, compromising the devices and services we use every day. As a result, Anthropic is not releasing the model to the general public, but instead to a limited number of companies.
The news rocked the internet security community. There were few details in Anthropic’s announcement, angering many observers. Some speculate that Anthropic doesn’t have the GPUs to run the thing, and that cybersecurity was the excuse to limit its release. Others argue Anthropic is holding to their AI safety mission. There’s hype and counter-hype, reality and marketing. It’s a lot to sort out, even if you’re an expert.
We see Mythos as a real but incremental step, one in a long line of incremental steps. But even incremental steps can be important when we look at the big picture.
How AI Is Changing Cybersecurity
We’ve written about Shifting Baseline Syndrome, a phenomenon that leads people—the public and experts alike—to discount massive long-term changes that are hidden in incremental steps. It has happened with online privacy, and it’s happening with AI. Even if the vulnerabilities found by Mythos could have been found using AI models from last month or last year, they couldn’t have been found by AI models from five years ago.
The Mythos announcement reminds us that AI has come a long way in just a few years: The baseline really has shifted. Finding vulnerabilities in source code is the type of task that today’s large language models excel at. Regardless of whether it happened last year or will happen next year, it’s been clear for a while this kind of capability was coming soon. The question is how we adapt to it.
We don’t believe that an AI that can hack autonomously will create permanent asymmetry between offense and defense; it’s likely to be more nuanced than that. Some vulnerabilities can be found, verified, and patched automatically. Some vulnerabilities will be hard to find, but easy to verify and patch—consider generic cloud-hosted web applications built on standard software stacks, where updates can be deployed quickly. Still others will be easy to find (even without powerful AI) and relatively easy to verify, but harder or impossible to patch, such as IoT appliances and industrial equipment that are rarely updated or can’t be easily modified.
Then there are systems whose vulnerabilities will be easy to find in code but difficult to verify in practice. For example, complex distributed systems and cloud platforms can be composed of thousands of interacting services running in parallel, making it difficult to distinguish real vulnerabilities from false positives and to reliably reproduce them.
So we must separate the patchable from the unpatchable, and the easy to verify from the hard to verify. This taxonomy also provides us guidance for how to protect such systems in an era of powerful AI vulnerability-finding tools.
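The taxonomy above can be made concrete with a toy encoding; the property names and mitigation strings below are my own illustrative choices, not any real framework.

```python
# Illustrative only: map the three properties discussed above (easy to
# find / verify / patch) to the defensive posture the essay suggests.

def mitigation(findable: bool, verifiable: bool, patchable: bool) -> str:
    """Suggest a defensive posture for a system under this taxonomy."""
    if not patchable:
        # IoT appliances, industrial equipment: can't fix, so isolate.
        return "wrap behind a restrictive, constantly updated layer"
    if not verifiable:
        # Complex distributed systems: reproduce findings before trusting them.
        return "invest in verification (reproduce before trusting reports)"
    # Easy to verify and patch, e.g. cloud web apps on standard stacks.
    # (Findability mainly affects urgency, not which mitigation applies.)
    return "patch automatically and continuously"
```

The point of the exercise: the right response depends less on whether a flaw can be found (AI increasingly guarantees it will be) than on whether it can be verified and fixed.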
Unpatchable or hard to verify systems should be protected by wrapping them in more restrictive, tightly controlled layers. You want your fridge or thermostat or industrial control system behind a restrictive and constantly-updated firewall, not freely talking to the internet.
Distributed systems that are fundamentally interconnected should be traceable and should follow the principle of least privilege, where each component has only the access it needs. These are bog-standard security ideas that we might have been tempted to throw out in the era of AI, but they’re still as relevant as ever.
Rethinking Software Security Practices
This also raises the salience of best practices in software engineering. Automated, thorough, and continuous testing was always important. Now we can take this practice a step further and use defensive AI agents to test exploits against a real stack, over and over, until the false positives have been weeded out and the real vulnerabilities and fixes are confirmed. This kind of VulnOps is likely to become a standard part of the development process.
Documentation becomes more valuable, as it can guide an AI agent on a bug finding mission just as it does developers. And following standard practices and using standard tools and libraries allows AI and engineers alike to recognize patterns more effectively, even in a world of individual and ephemeral instant software—code that can be generated and deployed on demand.
Will this favor offense or defense? The defense eventually, probably, especially in systems that are easy to patch and verify. Fortunately, that includes our phones, web browsers, and major internet services. But today’s cars, electrical transformers, fridges, and lampposts are connected to the internet. Legacy banking and airline systems are networked.
Not all of those are going to get patched as fast as needed, and we may see a few years of constant hacks until we arrive at a new normal, one where verification is paramount and software is patched continuously.
Rivian has begun production of its R2 SUV. However, you can’t get one just yet: The first customer deliveries (of the most expensive version) aren’t expected until later this spring.
On Wednesday, CEO RJ Scaringe drove the first electric SUV off the production line at the company’s Normal, IL, factory. A storage and logistics building at that factory was damaged by a tornado last weekend, with Wednesday’s rollout event seemingly designed to reassure nervous customers and investors.
“We are really excited to be producing R2 for our customers,” Scaringe is quoted as saying in a news release. However, Rivian CFO Claire McDonough told Reuters that customers won’t be able to configure their vehicle orders until June. Electrek reports that these first units rolling out now are going to Rivian employees.
If you were drawn to the R2’s $45,000 starting price, well, Rivian won’t have any of those for a while. First off the line (this spring) is the Launch Package, starting at $57,990. A Premium trim, expected late 2026, will cost $53,990. Then, in the first half of 2027, a Standard (RWD long range) variant arrives at $48,490. And as for that headline-grabbing $45,000 base-model R2, I hope you like waiting. It won’t be here until late 2027.
The Rivian R2 was revealed in 2024. Smaller and lighter than the flagship R1, it is positioned as the company’s answer to Tesla’s best-selling Model Y. All versions of the new two-row SUV are rated for at least 300 miles per charge, each trim has a native NACS charge port, and the vehicle can charge from 10 percent to 80 percent in under 30 minutes when using a DC fast charger.