There’s something off in the audiophile world right now, and it’s not just coming from Denmark. Between audiophile media excess that feels increasingly detached from reality, a long overdue Qobuz CarPlay update that finally fixes a daily annoyance, and a reminder from Wes Montgomery that timeless music outlasts every format war, this week’s news cuts in a few different directions. Add in the Marantz M1 earning an Editors’ Choice nod for doing the sensible thing exceptionally well, and the picture gets clearer: good engineering and good music still matter more than hype cycles, press junkets, or how many zeros are on the invoice.
This coming weekend marks the beginning of the silly season I mentioned last week. The calendar fills quickly with hi-fi shows that will get covered whether anyone really needs another one or not. FLAX arrives next weekend in Tampa, and the press will enjoy the warmth while it lasts. The Olympics are still underway, which means no games from the Tampa Bay Lightning, still the best show in town. Shows are work, not vacations, and covering them costs money. Airfare, taxis, meals, and the quiet expenses nobody lists on a receipt add up fast.
It is also worth being clear with readers about how this works. Some shows cover hotel costs for media because without coverage there is no visibility, no buzz, and no record of what actually happened. Transparency matters. The media business is under real pressure right now. Publications are shrinking, budgets are tight, and layoffs have been widespread over the past year. Ask the people at the Washington Post, Tech Radar, Digital Trends, Sound & Vision, and others. We have been fortunate to add experienced talent because of that reality, but nobody should assume that publications are rolling in money. Even the biggest names are watching every dollar.
When it comes to press junkets, not everyone gets invited. These trips are usually reserved for high profile journalists from mainstream outlets like Forbes, T3, Wall Street Journal, and the New York Times, along with editors from specialist publications. We are not excluded from that group, which likely reflects our growing influence. I have been invited on overseas trips for product launches, factory tours, listening sessions, luxury car drives, and early looks at new TV technology in Asia, but illness, family emergencies, or other obligations have always gotten in the way. I have never been able to go.
Domestically, the rules are simple. We pay our own way. That has always been policy at eCoustics, with reimbursement handled later. Overseas press junkets are where things start to feel off, when necessary access blurs into hospitality and the line between reporting and obligation gets harder to see. Audio Group Denmark’s recent introduction in Aalborg of its $1.1 million flagship loudspeakers and $115,000 monoblock power amplifiers for a very select group of the press sharpened that concern and has become a topic online in recent days.
When you are flown overseas, wined and dined, there is an unspoken expectation that coverage will reflect the experience. They are hardly alone in this practice, and it says nothing about the quality of what was introduced. By every account I have heard from those who were there, the experience was out-of-body phenomenal. The harder truth is that entry into this level of audio now borders on the absurd. One might need to sell off body parts just to get in the door, and even that feels optimistic given the general condition of most of the audiophile press.
Audiophile Excess Runs Wild in Denmark
Back in October at T.H.E. Show New York, which was held in New Jersey despite the branding gymnastics, I had my first real exposure to Audio Group Denmark. Calling it New York clearly sounds better on a banner, even if the venue landed nowhere near the part of the Garden State where I actually live. Still, it was enough to make one thing clear: Danish high-end audio is having a moment, and it is not subtle.
That moment extends well beyond Audio Group Denmark. Denmark has been quietly exporting serious audio thinking for decades, with brands like Gryphon, Dynaudio, Buchardt, DALI, Bang & Olufsen, Audiovector, Lyngdorf, Ortofon, and Raidho all contributing to Denmark’s oversized footprint in the high end. Different philosophies, different price brackets, same national tendency to push engineering harder than the market sometimes expects.
Audio Group Denmark sits firmly in that conversation but plays its own game. Its core brands, Ansuz, Børresen, and Aavik, were out in force, supported by their North American team and HiFi Loft, their dealer with locations on West 44th Street in Manhattan and in Glens Falls, just north of Saratoga Springs and not far from Lake George. It is a part of upstate New York where the term summer home tends to mean something very specific and very expensive.
What stood out was not just the technical ambition on display, but the pricing ambition as well. Danish brands across the board are pushing boundaries right now, both in how far they are willing to go technologically and how unapologetic they are about cost. Audio Group Denmark, in particular, has no interest in playing it safe. My first real exposure to them will not be my last. That was clear before I left the room.
Anyone thinking about a system designed to stay under $30,000 should stop reading now. Even a modest configuration built around their stand mount speakers, an integrated amplifier with streaming, and the required cabling clears that threshold quickly, before analog sources or outboard stages even enter the conversation.
At T.H.E. Show New York 2025, the two Danish systems on display occupied a very different financial lane, landing between $90,000 and $360,000 USD. Those figures are real. From a listening standpoint, the lower cost $90,000 system was far more compelling to me, but both already lived well beyond what most listeners would consider attainable.
2026 flagship Aavik components powering the system, including M-880 amps
What was introduced last week, however, makes those show systems look almost entry-level. When you factor in the Børresen M8 Gold Signature loudspeakers at roughly $1.15 million per pair and the Aavik M-880 monoblock amplifiers at $115,000 each, the scale shifts entirely. These are not conceptual exercises or dressed up prototypes.
The Aavik M-880 uses a reworked Class A amplification stage that maintains its bias point 0.63 volts above the level the output current requires at all times. The goal is continuous Class A operation regardless of load or signal conditions, while keeping operating temperatures lower than traditional Class A designs to improve long-term stability and reliability, which is a good plan when you consider the “rated” power output and size of these amplifiers.
Aavik M-880 Amplifier
Power delivery is equally unapologetic. Each M-880 is rated at 400 watts into 8 ohms, 800 watts into 4 ohms, and approximately 1,300 watts into 2 ohms. Add sources, cabling, and the supporting ecosystem that inevitably comes with systems at this level, and the total system cost very likely approaches $2 million.
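Those ratings are telling in their own right. An amplifier whose output power roughly doubles every time the load impedance halves is behaving like a near-ideal voltage source, since power into a resistive load follows P = V²/R with the output voltage held constant. A quick back-of-the-envelope check (my arithmetic, not Aavik’s published figures):

```latex
% Implied output voltage from the 8-ohm rating:
\[ V = \sqrt{P \cdot R} = \sqrt{400\,\mathrm{W} \times 8\,\Omega} \approx 56.6\,\mathrm{V} \]
% Holding that voltage constant into lower impedances:
\[ P_{4\,\Omega} = \frac{(56.6\,\mathrm{V})^2}{4\,\Omega} \approx 800\,\mathrm{W},
   \qquad
   P_{2\,\Omega}^{\mathrm{ideal}} = \frac{(56.6\,\mathrm{V})^2}{2\,\Omega} \approx 1600\,\mathrm{W} \]
```

The quoted 1,300 watts into 2 ohms, short of the ideal 1,600, is where real-world current delivery finally taps out, which is typical of even the stiffest designs.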
The Aavik M-880 mono amplifier measures 794.02 mm high, 342.00 mm wide, and 509.68 mm deep, which translates to 31.26 inches in height, 13.46 inches in width, and 20.07 inches in depth. Each amplifier weighs 70.0 kilograms, or 154.3 pounds.
The Gold Standard?
Børresen M8 Gold Signature Loudspeaker
At the heart of the Børresen M8 Gold Signature is a folded dipole bass architecture that defines both its scale and its intent. Each loudspeaker uses two dedicated bass modules populated by twelve 8-inch drivers, firing forward and backward in opposing polarity. The idea is not brute force but control, managing low frequency energy before the room gets a chance to do what rooms usually do.
Every pair is built and calibrated in Denmark, with final measurements and listening sessions completed before the speakers leave the factory. The look is unapologetically serious: black high gloss lacquer, carbon accents, and zero attempt to disguise the mass.
Audio Group Denmark co-founders Michael Børresen (left) and Lars Kristensen (right) standing in front of the M8 Gold Signature loudspeakers.
That mass is substantial. Each speaker stands just over 87 inches tall, spans roughly 25 inches in width, and reaches more than 32 inches deep. At 325 kilograms per cabinet, or about 716.5 pounds, placement is a commitment, not a casual decision. The specified frequency range stretches from 20 Hz to 50 kHz, with a sensitivity rating of 87 dB.
The system is effectively tri-sectional. Bass impedance is rated at 5 ohms, while the midrange and treble sections sit at 8 ohms, with each section requiring more than 100 watts of amplification.
The crossover between mid bass and tweeter is set at 2,400 Hz, while bass integration is handled externally via an active crossover that is not included. High frequencies are delivered by Børresen’s RP94 Gold Signature ribbon planar tweeter, supported by two IronFree5 Gold Signature drivers for midrange and upper bass duties, while twelve IronFree8 Gold Signature drivers handle the low end.
This is not a loudspeaker designed to coexist quietly in a room. The fact that it was demonstrated in an auditorium sized performance hall, elevated on a stage, says a lot about the assumptions baked into the design. Context matters here. These are loudspeakers that expect space, structural support, and a listening environment that can accommodate their scale and output without compromise.
We shall miss the children.
Craft Recordings Revives Wes Montgomery’s Full House for the OJC Series
This Craft Recordings OJC pressing of Full House ($38.98 at Amazon) is all-analog from the original tapes, cut by Kevin Gray at Cohearent Audio and pressed on 180-gram vinyl at RTI. A 24-bit/192kHz high-resolution digital edition is available for those who want it. Recorded live on June 25, 1962 at Tsubo in Berkeley, the album captures Wes Montgomery at a point where restraint and intensity exist side by side. He can sound smooth and measured one moment, then suddenly lean in hard enough to make you sit up and pay attention.
Johnny Griffin is on tenor sax, backed by the Wynton Kelly Trio (Wynton Kelly, Paul Chambers, and Jimmy Cobb), all fresh from their time with Miles Davis and fully locked in. The pressing itself is clean and well executed, with excellent clarity through the guitar and horns and a sense of presence that feels natural rather than hyped. It is the kind of record that makes you wish you had been in the room that night, even if only for a set.
An audiophile once told me, back in my twenties, that Wes Montgomery was mostly hype and not all that impressive. This came from the same guy who shushed me so we could sit through yet another Eagles demo on speakers neither of us could afford. I left the show, walked into Sam the Record Man, bought two Wes Montgomery records, and learned something useful very quickly. Some audiophiles know as little about jazz guitar as I know about the inner workings of nuclear propulsion, which is saying something considering my college roommate went on to become a USN captain running submarines and carriers.
Wes Montgomery was not hype. He was about feel, timing, touch, and control, with the ability to shift from calm to confrontation without losing the thread. Records like Full House make that obvious within minutes. Call it whatever you want, but the playing still holds up, and it still exposes bad takes just as efficiently as it did back then.
Marantz M1 Streaming Amplifier Is Hiding in Plain Sight
The Marantz M1 was released well over a year ago, but in a category that moves quickly, time can be useful. With so many network amplifiers competing on features alone, it is easy to miss products that take a more measured approach. The M1 does not try to dominate on paper. It focuses on stable performance, sensible design choices, and an emphasis on sound quality over spectacle.
The M1 is rated at 100 watts per channel with a specified distortion figure of 0.005 percent THD. It includes HDMI eARC for television integration and provides a dedicated subwoofer output with adjustable crossover points and a ±15 dB level trim. That allows for proper configuration of a 2.1 system rather than a fixed, one-size-fits-all approach. The amplifier operates fully in the digital domain and supports high-resolution PCM up to 24-bit/192 kHz as well as DSD playback.
Streaming and connectivity are well covered. Bluetooth, Spotify Connect, Qobuz Connect, AirPlay 2, and HEOS are all supported, with HEOS also enabling multi room playback and integration with control systems such as Control4, URC, and Crestron. There is no built in phono stage, so analog playback requires an external solution.
If you use Qobuz at home, great. If you use it in the car through Apple CarPlay, the experience until now has been less convincing. Scrolling through playlists while driving was awkward, the interface was not doing anyone any favors, and asking Siri to find a specific track or playlist went nowhere. That is the kind of thing that earns looks from the passenger seat that suggest you should keep both hands on the wheel.
For anyone who spends real time behind the wheel, those small frustrations add up. I average 30,000 to 40,000 miles a year, and there are only so many times you can give up and start jabbing at the dashboard while the NHL Network blares on SiriusXM before it becomes a pattern. The latest Qobuz CarPlay update tackles those pain points in a practical way, improving day to day usability and finally making Siri a functional part of the experience. It does not reinvent in car listening, but it makes Qobuz far more livable where many of us use it the most.
So what did Qobuz actually change, and why does it matter? The CarPlay experience has been rebuilt from the ground up, with a cleaner interface and features that users have been asking for since CarPlay support first arrived. The biggest day-to-day fix is simple but overdue: shuffle is now available directly from the player, exactly where it should have been all along.
Just as important, Siri finally works the way it should. You can now search, browse, and control playback entirely by voice without poking at the screen. That includes asking Siri to play a specific playlist, artist, or favorite track, turning shuffle or repeat on and off, adding the current song to a playlist or your library, and even asking what is currently playing. The full Discover experience is also available in CarPlay, including personalized playlists, Release Watch, and Radio, all accessible safely while driving.
It is also a cosmetic update, and that part matters more than it sounds. You can now actually see things you could not before, with a cleaner layout that makes sense at a glance. Scrolling through your own playlists or Qobuz’s curated ones no longer frustrates, and discovery is finally usable on a CarPlay screen. The interface is clearer, more logical, and far easier to navigate, unlike the backseat of my car, which remains a lost cause thanks to kids and a dog.
More importantly, this cleanup makes Qobuz’s strengths visible. Hi-res playlists and editorial content are no longer buried or awkward to access, which means the stuff audio dorks and editors actually care about is front and center where it belongs. It does not just look better. It makes the service easier to live with, especially if you spend serious time behind the wheel.
David Solomon can relax. The Facebook messages will stop. Qobuz finally fixed what needed fixing, and for those of us who live in the car as much as the listening room, that actually matters. Long live Qobuz.
Drift Protocol confirms $280 million crypto theft via sophisticated attack abusing durable nonces
Hackers hijacked Security Council powers through misrepresented transaction approvals and social engineering
Deposits in borrow/lend, vaults, and trading affected; incident marks largest crypto heist of 2026 so far
Decentralized cryptocurrency exchange Drift has confirmed suffering a cyberattack in which threat actors stole hundreds of millions of dollars worth of tokens.
On April 1, 2026, Drift Protocol posted on X, saying it was “experiencing an active attack” and that all deposits and withdrawals were suspended as a result.
“This is not an April Fools joke,” the maintainers tweeted. “We are coordinating with multiple security firms, bridges, and exchanges to contain the incident.”
Highly sophisticated attack
Soon after, an update was posted, explaining that a malicious actor was able to access the protocol “through a novel attack involving durable nonces,” resulting in a “rapid takeover of Drift’s Security Council administrative powers.”
The Security Council is a governance and safety mechanism designed to act quickly in emergencies, without waiting for a full DAO vote. It is a small, trusted group (usually multisig signers) within the protocol’s governance structure that holds limited, fast-track powers. Ironically enough, the Security Council was supposed to prevent attacks like this one.
Drift says the attack was a “highly sophisticated operation that appears to have involved multi-week preparation and staged execution”.
It was not a bug, and no seed phrases were compromised. Instead, the attack involved “unauthorized or misrepresented transaction approvals obtained prior to execution, likely facilitated through durable nonce mechanisms and sophisticated social engineering.”
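For readers unfamiliar with the mechanism, durable nonces are a standard feature of Solana, the chain Drift runs on: instead of referencing a recent blockhash that expires within a minute or two, a transaction can reference a nonce value stored on-chain, which lets it be signed long before it is submitted. That is what makes “approvals obtained prior to execution” possible. Below is a minimal sketch of how such a transaction is assembled with the @solana/web3.js library; the accounts and amounts are hypothetical, and this is illustrative only, not Drift’s code or the attacker’s.

```typescript
// Minimal durable-nonce transaction sketch using @solana/web3.js.
// A transaction built this way stays valid until its nonce is advanced,
// so a signature collected today can be executed much later.
import {
  Connection,
  Keypair,
  NonceAccount,
  PublicKey,
  SystemProgram,
  Transaction,
} from "@solana/web3.js";

async function buildDurableNonceTx(
  connection: Connection,
  noncePubkey: PublicKey,   // a pre-created nonce account (assumed to exist)
  nonceAuthority: Keypair,  // the authority allowed to advance the nonce
  payer: Keypair,
  recipient: PublicKey,
): Promise<Transaction> {
  // Read the nonce value stored on-chain; it stands in for the usual
  // recentBlockhash, so the signed transaction never expires on its own.
  const info = await connection.getAccountInfo(noncePubkey);
  if (!info) throw new Error("nonce account not found");
  const nonceAccount = NonceAccount.fromAccountData(info.data);

  const tx = new Transaction({
    feePayer: payer.publicKey,
    nonceInfo: {
      nonce: nonceAccount.nonce,
      // The first instruction must advance (consume) the nonce.
      nonceInstruction: SystemProgram.nonceAdvance({
        noncePubkey,
        authorizedPubkey: nonceAuthority.publicKey,
      }),
    },
  });

  // The payload could be anything; a simple transfer stands in here.
  tx.add(
    SystemProgram.transfer({
      fromPubkey: payer.publicKey,
      toPubkey: recipient,
      lamports: 1_000_000, // 0.001 SOL, placeholder amount
    }),
  );

  // Once signed, this transaction can sit dormant until someone submits it;
  // a signer may not realize that what they approved will run much later.
  tx.sign(payer, nonceAuthority);
  return tx;
}
```

That delayed-execution property is exactly what makes the technique attractive for social engineering: the approval a signer grants in the moment does not have to be executed in the moment.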
At press time, no one had claimed responsibility for the attack, but Drift said roughly $280 million was withdrawn from the protocol. North Korean state-sponsored groups, Lazarus and the various Chollima clusters (Labyrinth, Pressure, Golden) among them, are usually the ones tasked with stealing cryptocurrency from organizations in the West. The country uses the stolen money to fund its government apparatus and its weapons programme, some researchers claim.
All borrow/lend deposits, vault deposits, and funds deposited for trading are affected, Drift confirmed. This is now one of the largest crypto heists ever, and the largest so far this year.
Microsoft on Wednesday launched three new foundational AI models it built entirely in-house — a state-of-the-art speech transcription system, a voice generation engine, and an upgraded image creator — marking the most concrete evidence yet that the $3 trillion software giant intends to compete directly with OpenAI, Google, and other frontier labs on model development, not just distribution.
The trio of models — MAI-Transcribe-1, MAI-Voice-1, and MAI-Image-2 — are available immediately through Microsoft Foundry and a new MAI Playground. They span three of the most commercially valuable modalities in enterprise AI: converting speech to text, generating realistic human voice, and creating images. Together, they represent the opening salvo from Microsoft’s superintelligence team, which Microsoft AI CEO Mustafa Suleyman formed just six months ago to pursue what he calls “AI self-sufficiency.”
“I’m very excited that we’ve now got the first models out, which are the very best in the world for transcription,” Suleyman told VentureBeat in an exclusive interview ahead of the launch. “Not only that, we’re able to deliver the model with half the GPUs of the state-of-the-art competition.”
The announcement lands at a precarious moment for Microsoft. The company’s stock just closed its worst quarter since the 2008 financial crisis, as investors increasingly demand proof that hundreds of billions of dollars in AI infrastructure spending will translate into revenue. These models — priced aggressively and positioned to reduce Microsoft’s own cost of goods sold — are Suleyman’s first answer to that pressure.
Microsoft’s new transcription model claims best-in-class accuracy across 25 languages
MAI-Transcribe-1 is the headline release. The speech-to-text model achieves the lowest average Word Error Rate on the FLEURS benchmark — the industry-standard multilingual test — across the top 25 languages by Microsoft product usage, averaging 3.8% WER. According to Microsoft’s benchmarks, it beats OpenAI’s Whisper-large-v3 on all 25 languages, Google’s Gemini 3.1 Flash on 22 of 25, and ElevenLabs’ Scribe v2 and OpenAI’s GPT-Transcribe on 15 of 25 each.
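For readers who have not worked with speech benchmarks, WER is just a word-level edit distance: the minimum number of substitutions, insertions, and deletions needed to turn the model’s transcript into the reference, divided by the reference length. A minimal sketch of the metric (my own illustration, not Microsoft’s evaluation code):

```typescript
// Word Error Rate: word-level Levenshtein distance between a reference
// transcript and a hypothesis, normalized by the reference word count.
function wordErrorRate(reference: string, hypothesis: string): number {
  const ref = reference.trim().split(/\s+/);
  const hyp = hypothesis.trim().split(/\s+/);
  // Dynamic-programming edit distance over words
  // (substitutions, insertions, and deletions each cost 1).
  const d: number[][] = Array.from({ length: ref.length + 1 }, (_, i) =>
    Array.from({ length: hyp.length + 1 }, (_, j) =>
      i === 0 ? j : j === 0 ? i : 0,
    ),
  );
  for (let i = 1; i <= ref.length; i++) {
    for (let j = 1; j <= hyp.length; j++) {
      const cost = ref[i - 1] === hyp[j - 1] ? 0 : 1;
      d[i][j] = Math.min(
        d[i - 1][j] + 1,        // deletion
        d[i][j - 1] + 1,        // insertion
        d[i - 1][j - 1] + cost, // substitution or match
      );
    }
  }
  return d[ref.length][hyp.length] / ref.length;
}

// One substituted word out of five reference words gives 20% WER;
// MAI-Transcribe-1's reported 3.8% average is roughly 4 errors per 100 words.
console.log(wordErrorRate("the quick brown fox jumps", "the quick brown box jumps")); // 0.2
```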
The model uses a transformer-based text decoder with a bi-directional audio encoder. It accepts MP3, WAV, and FLAC files up to 200MB, and Microsoft says its batch transcription speed is 2.5 times faster than the existing Microsoft Azure Fast offering. Diarization, contextual biasing, and streaming are listed as “coming soon.” Microsoft is already testing MAI-Transcribe-1 inside Copilot’s Voice mode and Microsoft Teams for conversation transcription — a detail that underscores how quickly the company intends to replace third-party or older internal models with its own.
Alongside it, MAI-Voice-1 is Microsoft’s text-to-speech model, capable of generating 60 seconds of natural-sounding audio in a single second. The model preserves speaker identity across long-form content and now supports custom voice creation from just a few seconds of audio through Microsoft Foundry. Microsoft is pricing it at $22 per 1 million characters. MAI-Image-2, meanwhile, debuted as a top-three model family on the Arena.ai leaderboard and now delivers at least 2x faster generation times on Foundry and Copilot compared to its predecessor. Microsoft is rolling it out across Bing and PowerPoint, pricing it at $5 per 1 million tokens for text input and $33 per 1 million tokens for image output. WPP, one of the world’s largest advertising holding companies, is among the first enterprise partners building with MAI-Image-2 at scale.
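For scale, a quick back-of-the-envelope on that voice pricing (my arithmetic, assuming roughly six characters per English word including spaces; not a Microsoft figure):

```latex
% A 10,000-word script is roughly 60,000 characters:
\[ 10{,}000 \ \text{words} \times 6 \ \tfrac{\text{chars}}{\text{word}} \approx 60{,}000 \ \text{characters} \]
% At \$22 per million characters, that narration costs about \$1.32:
\[ \frac{60{,}000}{1{,}000{,}000} \times \$22 \approx \$1.32 \]
```

Whatever the exact word-to-character ratio, the point stands: long-form voice generation at this rate is priced as a commodity, not a premium feature.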
The contract renegotiation with OpenAI that made Microsoft’s model ambitions possible
To understand why these models matter, you have to understand the contractual tectonic shift that made them possible. Until October 2025, Microsoft was contractually prohibited from independently pursuing artificial general intelligence. The original deal with OpenAI, signed in 2019, gave Microsoft a license to OpenAI’s models in exchange for building the cloud infrastructure OpenAI needed. But when OpenAI sought to expand its compute footprint beyond Microsoft — striking deals with SoftBank and others — Microsoft renegotiated. As Suleyman explained in a December 2025 interview with Bloomberg, the revised agreement meant that “up until a few weeks ago, Microsoft was not allowed — by contract — to pursue artificial general intelligence or superintelligence independently.” The new terms freed Microsoft to build its own frontier models while retaining license rights to everything OpenAI builds through 2032.
Suleyman described the dynamic to VentureBeat in characteristically blunt terms. “Back in September of last year, we renegotiated the contract with OpenAI, and that enabled us to independently pursue our own superintelligence,” he said. “Since then, we’ve been convening the compute and the team and buying up the data that we need.”
He was quick to emphasize that the OpenAI partnership remains intact. “Nothing’s changing with the OpenAI partnership. We will be in partnership with them at least until 2032 and hopefully a lot longer,” Suleyman said. “They have been a phenomenal partner to us.” He also highlighted that Microsoft provides access to Anthropic’s Claude through its Foundry API, framing the company as “a platform of platforms.” But the subtext is unmistakable: Microsoft is building the capability to stand on its own. In March, as Business Insider first reported, Suleyman wrote in an internal memo that his goal is to “focus all my energy on our Superintelligence efforts and be able to deliver world class models for Microsoft over the next 5 years.” CNBC reported that the structural shift freed Suleyman from day-to-day Copilot product responsibilities, with former Snap executive Jacob Andreou taking over as EVP of the combined consumer and commercial Copilot experience.
How teams of fewer than 10 engineers built models that rival Big Tech’s best
Perhaps the most striking detail Suleyman shared with VentureBeat is how small the teams behind these models actually are. “The audio model was built by 10 people, and the vast majority of the speed, efficiency and accuracy gains come from the model architecture and the data that we have used,” Suleyman said. “My philosophy has always been that we need fewer people who are more empowered. So we operate an extremely flat structure.” He added: “Our image team, equally, is less than 10 people. So this is all about model and data innovation, which has delivered state of the art performance.”
This matters for two reasons. First, it challenges the prevailing industry narrative that frontier AI development requires thousands of researchers and billions in headcount costs. Meta, by contrast, has pursued what Suleyman described in his Bloomberg interview as a strategy of “hiring a lot of individuals, rather than maybe creating a team” — including reported compensation packages of $100 million to $200 million for top researchers. Second, small teams producing state-of-the-art results dramatically improve the economics. If Microsoft can build best-in-class transcription with 10 engineers and half the GPUs of competitors, the margin structure of its AI business looks fundamentally different from companies burning through cash to achieve similar benchmarks.
The lean-team philosophy also echoes Suleyman’s broader views on how AI is already reshaping the work of building AI itself. When asked by VentureBeat how his own team works, Suleyman described an environment that resembles a startup trading floor more than a traditional Microsoft engineering org. “There are groups of people around round tables, circular tables, not traditional desks, on laptops instead of big screens,” he said. “They’re basically vibe coding, side by side all day, morning till night, in rooms of 50 or 60 people.”
Why Suleyman’s “humanist AI” pitch is aimed squarely at enterprise buyers
Suleyman has been steadily building a philosophical brand around Microsoft’s AI efforts that he calls “humanist AI” — a term that appeared prominently in the blog post he authored for the launch and that he elaborated on in our interview. “I think that the motivation of a humanist super intelligence is to create something that is truly in service of humanity,” he told VentureBeat. “Humans will remain in control at the top of the food chain, and they will be always aligned to human interests.”
The framing serves multiple purposes. It differentiates Microsoft from the more acceleration-oriented rhetoric coming from OpenAI and Meta. It resonates with enterprise buyers who need governance, compliance, and safety assurances before deploying AI in regulated industries. And it provides a narrative hedge: if something goes wrong in the broader AI ecosystem, Microsoft can point to its stated commitment to human control. In his December Bloomberg interview, Suleyman went further, describing containment and alignment as “red lines” and arguing that no one should release a superintelligence tool until they are “confident it can be controlled.”
Suleyman also stressed data provenance as a competitive advantage, describing a conversation with CEO Satya Nadella about developing “a clean lineage of models where the data is extremely clean.” He drew an implicit contrast with open-source alternatives, noting that “many of the open-source models have been trained on data in, let’s say, inappropriate ways. And there are potentially security issues with that.” For enterprise customers evaluating AI vendors amid a thicket of copyright lawsuits across the industry, that is a meaningful commercial argument — if Microsoft can credibly claim that its training data was acquired through properly licensed channels, it reduces the legal and reputational risk of deploying these models in production.
Microsoft’s aggressive pricing puts pressure on Amazon, Google, and the AI startup ecosystem
Today’s launch positions Microsoft on three competitive fronts simultaneously. MAI-Transcribe-1 directly targets the transcription workloads that OpenAI’s Whisper models have dominated in the open-source community, with Microsoft claiming superior accuracy on all 25 benchmarked languages. The FLEURS results also show it winning against Google’s Gemini 3.1 Flash Lite on 22 of 25 languages — a direct challenge as Google aggressively pushes Gemini across its own product suite. And MAI-Voice-1’s ability to clone voices from seconds of audio and generate speech at 60x real-time puts it in competition with ElevenLabs, Resemble AI, and the growing ecosystem of voice AI startups, with Microsoft’s distribution advantage — any Foundry developer can now access these capabilities through the same API they use for GPT-4 and Claude — acting as a powerful moat.
Suleyman framed the competitive position confidently: “We’re now a top three lab just under OpenAI and Gemini,” he told VentureBeat. The pricing strategy — MAI-Voice-1 at $22 per million characters, MAI-Image-2 at $5 per million input tokens — reflects a deliberate decision to compete on cost. “We’re pricing them to be the very best of any hyperscaler. So there will be the cheapest of any of the hyperscalers out there, Amazon. And obviously Google,” Suleyman said. “And that’s a very conscious decision.”
This makes strategic sense for Microsoft, which can amortize model development costs across its enormous installed base of enterprise customers. But it also speaks to the question investors have been asking with increasing urgency: when does AI spending start generating returns? Microsoft’s stock has fallen roughly 17% year-to-date, according to CNBC, part of a broader selloff in software stocks. By building models that run on half the GPUs of competitors, Microsoft reduces its own infrastructure costs for internal products — Teams, Copilot, Bing, PowerPoint — while offering developers pricing designed to undercut the rest of the market. In his March memo, Suleyman wrote that his models would “enable us to deliver the COGS efficiencies necessary to be able to serve AI workloads at the immense scale required in the coming years.” These three models are the first tangible delivery on that promise.
Suleyman says a frontier large language model is coming — and Microsoft plans to be “completely independent”
Suleyman made clear that transcription, voice, and image generation are just the beginning. When asked whether Microsoft would build a large language model to compete directly with GPT at the frontier level, he was unequivocal. “We absolutely are going to be delivering state of the art models across all modalities,” he said. “Our mission is to make sure that if Microsoft ever needs it, we will be able to provide state of the art at the best efficiency, the cheapest price, and be completely independent.”
He described a multi-year roadmap to “set up the GPU clusters at the appropriate scale,” noting that the superintelligence team was formally stood up only in October 2025. Suleyman spoke to VentureBeat from Miami, where the full team was convening for one of its regular week-long in-person sessions. He described Nadella flying in for the gathering to lay out “the roadmap of everything that we need to achieve for our AI self-sufficiency mission over the next 2, 3, 4 years, and all the compute roadmap that that would involve.”
Building a competitive frontier LLM, of course, is a different order of magnitude in complexity, data requirements, and compute cost from what Microsoft demonstrated Wednesday. The models launched today are specialized — they handle audio and images, not the general reasoning and text generation that underpin products like ChatGPT or Copilot’s core intelligence. Suleyman has the organizational mandate, Nadella’s public backing, and the contractual freedom. What he doesn’t yet have is a track record at Microsoft of delivering on the hardest problem in AI.
But consider what he does have: three models that are best-in-class or near it in their respective domains, built by teams smaller than most seed-stage startups, running on half the industry-standard GPU footprint, and priced below every major cloud competitor. Two years ago, Suleyman proposed in MIT Technology Review what he called the “Modern Turing Test” — not whether AI could fool a human in conversation, but whether it could go out into the world and accomplish real economic tasks with minimal oversight. On Wednesday, his own models took a step toward that vision. The question now is whether Microsoft’s superintelligence team can repeat the trick at the scale that actually matters — and whether they can do it before the market’s patience runs out.
Sony has confirmed that PlayStation console prices will increase globally starting April 2, 2026, affecting several models across major regions and making current PlayStation deals potentially some of the last opportunities to buy the consoles at existing retail prices.
With prices rising across the US, UK, Europe and Japan, current deals on PlayStation consoles are likely to become more appealing for buyers who want to enter the PlayStation ecosystem before retailers begin reflecting the higher official prices.
Below are some of the best PlayStation deals currently available, covering the PS5 Pro, the standard PS5 console, and the Digital Edition, each offering slightly different benefits depending on how you prefer to play.
PlayStation 5 Pro
The PlayStation 5 Pro represents the most powerful console in the PlayStation lineup and targets players who want the best graphics performance possible from Sony’s current hardware generation.
Following Sony’s pricing update, the PS5 Pro now carries a recommended retail price of $899.99 in the United States, £789.99 in the UK, and €899.99 in Europe, making deals on this premium model particularly valuable before retailers adjust their listings.
The console focuses on enhanced visual performance, improved ray tracing capabilities, and higher-resolution gaming output that aims to take fuller advantage of modern 4K televisions and high-refresh-rate displays.
For players who want the most future-proof PlayStation console, the PS5 Pro offers the strongest hardware platform available right now, making it a compelling option for demanding titles and visually intensive games.
PlayStation 5
The standard PlayStation 5 remains the most versatile option in the lineup because it includes a built-in disc drive that allows players to run both physical and digital games.
Under Sony’s updated pricing structure, the standard PS5 now sits at $649.99 in the US, £569.99 in the UK, and €649.99 across Europe, increasing the appeal of any retailer discounts that still reflect earlier pricing.
That flexibility makes it especially attractive for players who already own physical PlayStation game collections or who prefer buying discs that can be resold, traded, or shared between consoles.
The standard PS5 also continues to deliver strong performance across the current generation of games, supporting 4K output, fast loading through Sony’s SSD architecture, and access to the full PlayStation ecosystem.
PlayStation 5 Digital Edition
The PlayStation 5 Digital Edition offers the same core gaming performance as the standard PS5 but removes the disc drive in favour of a fully digital gaming experience.
Sony’s updated pricing places the Digital Edition at $599.99 in the US, £519.99 in the UK, and €599.99 in Europe, which keeps it as the most affordable entry point into the PlayStation console lineup.
This approach suits players who buy their games directly through the PlayStation Store and prefer the convenience of maintaining a digital library that can be downloaded instantly across multiple devices.
Because the Digital Edition typically carries a lower retail price than the disc version, it often represents the most accessible way to step into the PlayStation platform while still delivering the same gaming capabilities.
With Sony confirming global price increases across the PlayStation lineup starting April 2, current PlayStation console deals may become harder to find once retailers begin adjusting prices to match the updated recommended retail values.
Paris-based Omniscient ingests 100,000+ sources (press, social, web, video, audio, internal pipelines) and synthesises them into a two-minute executive briefing. Renault is an early client. A global syndicate spanning France, Japan, and the US backed the round.
Omniscient, the Paris-based decision intelligence platform built for boards and senior executives, has raised $4.1 million in pre-seed funding led by Seedcamp.
Additional investors include Drysdale, Plug and Play, MS&AD, Raise, Anamcara, and xdeck, with Bpifrance also participating. The company was co-founded by Arnaud d’Estienne, who serves as CEO, and Mehdi Benseghir, both formerly of McKinsey.
The problem Omniscient is addressing is specific: large organisations manage more than 150 disparate intelligence platforms, each covering a different channel, geography, or function, with no single view of what matters.
Communications and intelligence teams are built to react to crises rather than anticipate them. By the time a significant signal surfaces through manual monitoring, the moment for proactive response has often passed.
Corporate reputation represents an average of approximately 30% of market capitalisation for the world’s largest listed companies, according to widely cited research.
A signal missed hours too late can mean billions wiped from market value before a communications team has even convened.
Omniscient’s platform ingests data from more than 100,000 sources across press, social media, web, video, audio, and internal pipelines, then synthesises that into a two-minute executive briefing updated in real time.
At the core is a proprietary architecture of specialist AI agents, each covering a defined domain (stories, regulation, supply chain, competition), that feed into a unified management cockpit.
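Omniscient has not published implementation details, so treat the following as a purely illustrative sketch of the fan-in pattern the company describes: independent domain agents scan the same source feed, and a synthesis step condenses their findings into one short briefing. Every name here is hypothetical.

```typescript
// Illustrative fan-in pipeline: domain agents -> one executive briefing.
// Hypothetical types and names; not Omniscient's proprietary architecture.
type Finding = { domain: string; headline: string; urgency: number };

interface DomainAgent {
  domain: string; // e.g. "regulation", "supply chain", "competition"
  scan(sources: string[]): Promise<Finding[]>;
}

async function buildBriefing(
  agents: DomainAgent[],
  sources: string[],
): Promise<string> {
  // Run every domain agent in parallel over the shared source feed.
  const findings = (
    await Promise.all(agents.map((agent) => agent.scan(sources)))
  ).flat();

  // Keep only the most urgent items, capped to a two-minute read.
  return findings
    .sort((a, b) => b.urgency - a.urgency)
    .slice(0, 10)
    .map((f) => `[${f.domain}] ${f.headline}`)
    .join("\n");
}
```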
The platform is designed for C-level users rather than analysts: no manual configuration, natural language interaction throughout, and a system that grows more attuned to an organisation’s priorities with use.
Renault is named as an early client. The company claims its AI-native approach is 50 times faster than legacy manual monitoring workflows, a benchmark derived from its own assessments.
The funding will go to engineering hires, product development, and commercial rollout. The roadmap extends into predictive analytics: the platform aims to tell organisations not just what is happening but what is likely to happen next and what to do about it, drawing on historical precedent, competitor behaviour, and real-time signal patterns.
Sia Houchangnia, Partner at Seedcamp, described Omniscient as “technically differentiated and commercially validated from day one,” pointing to the calibre of early design partners as the signal.
The investor syndicate spans France, Japan, and the United States, with Bpifrance’s involvement adding a French state-backed dimension to a round that is otherwise built around global fintech and deep tech specialist investors.
If there’s anything that makes people more uncomfortable than highly advanced AI or nuclear weapons technology, it’s the combination of the two. But there’s been a symbiotic relationship between cutting-edge computing and America’s nuclear weapons program since the very beginning.
In the fall of 1943, Nicholas Metropolis and Richard Feynman, two physicists working on the top-secret atomic bomb project at Los Alamos, decided to set up a contest between humans and machines.
In the early days of the Manhattan Project, the only “computers” on site were humans, many of them the wives of scientists working on the project, performing thousands of equations on bulky analog desk calculators. It was painstaking and exhausting work, and the calculators were constantly breaking down under the demands of the lab, so the researchers began to experiment with using IBM punch-card machines — the cutting edge of computer technology at the time. Metropolis and Feynman set up a trial, giving the IBMs and the human computers the same complex problem to solve.
As the Los Alamos physicist Herbert Anderson later recalled, “For the first two days the two teams were neck and neck — the hand-calculators were very good. But it turned out that they tired and couldn’t keep up their fast pace. The punched-card machines didn’t tire, and in the next day or two they forged ahead. Finally everyone had to concede that the new system was an improvement.”
Today, at Los Alamos, a similar dynamic is taking place, as scientists at the lab increasingly rely on artificial intelligence tools for their most ambitious research. Like their punch-card ancestors, today’s AI models have a leg up on human researchers simply by virtue of not having to eat, sleep, or take breaks. Scientists say they’re also approaching tough problems in entirely new and unexpected ways, changing how research is conducted at one of America’s largest scientific institutions.
In recent weeks, in the wake of the feud between the Pentagon and Anthropic, as well as the reported use of AI software for targeting during the war in Iran, the partnership between the US military and leading AI companies has become a highly charged political topic. Less discussed has been the already extensive cooperation between these firms and the country’s nuclear weapons complex, under the supervision of the Department of Energy.
Last year, the Los Alamos National Lab (LANL) entered a partnership with OpenAI allowing it to install the company’s popular ChatGPT AI system on Venado, one of the world’s most powerful supercomputers. As of August, Venado was placed on a classified network, meaning that the AI chatbot now has access to some of the country’s most sensitive scientific data on nuclear weapons.
Supercomputers at Los Alamos’s high-performance computing center. Provided by Los Alamos National Laboratory/Joey Montoya, photographer
That wasn’t all. Later last year, the Department of Energy, which oversees Los Alamos and the country’s 16 other national laboratories, announced a $320 million initiative known as the Genesis Mission, which aims to “harness the current AI and advanced computing revolution to double the productivity and impact of American science and engineering within a decade.”
Few people are in a better position to think about the upsides and downsides of revolutionary new technologies than the people who today populate the mesa once occupied by Robert Oppenheimer, Feynman, and the other pioneers of the nuclear age. But when I visited the lab in January, I found that the researchers there were remarkably sanguine about the more existential risks that often come up in conversation about AI, even as they worked on the production of the world’s most dangerous weapons.
“They think we’re building Skynet; that’s not what’s going on here at all,” LANL’s deputy director of weapons, Bob Webster, said, referring to the superintelligent system from the Terminator movies. Geoff Fairchild, deputy director for the National Security AI Office, volunteered that he does not have a “p(doom),” the Silicon Valley shorthand for how likely one believes it is that AI will lead to globally catastrophic outcomes, and doesn’t believe most of his colleagues do either. “We don’t talk about it. I don’t think I’ve ever had that conversation,” he added.
For Alex Scheinker, a physicist who uses AI for the maintenance and operation of LANL’s massive particle accelerator, AI is an extraordinarily useful tool, but a tool nonetheless. “It’s just more math,” he said. “I don’t like to think about it like it’s magic.”
Still, the nuclear-AI comparison is unavoidable. Given the technology’s transformative potential, the dangers it could pose to humanity, and the potential for an innovation “arms race” between the United States and its international rivals, the current state of AI has frequently been compared to the early days of the nuclear age. And how people feel about the Manhattan Project — a triumphant union between the national security state and scientific visionaries? Or humanity opening Pandora’s box? — likely has a lot to do with how they view their work now.
Those making the comparison include OpenAI CEO Sam Altman, who is fond of quoting Oppenheimer and has expressed disappointment that the 2023 biopic of the Los Alamos founder wasn’t the kind of movie that “would inspire a generation of kids to be physicists.” One of the film’s central conflicts is how a guilt-stricken Oppenheimer spent much of the second half of his life in an unsuccessful quest to control the spread of his creation. (Disclosure: Vox Media is one of several publishers that have signed partnership agreements with OpenAI. Our reporting remains editorially independent.)
The Trump administration has been explicit about the comparison. In the executive order announcing the mission, the White House invoked the creation of the atomic bomb, writing, “In this pivotal moment, the challenges we face require a historic national effort, comparable in urgency and ambition to the Manhattan Project that was instrumental to our victory in World War II.”
But if we really are in a new “Manhattan Project” moment, you wouldn’t know it in the place where the original Manhattan Project took place.
“The world’s nuclear information is right in there. You’re looking at it,” LANL’s director for high performance computing, Gary Grider, told me during my visit to Los Alamos in January.
We were staring through a glass window at a densely packed shelf of magnetic tapes, each of which could be accessed and read via a robotic system that resembled a high-end vending machine more than a hyperintelligent doomsday computer. The machine we were staring into contained nuclear data so sensitive it’s kept on physical drives rather than an accessible network, not that any of the data stored in the room I was standing in is exactly open source.
Magnetic tapes containing nuclear testing information at Los Alamos’s high-performance computing center. Provided by Los Alamos National Laboratory/Joey Montoya, photographer
I was in Los Alamos’s high-performance computing complex, a vast, brightly lit, 44,000-square-foot room in a building named for Nicholas Metropolis, containing six supercomputers with space cleared out for two more. The first things that strike visitors to the computing center are the refrigerator-like temperature and the roar of the overhead fans, both evidence of the gargantuan effort, in money and megawatts, that it takes to keep these machines cool. “Going into high-performance computing, I never thought that I’d be spending this much of my time thinking about power and water,” Grider told me. Computing at Los Alamos is an insatiable beast: The average lifespan of a supercomputer, the cost of which can run into the hundreds of millions of dollars, was once around five to six years. Now it’s around three to five.
Cutting-edge computing has been intertwined with the American nuclear enterprise from the beginning. Los Alamos scientists used the world’s first digital computer, ENIAC, to test the feasibility of a thermonuclear weapon. The lab got its own purpose-built cutting-edge computer, MANIAC, in the early ’50s. In addition to playing a role in the development of the hydrogen bomb, MANIAC was the first computer to beat a human at chess…sort of. It played on a 6×6 board without bishops and took around 20 minutes to make a move. In 1976, the Cray-1, one of the earliest supercomputers, was installed at Los Alamos. Weighing more than 10,000 pounds, it was the fastest and most powerful computer in the world at the time, though it would be no match for a modern iPhone.
Signatures of lab officials and executives, including Nvidia’s Jensen Huang, on the Venado supercomputer. Provided by Los Alamos National Laboratory/Joey Montoya, photographer
I had visited Los Alamos to see MANIAC and Cray’s descendant, Venado, composed of dozens of quietly humming 8-foot-tall cabinets. Currently ranked as the 22nd most powerful computer in the world, Venado was built in collaboration with the supercomputer builder HPE Cray and chip giant Nvidia, which provided some 3,480 of its superchips for the system. It is capable of around 10 exaflops of computing — about 10 quintillion calculations per second. The signatures of executives, including Nvidia’s Jensen Huang, adorn one of the cabinets.
Last May, an OpenAI representative, accompanied by armed security, arrived at Los Alamos bearing locked metal briefcases containing the “model weights” — the learned parameters that define how an AI system behaves — for OpenAI’s o3 model, for installation on Venado. It was the first time this type of reasoning model had been applied to national security problems on a system of this kind.
LANL’s computers are a closed system not connected to the wider internet, but the OpenAI software installed on Venado arrived with everything it learned during the company’s development of the model. Officials at the lab were not about to let a visiting reporter start asking the AI itself questions, but from all accounts, its users interface with it from their desktop computers essentially the same way the rest of us have learned to talk to ChatGPT or other chatbots when we’re generating memes or brainstorming weeknight recipes.
Those users include scientists at LANL itself as well as the country’s other main nuclear labs — Sandia, in nearby Albuquerque, and Lawrence Livermore, near San Francisco. Grider says demand for the new tool was immediately overwhelming. “I was surprised how fast people became dependent on it,” he told me.
Initially, the system was used for a wide array of scientific research, but in August, Venado was moved onto a secure network so it could be used on weapons research, in the hope that it can become an invaluable part of the effort to maintain America’s nuclear arsenal.
Since the 1990s, the United States, along with every country other than North Korea, has been out of the live nuclear testing business, notwithstanding Trump’s recent social media posts on the subject. But between the original Trinity detonation in 1945 and the most recent blast at an underground site in 1992, the United States conducted more than 1,000 nuclear tests, acquiring vast stores of information in the process. That information is now training data for artificial intelligence that can help the lab ensure that America’s nukes work without actually blowing one up.
Venado is effectively a massive simulation machine to test how a weapon would respond to being put under unique forms of stress in real-world conditions. We can “take a weapon and give it the disease that we want and then blow it up 1000 different ways,” as Grider puts it.
In some ways this fulfills the vision of Los Alamos’s founder Robert Oppenheimer, who opposed further nuclear tests after Hiroshima and Nagasaki on the grounds that we already knew these weapons worked and any other questions could be answered by “simple laboratory methods.”
Those methods are not so simple today. When Webster, the LANL deputy director of weapons, first got involved in nuclear testing in the 1980s, the “state of computing that we had was extremely primitive,” he said, and not a viable substitute for gathering new data. Today, he says, “we’re doing calculations I could only dream of doing” before.
Mike Lang, director of the lab’s National Security AI Office, suggested that using AI tools to analyze the data kept “behind the fence” could not only ensure the weapons work, but also improve them. “We’re using [the same] materials that we’ve been using for a very long time,” he said. “Could we make a new high explosive that is less reactive, so you can drop it, and nothing happens? [Or] that’s not made with toxic chemicals, so people handling it would be safer from exposures? We can go through and look at some of the components of our nuclear deterrence, and see how we can make it cheaper to manufacture, easier to manufacture, safer to manufacture.”
Whatever your attitude toward nuclear weapons, Los Alamos researchers argue that as long as we have them, we want to make sure they work.
“We don’t build the weapons to do something stupid,” Webster said. “We build them not to do something stupid.”
The Los Alamos lab’s mesa location, an oasis of pines in the midst of a stark desert landscape, is known to locals as “the Hill.” About 45 minutes north of Santa Fe (on today’s roads, that is), it was chosen during World War II for its remoteness, defensibility, and natural beauty. Oppenheimer, who had traveled in the region since his youth, had long expressed a desire to combine his two main loves, “physics and desert country.”
Eight decades after the days of Oppenheimer, the sprawling fenced-off Los Alamos campus feels a bit like a university town without the young people. Los Alamos County is the wealthiest in New Mexico and has the highest number of PhDs per capita in the country. The lab has around 18,000 employees and the population has boomed since the lab resumed production of plutonium pits — the explosive cores of nuclear weapons — as part of America’s ongoing $1.7 trillion nuclear modernization program. Federal officials recently adopted a plan for a significant expansion of the lab, including an additional supercomputing complex, which critics say fails to take account of the environmental impact of the facility’s electricity and water use as well as the hazardous waste caused by pit production.
Gun Site, the facility where the “Little Boy” bomb dropped on Hiroshima was assembled. Provided by Los Alamos National Laboratory/Joey Montoya, photographer
Officials at Los Alamos are quick to point out that despite what the lab is best known for, scientists there are working on more than just weapons of mass destruction. During my tour, I met with chemists using AI to design new targeted radiation therapies to improve cancer treatment and visited the Los Alamos Neutron Science Center, a kilometer-long particle accelerator that, in addition to weapons research, produces isotopes for medical research and pure physics experiments.
Critics point out that the vast majority of the lab’s budget is still devoted to weapons research, but even so, Los Alamos is one of the best places in the world to observe the seismic impact AI is having on how scientific research is conducted. When the decision was made to move Venado onto a secure network, it cut off a number of ongoing scientific research projects, which is one big reason why two new supercomputers, known as Mission and Vision, are planned to debut this summer. Both are designed specifically for AI applications — one for weapons research, one for less classified scientific work.
AI projects, including at Los Alamos, are often criticized for their power use, but scientists at the lab say their work could ultimately result in safer and more abundant energy. There’s a long-running joke that nuclear fusion technology, which could deliver clean power in vast quantities, is perpetually 20 years away. LANL scientists are hopeful that AI could help achieve the remaining scientific breakthroughs needed to get it off the ground. Several researchers mentioned the potential use of AI tools to design heat-resistant materials for use in nuclear fusion reactors. Scientists at LANL’s sister lab, Livermore, achieved the world’s first fusion ignition reaction a few years ago, though it lasted only a few billionths of a second. “The thing that excites me…is the notion that we can move out of this computational world and start interacting with these experimental facilities,” said Earl Lawrence, chief scientist at the National Security AI Office.
Researchers increasingly use AI for “hypothesis generation,” devising new potential compounds or materials for testing. But the feature of AI that most excited the Los Alamos scientists I spoke with harks back to what Metropolis and Feynman discovered about early computers 80 years ago: It can do more work, faster and without breaks, than any human. Increasingly, it can also handle the sort of physical, real-world experiments that postdocs and junior researchers were once responsible for.
Asked about how he envisioned the future of scientific research in a world of AI, Lawrence quipped, “I hope it’s more coffee shops and walks in the woods.” Grider, a career computer programmer, said, “I hope to hell we can get out of the code business.”
There are downsides to that ease, as well. The sort of grunt work that AI can now do more efficiently is how scientists once learned their craft, assisting senior scientists with research. As in other fields, the pathways to those careers could narrow.
“We need to be intentional about how we train the next generation of scientists,” Lawrence said.
From the atomic age to the AI age
Reminders of Los Alamos’s history are everywhere on the mesa. During my visit to the lab, I toured the sites, now eerie abandoned historical monuments maintained by the National Park Service, where the bomb detonated by Oppenheimer and company in the 1945 Trinity test, and Little Boy, dropped on Hiroshima, were assembled. They’re possibly the only US national park locations where visiting involves a safety briefing on radiation and nearby live explosives testing.
Industrial boilers used in the original Manhattan Project. Provided by Los Alamos National Laboratory/Joey Montoya, photographer
But the heirs to Oppenheimer and Feynman have mixed feelings about the Manhattan Project metaphor when it comes to AI.
Lang felt it was a mistake to characterize AI as a weapon, or frame development as an arms race, with China the main competitor this time instead of Germany. He preferred to think of today’s research as continuing the Manhattan Project’s model of “giving a bunch of multidisciplined scientists a goal to really go after and try to make progress on.” Others pointed to the scientists who were concerned at the time about the risk of a nuclear explosion igniting the earth’s atmosphere as somewhat equivalent to today’s AI “doomers.”
There’s also a fundamental difference between the two in how knowledge is disseminated. “In the very early days of nuclear energy, there were only a handful of people who had the knowledge and understanding to even know what was going on,” said Fairchild, the deputy director for LANL’s National Security AI Office. Plus, supplies of uranium and plutonium could be tightly controlled. “These days, everybody knows what’s going on…and much of it is happening in open source.”
AI is also developing in a very different way from previous technologies with national security implications. In the past, the government and military have often directed academic research into futuristic tech to meet their own needs, with commercial applications found only later: The internet may be the prime example. Now, as LANL’s partnership with OpenAI shows, it’s the government and military racing to react to cutting-edge applications developed first by private industry for commercial use.
“For the very first time, I would argue, on a really big scale, we find ourselves not in a leadership role here,” said Aric Hagberg, leader of LANL’s computational sciences division.
There may also be an AI-atomic parallel in the sheer size of the investment that proponents say should be devoted to advancing the technology. Ilya Sutskever, OpenAI’s former chief scientist, once remarked (maybe jokingly) that in a world of superintelligent AI “it’s pretty likely the entire surface of the Earth will be covered with solar panels and data centers.” The remark brings to mind another by the Nobel Prize-winning physicist Niels Bohr, who had been skeptical that the United States would be able to build an atomic bomb “without turning the whole country into a factory.” When Bohr first visited Los Alamos, he was stunned to find that the Americans had “done just that.”
The majority of the Manhattan Project was not the work done on chalkboards on the Hill by physicists, but the industrial-scale efforts to enrich uranium and produce plutonium in Oak Ridge, Tennessee, and Hanford, Washington. The latter effort, carried out in large part by the chemical firm DuPont — a “public-private partnership” of its era — produced radioactive waste that is still being cleaned up today. Likewise, the work of producing the AI future is as much, if not more, about a massive build-out of data centers and the power needed to keep them cool and humming as it is about the cutting-edge research coming out of Silicon Valley or government labs.
When you visit Los Alamos, it’s hard not to be struck by the amount of ingenuity — in everything from nuclear physics, to explosive design, to revolutionary new techniques in high-speed photography — as well as the sheer industrial output that turned theoretical physics into a workable bomb in just three years.
You can still see the raw intellectual talent and can-do spirit that built the most advanced civilization the world has ever seen at Los Alamos today, and can easily imagine how it might build an even better one tomorrow. But it’s also impossible not to wonder if you’re seeing something else: Humanity’s thirst for power over the material world meeting with its instincts toward fear and aggression to engineer new nightmares. Perhaps we’ll get an answer soon.
The downtown Seattle skyline. (GeekWire File Photo / Kurt Schlosser)
Seattle has officially leveled up from a “secondary” tech market to a critical “reinforcer” of the global innovation economy — but the city is running out of room to grow, according to a new report.
The latest edition of commercial real estate firm JLL’s Innovation Geographies report reveals that while Seattle is outpacing traditional hubs like New York and London in talent migration, a shortage of “investment-grade” real estate is creating a bottleneck for the city’s next era of tech expansion.
Seattle lands among 18 so-called reinforcer markets, where it is classified in the report as a “tech powerhouse” alongside cities like Austin, Berlin, and Tel Aviv. Reinforcers also include Los Angeles, Shanghai, Toronto, Washington, D.C., Raleigh, N.C., and others.
While diverse in what makes them attractive, these cities share a common characteristic: much higher rates of net migration. JLL says they have seen population inflows 3.8 times higher than those of the San Francisco Bay Area — the lone “core” city — and eight other “anchor” cities.
The 135 cities ranked in the report are scored based on an analysis of talent concentration and innovation output. While talent concentration measures the human capital and educational pipeline, the output score focuses on the tangible results and financial activity of a city’s innovation ecosystem, such as VC funding, startup activity, R&D spending, and more.
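JLL does not publish its full formula, but the basic mechanics of a composite index like this are straightforward. The sketch below is a hypothetical illustration, not JLL’s actual methodology: each raw metric is normalized to a 0–1 scale across cities, then combined into a weighted score. All metric names, weights, and figures here are made up for the example.

```python
# Hypothetical sketch of a composite innovation index (not JLL's actual method).
# Each raw metric is min-max normalized across cities, then combined with weights.

CITIES = {
    # city: (vc_funding_usd_b, startups_per_100k, phd_share, grads_per_year_k)
    "Seattle":  (8.1, 42, 0.031, 35),   # illustrative numbers only
    "Austin":   (5.2, 51, 0.022, 41),
    "Bay Area": (63.0, 88, 0.045, 72),
}

def normalize(values):
    """Min-max normalize raw values to the 0-1 range."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

def composite_scores(cities, weights):
    """Weighted sum of normalized metrics, one score per city."""
    names = list(cities)
    columns = list(zip(*cities.values()))        # one tuple per metric
    normed = [normalize(col) for col in columns]  # normalize each metric
    return {
        name: sum(w * normed[m][i] for m, w in enumerate(weights))
        for i, name in enumerate(names)
    }

# Output-style metrics weighted toward funding and startup activity.
print(composite_scores(CITIES, weights=(0.4, 0.3, 0.15, 0.15)))
```

However the real weights are chosen, the structure explains why a city can rank very differently on the two axes, as Seattle does below.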
Seattle ranks 12th in innovation output and 23rd in talent concentration. The Bay Area is No. 1 in both categories.
But high-tier hubs are facing a global undersupply of premium, investment-grade real estate that is attractive to innovative companies, according to JLL, which says that only 11% of global office space was built after 2020.
Meanwhile, reinforcer markets like Seattle have seen surging prime rents, averaging $837 per square meter (roughly $78 per square foot). And while some markets have seen an occupancy recovery, Seattle and others are still below pre-pandemic occupancy highs.
Commercial real estate firm CBRE reported earlier this year that Seattle’s office vacancy reached another record high at 34.7% in Q4. The numbers underscore how hybrid work and shrinking office footprints continue to weigh on a tech-heavy market like Seattle.
In nearby downtown Bellevue, vacancy rates remain high, reaching 25.4% at the end of last year, according to Broderick Group. But OpenAI signed a big new lease in February, reflecting a growing role for the Eastside in the AI boom.
NASA’s Space Launch System rises from its Florida launch pad, sending the Artemis 2 crew into orbit. (NASA via YouTube)
After years of postponements and close to $100 billion in spending, NASA has launched the first mission to send astronauts around the moon since Apollo 17 in 1972.
The 10-day Artemis 2 mission began today with the liftoff of NASA’s 322-foot-tall Space Launch System rocket from Launch Complex 39B at Kennedy Space Center in Florida at 6:35 p.m. ET (3:35 p.m. PT). NASA is streaming coverage of the flight via YouTube and Amazon Prime.
During the last two hours of the countdown, engineers addressed concerns about the rocket’s flight termination system and instrumentation for a battery on the launch abort system. “Godspeed, Artemis 2,” launch director Charlie Blackwell-Thompson told the crew just before liftoff. “Let’s go!”
Artemis 2 is the first crewed test flight in a series leading up to a moon landing that’s currently scheduled for 2028. It follows Artemis 1, which sent a crewless Orion around the moon in 2022. This time, four astronauts are riding inside Orion: NASA mission commander Reid Wiseman, NASA astronauts Christina Koch and Victor Glover, and Canadian astronaut Jeremy Hansen.
“Great view,” Wiseman told Mission Control during the rocket’s ascent. “We have a beautiful moonrise, we’re headed right at it.”
Koch will be the first woman to go beyond Earth orbit, and similar firsts apply to Glover as a Black astronaut and Hansen as a non-American astronaut.
Although Artemis 2’s astronauts won’t be landing on the lunar surface, they’ll follow a figure-8 trajectory that will send them 4,700 miles beyond the far side of the moon and make them the farthest-flung travelers in human history.
Last week, NASA Administrator Jared Isaacman laid out a plan for establishing a permanent base on the moon and preparing for even farther trips into the solar system. Today, Isaacman said Artemis 2 is “the opening act” of a golden age of science and discovery.
Senior test director Jeff Spaulding, a veteran of the space shuttle program, said he was looking forward to the mission. “I’m excited about going to the moon,” he told reporters on the eve of the launch. “I’m excited about establishing a presence there. It’s something that I have had a desire for, for a great many years — and then to get humans out to Mars as well.”
The mission timeline calls for Orion to adjust its orbit around Earth today and go through system checkouts. An hour after launch, Mission Control had to troubleshoot a dropout in communications with the crew. After a gap of several minutes, Wiseman reported that he could hear capsule communicator Stan Love “loud and clear.” The crew also worked with Mission Control to fix a balky space toilet.
On Thursday, Orion is due to fire its main engine for about six minutes to leave orbit and head for the moon. The engine burn is designed to put the space capsule on a free-return trajectory, which takes advantage of orbital mechanics to slingshot around the moon for the return trip.
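For a rough sense of what that burn buys, the vis-viva equation from basic orbital mechanics, v² = μ(2/r − 1/a), gives a spacecraft’s speed at any point on an orbit. The sketch below estimates the delta-v for a trans-lunar injection from low Earth orbit; the 200 km parking-orbit altitude and average lunar distance are assumptions for illustration, not Artemis 2’s actual flight parameters.

```python
import math

# Back-of-envelope trans-lunar injection delta-v using the vis-viva equation:
#   v^2 = mu * (2/r - 1/a)
# Assumed values for illustration; not Artemis 2's actual parking orbit.

MU_EARTH = 398_600.0   # km^3/s^2, Earth's gravitational parameter
R_EARTH = 6_378.0      # km, Earth's equatorial radius
LEO_ALT = 200.0        # km, assumed parking-orbit altitude
MOON_DIST = 384_400.0  # km, average Earth-Moon distance

r_leo = R_EARTH + LEO_ALT
a_transfer = (r_leo + MOON_DIST) / 2.0  # semi-major axis of the transfer ellipse

v_circular = math.sqrt(MU_EARTH / r_leo)                         # speed in LEO
v_injection = math.sqrt(MU_EARTH * (2.0 / r_leo - 1.0 / a_transfer))

print(f"LEO speed:       {v_circular:.2f} km/s")                 # ~7.78 km/s
print(f"Injection speed: {v_injection:.2f} km/s")                # ~10.92 km/s
print(f"Delta-v:         {v_injection - v_circular:.2f} km/s")   # ~3.13 km/s
```

A delta-v on the order of 3 km/s is why the burn runs for several minutes: it has to add nearly half again to Orion’s orbital speed before the capsule can coast out to the moon.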
The climactic lunar flyby is due to take place on April 6. “They’re going to be able to see the whole moon as a lunar disk on the lunar far side,” Marie Henderson, lunar science deputy lead for the Artemis 2 mission, said in a NASA video. “So, that’s a brand-new, unique perspective that humans haven’t been able to look at before.”
The astronauts will also get an opportunity to capture a 21st-century “Earthrise” photo, and they may be able to glimpse a solar eclipse made possible by the lunar flyby. “They will be able to see the sun’s corona, which is kinda cool,” said Lori Glaze, acting associate administrator for NASA’s Exploration Systems Development Mission Directorate.
At the end of the trip, the crew and their Orion capsule are due to splash down in the Pacific Ocean off the California coast. They’ll be brought to a recovery ship for medical checkouts and their return to shore, following a routine that became familiar during the Apollo era.
Artemis 2 is about the history of America’s space program as well as its future. The round-the-moon mission profile matches that of Apollo 8, which served as a unifying event for a nation riven by the social tumult of the time. That mission’s commander, Frank Borman, reported receiving a telegram reading, “Congratulations to the crew of Apollo 8. You saved 1968.” Notably, less than a third of Americans living today were around when Apollo 8 flew.
The main motivation for the Apollo program was America’s superpower competition with the Soviet Union, and today, the geopolitical stakes are similarly high. NASA and the White House are seeking to jump-start progress on Artemis in part because China is targeting a crewed moon landing by 2030.
Sen. Maria Cantwell, D-Wash., said this week during a visit to Seattle-area suppliers for the Artemis program that it’s important for America to get to the moon first. “We’re trying to get the best real estate on the moon,” she said. “So, to do that, you’ve got to get up there to claim it.”
The course of the Artemis program, which is named after the goddess of the moon and the twin sister of Apollo in Greek mythology, hasn’t always run smooth. When the program was given its name in 2019, the Artemis 2 mission was planned for 2022 or 2023, with the moon landing scheduled for 2024. The cost of the program has been estimated at $93 billion through 2025, with each Artemis launch costing $4.1 billion.
Several companies with a presence in the Seattle area are banking on Artemis’ success. For example, a facility in Redmond operated by L3Harris (previously known as Aerojet Rocketdyne) builds thrusters for the Orion spacecraft and is already working ahead on the Artemis 8 mission.
The bike also has a front fork with 80 millimeters of suspension, so accidentally piloting all 60 pounds of it into a pothole won’t pitch you head over heels. It’s fully loaded, with integrated lights, fenders, and a kickstand. And finally, the Vida E+ is UL-certified, so it won’t catch on fire while charging in your garage. The RideControl app lets you check your bike’s electronic systems for problems, lock your bike, and, if you have a bike mount, use it for rudimentary navigation.
Quality Components
Riding the Vida E+ feels like riding a couch, but in a good way. This is a bike that will do everything for you, without your having to think about it very much (unless you’re trying to maneuver it between two cars in your driveway). The step-through frame makes it easy to get on or off. The sit-up geometry and ergonomic handlebars are incredibly comfortable; I can ride with one hand, slowly pedaling at 9 mph while biking my kids home from school, and they blabber on about whatever.
Photograph: Adrienne So
Because this is a bike made by Giant, the components are very nice for a reasonable price. I can easily read the display in high-glare natural sunlight. The fork is made by Suntour; while I would definitely not take this bike on trails, I hit many potholes, both on purpose and not, without dumping myself. The brakes are high-performance Tektro four-piston hydraulic disc brakes, which are also a little unusual at this price point. You don’t have to worry about being able to make quick stops on hills or with a heavy load.
The Shimano shifters work well with the SyncDrive motor to climb steep hills. I did find that the buttons are not terribly easy to push, and I also tended to mix up the headlight and power buttons at the top, which my kids find annoying when they’ve taken off and I’m still struggling to get a 60-pound bike moving without assistance.
The new 25th Anniversary Vinylphyle restoration of Erykah Badu’s chart-topping 2000 album Mama’s Gun is an excellent reissue that should be of interest to fans of vocal jazz and modern soul sounds as well as analog-loving audiophiles.
A platinum seller with three hit singles including her first top 10, this is a super chill, fluidly grooving, and melodic song cycle often categorized as “neo-soul,” bridging pop, soul, funk, jazz, hip hop, and even singer-songwriter pop. While I’ve read numerous references to Billie Holiday in discussions of Ms. Badu’s vocal style, I also hear strong Dinah Washington flavors by way of Minnie Riperton and Chaka Khan (which are some pretty great touchstones as well).
As with other Vinylphyle releases in this top-notch new series from Universal Music, Mama’s Gun was pressed at RTI — renowned as one of the best vinyl manufacturing facilities in the world. The 180-gram vinyl is dark and well centered. The production quality throughout is also outstanding: the album cover is made of heavy cardboard stock akin to a vintage jazz album from the 1960s on Verve or Blue Note, and each disc comes housed in an audiophile-grade plastic-lined inner sleeve.
From Universal’s udiscovermusic website we’ve also gleaned some additional information revealing that this release is not “just” a reissue but a genuine restoration of note for fans seeking the best quality version of a favorite album.
There we learn:
“There are no sequenced analog masters for Mama’s Gun. The original 44.1kHz/16-bit files, with the original CD mastering and limiting, have been the only source for all digital and vinyl reissues—until now. The record was reassembled and rebuilt digitally from 14 individual track tapes, newly transferred in 96kHz/24-bit, in order to create the first true remaster of this record since it came out 25 years ago.”
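For readers wondering what those numbers buy, the gains are easy to quantify with two standard rules of thumb: theoretical dynamic range is roughly 6.02 dB per bit, and the highest representable frequency (the Nyquist limit) is half the sample rate. A quick back-of-envelope comparison:

```python
# Rule-of-thumb comparison of the two digital formats mentioned above.
# Dynamic range ~= 6.02 dB per bit; Nyquist limit = sample rate / 2.

FORMATS = {
    "Original CD master": (44_100, 16),   # sample rate (Hz), bit depth
    "New transfer":       (96_000, 24),
}

for name, (rate, bits) in FORMATS.items():
    dynamic_range_db = 6.02 * bits   # theoretical, before dither/noise shaping
    nyquist_khz = rate / 2 / 1000
    print(f"{name}: ~{dynamic_range_db:.0f} dB dynamic range, "
          f"content up to {nyquist_khz:.2f} kHz")

# Original CD master: ~96 dB dynamic range, content up to 22.05 kHz
# New transfer:       ~144 dB dynamic range, content up to 48.00 kHz
```

In practice the audible benefit comes less from those ceilings than from remastering without the original CD-era limiting, but the headroom is real.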
In the new liner notes for the album Ms. Badu adds insights into her passion for analog at the crossroads of digital: “With this remastering, we’ve carefully blended analog warmth with digital precision. It’s breathtaking to hear the subtleties of each layer come alive in a new way, making the project resonate even more powerfully.”
Indeed, what a lush, round sound Mama’s Gun delivers! The album was largely played by live musicians in top studios, including New York’s iconic Electric Lady (which was created by Jimi Hendrix), and no less than The Roots’ Questlove is featured on drums on many of the tracks.
It is haunting hearing “In Love with You,” which features vocal contributions from Stephen Marley — Bob Marley’s second son — supported mostly by lovely, softly strummed nylon-string acoustic guitar. Ms. Badu’s hit “Bag Lady” (#6 on the Billboard Hot 100) was co-written with soul legend Isaac Hayes and received two Grammy nominations that year. “Cleva,” which features Roy Ayers on vibraphone, feels almost like a lost Stevie Wonder tune.
If you ever liked early Meshell Ndegeocello albums like her 1999 masterwork Bitter, or even newer artists like New Orleans’ Tank & The Bangas, you might well enjoy Mama’s Gun. Highly recommended.
Universal’s Vinylphyle series 2LP release of Erykah Badu’s Mama’s Gun is currently exclusively available via udiscovermusic for $54.98.
Mark Smotroff is a deep music enthusiast / collector who has also worked in entertainment-oriented marketing communications for decades, supporting the likes of DTS, Sega, and many others. He reviews vinyl for Analog Planet and has written for Audiophile Review, Sound+Vision, Mix, EQ, etc. You can learn more about him at LinkedIn.
Are pickup truck engines the same as those used in normal passenger or sports cars? The answer is both yes and no. Physically, at least, there’s usually little that separates an engine in a truck’s engine bay from one in a car’s. After all, there have been plenty of times in the industry’s history when automakers have sold cars and trucks with nearly identical engines. Case in point, the legendary Chrysler slant-six engine, which came in everything from compact cars to pickup trucks and vans.
But in the modern era, especially, there can be notable differences between car and truck engines, even if their displacement and general engine architecture are the same. The modern HEMI V8 used in Dodge muscle cars and Ram pickups is a good example of this, with different versions of the same engines used in performance cars and pickups. Most of the differences between truck and car engines involve how and when the engines deliver their horsepower and torque.
A car engine may produce more peak horsepower than an equivalent truck engine, but the truck engine will often provide more torque or deliver the same amount of torque at lower revs. Just how much difference there is between the two will vary by automaker, and some brands, like Ford, offer V8 engines designed from the ground up for trucks that share nothing with their car counterparts.
The different flavors of V8s
Jetcityimage/Getty Images
Ultimately, the main difference between car and truck engines is rooted in the difference between horsepower and torque. While horsepower matters in a truck, when it comes to pulling a trailer or carrying a heavy load, it’s the torque that’s important — and the lower in an engine’s powerband that torque comes, the better. Thus the popularity of ultra-torquey but relatively low-horsepower turbodiesel engines for large pickups. Peak horsepower, meanwhile, takes prominence in a sports car where engine speeds are higher.
Even within the same V8 family, there can be notable differences in car and truck engines. In GM’s V8 lineup, the 401-hp 6.6-liter L8T truck engine is designed for low-speed torque, with 464 lb-ft of torque at 4,000 rpm. The Chevrolet Corvette’s smaller, 495-hp 6.2-liter LT2 V8 is part of the same family and easily bests the L8T in peak horsepower, yet it barely edges the L8T in torque. It also needs to rev much higher to generate its torque, with its 470 lb-ft coming at 5,150 rpm.
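The tradeoff is easy to see with the standard relationship between the two figures: horsepower equals torque (in lb-ft) times rpm, divided by 5,252. Plugging in each engine’s peak-torque point (a back-of-envelope exercise using the figures above, not GM’s published dyno curves) shows how much of the Corvette engine’s advantage lives high in the rev range:

```python
# Horsepower produced at each engine's peak-torque point, using the
# standard relation: hp = torque (lb-ft) * rpm / 5252.
# Torque/rpm figures are from the article; the rest is back-of-envelope.

ENGINES = {
    "GM L8T (truck)":    (464, 4000),   # peak torque (lb-ft), rpm at peak
    "GM LT2 (Corvette)": (470, 5150),
}

for name, (torque_lbft, rpm) in ENGINES.items():
    hp_at_peak_torque = torque_lbft * rpm / 5252
    print(f"{name}: {torque_lbft} lb-ft @ {rpm} rpm "
          f"-> ~{hp_at_peak_torque:.0f} hp at that point")

# GM L8T (truck):    464 lb-ft @ 4000 rpm -> ~353 hp at that point
# GM LT2 (Corvette): 470 lb-ft @ 5150 rpm -> ~461 hp at that point
```

The truck engine is already making most of its usable power by 4,000 rpm, while the Corvette engine has to spin well past that to reach its 495 hp peak.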
Ford’s Super Duty 7.3-liter Godzilla V8 takes this concept even further. Not only is the Godzilla much larger than the 5.0 Coyote V8 in the Mustang GT, but it also uses an entirely different architecture, with an overhead-valve, single-camshaft layout compared to the 5.0’s dual overhead cams and 32 valves. At 480 hp, the 5.0 beats out the 430-horsepower Godzilla, but the 7.3 takes the torque crown, with 475 lb-ft to the Mustang’s 415 lb-ft.
The curious case of the Nissan 240SX
betto rodrigues/Shutterstock
So what happens, then, if you put a pickup truck engine into a sports car? Look no further than the North American-market Nissan 240SX from the 1990s. When the S13 Nissan Silvia and 180SX debuted in the Japanese home market, the cars were available with high-horsepower turbocharged four-cylinder engines — first the 1.8-liter CA18DET and later the legendary SR20DET. This, combined with a great chassis and tons of aftermarket support, helped the S13 become a smash hit among enthusiasts.
However, when it came time to export the car to America, Nissan decided to forgo the turbo engines in favor of the naturally aspirated 2.4-liter KA24 engine used in Nissan pickup trucks. Though the USDM engine was larger than its JDM counterpart and produced a decent amount of torque for its size, the KA24 only made 140 hp and, more importantly, lacked the high-revving sports car feel many expected from the 240SX.
Fortunately, the SR20DET was an easy swap, and Nissan’s decision to go with a truck engine didn’t entirely detract from the many features that helped the 240SX become a legendary drift car in the years and decades that followed. Even then, though, one can’t help but wonder what would’ve happened had Nissan given the U.S. market 240SX the turbocharged performance engine it deserved.