Tech

What happened when they installed ChatGPT on a nuclear supercomputer

If there’s anything that makes people more uncomfortable than highly advanced AI or nuclear weapons technology, it’s the combination of the two. But there’s been a symbiotic relationship between cutting-edge computing and America’s nuclear weapons program since the very beginning.

- Los Alamos National Laboratory recently partnered with OpenAI to install its flagship ChatGPT AI model on the supercomputers used to process nuclear weapons testing data. It’s the latest in a long history of symbiosis between America’s nuclear program and cutting-edge computing.
- AI tools are already revolutionizing the way scientists conduct research at Los Alamos, part of a larger program called the Genesis Mission that aims to harness the technology to accelerate scientific research at America’s national labs.
- Comparisons of AI to the early days of nuclear weapons abound, among both critics and proponents, but Vox’s reporting trip to the lab found little evidence of the kind of doomsday fears that permeate conversations about AI elsewhere.

In the fall of 1943, Nicholas Metropolis and Richard Feynman, two physicists working on the top-secret atomic bomb project at Los Alamos, decided to set up a contest between humans and machines.

In the early days of the Manhattan Project, the only “computers” on site were humans, many of them the wives of scientists working on the project, performing thousands of equations on bulky analog desk calculators. It was painstaking and exhausting work, and the calculators were constantly breaking down under the demands of the lab, so the researchers began to experiment with using IBM punch-card machines — the cutting edge of computer technology at the time. Metropolis and Feynman set up a trial, giving the IBMs and the human computers the same complex problem to solve.
As the Los Alamos physicist Herbert Anderson later recalled, “For the first two days the two teams were neck and neck — the hand-calculators were very good. But it turned out that they tired and couldn’t keep up their fast pace. The punched-card machines didn’t tire, and in the next day or two they forged ahead. Finally everyone had to concede that the new system was an improvement.”
Today, at Los Alamos, a similar dynamic is taking place, as scientists at the lab increasingly rely on artificial intelligence tools for their most ambitious research. Like their punch-card ancestors, today’s AI models have a leg up on human researchers simply by virtue of not having to eat, sleep, or take breaks. Scientists say they’re also approaching tough problems in entirely new and unexpected ways, changing how research is conducted at one of America’s largest scientific institutions.
In recent weeks, in the wake of the feud between the Pentagon and Anthropic, as well as the reported use of AI software for targeting during the war in Iran, the partnership between the US military and leading AI companies has become a highly charged political topic. Less discussed has been the already extensive cooperation between these firms and the country’s nuclear weapons complex, under the supervision of the Department of Energy.
Last year, the Los Alamos National Laboratory (LANL) entered a partnership with OpenAI allowing it to install the company’s popular ChatGPT AI system on Venado, one of the world’s most powerful supercomputers. In August, Venado was moved onto a classified network, meaning that the AI chatbot now has access to some of the country’s most sensitive scientific data on nuclear weapons.
That wasn’t all. Later last year, the Department of Energy, which oversees Los Alamos and the country’s 16 other national laboratories, announced a $320 million initiative known as the Genesis Mission, which aims to “harness the current AI and advanced computing revolution to double the productivity and impact of American science and engineering within a decade.”
Few people are in a better position to think about the upsides and downsides of revolutionary new technologies than the people who today populate the mesa once occupied by Robert Oppenheimer, Feynman, and the other pioneers of the nuclear age. But when I visited the lab in January, I found that the researchers there were remarkably sanguine about the more existential risks that often come up in conversation about AI, even as they worked on the production of the world’s most dangerous weapons.
“They think we’re building Skynet; that’s not what’s going on here at all,” LANL’s deputy director of weapons, Bob Webster, said, referring to the superintelligent system from the Terminator movies. Geoff Fairchild, deputy director for the National Security AI Office, volunteered that he does not have a “p(doom),” the Silicon Valley shorthand for how likely one believes it is that AI will lead to globally catastrophic outcomes, and doesn’t believe most of his colleagues do either. “We don’t talk about it. I don’t think I’ve ever had that conversation,” he added.
For Alex Scheinker, a physicist who uses AI for the maintenance and operation of LANL’s massive particle accelerator, AI is an extraordinarily useful tool, but a tool nonetheless. “It’s just more math,” he said. “I don’t like to think about it like it’s magic.”
Still, the nuclear-AI comparison is unavoidable. Given the technology’s transformative potential, the dangers it could pose to humanity, and the potential for an innovation “arms race” between the United States and its international rivals, the current state of AI has frequently been compared to the early days of the nuclear age. And how people feel about the Manhattan Project — a triumphant union between the national security state and scientific visionaries? Or humanity opening Pandora’s box? — likely has a lot to do with how they view their work now.
Those making the comparison include OpenAI CEO Sam Altman who is fond of quoting Oppenheimer, and expressed disappointment that the 2023 biopic of the Los Alamos founder wasn’t the kind of movie that “would inspire a generation of kids to be physicists.” One of the film’s central conflicts is how a guilt-stricken Oppenheimer spent much of the second half of his life in an unsuccessful quest to control the spread of his creation. (Disclosure: Vox Media is one of several publishers that have signed partnership agreements with OpenAI. Our reporting remains editorially independent.)
The Trump administration has been explicit about the comparison. In the executive order announcing the mission, the White House invoked the creation of the atomic bomb, writing, “In this pivotal moment, the challenges we face require a historic national effort, comparable in urgency and ambition to the Manhattan Project that was instrumental to our victory in World War II.”
But if we really are in a new “Manhattan Project” moment, you wouldn’t know it in the place where the original Manhattan Project took place.
“The world’s nuclear information is right in there. You’re looking at it,” LANL’s director for high performance computing, Gary Grider, told me during my visit to Los Alamos in January.
We were staring through a glass window at a densely packed shelf of magnetic tapes, each of which could be accessed and read via a robotic system that resembled a high-end vending machine more than a hyperintelligent doomsday computer. The machine we were staring into contained nuclear data so sensitive it’s kept on physical drives rather than an accessible network, not that any of the data stored in the room I was standing in is exactly open source.
I was in Los Alamos’s high-performance computing complex, a vast, brightly lit, 44,000-square-foot room in a building named for Nicholas Metropolis, containing six supercomputers with space cleared out for two more. The first things that strike visitors to the computing center are the refrigerator-like temperature and the roar of the overhead fans, both evidence of the gargantuan effort, in money and megawatts, that it takes to keep these machines cool. “Going into high-performance computing, I never thought that I’d be spending this much of my time thinking about power and water,” Grider told me. Computing at Los Alamos is an insatiable beast: The average lifespan of a supercomputer, the cost of which can run into the hundreds of millions of dollars, was once around five to six years. Now it’s around three to five.
Cutting-edge computing has been intertwined with the American nuclear enterprise from the beginning. Los Alamos scientists used the world’s first digital computer, ENIAC, to test the feasibility of a thermonuclear weapon. The lab got its own purpose-built cutting-edge computer, MANIAC, in the early ’50s. In addition to playing a role in the development of the hydrogen bomb, MANIAC was the first computer to beat a human at chess…sort of. It played on a 6×6 board without bishops and took around 20 minutes to make a move. In 1976, the Cray-1, one of the earliest supercomputers, was installed at Los Alamos. Weighing more than 10,000 pounds, it was the fastest and most powerful computer in the world at the time, though it would be no match for a modern iPhone.
I had visited Los Alamos to see MANIAC and the Cray-1’s descendant, Venado, made up of dozens of quietly humming 8-foot-tall cabinets. Currently ranked as the 22nd most powerful computer in the world, Venado was built in collaboration with the supercomputer builder HPE Cray and chip giant Nvidia, which provided some 3,480 of its superchips for the system. It is capable of around 10 exaflops of computing — about 10 quintillion calculations per second. The signatures of executives, including Nvidia’s Jensen Huang, adorn one of the cabinets.
Last May, an OpenAI representative, accompanied by armed security, arrived at Los Alamos bearing locked metal briefcases containing the “model weights” — the numerical parameters that encode what an AI system has learned from its training data — for its o3 model, for installation on Venado. It was the first time this type of reasoning model had been applied to national security problems on a system of this kind.
LANL’s computers are a closed system not connected to the wider internet, but the OpenAI software installed on Venado carries with it everything the model learned during its training. Officials at the lab were not about to let a visiting reporter start asking the AI itself questions, but by all accounts, its users interface with it from their desktop computers essentially the same way the rest of us have learned to talk to ChatGPT or other chatbots when we’re generating memes or brainstorming weeknight recipes.
Those users include scientists at LANL itself as well as the country’s other main nuclear labs — Sandia, in nearby Albuquerque, and Lawrence Livermore, near San Francisco. Grider says demand for the new tool was immediately overwhelming. “I was surprised how fast people became dependent on it,” he told me.
Initially, the system was used for a wide array of scientific research, but in August, Venado was moved onto a secure network so it could be used on weapons research, in the hope that it can become an invaluable part of the effort to maintain America’s nuclear arsenal.
Whatever your attitude toward nuclear weapons, Los Alamos researchers argue that as long as we have them, we want to make sure they work.
Since the 1990s, the United States — along with every other country except North Korea — has been out of the live nuclear testing business, notwithstanding Trump’s recent social media posts on the subject. But between the original Trinity detonation in 1945 and the most recent blast at an underground site in 1992, the United States conducted more than 1,000 nuclear tests, acquiring vast stores of information in the process. That information is now training data for artificial intelligence that can help the lab ensure America’s nukes work without actually blowing one up.
Venado is effectively a massive simulation machine to test how a weapon would respond to being put under unique forms of stress in real-world conditions. We can “take a weapon and give it the disease that we want and then blow it up 1000 different ways,” as Grider puts it.
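The lab’s actual weapons codes are classified, but the underlying idea — sweep a simulated system through many randomized fault conditions rather than physically testing it — is a plain Monte Carlo loop. Here is a toy sketch with entirely hypothetical physics (the component model and thresholds are invented for illustration):

```python
import random

def simulate_component(temperature_c: float, shock_g: float) -> bool:
    # Hypothetical pass/fail model: the toy component tolerates
    # heat and mechanical shock up to a combined threshold.
    return temperature_c / 200.0 + shock_g / 50.0 < 1.0

random.seed(0)  # reproducible sweep
trials = 1000
failures = sum(
    not simulate_component(random.uniform(-50, 150), random.uniform(0, 40))
    for _ in range(trials)
)
print(f"{failures} failures in {trials} simulated stress conditions")
```

The real simulations model radiation transport, hydrodynamics, and material aging rather than a two-variable threshold, but the structure — inject a “disease,” run the model many times, count the outcomes — is the same.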
In some ways this fulfills the vision of Los Alamos’s founder Robert Oppenheimer, who opposed further nuclear tests after Hiroshima and Nagasaki on the grounds that we already knew these weapons worked and any other questions could be answered by “simple laboratory methods.”
Those methods are not so simple today. When Webster, the LANL deputy director of weapons, first got involved in nuclear testing in the 1980s, the “state of computing that we had was extremely primitive,” he said, and not a viable substitute for gathering new data. Today, he says, “we’re doing calculations I could only dream of doing” before.
Mike Lang, director of the lab’s National Security AI Office, suggested that using AI tools to analyze the data kept “behind the fence” could not only ensure the weapons work, but also improve them. “We’re using [the same] materials that we’ve been using for a very long time,” he said. “Could we make a new high explosive that is less reactive, so you can drop it, and nothing happens? [Or] that’s not made with toxic chemicals, so people handling it would be safer from exposures? We can go through and look at some of the components of our nuclear deterrence, and see how we can make it cheaper to manufacture, easier to manufacture, safer to manufacture.”
“We don’t build the weapons to do something stupid,” Webster said. “We build them not to do something stupid.”
The Los Alamos lab’s mesa location, an oasis of pines in the midst of a stark desert landscape, is known to locals as “the Hill.” About 45 minutes north of Santa Fe (on today’s roads, that is), it was chosen during World War II for its remoteness, defensibility, and natural beauty. Oppenheimer, who had traveled in the region since his youth, had long expressed a desire to combine his two main loves, “physics and desert country.”
Eight decades after the days of Oppenheimer, the sprawling fenced-off Los Alamos campus feels a bit like a university town without the young people. Los Alamos County is the wealthiest in New Mexico and has the highest number of PhDs per capita in the country. The lab has around 18,000 employees and the population has boomed since the lab resumed production of plutonium pits — the explosive cores of nuclear weapons — as part of America’s ongoing $1.7 trillion nuclear modernization program. Federal officials recently adopted a plan for a significant expansion of the lab, including an additional supercomputing complex, which critics say fails to take account of the environmental impact of the facility’s electricity and water use as well as the hazardous waste caused by pit production.
Officials at Los Alamos are quick to point out that despite what the lab is best known for, scientists there are working on more than just weapons of mass destruction. During my tour, I met with chemists using AI to design new targeted radiation therapies to improve cancer treatment and visited the Los Alamos Neutron Science Center, a kilometer-long particle accelerator that, in addition to weapons research, produces isotopes for medical research and pure physics experiments.
Critics point out that the vast majority of the lab’s budget is still devoted to weapons research, but still, Los Alamos is one of the best places in the world to observe the seismic impact AI is having on how scientific research is conducted. When the decision was made to move Venado onto a secure network, it cut off a number of ongoing scientific research projects, which is one big reason why two new supercomputers, known as Mission and Vision, are planned to debut this summer. Both are designed specifically for AI applications: one for weapons research, one for less classified scientific work.
AI projects, including at Los Alamos, are often criticized for their power use, but scientists at the lab say their work could ultimately result in safer and more abundant energy. There’s a long-running joke that nuclear fusion technology, which could deliver clean power in vast quantities, is perpetually 20 years away. LANL scientists are hopeful that AI could help deliver the remaining scientific breakthroughs needed to get it off the ground. Several researchers mentioned the potential use of AI tools to design heat-resistant materials for use in nuclear fusion reactors. Scientists at LANL’s sister lab, Livermore, achieved the world’s first fusion ignition reaction a few years ago, though it lasted only a few billionths of a second. “The thing that excites me…is the notion that we can move out of this computational world and start interacting with these experimental facilities,” said Earl Lawrence, chief scientist at the National Security AI Office.
Researchers increasingly use AI for “hypothesis generation,” devising new potential compounds or materials for testing. But the feature of AI that most excited the Los Alamos scientists I spoke with harkens back to what Metropolis and Feynman discovered about early computers 80 years ago: It can do more work, faster and without breaks, than any human. Increasingly, it can also do the sort of physical, real-world experiments that postdocs and junior researchers were once responsible for.
Asked about how he envisioned the future of scientific research in a world of AI, Lawrence quipped, “I hope it’s more coffee shops and walks in the woods.” Grider, a career computer programmer, said, “I hope to hell we can get out of the code business.”
There are downsides to that ease, as well. The sort of grunt work that AI can now do more efficiently is how scientists once learned their craft, assisting senior scientists with research. As in other fields, the pathways to those careers could narrow.
“We need to be intentional about how we train the next generation of scientists,” Lawrence said.
From the atomic age to the AI age
Reminders of Los Alamos’s history are everywhere on the mesa. During my visit to the lab, I toured the sites, now eerie abandoned historical monuments maintained by the National Park Service, where the bomb detonated by Oppenheimer and company in the 1945 Trinity test, and Little Boy, the bomb dropped on Hiroshima, were assembled. They’re possibly the only national park locations where visiting involves a safety briefing on radiation and nearby live explosives testing.
But the heirs to Oppenheimer and Feynman have mixed feelings about the Manhattan Project metaphor when it comes to AI.
Lang felt it was a mistake to characterize AI as a weapon, or frame development as an arms race, with China the main competitor this time instead of Germany. He preferred to think of today’s research as continuing the Manhattan Project’s model of “giving a bunch of multidisciplined scientists a goal to really go after and try to make progress on.” Others pointed to the scientists who were concerned at the time about the risk of a nuclear explosion igniting the earth’s atmosphere as somewhat equivalent to today’s AI “doomers.”
There’s also a fundamental difference between the two in how knowledge is disseminated. “In the very early days of nuclear energy, there were only a handful of people who had the knowledge and understanding to even know what was going on,” said Fairchild, the deputy director for LANL’s National Security AI Office. Plus, supplies of uranium and plutonium could be tightly controlled. “These days, everybody knows what’s going on…and much of it is happening in open source.”
AI is also developing in a very different way from previous technologies with national security implications. In the past, the government and military have often dictated academic research into futuristic tech to meet their own needs, with commercial applications only being found later: The internet may be the prime example. Now, as LANL’s partnership with OpenAI shows, it’s the government and military racing to react to cutting-edge applications developed first by private industry for commercial use.
“For the very first time, I would argue, on a really big scale, we find ourselves not in a leadership role here,” said Aric Hagberg, leader of LANL’s computational sciences division.
There may also be an AI-atomic parallel in the sheer size of the investment proponents say should be devoted to advancing the technology. Ilya Sutskever, OpenAI’s former chief scientist, once remarked (perhaps jokingly) that in a world of superintelligent AI, “it’s pretty likely the entire surface of the Earth will be covered with solar panels and data centers.” The remark brings to mind another by the Nobel Prize-winning physicist Niels Bohr, who had been skeptical that the United States would be able to build an atomic bomb “without turning the whole country into a factory.” When Bohr first visited Los Alamos, he felt, stunned, that the Americans had “done just that.”
The majority of the Manhattan Project was not the work done on chalkboards on the Hill by physicists, but the industrial-scale efforts to enrich uranium and produce plutonium in Oak Ridge, Tennessee, and Hanford, Washington. The latter effort, carried out in large part by the chemical firm DuPont — a “public-private partnership” of its era — produced radioactive waste that is still being cleaned up today. Likewise, the work of producing the AI future is as much, if not more, about a massive build-out of data centers and the power needed to keep them cool and humming as it is about the cutting-edge research coming out of Silicon Valley or government labs.
When you visit Los Alamos, it’s hard not to be struck by the amount of ingenuity — in everything from nuclear physics, to explosive design, to revolutionary new techniques in high-speed photography — as well as the sheer industrial output that turned theoretical physics into a workable bomb in just three years.
At Los Alamos today, you can still see the raw intellectual talent and can-do spirit that built the most advanced civilization the world has ever seen, and you can easily imagine how it might build an even better one tomorrow. But it’s also impossible not to wonder if you’re seeing something else: humanity’s thirst for power over the material world meeting its instincts toward fear and aggression to engineer new nightmares. Perhaps we’ll get an answer soon.
This story was produced in partnership with Outrider Foundation and Journalism Funding Partners.
Microsoft issues emergency update for macOS and Linux ASP.NET threat
Microsoft released an emergency patch for its ASP.NET Core to fix a high-severity vulnerability that allows unauthenticated attackers to gain SYSTEM privileges on devices that use the Web development framework to run Linux or macOS apps.
The software maker said Tuesday evening that the vulnerability, tracked as CVE-2026-40372, affects versions 10.0.0 through 10.0.6 of Microsoft.AspNetCore.DataProtection, a NuGet package that’s part of the framework. The critical flaw stems from faulty verification of cryptographic signatures. It can be exploited to allow unauthenticated attackers to forge authentication payloads during HMAC validation, the process used to verify the integrity and authenticity of data exchanged between a client and a server.
Beware: Forged credentials survive patching
While users ran a vulnerable version of the package, they were open to an attack allowing unauthenticated attackers to gain SYSTEM privileges and fully compromise the underlying machine. Even after the vulnerability is patched, devices may still be compromised if authentication credentials created by a threat actor aren’t purged.
“If an attacker used forged payloads to authenticate as a privileged user during the vulnerable window, they may have induced the application to issue legitimately-signed tokens (session refresh, API key, password reset link, etc.) to themselves,” Microsoft said. “Those tokens remain valid after upgrading to 10.0.7 unless the DataProtection key ring is rotated.”
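The bug class Microsoft describes — accepting a payload whose signature doesn’t genuinely verify — is easiest to see in a generic sketch. This is illustrative Python, not ASP.NET Core’s actual implementation; the function names are hypothetical:

```python
import hashlib
import hmac

def sign(key: bytes, payload: bytes) -> bytes:
    # Compute an HMAC-SHA256 tag over the payload.
    return hmac.new(key, payload, hashlib.sha256).digest()

def verify(key: bytes, payload: bytes, tag: bytes) -> bool:
    # Recompute the tag and compare in constant time.
    # A faulty implementation might skip this check or compare
    # only part of the tag, letting forged payloads through.
    expected = sign(key, payload)
    return hmac.compare_digest(expected, tag)

key = b"server-secret-key"
payload = b'{"user": "admin"}'
tag = sign(key, payload)

assert verify(key, payload, tag)               # genuine payload passes
assert not verify(key, payload, b"\x00" * 32)  # forged tag is rejected
```

Microsoft’s warning about rotating the DataProtection key ring follows from the same logic: once an attacker has tricked the server into issuing a legitimately signed token, patching the verification code doesn’t revoke it — only invalidating the signing keys does.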
Microsoft describes ASP.NET Core as a “high-performance” web development framework for writing .NET apps that run on Windows, macOS, Linux, and Docker. The open-source package is “designed to allow runtime components, APIs, compilers, and languages [to] evolve quickly, while still providing a stable and supported platform to keep apps running.”
This smart pillow ensures you never sleep through an emergency alarm, or even a phone call
Sleeping through a phone call is annoying. Sleeping through a fire alarm is a whole different level of bad. So this new smart pillow idea feels a lot more useful than gimmicky. Researchers at Nottingham Trent University have developed a smart pillow sleeve designed to help deaf users wake up to important nighttime alerts.
Rather than build an entire smart pillow, the team developed a smart sleeve designed to fit over a standard pillow. It slips inside a normal pillowcase and vibrates when connected alarms or calls come through.
What problem does it solve?
The project came out of feedback from members of the Deaf community, who told the researchers that existing under-pillow alert devices are often too bulky and uncomfortable to sleep on. In response, the team built a much thinner electronic textile sleeve with four tiny haptic actuators embedded into a yarn-like structure.

Each actuator measures just 3.4mm by 12.7mm, and the electronics are small enough that users are not supposed to feel them while sleeping. So the safety product is both handy and comfortable to use.
How it can even save lives
The sleeve connects to a smartphone through a microcontroller, and that setup can then link wirelessly to household alarms. When something goes off, the pillow vibrates intensely enough to wake the user, with distinct patterns used to signal different kinds of alerts. This means a user with a hearing impairment can be alerted of a fire alarm, a burglar alarm, or even an incoming phone call.
Those distinct patterns are what make the feature genuinely thoughtful. The goal is not just to wake someone up, but to give them enough information to know why they’re being woken in the first place.
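The researchers haven’t published their firmware, but the pattern-per-alert idea can be sketched in a few lines. This is purely hypothetical code — the alert names, timings, and fallback behavior are all invented for illustration:

```python
# Hypothetical mapping of alert types to distinct vibration
# patterns. Each pattern is a list of (on_ms, off_ms) pulse
# pairs driven across the sleeve's haptic actuators.
PATTERNS = {
    "fire_alarm":    [(800, 200)] * 6,  # long, urgent pulses
    "burglar_alarm": [(300, 300)] * 8,  # rapid bursts
    "phone_call":    [(200, 800)] * 4,  # gentle, spaced taps
}

def pattern_for(alert: str) -> list[tuple[int, int]]:
    # For unrecognized alerts, fall back to the most urgent
    # pattern, erring on the side of waking the sleeper.
    return PATTERNS.get(alert, PATTERNS["fire_alarm"])
```

The design question the pattern table answers is exactly the one the researchers raise: the vibration itself wakes the user, while its rhythm carries the “why.”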
The researchers say the yarn used in the sleeve has already passed durability testing, including multiple washing cycles, which suggests they are treating this as a real product concept rather than a lab-only demo. The work was presented at the ACM CHI conference in Barcelona, and the team is now looking for an industrial partner to help bring it to market. Tech Xplore also quotes supervisor Theo Hughes-Riley calling it a significant step toward more inclusive emergency alert systems for deaf and deaf-blind individuals.
Android finally gets a fitting answer to the iPad mini, and it looks stunning
Apple has owned the compact premium tablet segment for years, but there’s a new contender on the market that runs Android and takes on the iPad mini at everything it stands for.
Unveiled alongside the Find X9 Ultra, the Oppo Pad Mini comes with an 8.8-inch 2.5K OLED panel (2520 x 1680 pixels) in a 3:2 aspect ratio. This is the same, near-square aspect ratio that makes the iPad mini ideal for reading, note-taking, consuming content, and other productive workflows.

What makes the Oppo Pad Mini worth comparing to the iPad mini?
The tablet’s bezels are remarkably thin at 2.99 mm, and the screen can achieve up to 1,600 nits of peak brightness with a variable refresh rate between 60 and 144 Hz. There’s an optional matte version of the tablet that mimics a paper-like surface, something that the iPad mini doesn’t offer.
Where Apple puts an A17 Pro inside its mini, the Oppo Pad Mini comes with a Snapdragon 8 Gen 5 (3nm) chipset paired with up to 12GB of LPDDR5X RAM and 512GB of UFS 4.1 storage, which, in my opinion, is a capable combination.
For those wondering, the Snapdragon chip provides better multi-core performance, but its single-core performance matches that of the A17 Pro. In addition, the type of memory and storage should make the Oppo tablet feel more responsive and snappy.

How does it hold up in terms of portability and battery?
At just 5.39mm thick and weighing 279 grams, the Oppo Pad Mini is designed for portability: it can fit in larger pockets and small bags. The iPad mini, by comparison, weighs 293 grams and measures 6.3mm thick.
The 8,000 mAh battery supports 67W wired charging (full charge in around an hour), something that the iPad mini lacks. Pricing starts at CNY 3,199, which is around $470 for the baseline variant with 8GB of RAM and 256GB of storage, rising to around $590 for the variant with 12GB of RAM and 512GB of storage.
While sales of the iPad mini alternative commence on April 24, 2026, the tablet won’t be available in the United States, at least for now. To me, Oppo’s entry into the premium small-screen tablet segment signals that Android OEMs are taking the category seriously.

For now, the Oppo Pad Mini isn’t a direct competitor to the iPad mini, primarily because it isn’t available in the United States. However, if and when the product arrives in the region, it could easily take a good chunk of the iPad mini’s sales, offering Android users a top-notch experience in a smaller form factor without a hefty price.
Notification bug that let FBI access messages patched with iOS 26.4.2
People being investigated by the FBI deleted Signal, but some messages were still retrievable from the iPhone’s notification database. The latest iOS update patches this vulnerability.

iPhones may be secure, but they aren’t invulnerable to bugs
Users should reasonably expect that deleting an app from their iPhone will remove all associated data. However, a recent case involving the FBI showed that some notification data was being retained by mistake.
The iOS 26.4.2, iPadOS 26.4.2, iOS 18.7.8, and iPadOS 18.7.8 updates released on Wednesday address the notification database issue directly. The notes simply say that “a logging issue was addressed with improved data redaction.”
Tesla Plaid Owner Learns The Hard Way It Can’t Keep Up With A Corvette
Car enthusiasts love comparing vehicle performance, especially when you can see it play out on a drag strip. A YouTube video of a very unlikely matchup recently went viral: a Tesla Model S Plaid versus a Chevrolet Corvette ZR1X. In the video, shared by DragTimes, the ZR1X took on three Model S Plaids in the quarter mile at the TX2K event at Texas Motorplex in Ennis, Texas.
The first Tesla Model S Plaid driver wasn’t sure if he’d beat the ZR1X, but he felt it would be really close. However, it was clear from the launch that it wasn’t close at all — the ZR1X left the Plaid far behind. The ZR1X was able to get up to 60 miles per hour in 1.95 seconds, beating the Plaid’s 2.26 seconds. The ZR1X finished the quarter mile in 8.92 seconds, hitting nearly 154 miles per hour. The Plaid finished in 9.65 seconds, with a top speed of 140 mph. It was a similar story with the other two Plaids.
Why is the Corvette ZR1X better than the Model S Plaid on the drag strip?
The Corvette ZR1X and the Model S Plaid that raced that day were both stock with all-season tires, meaning the quarter-mile race was a true indicator of the vehicles’ performance without enhancements. To be fair to the Model S Plaid, it beat the Corvette ZR1 in a previous video thanks to its incredible speed, which is why Brooks Weisblat brought out the ZR1X, which pairs a twin-turbo 5.5L LT7 V8 engine with a front-axle electric motor for 1,250 horsepower. That’s more than the Plaid’s tri-motor setup, which produces 1,020 hp. The Plaid is also heavier, at 4,802 pounds (about 1,000 more than the ZR1X).
With more horsepower and less weight, it’s no surprise that the ZR1X had a faster launch. The Plaid still impressed, given that it had 70,000 miles on it and was at 85% battery charge; EVs slow down slightly over time.
The Tesla Model S Plaid has a top speed of 163 mph without the added $20,000 Track Package, while the ZR1X can reach 225 mph. With the ZR1X already ahead off the line, it's no surprise that it stayed far in front as the cars raced down the track. The Plaid is so quick that it was previously banned from NHRA races, but it was still no match for what Chevrolet bills as a track-focused "hypercar."
Tech
Anthropic’s Mythos Model Is Being Accessed by Unauthorized Users
Bloomberg reports that a small group of unauthorized users gained access to Anthropic’s restricted Mythos model through a mix of contractor-linked access and online sleuthing. Anthropic says it is investigating and has no evidence the access extended beyond a third-party vendor environment or affected its own systems. From the report: The users relied on a mix of tactics to get into Mythos. These included using access the person had as a worker at a third-party contractor for Anthropic and trying commonly used internet sleuthing tools often employed by cybersecurity researchers, the person said. The users are part of a private Discord channel that focuses on hunting for information about unreleased models, including by using bots to scour for details that Anthropic and others have posted on unsecured websites such as GitHub. […] To access Mythos, the group of users made an educated guess about the model’s online location based on knowledge about the format Anthropic has used for other models, the person said, adding that such details were revealed in a recent data breach from Mercor, an AI training startup that works with a number of top developers.
Crucially, the person also has permission to access Anthropic models and software related to evaluating the technology for the startup. They gained this access from a company for which they have performed contract work evaluating Anthropic’s AI models. Bloomberg is not naming the company for security reasons. The group is interested in playing around with new models, not wreaking havoc with them, the person said. The group has not run cybersecurity-related prompts on the Mythos model, the person said, preferring instead to try tasks like building simple websites in an attempt to avoid detection by Anthropic. The person said the group also has access to a slew of other unreleased Anthropic AI models.
Tech
Intel’s upcoming gaming CPU specs have leaked
Intel's next-generation desktop CPU lineup has leaked, and it points squarely at AMD's Ryzen range: the Nova Lake-S architecture is set to arrive with up to 288MB of L3 cache across a family expected to carry the Core Ultra 400 branding.
That cache figure dwarfs the 36MB found in Intel’s current flagship Core Ultra 9 285K, and comfortably exceeds the 96MB and 192MB L3 totals found in AMD’s Ryzen 7 9850X3D and Ryzen 9 9950X3D, respectively.
The leak originates from X user Jaykihn, an established source of CPU specification information, who confirmed that the flagship Nova Lake-S chip will carry 16 P-Cores, 32 E-Cores, and four LPE-Cores alongside the 288MB L3 cache figure, with LPE-Cores representing a new low-power efficiency core tier introduced specifically with this architecture.
That core configuration marks a substantial step up from the Core Ultra 9 285K’s eight P-Cores and 16 E-Cores, with the addition of LPE-Cores extending the architectural complexity beyond what Intel’s current Arrow Lake desktop lineup offers at any price point.
Cache capacity matters in gaming because processors can access cache far faster than system RAM, reducing latency in scenarios where data-retrieval speed determines frame-time consistency. That is why AMD's X3D chips have maintained a performance lead in gaming workloads despite competitive core counts from Intel.
Two unnamed chips sitting above the Core Ultra 9 designation in the leaked table carry 52 and 44 total cores respectively, suggesting Intel plans a tiered flagship structure that extends beyond its current naming scheme for the Nova Lake-S generation.
Intel has not confirmed any specifications for the Nova Lake-S lineup, though Computex in early June represents a credible window for an official announcement, with AMD also expected to reveal details of its next-generation Zen 6 architecture at the same event.
Tech
Mozilla fixes 271 Firefox vulnerabilities found by Anthropic’s Claude Mythos in a single evaluation pass
Summary: Mozilla released Firefox 150 with fixes for 271 security vulnerabilities identified by Anthropic’s Claude Mythos Preview, an unreleased frontier AI model distributed under the restricted Project Glasswing programme. The collaboration began with Claude Opus 4.6 finding 22 bugs in Firefox 148 earlier this year; Mythos produced more than twelve times as many. Firefox CTO Bobby Holley said the defects are “finite” and that defenders can “finally find them all,” while the UK AI Security Institute confirmed Mythos can also execute autonomous multi-stage network attacks, making the dual-use tension the central policy question.
Mozilla released Firefox 150 on Monday with fixes for 271 security vulnerabilities identified by Anthropic’s Claude Mythos Preview, an unreleased frontier AI model restricted to a handful of organisations under Project Glasswing. The number is striking not because the bugs were exotic but because they were not. “We haven’t seen any bugs that couldn’t have been found by an elite human researcher,” Mozilla said in a blog post titled “The zero-days are numbered.” The point is that no human team could have found 271 of them this fast.
The collaboration between Mozilla and Anthropic began earlier this year with a more modest effort. Starting in February, Firefox’s security team used Claude Opus 4.6 to scan nearly 6,000 C++ files across the browser’s codebase. That pass produced 112 unique reports, of which 22 were confirmed as security-sensitive bugs and shipped as fixes in Firefox 148. Fourteen were classified as high severity, representing almost a fifth of all high-severity Firefox vulnerabilities remediated in 2025. The Mythos evaluation, which followed as part of the continued partnership, produced more than twelve times as many confirmed vulnerabilities. Bobby Holley, Firefox’s chief technology officer, described the experience as giving the team “vertigo.”
What Mythos is, and who gets to use it
Claude Mythos Preview is the model at the centre of Anthropic’s restricted Mythos model programme, Project Glasswing, announced on 7 April. It is a general-purpose frontier model, not a security-specific tool, but its coding capabilities have crossed a threshold that Anthropic considers significant enough to warrant controlled distribution. The UK’s AI Security Institute evaluated the model and found it capable of executing multi-stage network attacks autonomously, completing a 32-step corporate network attack simulation called “The Last Ones” in three out of ten attempts. It can chain multiple small vulnerabilities into a single devastating attack, reconstruct source code from deployed software to find exploitable weaknesses, and build custom tools for lateral movement and data extraction once inside a network.
Access is restricted to 12 named launch partners, including Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, Nvidia, and Palo Alto Networks, with roughly 40 additional organisations granted access for defensive security work. Anthropic committed up to $100 million in usage credits and $4 million in direct donations to open-source security organisations, including $2.5 million to Alpha-Omega and OpenSSF through the Linux Foundation and $1.5 million to the Apache Software Foundation. The model is available to Glasswing participants at $25 per million input tokens and $125 per million output tokens through the Claude API, Amazon Bedrock, Google Cloud’s Vertex AI, and Microsoft Foundry.
The restricted rollout has already been tested. On the same day Anthropic announced Glasswing, a group of unauthorised users gained access to Mythos Preview by guessing the model’s URL through a third-party vendor environment, an incident Anthropic said it is investigating.
The defender’s argument
Holley framed the 271 vulnerabilities not as an indictment of Firefox’s code quality but as evidence that the security landscape is shifting in favour of defenders for the first time. “A gap between machine-discoverable and human-discoverable bugs favors the attacker, who can concentrate many months of costly human effort to find a single bug,” he wrote. “Closing this gap erodes the attacker’s long-term advantage by making all discoveries cheap.”
The logic is straightforward. A zero-day vulnerability is valuable to an attacker precisely because it is unknown. If a defender can find and patch the same bug before an attacker discovers it, the bug has no offensive value. The cost asymmetry has historically favoured attackers: a browser like Firefox has millions of lines of code, and a single undiscovered flaw in any of them is enough for exploitation. An elite human security researcher might spend weeks or months finding one such flaw. A model like Mythos can scan the entire codebase in a fraction of that time. Mozilla’s thesis is that this changes the economics permanently. “Software like Firefox is designed in a modular way for humans to be able to reason about its correctness,” the blog post stated. “It is complex, but not arbitrarily complex. The defects are finite, and we are entering a world where we can finally find them all.”
The claim is bold and deliberately so. Mozilla is arguing that the age of zero-day vulnerabilities in well-structured software has an expiration date, not because attackers will stop looking, but because defenders will get there first.
The numbers in context
The 271 figure requires some unpacking. Mozilla's official security advisory for Firefox 150, MFSA 2026-30, lists 41 CVE entries, three of which are standard memory-safety roll-ups that aggregate multiple individual bugs under a single identifier. The 271 number represents the total count of discrete code defects identified by Mythos during its evaluation, many of which were grouped into those CVE bundles. The distinction matters because the headline number and the formal advisory number measure different things: one measures what the AI found, the other measures how those findings move through the industry's standard vulnerability disclosure process.
The most dangerous flaws include use-after-free vulnerabilities in the DOM and WebRTC components, the kinds of memory safety bugs that have been the bread and butter of browser exploitation for two decades. These are not novel attack surfaces. They are the same categories of bugs that Google’s Project Zero has been finding across browsers since 2014. Google’s own AI vulnerability research programme, Big Sleep, a collaboration between Project Zero and DeepMind, found a zero-day in SQLite in October 2024 and has since expanded to discover multiple flaws in widely used software. The difference with Mozilla’s effort is scale: 271 bugs in a single evaluation pass, patched before release, across a codebase that has accumulated technical debt over more than two decades.
The dual-use problem
The UK AI Security Institute’s evaluation of Mythos Preview confirmed what the Mozilla results imply from the other direction: the same capabilities that make the model effective at finding vulnerabilities make it effective at exploiting them. The model became the first AI to complete “The Last Ones,” a benchmark designed to simulate a full corporate network compromise. It succeeded in three out of ten attempts, averaging 22 of 32 steps across all runs. Independent testing confirmed that Mythos cannot reliably execute autonomous attacks against organisations with well-hardened defences, but the trajectory is clear. Each generation of frontier model has performed better on offensive security benchmarks than the last.
This is the tension that Project Glasswing is designed to manage. By restricting Mythos to vetted organisations with defensive mandates, Anthropic is attempting to give defenders a structural head start, a window in which the good actors can scan and patch before the capabilities proliferate. The strategy depends on the restriction holding. The vendor breach on launch day suggests that containment is harder than access control. Anthropic has also identified thousands of zero-day vulnerabilities across every major operating system and every major web browser using Mythos, findings it is disclosing to the affected vendors through Glasswing.
Anthropic’s expanding enterprise footprint, from legal contract review in Microsoft Word to cybersecurity through Glasswing, reflects a company that is monetising Claude across every professional vertical where accuracy matters. The Mozilla partnership is the most dramatic demonstration yet, not because the model did something no human could do, but because it did what only a handful of humans can do, and did it 271 times in a single pass.
Holley’s conclusion captures both the promise and the vertigo: “Our work isn’t finished, but we’ve turned the corner and can glimpse a future much better than just keeping up. Defenders finally have a chance to win, decisively.” Whether that future arrives depends on whether the models that find the bugs remain in the hands of the people who fix them, or whether the capabilities leak faster than the patches ship. For now, Firefox 150 has 271 fewer ways to be broken. That is not a small thing. The question is how long that advantage lasts when the tool that found them is commanding extraordinary valuations precisely because of what it can do.
Tech
The 'Missing-Scientist' Story Is Unbelievably Dumb
Longtime Slashdot reader mmarlett writes: The Atlantic has a long article on the story of missing scientists recently featured here on Slashdot. In short, it is an incoherent conspiracy theory that spreads wide and far, not paying any attention to boundaries of time, space, or area of expertise. “Which is all to say that another piece of flagrant nonsense has ascended to the highest levels of U.S. politics and media,” writes the Atlantic’s Daniel Engber. “To call it a conspiracy theory would be far too kind, because no comprehensive theory has been floated to explain the pattern of events. But then, even the phrase pattern of events is imprecise, because there is no pattern here at all. Given all the people who could have been roped into this narrative but weren’t, any hope of finding meaning falls away. Barring any dramatic new disclosures, the mystery of the missing scientists has the dubious honor of being a sham in every way at once.”
Tech
Stop Begging Big Tech To Fix Your Social Media Experience. You Can Do It Yourself.
from the vibe-code-your-social-experience dept
Disclaimer: This post talks about Bluesky and an offering from Bluesky and I am on the Bluesky board. Take everything I say with whatever size grains of salt you feel is appropriate.
I’ve written a few times now about how I think that AI tools, used carefully and thoughtfully, represent our best chance at taking back control over the open web. I know this is not a popular opinion with many Techdirt readers, but I’m hoping some of you will read through this to try to understand and engage with the points I’m making here. I truly do believe that if used well and appropriately, these tools can serve to put power back into the hands of users, rather than giant centralized companies who are more interested in exploiting your attention.
Over the last few weeks I’ve been playing around with an AI-powered tool that Bluesky has released (much to the chagrin of many users) to a relatively small group of early beta testers. I think the negative reaction to the product announcement is understandable, given the general distrust of all AI tools, but it’s really worth examining what this tool is and what it can enable, including really empowering people to take back control over their own social experience. It literally gives you a path to routing around Bluesky’s own design features if you don’t like them.
Yes, a lot of AI is overhyped garbage being shoved at people who don’t want it — but that doesn’t mean the underlying tools can’t be useful when applied carefully by those who choose to use the tools appropriately.
It means not outsourcing your brain to the tool, but rather using it the way any skilled person automates some aspect of work that they do. I’ve sanded and restained the floors of my house, and while I could have done the whole thing by hand with a stack of sandpaper, it was helpful to rent a floor sander from a local hardware store, learn how to use it properly, and then use it so that I could finish the job in a day rather than weeks. I view AI tools the same way. If you learn how to use them properly, as an assistive tool rather than a replacement for your brain, they can help you accomplish useful things.
Let me give an example: a couple of weeks ago, law professor Blake Reid wrote a short thread on Bluesky about how he needed to take a break from social media, because he worried that it was eating up too much of his time and he was better off just stopping cold turkey, to avoid getting sucked into unproductive discussions that push him to (as he put it) “get over my skis” in engaging in conversations where he’s tempted to weigh in despite not having much expertise (a common thing on social media). It’s a worthwhile thread.
But in that thread he mentioned that he was hopeful that maybe some day technology itself could help him use social media in a healthier way, to dial back how much time he spent on it, and get him focused on the more productive and useful discussions (which he admits also happen regularly on Bluesky).
What was amusing to me was that the only reason I saw that post by Reid was because I’ve been beta testing a new tool that… kinda does that. When he wrote that thread, I was actually on vacation, hiking in the National Parks in Utah, and mostly offline. But in the evenings, I would check in, and rather than sorting through everything I missed on social media that day, I had a tool just show me things that I would find useful that I might have missed.
Using an AI tool, I had built an entirely personalized news aggregator, which had access to my Bluesky account, Techdirt's RSS feed, and the knowledge that I had been out all day and wanted not just a summary of the news that might interest me as the editor of Techdirt, but also a sense of what people on Bluesky were saying about it. Here's a screenshot of what my first attempt at this looks like:

The tool that let me do this is an advanced version of Attie, which I also recognize is extremely controversial among users on Bluesky, many of whom vocally expressed their hatred of the very idea when it was announced last month. But my main interest is in figuring out how to empower users who want to take control over their own social experience, and this seems like a clear example of that. I'll note that this version of Attie has not yet rolled out to most of the beta testers (I believe some have access to it, but this is one small benefit of being on the board).
Honestly, I think the way Bluesky announced Attie may have done it an injustice, positioning it as a kind of AI-powered feed generator. There are multiple other feed generator tools for Bluesky out there, many of which are really fantastic. For a while now I’ve used both Graze.social and Surf.social to make AI-powered feeds (which never seemed to generate much controversy).
But generating feeds alone isn't all that interesting. With the more advanced version of Attie, I can take much more control over my entire social experience. The fact that a single prompt could build that personalized aggregator (based not just on my own feed, but on Techdirt's RSS) is something more powerful, down to the tool knowing to summarize a whole day's worth of posts, because I was trying to see at a glance whether anything was relevant for Techdirt after being offline the entire day.
Rather than just letting a single company (in this case Bluesky) define my entire experience for me, I can vibe-code my social experience. I can tell it not just the types of content I want to see, but how I want to see it. And for what reason. And how much (or how little) content to show me. And with what context around it. It’s all based on what I expressly want. Not what any company thinks I want.
And I keep experimenting with other versions of this as well. In one test, I had it also try to summarize stories and tell me why it thought I’d find them useful for Techdirt:

In this case it not only found a story that is interesting to me, but it suggested multiple sources for me to read about it, even noting (for example) that Professor Eric Goldman’s blog post is “the definitive blog post” for my coverage (it’s not wrong).
I go back to the piece I wrote a little while back about the kind of learned helplessness of social media users. We’ve had two decades of billionaires deciding exactly how they wanted to intermediate your social experience. How your feed looks. What kind of algorithm you’ll see. What sorts of content will be put in your feed. They got to focus on engagement maxxing. You just had to deal with it.
In such a world, the only thing users felt they could do in response was to yell. They could yell at the CEOs of these platforms. Or at the government, telling them to yell at the CEOs of these platforms.
But with an AI tool that explores an open social ecosystem, you don’t need to yell at a CEO or a regulator. You can just tell the tool what you want, what you don’t want, how you want (or don’t want) to see it, and what context would be useful. It puts you in control.
And yes, sometimes it makes mistakes. It can recommend a story I’m not interested in. But, then I can just tell it that such and such story isn’t useful and why… and it will update the system for me.
Once again, I understand that some people hate any and all uses of AI. And I’m not suggesting you have to run out and use the tools yourself. You do you. But showing concrete use cases where these tools actually deliver more user agency — more control over your online environment, rather than deferring to the whims of any particular company — matters.
The larger point here isn’t really about Attie specifically (indeed, anyone could build their own version of this thanks to open protocols). It’s that for two decades, users have been trained to believe their only options are to accept whatever a platform gives them, or yell loudly enough that someone powerful might change it. That’s the learned helplessness I wrote about earlier, and it’s corrosive.
Tools like this — built on open protocols, not locked inside a corporate walled garden — represent a different path. One where you don’t petition a billionaire for a better feed algorithm. You don’t petition the government to try to put time limits on social media. You just build the experience you want. You tell it to make you a better interface that matches what you want. You tell it you don’t want to spend that much time. That’s what “protocols, not platforms” actually looks like in practice, helped along by agentic tools, and it’s why I think this matters well beyond whether any particular AI tool is good or not.
Filed Under: ai, attie, customization, decentralization, vibe coding
Companies: bluesky