
AI for Particle Physics: Searching for Anomalies


In 1930, a young physicist named Carl D. Anderson was tasked by his mentor with measuring the energies of cosmic rays—particles arriving at high speed from outer space. Anderson built an improved version of a cloud chamber, a device that visually records the trajectories of particles. In 1932, he saw evidence that confusingly combined the properties of protons and electrons. “A situation began to develop that had its awkward aspects,” he wrote many years after winning a Nobel Prize at the age of 31. Anderson had accidentally discovered antimatter.

Four years after his first discovery, he codiscovered another elementary particle, the muon. This one prompted one physicist to ask, “Who ordered that?”

Carl Anderson [top] sits beside the magnet cloud chamber he used to discover the positron. His cloud-chamber photograph [bottom] from 1932 shows the curved track of a positron, the first known antimatter particle. Caltech Archives & Special Collections

Over the decades since then, particle physicists have built increasingly sophisticated instruments of exploration. At the apex of these physics-finding machines sits the Large Hadron Collider, which in 2022 started its third operational run. This underground ring, 27 kilometers in circumference and straddling the border between France and Switzerland, was built to slam subatomic particles together at near light speed and test deep theories of the universe. Physicists from around the world turn to the LHC hoping to find something new; they're just not sure what.

It’s the latest manifestation of a rich tradition. Throughout the history of science, new instruments have prompted hunts for the unexpected. Galileo Galilei built telescopes and found Jupiter’s moons. Antonie van Leeuwenhoek built microscopes and noticed “animalcules, very prettily a-moving.” And still today, people peer through lenses and pore through data in search of patterns they hadn’t hypothesized. Nature’s secrets don’t always come with spoilers, and so we gaze into the unknown, ready for anything.


But novel, fundamental aspects of the universe are growing less forthcoming. In a sense, we’ve plucked the lowest-hanging fruit. We know to a good approximation what the building blocks of matter are. The Standard Model of particle physics, which describes the currently known elementary particles, has been in place since the 1970s. Nature can still surprise us, but it typically requires larger or finer instruments, more detailed or expansive data, and faster or more flexible analysis tools.

Those analysis tools include a form of artificial intelligence (AI) called machine learning. Researchers train complex statistical models to find patterns in their data, patterns too subtle for human eyes to see, or too rare for a single human to encounter. At the LHC, which smashes together protons to create immense bursts of energy that decay into other short-lived particles of matter, a theorist might predict some new particle or interaction and describe what its signature would look like in the LHC data, often using a simulation to create synthetic data. Experimentalists would then collect petabytes of measurements and run a machine learning algorithm that compares them with the simulated data, looking for a match. Usually, they come up empty. But maybe new algorithms can peer into corners they haven’t considered.

A New Path for Particle Physics

“You’ve heard probably that there’s a crisis in particle physics,” says Tilman Plehn, a theoretical physicist at Heidelberg University, in Germany. At the LHC and other high-energy physics facilities around the world, the experimental results have failed to yield insights on new physics. “We have a lot of unhappy theorists who thought that their model would have been discovered, and it wasn’t,” Plehn says.


Tilman Plehn


“We have a lot of unhappy theorists who thought that their model would have been discovered, and it wasn’t.”

Gregor Kasieczka, a physicist at the University of Hamburg, in Germany, recalls the field’s enthusiasm when the LHC began running in 2008. Back then, he was a young graduate student and expected to see signs of supersymmetry, a theory predicting heavier versions of the known matter particles. The presumption was that “we turn on the LHC, and supersymmetry will jump in your face, and we’ll discover it in the first year or so,” he tells me. Eighteen years later, supersymmetry remains in the theoretical realm. “I think this level of exuberant optimism has somewhat gone.”

The result, Plehn says, is that models for all kinds of things have fallen in the face of data. “And I think we’re going on a different path now.”

That path involves a kind of machine learning called unsupervised learning. In unsupervised learning, you don’t teach the AI to recognize your specific prediction—signs of a particle with this mass and this charge. Instead, you might teach it to find anything out of the ordinary, anything interesting—which could indicate brand new physics. It’s the equivalent of looking with fresh eyes at a starry sky or a slide of pond scum. The problem is, how do you automate the search for something “interesting”?


Going Beyond the Standard Model

The Standard Model leaves many questions unanswered. Why do matter particles have the masses they do? Why do neutrinos have mass at all? Where is the particle for transmitting gravity, to match those for the other forces? Why do we see more matter than antimatter? Are there extra dimensions? What is dark matter—the invisible stuff that makes up most of the universe’s matter and that we assume to exist because of its gravitational effect on galaxies? Answering any of these questions could open the door to new physics, or fundamental discoveries beyond the Standard Model.


The Large Hadron Collider at CERN accelerates protons to near light speed before smashing them together in hopes of discovering “new physics.”

CERN


“Personally, I’m excited for portal models of dark sectors,” Kasieczka says, as if reading from a Marvel film script. He asks me to imagine a mirror copy of the Standard Model out there somewhere, sharing only one “portal” particle with the Standard Model we know and love. It’s as if this portal particle has a second secret family.

Kasieczka says that in the LHC’s third run, scientists are splitting their efforts roughly evenly between measuring more precisely what they know to exist and looking for what they don’t know to exist. In some cases, the former could enable the latter. The Standard Model predicts certain particle properties and the relationships between them. For example, it correctly predicted a property of the electron called the magnetic moment to about one part in a trillion. And precise measurements could turn up internal inconsistencies. “Then theorists can say, ‘Oh, if I introduce this new particle, it fixes this specific problem that you guys found. And this is how you look for this particle,’” Kasieczka says.


An image from a single collision at the LHC shows an unusually complex spray of particles, flagged as anomalous by machine learning algorithms.


CERN

What’s more, the Standard Model has occasionally shown signs of cracks. Certain particles containing bottom quarks, for example, seem to decay into other particles in unexpected ratios. Plehn finds the bottom-quark incongruities intriguing. “Year after year, I feel they should go away, and they don’t. And nobody has a good explanation,” he says. “I wouldn’t even know who I would shout at”—the theorists or the experimentalists—“like, ‘Sort it out!’”

Exasperation isn’t exactly the right word for Plehn’s feelings, however. Physicists feel gratified when measurements reasonably agree with expectations, he says. “But I think deep down inside, we always hope that it looks unreasonable. Everybody always looks for the anomalous stuff. Everybody wants to see the standard explanation fail. First, it’s fame”—a chance for a Nobel—“but it’s also an intellectual challenge, right? You get excited when things don’t work in science.”

How Unsupervised AI Can Probe for New Physics

Now imagine you had a machine to find all the times things don’t work in science, to uncover all the anomalous stuff. That’s how researchers are using unsupervised learning. One day over ice cream, Plehn and a friend who works at the software company SAP began discussing autoencoders, one type of unsupervised learning algorithm. “He tells me that autoencoders are what they use in industry to see if a network was hacked,” Plehn remembers. “You have, say, a hundred computers, and they have network traffic. If the network traffic [to one computer] changes all of a sudden, the computer has been hacked, and they take it offline.”


In the LHC’s central data-acquisition room [top], incoming detector data flows through racks of electronics and field-programmable gate array (FPGA) cards [bottom] that decide which collision events to keep.

Fermilab/CERN

Autoencoders are neural networks that start with an input—it could be an image of a cat, or the record of a computer’s network traffic—and compress it, like making a tiny JPEG or MP3 file, and then decompress it. Engineers train them to compress and decompress data so that the output matches the input as closely as possible. Eventually a network becomes very good at that task. But if the data includes some items that are relatively rare—such as white tigers, or hacked computers’ traffic—the network performs worse on these, because it has less practice with them. The difference between an input and its reconstruction therefore signals how anomalous that input is.
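A minimal sketch of that compress-and-reconstruct scoring, written as a linear autoencoder (mathematically equivalent to principal-component analysis) on invented four-feature "events"; every number and feature relation here is illustrative, not LHC data:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Normal" events: four features that secretly obey two linear relations
# (f2 = f0 + f1 and f3 = f0 - f1), so the data is really two-dimensional.
a = rng.normal(size=500)
b = rng.normal(size=500)
normal_events = np.stack([a, b, a + b, a - b], axis=1)
normal_events += 0.01 * rng.normal(size=normal_events.shape)

# A linear autoencoder with a two-unit bottleneck is equivalent to PCA:
# "training" amounts to finding the top two principal components.
mean = normal_events.mean(axis=0)
_, _, vt = np.linalg.svd(normal_events - mean, full_matrices=False)
components = vt[:2]  # encoder weights: the two directions worth keeping

def anomaly_score(event):
    """Reconstruction error: small for events that fit the learned
    structure, large for events that break it."""
    x = np.asarray(event, dtype=float)
    code = (x - mean) @ components.T        # compress to 2 numbers
    recon = code @ components + mean        # decompress back to 4
    return float(np.sum((x - recon) ** 2))

typical_score = anomaly_score(normal_events[0])       # near zero
outlier_score = anomaly_score([1.0, 1.0, 5.0, 5.0])   # large: breaks both relations
```

The network never needs to be told what an anomaly looks like; anything it reconstructs poorly is, by definition, unlike the data it practiced on.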

“This friend of mine said, ‘You can use exactly our software, right?’” Plehn remembers. “‘It’s exactly the same question. Replace computers with particles.’” The two imagined feeding the autoencoder signatures of particles from a collider and asking: Are any of these particles not like the others? Plehn continues: “And then we wrote up a joint grant proposal.”

It’s not a given that AI will find new physics. Even learning what counts as interesting is a daunting hurdle. Beginning in the 1800s, men in lab coats delegated data processing to women, whom they saw as diligent and detail oriented. Women annotated photos of stars, and they acted as “computers.” In the 1950s, women were trained to scan bubble chambers, which recorded particle trajectories as lines of tiny bubbles in fluid. Physicists didn’t explain to them the theory behind the events, only what to look for based on lists of rules.


But, as the Harvard science historian Peter Galison writes in Image and Logic: A Material Culture of Physics, his influential account of how physicists’ tools shape their discoveries, the task was “subtle, difficult, and anything but routinized,” requiring “three-dimensional visual intuition.” He goes on: “Even within a single experiment, judgment was required—this was not an algorithmic activity, an assembly line procedure in which action could be specified fully by rules.”

Gregor Kasieczka

“We are not looking for flying elephants but instead a few extra elephants than usual at the local watering hole.”

Over the last decade, though, one thing we’ve learned is that AI systems can, in fact, perform tasks once thought to require human intuition, such as mastering the ancient board game Go. So researchers have been testing AI’s intuition in physics. In 2019, Kasieczka and his collaborators announced the LHC Olympics 2020, a contest in which participants submitted algorithms to find anomalous events in three sets of (simulated) LHC data. Some teams correctly found the anomalous signal in one dataset, but some falsely reported one in the second set, and they all missed it in the third. In 2020, a research collective called Dark Machines announced a similar competition, which drew more than 1,000 submissions of machine learning models. Decisions about how to score them led to different rankings, showing that there’s no best way to explore the unknown.

Another way to test unsupervised learning is to play revisionist history. In 1995, a particle dubbed the top quark turned up at the Tevatron, a particle accelerator at the Fermi National Accelerator Laboratory (Fermilab), in Illinois. But what if it actually hadn’t? Researchers applied unsupervised learning to LHC data collected in 2012, pretending they knew almost nothing about the top quark. Sure enough, the AI revealed a set of anomalous events that were clustered together. Combined with a bit of human intuition, they pointed toward something like the top quark.


Georgia Karagiorgi

“An algorithm that can recognize any kind of disturbance would be a win.”

That exercise underlines the fact that unsupervised learning can’t replace physicists just yet. “If your anomaly detector detects some kind of feature, how do you get from that statement to something like a physics interpretation?” Kasieczka says. “The anomaly search is more a scouting-like strategy to get you to look into the right corner.” Georgia Karagiorgi, a physicist at Columbia University, agrees. “Once you find something unexpected, you can’t just call it quits and be like, ‘Oh, I discovered something,’” she says. “You have to come up with a model and then test it.”

Kyle Cranmer, a physicist and data scientist at the University of Wisconsin-Madison who played a key role in the discovery of the Higgs boson particle in 2012, also says that human expertise can’t be dismissed. “There’s an infinite number of ways the data can look different from what you expected,” he says, “and most of them aren’t interesting.” Physicists might be able to recognize whether a deviation suggests some plausible new physical phenomenon, rather than just noise. “But how you try to codify that and make it explicit in some algorithm is much less straightforward,” Cranmer says. Ideally, the guidelines would be general enough to exclude the unimaginable without eliminating the merely unimagined. “That’s gonna be your Goldilocks situation.”


In his 1987 book How Experiments End, Harvard’s Galison writes that scientific instruments can “import assumptions built into the apparatus itself.” He tells me about a 1973 experiment that looked for a phenomenon called neutral currents, signaled by an absence of a so-called heavy electron (later renamed the muon). One team initially used a trigger left over from previous experiments, which recorded events only if they produced those heavy electrons—even though neutral currents, by definition, produce none. As a result, for some time the researchers missed the phenomenon and wrongly concluded that it didn’t exist. Galison says that the physicists’ design choice “allowed the discovery of [only] one thing, and it blinded the next generation of people to this new discovery. And that is always a risk when you’re being selective.”

How AI Could Miss—or Fake—New Physics

I ask Galison if by automating the search for interesting events, we’re letting the AI take over the science. He rephrases the question: “Have we handed over the keys to the car of science to the machines?” One way to alleviate such concerns, he tells me, is to generate test data to see if an algorithm behaves as expected—as in the LHC Olympics. “Before you take a camera out and photograph the Loch Ness Monster, you want to make sure that it can reproduce a wide variety of colors” and patterns accurately, he says, so you can rely on it to capture whatever comes.

Galison, who is also a physicist, works on the Event Horizon Telescope, which images black holes. For that project, he remembers putting up utterly unexpected test images like Frosty the Snowman so that scientists could probe the system’s general ability to catch something new. “The danger is that you’ve missed out on some crucial test,” he says, “and that the object you’re going to be photographing is so different from your test patterns that you’re unprepared.”

The algorithms that physicists are using to seek new physics are certainly vulnerable to this danger. It helps that unsupervised learning is already being used in many applications. In industry, it’s surfacing anomalous credit-card transactions and hacked networks. In science, it’s identifying earthquake precursors, genome locations where proteins bind, and merging galaxies.


But one difference with particle-physics data is that the anomalies may not be stand-alone objects or events. You’re looking not just for a needle in a haystack; you’re also looking for subtle irregularities in the haystack itself. Maybe a stack contains a few more short stems than you’d expect. Or a pattern reveals itself only when you simultaneously look at the size, shape, color, and texture of stems. Such a pattern might suggest an unacknowledged substance in the soil. In accelerator data, subtle patterns might suggest a hidden force. As Kasieczka and his colleagues write in one paper, “We are not looking for flying elephants, but instead a few extra elephants than usual at the local watering hole.”
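As a toy version of the extra-elephants idea (all numbers hypothetical): the anomaly is not any single strange event but a modest excess of ordinary-looking events over the background prediction, judged against the expected statistical fluctuation:

```python
import math

# Hypothetical counts: suppose Standard Model backgrounds predict 10,000
# events in some region of the data, and the detector records 10,450.
expected = 10_000
observed = 10_450

# For large counts, a Poisson fluctuation is about sqrt(expected), so the
# excess measured in standard deviations ("sigma") is:
excess_sigma = (observed - expected) / math.sqrt(expected)
# 450 extra events against a typical fluctuation of 100: a 4.5-sigma excess,
# tantalizing but still short of the field's discovery threshold.
```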

Even algorithms that weigh many factors can miss signals—and they can also see spurious ones. The stakes of mistakenly claiming discovery are high. Going back to the hacking scenario, Plehn says, a company might ultimately determine that its network wasn’t hacked; it was just a new employee. The algorithm’s false positive causes little damage. “Whereas if you stand there and get the Nobel Prize, and a year later people say, ‘Well, it was a fluke,’ people would make fun of you for the rest of your life,” he says. In particle physics, he adds, you run the risk of spotting patterns purely by chance in big data, or as a result of malfunctioning equipment.

False alarms have happened before. In 1976, a group at Fermilab led by Leon Lederman, who later won a Nobel for other work, announced the discovery of a particle they tentatively called the Upsilon. The researchers calculated the probability of the signal’s happening by chance as 1 in 50. After further data collection, though, they walked back the discovery, calling the pseudo-particle the Oops-Leon. (Today, particle physicists wait until the chance that a finding is a fluke drops below 1 in 3.5 million, the so-called five-sigma criterion.) And in 2011, researchers at the Oscillation Project with Emulsion-tRacking Apparatus (OPERA) experiment, in Italy, announced evidence for faster-than-light travel of neutrinos. Then, a few months later, they reported that the result was due to a faulty connection in their timing system.
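The five-sigma criterion is just the one-sided tail probability of a Gaussian distribution at five standard deviations, which a couple of lines of standard-library Python can confirm:

```python
import math

def sigma_to_p(sigma: float) -> float:
    """One-sided tail probability of a Gaussian at `sigma` standard deviations."""
    return 0.5 * math.erfc(sigma / math.sqrt(2))

p = sigma_to_p(5.0)   # about 2.87e-7
odds = 1.0 / p        # roughly 1 in 3.5 million
```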

Those cautionary tales linger in the minds of physicists. And yet, even while researchers are wary of false positives from AI, they also see it as a safeguard against them. So far, unsupervised learning has discovered no new physics, despite its use on data from multiple experiments at Fermilab and CERN. But anomaly detection may have prevented embarrassments like the one at OPERA. “So instead of telling you there’s a new physics particle,” Kasieczka says, “it’s telling you, this sensor is behaving weird today. You should restart it.”


Hardware for AI-Assisted Particle Physics

Particle physicists are pushing the limits of not only their computing software but also their computing hardware. The challenge is unparalleled. The LHC produces 40 million particle collisions per second, each of which can produce a megabyte of data. That's far too much information to store, even if you could write it to disk that quickly. So the two largest detectors each use two-level data filtering. The first layer, called the Level-1 Trigger, or L1T, keeps 100,000 events per second, and the second layer, called the High-Level Trigger, or HLT, plucks 1,000 of those events to save for later analysis. So only one in 40,000 events is ever potentially seen by human eyes.
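For concreteness, the arithmetic behind those filtering rates (the megabyte-per-event figure is the order-of-magnitude value quoted above):

```python
collisions_per_second = 40_000_000    # 40 million bunch crossings per second
bytes_per_event = 1_000_000           # ~1 MB per collision (order of magnitude)

# Raw data rate: 40 terabytes per second, far beyond what can be stored.
raw_bytes_per_second = collisions_per_second * bytes_per_event

l1t_events_per_second = 100_000   # kept by the FPGA-based Level-1 Trigger
hlt_events_per_second = 1_000     # kept by the CPU-based High-Level Trigger

# Fraction of collisions surviving both filters: 1 in 40,000.
kept_fraction = hlt_events_per_second / collisions_per_second
```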


Katya Govorkova

“That’s when I thought, we need something like [AlphaGo] in physics. We need a genius that can look at the world differently.”

HLTs use central processing units (CPUs) like the ones in your desktop computer, running complex machine learning algorithms that analyze collisions based on the number, type, energy, momentum, and angles of the new particles produced. L1Ts, as a first line of defense, must be fast. So the L1Ts rely on integrated circuits called field-programmable gate arrays (FPGAs), which users can reprogram for specialized calculations.


The trade-off is that the programming must be relatively simple. The FPGAs can't easily store and run fancy neural networks; instead they follow scripted rules about, say, what features of a particle collision make it important. In terms of complexity, it's the lists of rules given to the women who scanned bubble chambers, not the women's brains.

Ekaterina (Katya) Govorkova, a particle physicist at MIT, saw a path toward improving the LHC’s filters, inspired by a board game. Around 2020, she was looking for new physics by comparing precise measurements at the LHC with predictions, using little or no machine learning. Then she watched a documentary about AlphaGo, the program that used machine learning to beat a human Go champion. “For me the moment of realization was when AlphaGo would use some absolutely new type of strategy that humans, who played this game for centuries, hadn’t thought about before,” she says. “So that’s when I thought, we need something like that in physics. We need a genius that can look at the world differently.” New physics may be something we’d never imagine.

Govorkova and her collaborators found a way to compress autoencoders to fit them on FPGAs, where they process an event every 80 nanoseconds (less than a ten-millionth of a second). (Compression involved pruning some network connections and reducing the precision of some calculations.) They published their methods in Nature Machine Intelligence in 2022, and researchers are now using them during the LHC's third run. The new trigger tech is installed in one of the detectors around the LHC's giant ring, and it has found many anomalous events that would otherwise have gone unflagged.
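This is not the published code, but the two compression tricks mentioned, pruning small weights and reducing numerical precision to FPGA-friendly fixed point, can be sketched in a few lines; the weight values and cutoffs are arbitrary illustrations:

```python
def prune(weights, threshold=0.05):
    """Zero out connections whose weights are too small to matter.
    (The 0.05 cutoff is an arbitrary illustration.)"""
    return [0.0 if abs(w) < threshold else w for w in weights]

def quantize(w, fractional_bits=6):
    """Round a weight to the nearest multiple of 2**-6, i.e. six fractional
    bits of fixed-point precision, the kind of arithmetic FPGA logic
    handles cheaply."""
    scale = 1 << fractional_bits
    return round(w * scale) / scale

weights = [0.731, -0.02, 0.005, -0.519, 0.044]
compressed = [quantize(w) for w in prune(weights)]
# compressed == [0.734375, 0.0, 0.0, -0.515625, 0.0]
```

Pruning shrinks the network, and quantization replaces slow floating-point math with cheap integer-like operations; both sacrifice a little accuracy for the nanosecond-scale speed the trigger demands.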

Researchers are currently setting up analysis workflows to decipher why the events were deemed anomalous. Jennifer Ngadiuba, a particle physicist at Fermilab who is also one of the coordinators of the trigger system (and one of Govorkova’s coauthors), says that one feature stands out already: Flagged events have lots of jets of new particles shooting out of the collisions. But the scientists still need to explore other factors, like the new particles’ energies and their distributions in space. “It’s a high-dimensional problem,” she says.


Eventually they will share the data openly, allowing others to eyeball the results or to apply new unsupervised learning algorithms in the hunt for patterns. Javier Duarte, a physicist at the University of California, San Diego, and also a coauthor on the 2022 paper, says, “It’s kind of exciting to think about providing this to the community of particle physicists and saying, like, ‘Shrug, we don’t know what this is. You can take a look.’” Duarte and Ngadiuba note that high-energy physics has traditionally followed a top-down approach to discovery, testing data against well-defined theories. Adding in this new bottom-up search for the unexpected marks a new paradigm. “And also a return of sorts to before the Standard Model was so well established,” Duarte adds.

Yet it could be years before we know why AI marked those collisions as anomalous. What conclusions could they support? “In the worst case, it could be some detector noise that we didn’t know about,” which would still be useful information, Ngadiuba says. “The best scenario could be a new particle. And then a new particle implies a new force.”


Jennifer Ngadiuba

“The best scenario could be a new particle. And then a new particle implies a new force.”


Duarte says he expects their work with FPGAs to have wider applications. “The data rates and the constraints in high-energy physics are so extreme that people in industry aren’t necessarily working on this,” he says. “In self-driving cars, usually millisecond latencies are sufficient reaction times. But we’re developing algorithms that need to respond in microseconds or less. We’re at this technological frontier, and to see how much that can proliferate back to industry will be cool.”

Plehn is also working to put neural networks on FPGAs for triggers, in collaboration with experimentalists, electrical engineers, and other theorists. Encoding the nuances of abstract theories into material hardware is a puzzle. “In this grant proposal, the person I talked to most is the electrical engineer,” he says, “because I have to ask the engineer, which of my algorithms fits on your bloody FPGA?”

Hardware is hard, says Ryan Kastner, an electrical engineer and computer scientist at UC San Diego who works with Duarte on programming FPGAs. What allows the chips to run algorithms so quickly is their flexibility. Instead of programming them in an abstract coding language like Python, engineers configure the underlying circuitry. They map logic gates, route data paths, and synchronize operations by hand. That low-level control also makes the effort “painfully difficult,” Kastner says. “It’s kind of like you have a lot of rope, and it’s very easy to hang yourself.”

Seeking New Physics Among the Neutrinos

The next piece of new physics may not pop up at a particle accelerator. It may appear at a detector for neutrinos, particles that are part of the Standard Model but remain deeply mysterious. Neutrinos are tiny, electrically neutral, and so light that no one has yet measured their mass. (The latest attempt, in April, set an upper limit of about a millionth the mass of an electron.) Of all known particles with mass, neutrinos are the universe’s most abundant, but also among the most ghostly, rarely deigning to acknowledge the matter around them. Tens of trillions pass through your body every second.


If we listen very closely, though, we may just hear the secrets they have to tell. Karagiorgi, of Columbia, has chosen this path to discovery. Being a physicist is “kind of like playing detective, but where you create your own mysteries,” she tells me during my visit to Columbia’s Nevis Laboratories, located on a large estate about 20 km north of Manhattan. Physics research began at the site after World War II; one hallway features papers going back to 1951.


A researcher stands inside a prototype for the Deep Underground Neutrino Experiment, which is designed to detect rare neutrino interactions.

CERN


Karagiorgi is eagerly awaiting a massive neutrino detector that’s currently under construction. Starting in 2028, Fermilab will send neutrinos west through 1,300 km of rock to South Dakota, where they’ll occasionally make their existence known in the Deep Underground Neutrino Experiment (DUNE). Why so far away? When neutrinos travel long distances, they have an odd habit of oscillating, transforming from one kind or “flavor” to another. Observing the oscillations of both the neutrinos and their mirror-image antiparticles, antineutrinos, could tell researchers something about the universe’s matter-antimatter asymmetry—which the Standard Model doesn’t explain—and thus, according to the Nevis website, “why we exist.”

“DUNE is the thing that’s been pushing me to develop these real-time AI methods,” Karagiorgi says, “for sifting through the data very, very, very quickly and trying to look for rare signatures of interest within them.” When neutrinos interact with the detector’s 70,000 tonnes of liquid argon, they’ll generate a shower of other particles, creating visual tracks that look like a photo of fireworks.


The Standard Model catalogs the known fundamental particles of matter and the forces that govern them, but leaves major mysteries unresolved.

Even when not bombarding DUNE with neutrinos, researchers will keep collecting data on the off chance that it captures neutrinos from a distant supernova. “This is a massive detector spewing out 5 terabytes of data per second,” Karagiorgi says, “and it’s going to run constantly for a decade.” They will need unsupervised learning to notice signatures that no one was looking for, because there are “lots of different models of how supernova explosions happen, and for all we know, none of them could be the right model for neutrinos,” she says. “To train your algorithm on such uncertain grounds is less than ideal. So an algorithm that can recognize any kind of disturbance would be a win.”


Deciding in real time which 1 percent of 1 percent of data to keep will require FPGAs. Karagiorgi’s team is preparing to use them for DUNE, and she walks me to a computer lab where they program the circuits. In the FPGA lab, we look at nondescript circuit boards sitting on a table. “So what we’re proposing is a scheme where you can have something like a hundred of these boards for DUNE deep underground that receive the image data frame by frame,” she says. This system could tell researchers whether a given frame resembled TV static, fireworks, or something in between.

Neutrino experiments, like many particle-physics studies, are very visual. When Karagiorgi was a postdoc, automated image processing at neutrino detectors was still in its infancy, so she and collaborators would often resort to visual scanning (bubble-chamber style) to measure particle tracks. She still asks undergrads to hand-scan as an educational exercise. “I think it’s wrong to just send them to write a machine learning algorithm. Unless you can actually visualize the data, you don’t really gain a sense of what you’re looking for,” she says. “I think it also helps with creativity to be able to visualize the different types of interactions that are happening, and see what’s normal and what’s not normal.”

Back in Karagiorgi’s office, a bulletin board displays images from The Cognitive Art of Feynman Diagrams, an exhibit for which the designer Edward Tufte created wire sculptures of the physicist Richard Feynman’s schematics of particle interactions. “It’s funny, you know,” she says. “They look like they’re just scribbles, right? But actually, they encode quantitatively predictive behavior in nature.” Later, Karagiorgi and I spend a good 10 minutes discussing whether a computer or a human could find Waldo without knowing what Waldo looked like. We also touch on the 1964 Supreme Court case in which Justice Potter Stewart famously declined to define obscenity, saying “I know it when I see it.” I ask whether it seems weird to hand over to a machine the task of deciding what’s visually interesting. “There are a lot of trust issues,” she says with a laugh.

On the drive back to Manhattan, we discuss the history of scientific discovery. “I think it’s part of human nature to try to make sense of an orderly world around you,” Karagiorgi says. “And then you just automatically pick out the oddities. Some people obsess about the oddities more than others, and then try to understand them.”

Reflecting on the Standard Model, she calls it "beautiful and elegant," with "amazing predictive power." Yet she finds it both limited and limiting, blinding us to colors we don't yet see. "Sometimes it's both a blessing and a curse that we've managed to develop such a successful theory."



Tech

Australia’s NEXTDC launches A$2.2 billion capital plan

The ASX-listed data centre operator is raising A$1.5 billion in a fully underwritten equity offering and expanding its hybrid securities programme by A$700 million, with La Caisse de dépôt et placement du Québec now committed to a total of A$1.7 billion.

The raise will fund accelerated development of the S4 Western Sydney campus, where contracted utilisation jumped 250 megawatts in a single quarter.


NEXTDC (ASX: NXT), Australia’s largest independent data centre operator, has halted trading to launch a A$2.2 billion capital plan anchored by a fully underwritten A$1.5 billion equity entitlement offer, the company announced on Monday.

The raise is a direct response to a step-change in demand: between December 2025 and 31 March 2026, NEXTDC’s pro forma contracted utilisation jumped 250 megawatts, a 60% increase in a single quarter, to reach 667MW.

Its forward order book grew 83% over the same period to 544MW, driven by hyperscale cloud providers and AI infrastructure customers.

The equity component is structured as a 1-for-5.4 pro-rata accelerated non-renounceable entitlement offer, priced at A$12.70 per share, an 8.6% discount to the theoretical ex-rights price of A$13.90.
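As an arithmetic check on the stated terms (an illustration only, using the figures reported above):

```python
# Offer price vs theoretical ex-rights price (TERP), per the
# announced terms: A$12.70 against a TERP of A$13.90.
offer_price = 12.70
terp = 13.90

discount = 1 - offer_price / terp
print(f"{discount:.1%}")  # prints 8.6%, matching the stated discount
```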

New shares are expected to be issued to retail shareholders by 18 May, with the institutional bookbuild already underway at the time of the halt. Prior to the suspension, NEXTDC shares had risen approximately 25% through April, reflecting mounting investor enthusiasm for data centre infrastructure plays across Asia-Pacific.

The A$2.2 billion total capital plan combines the A$1.5 billion equity offer with a A$700 million expansion of the company’s hybrid securities programme.

NEXTDC’s hybrid securities, which are deeply subordinated instruments ranking junior to all existing debt, had previously been backed by a A$1 billion binding commitment from La Caisse de dépôt et placement du Québec (CDPQ), Canada’s second-largest pension fund with approximately C$517 billion in assets.

The expanded commitment brings La Caisse’s total backing to A$1.7 billion, cementing what the Canadian investor described as a “promising first step toward a long-term partnership” with NEXTDC.

The primary use of proceeds is the accelerated development of S4, NEXTDC’s data centre campus in Western Sydney, where the company intends to invest approximately A$1.5 billion through the end of financial year 2027.

A record 250MW customer commitment at S4 during the quarter is what triggered the announcement: CEO Craig Scroggie described the capital raise as a way to “materially expand NEXTDC’s contracted capacity and de-risk the company’s Western Sydney developments ahead of potential strategic partnership transactions with private capital partners from 2027.”

That last phrase signals intent to bring in joint venture partners or asset-level investors once the facility is contracted and de-risked, a common monetisation mechanism for large-scale data centre infrastructure.

The financial guidance accompanying the announcement is striking. NEXTDC raised its FY26 capital expenditure guidance by A$300 million to a range of A$2.7 billion to A$3.0 billion.

For FY27, capex is forecast at approximately A$5.0 billion. The company is simultaneously maintaining its existing FY26 revenue and EBITDA guidance while projecting that contracted EBITDA from existing customer agreements alone will exceed A$1 billion over time, roughly four times the midpoint of current FY26 guidance of A$235 million.

Following the raise and recent funding activity, NEXTDC expects pro forma liquidity of approximately A$5.9 billion.

NEXTDC operates or is developing 20 data centres across Australia (Sydney, Melbourne, Brisbane, Perth, Port Hedland, Canberra, Adelaide, the Sunshine Coast, and Darwin) and is evaluating sites in Tokyo; Bangkok; Johor and Kuala Lumpur in Malaysia; and Singapore.

Australia’s deployable data centre capacity stands at approximately 1,350 megawatts today, with consensus forecasts projecting 3,100 MW by 2030–31 and potentially up to 7.4 gigawatts by 2035 under AI-driven scenarios.

NSW has endorsed A$51.9 billion worth of data centre projects through its Investment Delivery Authority, effectively concentrating approvals, and the grid connections and planning support that come with them, in a small number of qualified operators.

Tech

DIY Nuclear Battery With PV Cells And Tritium

Nuclear batteries are pretty simple devices that are conceptually rather similar to photovoltaic (PV) solar, just using the radiation from a radioisotope rather than solar radiation. It's also possible to make your own nuclear battery, with [Double M Innovations] putting together a version that uses standard PV cells combined with small tritium vials as the radiation source.

The PV cells are the amorphous type, rated for 2.4 V, which means that they're not too fussy about the exact wavelength at the cost of some general efficiency. You generally find these on solar-powered calculators for this reason. Meanwhile the tritium vials have an inner coating of phosphor so they glow. With a couple of these vials sandwiched between two amorphous cells, you have something that could technically be called a 'nuclear battery'.

With a half-life of roughly 12 years, tritium isn't intensely radioactive, so the glow from the phosphor is not really visible in daylight. With this DIY battery fully wrapped in aluminium foil to block outside light, it does generate some current in the nanoamp range, with both single-cell and series configurations producing about 0.5 V.

A 170 VAC-rated capacitor is connected to collect some current over time, with just under 3 V measured after a night of charging. How much of the power comes from the phosphor and how much from sources like thermal radiation is hard to say in this setup. However, if you can match the PV cell's bandgap more closely to the radiation source, you should be able to pull at least a few mW from a DIY nuclear battery, as seen with commercial examples.
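The overnight charge can be sanity-checked with a back-of-the-envelope calculation. Both values below are assumptions: the write-up gives only "nanoamp range" for the current and a 170 VAC voltage rating, not a capacitance:

```python
# Rough charge-time estimate for the capacitor experiment.
C = 10e-6   # farads: assumed 10 uF storage capacitor
V = 3.0     # volts: roughly what was measured after a night
I = 1e-9    # amps: assumed 1 nA from the tritium/PV stack

# Treating the source as an ideal constant-current supply,
# the time to accumulate a charge of C * V is t = C * V / I.
t_seconds = C * V / I
print(t_seconds / 3600)  # about 8.3 hours, i.e. a night of charging
```

With those assumed values the numbers line up, but a real PV cell is far from an ideal current source, so this is order-of-magnitude reasoning at best.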

This isn’t the first time we’ve seen this particular trick. A few years ago, a similar setup was used to power a handheld game, as long as you don’t mind waiting a few months for it to charge.

Tech

Palantir posts mini-manifesto denouncing inclusivity and ‘regressive’ cultures

Surveillance and analytics company Palantir recently posted what it called a “brief” 22-point summary of CEO Alex Karp’s book “The Technological Republic.”

Written by Karp and Palantir’s head of corporate affairs, Nicholas Zamiska, “The Technological Republic” was published last year and described by its authors as “the beginnings of the articulation of the theory” behind Palantir’s work. (One critic said it was “not a book at all, but a piece of corporate sales material.”)

The company’s ideological bent has come under more scrutiny since then, as tech industry figures have debated Palantir’s work with Immigration and Customs Enforcement (ICE), and as the company has positioned itself as an organization working for the defense of “the West.”

In fact, congressional Democrats recently sent a letter to ICE and the Department of Homeland Security demanding more information about how tools built by Palantir and “a range of surveillance companies” are being used in the Trump administration’s aggressive deportation strategy.

Palantir’s post doesn’t reference much of that context directly, simply saying that it’s providing the summary “because we get asked a lot.” It then suggests that “Silicon Valley owes a moral debt to the country that made its rise possible” and declares that “free email is not enough.”

“The decadence of a culture or civilization, and indeed its ruling class, will be forgiven only if that culture is capable of delivering economic growth and security for the public,” the company says.

The post is wide-ranging, at one point criticizing a culture that “almost snickers at [Elon] Musk’s interest in grand narrative” and at another point touching on recent debates about the use of artificial intelligence by the military.

“The question is not whether A.I. weapons will be built; it is who will build them and for what purpose,” Palantir says. “Our adversaries will not pause to indulge in theatrical debates about the merits of developing technologies with critical military and national security applications. They will proceed.”

Similarly, the company suggests that “the atomic age is ending,” while “a new era of deterrence built on A.I. is set to begin.”

The post also takes a moment to denounce the “postwar neutering of Germany and Japan,” adding that the “defanging of Germany was an overcorrection for which Europe is now paying a heavy price” and that “a similar and highly theatrical commitment to Japanese pacifism” could “threaten to shift the balance of power in Asia.” 

The post ends by criticizing “the shallow temptation of a vacant and hollow pluralism.” In Palantir’s argument, a blind devotion to pluralism and inclusivity “glosses over the fact that certain cultures and indeed subcultures . . . have produced wonders. Others have proven middling, and worse, regressive and harmful.”

After Palantir posted this on Saturday, Eliot Higgins, the CEO of the investigative website Bellingcat, dryly remarked that it was “extremely normal and fine for a company to put this in a public statement.”

Higgins also argued that there’s more to the post than a simple “defense of the West” — in his view, it’s an attack on what he said are key pillars of democracy that need rebuilding: verification, deliberation, and accountability.

“It’s also worth being clear about who’s doing the arguing,” Higgins wrote. “Palantir sells operational software to defense, intelligence, immigration & police agencies. These 22 points aren’t philosophy floating in space, they’re the public ideology of a company whose revenue depends on the politics it’s advocating.”

Tech

Home Depot’s spring sale is, dare I say, better than Black Friday? 40% off patio furniture, appliances, grills, and more

Home Depot has launched a massive spring sale, appropriately named 'Spring Black Friday', with up to 40% in savings on patio furniture, appliances, grills, lawn mowers, tools and more.

Shop Home Depot’s full spring sale

As TechRadar’s deals editor and a huge fan of Home Depot, I’ve gone through Home Depot’s sale and hand-picked the best deals. While Home Depot’s Black Friday sale is always a popular event, with impressive savings, Home Depot’s spring sale is even better, because you get to save on seasonal items.

The retailer has record-low prices on outdoor essentials like patio furniture, gardening tools and grills, as well as Black Friday-like discounts on major appliances, including refrigerators, washing machines and dishwashers from brands like LG, Samsung and Whirlpool.

You’ll find links to Home Depot’s most popular sale categories below, followed by my pick of the top deals. Keep in mind that Home Depot’s sale ends on April 29, so time is running out to score spring savings.

Tech

Zoom Partners With Sam Altman’s Iris-Scanning Company To Offer Callers Verifications of Humanness

Zoom "has partnered with World, Sam Altman's iris-scanning identity company (previously known as Worldcoin)," reports Digital Trends, "to add real-time human verification inside meetings."

Zoom is now inviting organizations to join the beta version of the rollout, which Digital Trends says "lets hosts confirm that every face on the call belongs to a real person, not an AI-generated imposter."

For those wondering how World’s Deep Face technology works, it includes a three-step process. It cross-references a signed image from a user’s original Orb registration, a live face scan from the device, and the frame of the video that’s visible to the other participants in the meeting. Only when the three samples match does a “Verified Human” badge appear next to the user’s name…
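World has not published the matching internals, so here is a purely hypothetical sketch of what a three-sample check could look like. The function names, the use of cosine similarity over face embeddings, and the threshold are all assumptions, not World's actual method:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verified_human(orb_embedding, live_scan, video_frame, threshold=0.9):
    """Show the badge only when all three samples agree pairwise:
    the signed Orb-registration image, the live device scan, and
    the visible video frame must all match one another."""
    pairs = [(orb_embedding, live_scan),
             (orb_embedding, video_frame),
             (live_scan, video_frame)]
    return all(cosine(a, b) >= threshold for a, b in pairs)

# Toy embeddings: the same face (with small variations) verifies;
# a mismatched video frame, deepfake-style, does not.
person = np.array([1.0, 0.2, 0.1])
imposter = np.array([-0.5, 1.0, 0.3])
print(verified_human(person, person * 1.01, person * 0.99))  # True
print(verified_human(person, person, imposter))              # False
```

The point of requiring all three pairwise matches is that an imposter who can fake any one sample (say, the video frame) still fails the checks against the other two.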

Hosts can also make Deep Face verification mandatory for joining meetings, preventing unverified participants from joining entirely. Mid-call, on-the-spot checks are also possible…

Tech

Threads redesigns web interface and adds direct messages to desktop for the first time

Summary: Threads head Connor Hayes previewed a redesigned web interface that adds direct messages, a navigation sidebar with shortcuts to saved posts and insights, and a cleaner single-feed layout replacing the current multi-column design. DMs, which launched on mobile in June 2025, will roll out on web “over the coming weeks,” bringing one-on-one chats, group conversations of up to 50, and media sharing to the platform’s most engaged desktop users as Threads surpasses 450 million monthly active users and begins scaling its global advertising business.

Threads is getting a redesigned web interface that adds direct messages, a navigation sidebar, and quicker access to features that were previously buried in the mobile-first layout. Connor Hayes, who took over as head of Threads in September 2025, previewed the changes in a post on the platform this week, writing that "web is an important part of how our most engaged users interact with Threads, and we'll be investing more here going forward." Messages on the web version are not yet in public testing, Hayes said, but users should "start to see them appear over the coming weeks."

The redesign replaces the current multi-column layout with a cleaner single-feed view anchored by a left-side navigation rail. The sidebar includes shortcuts to saved posts, performance insights, activity, notifications, and the ability to switch between feeds, all features that exist on the mobile app but required multiple taps or profile navigation to find on the web. The result looks significantly more like X’s desktop layout, which is either a pragmatic design choice or an admission that the format Threads was trying to replace turned out to be the right one.

DMs finally reach the desktop

Direct messages launched on the Threads mobile app in June 2025, nearly two years after the platform itself launched. The web version has operated without them since, meaning that the users Hayes describes as “most engaged,” those who use Threads on a computer, have been unable to access one of the platform’s core communication features. The web rollout will bring one-on-one chats, group conversations of up to 50 people, emoji reactions, and the ability to send photos, GIFs, and stickers.

Threads has been building out its messaging infrastructure steadily. In January, it launched a basketball mini-game within DMs. In February, it began testing a shortcut that converts the phrase “DM me” in a post into a clickable link that opens a direct message. The messaging system is built on Instagram’s infrastructure, which gives it reliability but also ties it to a platform with different privacy expectations and content norms.

The redesign preview came one day after Hayes showed changes to how replies look on mobile. Replies under a post will now be indented to make conversation threads easier to follow, a feature rolling out on iOS and currently testing on Android.

The competitive context

Threads has grown faster than any social platform in history and now has more than 450 million monthly active users, with daily active users estimated at roughly 137 to 141 million. In January, Similarweb data showed Threads had surpassed X in daily mobile users, 141.5 million to 125 million, a milestone that would have seemed improbable when the app launched as a text-based companion to Instagram in July 2023.

The growth has come alongside a broader decline of X under Elon Musk’s ownership, which has pushed users, advertisers, and publishers toward alternatives. Bluesky, which raised $100 million in its Series B and has grown to 43 million users under new CEO Toni Schneider, has captured a vocal segment of the market. But Threads’ integration with Instagram’s 2 billion-plus user base gives it a distribution advantage that no standalone competitor can match.

The web redesign is part of a shift from growth to retention. Threads has the users. What it has lacked is the feature depth that makes a platform indispensable for the power users who drive conversation and content creation. DMs, a proper desktop experience, and improved reply threading address the specific complaints that have kept some users treating Threads as a secondary platform rather than a primary one.

Monetisation and Meta’s broader bet

Meta began rolling out ads on Threads globally in late January 2026, after testing in the US and Japan throughout 2025. The rollout uses Meta’s existing Ads Manager and supports image, video, and carousel formats through both Advantage+ and manual campaigns. Early pricing has been lower than Facebook and Instagram, with CPMs estimated at $3 to $8 and cost per click at $0.30 to $1.50, reflecting the early stage of advertiser competition on the platform. Evercore ISI analysts have projected Threads advertising revenue of $8 billion by the end of 2025 and $11.3 billion by 2026.

The advertising rollout gives the web redesign commercial significance beyond user experience. Desktop users tend to have higher engagement times and are more valuable to advertisers. A web interface that keeps users on the platform longer and adds messaging, which increases session frequency, directly supports the revenue trajectory that analysts are projecting.

Hayes was appointed to lead Threads in July 2025, taking over from Adam Mosseri, who had been running the platform directly alongside Instagram. Hayes previously served as Meta’s VP of product for generative AI and spent 14 years at the company in various product roles, including a stint growing Instagram Reels. Mosseri said at the time that “given Threads’ maturity, we think we need a dedicated app lead who can focus all of their time on helping Threads move forward.” The web redesign and DM rollout are the most visible results of that dedicated focus.

Threads is also the largest platform running on the ActivityPub protocol, allowing users to share posts to Mastodon, WordPress, and other fediverse-compatible services. Meta says it has interacted with over 75% of all fediverse servers, though full account portability is not yet available.

The redesign is incremental rather than transformative. It brings the web version closer to feature parity with the mobile app, which is itself still catching up to the feature set that X has built over 17 years. But for a platform that has Meta’s resources behind it, 450 million monthly users in front of it, and a growing creator economy to support, the gap between what Threads offers and what its most engaged users expect is closing faster than most new platforms manage. Hayes is signalling that the web is where the next phase of that closure will happen.

Tech

Wharfedale puts Heritage front and centre with new home cinema speaker

Filling a gap that fans of its retro-inspired speaker range have long identified, Wharfedale has introduced the Heritage Centre.

The Heritage Centre is a dedicated centre-channel speaker built to integrate with Wharfedale's Linton, Super Linton, Denton, and Dovedale models, the speakers that have made the Heritage Series one of its most successful lines in recent memory.

The absence of a centre speaker has been a barrier for Heritage owners wanting to build a multichannel home cinema system as the range has until now been limited to stereo pairs.

That’s left buyers to either mix in a mismatched centre channel or go without one entirely when configuring a 3.1 or 5.1 channel setup. Now that’s no longer an issue.

Image: Wharfedale Heritage Centre drivers (Credit: Wharfedale)

Wharfedale’s solution draws directly from the Super Denton’s driver architecture, adopting the same three-way configuration used across the broader Heritage range to keep the technical foundation consistent across the full speaker family.

That configuration pairs twin 165mm woven Kevlar bass drivers with a 50mm fabric dome midrange and a 25mm fabric dome treble unit, with all three driver types adapted directly from those developed for the Super Denton.

The midrange driver covers the 900Hz to 2.7kHz frequency band, the range most responsible for vocal clarity and dialogue intelligibility in film and television. The treble unit uses a damped rear chamber to push its resonant frequency well below the crossover point to keep high-frequency reproduction clean across a wide listening area.

Cabinet construction uses layered particle board and MDF bonded with a resonance-damping adhesive. It’s a build approach designed to distribute panel resonances across multiple frequencies rather than concentrating them at a single audible point. The internal bracing adds further control over cabinet colouration.

Peter Comeau, Wharfedale's Director of Acoustic Design, said: "The Heritage Series was originally conceived purely for the enjoyment of stereo music, but the speakers' richly expressive sonic qualities lend themselves perfectly to other forms of AV entertainment. When the demand for a dedicated centre speaker for people building multichannel systems with Linton and Denton speakers became clear, we embarked on the project with the rigorous attention to engineering detail applied to every Heritage model."

Real-wood veneers in walnut, mahogany, or black oak finish the cabinet to a hand-polished satin lacquer, maintaining visual consistency with the full Heritage range across all three finish options.

The Wharfedale Heritage Centre arrives in late May, priced at £649, available in walnut, mahogany, or black oak to match whichever Heritage speaker system it sits in.

Tech

IKEA’s iconic Billy bookcase just got a makeover, so here’s how to style it in your home

Simple and iconic, IKEA's Billy bookcase has been around since the 1970s, and over 140 million have been sold worldwide. The classic wood and white finishes are timeless, but now it's got a new look for 2026 with a limited-edition blue version.

A bold piece of furniture like this needs the right styling, and as TechRadar’s Homes Editor, I like making it pop by teaming it with black and white for a striking effect. This compelling cobalt bookcase would look particularly good in a home office, with an IKEA Kallax desk in black/brown, and white accessories.

Tech

MOM reveals where new jobs are created & how much they pay

Disclaimer: Unless otherwise stated, any opinions expressed below belong solely to the author.

In March, Singapore's Ministry of Manpower (MOM) released its annual summary of job creation efforts that took place in the year before, highlighting the skills and expertise needed in both PMET and non-PMET professions.

For this piece, we focus on PMET roles—Professionals, Managers, Executives and Technicians—where many of Singapore’s best and most sought-after jobs are often concentrated (although some statistics may overlap).

Why are new jobs created in Singapore?

MOM’s analysis begins with the fundamental question: are Singaporean employers looking for replacements or are they genuinely adding new openings to their offer?

Fortunately, last year brought the highest reading yet: 49.3% of vacancies were for completely new positions. This means that local companies are looking for more people and are not simply rotating staff.

What’s more, a record-high share of this expansion is being driven by businesses creating entirely new functions. In 34.7% of cases, job growth came from new roles rather than the expansion of existing operations, which, unsurprisingly, still accounts for the majority at 55.8%.

It suggests that 2025, despite the fears caused by the US tariffs, was a very dynamic year, and companies still ventured into new areas.

Where are the jobs created?

Where are those new areas found, then?

Well, as has been typical over the past few years, the industry with the highest share of fresh openings remains Information & Communications, where close to three-quarters of vacancies are for roles that did not exist before.

It is followed by Construction (though it’s most likely driven by non-PMET employment), as well as Professional Services and Finance & Insurance, where more than half of the jobs on offer are new.

That is great news, of course, given that some of the best-paid roles are found in Singapore’s corporate sector.

Who are these jobs for?

Qualified people, naturally, but as we explained on Vulcan Post recently, paper degrees matter less and less, even for PMETs, where 70% of employers stated that academic qualifications are not their main consideration.

This doesn't mean they don't matter at all, but if all you have is a paper qualification rather than practical experience, your job search may be considerably longer. Employers, meanwhile, have struggled to fill some vacancies for more than six months (listed in the table below).

Lack of skills and experience are the two primary reasons these roles remain on the market, with not enough talent available to fill them. Employer expectations aren't excessive, but over half of those looking for PMET specialists expect at least two to five years of prior experience on the job.

Only one in five is willing to employ complete newbies.

Here’s a more specific breakdown by industry:

If you’re a fresh graduate or someone without experience at a particular job, your best chance may be to look for something in the public sector, as it is the most open to candidates without a long CV. It also pays well and looks for applicants with greater educational attainment.

So, if you have a degree but are struggling for work, perhaps take a look at what state administration or education are offering.

How much do they pay?

Finally, let’s talk about the money.

Here’s the list of the Top 10 most in-demand PMET jobs, compiled from the data collected in 2025, together with the salaries you can expect.

Top 10 PMET Vacancies in 2025

Rank, occupation, and range of wages offered:
1. Teaching & Training Professional: S$2,611 to S$8,580
2. Commercial & Marketing Sales Executive: S$3,000 to S$4,350
3. Software, Web & Multimedia Developer: S$7,000 to S$10,000
4. Policy & Planning Manager: S$4,800 to S$9,700
5. Electronics Engineer: S$5,000 to S$8,000
6. Civil Engineer: S$3,500 to S$5,500
7. Industrial & Production Engineer: S$4,200 to S$6,775
8. Accountant: S$4,550 to S$6,700
9. Systems Analyst: S$6,000 to S$9,700
10. Financial & Investment Adviser: S$7,500 to S$12,000
Source: Job Vacancies 2025/ Singapore Ministry of Manpower

The podium is occupied by the same jobs as last year, with a switch between second and third places. But it’s the teachers who are still in the highest demand, while the upper pay band places their earnings at over S$100,000 per year. Not bad.
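The "over S$100,000" figure follows directly from the table's top monthly band for teachers:

```python
# Annualise the upper wage band for Teaching & Training Professionals.
top_monthly = 8580        # S$ per month, top of the listed range
annual = top_monthly * 12
print(annual)             # prints 102960, i.e. over S$100,000 a year
```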

Software developers and related IT experts are still highly needed—and highly paid, as are Electronics Engineers, System Analysts and Financial Advisers.

Other jobs may not be quite as lucrative, but their availability should make up for it, as many Singaporeans (including young grads) are looking for their way into the labour market.


Featured Image Credit: Google Street View

Tech

China narrows US lead to 2.7% while spending 23x less on AI investment

In short: Stanford’s 2026 AI Index Report finds the performance gap between the best American and Chinese AI models has collapsed to 2.7%, down from 17.5-31.6 percentage points in May 2023, despite the US spending 23 times more on private AI investment ($285.9 billion vs $12.4 billion). China leads in AI patents (69.7% of global filings), publications (23.2% of global output), industrial robot installations (9x the US rate), and energy infrastructure, while AI talent migration to the US has dropped 89% since 2017.

The performance gap between the best American and Chinese AI models has collapsed to 2.7%, according to the 2026 AI Index Report published this week by Stanford University’s Institute for Human-Centered Artificial Intelligence. In May 2023, the gap was between 17.5 and 31.6 percentage points across major benchmarks. As of March 2026, Anthropic’s Claude Opus 4.6 leads the global leaderboard with an Arena score of 1,503, while ByteDance’s Dola-Seed-2.0-Preview sits at 1,464, a difference of 39 points. DeepSeek’s R1 reasoning model briefly matched the top US model in February 2025, and American and Chinese models have traded the lead multiple times since.
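To put that 39-point gap in perspective: Arena scores are Elo-style ratings, so a rating difference maps to an expected head-to-head preference rate. The calculation below assumes Arena uses the standard Elo formula with the usual 400-point scale:

```python
# Expected probability that model A is preferred over model B,
# given Elo-style ratings on the standard 400-point scale.
def elo_win_prob(rating_a, rating_b):
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

p = elo_win_prob(1503, 1464)  # the two scores quoted above
print(f"{p:.1%}")  # prints 55.6%: a 39-point gap is a slim edge
```

In other words, under this assumption the top US model would be preferred in only about 56 of every 100 head-to-head comparisons, which is what "the gap has collapsed" means in practice.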

The 423-page report, the most comprehensive annual assessment of the global AI landscape, documents a situation in which the United States spends 23 times more on private AI investment than China but leads on the only metric that arguably matters, model performance, by less than three percentage points. The question the report raises without quite answering is whether that spending advantage is sustaining American leadership or whether China has found a way to compete without it.

Where each country leads

The United States dominates private AI investment, with $285.9 billion in 2025 compared with China’s $12.4 billion. California alone accounted for $218 billion, more than 75% of the US total. American companies produced 50 notable AI models last year, compared with China’s 30, though China’s count doubled from 15 the previous year while America’s grew more modestly. The US hosts 5,427 data centres, more than ten times any other country.

China leads in volume. Chinese researchers produced 23.2% of all global AI publications and 20.6% of citations, compared with 12.6% for the US. Chinese entities filed 69.7% of all AI patents worldwide. China installed 295,000 industrial robots in the most recent reporting period, nearly nine times the 34,200 installed in the United States. And China’s electricity reserve margin has never dipped below 80%, twice the necessary capacity, while the US power grid suffers from decades of underinvestment that the report identifies as a potential bottleneck for AI infrastructure growth.


The investment figures come with a significant caveat. The report notes that private investment data “likely understates” China’s actual AI spending because the Chinese government channels resources through guidance funds and state-initiated investment vehicles that do not appear in private capital databases. The 23-to-1 spending ratio may be less dramatic than it appears.

The talent crisis

The most striking finding may be about people rather than models. The number of AI scholars moving to the United States has dropped 89% since 2017, with 80% of that decline occurring in the last year alone. The report describes the fall as “precipitous.” Switzerland now ranks first in the world for AI researchers and developers per capita.


The talent migration data complicates the narrative that American AI leadership is secure because of its investment advantage. If the researchers who build frontier models are increasingly choosing not to come to the US, the spending premium buys hardware and infrastructure but not the intellectual capital that turns compute into capability. DeepSeek demonstrated in January 2025 that a Chinese lab could match Silicon Valley’s best with a fraction of the resources. The talent data suggests the conditions that produced DeepSeek are strengthening, not weakening.

What AI can and cannot do

The report documents performance gains that would have seemed implausible two years ago. On SWE-bench, a coding benchmark, model performance rose from 60% to near 100% in a single year. On graduate-level science questions, model accuracy hit 93%, above the expert human validator baseline of 81.2%. Google’s Gemini Deep Think won a gold medal at the International Mathematical Olympiad. On Humanity’s Last Exam, a benchmark designed to resist saturation by frontier models, those same models gained 30 percentage points in a year.

But the report also documents what it calls a “jagged frontier.” The top model reads analog clocks correctly only 50.1% of the time. Robotic manipulation systems achieve 89.4% success in simulation but only 12% in real household tasks. Nearly half of the 500-plus clinical AI studies reviewed used exam-style questions rather than real patient data, and only 5% used actual clinical records. The gap between benchmark performance and real-world reliability remains wide in domains where errors have consequences.

Adoption, trust, and regulation

Generative AI reached 53% population adoption within three years of launch, faster than the personal computer or the internet. Eighty-eight per cent of organisations report using AI. Four in five university students now use generative AI tools. But the US ranks 24th globally in adoption at just 28.3%, behind Singapore at 61% and the UAE at 54%.


Public trust is lower still. Only 31% of Americans trust their government to regulate AI, the lowest figure of any country surveyed and well below the global average of 54%. The expert-public disconnect is a central theme of the report: 73% of AI experts expect a positive impact on jobs, compared with 23% of the general public. Only a third of Americans expect AI to make their jobs better.

[Figure: Performance of top United States vs. Chinese models on the Arena. Credit: Arena, 2026]

Forty-seven countries now have active AI legislation, but only 12 have enforcement mechanisms. Documented enforcement actions rose from 43 in 2024 to 156 in 2025. Compliance costs vary eightfold between jurisdictions. The EU AI Act entered full enforcement in January 2026, but the broader regulatory picture is one of fragmentation rather than coordination.

The environmental cost

Training xAI’s Grok 4 produced 72,816 tonnes of CO2 equivalent, roughly the emissions of driving 17,000 cars for a year. AI data centre power capacity reached 29.6 gigawatts globally, enough to power New York State at peak demand. The environmental section of the report reads as a counterweight to the performance gains: the models are getting better, but the cost of making them better is scaling alongside the capabilities.
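
The car comparison can be sanity-checked with back-of-envelope arithmetic, assuming (this figure is mine, not the report's) that an average passenger car emits roughly 4.3 tonnes of CO2-equivalent per year:

```python
# Back-of-envelope check of the report's "17,000 cars for a year" comparison.
TRAINING_EMISSIONS_T = 72_816   # tonnes CO2e to train Grok 4 (from the report)
CAR_T_PER_YEAR = 4.3            # assumed average car emissions, tonnes CO2e/year

car_years = TRAINING_EMISSIONS_T / CAR_T_PER_YEAR
print(round(car_years))         # ≈ 16,934, close to the cited 17,000
```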

What the numbers mean

The headline finding, that China has nearly closed the performance gap with the US, will dominate the policy conversation. But the report’s deeper implication is about the relationship between spending and outcomes. The United States invested $285.9 billion in private AI capital last year. China invested $12.4 billion. The performance gap between their best models is 2.7%. Meanwhile, AI talent migration to the US has collapsed, China dominates patents and publications, and Chinese infrastructure investment in energy and manufacturing dwarfs America’s.

The open-versus-closed source debate adds another dimension. The top closed model now leads the top open model by 3.3%, up from 0.5% in August 2024, and six of the top ten Arena models are closed-source. The performance advantage of proprietary systems is widening, which favours the American companies that dominate the closed-source tier but also means the open-source models that have driven China’s catch-up may face diminishing returns.


Employment among software developers aged 22 to 25 has fallen nearly 20% since 2022. One-third of surveyed organisations expect AI to reduce their workforce in the coming year. The Foundation Model Transparency Index dropped from 58 to 40, with most frontier models reporting nothing on fairness, security, or human agency. Documented AI incidents rose 55% in a year.

The Stanford AI Index does not make policy recommendations. It presents data. But the data in the 2026 edition tells a story that should unsettle anyone who assumes American AI dominance is durable. The US leads on investment and model performance. China leads on talent pipeline, patents, publications, robotics, and energy infrastructure. The performance gap is 2.7% and shrinking. The spending gap is 23 to 1 and growing. One of those trends is sustainable. The report leaves it to the reader to decide which one.

