In 1930, a young physicist named Carl D. Anderson was tasked by his mentor with measuring the energies of cosmic rays—particles arriving at high speed from outer space. Anderson built an improved version of a cloud chamber, a device that visually records the trajectories of particles. In 1932, he saw evidence that confusingly combined the properties of protons and electrons. “A situation began to develop that had its awkward aspects,” he wrote many years after winning a Nobel Prize at the age of 31. Anderson had accidentally discovered antimatter.
Four years after his first discovery, he codiscovered another elementary particle, the muon, prompting the physicist I. I. Rabi to ask, “Who ordered that?”
Carl Anderson [top] sits beside the magnet cloud chamber he used to discover the positron. His cloud-chamber photograph [bottom] from 1932 shows the curved track of a positron, the first known antimatter particle. Caltech Archives & Special Collections
Over the decades since then, particle physicists have built increasingly sophisticated instruments of exploration. At the apex of these physics-finding machines sits the Large Hadron Collider, which in 2022 started its third operational run. This underground ring, 27 kilometers in circumference and straddling the border between France and Switzerland, was built to slam subatomic particles together at near light speed and test deep theories of the universe. Physicists from around the world turn to the LHC hoping to find something new, though they’re not sure what.
It’s the latest manifestation of a rich tradition. Throughout the history of science, new instruments have prompted hunts for the unexpected. Galileo Galilei built telescopes and found Jupiter’s moons. Antonie van Leeuwenhoek built microscopes and noticed “animalcules, very prettily a-moving.” And still today, people peer through lenses and pore through data in search of patterns they hadn’t hypothesized. Nature’s secrets don’t always come with spoilers, and so we gaze into the unknown, ready for anything.
But novel, fundamental aspects of the universe are growing less forthcoming. In a sense, we’ve plucked the lowest-hanging fruit. We know to a good approximation what the building blocks of matter are. The Standard Model of particle physics, which describes the currently known elementary particles, has been in place since the 1970s. Nature can still surprise us, but it typically requires larger or finer instruments, more detailed or expansive data, and faster or more flexible analysis tools.
Those analysis tools include a form of artificial intelligence (AI) called machine learning. Researchers train complex statistical models to find patterns in their data, patterns too subtle for human eyes to see, or too rare for a single human to encounter. At the LHC, which smashes together protons to create immense bursts of energy that decay into other short-lived particles of matter, a theorist might predict some new particle or interaction and describe what its signature would look like in the LHC data, often using a simulation to create synthetic data. Experimentalists would then collect petabytes of measurements and run a machine learning algorithm that compares them with the simulated data, looking for a match. Usually, they come up empty. But maybe new algorithms can peer into corners they haven’t considered.
A New Path for Particle Physics
“You’ve heard probably that there’s a crisis in particle physics,” says Tilman Plehn, a theoretical physicist at Heidelberg University, in Germany. At the LHC and other high-energy physics facilities around the world, the experimental results have failed to yield insights on new physics. “We have a lot of unhappy theorists who thought that their model would have been discovered, and it wasn’t,” Plehn says.
Tilman Plehn
“We have a lot of unhappy theorists who thought that their model would have been discovered, and it wasn’t.”
Gregor Kasieczka, a physicist at the University of Hamburg, in Germany, recalls the field’s enthusiasm when the LHC began running in 2008. Back then, he was a young graduate student and expected to see signs of supersymmetry, a theory predicting heavier versions of the known matter particles. The presumption was that “we turn on the LHC, and supersymmetry will jump in your face, and we’ll discover it in the first year or so,” he tells me. Eighteen years later, supersymmetry remains in the theoretical realm. “I think this level of exuberant optimism has somewhat gone.”
The result, Plehn says, is that models for all kinds of things have fallen in the face of data. “And I think we’re going on a different path now.”
That path involves a kind of machine learning called unsupervised learning. In unsupervised learning, you don’t teach the AI to recognize your specific prediction—signs of a particle with this mass and this charge. Instead, you might teach it to find anything out of the ordinary, anything interesting—which could indicate brand new physics. It’s the equivalent of looking with fresh eyes at a starry sky or a slide of pond scum. The problem is, how do you automate the search for something “interesting”?
Going Beyond the Standard Model
The Standard Model leaves many questions unanswered. Why do matter particles have the masses they do? Why do neutrinos have mass at all? Where is the particle for transmitting gravity, to match those for the other forces? Why do we see more matter than antimatter? Are there extra dimensions? What is dark matter—the invisible stuff that makes up most of the universe’s matter and that we assume to exist because of its gravitational effect on galaxies? Answering any of these questions could open the door to new physics, or fundamental discoveries beyond the Standard Model.
The Large Hadron Collider at CERN accelerates protons to near light speed before smashing them together in hopes of discovering “new physics.”
CERN
“Personally, I’m excited for portal models of dark sectors,” Kasieczka says, as if reading from a Marvel film script. He asks me to imagine a mirror copy of the Standard Model out there somewhere, sharing only one “portal” particle with the Standard Model we know and love. It’s as if this portal particle has a second secret family.
Kasieczka says that in the LHC’s third run, scientists are splitting their efforts roughly evenly between measuring more precisely what they know to exist and looking for what they don’t know to exist. In some cases, the former could enable the latter. The Standard Model predicts certain particle properties and the relationships between them. For example, it correctly predicted a property of the electron called the magnetic moment to about one part in a trillion. And precise measurements could turn up internal inconsistencies. “Then theorists can say, ‘Oh, if I introduce this new particle, it fixes this specific problem that you guys found. And this is how you look for this particle,’” Kasieczka says.
An image from a single collision at the LHC shows an unusually complex spray of particles, flagged as anomalous by machine learning algorithms.
CERN
What’s more, the Standard Model has occasionally shown signs of cracks. Certain particles containing bottom quarks, for example, seem to decay into other particles in unexpected ratios. Plehn finds the bottom-quark incongruities intriguing. “Year after year, I feel they should go away, and they don’t. And nobody has a good explanation,” he says. “I wouldn’t even know who I would shout at”—the theorists or the experimentalists—“like, ‘Sort it out!’”
Exasperation isn’t exactly the right word for Plehn’s feelings, however. Physicists feel gratified when measurements reasonably agree with expectations, he says. “But I think deep down inside, we always hope that it looks unreasonable. Everybody always looks for the anomalous stuff. Everybody wants to see the standard explanation fail. First, it’s fame”—a chance for a Nobel—“but it’s also an intellectual challenge, right? You get excited when things don’t work in science.”
How Unsupervised AI Can Probe for New Physics
Now imagine you had a machine to find all the times things don’t work in science, to uncover all the anomalous stuff. That’s how researchers are using unsupervised learning. One day over ice cream, Plehn and a friend who works at the software company SAP began discussing autoencoders, one type of unsupervised learning algorithm. “He tells me that autoencoders are what they use in industry to see if a network was hacked,” Plehn remembers. “You have, say, a hundred computers, and they have network traffic. If the network traffic [to one computer] changes all of a sudden, the computer has been hacked, and they take it offline.”
In the LHC’s central data-acquisition room [top], incoming detector data flows through racks of electronics and field-programmable gate array (FPGA) cards [bottom] that decide which collision events to keep.
Fermilab/CERN
Autoencoders are neural networks that start with an input—it could be an image of a cat, or the record of a computer’s network traffic—and compress it, like making a tiny JPEG or MP3 file, and then decompress it. Engineers train them to compress and decompress data so that the output matches the input as closely as possible. Eventually a network becomes very good at that task. But if the data includes some items that are relatively rare—such as white tigers, or hacked computers’ traffic—the network performs worse on these, because it has less practice with them. The difference between an input and its reconstruction therefore signals how anomalous that input is.
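The mechanics can be sketched in a few lines. A linear autoencoder with a small bottleneck is mathematically equivalent to principal component analysis, so this toy example (a deliberate simplification, not the networks physicists actually train) compresses data onto its top two principal components and uses the reconstruction error as an anomaly score:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Normal" events: 500 points that lie near a 2-D plane inside 10-D space.
normal = rng.normal(size=(500, 2)) @ rng.normal(size=(2, 10))
normal += 0.01 * rng.normal(size=normal.shape)   # a little measurement noise

# One "anomalous" event that does not live on that plane.
anomaly = rng.normal(size=(1, 10))

# Compression step: find the top-2 principal components of the normal data.
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
components = vt[:2]                              # the "encoder" weights

def reconstruction_error(x):
    code = (x - mean) @ components.T             # compress (encode)
    recon = code @ components + mean             # decompress (decode)
    return np.linalg.norm(x - recon, axis=1)

print(reconstruction_error(normal).mean())       # small: reconstructs well
print(reconstruction_error(anomaly)[0])          # large: flagged as anomalous
```

Events that resemble the training data reconstruct well; the one point that doesn’t lie near the learned subspace comes back badly distorted, and that large error is what flags it.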
“This friend of mine said, ‘You can use exactly our software, right?’” Plehn remembers. “‘It’s exactly the same question. Replace computers with particles.’” The two imagined feeding the autoencoder signatures of particles from a collider and asking: Are any of these particles not like the others? Plehn continues: “And then we wrote up a joint grant proposal.”
It’s not a given that AI will find new physics. Even learning what counts as interesting is a daunting hurdle. Beginning in the 1800s, men in lab coats delegated data processing to women, whom they saw as diligent and detail oriented. Women annotated photographs of stars, acting as human “computers.” In the 1950s, women were trained to scan bubble-chamber photographs, which recorded particle trajectories as lines of tiny bubbles in fluid. Physicists didn’t explain to them the theory behind the events, only what to look for, based on lists of rules.
But, as the Harvard science historian Peter Galison writes in Image and Logic: A Material Culture of Physics, his influential account of how physicists’ tools shape their discoveries, the task was “subtle, difficult, and anything but routinized,” requiring “three-dimensional visual intuition.” He goes on: “Even within a single experiment, judgment was required—this was not an algorithmic activity, an assembly line procedure in which action could be specified fully by rules.”
Gregor Kasieczka
“We are not looking for flying elephants but instead a few extra elephants than usual at the local watering hole.”
Over the last decade, though, one thing we’ve learned is that AI systems can, in fact, perform tasks once thought to require human intuition, such as mastering the ancient board game Go. So researchers have been testing AI’s intuition in physics. In 2019, Kasieczka and his collaborators announced the LHC Olympics 2020, a contest in which participants submitted algorithms to find anomalous events in three sets of (simulated) LHC data. Some teams correctly found the anomalous signal in one dataset, but some falsely reported one in the second set, and they all missed it in the third. In 2020, a research collective called Dark Machines announced a similar competition, which drew more than 1,000 submissions of machine learning models. Decisions about how to score them led to different rankings, showing that there’s no single best way to explore the unknown.
Another way to test unsupervised learning is to play revisionist history. In 1995, a particle dubbed the top quark turned up at the Tevatron, a particle accelerator at the Fermi National Accelerator Laboratory (Fermilab), in Illinois. But what if it actually hadn’t? Researchers applied unsupervised learning to LHC data collected in 2012, pretending they knew almost nothing about the top quark. Sure enough, the AI revealed a set of anomalous events that were clustered together. Combined with a bit of human intuition, they pointed toward something like the top quark.
Georgia Karagiorgi
“An algorithm that can recognize any kind of disturbance would be a win.”
That exercise underlines the fact that unsupervised learning can’t replace physicists just yet. “If your anomaly detector detects some kind of feature, how do you get from that statement to something like a physics interpretation?” Kasieczka says. “The anomaly search is more a scouting-like strategy to get you to look into the right corner.” Georgia Karagiorgi, a physicist at Columbia University, agrees. “Once you find something unexpected, you can’t just call it quits and be like, ‘Oh, I discovered something,’” she says. “You have to come up with a model and then test it.”
Kyle Cranmer, a physicist and data scientist at the University of Wisconsin-Madison who played a key role in the discovery of the Higgs boson particle in 2012, also says that human expertise can’t be dismissed. “There’s an infinite number of ways the data can look different from what you expected,” he says, “and most of them aren’t interesting.” Physicists might be able to recognize whether a deviation suggests some plausible new physical phenomenon, rather than just noise. “But how you try to codify that and make it explicit in some algorithm is much less straightforward,” Cranmer says. Ideally, the guidelines would be general enough to exclude the unimaginable without eliminating the merely unimagined. “That’s gonna be your Goldilocks situation.”
In his 1987 book How Experiments End, Harvard’s Galison writes that scientific instruments can “import assumptions built into the apparatus itself.” He tells me about a 1973 experiment that looked for a phenomenon called neutral currents, signaled by the absence of a muon (a particle physicists once called the “heavy electron”). One team initially used a trigger left over from previous experiments, which recorded events only if they produced those heavy electrons—even though neutral currents, by definition, produce none. As a result, for some time the researchers missed the phenomenon and wrongly concluded that it didn’t exist. Galison says that the physicists’ design choice “allowed the discovery of [only] one thing, and it blinded the next generation of people to this new discovery. And that is always a risk when you’re being selective.”
How AI Could Miss—or Fake—New Physics
I ask Galison if by automating the search for interesting events, we’re letting the AI take over the science. He rephrases the question: “Have we handed over the keys to the car of science to the machines?” One way to alleviate such concerns, he tells me, is to generate test data to see if an algorithm behaves as expected—as in the LHC Olympics. “Before you take a camera out and photograph the Loch Ness Monster, you want to make sure that it can reproduce a wide variety of colors” and patterns accurately, he says, so you can rely on it to capture whatever comes.
Galison, who is also a physicist, works on the Event Horizon Telescope, which images black holes. For that project, he remembers putting up utterly unexpected test images like Frosty the Snowman so that scientists could probe the system’s general ability to catch something new. “The danger is that you’ve missed out on some crucial test,” he says, “and that the object you’re going to be photographing is so different from your test patterns that you’re unprepared.”
The algorithms that physicists are using to seek new physics are certainly vulnerable to this danger. It helps that unsupervised learning is already being used in many applications. In industry, it’s surfacing anomalous credit-card transactions and hacked networks. In science, it’s identifying earthquake precursors, genome locations where proteins bind, and merging galaxies.
But one difference with particle-physics data is that the anomalies may not be stand-alone objects or events. You’re looking not just for a needle in a haystack; you’re also looking for subtle irregularities in the haystack itself. Maybe a stack contains a few more short stems than you’d expect. Or a pattern reveals itself only when you simultaneously look at the size, shape, color, and texture of stems. Such a pattern might suggest an unacknowledged substance in the soil. In accelerator data, subtle patterns might suggest a hidden force. As Kasieczka and his colleagues write in one paper, “We are not looking for flying elephants, but instead a few extra elephants than usual at the local watering hole.”
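Counting “a few extra elephants” is, at its simplest, a statistics exercise. For Poisson-distributed event counts, typical fluctuations are about the square root of the expected count, so a rough significance estimate (a back-of-envelope sketch, not the full statistical treatment experiments actually use) looks like this:

```python
import math

expected = 10_000   # background events predicted by the Standard Model
observed = 10_250   # what the detector actually counted

# For Poisson counts, the typical statistical fluctuation is roughly
# sqrt(expected), so the size of the excess in "sigmas" is:
z = (observed - expected) / math.sqrt(expected)
print(f"{z:.1f} sigma excess")
```

An excess of 250 events on a background of 10,000 is only about 2.5 sigma: noticeable, but far short of a discovery claim.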
Even algorithms that weigh many factors can miss signals—and they can also see spurious ones. The stakes of mistakenly claiming discovery are high. Going back to the hacking scenario, Plehn says, a company might ultimately determine that its network wasn’t hacked; it was just a new employee. The algorithm’s false positive causes little damage. “Whereas if you stand there and get the Nobel Prize, and a year later people say, ‘Well, it was a fluke,’ people would make fun of you for the rest of your life,” he says. In particle physics, he adds, you run the risk of spotting patterns purely by chance in big data, or as a result of malfunctioning equipment.
False alarms have happened before. In 1976, a group at Fermilab led by Leon Lederman, who later won a Nobel for other work, announced the discovery of a particle they tentatively called the Upsilon. The researchers calculated the probability of the signal’s happening by chance as 1 in 50. After further data collection, though, they walked back the discovery, calling the pseudo-particle the Oops-Leon. (Today, particle physicists wait until the chance that a finding is a fluke drops below 1 in 3.5 million, the so-called five-sigma criterion.) And in 2011, researchers at the Oscillation Project with Emulsion-tRacking Apparatus (OPERA) experiment, in Italy, announced evidence for faster-than-light travel of neutrinos. Then, a few months later, they reported that the result was due to a faulty connection in their timing system.
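The five-sigma criterion is a statement about the tail of a Gaussian distribution, and the 1-in-3.5-million figure can be checked with the Python standard library alone:

```python
import math

def sigma_to_pvalue(z):
    """One-sided Gaussian tail probability of a z-sigma upward fluctuation."""
    return 0.5 * math.erfc(z / math.sqrt(2))

for z in (2, 3, 5):
    p = sigma_to_pvalue(z)
    print(f"{z} sigma -> p = {p:.2e}  (about 1 in {1 / p:,.0f})")
```

At two sigma, flukes happen about 1 time in 44, which is why physicists treat such hints with suspicion; at five sigma, the odds of a chance fluctuation drop to roughly 1 in 3.5 million.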
Those cautionary tales linger in the minds of physicists. And yet, even while researchers are wary of false positives from AI, they also see it as a safeguard against them. So far, unsupervised learning has discovered no new physics, despite its use on data from multiple experiments at Fermilab and CERN. But anomaly detection may have prevented embarrassments like the one at OPERA. “So instead of telling you there’s a new physics particle,” Kasieczka says, “it’s telling you, this sensor is behaving weird today. You should restart it.”
Hardware for AI-Assisted Particle Physics
Particle physicists are pushing the limits of not only their computing software but also their computing hardware. The challenge is unparalleled. The LHC produces 40 million particle collisions per second, each of which can produce a megabyte of data. That’s much too much information to store, even if you could save it to disk that quickly. So the two largest detectors each use two-level data filtering. The first layer, called the Level-1 Trigger, or L1T, harvests 100,000 events per second, and the second layer, called the High-Level Trigger, or HLT, plucks 1,000 of those events to save for later analysis. So only one in 40,000 events is ever potentially seen by human eyes.
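The arithmetic behind those figures is straightforward:

```python
collisions_per_second = 40_000_000   # LHC bunch crossings per second
event_size_bytes = 1_000_000         # roughly 1 megabyte per collision
after_l1t = 100_000                  # events/s kept by the Level-1 Trigger
after_hlt = 1_000                    # events/s kept by the High-Level Trigger

raw_tb_per_second = collisions_per_second * event_size_bytes / 1e12
survival = collisions_per_second // after_hlt

print(f"raw rate if everything were saved: {raw_tb_per_second:.0f} TB/s")
print(f"only 1 in {survival:,} events survives both triggers")
```

Forty terabytes per second is far beyond what any storage system could absorb continuously, which is why the two-level filter has to discard 39,999 of every 40,000 events before anyone looks at them.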
Katya Govorkova
“That’s when I thought, we need something like [AlphaGo] in physics. We need a genius that can look at the world differently.”
HLTs use central processing units (CPUs) like the ones in your desktop computer, running complex machine learning algorithms that analyze collisions based on the number, type, energy, momentum, and angles of the new particles produced. L1Ts, as a first line of defense, must be fast. So the L1Ts rely on integrated circuits called field-programmable gate arrays (FPGAs), which users can reprogram for specialized calculations.
The trade-off is that the programming must be relatively simple. The FPGAs can’t easily store and run fancy neural networks; instead they follow scripted rules about, say, what features of a particle collision make it important. In complexity, the programming resembles the rule lists given to the women who scanned bubble-chamber photos, not the intuition in those women’s brains.
Ekaterina (Katya) Govorkova, a particle physicist at MIT, saw a path toward improving the LHC’s filters, inspired by a board game. Around 2020, she was looking for new physics by comparing precise measurements at the LHC with predictions, using little or no machine learning. Then she watched a documentary about AlphaGo, the program that used machine learning to beat a human Go champion. “For me the moment of realization was when AlphaGo would use some absolutely new type of strategy that humans, who played this game for centuries, hadn’t thought about before,” she says. “So that’s when I thought, we need something like that in physics. We need a genius that can look at the world differently.” New physics may be something we’d never imagine.
Govorkova and her collaborators found a way to compress autoencoders to fit on FPGAs, where they process an event every 80 nanoseconds (less than one ten-millionth of a second). (Compression involved pruning some network connections and reducing the precision of some calculations.) They published their methods in Nature Machine Intelligence in 2022, and researchers are now using them during the LHC’s third run. The new trigger tech is installed in one of the detectors around the LHC’s giant ring, and it has found many anomalous events that would otherwise have gone unflagged.
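The two compression tricks mentioned in passing, pruning connections and reducing numerical precision, can be illustrated with a toy sketch. The function names, bit widths, and pruning fraction here are illustrative choices, not those of the published implementation:

```python
import numpy as np

def prune(weights, fraction=0.5):
    """Zero out the smallest-magnitude weights (magnitude pruning)."""
    cutoff = np.quantile(np.abs(weights), fraction)
    return np.where(np.abs(weights) < cutoff, 0.0, weights)

def quantize(weights, total_bits=8, frac_bits=6):
    """Round weights to fixed-point values with a small bit budget,
    a stand-in for the reduced-precision arithmetic FPGAs favor."""
    scale = 2 ** frac_bits
    lo = -(2 ** (total_bits - 1)) / scale
    hi = (2 ** (total_bits - 1) - 1) / scale
    return np.clip(np.round(weights * scale) / scale, lo, hi)

w = np.random.default_rng(1).normal(scale=0.3, size=(4, 4))
w_small = quantize(prune(w, fraction=0.5))
print(np.count_nonzero(w_small), "of", w.size, "weights remain")
```

Half the weights vanish and the rest snap to a coarse grid of values, yet a well-trained network can tolerate both losses, which is what makes it small and fast enough to run in the trigger’s nanosecond budget.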
Researchers are currently setting up analysis workflows to decipher why the events were deemed anomalous. Jennifer Ngadiuba, a particle physicist at Fermilab who is also one of the coordinators of the trigger system (and one of Govorkova’s coauthors), says that one feature stands out already: Flagged events have lots of jets of new particles shooting out of the collisions. But the scientists still need to explore other factors, like the new particles’ energies and their distributions in space. “It’s a high-dimensional problem,” she says.
Eventually they will share the data openly, allowing others to eyeball the results or to apply new unsupervised learning algorithms in the hunt for patterns. Javier Duarte, a physicist at the University of California, San Diego, and also a coauthor on the 2022 paper, says, “It’s kind of exciting to think about providing this to the community of particle physicists and saying, like, ‘Shrug, we don’t know what this is. You can take a look.’” Duarte and Ngadiuba note that high-energy physics has traditionally followed a top-down approach to discovery, testing data against well-defined theories. Adding in this new bottom-up search for the unexpected marks a new paradigm. “And also a return of sorts to before the Standard Model was so well established,” Duarte adds.
Yet it could be years before we know why AI marked those collisions as anomalous. What conclusions could they support? “In the worst case, it could be some detector noise that we didn’t know about,” which would still be useful information, Ngadiuba says. “The best scenario could be a new particle. And then a new particle implies a new force.”
Jennifer Ngadiuba
“The best scenario could be a new particle. And then a new particle implies a new force.”
Duarte says he expects their work with FPGAs to have wider applications. “The data rates and the constraints in high-energy physics are so extreme that people in industry aren’t necessarily working on this,” he says. “In self-driving cars, usually millisecond latencies are sufficient reaction times. But we’re developing algorithms that need to respond in microseconds or less. We’re at this technological frontier, and to see how much that can proliferate back to industry will be cool.”
Plehn is also working to put neural networks on FPGAs for triggers, in collaboration with experimentalists, electrical engineers, and other theorists. Encoding the nuances of abstract theories into material hardware is a puzzle. “In this grant proposal, the person I talked to most is the electrical engineer,” he says, “because I have to ask the engineer, which of my algorithms fits on your bloody FPGA?”
Hardware is hard, says Ryan Kastner, an electrical engineer and computer scientist at UC San Diego who works with Duarte on programming FPGAs. What allows the chips to run algorithms so quickly is their flexibility. Instead of programming them in an abstract coding language like Python, engineers configure the underlying circuitry. They map logic gates, route data paths, and synchronize operations by hand. That low-level control also makes the effort “painfully difficult,” Kastner says. “It’s kind of like you have a lot of rope, and it’s very easy to hang yourself.”
Seeking New Physics Among the Neutrinos
The next piece of new physics may not pop up at a particle accelerator. It may appear at a detector for neutrinos, particles that are part of the Standard Model but remain deeply mysterious. Neutrinos are tiny, electrically neutral, and so light that no one has yet measured their mass. (The latest attempt, in April, set an upper limit of about a millionth the mass of an electron.) Of all known particles with mass, neutrinos are the universe’s most abundant, but also among the most ghostly, rarely deigning to acknowledge the matter around them. Tens of trillions pass through your body every second.
If we listen very closely, though, we may just hear the secrets they have to tell. Karagiorgi, of Columbia, has chosen this path to discovery. Being a physicist is “kind of like playing detective, but where you create your own mysteries,” she tells me during my visit to Columbia’s Nevis Laboratories, located on a large estate about 20 km north of Manhattan. Physics research began at the site after World War II; one hallway features papers going back to 1951.
A researcher stands inside a prototype for the Deep Underground Neutrino Experiment, which is designed to detect rare neutrino interactions.
CERN
Karagiorgi is eagerly awaiting a massive neutrino detector that’s currently under construction. Starting in 2028, Fermilab will send neutrinos west through 1,300 km of rock to South Dakota, where they’ll occasionally make their existence known in the Deep Underground Neutrino Experiment (DUNE). Why so far away? When neutrinos travel long distances, they have an odd habit of oscillating, transforming from one kind or “flavor” to another. Observing the oscillations of both the neutrinos and their mirror-image antiparticles, antineutrinos, could tell researchers something about the universe’s matter-antimatter asymmetry—which the Standard Model doesn’t explain—and thus, according to the Nevis website, “why we exist.”
“DUNE is the thing that’s been pushing me to develop these real-time AI methods,” Karagiorgi says, “for sifting through the data very, very, very quickly and trying to look for rare signatures of interest within them.” When neutrinos interact with the detector’s 70,000 tonnes of liquid argon, they’ll generate a shower of other particles, creating visual tracks that look like a photo of fireworks.
The Standard Model catalogs the known fundamental particles of matter and the forces that govern them, but leaves major mysteries unresolved.
Even when not bombarding DUNE with neutrinos, researchers will keep collecting data on the off chance that it captures neutrinos from a distant supernova. “This is a massive detector spewing out 5 terabytes of data per second,” Karagiorgi says, “and it’s going to run constantly for a decade.” They will need unsupervised learning to notice signatures that no one was looking for, because there are “lots of different models of how supernova explosions happen, and for all we know, none of them could be the right model for neutrinos,” she says. “To train your algorithm on such uncertain grounds is less than ideal. So an algorithm that can recognize any kind of disturbance would be a win.”
Deciding in real time which 1 percent of 1 percent of data to keep will require FPGAs. Karagiorgi’s team is preparing to use them for DUNE, and she walks me to a computer lab where they program the circuits. In the FPGA lab, we look at nondescript circuit boards sitting on a table. “So what we’re proposing is a scheme where you can have something like a hundred of these boards for DUNE deep underground that receive the image data frame by frame,” she says. This system could tell researchers whether a given frame resembled TV static, fireworks, or something in between.
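A back-of-envelope calculation, using the 5-terabyte-per-second figure Karagiorgi quotes and taking “1 percent of 1 percent” literally, shows why that filtering is the difference between an impossible archive and a storable one:

```python
raw_tb_per_second = 5          # DUNE's projected raw data rate, in terabytes
kept_fraction = 0.01 * 0.01    # keep 1 percent of 1 percent of the data
seconds_per_year = 365 * 24 * 3600

unfiltered_tb_per_year = raw_tb_per_second * seconds_per_year
kept_tb_per_year = unfiltered_tb_per_year * kept_fraction

print(f"unfiltered: {unfiltered_tb_per_year / 1e6:.0f} exabytes per year")
print(f"filtered:   {kept_tb_per_year:,.0f} TB per year")
```

Unfiltered, the detector would produce on the order of 158 exabytes a year; after filtering, the roughly 16 petabytes that remain are something a physics collaboration can actually store and analyze.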
Neutrino experiments, like many particle-physics studies, are very visual. When Karagiorgi was a postdoc, automated image processing at neutrino detectors was still in its infancy, so she and collaborators would often resort to visual scanning (bubble-chamber style) to measure particle tracks. She still asks undergrads to hand-scan as an educational exercise. “I think it’s wrong to just send them to write a machine learning algorithm. Unless you can actually visualize the data, you don’t really gain a sense of what you’re looking for,” she says. “I think it also helps with creativity to be able to visualize the different types of interactions that are happening, and see what’s normal and what’s not normal.”
Back in Karagiorgi’s office, a bulletin board displays images from The Cognitive Art of Feynman Diagrams, an exhibit for which the designer Edward Tufte created wire sculptures of the physicist Richard Feynman’s schematics of particle interactions. “It’s funny, you know,” she says. “They look like they’re just scribbles, right? But actually, they encode quantitatively predictive behavior in nature.” Later, Karagiorgi and I spend a good 10 minutes discussing whether a computer or a human could find Waldo without knowing what Waldo looked like. We also touch on the 1964 Supreme Court case in which Justice Potter Stewart famously declined to define obscenity, saying “I know it when I see it.” I ask whether it seems weird to hand over to a machine the task of deciding what’s visually interesting. “There are a lot of trust issues,” she says with a laugh.
On the drive back to Manhattan, we discuss the history of scientific discovery. “I think it’s part of human nature to try to make sense of an orderly world around you,” Karagiorgi says. “And then you just automatically pick out the oddities. Some people obsess about the oddities more than others, and then try to understand them.”
Reflecting on the Standard Model, she calls it “beautiful and elegant,” with “amazing predictive power.” Yet she finds it both limited and limiting, blinding us to colors we don’t yet see. “Sometimes it’s both a blessing and a curse that we’ve managed to develop such a successful theory.”
Two NASA astronauts aboard the International Space Station (ISS) are about to climb into their spacesuits and enter the vacuum of space, and you can watch the event live.
The first NASA spacewalk in nearly a year will begin at about 8 a.m. ET on Wednesday, March 18. Read on for full details on how to watch.
Americans Jessica Meir and Chris Williams will exit the station’s Quest airlock to carry out work as part of preparations for a future roll-out solar array aimed at upgrading the station’s power supply.
It will be Meir’s fourth spacewalk and Williams’ first. The pair have spent the last few days prepping their spacesuits and equipment and also finalizing the configuration of tools they’ll use during the extravehicular activity.
Wednesday’s spacewalk also happens to be on the 61st anniversary of the first-ever spacewalk in 1965 when Soviet cosmonaut Alexei Leonov left his spacecraft for around 10 minutes during the Voskhod 2 mission. Later the same year, during the Gemini 4 mission, Ed White became the first NASA astronaut to achieve the same feat.
How to watch
NASA’s live coverage will start at 6:30 a.m. ET on Wednesday, March 18. The astronauts will exit the Quest airlock at about 8 a.m. ET and will remain outside the ISS for around six-and-a-half hours.
You can watch the coverage via the video player embedded at the top of this page. NASA+, Amazon Prime, and the agency’s YouTube channel will also carry the same live feed.
What to expect
You’ll see live views from multiple cameras positioned outside the station, including from the astronauts’ helmet cams. You’ll be able to listen in on the live communications between the astronauts and Mission Control on Earth, too. A continuous commentary will also explain exactly what’s happening as the spacewalk proceeds.
ChatGPT has a talent for sounding sure of itself. Ask it a question, and it delivers a polished, coherent response. But should you always trust it?
The tone promises authoritative answers, and the confidence is enticing, but it can also mask the fact that the answer is only one possible interpretation of the problem.
A small adjustment to the conversation can change that and produce a much more balanced picture. After ChatGPT replies, simply type: “convince me otherwise”, and see what it says.
The same AI that just laid out a neat line of reasoning will then turn around and begin testing it, looking for cracks and weak points it did not mention the first time. You’ll be surprised.
The original answer might have recommended a decision, explained a concept, or justified a choice. The follow-up reframes that same material, pulling out limitations, alternative interpretations, and scenarios where the initial conclusion might not hold.
Convince me
Imagine asking ChatGPT whether it is worth paying for an app that promises to make you more productive. The first response might highlight the benefits, pointing to time savings and useful features in a clear endorsement.
Ask ChatGPT to convince you otherwise, and the new answer has a very different tone. Issues of subscription fatigue, free alternatives, and the app’s irrelevance to your actual life come up for the first time. It’s no longer a slam-dunk decision.
Consider a more personal scenario, like asking whether switching careers is a good move. The initial response may focus on new opportunities and the appeal of change. It can sound encouraging, almost motivational.
Ask the AI to convince you otherwise, and it begins to surface the uncertainties. ChatGPT might point out the financial risks, the difficulty of entering a new field, and the possibility that the current job has benefits that are easy to overlook. The second answer does not negate the first, but it adds weight to the side that was missing.
ChatGPT is capable of generating multiple lines of reasoning, but it tends to present one at a time. By default, it leans toward being helpful and aligned with the question being asked.
When you explicitly request the opposing view, you are not forcing it to invent something new so much as inviting it to reveal details that didn’t fit with the model’s inclinations.
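For readers who use a chat API rather than the ChatGPT interface, the same two-turn pattern can be sketched in code. This is a minimal, hypothetical illustration: the helper function, the sample history, and the commented-out model call are all assumptions, not part of any official SDK. The key idea is simply to keep the first answer in the message history and append the follow-up prompt, so the model argues against its own earlier reasoning.

```python
def build_counterargument_turn(history, follow_up="Convince me otherwise."):
    """Append the devil's-advocate follow-up to an existing chat history.

    Returns a new message list; the original history is left unchanged.
    """
    return history + [{"role": "user", "content": follow_up}]

# Example history after an initial question and the model's first answer:
history = [
    {"role": "user", "content": "Is it worth paying for this productivity app?"},
    {"role": "assistant", "content": "Yes, it saves time and offers useful features."},
]

messages = build_counterargument_turn(history)
# `messages` now ends with the follow-up prompt. Pass it to your chat API of
# choice, e.g. (illustrative, not a verified call):
# client.chat.completions.create(model="gpt-4o", messages=messages)
```

Because the full history is sent, the counterargument is grounded in the original answer rather than generated from scratch, which is what makes the two responses directly comparable.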
Debate twin
What makes the phrase “convince me otherwise” so effective is how naturally it fits into a conversation. There is no need to structure a complex prompt or specify detailed instructions.
It is a familiar human move, the kind of thing you might say to a colleague when you want to pressure test an idea. ChatGPT responds in kind, shifting from presenting an answer to interrogating it. You start to see where the original reasoning relied on generalizations or skipped over complications.
There is a practical benefit to this approach, especially for everyday decisions. Many people use ChatGPT to think through purchases, plans, or personal choices. A single confident answer can be persuasive simply because it is well written. Asking for a counterargument introduces balance. It forces the system to acknowledge downsides and limitations before you act on its advice.
It also changes how you interpret what you are reading. The first response becomes one side of a discussion rather than the final word. Introducing disagreement, even from the same system, creates friction. That friction encourages you to slow down and weigh the options more carefully.
The approach is not perfect — ChatGPT can sometimes swing too far in the opposite direction. The value comes from comparing the two responses and noticing where they diverge.
The trick is expanding what ChatGPT offers as an answer. The AI lays out a position, then challenges it, giving you a chance to see the strengths and weaknesses side-by-side. A little self-criticism goes a long way to making AI seem less narrow in its usefulness.
Modern online scams operate across multiple platforms, perhaps spanning social media, messaging apps, email and online marketplaces. Google, Meta and Amazon are among 11 tech, retail and payments companies that have signed a new agreement to combat online scams by sharing threat intelligence across platforms, Axios first reported Monday.
The initiative, called the Industry Accord Against Online Scams & Fraud, is designed to improve how companies detect and respond to fraud that spans multiple services. Participants say they will exchange signals, such as scam-linked accounts and fraudulent domains, and coordinate enforcement actions.
By sharing intelligence in near real time, companies hope to identify these scams earlier and stop them before they spread.
The effort reflects how modern scams operate. A victim might encounter a fake celebrity investment ad on social media, move to a messaging app where the scammer builds trust, then face prompts to send money through a fraudulent website, payment app or crypto wallet — spanning multiple companies’ ecosystems.
Google said it now blocks hundreds of millions of scam-related results every day using AI, underscoring how both attackers and defenders are increasingly relying on the same technology. Meta removed more than 159 million scam ads in 2025 and is expanding AI tools to detect impersonation and warn users.
Online scams are growing rapidly, in part because generative AI has lowered the barrier to entry. AI can be used not only to produce realistic phishing emails but also to clone voices and deepfake videos that impersonate executives, public figures and even family members.
The agreement is voluntary and doesn’t create new legal obligations, but it comes amid increased pressure from regulators on tech platforms to address fraud more aggressively. The companies say they will begin building frameworks for reporting and intelligence-sharing, though it’s not yet clear how quickly those systems will be deployed or how effective they will be in practice.
Cove, a Silicon Valley startup that helps workers collaborate while using AI agents, announced Tuesday that its team is joining Microsoft.
Terms of the deal were not disclosed. Cove will shut down its product on April 1.
“When we started Cove, we set out to reimagine how people collaborate with AI,” Cove co-founder and CEO Stephen Chau wrote on LinkedIn. “As model capabilities have accelerated, our conviction in that mission has only grown stronger. We’re thrilled to continue this work at Microsoft, where we’ll have the opportunity to pursue an even bigger vision.”
Cove raised a $6 million seed round in 2024 led by Sequoia Capital. The company built software to turn single-threaded chats with conversational agents into a visual workspace. It later allowed users to create custom AI apps. The startup has fewer than 10 employees, according to LinkedIn.
Chau previously was head of product at Uber Eats before launching Cove in 2023 with Mike Chu and Andy Szybalski. All three previously worked together at Google Maps.
Microsoft is aiming to boost adoption of its Copilot assistant, which remains a relatively small fraction of its commercial user base amid big investments in AI infrastructure. Last week the tech giant unveiled Copilot Cowork, a new AI assistant that can run tasks in the background, create documents, and work across Microsoft 365 apps.
Separately, Microsoft on Tuesday announced a reorg within its Copilot group, unifying its consumer and commercial AI efforts under former Snap executive Jacob Andreou while narrowing the role of Microsoft AI leader Mustafa Suleyman to focus on superintelligence and frontier models.
The Trump administration is loosening restrictions on the sharing of law enforcement information with the CIA and other intelligence agencies, officials said, overriding controls that have been in place for decades to protect the privacy of U.S. citizens.
Government officials said the changes could give the intelligence agencies access to a database containing hundreds of millions of documents — from FBI case files and banking records to criminal investigations of labor unions — that touch on the activities of law-abiding Americans.
Administration officials said they are providing the intelligence agencies with more information from investigations by the FBI, Drug Enforcement Administration and other agencies to combat drug gangs and other transnational criminal groups that the administration has classified as terrorists.
But they have taken these steps with almost no public acknowledgement or notification to Congress. Inside the government, officials said, the process has been marked by a similar lack of transparency, with scant high-level discussion and little debate among government lawyers.
“None of this has been thought through very carefully — which is shocking,” one intelligence official said of the moves to expand information sharing. “There are a lot of privacy concerns out there, and nobody really wants to deal with them.”
A spokesperson for the Office of the Director of National Intelligence, Olivia Coleman, declined to answer specific questions about the expanded information sharing or the legal basis for it.
Instead, she cited some recent public statements by senior administration officials, including one in which the national intelligence director, Tulsi Gabbard, emphasized the importance of “making sure that we have seamless two-way push communications with our law enforcement partners to facilitate that bi-directional sharing of information.”
In the aftermath of the Watergate scandal, revelations that Presidents Lyndon Johnson and Richard Nixon had used the CIA to spy on American anti-war and civil rights activists outraged Americans who feared the specter of a secret police. The congressional reforms that followed reinforced the long-standing ban on intelligence agencies gathering information about the domestic activities of U.S. citizens.
Compared with the FBI and other federal law enforcement organizations, the intelligence agencies operate with far greater secrecy and less scrutiny from Congress and the courts. They are generally allowed to collect information on Americans only as part of foreign intelligence investigations. Exemptions must be approved by the U.S. attorney general and the director of national intelligence. The National Security Agency, for example, can intercept communications between people inside the United States and terror suspects abroad without the probable cause or judicial warrants that are generally required of law enforcement agencies.
Since the terror attacks of Sept. 11, 2001, the expansion of that surveillance authority in the fight against Islamist terrorism has been the subject of often intense debates among the three branches of government.
Word of the Trump administration’s efforts to expand the sharing of law enforcement information with the intelligence agencies was met with alarm by advocates for civil liberties protections.
“The Intelligence Community operates with broad authorities, constant secrecy and little-to-no judicial oversight because it is meant to focus on foreign threats,” Sen. Ron Wyden of Oregon, a senior Democrat on the Senate Select Committee on Intelligence, said in a statement to ProPublica.
Giving the intelligence agencies wider access to information on the activities of U.S. citizens not suspected of any crime “puts Americans’ freedoms at risk,” the senator added. “The potential for abuse of that information is staggering.”
Most of the current and former officials interviewed for this story would speak only on condition of anonymity because of the secrecy of the matter and because they feared retaliation for criticizing the administration’s approach.
Virtually all those officials said they supported the goal of sharing law enforcement information more effectively, so long as sensitive investigations and citizens’ privacy were protected. But after years in which Republican and Democratic administrations weighed those considerations deliberately — and made little headway with proposed reforms — officials said the Trump administration has pushed ahead with little regard for those concerns.
“There will always be those who simply want to turn on a spigot and comingle all available information, but you can’t just flip a switch — at least not if you want the government to uphold the rule of law,” said Russell Travers, a former acting director of the National Counterterrorism Center who served in senior intelligence roles under both Republican and Democratic administrations.
The 9/11 attacks — which exposed the CIA’s failure to share intelligence with the FBI even as Al Qaida moved its operatives into the United States — led to a series of reforms intended to transform how the government managed terrorism information.
A centerpiece of that effort was the establishment of the NCTC, as the counterterrorism center is known, to collect and analyze intelligence on foreign terrorist groups. The statutes that established the NCTC explicitly prohibit it from collecting information on domestic terror threats.
National security officials have spent much less time trying to remedy what they have acknowledged are serious deficiencies in the government’s management of intelligence on organized crime groups.
In 2011, President Barack Obama noted those problems in issuing a new national strategy to “build, balance and integrate the tools of American power to combat transnational organized crime.” Although the Obama plan stressed the need for improved information-sharing, it led to only minimal changes.
President Donald Trump has seized on the issue with greater urgency. He has also declared his intention to improve information-sharing across the government, signing an executive order to eliminate “information silos” of unclassified information.
More consequentially, he went on to brand more than a dozen Latin American drug mafias and criminal gangs as terrorist organizations.
The administration has used those designations to justify more extreme measures against the criminal groups. Since last year, it has killed at least 148 suspected drug smugglers with missile strikes in the Caribbean and the eastern Pacific, steps that many legal experts have denounced as violations of international law.
Some administration officials have argued that the terror designations entitle intelligence agencies to access all law enforcement case files related to the Sinaloa Cartel, the Jalisco New Generation Cartel and other gangs designated by the State Department as foreign terrorist organizations.
The first criterion for those designations is that a group must “be a foreign organization.” Yet unlike Islamist terror groups such as al-Qaida or al-Shabab, Latin drug mafias and criminal gangs like MS-13 have a large and complex presence inside the United States. Their members are much more likely to be U.S. citizens and to live and operate here.
Those steps were seen by some intelligence experts as potentially opening the door for the CIA and other agencies to monitor Americans who support antifa in violation of their free speech rights. The approach also echoed justifications that both Johnson and Nixon used for domestic spying by the CIA: that such investigations were needed to determine whether government critics were being supported by foreign governments.
The wider sharing of law enforcement case files is also being driven by the administration’s abrupt decision to disband the Justice Department office that for decades coordinated the work of different agencies on major drug trafficking and organized crime cases. That office, the Organized Crime Drug Enforcement Task Force, was abruptly shut down on Sept. 30 as the Trump administration was setting up a new network of Homeland Security Task Forces designed by the White House homeland security adviser, Stephen Miller.
The new task forces, which were first described in detail by ProPublica last year, are designed to refocus federal law enforcement agencies on what Miller and other officials have portrayed as an alarming nexus of immigration and transnational crime. The reorganization also gives the White House and the Department of Homeland Security new authority to oversee transnational crime investigations, subordinating the DEA and federal prosecutors, who were central to the previous system.
That reorganization has set off a struggle over the control of OCDETF’s crown jewel, a database of some 770 million records that is the only central, searchable repository of drug trafficking and organized crime case files in the federal government.
Until now, the records of that database, which is called Compass, have only been accessible to investigators under elaborate rules agreed to by the more than 20 agencies that shared their information. The system was widely viewed as cumbersome, but officials said it also encouraged cooperation among the agencies while protecting sensitive case files and U.S. citizens’ privacy.
Although the Homeland Security Task Forces took possession of the Compass system when their leadership moved into OCDETF’s headquarters in suburban Virginia, the administration is still deciding how it will operate that database, officials said.
However, officials said, intelligence agencies and the Defense Department have already taken a series of technical steps to connect their networks to Compass so they can access its information if they are permitted to do so.
The White House press office did not respond to questions about how the government will manage the Compass database and whether it will remain under the control of the Homeland Security Task Forces.
The National Counterterrorism Center, under its new director, Joe Kent, has been notably forceful in seeking to manage the Compass system, several officials said. Kent, a former Army Special Forces and CIA paramilitary officer who twice ran unsuccessfully for Congress in Washington state, was previously a top aide to the national intelligence director, Tulsi Gabbard.
The FBI, DEA and other law enforcement agencies have strongly opposed the NCTC effort, the officials said. In internal discussions, they added, the law enforcement agencies have argued that it makes no sense for an intelligence agency to manage sensitive information that comes almost entirely from law enforcement.
“The NCTC has taken a very aggressive stance,” one official said. “They think the agencies should be sharing everything with them, and it should be up to them to decide what is relevant and what U.S. citizen information they shouldn’t keep.”
The FBI declined to comment in response to questions from ProPublica. A DEA spokesperson also would not discuss the agency’s actions or views on the wider sharing of its information with the intelligence community. But in a statement, the spokesperson added, “DEA is committed to working with our IC and law enforcement partners to ensure reliable information-sharing and strong coordination to most effectively target the designated cartels.”
Even with the Trump administration’s expanded definition of what might constitute terrorist activity, the information on terror groups accounts for only a small fraction of the records in the Compass system, current and former officials said.
The records include State Department visa records, some files of U.S. Postal Service inspectors, years of suspicious transaction reports from the Treasury Department and call records from the Bureau of Prisons.
Investigative files of the FBI, DEA and other law enforcement agencies often include information about witnesses, associates of suspects and others who have never committed any crimes, officials said.
“You have witness information, target information, bank account information,” the former OCDETF director, Thomas Padden, said in an interview. “I can’t think of a dataset that would not be a concern if it were shared without some controls. You need checks and balances, and it’s not clear to me that those are in place.”
Officials familiar with the interagency discussions said NCTC and other intelligence officials have insisted they are interested only in terror-related information and that they have electronic systems that can appropriately filter out information on U.S. persons.
But FBI and other law enforcement agencies have challenged those arguments, officials said, contending that the NCTC proposal would almost inevitably breach privacy laws and imperil sensitive case information without necessarily strengthening the fight against transnational criminals.
Already, NCTC officials have been pressing the FBI and DEA to share all the information they have on the criminal groups that have been designated as terrorist organizations, officials said.
The DEA, which had previously earned a reputation for jealously guarding its case files, authorized the transfer of at least some of those files, officials said, adding to pressure on the FBI to do the same.
Administration lawyers have argued that such information sharing is authorized by the Intelligence Reform and Terrorism Prevention Act of 2004, the law that reorganized intelligence activities after 9/11. Officials have also cited the 2001 Patriot Act, which gives law enforcement agencies power to obtain financial, communications and other information on a subject they certify as having ties to terrorism.
The central role of the NCTC in collecting and analyzing terrorism information specifically excludes “intelligence pertaining exclusively to domestic terrorists and domestic counterterrorism.” But that has not stopped Kent or his boss, intelligence director Gabbard, from stepping over red lines that their predecessors carefully avoided.
In October, Kent drew sharp criticism from the FBI after he examined files from the bureau’s ongoing investigation of the assassination of Charlie Kirk, the right-wing activist. That episode was first reported by The New York Times.
Last month, Gabbard appeared to lead a raid at which the FBI seized truckloads of 2020 presidential voting records from an election center in Fulton County, Georgia. Officials later said she was sent by Trump but did not oversee the operation.
In years past, officials said, the possibility of crossing long-settled legal boundaries on citizens’ privacy would have precipitated a flurry of high-level meetings, legal opinions and policy memos. But almost none of that internal discussion has taken place, they said.
“We had lengthy interagency meetings that involved lawyers, civil liberties, privacy and operational security types to ensure that we were being good stewards of information and not trampling all over U.S. persons’ privacy rights,” said Travers, the former NCTC director.
When administration officials abruptly moved to close down OCDETF and supplant it with the Homeland Security Task Forces network, they seemed to have little grasp of the complexities of such a transition, several people involved in the process said.
The agencies that contributed records to OCDETF were ordered to sign over their information to the task forces, but they did so without knowing if the system’s new custodians would observe the conditions under which the files were shared.
Nor were they encouraged to ask, officials said.
While both the FBI and DEA have objected to a change in the protocols, officials said smaller agencies that contributed some of their records to the OCDETF system have been “reluctant to push back too hard,” as one of them put it.
The NCTC, which faced budget cuts during the Biden administration, has been among those most eager to serve the new Homeland Security Task Forces. To that end, it set up a new fusion center to promote “two-way intelligence sharing of actionable information between the intelligence community and law enforcement,” as Gabbard described it.
The expanded sharing of law enforcement and intelligence information on trafficking groups is also a key goal of the Pentagon’s new Tucson, Arizona-based Joint Interagency Task Force-Counter Cartel. In announcing the task force’s creation last month, the U.S. Northern Command said it would work with the Homeland Security Task Forces “to ensure we are sharing all intelligence between our Department of War, law enforcement and Intelligence Community partners.”
In the last months of the Biden administration, a somewhat similar proposal was put forward by the then-DEA administrator, Anne Milgram. That plan involved setting up a pair of centers where DEA, CIA and other agencies would pool information on major Mexican drug trafficking groups.
At the time, one particularly strong objection came from the Defense Department’s counternarcotics and stabilization office, officials said. The sharing of such law enforcement information with the intelligence community, an official there noted, could violate laws prohibiting the CIA from gathering intelligence on Americans inside the United States.
The Pentagon, he warned, would want no part of such a plan.
Nvidia reveals hardware for use in orbital data centers
Space-1 Vera Rubin Module will offer huge increases in power and efficiency, with the RTX PRO 6000 Blackwell Server Edition GPU back on Earth to process the data
Six space companies have already signed up to work with Nvidia
Nvidia has laid out its plans to help launch the next generation of “space innovation” – namely through boosting data centers in space with the latest AI capabilities.
At Nvidia GTC 2026, the company revealed how its hardware is helping partners and “space operators” become more effective and powerful, particularly for operations such as disaster response, climate and weather predictions and more.
This includes the Space-1 Vera Rubin Module, Nvidia’s latest tool for orbital data centers (ODCs) running LLMs and advanced foundation models. The module includes a Rubin GPU delivering up to 25 times more AI compute than the H100, plus a high-bandwidth interconnect to process massive data streams from space-based instruments in real time.
Looking ahead
Nvidia notes such power increases will allow for space-based inferencing, with its IGX Thor and Jetson Orin platforms offering energy-efficient, high-performance AI inference, image sensing and accelerated data processing to enable true edge computing on orbit in a compact module.
It will also help AI applications operate seamlessly, “from ground to space, and space to space,” while supporting increasingly complex missions as ODCs become more widespread.
Elsewhere, Nvidia’s data center platforms back on planet Earth, including the RTX PRO 6000 Blackwell Server Edition GPU, will provide high-throughput, on-demand processing for geospatial intelligence, delivering up to 100 times faster performance versus legacy CPU-based batch systems when analyzing massive imagery archives such as weather data.
All of this should help unlock processes such as on-orbit analytics, autonomous scientific discovery and rapid insight generation, pushing space technology even further. Six commercial space companies are understood to have already deployed the Space-1 Vera Rubin Module.
“Space computing, the final frontier, has arrived. As we deploy satellite constellations and explore deeper into space, intelligence must live wherever data is generated,” said Jensen Huang, founder and CEO of Nvidia.
“AI processing across space and ground systems enables real-time sensing, decision-making and autonomy, transforming orbital data centers into instruments of discovery and spacecraft into self-navigating systems. With our partners, we’re extending Nvidia beyond our planet — boldly taking intelligence where it’s never gone before.”
Nvidia opened its GTC conference with a keynote by CEO Jensen Huang, revealing the company’s latest tech. Among the raft of the company’s AI developments, gamers were treated to a preview of the upcoming version of its AI-powered upscaling and optimization technology, DLSS (Deep Learning Super Sampling), touted as the “biggest breakthrough in computer graphics”.
Nvidia published a video illustrating how DLSS 5 can enhance graphics in Resident Evil Requiem, Starfield and other games, showing before-and-after takes. But gamers weren’t thrilled. In fact, the response to DLSS 5 resembles more of a collective backlash, replete with memes, ridicule and outrage.
Gamers were quick to point out that DLSS 5 transformed the original graphics into something vastly different. Some called the visuals “AI slop” because they look like “yassified” AI-generated filters.
Many worry that DLSS 5 could deviate from a creator’s specific artistic vision. Critics also fear that if this technology becomes the industry standard, video game graphics might start to look the same, losing their unique visual identity.
“Everything about this is a betrayal of these game’s artistry,” said YouTuber The Sphere Hunter in a post on X Monday. “Painting over handcrafted, intentional 3D art with shiny, wrinkly, sunken-in, porous, puckered, fraudulent, filtered nonsense is deeply disrespectful. If you want this, just watch gen-AI videos all day.”
Countless memes mocking the tech’s exaggerated features flooded the internet. Others on social media parodied the effects DLSS 5 could produce in other games.
In a Q&A on Tuesday, Huang addressed the backlash from gamers, calling them “completely wrong.” Huang underlined that DLSS 5 “enhances and adds generative capability, but it doesn’t change the artistic control” and that “it’s in the direct control of the game developer.”
The team at Digital Foundry, which specializes in game technology and hardware reviews, called it “disruptive and transformative” and was generally positive about it, though the team noted some hiccups.
“[The images] looked a little bit uncanny, I would say, but definitely the overall portrayal of those characters is much more sophisticated,” said Oliver Mackenzie, video producer and writer for Digital Foundry.
Bethesda’s official X account replied to comments from members of Digital Foundry about Starfield and The Elder Scrolls IV: Oblivion Remastered, both published by Bethesda.
“This is a very early look, and our art teams will be further adjusting the lighting and final effect to look the way we think works best for each game. This will all be under our artists’ control, and totally optional for players,” the publisher said.
DLSS 5 is set to be released sometime in the fall.
What is DLSS?
Nvidia first released its DLSS tech back in 2018 with its RTX 2080 card: The RTX architecture introduced the Tensor cores, which are essential for accelerating the calculations used by the DLSS AI. The deep learning technology was designed to upscale images and video from low resolution in real time to achieve higher frame rates.
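Stripped of the deep-learning parts, the basic idea of upscaling can be sketched in a few lines of Python. This is a naive nearest-neighbor scaler for illustration only, not Nvidia’s algorithm, and the function name is ours; DLSS replaces this duplicate-the-pixels step with a neural network that reconstructs plausible detail.

```python
# Illustrative nearest-neighbor upscaling: the game renders a small
# frame, then each pixel is enlarged into a scale-by-scale block.
def upscale_nearest(pixels, scale):
    """pixels: 2D list of pixel values; scale: integer enlargement factor."""
    out = []
    for row in pixels:
        wide = [p for p in row for _ in range(scale)]   # stretch horizontally
        out.extend(list(wide) for _ in range(scale))    # stretch vertically
    return out

low_res = [[1, 2],
           [3, 4]]
high_res = upscale_nearest(low_res, 2)
# high_res is 4x4: each source pixel becomes a 2x2 block
```

Rendering at low resolution and upscaling like this is what buys the higher frame rates; the AI’s job is making the enlarged frame look as if it had been rendered natively.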
Gamers weren’t impressed at first, but later versions of the technology did perform better in games that supported it. DLSS 4, released last year and tweaked to 4.5 as of January, significantly improved detail rendering, reduced motion artifacts, boosted frame rates and generated more realistic lighting via path tracing (which incorporates interactions with ray-traced lighting).
What does DLSS 5 do?
DLSS 5 works a bit differently than previous versions of the technology. According to Nvidia, DLSS 5 shifts from processing simple pixels to understanding 3D elements. By deconstructing characters into specific components — such as skin, hair and clothing — the AI can render them more consistently. This results in faster performance and much more realistic details, especially for textures and lighting.
Game developers control how DLSS 5 enhances images and to what degree, ensuring it matches the game’s aesthetic. The demo video showcased some positive enhancements, but others looked like sweeping changes to the characters and the environment.
Which games will support DLSS 5 at launch?
On Monday, Nvidia released a list of games slated to support DLSS 5:
AION 2
Assassin’s Creed Shadows
Black State
Cinder City
Delta Force
Hogwarts Legacy
Justice
Naraka: Bladepoint
NTE: Neverness to Everness
Phantom Blade Zero
Resident Evil Requiem
Sea of Remnants
Starfield
The Elder Scrolls IV: Oblivion Remastered
Where Winds Meet
What cards will support DLSS 5?
Nvidia has yet to provide a list of GPUs that will support the new technology. In an FAQ, the company says it will release a list of supported cards closer to its release.
This small charger fits snugly against the back of your iPhone and snaps into place with magnets that hold it there whether you’re out on a walk, making a phone call, or taking a photo. Anker designed this Nano Qi2 model, priced at $46 (was $55), as a 5,000-milliamp-hour power bank that measures 4.02 by 2.78 by 0.34 inches and weighs only 4.3 ounces, so it doesn’t add much bulk.
Wireless charging reaches 15 watts using the Qi2 standard, which is enough to charge an iPhone in about an hour depending on where you start. You can get a similar official MagSafe Battery from Apple, but it costs nearly twice as much at $99 and only delivers about 12 watts wirelessly.
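That “about an hour” figure passes a back-of-the-envelope check. The battery capacity and efficiency numbers below are our assumptions for illustration, not specifications from Anker or Apple:

```python
# Rough charge-time estimate. Capacity and efficiency are assumed
# figures, not manufacturer specs.
PHONE_BATTERY_WH = 13.0   # a typical recent iPhone battery, roughly
CHARGE_POWER_W = 15.0     # the Qi2 wireless rate quoted above
EFFICIENCY = 0.70         # wireless charging loses some energy as heat

hours = PHONE_BATTERY_WH / (CHARGE_POWER_W * EFFICIENCY)
print(f"~{hours:.1f} hours for a full charge")  # prints "~1.2 hours for a full charge"
```

Starting from a partially drained battery, as most people do, shaves that closer to the hour mark.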
The Anker unit combines graphene layers and built-in temperature sensors to keep the charging surface from overheating; even after continuous usage, it remains below 104 degrees Fahrenheit. If you flip the pack over, you’ll find a USB-C port capable of handling up to 20 watts for faster wired charging of the pack or powering your earphones and other small devices. A short cable is included, as is a two-year warranty that covers everyday knocks and bumps.
It works best with the iPhone 16, 17 and Air models, whose magnets align properly with the pack, but it also works well with other phones that support magnetic wireless charging. Recharging the pack takes less than two hours with any standard USB-C adapter, so you’ll be ready for the next journey. Anyone who travels light and prefers not to carry huge bricks will notice the difference after just one full charge cycle: it’s like a small insurance policy against low-battery notifications, thanks to the combination of size, speed and savings.
Same-day delivery apparently isn’t fast enough for some Amazon shoppers. The retail giant said on Tuesday it’s adding new shipping options that will get products to front doors within a one- or three-hour window.
The company said in its announcement that the one-hour option is available in hundreds of cities across the US, while the three-hour option is now live in more than 2,000 areas. Amazon’s web page at amazon.com/getitfast shows whether those options are available to shoppers for their location. More than 90,000 products will be available for those shipping windows, the company said.
For those who can’t get those services (including the author of this post, who lives between Austin and San Antonio in Texas), a message will display: “3-hour delivery is currently unavailable. Check back at a later time or shop products with Same-Day delivery below.”
Pricing for the faster delivery options is not cheap: It’ll cost you $20 for one-hour delivery and $15 for three-hour delivery for those without an Amazon Prime account, or $10 and $5 for customers who subscribe to Prime.
In an episode of Learn and Be Curious with Doug Herrington, a podcast hosted by Amazon’s CEO of worldwide stores, Kandace Kapps, director of the company’s same-day strategy team, spoke in more detail about the challenges of fast shipping. Kapps discussed shifts in customer buying habits over the last few years, such as more people buying household essentials like toilet paper on Amazon.
She said that Amazon can deliver so quickly by placing same-day delivery hubs close to customers in metro areas and by getting products ready to ship within 15 minutes, aided by warehouse robots.
“I think customers are going to continue to get magically surprised by how fast we can deliver to their doorstep,” Kapps said.
Herrington said fast shipping increases sales: “When we speed up the service, the probability that somebody buys a product from us goes up.”
Other retailers, including Walmart, have been adding same-day delivery options or exploring other ways to speed up shipping times to compete with Amazon.
Removing buyers’ moments of hesitation
Part of Amazon’s strategy, which has involved a massive buildout of locations, deployment of thousands of trucks, deals with other delivery services and investment in logistics software, is actually pretty simple: being there when people need last-minute items or make impulse buys.
“It’s about removing the last moment where you would’ve reconsidered the purchase,” said Stephanie Carls, retail insights expert at coupon and promotional-code website RetailMeNot, a sibling site of CNET. “It changes how you shop, not just how fast you get things.”
Carls said that Amazon’s super-fast delivery is removing the timeframe when people might change their minds about a purchase.
“There used to be a gap between deciding to buy something and actually having it. That’s when you’d price check, rethink it, or decide you didn’t need it after all,” she said. “This closes that gap.”
The retail expert said that competitors, including Walmart and Target, have been speeding up delivery times in some markets. Still, they’re not matching Amazon’s scale or product range at those speeds or levels of consistency.
“And that’s what starts to make everyone else feel slow,” Carls said. “Amazon’s advantage is how tightly connected its technology, inventory and delivery networks are, which makes this level of speed more repeatable.”
It might be time to rethink what it means to be sick as a dog. On Tuesday, Fi, a smart pet technology company, announced a new AI-powered chatbot to help owners stay on top of their dog’s health using a blend of personal information and generalized dog breed data.
The AI agent, which the company is calling Fi Intelligence, is integrated directly into the Fi app. It has access to all the information gathered about your dog across the entire suite of Fi products, including the Fi Series 3 Plus and Fi Mini dog collars, as well as information and documents uploaded by the pet owner. The service is for dogs only (not cats, rabbits or other pets).
If you already own a Fi smart collar, existing data will be incorporated into the AI agent’s dataset to help it answer your questions.
When creating Fi Intelligence, the company identified a multitude of common questions that dog owners have, including whether their animal friend is walking or sleeping enough, or scratching more than usual. The chatbot was created to help owners find answers to these questions quickly and easily, according to Fi.
Fi designed its agent to answer these questions using a mix of general information about a dog’s breed, personal information and biometric data gathered by Fi smart pet collars.
This mock conversation between a pet owner and the Fi Intelligence AI agent shows how the chatbot uses detailed biometric data and uploaded documents to answer questions. Fi
Pet owners can ask the chatbot questions in plain English and get back detailed responses. Fi Intelligence is equipped to answer general questions, contrast your dog’s current data to previous time periods and compare your dog’s data to other dogs of the same breed.
Fi says its chatbot is different from general-purpose AI agents because it has been trained on a proprietary dataset containing “the largest repository of real-world canine activity, sleep and behavior data in the world.”
Fi Intelligence doesn’t replace a trip to the vet — and the company stresses it’s not supposed to. Rather, the agent is supposed to grant owners “informed confidence” about their dog’s health and can help them “show up [to the vet] with specific, documented observations drawn from weeks of continuous data.”
“The strongest signal from our beta was that owners aren’t using this to replace their vet,” said Fi’s Vice President of Product Darrell Stone. “They’re using it to show up better prepared.”
According to Fi, the Fi Intelligence integration will provide the most complete dog health profile available in the app so far. Fi Intelligence is available to all Fi members immediately.