AI for Particle Physics: Searching for Anomalies

In 1930, a young physicist named Carl D. Anderson was tasked by his mentor with measuring the energies of cosmic rays—particles arriving at high speed from outer space. Anderson built an improved version of a cloud chamber, a device that visually records the trajectories of particles. In 1932, he saw the track of a particle that confusingly combined the properties of protons and electrons. “A situation began to develop that had its awkward aspects,” he wrote many years after winning a Nobel Prize at the age of 31. Anderson had accidentally discovered antimatter.

Four years after his first discovery, he codiscovered another elementary particle, the muon. This one prompted the physicist I. I. Rabi to ask, “Who ordered that?”

Carl Anderson [top] sits beside the magnet cloud chamber he used to discover the positron. His cloud-chamber photograph [bottom] from 1932 shows the curved track of a positron, the first known antimatter particle. Caltech Archives & Special Collections

Over the decades since then, particle physicists have built increasingly sophisticated instruments of exploration. At the apex of these physics-finding machines sits the Large Hadron Collider, which in 2022 started its third operational run. This underground ring, 27 kilometers in circumference and straddling the border between France and Switzerland, was built to slam subatomic particles together at near light speed and test deep theories of the universe. Physicists from around the world turn to the LHC hoping to find something new, though they’re not sure what.

It’s the latest manifestation of a rich tradition. Throughout the history of science, new instruments have prompted hunts for the unexpected. Galileo Galilei built telescopes and found Jupiter’s moons. Antonie van Leeuwenhoek built microscopes and noticed “animalcules, very prettily a-moving.” And still today, people peer through lenses and pore through data in search of patterns they hadn’t hypothesized. Nature’s secrets don’t always come with spoilers, and so we gaze into the unknown, ready for anything.

But novel, fundamental aspects of the universe are growing less forthcoming. In a sense, we’ve plucked the lowest-hanging fruit. We know to a good approximation what the building blocks of matter are. The Standard Model of particle physics, which describes the currently known elementary particles, has been in place since the 1970s. Nature can still surprise us, but it typically requires larger or finer instruments, more detailed or expansive data, and faster or more flexible analysis tools.

Those analysis tools include a form of artificial intelligence (AI) called machine learning. Researchers train complex statistical models to find patterns in their data, patterns too subtle for human eyes to see, or too rare for a single human to encounter. At the LHC, which smashes together protons to create immense bursts of energy that decay into other short-lived particles of matter, a theorist might predict some new particle or interaction and describe what its signature would look like in the LHC data, often using a simulation to create synthetic data. Experimentalists would then collect petabytes of measurements and run a machine learning algorithm that compares them with the simulated data, looking for a match. Usually, they come up empty. But maybe new algorithms can peer into corners they haven’t considered.
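
To make that supervised workflow concrete, here is a minimal sketch in Python: a classifier learns to separate simulated background from simulated signal, then ranks real events by how signal-like they look. Every feature, event count, and number here is invented for illustration; nothing is drawn from a real analysis.

```python
# Hedged sketch of a supervised search: train on simulated events, then score
# real data. All numbers and features here are placeholders, not LHC data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Stand-ins for simulated events: 4 features each (e.g., energies, angles).
background = rng.normal(0.0, 1.0, size=(5000, 4))  # Standard Model processes
signal = rng.normal(0.5, 1.0, size=(500, 4))       # hypothesized new particle

X = np.vstack([background, signal])
y = np.array([0] * len(background) + [1] * len(signal))
clf = GradientBoostingClassifier().fit(X, y)

# "Real" measurements: rank each event by how signal-like it appears.
data = rng.normal(0.0, 1.0, size=(1000, 4))
scores = clf.predict_proba(data)[:, 1]
print("most signal-like events:", np.argsort(scores)[-5:])
```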

A New Path for Particle Physics

“You’ve heard probably that there’s a crisis in particle physics,” says Tilman Plehn, a theoretical physicist at Heidelberg University, in Germany. At the LHC and other high-energy physics facilities around the world, the experimental results have failed to yield insights on new physics. “We have a lot of unhappy theorists who thought that their model would have been discovered, and it wasn’t,” Plehn says.

Tilman Plehn

“We have a lot of unhappy theorists who thought that their model would have been discovered, and it wasn’t.”

Gregor Kasieczka, a physicist at the University of Hamburg, in Germany, recalls the field’s enthusiasm when the LHC began running in 2008. Back then, he was a young graduate student and expected to see signs of supersymmetry, a theory predicting heavier versions of the known matter particles. The presumption was that “we turn on the LHC, and supersymmetry will jump in your face, and we’ll discover it in the first year or so,” he tells me. Eighteen years later, supersymmetry remains in the theoretical realm. “I think this level of exuberant optimism has somewhat gone.”

The result, Plehn says, is that models for all kinds of things have fallen in the face of data. “And I think we’re going on a different path now.”

That path involves a kind of machine learning called unsupervised learning. In unsupervised learning, you don’t teach the AI to recognize your specific prediction—signs of a particle with this mass and this charge. Instead, you might teach it to find anything out of the ordinary, anything interesting—which could indicate brand new physics. It’s the equivalent of looking with fresh eyes at a starry sky or a slide of pond scum. The problem is, how do you automate the search for something “interesting”?

Going Beyond the Standard Model

The Standard Model leaves many questions unanswered. Why do matter particles have the masses they do? Why do neutrinos have mass at all? Where is the particle for transmitting gravity, to match those for the other forces? Why do we see more matter than antimatter? Are there extra dimensions? What is dark matter—the invisible stuff that makes up most of the universe’s matter and that we assume to exist because of its gravitational effect on galaxies? Answering any of these questions could open the door to new physics, or fundamental discoveries beyond the Standard Model.

The Large Hadron Collider at CERN accelerates protons to near light speed before smashing them together in hopes of discovering “new physics.”

CERN

“Personally, I’m excited for portal models of dark sectors,” Kasieczka says, as if reading from a Marvel film script. He asks me to imagine a mirror copy of the Standard Model out there somewhere, sharing only one “portal” particle with the Standard Model we know and love. It’s as if this portal particle has a second secret family.

Kasieczka says that in the LHC’s third run, scientists are splitting their efforts roughly evenly between measuring more precisely what they know to exist and looking for what they don’t know to exist. In some cases, the former could enable the latter. The Standard Model predicts certain particle properties and the relationships between them. For example, it correctly predicted a property of the electron called the magnetic moment to about one part in a trillion. And precise measurements could turn up internal inconsistencies. “Then theorists can say, ‘Oh, if I introduce this new particle, it fixes this specific problem that you guys found. And this is how you look for this particle,’” Kasieczka says.

An image from a single collision at the LHC shows an unusually complex spray of particles, flagged as anomalous by machine learning algorithms.

CERN

What’s more, the Standard Model has occasionally shown signs of cracks. Certain particles containing bottom quarks, for example, seem to decay into other particles in unexpected ratios. Plehn finds the bottom-quark incongruities intriguing. “Year after year, I feel they should go away, and they don’t. And nobody has a good explanation,” he says. “I wouldn’t even know who I would shout at”—the theorists or the experimentalists—“like, ‘Sort it out!’”

Exasperation isn’t exactly the right word for Plehn’s feelings, however. Physicists feel gratified when measurements reasonably agree with expectations, he says. “But I think deep down inside, we always hope that it looks unreasonable. Everybody always looks for the anomalous stuff. Everybody wants to see the standard explanation fail. First, it’s fame”—a chance for a Nobel—“but it’s also an intellectual challenge, right? You get excited when things don’t work in science.”

How Unsupervised AI Can Probe for New Physics

Now imagine you had a machine to find all the times things don’t work in science, to uncover all the anomalous stuff. That’s how researchers are using unsupervised learning. One day over ice cream, Plehn and a friend who works at the software company SAP began discussing autoencoders, one type of unsupervised learning algorithm. “He tells me that autoencoders are what they use in industry to see if a network was hacked,” Plehn remembers. “You have, say, a hundred computers, and they have network traffic. If the network traffic [to one computer] changes all of a sudden, the computer has been hacked, and they take it offline.”


In the LHC’s central data-acquisition room [top], incoming detector data flows through racks of electronics and field-programmable gate array (FPGA) cards [bottom] that decide which collision events to keep.

Fermilab/CERN

Autoencoders are neural networks that start with an input—it could be an image of a cat, or the record of a computer’s network traffic—and compress it, like making a tiny JPEG or MP3 file, and then decompress it. Engineers train them to compress and decompress data so that the output matches the input as closely as possible. Eventually a network becomes very good at that task. But if the data includes some items that are relatively rare—such as white tigers, or hacked computers’ traffic—the network performs worse on these, because it has less practice with them. The difference between an input and its reconstruction therefore signals how anomalous that input is.
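
Here is a minimal sketch of that idea in PyTorch. The network shape, feature count, and training data are placeholders rather than any experiment’s actual model; the point is only that the per-event reconstruction error doubles as an anomaly score.

```python
# Toy autoencoder anomaly detector: compress, decompress, and score events by
# how badly they reconstruct. All dimensions and data are illustrative.
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, n_features=64, n_latent=8):
        super().__init__()
        # Squeeze the input through a narrow bottleneck...
        self.encoder = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(),
                                     nn.Linear(32, n_latent))
        # ...then try to rebuild the original input from it.
        self.decoder = nn.Sequential(nn.Linear(n_latent, 32), nn.ReLU(),
                                     nn.Linear(32, n_features))

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = AutoEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Train on mostly "normal" events so the network reconstructs them well.
normal_events = torch.randn(10_000, 64)  # stand-in for detector features
for _ in range(20):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(normal_events), normal_events)
    loss.backward()
    opt.step()

# Rare event types got little practice, so they reconstruct poorly:
# the reconstruction error is the anomaly score.
with torch.no_grad():
    new_events = torch.randn(100, 64)
    scores = ((model(new_events) - new_events) ** 2).mean(dim=1)
    print("five most anomalous events:", scores.topk(5).indices.tolist())
```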

“This friend of mine said, ‘You can use exactly our software, right?’” Plehn remembers. “‘It’s exactly the same question. Replace computers with particles.’” The two imagined feeding the autoencoder signatures of particles from a collider and asking: Are any of these particles not like the others? Plehn continues: “And then we wrote up a joint grant proposal.”

It’s not a given that AI will find new physics. Even learning what counts as interesting is a daunting hurdle. Beginning in the 1800s, men in lab coats delegated data processing to women, whom they saw as diligent and detail-oriented. Women annotated photos of stars, acting as human “computers.” In the 1950s, women were trained to scan bubble chambers, which recorded particle trajectories as lines of tiny bubbles in fluid. Physicists didn’t explain to them the theory behind the events, only what to look for based on lists of rules.

But, as the Harvard science historian Peter Galison writes in Image and Logic: A Material Culture of Physics, his influential account of how physicists’ tools shape their discoveries, the task was “subtle, difficult, and anything but routinized,” requiring “three-dimensional visual intuition.” He goes on: “Even within a single experiment, judgment was required—this was not an algorithmic activity, an assembly line procedure in which action could be specified fully by rules.”

Gregor Kasieczka

“We are not looking for flying elephants but instead a few extra elephants than usual at the local watering hole.”

Over the last decade, though, one thing we’ve learned is that AI systems can, in fact, perform tasks once thought to require human intuition, such as mastering the ancient board game Go. So researchers have been testing AI’s intuition in physics. In 2019, Kasieczka and his collaborators announced the LHC Olympics 2020, a contest in which participants submitted algorithms to find anomalous events in three sets of (simulated) LHC data. Some teams correctly found the anomalous signal in one dataset, but some falsely reported one in the second set, and they all missed it in the third. In 2020, a research collective called Dark Machines announced a similar competition, which drew more than 1,000 submissions of machine learning models. Decisions about how to score them led to different rankings, showing that there’s no single best way to explore the unknown.

Another way to test unsupervised learning is to play revisionist history. In 1995, a particle dubbed the top quark turned up at the Tevatron, a particle accelerator at the Fermi National Accelerator Laboratory (Fermilab), in Illinois. But what if it actually hadn’t? Researchers applied unsupervised learning to LHC data collected in 2012, pretending they knew almost nothing about the top quark. Sure enough, the AI revealed a set of anomalous events that were clustered together. Combined with a bit of human intuition, they pointed toward something like the top quark.

Georgia Karagiorgi

“An algorithm that can recognize any kind of disturbance would be a win.”

That exercise underlines the fact that unsupervised learning can’t replace physicists just yet. “If your anomaly detector detects some kind of feature, how do you get from that statement to something like a physics interpretation?” Kasieczka says. “The anomaly search is more a scouting-like strategy to get you to look into the right corner.” Georgia Karagiorgi, a physicist at Columbia University, agrees. “Once you find something unexpected, you can’t just call it quits and be like, ‘Oh, I discovered something,’” she says. “You have to come up with a model and then test it.”

Kyle Cranmer, a physicist and data scientist at the University of Wisconsin-Madison who played a key role in the discovery of the Higgs boson particle in 2012, also says that human expertise can’t be dismissed. “There’s an infinite number of ways the data can look different from what you expected,” he says, “and most of them aren’t interesting.” Physicists might be able to recognize whether a deviation suggests some plausible new physical phenomenon, rather than just noise. “But how you try to codify that and make it explicit in some algorithm is much less straightforward,” Cranmer says. Ideally, the guidelines would be general enough to exclude the unimaginable without eliminating the merely unimagined. “That’s gonna be your Goldilocks situation.”

In his 1987 book How Experiments End, Harvard’s Galison writes that scientific instruments can “import assumptions built into the apparatus itself.” He tells me about a 1973 experiment that looked for a phenomenon called neutral currents, signaled by the absence of a so-called heavy electron, now known as the muon. One team initially used a trigger left over from previous experiments, which recorded events only if they produced those heavy electrons—even though neutral currents, by definition, produce none. As a result, for some time the researchers missed the phenomenon and wrongly concluded that it didn’t exist. Galison says that the physicists’ design choice “allowed the discovery of [only] one thing, and it blinded the next generation of people to this new discovery. And that is always a risk when you’re being selective.”

How AI Could Miss—or Fake—New Physics

I ask Galison if by automating the search for interesting events, we’re letting the AI take over the science. He rephrases the question: “Have we handed over the keys to the car of science to the machines?” One way to alleviate such concerns, he tells me, is to generate test data to see if an algorithm behaves as expected—as in the LHC Olympics. “Before you take a camera out and photograph the Loch Ness Monster, you want to make sure that it can reproduce a wide variety of colors” and patterns accurately, he says, so you can rely on it to capture whatever comes.

Galison, who is also a physicist, works on the Event Horizon Telescope, which images black holes. For that project, he remembers putting up utterly unexpected test images like Frosty the Snowman so that scientists could probe the system’s general ability to catch something new. “The danger is that you’ve missed out on some crucial test,” he says, “and that the object you’re going to be photographing is so different from your test patterns that you’re unprepared.”

The algorithms that physicists are using to seek new physics are certainly vulnerable to this danger. It helps that unsupervised learning is already being used in many applications. In industry, it’s surfacing anomalous credit-card transactions and hacked networks. In science, it’s identifying earthquake precursors, genome locations where proteins bind, and merging galaxies.

But one difference with particle-physics data is that the anomalies may not be stand-alone objects or events. You’re looking not just for a needle in a haystack; you’re also looking for subtle irregularities in the haystack itself. Maybe a stack contains a few more short stems than you’d expect. Or a pattern reveals itself only when you simultaneously look at the size, shape, color, and texture of stems. Such a pattern might suggest an unacknowledged substance in the soil. In accelerator data, subtle patterns might suggest a hidden force. As Kasieczka and his colleagues write in one paper, “We are not looking for flying elephants, but instead a few extra elephants than usual at the local watering hole.”
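
In statistical terms, “a few extra elephants” is a small excess over an expected background count. A toy calculation, with entirely invented numbers, shows how such an excess is turned into a significance:

```python
# Toy significance of a counting excess (all numbers invented).
from scipy.stats import norm, poisson

b = 100.0  # expected background events at the "watering hole"
n = 135    # observed events

p_value = poisson.sf(n - 1, b)  # chance of >= n events from background alone
z = norm.isf(p_value)           # equivalent number of Gaussian sigmas
print(f"p = {p_value:.2e}  ->  {z:.2f} sigma")
```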

Even algorithms that weigh many factors can miss signals—and they can also see spurious ones. The stakes of mistakenly claiming discovery are high. Going back to the hacking scenario, Plehn says, a company might ultimately determine that its network wasn’t hacked; it was just a new employee. The algorithm’s false positive causes little damage. “Whereas if you stand there and get the Nobel Prize, and a year later people say, ‘Well, it was a fluke,’ people would make fun of you for the rest of your life,” he says. In particle physics, he adds, you run the risk of spotting patterns purely by chance in big data, or as a result of malfunctioning equipment.

False alarms have happened before. In 1976, a group at Fermilab led by Leon Lederman, who later won a Nobel for other work, announced the discovery of a particle they tentatively called the Upsilon. The researchers calculated the probability of the signal’s happening by chance as 1 in 50. After further data collection, though, they walked back the discovery, calling the pseudo-particle the Oops-Leon. (Today, particle physicists wait until the chance that a finding is a fluke drops below 1 in 3.5 million, the so-called five-sigma criterion.) And in 2011, researchers at the Oscillation Project with Emulsion-tRacking Apparatus (OPERA) experiment, in Italy, announced evidence for faster-than-light travel of neutrinos. Then, a few months later, they reported that the result was due to a faulty connection in their timing system.
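
Both probabilities in that paragraph can be checked directly. Under the usual one-sided Gaussian-tail convention, five sigma corresponds to roughly 1 in 3.5 million, while the Oops-Leon signal’s 1-in-50 odds amount to only about two sigma:

```python
# Check the odds quoted in the text with the one-sided Gaussian convention.
from scipy.stats import norm

p5 = norm.sf(5)  # tail probability beyond 5 sigma
print(f"five sigma: p = {p5:.2e} (about 1 in {1 / p5:,.0f})")  # ~1 in 3,490,000
print(f"1 in 50 corresponds to {norm.isf(1 / 50):.2f} sigma")  # ~2.05 sigma
```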

Those cautionary tales linger in the minds of physicists. And yet, even while researchers are wary of false positives from AI, they also see it as a safeguard against them. So far, unsupervised learning has discovered no new physics, despite its use on data from multiple experiments at Fermilab and CERN. But anomaly detection may have prevented embarrassments like the one at OPERA. “So instead of telling you there’s a new physics particle,” Kasieczka says, “it’s telling you, this sensor is behaving weird today. You should restart it.”

Hardware for AI-Assisted Particle Physics

Particle physicists are pushing the limits of not only their computing software but also their computing hardware. The challenge is unparalleled. The LHC produces 40 million particle collisions per second, each of which can produce a megabyte of data. That’s much too much information to store, even if you could save it to disk that quickly. So the two largest detectors each use two-level data filtering. The first layer, called the Level-1 Trigger, or L1T, harvests 100,000 events per second, and the second layer, called the High-Level Trigger, or HLT, plucks 1,000 of those events to save for later analysis. So only one in 40,000 events is ever potentially seen by human eyes.
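
A back-of-envelope check of those numbers (the article’s round figures, not official specifications):

```python
# Rough arithmetic on the trigger cascade described above.
collision_rate = 40_000_000  # collisions per second
event_size_mb = 1            # ~1 megabyte per event
l1t_rate = 100_000           # events/s kept by the Level-1 Trigger
hlt_rate = 1_000             # events/s kept by the High-Level Trigger

print(f"raw rate: ~{collision_rate * event_size_mb / 1e6:.0f} TB/s")  # ~40 TB/s
print(f"L1T keeps 1 in {collision_rate // l1t_rate}")                 # 1 in 400
print(f"HLT keeps 1 in {l1t_rate // hlt_rate}")                       # 1 in 100
print(f"overall: 1 in {collision_rate // hlt_rate}")                  # 1 in 40,000
```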

Katya Govorkova

“That’s when I thought, we need something like [AlphaGo] in physics. We need a genius that can look at the world differently.”

HLTs use central processing units (CPUs) like the ones in your desktop computer, running complex machine learning algorithms that analyze collisions based on the number, type, energy, momentum, and angles of the new particles produced. L1Ts, as a first line of defense, must be fast. So the L1Ts rely on integrated circuits called field-programmable gate arrays (FPGAs), which users can reprogram for specialized calculations.

The trade-off is that the programming must be relatively simple. The FPGAs can’t easily store and run fancy neural networks; instead they follow scripted rules about, say, what features of a particle collision make it important. In terms of complexity level, it’s the instructions given to the women who scanned bubble chambers, not the women’s brains.

Ekaterina (Katya) Govorkova, a particle physicist at MIT, saw a path toward improving the LHC’s filters, inspired by a board game. Around 2020, she was looking for new physics by comparing precise measurements at the LHC with predictions, using little or no machine learning. Then she watched a documentary about AlphaGo, the program that used machine learning to beat a human Go champion. “For me the moment of realization was when AlphaGo would use some absolutely new type of strategy that humans, who played this game for centuries, hadn’t thought about before,” she says. “So that’s when I thought, we need something like that in physics. We need a genius that can look at the world differently.” New physics may be something we’d never imagine.

Govorkova and her collaborators found a way to compress autoencoders to put them on FPGAs, where they process an event every 80 nanoseconds (less than one ten-millionth of a second). (Compression involved pruning some network connections and reducing the precision of some calculations.) They published their methods in Nature Machine Intelligence in 2022, and researchers are now using them during the LHC’s third run. The new trigger tech is installed in one of the detectors around the LHC’s giant ring, and it has found many anomalous events that would otherwise have gone unflagged.
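
The published pipeline isn’t reproduced here, but the two compression steps the text mentions, pruning connections and reducing numerical precision, can be sketched in a few lines of PyTorch. The model, the pruning fraction, and the fixed-point step size are all illustrative; real FPGA deployment goes through dedicated tools such as hls4ml, with quantization-aware training rather than the crude rounding shown here.

```python
# Illustrative pruning + quantization of a small network (not the actual
# trigger model; all sizes and settings are invented).
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(64, 8), nn.ReLU(), nn.Linear(8, 64))

# 1) Pruning: zero out the 50% of weights with the smallest magnitudes.
for module in model:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # bake the zeros into the weight tensor

# 2) Reduced precision: crudely round weights onto an 8-bit fixed-point grid.
scale = 2 ** -6  # hypothetical fixed-point step size
with torch.no_grad():
    for p in model.parameters():
        p.copy_((p / scale).round().clamp(-128, 127) * scale)

zeros = sum(int((p == 0).sum()) for p in model.parameters())
total = sum(p.numel() for p in model.parameters())
print(f"{zeros}/{total} parameters are now exactly zero")
```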

Researchers are currently setting up analysis workflows to decipher why the events were deemed anomalous. Jennifer Ngadiuba, a particle physicist at Fermilab who is also one of the coordinators of the trigger system (and one of Govorkova’s coauthors), says that one feature stands out already: Flagged events have lots of jets of new particles shooting out of the collisions. But the scientists still need to explore other factors, like the new particles’ energies and their distributions in space. “It’s a high-dimensional problem,” she says.

Eventually they will share the data openly, allowing others to eyeball the results or to apply new unsupervised learning algorithms in the hunt for patterns. Javier Duarte, a physicist at the University of California, San Diego, and also a coauthor on the 2022 paper, says, “It’s kind of exciting to think about providing this to the community of particle physicists and saying, like, ‘Shrug, we don’t know what this is. You can take a look.’” Duarte and Ngadiuba note that high-energy physics has traditionally followed a top-down approach to discovery, testing data against well-defined theories. Adding in this new bottom-up search for the unexpected marks a new paradigm. “And also a return of sorts to before the Standard Model was so well established,” Duarte adds.

Yet it could be years before we know why AI marked those collisions as anomalous. What conclusions could they support? “In the worst case, it could be some detector noise that we didn’t know about,” which would still be useful information, Ngadiuba says. “The best scenario could be a new particle. And then a new particle implies a new force.”

Jennifer Ngadiuba

“The best scenario could be a new particle. And then a new particle implies a new force.”

Duarte says he expects their work with FPGAs to have wider applications. “The data rates and the constraints in high-energy physics are so extreme that people in industry aren’t necessarily working on this,” he says. “In self-driving cars, usually millisecond latencies are sufficient reaction times. But we’re developing algorithms that need to respond in microseconds or less. We’re at this technological frontier, and to see how much that can proliferate back to industry will be cool.”

Plehn is also working to put neural networks on FPGAs for triggers, in collaboration with experimentalists, electrical engineers, and other theorists. Encoding the nuances of abstract theories into material hardware is a puzzle. “In this grant proposal, the person I talked to most is the electrical engineer,” he says, “because I have to ask the engineer, which of my algorithms fits on your bloody FPGA?”

Hardware is hard, says Ryan Kastner, an electrical engineer and computer scientist at UC San Diego who works with Duarte on programming FPGAs. What allows the chips to run algorithms so quickly is their flexibility. Instead of programming them in an abstract coding language like Python, engineers configure the underlying circuitry. They map logic gates, route data paths, and synchronize operations by hand. That low-level control also makes the effort “painfully difficult,” Kastner says. “It’s kind of like you have a lot of rope, and it’s very easy to hang yourself.”

Seeking New Physics Among the Neutrinos

The next piece of new physics may not pop up at a particle accelerator. It may appear at a detector for neutrinos, particles that are part of the Standard Model but remain deeply mysterious. Neutrinos are tiny, electrically neutral, and so light that no one has yet measured their mass. (The latest attempt, in April, set an upper limit of about a millionth the mass of an electron.) Of all known particles with mass, neutrinos are the universe’s most abundant, but also among the most ghostly, rarely deigning to acknowledge the matter around them. Tens of trillions pass through your body every second.

If we listen very closely, though, we may just hear the secrets they have to tell. Karagiorgi, of Columbia, has chosen this path to discovery. Being a physicist is “kind of like playing detective, but where you create your own mysteries,” she tells me during my visit to Columbia’s Nevis Laboratories, located on a large estate about 20 km north of Manhattan. Physics research began at the site after World War II; one hallway features papers going back to 1951.

A researcher stands inside a prototype for the Deep Underground Neutrino Experiment, which is designed to detect rare neutrino interactions.

CERN

Karagiorgi is eagerly awaiting a massive neutrino detector that’s currently under construction. Starting in 2028, Fermilab will send neutrinos west through 1,300 km of rock to South Dakota, where they’ll occasionally make their existence known in the Deep Underground Neutrino Experiment (DUNE). Why so far away? When neutrinos travel long distances, they have an odd habit of oscillating, transforming from one kind or “flavor” to another. Observing the oscillations of both the neutrinos and their mirror-image antiparticles, antineutrinos, could tell researchers something about the universe’s matter-antimatter asymmetry—which the Standard Model doesn’t explain—and thus, according to the Nevis website, “why we exist.”

“DUNE is the thing that’s been pushing me to develop these real-time AI methods,” Karagiorgi says, “for sifting through the data very, very, very quickly and trying to look for rare signatures of interest within them.” When neutrinos interact with the detector’s 70,000 tonnes of liquid argon, they’ll generate a shower of other particles, creating visual tracks that look like a photo of fireworks.

The Standard Model catalogs the known fundamental particles: the matter particles (quarks and leptons), the force-carrying particles, and the Higgs boson, which conveys mass. It leaves major mysteries unresolved.

Even when not bombarding DUNE with neutrinos, researchers will keep collecting data on the off chance that it captures neutrinos from a distant supernova. “This is a massive detector spewing out 5 terabytes of data per second,” Karagiorgi says, “and it’s going to run constantly for a decade.” They will need unsupervised learning to notice signatures that no one was looking for, because there are “lots of different models of how supernova explosions happen, and for all we know, none of them could be the right model for neutrinos,” she says. “To train your algorithm on such uncertain grounds is less than ideal. So an algorithm that can recognize any kind of disturbance would be a win.”

Deciding in real time which 1 percent of 1 percent of data to keep will require FPGAs. Karagiorgi’s team is preparing to use them for DUNE, and she walks me to a computer lab where they program the circuits. In the FPGA lab, we look at nondescript circuit boards sitting on a table. “So what we’re proposing is a scheme where you can have something like a hundred of these boards for DUNE deep underground that receive the image data frame by frame,” she says. This system could tell researchers whether a given frame resembled TV static, fireworks, or something in between.

Neutrino experiments, like many particle-physics studies, are very visual. When Karagiorgi was a postdoc, automated image processing at neutrino detectors was still in its infancy, so she and collaborators would often resort to visual scanning (bubble-chamber style) to measure particle tracks. She still asks undergrads to hand-scan as an educational exercise. “I think it’s wrong to just send them to write a machine learning algorithm. Unless you can actually visualize the data, you don’t really gain a sense of what you’re looking for,” she says. “I think it also helps with creativity to be able to visualize the different types of interactions that are happening, and see what’s normal and what’s not normal.”

Back in Karagiorgi’s office, a bulletin board displays images from The Cognitive Art of Feynman Diagrams, an exhibit for which the designer Edward Tufte created wire sculptures of the physicist Richard Feynman’s schematics of particle interactions. “It’s funny, you know,” she says. “They look like they’re just scribbles, right? But actually, they encode quantitatively predictive behavior in nature.” Later, Karagiorgi and I spend a good 10 minutes discussing whether a computer or a human could find Waldo without knowing what Waldo looked like. We also touch on the 1964 Supreme Court case in which Justice Potter Stewart famously declined to define obscenity, saying “I know it when I see it.” I ask whether it seems weird to hand over to a machine the task of deciding what’s visually interesting. “There are a lot of trust issues,” she says with a laugh.

On the drive back to Manhattan, we discuss the history of scientific discovery. “I think it’s part of human nature to try to make sense of an orderly world around you,” Karagiorgi says. “And then you just automatically pick out the oddities. Some people obsess about the oddities more than others, and then try to understand them.”

Reflecting on the Standard Model, she calls it “beautiful and elegant,” with “amazing predictive power.” Yet she finds it both limited and limiting, blinding us to colors we don’t yet see. “Sometimes it’s both a blessing and a curse that we’ve managed to develop such a successful theory.”

Top 5 Siemens Appliances That Are Must-Haves for Every Modern Kitchen


In contemporary kitchens, where style meets functionality and smart technology reigns supreme, Siemens appliances stand out as essential companions. Drawing from the latest lines available on Kitchen Brand Store and Siemens’ home branding, here are five must-have Siemens essentials that elevate any modern cooking space.

1. StudioLine blackSteel Oven (iQ700 Series)

The StudioLine blackSteel design delivers an elegant, minimalist look—its glass handle blends seamlessly into the door, creating a sleek visual statement. More than just appearance, the iQ700 range from Siemens packs advanced culinary features. With coolStart to eliminate pre-heating, ActiveClean pyrolytic cleaning, and even steam injection for perfectly moist baking, this oven simplifies cooking while saving time. Its intuitive smart programming and premium design make it an indispensable piece for modern kitchens.

2. Built-in Refrigerator with hyperFresh & LED lighting

Siemens refrigeration offers sublime interior visibility thanks to energy-efficient LEDs and thoughtful lighting design—including spotlighting hyperFresh drawers for produce storage. Their modularFit built-in models integrate seamlessly into cabinetry, supporting flexible layouts and clean lines. Freshness, style, and integration: a trifecta every modern kitchen demands.

3. iQDrive Dishwasher with VarioSpeed & AquaStop

A modern kitchen isn’t complete without smart, quiet dishwashing. The Siemens iQDrive motor offers powerful yet whisper-quiet operation, while AquaStop delivers flood protection around the clock. With VarioSpeed Plus, you can cut cleaning time by up to 66% when you’re short on time. Flexible loading via varioFlex Pro baskets and varioDrawer Pro ensures even large utensils fit comfortably.

4. InductionAir Plus Hob + Integrated Extraction

Siemens’ inductionAir Plus cleverly integrates hob and extractor into one sleek module, blending into your countertop for a minimalist, uncluttered look. This all-in-one solution delivers power and ventilation in a compact package—ideal for those who favor clean surfaces and maximum efficiency without compromising performance or design.

5. EQ Series Fully-Automatic Coffee Machine (e.g., iQ700 Coffee Center)

For coffee lovers, the Siemens built-in EQ series brings café-quality beverages to your home at the touch of a button. The iQ700 Coffee Center offers a full range of drink options—espresso, cappuccino, latte—all from one intuitive interface. Convenient, stylish, and high-performing, it’s the perfect finish to a modern kitchen setup.

Why These Five?

Synergy of style and performance: Each of these models combines refined aesthetics with cutting-edge innovation—from blackSteel finishes to integrated appliances.

Smart convenience and energy savings: Whether it’s oven steam functionality, water-saving dishwasher cycles, or well-lit refrigeration, these appliances are designed for efficiency and ease.

Seamless integration: Built-in refrigerators, induction hobs, and ovens with minimal protrusion reinforce a clean, contemporary layout.

Culinary versatility and lifestyle appeal: From gourmet cooking within the blackSteel oven to designer integrated ventilation, these select devices cater to both daily practicality and elevated living.

If you’re designing or upgrading a modern kitchen, these five Siemens appliances—blackSteel oven (iQ700), built-in refrigerator, smart dishwasher (iQDrive), inductionAir Plus hob-extractor, and EQ coffee machine—are top-tier choices. Together, they offer the perfect blend of sleek design, smart technology, and luxurious convenience that today’s modern households crave.

If you want to buy, visit here: https://www.kitchenbrandstore.com/collections/siemens-109


8849 TANK X Smartphone Boasts a Built-in DLP Projector, Night Vision Camera


Most smartphones are preoccupied with being as slim and shiny as possible, but the 8849 TANK X doesn’t care. At 1.26 inches thick and 750 grams, it’s a hefty, heavy beast designed for places where your precious little smartphone would curl up and die: dust storms, getting rained on, being dropped from chest level, -28°C cold or 56°C heat, you name it. It has IP68 and IP69K ratings, as well as military-grade ruggedness that would make even the most ardent outdoor enthusiast happy.



One of the Tank X’s most notable features is its built-in DLP projector, which will either convert you to the Church of Portable Movie Nights or make you laugh at the expense of some unfortunate soul who thought it sounded like a half-baked idea. The resolution is full 1080p (up to 1920×1080), and the brightness is 220 lumens. Plus, with laser focusing, you can expect a razor-sharp image from about half a meter to 3-4 meters away, and keystone correction ensures that the image remains level even if the phone is held at an angle. The projection area is around 10 feet square, making it ideal for movie evenings under the stars or displaying a map on a wall to confuse all of your lost buddies. You can get 5 hours of use out of it at maximum brightness in high mode or 6 in night mode. The previous Tank models were stuck at 720p, so this is a significant advance.


The battery capacity is a whopping 17,600mAh, split between two cells to keep it going for ages, and by “ages,” I mean several days of average use or 25 hours of movie playback. Or, if you’re having a lengthy phone session, you could easily talk for dozens, if not hundreds, of hours. Now, I get what you’re thinking: “But what about when the projector turns on?” Well, the power management is fairly conscientious, so it does not drain the battery.


So, what makes this thing tick? It’s powered by a MediaTek Dimensity 8200, a very strong 4nm octa-core processor paired with 16GB of LPDDR5 RAM (expandable by another 16GB, since who doesn’t like that?) and 512GB of UFS 3.1 storage. It all runs on Android 15, which, even on a beast like this, manages to keep things moving smoothly whether you’re running multiple apps, playing a few games, or simply goofing around.

Connectivity is excellent, including 5G bands, Wi-Fi 6, Bluetooth 5.4, and GPS accuracy to within a few feet. There’s also a 3.5mm jack, an IR blaster for controlling your fancy TVs and appliances, and an FM radio for when you’re off the grid.

The cameras are more than good enough for casual shooting, beginning with the 50MP primary sensor, which employs Sony’s IMX766 to capture solid daylight shots with full-pixel focusing. Then there’s the 8MP telephoto, which can zoom in three times and should come in handy, but the true star of the show is the 64MP night vision camera, which is equipped with four infrared LEDs and autofocus, allowing you to see as clearly as day in almost complete darkness. A 50MP front camera completes the package for selfies and video calls. With a dual-tone flash and a pair of extra IR lights to help you in low-light settings, you should look sharp.

On the back, there’s a 1,200-lumen RGB camping light that functions as a little spotlight; you can switch between modes such as white light, some great color options, SOS patterns, or even just a strobe or sound alert. It’s useful for emergencies or simply navigating a trail in the dark.

The 6.78-inch LCD display has 2460×1080 resolution and a 120Hz refresh rate, with a maximum brightness of 750 nits. It’s nice to see they eliminated the PWM flicker that causes eye strain after long viewing periods. Furthermore, outdoor visibility is acceptable, and the panel works well alongside the projector.

The Tank X was priced at $1,049.99 (ugh), but early-bird pricing of $549.99 makes it slightly more affordable. You can also place a pre-order beginning February 1, 2026, and units will ship from warehouses in the United States, Canada, the United Kingdom, Australia, and other locations beginning March 1.

Today’s NYT Mini Crossword Answers for Feb. 4


Looking for the most recent Mini Crossword answer? Click here for today’s Mini Crossword hints, as well as our daily answers and hints for The New York Times Wordle, Strands, Connections and Connections: Sports Edition puzzles.


Need some help with today’s Mini Crossword? I don’t know my Greek letters, so whenever there’s a clue like today’s 7-Across, I just have to hope the other answers fill it in for me. Read on for all the answers. And if you could use some hints and guidance for daily solving, check out our Mini Crossword tips.

If you’re looking for today’s Wordle, Connections, Connections: Sports Edition and Strands answers, you can visit CNET’s NYT puzzle hints page.

Read more: Tips and Tricks for Solving The New York Times Mini Crossword

Let’s get to those Mini Crossword clues and answers.

The completed NYT Mini Crossword puzzle for Feb. 4, 2026.

NYT/Screenshot by CNET

Mini across clues and answers

1A clue: “The Rachel Maddow Show” channel, after a 2025 rebranding
Answer: MSNOW

6A clue: Childhood
Answer: YOUTH

7A clue: Greek letter after zeta and eta
Answer: THETA

8A clue: What helicopter parents do
Answer: HOVER

9A clue: Sound at the dog park
Answer: ARF

Mini down clues and answers

1D clue: “Cats always land on their feet,” e.g.
Answer: MYTH

2D clue: Neighborhood in both London and Manhattan
Answer: SOHO

3D clue: ___ York, Spanish name for New York
Answer: NUEVA

4D clue: Furry mammal that eats crustaceans
Answer: OTTER

5D clue: Docking area
Answer: WHARF



Tech Moves: Tableau CEO steps down; Microsoft taps new executive VPs; Avanade’s new CEO


Former Tableau CEO Ryan Aytay, pictured during a visit to Seattle in 2024. (GeekWire File Photo / Todd Bishop)

Ryan Aytay, a longtime Salesforce exec who has led Tableau as CEO since 2023, is departing.

Aytay revealed the news on LinkedIn on Tuesday. He called his 19-year tenure “a front-row seat to innovation, a masterclass in leadership, and a community that has shaped who I am professionally and personally.” Aytay said he’ll share more about a “new challenge” later.

Aytay joined Salesforce in 2007 and became chief business officer in 2020 before taking the president role at Tableau in February 2022. A year later he replaced Mark Nelson as CEO.

The appointment came four years after Salesforce paid $15.7 billion to acquire Seattle-based Tableau, a leader in the data visualization sector.

Tableau reported 4% revenue growth in Salesforce’s most recent quarter — down from 15% growth in the previous quarter.

In his post, Aytay praised Tableau’s “DataFam” community and said “the future of Tableau and Salesforce is incredibly bright.”

His departure follows the recent exit of Denise Dresser, who led Slack, another Salesforce division. Dresser is now chief revenue officer at OpenAI. Salesforce’s cybersecurity leader announced Monday that he left the company last week.

Salesforce stock is down more than 14% over the past week amid investor fears over AI disrupting traditional software providers. The company maintains an office in Seattle’s Fremont neighborhood and another in Bellevue.

— Microsoft is naming four new executive vice presidents, according to a memo viewed by CNBC.

Deb Cupp, Nick Parker, Ralph Haupter, and Mala Anand will get the new titles. They will continue reporting to Judson Althoff, who took on the newly created position of CEO of Microsoft’s commercial business in October. Althoff is overseeing a reformulated commercial team that includes engineering, sales, marketing, operations, and finance leaders representing more than 75% of Microsoft’s revenue.

Microsoft reported better-than-expected earnings last week but its shares fell as much as 12% in trading the day after the earnings report — erasing $357 billion from its market value. Several factors may be contributing to market skepticism, including the company’s massive AI spending bets and concern about dependence on OpenAI.

— Avanade named Chris Howarth as its new CEO. Howarth previously spent nearly three decades at Accenture, where he was a senior managing director leading the firm’s Accenture Business Group that focuses on Microsoft, Accenture, and Avanade.

Howarth replaces Rodrigo Caserta, who is joining Microsoft as a corporate vice president. Caserta spent more than a decade at Avanade and was named CEO in 2024.

“Rodrigo’s leadership has positioned Avanade for sustained momentum, and his move to Microsoft further strengthens our partnership,” Howarth said in a statement. “I’m excited to work with our people, clients, and partners at this pivotal moment, delivering on the huge potential of AI to drive transformation and accelerate value.”

Avanade was formed in 2000 by Accenture and Microsoft and provides digital, cloud, and AI-related services across the Microsoft ecosystem.

— Kelsey Peterson, a former vice president at Weber Shandwick and senior director at Rubrik, joined Microsoft as a senior communications manager for the company’s security business.

Read about other Tech Moves from earlier today here.


Robot Videos: DARPA Triage Challenge, Extreme Cold Test


Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA 2026: 1–5 June 2026, VIENNA

Enjoy today’s videos!

One of my favorite parts of robotics is watching research collide with non-roboticists in the real (or real-ish) world.

[ DARPA ]

Spot will put out fires for you. Eventually. If it feels like it.

[ Mechatronic and Robotic Systems Laboratory ]

All those robots rising out of their crates is not sinister at all.

[ LimX ]

The Lynx M20 quadruped robot recently completed an extreme cold-weather field test in Yakeshi, Hulunbuir, operating reliably in temperatures as low as –30°C.

[ DEEP Robotics ]

This is a teaser video for KIMLAB’s new teleoperation robot. For now, we invite you to enjoy the calm atmosphere, with students walking, gathering, and chatting across the UIUC Main Quad—along with its scenery and ambient sounds, without any technical details. More details will be shared soon. Enjoy the moment.

The most incredible part of this video is that they have publicly available power in the middle of their quad.

[ KIMLAB ]

For the eleventy-billionth time: Just because you can do a task with a humanoid robot doesn’t mean you should do a task with a humanoid robot.

[ UBTECH ]

[ KAIST ]

Okay, so figuring out where Spot’s face is just got a lot more complicated.

[ Boston Dynamics ]

An undergraduate team at HKU’s Tam Wing Fan Innovation Wing developed CLIO, an embodied tour-guide robot, in just months. Built on LimX Dynamics TRON 1, it uses LLMs for tour planning, computer vision for visitor recognition, and a laser pointer/expressive display for engaging tours.

[ CLIO ]

The future of work is doing work so that robots can then do the same work, except less well.

[ AgileX ]


Apple integrates Anthropic’s Claude and OpenAI’s Codex into Xcode 26.3 in push for ‘agentic coding’


Apple on Tuesday announced a major update to its flagship developer tool that gives artificial intelligence agents unprecedented control over the app-building process, a move that signals the iPhone maker’s aggressive push into an emerging and controversial practice known as “agentic coding.”

Xcode 26.3, available immediately as a release candidate, integrates Anthropic’s Claude Agent and OpenAI’s Codex directly into Apple’s development environment, allowing the AI systems to autonomously write code, build projects, run tests, and visually verify their own work — all with minimal human oversight.

The update is Apple’s most significant embrace of AI-assisted software development since introducing intelligence features in Xcode 26 last year, and arrives as “vibe coding” — the practice of delegating software creation to large language models — has become one of the most debated topics in technology.

Apple says that while integrating intelligence into the Xcode developer workflow is powerful, the model itself still has a somewhat limited aperture. It answers questions based on what the developer provides, but it doesn’t have access to the full context of the project, and it’s not able to take action on its own. That changes with this update, the company said during a press conference Tuesday morning.

How Apple’s new AI coding features let developers build apps faster than ever

The key innovation in Xcode 26.3 is the depth of integration between AI agents and Apple’s development tools. Unlike previous iterations that offered code suggestions and autocomplete features, the new system grants AI agents access to nearly every aspect of the development process.

During a live demonstration, an Apple engineer showed how the Claude agent could receive a simple prompt — “add a new feature to show the weather at a landmark” — and then independently analyze the project’s file structure, consult Apple’s documentation, write the necessary code, build the project, and take screenshots of the running application to verify its work matched the requested design.

According to Apple, the agent is able to use tools like build and screenshot previews to verify its work, visually analyze the image, and confirm that everything has been built accordingly. Previously, when a developer interacted with a model, it would provide an answer and simply stop there.

The system creates automatic checkpoints as developers interact with the AI, allowing them to roll back changes if results prove unsatisfactory — a safeguard that acknowledges the unpredictable nature of AI-generated code.

Apple says it worked directly with Anthropic and OpenAI to optimize the experience with particular attention paid to reducing token usage — the computational units that determine costs when using cloud-based AI models — and improving the efficiency of tool calling.

According to the company, developers can download new agents with a single click, and they update automatically.

Why Apple’s adoption of the Model Context Protocol could reshape the AI development landscape

Underlying the integration is the Model Context Protocol, or MCP, an open standard that Anthropic developed for connecting AI agents with external tools. Apple’s adoption of MCP means that any compatible agent — not just Claude or Codex — can now interact with Xcode’s capabilities.

Apple says this also works for agents that are running outside of Xcode. Any agent that is compatible with MCP can now work with Xcode to do all the same things — project discovery and change management, building and testing apps, working with previews and code snippets, and accessing the latest documentation.
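
MCP itself is a thin layer over JSON-RPC 2.0, which is what makes the “any compatible agent” claim plausible. As a rough illustration, a tool invocation on the wire looks like the following; the tool name and arguments are hypothetical stand-ins, since Apple hasn’t published Xcode’s actual MCP tool surface here.

```python
# Shape of an MCP tool call (JSON-RPC 2.0). The tool name and arguments are
# hypothetical, not Xcode's documented MCP interface.
import json

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",       # standard MCP method for invoking a tool
    "params": {
        "name": "build_project",  # hypothetical Xcode-exposed tool
        "arguments": {"scheme": "MyApp", "configuration": "Debug"},
    },
}
print(json.dumps(request, indent=2))
```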

The decision to embrace an open protocol, rather than building a proprietary system, represents a notable departure for Apple, which has historically favored closed ecosystems. It also positions Xcode as a potential hub for a growing universe of AI development tools.

Xcode’s troubled history with AI tools — and why Apple says this time is different

The announcement comes against a backdrop of mixed experiences with AI-assisted coding in Apple’s tools. During the press conference, one developer described previous attempts to use AI agents with Xcode as “horrible,” citing constant crashes and an inability to complete basic tasks.

Apple acknowledged the concerns while arguing that the new integration addresses fundamental limitations of earlier approaches.

The company says the big shift is that Claude and Codex have so much more visibility into the breadth of the project. If they hallucinate and write code that doesn’t work, they can now build, see the compile errors, and iterate in real time to fix those issues — in some cases before presenting it as a finished work.

Apple argues that the power of IDE integration extends beyond error correction. Agents can now automatically add entitlements to projects when needed to access protected APIs — a task the company says would otherwise be very difficult for an AI operating outside the development environment, dealing with binary files whose format it may not know.

From Andrej Karpathy’s tweet to LinkedIn certifications: The unstoppable rise of vibe coding

Apple’s announcement arrives at a crucial moment in the evolution of AI-assisted development. The term “vibe coding,” coined by AI researcher Andrej Karpathy in early 2025, has transformed from a curiosity into a genuine cultural phenomenon that is reshaping how software gets built.

LinkedIn announced last week that it will begin offering official certifications in AI coding skills, drawing on usage data from platforms like Lovable and Replit. Job postings requiring AI proficiency doubled in the past year, according to edX research, with Indeed’s Hiring Lab reporting that 4.2% of U.S. job listings now mention AI-related keywords.

The enthusiasm is driven by genuine productivity gains. Casey Newton, the technology journalist, recently described building a complete personal website using Claude Code in about an hour — a task that previously required expensive Squarespace subscriptions and years of frustrated attempts with various website builders.

More dramatically, Jaana Dogan, a principal engineer at Google, posted that she gave Claude Code “a description of the problem” and “it generated what we built last year in an hour.” Her post, which accumulated more than 8 million views, began with the disclaimer: “I’m not joking and this isn’t funny.”

Security experts warn that AI-generated code could lead to ‘catastrophic explosions’

But the rapid adoption of agentic coding has also sparked significant concerns among security researchers and software engineers.

David Mytton, founder and CEO of developer security provider Arcjet, warned last month that the proliferation of vibe-coded applications “into production will lead to catastrophic problems for organizations that don’t properly review AI-developed software.”

“In 2026, I expect more and more vibe-coded applications hitting production in a big way,” Mytton wrote. “That’s going to be great for velocity… but you’ve still got to pay attention. There’s going to be some big explosions coming!”


Simon Willison, co-creator of the Django web framework, drew an even starker comparison. “I think we’re due a Challenger disaster with respect to coding agent security,” he said, referring to the 1986 space shuttle explosion that killed all seven crew members. “So many people, myself included, are running these coding agents practically as root. We’re letting them do all of this stuff.”

A preprint paper released by researchers this week warned that vibe coding could pose existential risks to the open-source software ecosystem. The study found that AI-assisted development pulls user interaction away from community projects, reduces visits to documentation websites and forums, and makes launching new open-source initiatives significantly harder.

Stack Overflow usage has plummeted as developers increasingly turn to AI chatbots for answers—a shift that could ultimately starve the very knowledge bases that trained the AI models in the first place.

Previous research painted an even more troubling picture: a 2024 report found that vibe coding using tools like GitHub Copilot “offered no real benefits unless adding 41% more bugs is a measure of success.”


The hidden mental health cost of letting AI write your code

Even enthusiastic adopters have begun acknowledging the darker aspects of AI-assisted development.

Peter Steinberger, creator of the viral AI agent originally known as Clawdbot (now OpenClaw), recently revealed that he had to step back from vibe coding after it consumed his life.

“I was out with my friends and instead of joining the conversation in the restaurant, I was just like, vibe coding on my phone,” Steinberger said in a recent podcast interview. “I decided, OK, I have to stop this more for my mental health than for anything else.”

Steinberger warned that the constant building of increasingly powerful AI tools creates the “illusion of making you more productive” without necessarily advancing real goals. “If you don’t have a vision of what you’re going to build, it’s still going to be slop,” he added.


Google CEO Sundar Pichai has expressed similar reservations, saying he won’t vibe code on “large codebases where you really have to get it right.”

“The security has to be there,” Pichai said in a November podcast interview.

Boris Cherny, the Anthropic engineer who created Claude Code, acknowledged that vibe coding works best for “prototypes or throwaway code, not software that sits at the core of a business.”

“You want maintainable code sometimes. You want to be very thoughtful about every line sometimes,” Cherny said.


Apple is gambling that deep IDE integration can make AI coding safe for production

Apple appears to be betting that the benefits of deep IDE integration can mitigate many of these concerns. By giving AI agents access to build systems, test suites, and visual verification tools, the company is essentially arguing that Xcode can serve as a quality control mechanism for AI-generated code.

Susan Prescott, Apple’s vice president of Worldwide Developer Relations, framed the update as part of Apple’s broader mission.

In a statement, Apple said its goal is to make tools that put industry-leading technologies directly in developers’ hands so they can build the very best apps. The company says agentic coding supercharges productivity and creativity, streamlining the development workflow so developers can focus on innovation.

But the question remains whether the safeguards will prove sufficient as AI agents grow more autonomous. Asked about debugging capabilities, Apple noted that while Xcode has a powerful debugger built in, there is no direct MCP tool for debugging.


Developers can run the debugger and manually relay information to the agent, but the AI cannot yet independently investigate runtime issues — a limitation that could prove significant as the complexity of AI-generated code increases.

The update also does not currently support running multiple agents simultaneously on the same project, though Apple noted that developers can open projects in multiple Xcode windows using Git worktrees as a workaround.
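For those unfamiliar with that workaround, the sketch below scripts it with Python’s subprocess module; the repository path and branch names are hypothetical, while the underlying git worktree command is the standard one.

```python
import subprocess

REPO = "/path/to/MyApp"  # hypothetical repository location

def add_worktree(branch: str, path: str) -> None:
    """Check out a new branch in its own directory via git worktree."""
    subprocess.run(
        ["git", "-C", REPO, "worktree", "add", "-b", branch, path],
        check=True,
    )

# One working copy per agent: worktrees share history but not working
# files, so each Xcode window (and agent) edits in isolation.
add_worktree("agent-claude", "../MyApp-claude")
add_worktree("agent-codex", "../MyApp-codex")
```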

The future of software development hangs in the balance — and Apple just raised the stakes

Xcode 26.3 is available immediately as a release candidate for members of the Apple Developer Program, with a general release expected soon on the App Store. The release candidate designation — Apple’s final beta before production — means developers who download today will automatically receive the finished version when it ships.

The integration supports both API keys and direct account credentials from OpenAI and Anthropic, offering developers flexibility in managing their AI subscriptions. But those conveniences belie the magnitude of what Apple is attempting: nothing less than a fundamental reimagining of how software comes into existence.


For the world’s most valuable company, the calculus is straightforward. Apple’s ability to attract and retain developers has always underpinned its platform dominance. If agentic coding delivers on its promise of radical productivity gains, early and deep integration could cement Apple’s position for another generation. If it doesn’t — if the security disasters and “catastrophic explosions” that critics predict come to pass — Cupertino could find itself at the epicenter of a very different kind of transformation.

The technology industry has spent decades building systems to catch human errors before they reach users. Now it must answer a more unsettling question: What happens when the errors aren’t human at all?

As Apple conceded during Tuesday’s press conference, with what may prove to be unintentional understatement: “Large language models, as agents sometimes do, sometimes hallucinate.”

Millions of lines of code are about to find out how often.

Optical Combs Help Radio Telescopes Work Together

Very-long-baseline interferometry (VLBI) is a technique in radio astronomy in which multiple radio telescopes combine their received data to act, in effect, as one much larger radio telescope. For this to work, however, each telescope must record exact timing and other relevant information so that the individual signals can be accurately matched. As VLBI is pushed to higher frequencies and wider bandwidths, synchronizing the signals becomes much harder, but an optical frequency comb technique may offer a solution.
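To see why combining telescopes pays off, note that an interferometer’s diffraction-limited angular resolution scales as wavelength divided by baseline. The back-of-envelope sketch below uses illustrative numbers (a single 21-meter dish observing at 22 GHz versus a 5,000-kilometer baseline) that are not taken from the paper.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def resolution_arcsec(freq_hz: float, baseline_m: float) -> float:
    """Diffraction-limited angular resolution, roughly lambda/B, in arcsec."""
    wavelength_m = C / freq_hz
    return math.degrees(wavelength_m / baseline_m) * 3600.0

single_dish = resolution_arcsec(22e9, 21.0)       # one 21 m dish
vlbi_pair = resolution_arcsec(22e9, 5_000_000.0)  # 5,000 km baseline

print(f"single dish: {single_dish:.1f} arcsec")            # ~134 arcsec
print(f"VLBI pair:   {vlbi_pair * 1000:.2f} milliarcsec")  # ~0.56 mas
```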

In their paper, [Minji Hyun] et al. detail how they built the system and used it with the Korean VLBI Network (KVN) Yonsei radio telescope in Seoul as a proof of concept. The system still uses the same hydrogen maser atomic clock as its timing source, but transmitting the timing pulses optically achieves higher accuracy, limited only by the photodiode on the receiving end.

In the demonstration, frequencies up to 50 GHz were possible, but commercial 100 GHz photodiodes are available. It’s also possible to send additional signals through the fiber on different wavelengths for further functionality, all with the ultimate goal of better timing and better correction for, e.g., atmospheric fluctuations that can affect radio observations.

AMD suggests the next-gen Xbox will arrive in 2027

Microsoft could launch the next-generation Xbox console sometime in 2027, AMD CEO Lisa Su revealed during the semiconductor company’s latest earnings call. Valve is on track to start shipping its AMD-powered Steam Machine early this year, she said, while Microsoft’s development of an Xbox with a semi-custom SoC from AMD is “progressing well to support a launch in 2027.” That doesn’t necessarily mean Microsoft will release a new Xbox console next year, but it appears to be the company’s current goal.

Xbox president Sarah Bond announced Microsoft’s multi-year partnership with AMD for its consoles in mid-2025. Based on Bond’s statement back then, Microsoft is embracing the use of artificial intelligence and machine learning in future Xbox games. She also said that the companies are going to “co-engineer silicon” across devices, “in your living room and in your hands,” implying the development of future handheld consoles.

Leaked documents from the FTC v. Microsoft court battle previously revealed that Microsoft was planning to make the next Xbox a “hybrid game platform” combining local hardware and cloud computing. The documents also said that Microsoft was planning to release the next Xbox in 2028. Whether the company has decided to launch the new Xbox a year earlier remains to be seen, but it’s plausible: the Xbox Series X and S were released back in 2020, and they haven’t sold as well as the Xbox One.

Washington’s ‘millionaires tax’ targets top earners as tech leaders warn of startup fallout

Washington state’s Legislative Building, which houses the Legislature. (GeekWire Photo / Brent Roraback)

Washington state Democratic leaders on Tuesday at last unveiled their so-called “millionaires tax” — a proposed 9.9% tax applied to taxable, personal annual income that exceeds $1 million.

For the first time in decades, lawmakers are advancing a personal income tax aimed at high‑income residents, one that would take effect in two years and is paired with small-business and low-income tax breaks.

The action comes as the state is struggling to plug a more than $2 billion budget hole with spending cuts and a slate of potential tax changes, while at the same time some of Washington’s largest employers are cutting thousands of jobs from their payrolls.

The combined pressures — set against a backdrop of ongoing uncertainty around federal policies and funding — have leaders in the business community concerned about additional financial burdens in an increasingly shaky economy.

“Proposing a personal income tax is a major economic move for our state — one that will have consequences — and it’s not something that we, or anyone in Washington, is taking lightly,” said Rachel Smith, president of Washington Roundtable, a nonprofit representing business executives, in a statement.


Others were more blunt.

“This tax is just another brick in the wall of anti-entrepreneurialism from state and local legislators. The average Amazon employee probably won’t mind, but this stuff is devastating to company creation,” Kirby Winfield, founding general partner at Seattle venture capital firm Ascend, said via email.

The message, said Winfield, is that “Washington does not value job creation or wealth creation for risk-taking founders and startup employees.”

In a state that has historically relied heavily on property, sales and business taxes to balance its books, Gov. Bob Ferguson has repeatedly expressed support in recent months for an income tax on the state’s highest earners.


In December, he said that a tax similar to what has been proposed would apply to fewer than 0.5% of Washington residents and would raise more than $3 billion each year. An official fiscal note on the bill has not been released.

But the governor on Tuesday said the draft legislation fell short in supporting small businesses and lower-income residents in the state. The bill is “a good start, but we still have a long way to go,” he said in a press conference.

“We are listening and hearing the voices of many, many Washingtonians who are struggling right now and having a lack of affordability in our state,” Ferguson said. “And we need to address that head on.”

Gov. Bob Ferguson holds a press conference in Olympia on Tuesday regarding a proposed income tax in Washington state. (Screenshot via TVW stream)

Tax increases and new deductions

The proposed tax, which is being introduced as Senate Bill 6346 and House Bill 2724, includes multiple provisions:

  • A 9.9% tax on Washington taxable income above a $1 million standard deduction per individual, built off of federal adjusted gross income (a worked example follows this list).
  • It allows up to $50,000 a year in charitable deductions per filer (or per couple), along with nonrefundable credits to avoid double‑taxing income already hit by Washington’s B&O or capital‑gains taxes, plus other specific exemptions.
  • There are multiple definitions of residents subject to the tax, including someone who lives here more than 183 days per year.
  • It would apply to income earned beginning Jan. 1, 2028, with the first payments due in April 2029.
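As a rough illustration of the headline provision, the sketch below computes liability from the 9.9% rate, the $1 million standard deduction, and the $50,000 charitable cap listed above. It deliberately ignores the nonrefundable B&O and capital-gains credits and the bill’s other exemptions, so it overstates what some filers would owe.

```python
RATE = 0.099                    # proposed rate on taxable income
STANDARD_DEDUCTION = 1_000_000  # per individual
CHARITABLE_CAP = 50_000         # per filer (or per couple)

def wa_income_tax(agi: float, charitable: float = 0.0) -> float:
    """Estimated tax on federal adjusted gross income under the proposal."""
    deductions = STANDARD_DEDUCTION + min(charitable, CHARITABLE_CAP)
    taxable = max(0.0, agi - deductions)
    return RATE * taxable

# A filer with $1.5M of AGI and $80K of giving pays 9.9% on $450K:
print(f"${wa_income_tax(1_500_000, 80_000):,.0f}")  # $44,550
```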

Supporters of the tax say it brings more fairness to the state’s tax structure. Washington is one of nine states that lack an income tax, and has prohibited the taxation of personal wages.

“Washington’s antiquated tax code is the second-most regressive in the country, which means that working people pay more, while the gap between rich and poor continues to widen,” Invest in Washington Now, a Seattle nonprofit supporting progressive tax policy, said in a statement.


The measure includes targeted tax breaks:

  • The small business B&O tax credit doubles, so businesses with annual gross receipts of less than $250,000 would no longer pay that tax.
  • The temporary B&O surcharge on high-grossing companies would end one year early, in 2028.
  • The Working Families Tax Credit removes the age limit for participation.
  • A new sales tax exemption for grooming and hygiene products would take effect Jan. 1, 2029.

At his press conference Tuesday, Ferguson called for bigger benefits for small businesses and families. The governor said he wants to devote $1 billion of tax relief to small business owners, while the proposed bill provides a little more than $100 million. Ferguson also called for expanded eligibility for the family tax credit and to provide larger amounts to recipients, plus more extensive sales tax relief.

Now come negotiations on a tight timeline. This year’s 60-day legislative session is scheduled to end March 12.

“So it’s a challenge for something this big and this complex” to find a solution, Ferguson said, but he added that he sees potential for “a lot of collaboration.”

The governor said that if the proposed tax is approved by lawmakers, it is certain to go before voters for approval and would face legal challenges as well.


Nixing Washington’s ‘tax advantage’

While the new income tax has worried some in the business community, it’s not the only controversial tax being considered in Olympia this year.

Tech industry leaders have been up in arms over a separate proposal that would broaden the state’s capital gains tax to apply to profits from the sale of qualified small business stock (QSBS), even when those gains are exempt under federal law. The change, introduced as SB 6229 and HB 2292, would impact startup company founders, early employees, and investors.

Aviel Ginzburg, a Seattle-based venture capitalist at Founders’ Co-op and leader of the startup community Foundations, recently posted a satirical video to highlight his opposition to the QSBS and millionaires tax.

“People are happy to pay more taxes. I am too, especially when the …. money is spent well,” Ginzburg said, asserting that’s not the case here. “We’re about to kill the golden goose.”


Another piece of legislation that’s modeled on Seattle’s payroll tax, which targets Amazon and other big companies, was floated unsuccessfully last year and is not gaining traction this session.

Other states are likewise struggling with affordability issues and looking to raise income taxes on the highest earners, with Colorado moving toward a ballot measure and Michigan considering a similar move. California, meanwhile, is exploring a one-time 5% tax on residents with a net worth exceeding $1 billion — which has caused at least six billionaires to flee the state.

Winfield of Ascend dismisses comparisons between Washington’s and California’s tax burdens given other, outsized strengths in the state to the south.

“Given the choice between paying absurd taxes here or in California, founders will just move to the Bay Area,” he said. The Bay Area’s billions of dollars of venture capital, massive tech talent, and tolerance for risk, he argued, are beyond comparison.


“Seattle is great but it doesn’t come close,” Winfield said. “And when you remove the tax advantage you lose your biggest draw.”

Project G Stereo: A 60s Design Icon

Dizzy Gillespie was a fan. Frank Sinatra bought one for himself and gave them to his Rat Pack friends. Hugh Hefner acquired one for the Playboy Mansion. Clairtone Sound Corp.’s Project G high-fidelity stereo system, which debuted in 1964 at the National Furniture Show in Chicago, was squarely aimed at trendsetters. The intent was to make the sleek, modern stereo an object of desire.

By the time the Project G was introduced, the Toronto-based Clairtone was already well respected for its beautiful, high-end stereos. “Everyone knew about Clairtone,” Peter Munk, president and cofounder of the company, boasted to a newspaper columnist. “The prime minister had one, and if the local truck driver didn’t have one, he wanted one.” Alas, with a price tag of CA $1,850—about the price of a small car—it’s unlikely that the local truck driver would have actually bought a Project G. But he could still dream.

The design of the Project G seemed to come from a dream.

“I want you to imagine that you are visitors from Mars and that you have never seen a Canadian living room, let alone a hi-fi set,” is how designer Hugh Spencer challenged Clairtone’s engineers when they first started working on the Project G. “What are the features that, regardless of design considerations, you would like to see incorporated in a new hi-fi set?”


The film “I’ll Take Sweden” featured a Project G, shown here with co-star Tuesday Weld. Nina Munk/The Peter Munk Estate

The result was a stereo system like no other. Instead of speakers, the Project G had sound globes. Instead of the heavy cabinetry typical of 1960s entertainment consoles, it had sleek, angled rosewood panels balanced on an aluminum stand. At over 2 meters long, it was too big for the average living room but perfect for Hollywood movies—Dean Martin had one in his swinging Malibu bachelor pad in the 1965 film Marriage on the Rocks. According to the 1964 press release announcing the Project G, it was nothing less than “a new sculptured representation of modern sound.”

The first-generation Project G had a high-end Elac Miracord 10H turntable, while later models used a Garrard Lab Series turntable. The transistorized chassis and control panel provided AM, FM, and FM-stereo reception. There was space for storing LPs or for an optional Ampex 1250 reel-to-reel tape recorder.

The “G” in Project G stood for “globe.” The hermetically sealed 46-centimeter-diameter sound globes were made of spun aluminum and mounted at the ends of the cantilevered base; inside were Wharfedale speakers. The sound globes rotated 340 degrees to project a cone of sound and could be tuned to re-create the environment in which the music was originally recorded—a concert hall, cathedral, nightclub, or opera house.

Between 1965 and 1967, Clairtone sponsored the Miss Canada beauty pageant. Diane Landry, winner of the 1963 Miss Canada pageant, poses with a Project G2 at Clairtone’s factory showroom in Rexdale, Ontario. Nina Munk/The Peter Munk Estate

Initially, Clairtone intended to produce only a handful of the stereos. As one writer later put it, it was more like a concept car “intended to give Clairtone an aura of futuristic cool.” Eventually fewer than 500 were made. But the Project G still became an icon of mod ’60s Canadian design, winning a silver medal at the 13th Milan Triennale, the international design exhibition.


And then it was over; the dream had ended. Eleven years after its founding, Clairtone collapsed, and Munk and cofounder David Gilmour lost control of the company.

The birth of Clairtone Sound Corp.

Clairtone’s Peter Munk lived a colorful life, with a nightmarish start and many fantastic and dreamlike parts too. He was born in 1927 in Budapest to a prosperous Jewish family. In the spring of 1944, Munk and 13 members of his family boarded a train with more than 1,600 Jews bound for the Bergen-Belsen concentration camp. They arrived, but after some weeks the train moved on, eventually reaching neutral Switzerland. It later emerged that the Nazis had extorted large sums of cash and valuables from the occupants in exchange for letting the train proceed.

As a teenager in Switzerland, Munk was a self-described party animal. He enjoyed dancing and dating and going on long ski trips with friends. Schoolwork was not a top priority, and he didn’t have the grades to attend a Swiss university. His mother, an Auschwitz survivor, encouraged him to study in Canada, where he had an uncle.

Before he could enroll, though, Munk blew his tuition money entertaining a young woman during a trip to New York. He then found work picking tobacco, earned enough for tuition, and graduated from the University of Toronto in 1952 with a degree in electrical engineering.


Clairtone cofounders Peter Munk [left] and David Gilmour envisioned the company as a luxury brand. Nina Munk/The Peter Munk Estate

At the age of 30, Munk was making custom hi-fi sets for wealthy clients when he and David Gilmour, who owned a small business importing Scandinavian goods, decided to join forces. Their idea was to create high-fidelity equipment with a contemporary Scandinavian design. Munk’s father-in-law, William Jay Gutterson, invested $3,000. Gilmour mortgaged his house. In 1958, Clairtone Sound Corp. was born.

From the beginning, Munk and Gilmour sought a high-end clientele. They positioned Clairtone as a luxury brand, part of an elegant lifestyle. If you were the type of woman who listened to music while wearing pearls and a strapless gown and lounging on a shag rug, your music would be playing on a Clairtone. If you were a man who dressed smartly and owned an Arne Jacobsen Egg chair, you would also be listening on a Clairtone. That was the modern lifestyle captured in the company’s advertisements.

In 1958, Clairtone produced its first prototype: the monophonic 100-M, which had a long, low cabinet made from oiled teak, with a Dual 1004 turntable, a Granco tube chassis, and a pair of Coral speakers. It never went into production, but the next model, the stereophonic 100-S, won a Design Award from Canada’s National Industrial Design Council in 1959. By 1963, Clairtone was selling 25,000 units a year.

Peter Munk visits the Project G assembly line in 1965. Nina Munk/The Peter Munk Estate

Design was always front and center at Clairtone, not just for the products but also for the typography, advertisements, and even the annual reports. Yet nothing in the early designs signaled the dramatic turn it would take with the Project G. That came about because of Hugh Spencer.


Spencer was not an engineer, nor did he have experience designing consumer electronics. His day job was designing sets for the Canadian Broadcasting Corp. He consulted regularly with Clairtone on the company’s graphics and signage. The only stereo he ever designed for Clairtone was the Project G, which he first modeled as a wooden box with tennis balls stuck to the sides.

From both design and quality perspectives, Clairtone was successful. But the company was almost always hemorrhaging cash. In 1966, with great fanfare and large government incentives, the company opened a state-of-the-art production facility in Nova Scotia. It was a mismatch. The local workforce didn’t have the necessary skills, and the surrounding infrastructure couldn’t handle the production. On 27 August 1967, Munk and Gilmour were forced out of Clairtone, which became the property of the government of Nova Scotia.

Despite the demise of their first company (and the government inquiry that followed), Munk and Gilmour remained friends and went on to become serial entrepreneurs. Their next venture? A resort in Fiji, which became part of a large hotel chain in that country, Australia, and New Zealand. (Gilmour later founded Fiji Water.) Then Munk and Gilmour bought a gold mine and cofounded Barrick Gold (now Barrick Mining Corp., one of the largest gold mining operations in the world). Their businesses all had ups and downs, but both men became extremely wealthy and noted philanthropists.

Preserving Canadian design

As an example of iconic design, the Project G seems like an ideal specimen for museum collections. And in 1991, Frank Davies, one of the designers who worked for Clairtone, donated a Project G to the recently launched Design Exchange in Toronto. It would be the first object in the DX’s permanent collection, which sought to preserve examples of Canadian design. The museum quickly became Canada’s center for the promotion of design, hosting more than 50 programs each year to teach people about how design influences every aspect of our lives.


In 2008, the museum opened The Art of Clairtone: The Making of a Design Icon, 1958–1971, an exhibition showcasing the company’s distinctive graphic design, industrial design, engineering, and photography.

David Gilmour’s wife, Anna Gilmour, was the company’s first in-house model. Nina Munk/The Peter Munk Estate

But what happened to the DX itself is a reminder that any museum, however worthy, shouldn’t be taken for granted. In 2019, the DX abruptly closed its permanent collection, and curators were charged with deaccessioning its objects. Fortunately, the Royal Ontario Museum, Carleton and York Universities, and the Archives of Ontario, among others, were able to accept the artifacts and companion archives. (The Project G pictured at top is now at the Royal Ontario Museum.)

Researchers at York and Carleton have been working to digitize and virtually reconstitute the DX collection, through the xDX Project. They’re using the Linked Infrastructure for Networked Cultural Scholarship (LINCS) to turn interlinked and contextualized data about the collection into a searchable database. It’s a worthy goal, even if it’s not quite the same as having all of the artifacts and supporting papers physically together in one place. I admit to feeling both pleased about this virtual workaround, and also a little sad that a unified collection that once spoke to the historical significance of Canadian design no longer exists.

Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology.


An abridged version of this article appears in the February 2026 print issue as “The Project G Stereo Defined 1960s Cool.”
