Tech
MacBook Neo benchmark results are predictably close to the iPhone 16 Pro, comparable to the M1
Apple announced the MacBook Neo with the A18 Pro iPhone processor, and early benchmarks are in line with AppleInsider’s previous analysis, making it an easy spec swap for the M1 MacBook Air.

MacBook Neo has an A18 Pro processor and that should be plenty for most users
The MacBook Neo is a new and somewhat controversial product in Apple’s lineup. It cuts out a lot of premium features associated with modern Macs to achieve its low price.
One of the cost-saving measures is the A18 Pro processor originally revealed for the iPhone 16 Pro. Early benchmarks, first spotted by MacRumors, show specs nearly identical to the iPhone 16 Pro’s, but with a Mac identifier of 17,5.
Tech
iPhone 17e RAM confirmed and it’s exactly what we expected
Apple’s upcoming iPhone 17e is positioned as the more affordable entry in the iPhone 17 lineup. Until now, we didn’t know how much memory the phone would have – but the figure has been revealed, and it’s exactly what we expected.
According to data uncovered in Apple’s developer tool Xcode, the iPhone 17e comes with 8GB of RAM. This is the same amount found in the iPhone 16e and the baseline configuration we expected for this generation.
The Xcode listing also confirms the device meets the minimum requirement for Apple Intelligence, Apple’s suite of AI features introduced alongside its latest iOS releases. That means even the lower-end iPhone 17e will be capable of running Apple’s AI-powered features – putting it on relatively equal footing with the standard iPhone 17 in that area.
The two models also share another important piece of hardware: Apple’s A19 chip. However, there is still some separation between the devices. While both run the same processor, the iPhone 17e features a 4-core GPU. By contrast, the regular iPhone 17 includes a 5-core GPU.
The iPhone 17e is available to pre-order now and is scheduled to launch on March 11.
Tech
OPPO and MediaTek Highlight New On-Device AI Features at MWC 2026
Chinese tech firm OPPO unveiled new on-device AI advancements at Mobile World Congress 2026 in Barcelona, presented in collaboration with MediaTek. At the event, the two companies emphasized how their partnership is bringing recent advances in artificial intelligence to consumers through new smartphones, with a focus on AI phones that can process information quickly on the device itself.
New On-Device AI Features
OPPO showed off AI features powered by the MediaTek Dimensity 9500 chip at the event. Among the notable ones is AI Translate, which lets users translate text directly on the device, offering better accuracy and smoother results even when the network is poor. Another is AI Portrait Glow, which helps users capture sharper portrait photos in low-light conditions. These features are expected to arrive on the OPPO Find X9 series via the upcoming ColorOS 16 update.
Omni Full-Modal AI Model

OPPO and MediaTek previewed Omni, which they describe as the first full-modal AI model designed to run directly on smartphones. The technology lets the device perceive its surroundings through voice, video, and text, making it easier to interact with.
The Find X9 Pro was also shown at the event, showcasing its Hasselblad-supported camera and AI capabilities, while the OPPO Reno15 Pro demonstrated its AI-assisted camera features.
Improved Cross-Device Connectivity with Quick Share
To improve connectivity between platforms, OPPO introduced support for Android Quick Share. With this feature, users can easily share files between OPPO devices and devices running iOS, iPadOS, and macOS, without needing a separate app. OPPO announced that the feature will begin rolling out to compatible devices via a software update in March.
Industry Recognition and Future AI Plans
The OPPO Find X9 Pro was named a finalist in the “Best Smartphone” category of the GLOMO Awards, recognition of the device’s performance, camera, and AI features, and of the innovation OPPO is bringing to the category. Looking ahead, OPPO and MediaTek plan to continue collaborating on on-device AI in a bid to make smartphones faster still.
Tech
There’s another contender looking to be the best phone of 2026
Infinix has thrown its hat firmly into the flagship ring with the Note 60 Ultra, a new top-tier phone unveiled at MWC 2026 in Barcelona. It’s aiming far higher than the brand’s usual mid-range territory.
The headline feature is a 200MP camera system, but the phone is also packing satellite connectivity, a massive battery and a striking design developed with Italian automotive design house Pininfarina.
Taken together, it looks like Infinix’s most serious attempt yet to compete with the big names in the premium phone space.
The design is where Infinix wants to make its first impression. Rather than the increasingly chunky camera bumps we’ve seen across recent flagships, the Note 60 Ultra uses an aluminium unibody rear with what the company calls a Uni-Chassis camera module.
This module is formed from a single sheet of Gorilla Glass Victus, keeping the back smooth and uninterrupted, more like the bodywork of a sports car than a traditional smartphone.
There are some flashy touches too. A “Floating Taillight” lighting strip runs across the back and lights up when the phone powers on, while a hidden Active Matrix rear display can show notifications, icons or a pixel-style companion.
Under the surface sits a triple-camera setup anchored by a 200MP Samsung ISOCELL HPE sensor, joined by a 50MP periscope telephoto camera and an ultra-wide lens. Zoom runs from a 2× optical crop and 3.5× optical zoom through 7× lossless zoom, stretching all the way to 100× hybrid zoom for long-distance shots.
Elsewhere, the phone leans heavily into big-spec hardware. A 4nm MediaTek Dimensity 8400 Ultimate chip powers the device alongside 12GB of RAM and 256GB of storage. Additionally, a 7000mAh silicon-carbon battery supports 100W wired charging and 50W wireless charging, with a full top-up claimed in around 48 minutes.
Infinix is also adding two-way satellite calling and messaging, allowing users to stay connected in areas without mobile coverage.
The display is another highlight, with a 1.5K panel capable of 144Hz refresh rates and a peak brightness of 4500 nits, backed up by stereo speakers tuned by JBL.
The Note 60 Ultra runs Android 16 with Infinix’s new GlowSpace interface, and the company is promising three years of OS updates and five years of security patches.
Whether it can truly challenge the heavy hitters remains to be seen, but on paper at least, the Note 60 Ultra is shaping up to be one of the more ambitious phones launching in 2026.
Tech
The Double Whammy Of The CBS, Warner Brothers Mergers Will Be A Layoff Nightmare
from the here-comes-the-synergies dept
You might recall that Paramount and CBS had only just started to lay off workers in the wake of the merger with David Ellison’s Skydance. Now, after Ellison (or more accurately his dad and the Saudis) dramatically overpaid for Warner Brothers ($111 billion plus numerous incentives), the overall debt load at the company is so massive, it could make past Warner Brothers chaos seem somewhat charming:
“The deal is tied up with so much debt that it virtually guarantees layoffs the likes of which Hollywood hasn’t seen before. That’s going to mean far less output from the suite of properties under Paramount and Warner’s control. And it will mean that the production apocalypse which has been brewing since the pandemic, the end of Peak TV, and the contraction of runaway green lights for streaming networks will grow still more apocalyptic.”
The real-world costs of this kind of pointless consolidation are always borne by consumers and labor. Executives get disproportionate compensation, tax breaks, and a brief stock bump. Workers get shitcanned and consumers get higher prices and shittier overall product in a bid to pay down debt. We have seen this happen over and over and over again in U.S. media. It’s not subtle or up for debate.
Keep in mind Warner Brothers has seen nothing but this kind of operational chaos over the last two decades as it bounced between pointless mergers with AOL, AT&T, and Discovery, all of which promised vast synergies and new innovation, but instead resulted in oceans of layoffs, higher prices, and consistently shittier product.
Now comes the granddaddy deal of them all to try and cement Larry Ellison’s obvious desire to try and dominate what’s left of U.S. media. Run by his son David, whose operational judgement (if Bari Weiss’ start at CBS is any indication) is arguably worse than all the terrible, fail upward, trust-fund brunchlord types that preceded him.
All of the debt from past deals just keeps piling up and being kicked down the road in a lazy, pseudo-innovative shell game (and this doesn’t include CBS!):
“In its initial $30-a-share bid for Warner Bros., Paramount was financing the purchase with up to $84 billion in pro forma debt. That has now risen to $31 a share, tacking on roughly another $2.5 billion, plus a “ticking fee” of 25 cents per share per quarter for every quarter the deal doesn’t close after September 30 of this year. Paramount is also paying Netflix’s breakup fee of $2.8 billion. Paramount has not released the financing details for the new deal, but it’s likely to be an even higher debt load.”
Ellison is pretty broadly also leveraged in the AI investment hype cycle, and if that bubble pops (or pops worse, as the case may be), this entire gambit could go wrong very, very quickly. Even the ongoing Saudi cash infusions may not be enough to save them. Larry Ellison’s nepobaby son will of course be fine; the employees, consumers, and broader U.S. media market, not so much.
Filed Under: consolidation, crash, david ellison, hype, larry ellison, layoffs, media, merger, nepobabies
Companies: cbs, oracle, paramount, warner bros. discovery
Tech
Lego’s new F1 reveals have us excited for the new season
As the 2026 Formula 1 season kicks off in Melbourne, Lego has unveiled two new display sets celebrating Ferrari drivers Charles Leclerc and Lewis Hamilton.
The new Lego Editions Scuderia Ferrari HP helmet sets recreate the drivers’ 2025 helmet designs in detailed brick form. They come complete with signature plaques and, for the first time, minifigures of both drivers in Ferrari colours.
Both models are designed as compact display pieces for F1 fans rather than full race cars. The Charles Leclerc helmet includes 886 pieces and highlights details from the Monaco-born driver’s real helmet. It includes his number 16, Ferrari’s prancing horse logo, and personal tributes to his father and the late driver Jules Bianchi. In addition, it includes Leclerc’s signature on a display plaque and his first official Lego minifigure.
The Lewis Hamilton helmet set is slightly smaller at 884 pieces, but still packs in plenty of detail. It features Hamilton’s number 44, a signature plaque, and a minifigure version of the seven-time world champion wearing a red Ferrari race suit, reflecting his high-profile switch to the team.
To mark the announcement, Lego also created life-size brick-built versions of both helmets that appeared in the paddock during the Australian Grand Prix weekend. Each oversized build uses more than 3,500 Lego elements, measures roughly 26cm tall, and weighs just under 3kg. They were designed by Lego Certified Professional Ryan “The Brickman” McNaught, and each reportedly took about 60 hours to construct.
Each set is aimed at builders aged 14 and up and costs £79.99. Both will be available globally from May 1, although pre-orders are open now.
Tech
FLASH Radiotherapy’s Bold Approach to Cancer Treatment
Inside a cavernous hall at the Swiss-French border, the air hums with high voltage and possibility. From his perch on the wraparound observation deck, physicist Walter Wuensch surveys a multimillion-dollar array of accelerating cavities, klystrons, modulators, and pulse compressors—hardware being readied to drive a new generation of linear particle accelerators.
Wuensch has spent decades working with these machines to crack the deepest mysteries of the universe. Now he and his colleagues are aiming at a new target: cancer. Here at CERN (the European Organization for Nuclear Research) and other particle-physics labs, scientists and engineers are applying the tools of fundamental physics to develop a technique called FLASH radiotherapy that offers a radical and counterintuitive vision for treating the disease.
CERN researcher Walter Wuensch says the particle physics lab’s work on FLASH radiotherapy is “generating a lot of excitement.” CERN
Radiation therapy has been a cornerstone of cancer treatment since shortly after Wilhelm Conrad Röntgen discovered X-rays in 1895. Today, more than half of all cancer patients receive it as part of their care, typically in relatively low doses of X-rays delivered over dozens of sessions. Although this approach often kills the tumor, it also wreaks havoc on nearby healthy tissue. Even with modern precision targeting, the potential for collateral damage limits how much radiation doctors can safely deliver.
FLASH radiotherapy flips the conventional approach on its head, delivering a single dose of ultrahigh-power radiation in a burst that typically lasts less than one-tenth of a second. In study after study, this technique causes significantly less injury to normal tissue than conventional radiation does, without compromising its antitumor effect.
At CERN, which I visited last July, the approach is being tested and refined on accelerators that were never intended for medicine. If ongoing experiments here and around the world continue to bear out results, FLASH could transform radiotherapy—delivering stronger treatments, fewer side effects, and broader access to lifesaving care.
“It’s generating a lot of excitement,” says Wuensch, a researcher at CERN’s Linear Electron Accelerator for Research (CLEAR) facility. “We accelerator people are thinking, Oh, wow, here’s an application of our technology that has a societal impact which is more immediate than most high-energy physics.”
The Unlikely Birth of FLASH Therapy
The breakthrough that led to FLASH emerged from a line of experiments that began in the 1990s at Institut Curie in Orsay, near Paris. Researcher Vincent Favaudon was using a low-energy electron accelerator to study radiation chemistry. Targeting the accelerator at mouse lungs, Favaudon expected the radiation to produce scar tissue, or fibrosis. But when he exposed the lungs to ultrafast blasts of radiation, at doses a thousand times as high as what’s used in conventional radiation therapy, the expected fibrosis never appeared.
Puzzled, Favaudon turned to Marie-Catherine Vozenin, a radiation biologist at Curie who specialized in radiation-induced fibrosis. “When I looked at the slides, there was indeed no fibrosis, which was very, very surprising for this type of dose,” recalls Vozenin, who now works at Geneva University Hospitals, in Switzerland.
The pair expanded the experiments to include cancerous tumors. The results upended a long-held trade-off of radiotherapy: the idea that you can’t destroy a tumor without also damaging the host. “This differential effect is really what we want in radiation oncology, not damaging normal tissue but killing the tumors,” Vozenin says.
They repeated the protocol across different types of tissue and tumors. By 2014, they had gathered enough evidence to publish their findings in Science Translational Medicine. Their experiments confirmed that delivering an ultrahigh dose of 10 gray or more in less than a tenth of a second could eradicate tumors in mice while leaving surrounding healthy tissue virtually unharmed. For comparison, a typical chest X-ray delivers about 0.1 milligray, while a session of conventional radiation therapy might deliver a total of about 2 gray per day. (The authors called the effect “FLASH” because of the quick, high doses involved, but it’s not an acronym.)
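To put those figures on a common scale, the implied dose rates can be compared directly. A back-of-the-envelope sketch, assuming roughly one minute of beam-on time for a conventional session (a typical value, not a figure from the article):

```python
# Back-of-the-envelope dose-rate comparison, FLASH vs. conventional.
# FLASH figures (>= 10 Gy in under 0.1 s) are from the 2014 study;
# the one-minute conventional delivery time is an assumed typical value.

flash_dose_gy = 10.0   # ultrahigh single dose, lower bound
flash_time_s = 0.1     # delivered in under a tenth of a second
conv_dose_gy = 2.0     # typical daily fraction
conv_time_s = 60.0     # assumed ~1 minute of beam-on time

flash_rate = flash_dose_gy / flash_time_s  # 100 Gy/s, a lower bound
conv_rate = conv_dose_gy / conv_time_s     # ~0.033 Gy/s

print(f"FLASH: {flash_rate:.0f} Gy/s, conventional: {conv_rate:.3f} Gy/s")
print(f"FLASH delivers dose roughly {flash_rate / conv_rate:.0f}x faster")
```

Even as a lower bound, the FLASH dose rate is thousands of times that of a standard session, which is what pushes conventional delivery and measurement hardware past its limits.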
Although many cancer experts were skeptical about the FLASH effect on healthy tissue when it was first announced in 2014, numerous studies have since confirmed and expanded on those results. In a 2020 paper, a lung tissue sample taken 4 months after being exposed to conventional radiotherapy [center] shows many more dark spots indicating scarring than a sample exposed to FLASH [right]. The nonirradiated sample [left] is the control.
Many cancer experts were skeptical. The FLASH effect seemed almost too good to be true. “It didn’t get a lot of traction at first,” recalls Billy Loo, a Stanford radiation oncologist specializing in lung cancer. “They described a phenomenon that ran counter to decades of established radiobiology dogma.”
But in the years since then, researchers have observed the effect across a wide range of tumor types and animals—beyond mice to zebra fish, fruit flies, and even a few human subjects, with the same protective effect in the brain, lungs, skin, muscle, heart, and bone.
Why this happens remains a mystery. “We have investigated a lot of hypotheses, and all of them have been wrong,” says Vozenin. Currently, the most plausible theory emerging from her team’s research points to metabolism: Healthy and cancerous cells may process reactive oxygen species—unstable oxygen-containing molecules generated during radiation—in very different ways.
Adapting Accelerators for FLASH
At the time of the first FLASH publication, Loo and his team at Stanford were also focused on dramatically speeding up radiation delivery. But Loo wasn’t chasing a radiobiological breakthrough. He was trying to solve a different problem: motion.
“The tumors that we treat are always moving targets,” he says. “That’s particularly true in the lung, where because of breathing motion, the tumors are constantly moving.”
To bring FLASH therapy out of the lab and into clinical use, researchers like Vozenin and Loo needed machines capable of delivering fast, high doses with pinpoint precision deep inside the body. Most early studies relied on low-energy electron beams like Favaudon’s 4.5-megaelectron-volt Kinetron—sufficient for surface tumors, but unable to reach more than a few centimeters into a human body. Treating deep-seated cancers in the lung, brain, or abdomen would require far higher particle energies.
At CERN, researchers working on FLASH are developing this hardware to boost electrons to ultrahigh power within a short distance.
CERN
They also needed an alternative to conventional X-rays. In a clinical linac, X-ray photons are produced by dumping high-energy electrons into a bremsstrahlung target, which is made of a material with a high atomic number, like tungsten or copper. The target slows the electrons, converting their kinetic energy into X-ray photons. It’s an inherently inefficient process that wastes most of the beam power as heat and makes it extremely difficult to reach the ultrahigh dose rates required for FLASH. High-energy electrons, by contrast, can be switched on and off within milliseconds. And because they have a charge and can be steered by magnets, electrons can be precisely guided to reach tumors deep within the body. (Researchers are also investigating protons and carbon ions; see the sidebar, “What’s the Best Particle for FLASH Therapy?”)
Loo turned to the SLAC National Accelerator Laboratory in Menlo Park, Calif., where physicist Sami Gamal-Eldin Tantawi was redefining how electromagnetic waves move through linear accelerators. Tantawi’s findings allowed scientists to precisely control how energy is delivered to particles—paving the way for compact, efficient, and finely tunable machines. It was exactly the kind of technology FLASH therapy would need to target tumors deep inside the body.
Meanwhile, Vozenin and other European researchers turned to CERN, best known for its 27-kilometer Large Hadron Collider (LHC) and the 2012 discovery of the Higgs boson, the “God particle” that gives other particles their mass.
CERN is also home to a range of smaller linear accelerators—including CLEAR, where Wuensch and his team are adapting high-energy physics tools for medicine.
Unlike the LHC, which loops particles around a massive ring to build up energy before smashing them together, linear accelerators like CLEAR send particles along a straight, one-time path. That setup allows for greater precision and compactness, making it ideal for applications like FLASH.
At the heart of the CLEAR facility, Wuensch points out the 200-MeV linear accelerator with its 20-meter beamline. This is “a playground of creativity,” he says, for the physicists and engineers who arrive from all over the world to run experiments.
The process begins when a laser pulse hits a photocathode, releasing a burst of electrons that form the initial beam. These electrons travel through a series of precisely machined copper cavities, where high-frequency microwaves push them forward. The electrons then move through a network of magnets, monitors, and focusing elements that shape and steer them toward the experimental target with submillimeter precision.
Instead of a continuous stream, the electron beam is divided into nanosecond-long bunches—billions of electrons riding the radio-frequency field like surfers. Inside the accelerator’s cavities, the field flips polarity 12 billion times per second, so timing is everything: Only electrons that arrive perfectly in phase with the accelerating wave will gain energy. That process repeats through a chain of cavities, each giving the bunches another push, until the beam reaches its final energy of 200 MeV.
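The timing constraint above is easy to quantify. A quick calculation, reading the 12-billion-flips-per-second figure as a 12-gigahertz X-band RF frequency (an interpretation, since the article gives only the flip count):

```python
# Period of the accelerating RF field, assuming the "12 billion
# times per second" figure corresponds to a 12 GHz X-band frequency.

freq_hz = 12e9               # assumed 12 GHz RF frequency
period_s = 1.0 / freq_hz     # duration of one full cycle of the field

print(f"RF period: {period_s * 1e12:.1f} picoseconds")
```

A bunch that drifts by even a few tens of picoseconds lands on the wrong half of the cycle and is decelerated rather than pushed, which is why bunch timing must be controlled at this scale.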
Physicist Marçà Boronat inspects one of the high-precision components used to accelerate the electrons for FLASH radiotherapy.
CERN
Much of this architecture draws directly from the Compact Linear Collider study, a decades-long CERN project aimed at building a next-generation collider. The proposed CLIC machine would stretch 11 kilometers and collide electrons and positrons at 380 gigaelectron volts. To do that in a linear configuration—without the multiple passes around a ring like the LHC—CERN engineers have had to push for extremely high acceleration gradients to boost the electrons to high energies over relatively short distances—up to 100 megavolts per meter.
Wuensch leads me to a large experimental hall housing prototype structures from the CLIC effort, and points out the microwave devices that now help drive FLASH research. Though the future of CLIC as a collider remains uncertain, its infrastructure is already yielding dividends: smaller, high-gradient accelerators that may one day be as suited for curing cancer as they are for smashing particles.
The power behind the high gradients comes from CERN’s Xboxes, the X-band RF systems that dominate the experimental hall. Each Xbox houses a klystron, modulator, pulse compressor, and waveguide network to generate and shape the microwave pulses. The pulse compressors store energy in resonant cavities and then release it in a microsecond burst, producing peaks of up to 200 megawatts; if it were continuous, that’s enough to power at least 40,000 homes. The Xboxes let researchers fine-tune the power, timing, and pulse shape.
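The homes comparison implies a per-household figure worth making explicit. A quick sanity check, assuming roughly 5 kilowatts of demand per home (the article states only the totals):

```python
# Sanity check on the "at least 40,000 homes" comparison.
# The ~5 kW per-home demand figure is an assumption for illustration.

peak_power_w = 200e6    # 200 MW peak from a pulse compressor
home_draw_w = 5_000.0   # assumed average household demand, in watts

homes = peak_power_w / home_draw_w
print(f"Homes powered at 5 kW each: {homes:,.0f}")  # 40,000
```

The compressor delivers that power only in microsecond bursts, of course; the sustained draw of the facility is far smaller.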
According to Wuensch, many of the recent accelerator developments were enabled by advances in computer simulation and high-precision three-dimensional machining. These tools allow the team to iterate quickly, designing new accelerator components and improving beam control with each generation.
Still, real-world challenges remain. The power demands are formidable, as are the space requirements; for all the talk of its “compact” design, the original CLIC was meant to span kilometers. Obviously, a hospital needs something that’s actually compact.
“A big challenge of the project,” says Wuensch, “is to transform this kind of technology and these kinds of components into something that you can imagine installing in a hospital, and it will run every day reliably.”
To that end, CERN researchers have teamed up with the Lausanne University Hospital (known by its French acronym, CHUV) and the French medical technology company Theryq to design a hospital facility capable of treating large and deep-seated tumors with the very short time scales needed for FLASH and scaled down to fit in a clinical setting.
Theryq’s Approach to FLASH
Theryq’s research center and factory are located in southern France, near the base of Montagne Sainte-Victoire, a jagged spine of limestone that Paul Cézanne painted dozens of times, capturing its shifting light and form.
“The solution that we are trying to develop here is something which is extremely versatile,” says Ludovic Le Meunier, CEO of the expanding company. “The ultimate goal is to be able to treat any solid tumor anywhere in the body, which is about 90 percent of cancers these days.”
Theryq’s FLASHDEEP system, under development with CERN and the company’s clinical partners, has a 13.5-meter-long, 140-MeV linear accelerator. That’s strong enough to treat tumors at depths of up to about 20 centimeters in the body. The patient will remain in a supported standing position during the split-second irradiation. THERYQ
Theryq’s push to bring FLASH radiotherapy from the lab to clinic has followed a three-pronged rollout, with each device engineered for a specific depth and clinical use. The first machine, FLASHKNiFE, was unveiled in 2020. Designed for superficial tumors and intraoperative use, the system delivers electron beams at 6 or 9 MeV. A prototype installed that same year at CHUV is conducting a phase-two trial for patients with localized skin cancer.
More recently, Theryq launched FLASHLAB, a compact, 7-MeV platform for radiobiology research.
The company’s most ambitious system, FLASHDEEP, is still under development. The 13.5-meter-long electron source will deliver very high-energy electrons of as much as 140 MeV up to 20 centimeters inside the body in less than 100 milliseconds. An integrated CT scanner, built into a patient-positioning system developed by Leo Cancer Care, captures images that stream directly into the treatment-planning software, enabling precise calculation of the radiation dose. “Before we actually trigger the beam or the treatment, we make stereo images to verify at the very last second that the tumor is exactly where it should be,” says Theryq technical manager Philippe Liger.
FLASH Therapy Moves to Animal Tests
While CERN’s CLEAR accelerator has been instrumental in characterizing FLASH parameters, researchers seeking to study FLASH in living organisms must look elsewhere: CERN doesn’t allow animal experiments on-site. That’s one reason why a growing number of scientists are turning to PITZ, the Photo Injector Test Facility in Zeuthen, a leafy lakeside suburb of Berlin.
PITZ is part of Germany’s national accelerator lab and is responsible for developing the electron source for the European X-ray Free-Electron Laser. Now PITZ is emerging as a hub for FLASH research, with an unusually tunable accelerator and a dedicated biomedical lab to ensure controlled conditions for preclinical studies.
At Germany’s Photo Injector Test Facility in Zeuthen (PITZ), the electron-beam accelerator [top] is used to irradiate biological targets in early-stage animal tests of FLASH radiotherapy [bottom]. Top: Frieder Mueller; Bottom: MWFK
“The biggest advantage of our facility is that we can do a very stepwise, very defined and systematic study of dose rates,” says Anna Grebinyk, a biochemist who heads the new biomedical lab, “and systematically optimize the FLASH effect to see where it gets the best properties.”
The experiments begin with zebra-fish embryos, prized for early-stage studies because they’re transparent and develop rapidly. After the embryos, researchers test the most promising parameters in mice. To do that, the PITZ team uses a small-animal radiation research platform, complete with CT imaging and a robotic positioning system adapted from CERN’s CLEAR facility.
What sets PITZ apart is the flexibility of its beamline. The 30-meter accelerator system steers electrons with micrometer precision, producing electron bunches with exceptional brightness and emittance—a metric of beam quality. “We can dial in any distribution of bunches we want,” says Frank Stephan, group leader at PITZ. “That gives us tremendous control over time structure.”
Timing matters. At PITZ, the laser-struck photocathode generates electron bunches that are accelerated immediately, at up to 60 million volts per meter. A fast electromagnetic kicker system acts as a high-speed gatekeeper, selectively deflecting individual electron bunches from a high-repetition beam and steering them according to researchers’ needs. This precise, bunch-by-bunch control is essential for fine-tuning beam properties for FLASH experiments and other radiation therapy studies.
“The idea is to make the complete treatment within one millisecond,” says Stephan. “But of course, you have to [trust] that within this millisecond, everything works fine. There is not a chance to stop [during] this millisecond. It has to work.”
Regulating the dose remains one of the biggest technical hurdles in FLASH. The ionization chambers used in standard radiotherapy can’t respond accurately when dose rates spike hundreds of times higher in a matter of microseconds. So researchers are developing new detector systems to precisely measure these bursts and keep pace with the extreme speed of FLASH delivery.
FLASH as a Research Tool
Beyond its therapeutic potential, FLASH may also open new windows to illuminate cancer biology. “What is really, really superinteresting, in my opinion,” says Vozenin, “is that we can use FLASH as a tool to understand the difference between normal tissue and tumors. There must be something we’re not aware of that really distinguishes the two—and FLASH can help us find it.” Identifying those differences, she says, could lead to entirely new interventions, not just with radiation, but also with drugs.
Vozenin’s team is currently testing a hypothesis involving long-lived proteins present in healthy tissue but absent in tumors. If those proteins prove to be key, she says, “we’re going to find a way to manipulate them—and perhaps reverse the phenomenon, even [turn] a tumor back into a normal tissue.”
Proponents of FLASH believe it could help close the cancer care gap worldwide; in low-income countries, only about 10 percent of patients have access to radiotherapy, and in middle-income countries, only about 60 percent of patients do, according to the International Atomic Energy Agency. Because FLASH treatment can often be delivered in a single brief session, it could spare patients from traveling long distances for weeks of treatment and allow clinics to treat many more people.
High-income countries stand to benefit as well. Fewer sessions mean lower costs, less strain on radiotherapy facilities, and fewer side effects and disruptions for patients.
The big question now is, How long will it take? Researchers I spoke with estimate that FLASH could become a routine clinical option in about 10 years—after the completion of remaining preclinical studies and multiphase human trials, and as machines become more compact, affordable, and efficient. Much of the momentum comes from a growing field of startups competing to build devices, but the broader scientific community remains remarkably open and collaborative.
“Everyone has a relative who knows about cancer because of their own experience,” says Stephan. “My mother died of it. In the end, we want to do something good for mankind. That’s why people work together.”
This article appears in the March 2026 print issue.
Tech
London doctor carries out remote robot surgery on cancer patient 1,500 miles away
The milestone procedure saw Professor Prokar Dasgupta, based at The London Clinic’s robotic center in Harley Street, operate on 62-year-old patient Paul Buxton, who was in St Bernard’s Hospital in Gibraltar, a British overseas territory bordering southern Spain.
Tech
Tech Moves: Amperity and Siteimprove name CMOs; AWS director departs; Gong’s new exec

— Amperity, a Seattle-based startup that helps companies collect and manage customer data, named Bridget Perry as chief marketing officer.
Earlier in her career, Perry was a marketing director for Microsoft for nearly nine years and worked for more than eight years at Adobe, leaving the role of CMO of Europe, Middle East and Africa. She was most recently interim CMO for Later, an influencer marketing company, and has held strategic advisor roles.
“Bridget has led marketing teams through real platform shifts, not incremental change. She knows what it takes to build credibility in a market and scale it globally,” said Tony Owens, CEO of Amperity, in a statement. The company is ranked No. 39 on GeekWire 200, our list of top Pacific Northwest startups.

— Seattle-based Simon Frey was promoted to chief customer officer of Gong. He was previously senior vice president of customer outcomes for the San Francisco startup that builds agentic AI technology to optimize revenue performance and automate workflows.
“Simon has spent years partnering closely with our customers, helping them unlock meaningful growth across their revenue organizations,” said Shane Evans, Gong’s chief revenue architect, in a statement.
Frey joined Gong in 2024 after leaving TaxBit, where he was VP of revenue. Other past employers include Qualtrics and McKinsey. He also served as an advisor to Jargon, which was acquired by Remitly.

— Elizabeth Scallon is now director of healthcare AI startups at Nvidia where she will oversee its global Healthcare and Life Sciences Inception program. Scallon, a longtime leader in Seattle’s startup ecosystem, joins Nvidia from HP where she worked for nearly four years as director of technical and business incubation and strategy.
Scallon is also an affiliate instructor at the University of Washington and has held leadership roles at Amazon and WeWork. She was director of the UW’s CoMotion Labs for five years and co-founded Find Ventures.
“With this role, I’m returning to my roots in biotech and genetics and bringing the skills, experience, and connections I’ve built along the way to do my life’s work,” Scallon said on LinkedIn.

— After nearly a decade at Amazon Web Services, Jenny Brinkley is resigning as director of security readiness.
“I start a new role next week in a rapidly growing space, and I am excited to be part of something transformative once again. To my AWS colleagues, thank you for the kind words and support,” Brinkley said on LinkedIn.
Brinkley, who is based in Portland, Ore., earlier co-founded an AI startup and ran a consultancy.
— Siteimprove announced Jen Jones as its chief marketing officer. The company, which helps businesses improve their website functionality, is based in Denmark and has an office in Bellevue, Wash., where much of its executive leadership team is based. Jones was previously at commercetools.
— Padmashree Koneti has departed her role as chief product officer of Yoodli after roughly five months. The Seattle startup has not yet named a replacement. Yoodli, which is using generative AI to analyze speech and offer tips for improving communication skills, also just hired Alexandra Breymeier as customer success lead. She previously worked at employee referral company ERIN.

— Vandana Shah is now vice president of product for Scowtt, a Kirkland, Wash.-based startup that wants to reshape how advertisers optimize paid campaigns. The company in December announced a $12 million Series A funding round.
Shah joins Scowtt from Ladder. She was previously Google’s director of product management for the advertising platform, working at the Bay Area company for more than 16 years.
“Having spent years leading complex platform initiatives at Google Ads, I have seen the power of building resilient, customer-first foundations at scale. I am thrilled to bring that experience to Scowtt,” she said on LinkedIn.

— Dinesh Govindasamy was promoted to director of engineering at Meta, supporting teams across Tupperware, Public Cloud and Meta Kubernetes Service. Govindasamy joined Meta in October 2023.
“This milestone is thanks to the mentors, collaborators, and teams who believed in me and pushed me to grow. You know who you are — thank you,” he said on LinkedIn.
Govindasamy, based in the Seattle area, was previously at Microsoft for more than 15 years, leaving the role of group engineering manager in which he led teams working on Azure Kubernetes Service Hybrid and other initiatives.

— Beto Yarce has started his tenure as director of the City of Seattle’s Office of Economic Development. Yarce joins the city from the U.S. Small Business Administration where he was regional administrator for the Pacific Northwest.
“I am incredibly honored by Mayor Wilson’s trust in me to lead OED and to help shape the economic ecosystems that make Seattle not only a great place to live, work, and play, but also the best place in the country to open, run, and grow a business,” Yarce said in a statement.
He earlier served as executive director of the Seattle nonprofit Ventures for more than eight years. The organization supports underserved entrepreneurs including women, people of color, immigrants and low-income individuals.
— Rob Lloyd, Seattle’s chief technology officer, will become executive director of the Center for Digital Government at the end of this month. The organization describes itself as “a national research and advisory institute on information technology policies and best practices in state and local government.”
“Looking forward to working with peers and leaders across the nation on solving the biggest challenges facing our communities, in smarter ways,” he said on LinkedIn.
Lloyd served as CTO for less than two years. Read more about his departure in earlier GeekWire coverage.
— Dan Rodgers is now chief financial officer for CTL, a Beaverton, Ore., company that manufactures Chromebooks, desktop PCs, servers and Google Meet video conferencing tools. Rodgers’ past roles include leadership at companies including PwC, McCormick and Schmick’s, Nike and New Seasons Market.
“CTL’s commitment to innovation and its dedication to sustainability present a unique opportunity to pair financial discipline with a mission-driven strategy,” Rodgers said in a statement.
— Scott Roberts, a longtime executive at LinkedIn where he is currently an AI product initiative advisor, has joined the board of directors for the San Francisco company Voices.
Tech
Sydney Opera House to be lit up by art created on iPad
Apple and the Sydney Opera House are collaborating on a series of creativity projects for young people, including the chance to have iPad-created art projected on the famous building.

How the new artwork will look when projected onto the Sydney Opera House — image credit: Apple
Just as it did for Christmas with its UK headquarters, Apple is inviting people to submit artwork made on the iPad to the Sydney Opera House. It’s part of a 12-month collaboration that will see Apple supporting arts programming, including a new international children’s festival later in 2026.
“For 50 years, Apple has been at the forefront of empowering creativity, providing tools that allow people to imagine, design, and share their unique visions with the world,” said Greg Joswiak, Apple’s senior vice president of Worldwide Marketing, in a statement. “We are thrilled to be working with such an iconic Australian cultural landmark to help inspire the next generation of creatives.”
Continue Reading on AppleInsider | Discuss on our Forums
Tech
Databricks built a RAG agent it says can handle every kind of enterprise search
Most enterprise RAG pipelines are optimized for one search behavior. They fail silently on the others. A model trained to synthesize cross-document reports handles constraint-driven entity search poorly. A model tuned for simple lookup tasks falls apart on multi-step reasoning over internal notes. Most teams find out when something breaks.
Databricks set out to fix that with KARL, short for Knowledge Agents via Reinforcement Learning. The company trained an agent across six distinct enterprise search behaviors simultaneously using a new reinforcement learning algorithm. The result, the company claims, is a model that matches Claude Opus 4.6 on a purpose-built benchmark at 33% lower cost per query and 47% lower latency, trained entirely on synthetic data the agent generated itself with no human labeling required. That comparison is based on KARLBench, which Databricks built to evaluate enterprise search behaviors.
“A lot of the big reinforcement learning wins that we’ve seen in the community in the past year have been on verifiable tasks where there is a right and a wrong answer,” Jonathan Frankle, Chief AI Scientist at Databricks, told VentureBeat in an exclusive interview. “The tasks that we’re working on for KARL, and that are just normal for most enterprises, are not strictly verifiable in that same way.”
Those tasks include synthesizing intelligence across product manager meeting notes, reconstructing competitive deal outcomes from fragmented customer records, answering questions about account history where no single document has the full answer and generating battle cards from unstructured internal data. None of those has a single correct answer that a system can check automatically.
“Doing reinforcement learning in a world where you don’t have a strict right and wrong answer, and figuring out how to guide the process and make sure reward hacking doesn’t happen — that’s really non-trivial,” Frankle said. “Very little of what companies do day to day on knowledge tasks are verifiable.”
The generalization trap in enterprise RAG
Standard RAG breaks down on ambiguous, multi-step queries drawing on fragmented internal data that was never designed to be queried.
To evaluate KARL, Databricks built the KARLBench benchmark to measure performance across six enterprise search behaviors: constraint-driven entity search, cross-document report synthesis, long-document traversal with tabular numerical reasoning, exhaustive entity retrieval, procedural reasoning over technical documentation and fact aggregation over internal company notes. That last task is PMBench, built from Databricks’ own product manager meeting notes — fragmented, ambiguous and unstructured in ways that frontier models handle poorly.
Training on any single task and testing on the others produces poor results. The KARL paper shows that multi-task RL generalizes in ways single-task training does not. The team trained KARL on synthetic data for two of the six tasks and found it performed well on all four it had never seen.
To build a competitive battle card for a financial services customer, for example, the agent has to identify relevant accounts, filter for recency, reconstruct past competitive deals and infer outcomes — none of which is labeled anywhere in the data.
Frankle calls what KARL does “grounded reasoning”: running a difficult reasoning chain while anchoring every step in retrieved facts. “You can think of this as RAG,” he said, “but like RAG plus plus plus plus plus plus, all the way up to 200 vector database calls.”
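The search-reason-search loop Frankle describes can be sketched as a minimal agent loop. Everything below — the `vector_search` and `llm` callables, their signatures, and the stopping logic — is a hypothetical illustration of the general pattern, not Databricks’ implementation:

```python
def grounded_answer(question, vector_search, llm, max_steps=200):
    """Illustrative agentic-RAG loop: the model alternates between
    issuing retrieval queries and reasoning over the accumulated
    evidence, anchoring each step in retrieved documents."""
    evidence = []
    query = question
    for _ in range(max_steps):
        docs = vector_search(query, top_k=5)      # one vector-DB call
        evidence.extend(docs)
        decision = llm(question=question, evidence=evidence)
        if decision["done"]:                      # model commits to an answer
            return decision["answer"]
        query = decision["next_query"]            # or refines its search
    # step budget exhausted: answer with whatever evidence exists
    return llm(question=question, evidence=evidence, force_answer=True)["answer"]
```

The cap of 200 iterations mirrors the scale of vector-database calls mentioned above; a trained agent decides for itself when to refine the query and when to commit.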
The RL engine: why OAPL matters
KARL’s training is powered by OAPL, short for Optimal Advantage-based Policy Optimization with Lagged Inference policy. It’s a new approach, developed jointly by researchers from Cornell, Databricks and Harvard and published in a separate paper the week before KARL.
Standard LLM reinforcement learning uses on-policy algorithms like GRPO (Group Relative Policy Optimization), which assume the model generating training data and the model being updated are in sync. In distributed training, they never are. Prior approaches corrected for this with importance sampling, introducing variance and instability. OAPL embraces the off-policy nature of distributed training instead, using a regression objective that stays stable with policy lags of more than 400 gradient steps, 100 times more off-policy than prior approaches handled. In code generation experiments, it matched a GRPO-trained model using roughly a third as many training samples.
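For orientation, the “group relative” part of GRPO computes each rollout’s advantage against the other rollouts sampled for the same prompt, which removes the need for a learned value baseline. A minimal sketch of that piece (the regression objective OAPL substitutes for the importance-weighted update is specified in the OAPL paper and not reproduced here):

```python
import statistics

def group_relative_advantages(rewards):
    """GRPO-style advantage: normalize each rollout's reward against
    the group of rollouts sampled for the same prompt. The group mean
    serves as the baseline instead of a learned value function."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0   # guard against zero spread
    return [(r - mean) / std for r in rewards]
```

Because the baseline comes from the sampled group itself, no separate value network has to be trained alongside the policy.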
OAPL’s sample efficiency is what keeps the training budget accessible. Reusing previously collected rollouts rather than requiring fresh on-policy data for every update meant the full KARL training run stayed within a few thousand GPU hours. That is the difference between a research project and something an enterprise team can realistically attempt.
Agents, memory and the context stack
There has been a lot of discussion in the industry in recent months about how RAG can be replaced with contextual memory, also sometimes referred to as agentic memory.
For Frankle, it’s not an either/or discussion, rather he sees it as a layered stack. A vector database with millions of entries sits at the base, which is too large for context. The LLM context window sits at the top. Between them, compression and caching layers are emerging that determine how much of what an agent has already learned it can carry forward.
For KARL, this is not abstract. Some KARLBench tasks required 200 sequential vector database queries, with the agent refining searches, verifying details and cross-referencing documents before committing to an answer, exhausting the context window many times over. Rather than training a separate summarization model, the team let KARL learn compression end-to-end through RL: when context grows too large, the agent compresses it and continues, with the only training signal being the reward at the end of the task. Removing that learned compression dropped accuracy on one benchmark from 57% to 39%.
“We just let the model figure out how to compress its own context,” Frankle said. “And this worked phenomenally well.”
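The compression behavior described above amounts to a budget check inside the agent loop. In this sketch the `count_tokens` and `compress` callables are hypothetical stand-ins; in KARL the compression policy is learned end-to-end via RL, with only the end-of-task reward as signal:

```python
def run_with_compression(steps, context_limit, count_tokens, compress):
    """Illustrative loop: accumulate retrieved context, and whenever it
    exceeds the window, replace it with a model-written summary and
    continue working."""
    context = []
    for chunk in steps:
        context.append(chunk)
        if count_tokens(context) > context_limit:
            context = [compress(context)]   # summary replaces raw context
    return context
```

The point of learning `compress` jointly with the search policy, rather than bolting on a separate summarizer, is that the agent keeps exactly the details its later steps will need; the reported drop from 57% to 39% accuracy without it suggests how much rides on that choice.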
Where KARL falls short
Frankle was candid about the failure modes. KARL struggles most on questions with significant ambiguity, where multiple valid answers exist and the model can’t determine whether the question is genuinely open-ended or just hard to answer. That judgment call is still an unsolved problem.
The model also exhibits what Frankle described as giving up early on some queries — stopping before producing a final answer. He pushed back on framing this as a failure, noting that the most expensive queries are typically the ones the model gets wrong anyway. Stopping is often the right call.
KARL was also trained and evaluated exclusively on vector search. Tasks requiring SQL queries, file search, or Python-based calculation are not yet in scope. Frankle said those capabilities are next on the roadmap, but they are not in the current system.
What this means for enterprise data teams
KARL surfaces three decisions worth revisiting for teams evaluating their retrieval infrastructure.
The first is pipeline architecture. If your RAG agent is optimized for one search behavior, the KARL results suggest it is failing on others. Multi-task training across diverse retrieval behaviors produces models that generalize. Narrow pipelines do not.
The second is why RL matters here — and it’s not just a training detail. Databricks tested the alternative: distilling from expert models via supervised fine-tuning. That approach improved in-distribution performance but produced negligible gains on tasks the model had never seen. RL developed general search behaviors that transferred. For enterprise teams facing heterogeneous data and unpredictable query types, that distinction is the whole game.
The third is what RL efficiency actually means in practice. A model trained to search better completes tasks in fewer steps, stops earlier on queries it cannot answer, diversifies its search rather than repeating failed queries, and compresses its own context rather than running out of room. The argument for training purpose-built search agents rather than routing everything through general-purpose frontier APIs is not primarily about cost. It is about building a model that knows how to do the job.