Tech

The best iPhone 17e pre-orders and plans in Australia for March 2026

Apple has had a big week full of product launches, and while the new MacBook Neo stole the show, the iPhone 17e was the first domino to fall on March 3.

The iPhone 17e, however, retains the notch at the top of the display and the camera array from the 16e, giving you just a single 48MP rear shooter and 12MP front-facing lens. It also doesn’t have the ProMotion display from the flagship range, meaning the 17e has the same 60Hz refresh rate as its predecessor.

Vodafone and Optus are offering pre-orders for Apple’s new budget handset, and I’ve perused all the offers from both telcos to find the best plans for different users.

Amazon, JB Hi-Fi and The Good Guys are offering pre-orders for the iPhone 17e, but there are no discounts at this time. Pairing the phone with one of our best SIM-only plans can, however, save you more money in the long run.

You can find all the retailer offers available below.

  • Apple AU: Trade in an eligible iPhone SE (2nd generation) or higher for credit towards an iPhone 17e, worth from AU$80 to AU$540
  • Amazon AU: All three colours in both 256GB and 512GB available at the world’s biggest retailer
  • JB Hi-Fi: Trade in any iPhone 11, 12, 13 and 14 range handset for a bonus gift card worth AU$150 to AU$615 to apply to your iPhone 17e purchase
  • The Good Guys: Purchase the 256GB models in all three colours


FLASH Radiotherapy’s Bold Approach to Cancer Treatment

Inside a cavernous hall at the Swiss-French border, the air hums with high voltage and possibility. From his perch on the wraparound observation deck, physicist Walter Wuensch surveys a multimillion-dollar array of accelerating cavities, klystrons, modulators, and pulse compressors—hardware being readied to drive a new generation of linear particle accelerators.

Wuensch has spent decades working with these machines to crack the deepest mysteries of the universe. Now he and his colleagues are aiming at a new target: cancer. Here at CERN (the European Organization for Nuclear Research) and other particle-physics labs, scientists and engineers are applying the tools of fundamental physics to develop a technique called FLASH radiotherapy that offers a radical and counterintuitive vision for treating the disease.

Photo of a white-haired man standing next to floor-to-ceiling experimental equipment with many tubes and wires.

CERN researcher Walter Wuensch says the particle physics lab’s work on FLASH radiotherapy is “generating a lot of excitement.”

CERN

Radiation therapy has been a cornerstone of cancer treatment since shortly after Wilhelm Conrad Röntgen discovered X-rays in 1895. Today, more than half of all cancer patients receive it as part of their care, typically in relatively low doses of X-rays delivered over dozens of sessions. Although this approach often kills the tumor, it also wreaks havoc on nearby healthy tissue. Even with modern precision targeting, the potential for collateral damage limits how much radiation doctors can safely deliver.

FLASH radiotherapy flips the conventional approach on its head, delivering a single dose of ultrahigh-power radiation in a burst that typically lasts less than one-tenth of a second. In study after study, this technique causes significantly less injury to normal tissue than conventional radiation does, without compromising its antitumor effect.

At CERN, which I visited last July, the approach is being tested and refined on accelerators that were never intended for medicine. If ongoing experiments here and around the world continue to bear out results, FLASH could transform radiotherapy—delivering stronger treatments, fewer side effects, and broader access to lifesaving care.

“It’s generating a lot of excitement,” says Wuensch, a researcher at CERN’s Linear Electron Accelerator for Research (CLEAR) facility. “We accelerator people are thinking, Oh, wow, here’s an application of our technology that has a societal impact which is more immediate than most high-energy physics.”

The Unlikely Birth of FLASH Therapy

The breakthrough that led to FLASH emerged from a line of experiments that began in the 1990s at Institut Curie in Orsay, near Paris. Researcher Vincent Favaudon was using a low-energy electron accelerator to study radiation chemistry. Targeting the accelerator at mouse lungs, Favaudon expected the radiation to produce scar tissue, or fibrosis. But when he exposed the lungs to ultrafast blasts of radiation, at doses a thousand times as high as what’s used in conventional radiation therapy, the expected fibrosis never appeared.

Puzzled, Favaudon turned to Marie-Catherine Vozenin, a radiation biologist at Curie who specialized in radiation-induced fibrosis. “When I looked at the slides, there was indeed no fibrosis, which was very, very surprising for this type of dose,” recalls Vozenin, who now works at Geneva University Hospitals, in Switzerland.

The pair expanded the experiments to include cancerous tumors. The results upended a long-held trade-off of radiotherapy: the idea that you can’t destroy a tumor without also damaging the host. “This differential effect is really what we want in radiation oncology, not damaging normal tissue but killing the tumors,” Vozenin says.

They repeated the protocol across different types of tissue and tumors. By 2014, they had gathered enough evidence to publish their findings in Science Translational Medicine. Their experiments confirmed that delivering an ultrahigh dose of 10 gray or more in less than a tenth of a second could eradicate tumors in mice while leaving surrounding healthy tissue virtually unharmed. For comparison, a typical chest X-ray delivers about 0.1 milligray, while a session of conventional radiation therapy might deliver a total of about 2 gray per day. (The authors called the effect “FLASH” because of the quick, high doses involved, but it’s not an acronym.)
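The dose and timing figures above imply an enormous jump in dose rate. Here is a rough back-of-the-envelope comparison in Python; the one minute of beam time assumed for a conventional session is an illustrative figure, not one from the article:

```python
# Dose rates implied by the article's figures (illustrative sketch only).
# The gray (Gy) is the SI unit of absorbed dose: 1 joule per kilogram.

# FLASH: 10 Gy or more in less than a tenth of a second
flash_dose = 10.0        # Gy
flash_duration = 0.1     # s
flash_dose_rate = flash_dose / flash_duration   # Gy/s

# Conventional radiotherapy: ~2 Gy per session; the ~1 minute of beam
# time per session is an assumed, typical figure
conv_dose = 2.0          # Gy
conv_duration = 60.0     # s (assumption)
conv_dose_rate = conv_dose / conv_duration      # Gy/s

print(f"FLASH dose rate:        {flash_dose_rate:.0f} Gy/s")
print(f"Conventional dose rate: {conv_dose_rate:.3f} Gy/s")
print(f"Ratio: about {flash_dose_rate / conv_dose_rate:.0f}x")

# For scale: a chest X-ray at ~0.1 mGy is about 20,000 times smaller
# than even a single 2 Gy conventional session.
chest_xray = 0.1e-3      # Gy
print(f"Session-to-X-ray dose ratio: {conv_dose / chest_xray:.0f}")
```

By this arithmetic, FLASH delivers dose roughly three thousand times as fast as a conventional session, which is why standard dosimetry hardware struggles to keep up.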

Three sets of images comparing highly magnified tissue samples.

Although many cancer experts were skeptical about the FLASH effect on healthy tissue when it was first announced in 2014, numerous studies have since confirmed and expanded on those results. In a 2020 paper, a lung tissue sample taken 4 months after being exposed to conventional radiotherapy [center] shows many more dark spots indicating scarring than a sample exposed to FLASH [right]. The nonirradiated sample [left] is the control.

Vincent Favaudon/American Association for Cancer Research

Many cancer experts were skeptical. The FLASH effect seemed almost too good to be true. “It didn’t get a lot of traction at first,” recalls Billy Loo, a Stanford radiation oncologist specializing in lung cancer. “They described a phenomenon that ran counter to decades of established radiobiology dogma.”

But in the years since then, researchers have observed the effect across a wide range of tumor types and animals—beyond mice to zebra fish, fruit flies, and even a few human subjects, with the same protective effect in the brain, lungs, skin, muscle, heart, and bone.

Why this happens remains a mystery. “We have investigated a lot of hypotheses, and all of them have been wrong,” says Vozenin. Currently, the most plausible theory emerging from her team’s research points to metabolism: Healthy and cancerous cells may process reactive oxygen species—unstable oxygen-containing molecules generated during radiation—in very different ways.

Adapting Accelerators for FLASH

At the time of the first FLASH publication, Loo and his team at Stanford were also focused on dramatically speeding up radiation delivery. But Loo wasn’t chasing a radiobiological breakthrough. He was trying to solve a different problem: motion.

“The tumors that we treat are always moving targets,” he says. “That’s particularly true in the lung, where because of breathing motion, the tumors are constantly moving.”

To bring FLASH therapy out of the lab and into clinical use, researchers like Vozenin and Loo needed machines capable of delivering fast, high doses with pinpoint precision deep inside the body. Most early studies relied on low-energy electron beams like Favaudon’s 4.5-megaelectron-volt Kinetron—sufficient for surface tumors, but unable to reach more than a few centimeters into a human body. Treating deep-seated cancers in the lung, brain, or abdomen would require far higher particle energies.

Photo of floor-to-ceiling electromagnetic hardware with many tubes and pipes, some of which is copper-colored.

At CERN, researchers working on FLASH are developing this hardware to boost electrons to ultrahigh power within a short distance.

CERN

They also needed an alternative to conventional X-rays. In a clinical linac, X-ray photons are produced by dumping high-energy electrons into a bremsstrahlung target, which is made of a material with a high atomic number, like tungsten or copper. The target slows the electrons, converting their kinetic energy into X-ray photons. It’s an inherently inefficient process that wastes most of the beam power as heat and makes it extremely difficult to reach the ultrahigh dose rates required for FLASH. High-energy electrons, by contrast, can be switched on and off within milliseconds. And because they have a charge and can be steered by magnets, electrons can be precisely guided to reach tumors deep within the body. (Researchers are also investigating protons and carbon ions; see the sidebar, “What’s the Best Particle for FLASH Therapy?”)

Loo turned to the SLAC National Accelerator Laboratory in Menlo Park, Calif., where physicist Sami Gamal-Eldin Tantawi was redefining how electromagnetic waves move through linear accelerators. Tantawi’s findings allowed scientists to precisely control how energy is delivered to particles—paving the way for compact, efficient, and finely tunable machines. It was exactly the kind of technology FLASH therapy would need to target tumors deep inside the body.

Meanwhile, Vozenin and other European researchers turned to CERN, best known for its 27-kilometer Large Hadron Collider (LHC) and the 2012 discovery of the Higgs boson, the “God particle” that gives other particles their mass.

CERN is also home to a range of smaller linear accelerators—including CLEAR, where Wuensch and his team are adapting high-energy physics tools for medicine.

Unlike the LHC, which loops particles around a massive ring to build up energy before smashing them together, linear accelerators like CLEAR send particles along a straight, one-time path. That setup allows for greater precision and compactness, making it ideal for applications like FLASH.

At the heart of the CLEAR facility, Wuensch points out the 200-MeV linear accelerator with its 20-meter beamline. This is “a playground of creativity,” he says, for the physicists and engineers who arrive from all over the world to run experiments.

The process begins when a laser pulse hits a photocathode, releasing a burst of electrons that form the initial beam. These electrons travel through a series of precisely machined copper cavities, where high-frequency microwaves push them forward. The electrons then move through a network of magnets, monitors, and focusing elements that shape and steer them toward the experimental target with submillimeter precision.

Instead of a continuous stream, the electron beam is divided into nanosecond-long bunches—billions of electrons riding the radio-frequency field like surfers. Inside the accelerator’s cavities, the field flips polarity 12 billion times per second, so timing is everything: Only electrons that arrive perfectly in phase with the accelerating wave will gain energy. That process repeats through a chain of cavities, each giving the bunches another push, until the beam reaches its final energy of 200 MeV.
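To get a feel for the timing tolerances involved, a short sketch: at 12 GHz the field reverses every roughly 83 picoseconds, so a bunch must hit a narrow slice of each cycle. The 10 percent phase window used below is an illustrative assumption, not a CLEAR specification:

```python
# Timing budget for electron bunches riding a 12 GHz RF field.
# The 10% phase-window fraction below is an illustrative assumption.

RF_FREQUENCY_HZ = 12e9               # field flips polarity 12 billion times/s

rf_period_s = 1.0 / RF_FREQUENCY_HZ  # duration of one full RF cycle
rf_period_ps = rf_period_s * 1e12    # ~83 ps

# Suppose a bunch must arrive within 10% of a period of the wave crest
# to be efficiently accelerated:
phase_window_ps = 0.1 * rf_period_ps # ~8 ps

print(f"RF period:              {rf_period_ps:.1f} ps")
print(f"Assumed arrival window: {phase_window_ps:.1f} ps")
```

Under that assumption a bunch has only a few picoseconds of slack at each cavity, which is why the laser, photocathode, and RF system must be synchronized so tightly.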

Close-up photo of an etched copper disc being held under a microscope by a gloved hand.

Physicist Marçà Boronat inspects one of the high-precision components used to accelerate the electrons for FLASH radiotherapy.

CERN

Much of this architecture draws directly from the Compact Linear Collider study, a decades-long CERN project aimed at building a next-generation collider. The proposed CLIC machine would stretch 11 kilometers and collide electrons and positrons at 380 gigaelectron volts. To do that in a linear configuration, without the multiple passes around a ring that the LHC relies on, CERN engineers have had to push for extremely high acceleration gradients, up to 100 megavolts per meter, to boost the electrons to high energies over relatively short distances.

Wuensch leads me to a large experimental hall housing prototype structures from the CLIC effort, and points out the microwave devices that now help drive FLASH research. Though the future of CLIC as a collider remains uncertain, its infrastructure is already yielding dividends: smaller, high-gradient accelerators that may one day be as suited for curing cancer as they are for smashing particles.

The power behind the high gradients comes from CERN’s Xboxes, the X-band RF systems that dominate the experimental hall. Each Xbox houses a klystron, modulator, pulse compressor, and waveguide network to generate and shape the microwave pulses. The pulse compressors store energy in resonant cavities and then release it in a microsecond burst, producing peaks of up to 200 megawatts; sustained continuously, that would be enough to power at least 40,000 homes. The Xboxes let researchers fine-tune the power, timing, and pulse shape.
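Those figures are easy to sanity-check. A quick sketch, taking the microsecond burst and the 200 MW peak at face value (the per-home wattage simply follows from dividing the article’s two numbers):

```python
# Sanity check on the Xbox pulse-compressor figures from the article.

peak_power_w = 200e6     # 200 MW peak during the burst
burst_length_s = 1e-6    # "a microsecond burst"

# Energy delivered in a single compressed pulse:
energy_per_pulse_j = peak_power_w * burst_length_s
print(f"Energy per pulse: {energy_per_pulse_j:.0f} J")

# "Enough to power at least 40,000 homes" if that peak were continuous:
homes = 40_000
watts_per_home = peak_power_w / homes
print(f"Implied continuous draw per home: {watts_per_home:.0f} W")
```

So each microsecond burst carries about 200 joules, and the 40,000-homes comparison implies roughly 5 kilowatts per household, a plausible peak-demand figure.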

According to Wuensch, many of the recent accelerator developments were enabled by advances in computer simulation and high-precision three-dimensional machining. These tools allow the team to iterate quickly, designing new accelerator components and improving beam control with each generation.

Still, real-world challenges remain. The power demands are formidable, as are the space requirements; for all the talk of its “compact” design, the original CLIC was meant to span kilometers. Obviously, a hospital needs something that’s actually compact.

“A big challenge of the project,” says Wuensch, “is to transform this kind of technology and these kinds of components into something that you can imagine installing in a hospital, and it will run every day reliably.”

To that end, CERN researchers have teamed up with the Lausanne University Hospital (known by its French acronym, CHUV) and the French medical technology company Theryq to design a hospital system that can treat large, deep-seated tumors on the very short time scales FLASH requires, scaled down to fit in a clinical setting.

Theryq’s Approach to FLASH

Theryq’s research center and factory are located in southern France, near the base of Montagne Sainte-Victoire, a jagged spine of limestone that Paul Cézanne painted dozens of times, capturing its shifting light and form.

“The solution that we are trying to develop here is something which is extremely versatile,” says Ludovic Le Meunier, CEO of the expanding company. “The ultimate goal is to be able to treat any solid tumor anywhere in the body, which is about 90 percent of the cancer these days.”

Futuristic scientific equipment setup, featuring streamlined machinery and intricate components.

Theryq’s FLASHDEEP system, under development with CERN and the company’s clinical partners, has a 13.5-meter-long, 140-MeV linear accelerator. That’s strong enough to treat tumors at depths of up to about 20 centimeters in the body. The patient will remain in a supported standing position during the split-second irradiation.

THERYQ

Theryq’s push to bring FLASH radiotherapy from the lab to the clinic has followed a three-pronged rollout, with each device engineered for a specific depth and clinical use. The first machine, FLASHKNiFE, was unveiled in 2020. Designed for superficial tumors and intraoperative use, the system delivers electron beams at 6 or 9 MeV. A prototype installed that same year at CHUV is conducting a phase-two trial for patients with localized skin cancer.

More recently, Theryq launched FLASHLAB, a compact, 7-MeV platform for radiobiology research.

The company’s most ambitious system, FLASHDEEP, is still under development. The 13.5-meter-long electron source will deliver very high-energy electrons of as much as 140 MeV up to 20 centimeters inside the body in less than 100 milliseconds. An integrated CT scanner, built into a patient-positioning system developed by Leo Cancer Care, captures images that stream directly into the treatment-planning software, enabling precise calculation of the radiation dose. “Before we actually trigger the beam or the treatment, we make stereo images to verify at the very last second that the tumor is exactly where it should be,” says Theryq technical manager Philippe Liger.

FLASH Therapy Moves to Animal Tests

While CERN’s CLEAR accelerator has been instrumental in characterizing FLASH parameters, researchers seeking to study FLASH in living organisms must look elsewhere: CERN doesn’t allow animal experiments on-site. That’s one reason why a growing number of scientists are turning to PITZ, the Photo Injector Test Facility in Zeuthen, a leafy lakeside suburb of Berlin.

PITZ is part of Germany’s national accelerator lab and is responsible for developing the electron source for the European X-ray Free-Electron Laser. Now PITZ is emerging as a hub for FLASH research, with an unusually tunable accelerator and a dedicated biomedical lab to ensure controlled conditions for preclinical studies.

A photo showing a row of experimental electronic equipment on racks

A photo of a close-up of a gloved hand holding a sample of a purple liquid above a piece of equipment.

At Germany’s Photo Injector Test Facility in Zeuthen (PITZ), the electron-beam accelerator [top] is used to irradiate biological targets in early-stage animal tests of FLASH radiotherapy [bottom].

Top: Frieder Mueller; Bottom: MWFK

“The biggest advantage of our facility is that we can do a very stepwise, very defined and systematic study of dose rates,” says Anna Grebinyk, a biochemist who heads the new biomedical lab, “and systematically optimize the FLASH effect to see where it gets the best properties.”

The experiments begin with zebra-fish embryos, prized for early-stage studies because they’re transparent and develop rapidly. After the embryo studies, researchers test the most promising parameters in mice. To do that, the PITZ team uses a small-animal radiation research platform, complete with CT imaging and a robotic positioning system adapted from CERN’s CLEAR facility.

What sets PITZ apart is the flexibility of its beamline. The 30-meter accelerator system steers electrons with micrometer precision, producing electron bunches with exceptionally high brightness and low emittance, a key measure of beam quality. “We can dial in any distribution of bunches we want,” says Frank Stephan, group leader at PITZ. “That gives us tremendous control over time structure.”

Timing matters. At PITZ, the laser-struck photocathode generates electron bunches that are accelerated immediately, at up to 60 million volts per meter. A fast electromagnetic kicker system acts as a high-speed gatekeeper, selectively deflecting individual electron bunches from a high-repetition beam and steering them according to researchers’ needs. This precise, bunch-by-bunch control is essential for fine-tuning beam properties for FLASH experiments and other radiation therapy studies.

“The idea is to make the complete treatment within one millisecond,” says Stephan. “But of course, you have to [trust] that within this millisecond, everything works fine. There is not a chance to stop [during] this millisecond. It has to work.”

Regulating the dose remains one of the biggest technical hurdles in FLASH. The ionization chambers used in standard radiotherapy can’t respond accurately when dose rates spike hundreds of times higher in a matter of microseconds. So researchers are developing new detector systems to precisely measure these bursts and keep pace with the extreme speed of FLASH delivery.

FLASH as a Research Tool

Beyond its therapeutic potential, FLASH may also open new windows to illuminate cancer biology. “What is really, really superinteresting, in my opinion,” says Vozenin, “is that we can use FLASH as a tool to understand the difference between normal tissue and tumors. There must be something we’re not aware of that really distinguishes the two—and FLASH can help us find it.” Identifying those differences, she says, could lead to entirely new interventions, not just with radiation, but also with drugs.

Vozenin’s team is currently testing a hypothesis involving long-lived proteins present in healthy tissue but absent in tumors. If those proteins prove to be key, she says, “we’re going to find a way to manipulate them—and perhaps reverse the phenomenon, even [turn] a tumor back into a normal tissue.”

Proponents of FLASH believe it could help close the cancer care gap worldwide; in low-income countries, only about 10 percent of patients have access to radiotherapy, and in middle-income countries, only about 60 percent of patients do, according to the International Atomic Energy Agency. Because FLASH treatment can often be delivered in a single brief session, it could spare patients from traveling long distances for weeks of treatment and allow clinics to treat many more people.

High-income countries stand to benefit as well. Fewer sessions mean lower costs, less strain on radiotherapy facilities, and fewer side effects and disruptions for patients.

The big question now is, How long will it take? Researchers I spoke with estimate that FLASH could become a routine clinical option in about 10 years—after the completion of remaining preclinical studies and multiphase human trials, and as machines become more compact, affordable, and efficient. Much of the momentum comes from a growing field of startups competing to build devices, but the broader scientific community remains remarkably open and collaborative.

“Everyone has a relative who knows about cancer because of their own experience,” says Stephan. “My mother died of it. In the end, we want to do something good for mankind. That’s why people work together.”

This article appears in the March 2026 print issue.


London doctor carries out remote robot surgery on cancer patient 1,500 miles away

The milestone procedure saw Professor Prokar Dasgupta, based at The London Clinic’s robotic center in Harley Street, operate on 62-year-old patient Paul Buxton, who was in St Bernard’s Hospital in Gibraltar, a British overseas territory at the southern tip of the Iberian Peninsula.

Tech Moves: Amperity and Siteimprove name CMOs; AWS director departs; Gong’s new exec

Bridget Perry. (LinkedIn Photo)

Amperity, a Seattle-based startup that helps companies collect and manage customer data, named Bridget Perry as chief marketing officer.

Earlier in her career, Perry was a marketing director at Microsoft for nearly nine years and spent more than eight years at Adobe, where her last role was CMO for Europe, the Middle East and Africa. She was most recently interim CMO for Later, an influencer marketing company, and has held strategic advisor roles.

“Bridget has led marketing teams through real platform shifts, not incremental change. She knows what it takes to build credibility in a market and scale it globally,” said Tony Owens, CEO of Amperity, in a statement. The company is ranked No. 39 on GeekWire 200, our list of top Pacific Northwest startups.

Simon Frey. (LinkedIn Photo)

— Seattle-based Simon Frey was promoted to chief customer officer of Gong. He was previously senior vice president of customer outcomes for the San Francisco startup that builds agentic AI technology to optimize revenue performance and automate workflows.

“Simon has spent years partnering closely with our customers, helping them unlock meaningful growth across their revenue organizations,” said Shane Evans, Gong’s chief revenue architect, in a statement.

Frey joined Gong in 2024 after leaving TaxBit, where he was VP of revenue. Other past employers include Qualtrics and McKinsey. He also served as an advisor to Jargon, which was acquired by Remitly.

Elizabeth Scallon, Find Ventures co-founder and board chair. (LinkedIn Photo)

Elizabeth Scallon is now director of healthcare AI startups at Nvidia where she will oversee its global Healthcare and Life Sciences Inception program. Scallon, a longtime leader in Seattle’s startup ecosystem, joins Nvidia from HP where she worked for nearly four years as director of technical and business incubation and strategy.

Scallon is also an affiliate instructor at the University of Washington and has held leadership roles at Amazon and WeWork. She was director of the UW’s CoMotion Labs for five years and co-founded Find Ventures.

“With this role, I’m returning to my roots in biotech and genetics and bringing the skills, experience, and connections I’ve built along the way to do my life’s work,” Scallon said on LinkedIn.

Jenny Brinkley. (LinkedIn Photo)

— After nearly a decade at Amazon Web Services, Jenny Brinkley is resigning as director of security readiness.

“I start a new role next week in a rapidly growing space, and I am excited to be part of something transformative once again. To my AWS colleagues, thank you for the kind words and support,” Brinkley said on LinkedIn.

Brinkley, who is based in Portland, Ore., earlier co-founded an AI startup and ran a consultancy.

— Siteimprove named Jen Jones as its chief marketing officer. The company, which helps businesses improve their website functionality, is based in Denmark and has an office in Bellevue, Wash., where much of its executive leadership team is based. Jones was previously at commercetools.

Padmashree Koneti has departed her role as chief product officer of Yoodli after roughly five months. The Seattle startup has not yet named a replacement. Yoodli, which is using generative AI to analyze speech and offer tips for improving communication skills, also just hired Alexandra Breymeier as customer success lead. She previously worked at employee referral company ERIN.

Vandana Shah. (LinkedIn Photo)

Vandana Shah is now vice president of product for Scowtt, a Kirkland, Wash.-based startup that wants to reshape how advertisers optimize paid campaigns. The company in December announced a $12 million Series A funding round.

Shah joins Scowtt from Ladder. She was previously Google’s director of product management for the advertising platform, working at the Bay Area company for more than 16 years.

“Having spent years leading complex platform initiatives at Google Ads, I have seen the power of building resilient, customer-first foundations at scale. I am thrilled to bring that experience to Scowtt,” she said on LinkedIn.

Dinesh Govindasamy. (LinkedIn Photo)

— Dinesh Govindasamy was promoted to director of engineering at Meta, supporting teams across Tupperware, Public Cloud and Meta Kubernetes Service. Govindasamy joined Meta in October 2023.

“This milestone is thanks to the mentors, collaborators, and teams who believed in me and pushed me to grow. You know who you are — thank you,” he said on LinkedIn.

Govindasamy, based in the Seattle area, was previously at Microsoft for more than 15 years, leaving the role of group engineering manager in which he led teams working on Azure Kubernetes Service Hybrid and other initiatives.

Beto Yarce. (City of Seattle Photo)

Beto Yarce has started his tenure as director of the City of Seattle’s Office of Economic Development. Yarce joins the city from the U.S. Small Business Administration where he was regional administrator for the Pacific Northwest.

“I am incredibly honored by Mayor Wilson’s trust in me to lead OED and to help shape the economic ecosystems that make Seattle not only a great place to live, work, and play, but also the best place in the country to open, run, and grow a business,” Yarce said in a statement.

He earlier served as executive director of the Seattle nonprofit Ventures for more than eight years. The organization supports underserved entrepreneurs including women, people of color, immigrants and low-income individuals.

Rob Lloyd, Seattle’s chief technology officer, will become executive director of the Center for Digital Government at the end of this month. The organization describes itself as “a national research and advisory institute on information technology policies and best practices in state and local government.”

“Looking forward to working with peers and leaders across the nation on solving the biggest challenges facing our communities, in smarter ways,” he said on LinkedIn.

Lloyd served as CTO for less than two years. Read more about his departure in earlier GeekWire coverage.

Dan Rodgers is now chief financial officer for CTL, a Beaverton, Ore., company that manufactures Chromebooks, desktop PCs, servers and Google Meet video conferencing tools. Rodgers previously held leadership roles at companies including PwC, McCormick & Schmick’s, Nike and New Seasons Market.

“CTL’s commitment to innovation and its dedication to sustainability present a unique opportunity to pair financial discipline with a mission-driven strategy,” Rodgers said in a statement.

Scott Roberts, a longtime executive at LinkedIn where he is currently an AI product initiative advisor, has joined the board of directors for the San Francisco company Voices.


Sydney Opera House to be lit up by art created on iPad

Apple and the Sydney Opera House are collaborating on a series of creativity projects for young people, including the chance to have iPad-created art projected on the famous building.

Sydney Opera House sails lit with vivid rainbow graffiti-style projections, sweeping blue and orange strokes over neon greens, pinks, and yellows, against a black night sky and outlined forecourt
How the new artwork will look when projected onto the Sydney Opera House — image credit: Apple

Just as it did for Christmas with its UK headquarters, Apple is inviting people to submit artwork created on the iPad to the Sydney Opera House. It’s part of a 12-month collaboration that will see Apple supporting arts programming, including a new international children’s festival later in 2026.
“For 50 years, Apple has been at the forefront of empowering creativity, providing tools that allow people to imagine, design, and share their unique visions with the world,” said Greg Joswiak, Apple’s senior vice president of Worldwide Marketing, in a statement. “We are thrilled to be working with such an iconic Australian cultural landmark to help inspire the next generation of creatives.”

Databricks built a RAG agent it says can handle every kind of enterprise search

Most enterprise RAG pipelines are optimized for one search behavior. They fail silently on the others. A model trained to synthesize cross-document reports handles constraint-driven entity search poorly. A model tuned for simple lookup tasks falls apart on multi-step reasoning over internal notes. Most teams find out when something breaks.

Databricks set out to fix that with KARL, short for Knowledge Agents via Reinforcement Learning. The company trained an agent across six distinct enterprise search behaviors simultaneously using a new reinforcement learning algorithm. The result, the company claims, is a model that matches Claude Opus 4.6 on a purpose-built benchmark at 33% lower cost per query and 47% lower latency, trained entirely on synthetic data the agent generated itself with no human labeling required. That comparison is based on KARLBench, which Databricks built to evaluate enterprise search behaviors.

“A lot of the big reinforcement learning wins that we’ve seen in the community in the past year have been on verifiable tasks where there is a right and a wrong answer,” Jonathan Frankle, Chief AI Scientist at Databricks, told VentureBeat in an exclusive interview. “The tasks that we’re working on for KARL, and that are just normal for most enterprises, are not strictly verifiable in that same way.”

Those tasks include synthesizing intelligence across product manager meeting notes, reconstructing competitive deal outcomes from fragmented customer records, answering questions about account history where no single document has the full answer and generating battle cards from unstructured internal data. None of those has a single correct answer that a system can check automatically.

“Doing reinforcement learning in a world where you don’t have a strict right and wrong answer, and figuring out how to guide the process and make sure reward hacking doesn’t happen — that’s really non-trivial,” Frankle said. “Very little of what companies do day to day on knowledge tasks are verifiable.”

The generalization trap in enterprise RAG

Standard RAG breaks down on ambiguous, multi-step queries drawing on fragmented internal data that was never designed to be queried.

To evaluate KARL, Databricks built the KARLBench benchmark to measure performance across six enterprise search behaviors: constraint-driven entity search, cross-document report synthesis, long-document traversal with tabular numerical reasoning, exhaustive entity retrieval, procedural reasoning over technical documentation and fact aggregation over internal company notes. That last task is PMBench, built from Databricks’ own product manager meeting notes — fragmented, ambiguous and unstructured in ways that frontier models handle poorly.

Training on any single task and testing on the others produces poor results. The KARL paper shows that multi-task RL generalizes in ways single-task training does not. The team trained KARL on synthetic data for two of the six tasks and found it performed well on all four it had never seen.

To build a competitive battle card for a financial services customer, for example, the agent has to identify relevant accounts, filter for recency, reconstruct past competitive deals and infer outcomes — none of which is labeled anywhere in the data.

Frankle calls what KARL does “grounded reasoning”: running a difficult reasoning chain while anchoring every step in retrieved facts. “You can think of this as RAG,” he said, “but like RAG plus plus plus plus plus plus, all the way up to 200 vector database calls.”

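The loop Frankle describes (query, check the evidence, refine, repeat, answer only when grounded) can be sketched in a few lines. This is purely illustrative: `vector_search` and the three callbacks are hypothetical stand-ins for a real vector database and for the policy KARL learns through RL rather than hand-coding.

```python
# Illustrative "grounded reasoning" search loop: keep querying a vector
# store, accumulate evidence, and either refine the query or commit to
# an answer anchored in what was retrieved. KARL learns this policy via
# RL; here the decision rules are hand-coded placeholders.

MAX_CALLS = 200  # some KARLBench tasks required up to ~200 sequential queries

def vector_search(query, k=5):
    # Placeholder for a real vector-database call.
    return [f"doc snippet for: {query} #{i}" for i in range(k)]

def grounded_search(question, enough_evidence, refine, answer):
    evidence, query = [], question
    for _ in range(MAX_CALLS):
        evidence.extend(vector_search(query))
        if enough_evidence(evidence):
            return answer(question, evidence)  # every claim anchored in retrieved facts
        query = refine(question, evidence)     # e.g. add constraints, change keywords
    return None  # stopping without an answer is sometimes the right call

result = grounded_search(
    "Which competitor won the Acme renewal?",
    enough_evidence=lambda ev: len(ev) >= 10,
    refine=lambda q, ev: q + " renewal outcome",
    answer=lambda q, ev: f"answer grounded in {len(ev)} snippets",
)
```

The interesting part in KARL is that the equivalents of `enough_evidence` and `refine` are learned behaviors, shaped only by the reward at the end of the task.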
The RL engine: why OAPL matters

KARL’s training is powered by OAPL, short for Optimal Advantage-based Policy Optimization with Lagged Inference policy. It’s a new approach, developed jointly by researchers from Cornell, Databricks and Harvard and published in a separate paper the week before KARL.

Standard LLM reinforcement learning uses on-policy algorithms like GRPO (Group Relative Policy Optimization), which assume the model generating training data and the model being updated are in sync. In distributed training, they never are. Prior approaches corrected for this with importance sampling, introducing variance and instability. OAPL embraces the off-policy nature of distributed training instead, using a regression objective that stays stable with policy lags of more than 400 gradient steps, 100 times more off-policy than prior approaches handled. In code generation experiments, it matched a GRPO-trained model using roughly three times fewer training samples.

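The instability OAPL sidesteps is easy to demonstrate numerically. The toy below is not OAPL's objective; it only illustrates why importance-sampling corrections degrade as the rollout policy lags behind the learner: the variance of the weights explodes as the two policies drift apart.

```python
import random

random.seed(0)

def importance_ratios(p_old, p_new, n=10_000):
    """Sample actions from the lagged (old) policy and compute the
    importance weights p_new/p_old that an importance-sampling
    correction would multiply into the loss."""
    actions = random.choices(range(len(p_old)), weights=p_old, k=n)
    return [p_new[a] / p_old[a] for a in actions]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

p_old = [0.25, 0.25, 0.25, 0.25]            # lagged inference policy
slightly_lagged = [0.30, 0.25, 0.25, 0.20]  # learner barely ahead
very_lagged = [0.97, 0.01, 0.01, 0.01]      # learner far ahead (large policy lag)

small = importance_ratios(p_old, slightly_lagged)
large = importance_ratios(p_old, very_lagged)

# Weight variance blows up as the policies diverge, destabilizing the
# gradient estimate; a regression-style objective that never forms
# these ratios stays bounded regardless of the lag.
print(variance(small), variance(large))
```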
OAPL’s sample efficiency is what keeps the training budget accessible. Reusing previously collected rollouts rather than requiring fresh on-policy data for every update meant the full KARL training run stayed within a few thousand GPU hours. That is the difference between a research project and something an enterprise team can realistically attempt.

Agents, memory and the context stack

There has been a lot of discussion in the industry in recent months about how RAG can be replaced with contextual memory, also sometimes referred to as agentic memory.

For Frankle, it’s not an either/or question; rather, he sees it as a layered stack. A vector database with millions of entries, far too large to fit in a context window, sits at the base. The LLM context window sits at the top. Between them, compression and caching layers are emerging that determine how much of what an agent has already learned it can carry forward.

For KARL, this is not abstract. Some KARLBench tasks required 200 sequential vector database queries, with the agent refining searches, verifying details and cross-referencing documents before committing to an answer, exhausting the context window many times over. Rather than training a separate summarization model, the team let KARL learn compression end-to-end through RL: when context grows too large, the agent compresses it and continues, with the only training signal being the reward at the end of the task. Removing that learned compression dropped accuracy on one benchmark from 57% to 39%.

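The control flow is simple even though the compression policy itself is learned. A minimal sketch, with a hand-coded stand-in for the compressor that KARL learns end-to-end:

```python
# Compress-and-continue loop. KARL *learns* when and how to compress via
# the end-of-task reward alone; here the compressor is a trivial
# placeholder that folds older snippets into a one-line summary so the
# working context stays under budget.

BUDGET = 40  # pretend token budget (counted in words, for simplicity)

def compress(context):
    # Placeholder: a learned model would decide what to keep.
    kept = context[-2:]  # keep the most recent evidence
    summary = f"[summary of {len(context) - 2} earlier snippets]"
    return [summary] + kept

def run_agent(snippets):
    context = []
    for s in snippets:
        context.append(s)
        if sum(len(c.split()) for c in context) > BUDGET:
            context = compress(context)
    return context

ctx = run_agent([f"snippet {i} with several words of evidence here" for i in range(20)])
```

However many snippets arrive, the context stays bounded, and a trace of what was compressed away survives in the summary line.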
“We just let the model figure out how to compress its own context,” Frankle said. “And this worked phenomenally well.”

Where KARL falls short

Frankle was candid about the failure modes. KARL struggles most on questions with significant ambiguity, where multiple valid answers exist and the model can’t determine whether the question is genuinely open-ended or just hard to answer. That judgment call is still an unsolved problem.

The model also exhibits what Frankle described as giving up early on some queries — stopping before producing a final answer. He pushed back on framing this as a failure, noting that the most expensive queries are typically the ones the model gets wrong anyway. Stopping is often the right call.

KARL was also trained and evaluated exclusively on vector search. Tasks requiring SQL queries, file search, or Python-based calculation are not yet in scope. Frankle said those capabilities are next on the roadmap, but they are not in the current system.

What this means for enterprise data teams

KARL surfaces three decisions worth revisiting for teams evaluating their retrieval infrastructure.

The first is pipeline architecture. If your RAG agent is optimized for one search behavior, the KARL results suggest it is failing on others. Multi-task training across diverse retrieval behaviors produces models that generalize. Narrow pipelines do not.

The second is why RL matters here — and it’s not just a training detail. Databricks tested the alternative: distilling from expert models via supervised fine-tuning. That approach improved in-distribution performance but produced negligible gains on tasks the model had never seen. RL developed general search behaviors that transferred. For enterprise teams facing heterogeneous data and unpredictable query types, that distinction is the whole game.

The third is what RL efficiency actually means in practice. A model trained to search better completes tasks in fewer steps, stops earlier on queries it cannot answer, diversifies its search rather than repeating failed queries, and compresses its own context rather than running out of room. The argument for training purpose-built search agents rather than routing everything through general-purpose frontier APIs is not primarily about cost. It is about building a model that knows how to do the job.

Is the MacBook Neo the one?

It’s been a wild week for Apple. After announcing a slew of new hardware, the company capped things off with its cheapest laptop ever: the $599 MacBook Neo. It’s low on specs, but high on character and value. In this episode, Devindra and Engadget Deputy Editor Nathan Ingraham dive into the MacBook Neo, as well as the refreshed MacBook Air M5, MacBook Pro M5 Pro/Max, iPad Air M4 and iPhone 17e.

Also, Devindra chats with Spencer Ackerman, author of Forever Wars and recent Iron Man comics, about the ongoing battle between Anthropic and the Department of Defense. It turns out the DOD still used Claude for attacks on Iran after banning Anthropic’s AI last week. And really, what do these AI companies expect to happen when they jump at military contracts?

Topics

  • Apple announces the MacBook Neo priced at $599 and it’s shockingly great – 0:53

  • MacBook Air got the M5, MacBook Pro got the M5 Pro and M5 Max, and who needs the new iPad Air now? – 22:31

  • Anthropic vs. DoD with Spencer Ackerman, author of The Forever Wars – 30:34

  • Gemini encouraged a man to end his own life to be with his ‘AI wife’ – 58:53

  • Polymarket nixes bets on nuclear detonation after public outcry – 1:01:55

  • No Yōtei on PC: Sony closes down first party titles outside of PS5 – 1:03:56

  • Wildlight Studios’ Highguard shuts down after 46 days live – 1:08:23

  • Working on: Dell’s XPS 14 will be great when the keyboard fix comes through – 1:15:09

  • Pop culture picks – 1:15:58

Credits

Hosts: Devindra Hardawar and Nathan Ingraham
Guest: Spencer Ackerman
Producer: Ben Ellman
Music: Dale North and Terrence O’Brien

Building A Heading Sensor Resistant To Magnetic Disturbances

Light aircraft often use a heading indicator as a way to know where they’re going. Retired instrumentation engineer [Don Welch] recreated a heading indicator of his own, using cheap off-the-shelf hardware to get the job done.

The heart of the build is a Teensy 4.0 microcontroller. It’s paired with a BNO085 inertial measurement unit (IMU), which combines a 3-axis gyro, 3-axis accelerometer, and 3-axis magnetometer into a single package. [Don] wanted to build a heading indicator that was immune to magnetic disturbances, so he ignored the magnetometer readings entirely and used the rest of the IMU data instead.

Upon startup, the Teensy 4.0 initializes a small round TFT display, and draws the usual compass rose with North at the top of the display. Any motion after this will update the heading display accordingly, with [Don] noting the IMU has a fast update rate of 200 Hz for excellent motion tracking. The device does not self-calibrate to magnetic North; instead, an encoder can be used to calibrate the device to match a magnetic compass you have on hand. Or, you can just ensure it’s already facing North when you turn it on.

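The underlying math is just integrating the gyro's yaw rate and applying a user-set offset. A rough Python sketch of the idea (the actual build runs C++ on the Teensy and leans on the BNO085's onboard sensor fusion, so treat this as a toy model of the principle, not [Don]'s firmware):

```python
class HeadingIndicator:
    """Dead-reckoned heading from gyro yaw rate, with a manual offset
    standing in for the encoder used to align against a real compass.
    No magnetometer is involved, so magnetic disturbances can't skew it,
    but gyro drift accumulates -- hence the manual calibration knob."""

    def __init__(self, update_hz=200):  # the BNO085 here updates at 200 Hz
        self.dt = 1.0 / update_hz
        self.heading = 0.0              # degrees; 0 = whatever way it faced at power-on
        self.offset = 0.0               # set via the calibration encoder

    def update(self, yaw_rate_dps):
        # Integrate angular rate (degrees/second) over one sample period.
        self.heading = (self.heading + yaw_rate_dps * self.dt) % 360.0

    def calibrate(self, compass_heading_deg):
        # Align the indicated heading with a magnetic compass reading.
        self.offset = (compass_heading_deg - self.heading) % 360.0

    @property
    def indicated(self):
        return (self.heading + self.offset) % 360.0

hi = HeadingIndicator()
for _ in range(200):        # one second of a steady 90 deg/s right turn
    hi.update(90.0)
print(hi.indicated)         # ~90.0
```

The calibration step is exactly the "match it to a compass you have on hand" procedure: read the compass, dial in the offset, and the indicator tracks from there.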
Thanks to the power of the Teensy 4.0 and the rapid updates of the BNO085, the display updates are nicely smooth and responsive. However, [Don] notes that it’s probably not quite an aircraft-spec build. We’ve featured some interesting investigations of just how much you can expect out of MEMS-based sensors like these before, too.

Xbox surprise: Microsoft reveals ‘Project Helix’ as the codename of its next console

In the days leading up to one of the games industry’s bigger trade conferences, Microsoft has quietly unveiled the code name for its next-generation Xbox console: Project Helix.

The name appeared without initial fanfare in a post on X on Thursday morning.

Xbox CEO Asha Sharma, who just replaced longtime leader Phil Spencer, followed up in a post on her own account, in which she briefly discussed her team’s “commitment to the return of Xbox.” Sharma also noted that Project Helix will “lead in performance” and “play your Xbox and PC games.”

Next week marks the annual Game Developers Conference in San Francisco, which has gained some prominence for news and announcements in recent years. It’s possible that some new information about this next-gen Xbox will come out of this year’s GDC, which is both Sharma’s first time at the show and her first time attending as the head of Xbox. Sharma reportedly has plans to meet with both partners and studios while at GDC.

That marks the end of the information about Project Helix that’s currently publicly available. The most remarkable fact about it for now may simply be that it exists, in the face of persistent rumors that Microsoft’s executives would like to sunset Xbox entirely and an ongoing memory shortage caused by the rise of AI data centers.

Despite industry expectations, it looks like Microsoft’s games division plans to stick it out for at least one more console generation. The start of that generation may be pushed off a couple of years from its initially rumored late-2027 starting point, as RAM is currently getting scarcer on the market, but whenever it begins, it looks like Xbox will still be there.

Linux Hotplug Events Explained | Hackaday

There was a time when Linux was much simpler. You’d load a driver, it would find your device at boot up, or it wouldn’t. That was it. Now, though, people plug and unplug USB devices all the time and expect the system to react appropriately. [Arcanenibble] explains all “the gory details” about what really happens when you plug or unplug a device.

You might think, “Oh, libusb handles that.” But, of course, it doesn’t do the actual work. In fact, there are two possible backends: netlink or udev. However, the libusb developers strongly recommend udev. Turns out, udev also depends on netlink underneath, so if you use udev, you are sort of using netlink anyway.

If netlink sounds familiar, it is a generic BSD-socket-like API the kernel can use to send notifications to userspace. The post shows example code for listening to kernel event messages via netlink, just like udev does.

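For the curious, here is roughly what such a listener looks like in Python. The constants are the standard Linux values, and the parsing assumes the kernel's usual `action@devpath` header followed by NUL-separated KEY=VALUE pairs; the sample payload at the bottom is a made-up but representative USB add event. Binding the socket is Linux-only and typically needs elevated privileges.

```python
import os
import socket

NETLINK_KOBJECT_UEVENT = 15   # netlink family for kernel uevents
UEVENT_GROUP = 1              # multicast group for raw kernel events

def parse_uevent(data: bytes) -> dict:
    """Kernel uevents arrive as 'action@devpath' followed by
    NUL-separated KEY=VALUE pairs."""
    fields = data.split(b"\0")
    header = fields[0].decode(errors="replace")
    action, _, devpath = header.partition("@")
    env = dict(
        f.decode(errors="replace").split("=", 1)
        for f in fields[1:] if b"=" in f
    )
    return {"action": action, "devpath": devpath, **env}

def listen():  # requires Linux and the right privileges
    sock = socket.socket(socket.AF_NETLINK, socket.SOCK_RAW,
                         NETLINK_KOBJECT_UEVENT)
    sock.bind((os.getpid(), UEVENT_GROUP))
    while True:
        print(parse_uevent(sock.recv(8192)))

# Example payload, shaped like a USB add event:
sample = b"add@/devices/pci0000:00/usb1/1-2\0ACTION=add\0SUBSYSTEM=usb\0DEVTYPE=usb_device\0"
event = parse_uevent(sample)
```

Plug in a USB device while `listen()` runs and you'll see the add event, followed by events for each interface the device exposes.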
When udev sees a device add message from netlink, it resends a related udev message using… netlink! Turns out, netlink can send messages between two userspace programs, not just between the kernel and userspace. That means that the code to read udev events isn’t much different from the netlink example.

The next hoop is the udev event format. It uses a version number, but it seems stable at version 0xfeedcafe. Part of the structure contains a hash code that allows a bloom filter to quickly weed out uninteresting events, at least most of the time.

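This is not libudev's actual hash or wire layout, but the bloom-filter idea itself is compact enough to sketch: set a couple of bits per property of interest, and a subscriber can reject most messages with a single bitwise AND before doing any parsing.

```python
# Toy bloom filter in the spirit of udev's fast event filtering. NOT
# libudev's real hash or layout -- just the idea: a few bits per
# property, one integer AND to rule a message out.
import hashlib

BITS = 64

def _positions(key, n_hashes=2):
    digest = hashlib.sha256(key.encode()).digest()
    return [digest[i] % BITS for i in range(n_hashes)]

def bloom_mask(properties):
    mask = 0
    for prop in properties:
        for pos in _positions(prop):
            mask |= 1 << pos
    return mask

def might_match(message_mask, wanted_mask):
    # False means "definitely absent"; True means "parse to be sure".
    return message_mask & wanted_mask == wanted_mask

msg = bloom_mask(["SUBSYSTEM=usb", "DEVTYPE=usb_device"])
want_usb = bloom_mask(["SUBSYSTEM=usb"])
```

As with any bloom filter, a `True` can be a false positive, so the subscriber still parses the message before acting on it; only the `False` answer is definitive.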
The post documents much of the obscure inner workings of USB hotplug events. However, there are some security nuances that aren’t clear. If you can explain them, we bet [Arcanenibble] would like to hear from you.

If you like digging into the Linux kernel and its friends, you might want to try creating kernel modules. If you get overwhelmed trying to read the kernel source, maybe go back a few versions.

Silicon Valley tech vet: ‘No better time to start companies than now’

Pablo Casilimas (left), founding partner at OneSixOne Ventures, with Sudheesh Nair, co-founder and CEO of TinyFish. (GeekWire Photo / Taylor Soper)

The AI moment is not just another tech cycle — it’s one of the best openings founders have seen in years.

That was the message from Sudheesh Nair, a longtime Bay Area tech leader and co-founder of enterprise web agent startup TinyFish, speaking Thursday at a Seattle Enterprise AI Summit event hosted by OneSixOne.

“There is no better time to start companies than now,” he said. “It’s just magical.”

He believes the AI boom could produce the same kind of lasting infrastructure and category-defining companies that came out of earlier economic and technology shifts. Nair said this wave may be as significant as the internet, and possibly even bigger, because “for the first time, reasoning can be on tap.”

He added: “The way I think of it is, completely be constrained by your imagination — but nothing else.”

Nair previously helped scale Nutanix and ThoughtSpot. In 2024 he launched TinyFish, which raised $47 million last year to build infrastructure for AI agents to operate across the web. “I couldn’t stand on the sidelines,” he said.

He likened today’s moment to a gold rush, noting that most of the enduring outcomes from 1849 were second‑order products and infrastructure: durable jeans, safer elevators, modern banking systems. He said these were built not for the gold rush, but because of the gold rush.

Nair pushed back on the instinct to wait for clarity in a fast‑moving market where even frontier AI labs are still figuring out how their models behave. “No one knows what the heck is happening,” he said.

But Nair also was careful not to romanticize startups. He said company-building is not for everyone, and noted that some people are better suited to join startups or build inside larger organizations. His broader point was that the tools, the pace of change, and the raw opportunity around AI have created a rare moment for people willing to make the startup leap.

“If you just happen to have a pickaxe and shovel, the best thing might be to just jump in,” Nair said.

Copyright © 2025