SAP unveiled the Autonomous Enterprise at Sapphire 2026, embedding more than 200 AI agents into its core business applications and partnering with Anthropic to make Claude its primary reasoning engine. With its stock down 41 per cent, the company is betting that owning business process logic matters more than owning the AI model.
Christian Klein opened the SAP Sapphire keynote on Monday with a question that no chief executive of Europe’s most valuable technology company should need to ask. “Will SAP be a software company in the future?” The answer, delivered by SAP’s own AI assistant Joule at the end of the presentation, was that SAP is becoming a business AI company. The question was rhetorical. The 41 per cent decline in SAP’s share price over the past six months was not.
SAP unveiled what it calls the Autonomous Enterprise, a unified platform comprising more than 50 domain-specific AI assistants orchestrating over 200 specialised agents across finance, supply chain, procurement, human resources, and customer experience. The company announced a partnership with Anthropic to embed Claude as a primary reasoning engine across its AI-enabled portfolio. It launched a 100 million euro partner fund to accelerate deployment. It introduced seven vertical Industry AI solutions. It revealed agent-led migration tooling that it claims can reduce ERP transformation efforts by more than 35 per cent.
The announcement is the largest AI product launch in SAP’s 53-year history. It is also, unmistakably, a survival strategy.
The context
SAP’s stock has lost more than a third of its value since peaking at 306.60 euros in July 2025. The January 2026 earnings call triggered a 15 per cent single-day decline, the steepest since 2020, after cloud revenue guidance fell short of expectations. The Q1 2026 results in April showed cloud revenue growing 27 per cent at constant currencies to 5.96 billion euros, but total revenue of 9.56 billion missed analyst forecasts, sending the stock down another six per cent in after-hours trading.
The problem is not SAP’s cloud business. Cloud ERP Suite revenue grew 30 per cent at constant currencies. Current cloud backlog reached 21.9 billion euros. The problem is the market’s judgement of what cloud revenue will be worth when AI agents start replacing the human users who generate per-seat licence fees.
In February, Workday’s chief technology officer traded his C-suite title for a technical staff role at Anthropic, a defection that crystallised the talent drain from legacy enterprise software to the AI companies building tools to displace it. The same month, a wave of agentic AI product launches from Anthropic, Salesforce, and Google erased roughly 285 billion dollars from SaaS company valuations in 48 hours, an event the financial press now calls the SaaSpocalypse.
SAP’s market capitalisation has fallen from more than 300 billion dollars to roughly 200 billion. The company that runs the back office of the global economy is being repriced as though it might not run the back office of the future.
The bet
The Autonomous Enterprise is SAP’s answer. The architecture has three layers. The SAP Business AI Platform provides the infrastructure for building, contextualising, and governing AI agents. The Autonomous Suite embeds those agents into core business applications. And Joule Work, a new interface, replaces the traditional screen-by-screen navigation with a conversational layer in which users describe a desired business outcome and Joule orchestrates the workflows, data, and agents to deliver it.
The most concrete demonstration is the Autonomous Close Assistant, which SAP says can compress a financial close process from weeks to days by automating journal entries, reconciliation, and error resolution across the entire cycle. The assistant does not replace the finance team. It orchestrates the agents that execute the tasks the finance team currently performs manually, while the humans approve, override, and govern.
This distinction matters. SAP is not selling AI that eliminates enterprise software. It is selling AI that makes enterprise software do more of the work that humans currently do inside enterprise software. The agents run within the same approval workflows, compliance frameworks, and governance controls that already govern human decisions in SAP systems. The lock-in does not weaken. It deepens.
The partnership
The Anthropic deal makes Claude a primary reasoning and agentic capability embedded across SAP’s solution portfolio. The integration goes beyond a standard API arrangement. Anthropic and SAP will collaborate to build custom agents and agentic workflows optimised for industries including public sector, healthcare, education, life sciences, and utilities.
SAP also announced expanded partnerships with Microsoft, bringing RISE with SAP onto Azure with deeper integration; Amazon Web Services, enabling zero-copy data sharing between SAP Business Data Cloud and Amazon Athena; Google Cloud, for bidirectional agent-to-agent interoperability; and Palantir, whose AIP platform will handle data migration scenarios alongside SAP’s agent-led transformation toolchain.
Anthropic is already embedding Claude into accounting software through its partnership with Xero, bringing AI-powered financial intelligence to millions of small businesses. The SAP deal extends that logic to the enterprise. Claude will power agents that take action for hundreds of thousands of SAP customers across finance, HR, procurement, and supply chain. A treasury manager can ask Joule to prepare a CFO briefing for a bank meeting and receive a completed presentation populated with live data, flagged risks, and analysis within minutes.
The question is whether the AI partner is also the AI competitor. Anthropic’s enterprise revenue has grown to the point where more than 1,000 businesses spend over a million dollars a year on its services. Its marketplace sells Claude-powered tools that perform functions SAP’s own applications handle. SAP is embedding the technology of a company whose long-term trajectory is to make SAP’s traditional product unnecessary.
The migration
SAP holds one card that no AI startup can match. Roughly 17,000 companies are still running SAP ECC, the legacy ERP system whose mainstream maintenance ends in December 2027. Extended support runs to 2030, but at higher cost and with diminishing returns. Every one of those companies must migrate to S/4HANA Cloud or find an alternative. Most will migrate.
The Autonomous Enterprise announcement converts that forced migration into an AI upsell. RISE with SAP customers will receive three Joule Assistants activated within their first year. SAP GROW customers get access to the full assistant portfolio at onboarding. The agent-led transformation tooling, built with Palantir, automates system analysis, code remediation, configuration, and testing at scale, reducing the effort, cost, and risk that have kept thousands of companies on the legacy platform.
SAP is using the deadline it created to sell the AI platform it just built. The 17,000 holdouts are not just a migration challenge. They are a captive market for the most expensive AI product launch in enterprise software history.
Oracle has assembled more than 16 billion dollars in data centre financing to pivot toward AI infrastructure, a bet that the future of enterprise technology is measured in compute capacity rather than software licences. SAP’s approach is different. It is not building data centres. It is embedding agents into the business processes that the data centres ultimately serve.
The strategic logic is that AI will commoditise software interfaces but not business process logic. Anyone can build a chatbot. Not anyone can build a chatbot that understands the intercompany elimination rules in a multinational financial close, or the procurement compliance requirements of a German automotive manufacturer, or the lot-tracing regulations in pharmaceutical supply chains. SAP’s 53 years of accumulated process knowledge is the moat. The AI agents are the means of monetising it.
The question
Anthropic has reached a one trillion dollar implied valuation on secondary markets, roughly five times SAP’s current market capitalisation. The company that SAP just made its primary AI partner is worth more than SAP. The company that builds the reasoning engine is valued higher than the company that owns the business processes the engine reasons about.
That valuation gap is the market’s current answer to Klein’s question. The market believes that AI companies will capture more value than the enterprise software companies AI is embedded into. SAP is betting that the market is wrong, that the value accrues to whoever owns the process, the data, and the governance layer, not whoever builds the model.
The Autonomous Enterprise will take years to validate. The 200 agents and 50 assistants are launching in phases through 2026 and into 2027. The Industry AI solutions roll out quarterly. The Anthropic integration is in its early stages. The migration deadline will force thousands of decisions over the next 18 months about whether to adopt SAP’s AI stack or look elsewhere.
Klein asked whether SAP will be a software company in the future. The honest answer is that SAP does not know. What it knows is that 300,000 customers run their most critical business operations on SAP systems, and that the only way to keep them is to make those systems do things that used to require the people who operate them. The Autonomous Enterprise is not a product launch. It is a wager that the company which automates the work will remain more valuable than the companies whose workers it automates away.
The JP4x4 is a new take on two of the original Renault 4s: the Plein Air version, built in 1969 for open-air fun, and the JP4 from 1981, which seemed to channel carefree days by the sea. The name JP4 is derived from Journée à la Plage, which translates to “a day at the beach.” The new name JP4x4 incorporates the four-wheel drive feature, which is self-explanatory.
On May 18, visitors to the 2026 Roland-Garros French Open will get their first look at the vehicle, which joins three previous concepts built on the same electric Renault 4 E-Tech platform, each of which explored new ways to use the compact hatchback. The most recent version focuses squarely on leisure and light adventure. Emerald green paint covers the bodywork in a somewhat iridescent tint that resembles the colors offered on the classic 4L in the 1970s. Bright orange fills the interior, creating a sharp, cheerful contrast that draws the eye from all sides. Half-doors replace the traditional five-door layout, stopping just short of the B-pillar to allow easy entry and exit. There are no side windows and no canvas roof, so the cabin is always open to the breeze.
The openwork roof is made up of a cross-shaped structure that provides enough stiffness while allowing plenty of sky to be visible. The same frame supports a surfboard strapped securely on top. At the back, the tailgate folds flat like the side of a pickup truck, transforming the cargo area into a simple loading platform. Skateboards fit nicely into the free area behind the seats, ready for whatever happens next.
Inside, the seats replicate the distinctive bucket style of 1970s Renault models, complete with integrated headrests that resemble wrapped Egyptian mummies. They are covered in a mix of fabrics, combining a crepe base with diagonal mesh sections for a sporty yet comfortable feel. The dashboard and digital screens are carried over from the production car, but Renault added a passenger grab handle for rougher terrain and a floating center console to keep the space airy. Orange accents appear on the door panels and around the console, tying everything together.
The JP4x4 is mechanically similar to last year’s Savane 4×4 concept: a second electric motor drives the rear wheels, giving the vehicle permanent all-wheel drive instead of the front-wheel-drive setup found on the standard Renault 4 E-Tech. Ground clearance rises by 15 millimeters, and each track widens by 10 millimeters for better stability. The 18-inch wheels wear a fresh design inspired by the original JP4 and are wrapped in Goodyear UltraGrip Performance+ tires in the 225/55 size. The wheelbase remains at 2,624 millimeters, unchanged from the production vehicle.
Renault built the entire package for sandy beaches, stony pathways, and unpaved treks where extra traction is critical. The combination of raised height, wider stance, and all-wheel drive gives the car a capable feel without making it a serious off-road vehicle. Nobody expects this particular vehicle to hit showrooms. Instead, it serves as a showcase for the electric Renault 4 platform’s versatility.
A recent Figure AI tech showcase depicts two F.03 humanoid robots walking into a clean but lived-in environment. One robot goes straight to a coat thrown on a bed and hangs it neatly on a wall hook. At the same time, the second robot closes a laptop on the desk and places a pair of headphones back onto their stand. They keep moving without pausing, each picking up where the other left off. When they approach the unmade bed, they naturally split off, one on each side, and begin manipulating the sheets and comforter together until everything is level and smooth.
People have seen robots doing laundry and stacking boxes before, but this time not one but two machines went through a whole sequence of everyday jobs in the same room at the same time, which is not bad for a minute and a half of work. The list of tasks included opening doors, pushing a chair under the desk, closing a book, emptying a small trash can, and generally tidying up the room so it looked ready for the next day.
Each F.03 stands about five feet eight inches tall, walks on two legs, and has arms ending in five-fingered hands; its head carries stereo cameras that feed live video straight into its central brain, with no external sensors or additional computers to help it out. The trick is a piece of software called Helix-02, which Figure built as a single policy that takes images from the cameras and a simple goal and translates them into a continuous stream of joint movements, with no further planners or coding required.
Engineers gave the system thousands of hours of practice through simulated runs, mixed in real-world examples from earlier tests of the robots doing grocery shopping and kitchen cleaning, and then simply added new data showing the robots working together in a room. The model learned the new pattern with no new code required.
When the comforter bunches up and begins to slide out from under the robots, the one on the left tilts its head slightly; the one on the right notices and adjusts its grip just in time to catch it. Because they send no messages to each other, everything happens on the fly, each robot watching the other’s body language and adjusting its own plan as it goes.
Engineers will tell you that objects that can change shape, such as a blanket, pose a significant challenge. The policy predicts when the shape will change and adjusts accordingly, all in a fraction of a second. It also keeps the robots steady as they reach, stride, and turn; it was interesting to see one of them stand on one leg to press the pedal while the other just kept walking. [Source]
MUFG, Mizuho, and SMFG would be the first Japanese institutions added to Anthropic’s restricted Project Glasswing rollout, a source familiar with the matter told Reuters
Japan’s three megabanks are set to gain access to Claude Mythos, Anthropic’s vulnerability-hunting AI model, within roughly two weeks, a source familiar with the matter told Reuters on Tuesday.
It would be the first time a Japanese company has been granted entry to the restricted preview, which has so far been confined to Anthropic’s American and a handful of European partners.
Mitsubishi UFJ Financial Group, Mizuho Financial Group, and Sumitomo Mitsui Financial Group were informed of the move during meetings in Tokyo this week with US Treasury Secretary Scott Bessent. The three lenders are expected to be onboarded by the end of May.
Mythos has been treated by regulators and chief executives as a category-shifting event since Anthropic disclosed its existence earlier this month.
The model has discovered thousands of previously unknown zero-day vulnerabilities across every major operating system and every major web browser, and in internal testing it wrote working exploits, including chains that escape both renderer and operating-system sandboxes in a browser.
Mozilla last week shipped Firefox 150 with fixes for 271 vulnerabilities found by Mythos in a single evaluation pass.
Anthropic has not released the model publicly. Instead, it has run a controlled rollout under what it calls Project Glasswing, with 12 named launch partners, including AWS, Apple, Cisco, Google, JPMorganChase, Microsoft, Nvidia, and Palo Alto Networks, and around 40 further institutions granted access on a case-by-case basis.
Tokyo is moving in parallel. Finance Minister Satsuki Katayama announced the formation of a 36-entity public-private working group on Mythos-class risks, comprising the country’s major banks, the Bank of Japan, and the Japanese units of Anthropic and OpenAI.
The group is chaired by Mizuho’s chief information security officer and is charged with identifying exposures, implementing defensive measures, and drafting contingency plans for what would amount to a co-ordinated patching push across the Japanese financial system.
For the three banks involved, the immediate question is operational. Mythos under Glasswing terms is delivered with restrictions on output disclosure, with the model used to find vulnerabilities in a partner’s own systems and to draft remediation, not to publish exploits.
The Mozilla case offers a template: 271 vulnerabilities patched in a single Firefox release after a Mythos sweep, with the model’s findings handed back to Mozilla engineers under non-disclosure rather than published.
The geopolitical layer is unusually visible. Bessent’s role in conveying the access decision in Tokyo aligns Mythos rollout with US Treasury statecraft rather than with Anthropic’s commercial channel, an arrangement that has drawn complaints from European capitals.
Eurozone finance ministers raised the issue at an Ecofin meeting last week, where no EU government had access to the model while the White House was reported to be blocking further expansion of the partner list.
Industry views on Mythos remain split. Some cybersecurity researchers have argued that the vulnerabilities Mythos surfaced are reachable through clever orchestration of public models, and that the bigger story is the rate of improvement of frontier AI in offensive cyber, not Mythos itself.
Others, including Anthropic chief executive Dario Amodei, have described the moment as a “cyber moment of danger” that justifies the access controls.
Anthropic and the three Japanese banks did not immediately respond to requests for comment, Reuters reported.
Lady Gaga’s “Mayhem Requiem” filmed live performance will stream on Thursday, May 14, via Apple Music Live and at select AMC theaters across the United States.
At 11:00 p.m. Eastern / 8:00 p.m. Pacific, Lady Gaga fans can head to the Apple Music app on their iPhone, iPad, Mac, Apple TV, or in-browser at music.apple.com to tune into an exclusive stream of the Mayhem Requiem filmed live performance. In addition to streaming on the app, 15 select AMC theaters across the U.S. will show the performance at the same time.
The premiere is free for anyone to watch; no Apple Music subscription is required. However, Apple Music subscribers will be able to watch the performance on demand after the event is over.
Apple Music describes the event:
“The opera house from Lady Gaga’s MAYHEM Ball has been reduced to rubble— and now it’s time for MAYHEM Requiem, a celebration and musical reimagining of her sixth album.”
It’s worth noting that the filmed live performance isn’t actually live: it was recorded on January 14 at Los Angeles’ Walter Theater.
A live album of all songs mastered in spatial audio will be available on Apple Music. Fans can unlock bonus content, like wallpapers and Apple Watch faces, through the Shazam app by identifying any Lady Gaga song.
A week ahead of the Google I/O event, Tuesday’s Android Show stream teased some iPhone-friendly Android features, though we already knew about them.
Google I/O 2026 is taking place on May 19 and 20, and the search giant is warming up for its biggest presentation of the year. To prepare its users for that event, it held a smaller presentation on Tuesday about Android. The Android Show I/O Edition 2026 was a 40-minute prerecorded stream introducing a number of changes to Google’s ecosystem. There was obviously a lot of Google, Chromebook, and Android-specific content, but also some that was Apple-related.
The organisation plans to use the investment to accelerate the application of its AI model at scale.
AI-powered drug design and development company Isomorphic Labs has announced the raising of $2.1bn in Series B funding. The round was led by Thrive Capital and includes participation from existing backers Alphabet and GV alongside new investors MGX, Temasek, CapitalG and the UK Sovereign AI Fund.
Founded in 2021 and led by CEO Demis Hassabis and president Max Jaderberg, Isomorphic Labs is headquartered in London and has additional premises in Cambridge, Massachusetts and Lausanne, Switzerland. The company, a spin-off from Google DeepMind, the AI research lab acquired by Google in 2014, aims to address the challenges of drug discovery using AI technology.
Isomorphic Labs intends to put the recently raised funds towards the continued development and deployment of its AI drug design engine (IsoDDE) and the acceleration and expansion of its pipeline of therapeutic programmes. Additionally, the funding will support current hiring targets.
Commenting on the announcement, Ruth Porat, president and chief investment officer at Alphabet and Google, said: “The application of AI in healthcare offers a profound opportunity.
“Isomorphic Labs has already made extraordinary progress in harnessing AI to accelerate drug discovery, and we are excited by this momentum and the early promise of the technology platform. This trajectory is encouraging, and this funding will be used to accelerate the work and bring important interventions to market with greater speed.”
Jaderberg added, “This milestone is built on the strength of our AI drug design engine, which has already proven its worth across our internal programmes by hitting key milestones and identifying viable candidates with unprecedented speed.
“Our drug design engine works, and it’s giving us a repeatable way to design new medicines for a wide range of diseases, building a future of medicine that was previously out of reach.”
Reportedly, Isomorphic expects to run its first clinical trials by the end of 2026, a delay from the CEO’s earlier target of having AI-designed drugs in trials by the end of 2025.
In late April, Alphabet was among the large-scale organisations posting positive quarterly reports. Alphabet beat revenue expectations for the past quarter, led by its growing cloud business, which rose 63pc to hit $20bn. Consolidated revenue grew 22pc to nearly $110bn.
Hackaday readers may well include the world’s largest community of people who have designed their own CPU. We have featured many such designs here, but perhaps not so many of them have gone on to power an everyday project. Step forward [Baltazar Studios], then, with a scientific calculator sporting a self-designed CPU on an FPGA.
The calculator itself is nice enough, with a smart 3D printed case, an OLED display which almost evokes a VFD, and very well made buttons. But it’s the CPU which is of most interest, because while it follows a conventional Harvard architecture with a 12-bit instruction set, it works with 4-bit nibbles. This choice follows one used by HP in their calculator designs, seemingly because it can be optimised for the binary coded decimal which the calculator uses.
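The affinity between 4-bit nibbles and binary coded decimal is easy to see in miniature. The Python sketch below is illustrative only, not the project’s actual firmware: it packs one decimal digit per nibble and adds two numbers the way a BCD datapath would, with a decimal-adjust step whenever a digit overflows 9.

```python
def to_bcd(n):
    """Pack a non-negative integer into 4-bit nibbles, one per decimal digit,
    least significant digit first."""
    return [int(d) for d in str(n)][::-1]

def bcd_add(a, b):
    """Add two BCD nibble lists digit by digit with a decimal carry,
    the way a 4-bit BCD ALU would."""
    length = max(len(a), len(b))
    a = a + [0] * (length - len(a))
    b = b + [0] * (length - len(b))
    result, carry = [], 0
    for x, y in zip(a, b):
        s = x + y + carry
        # Decimal adjust: a nibble that overflows 9 drops 10 and carries 1.
        carry, s = (1, s - 10) if s > 9 else (0, s)
        result.append(s)
    if carry:
        result.append(1)
    return result

def from_bcd(nibbles):
    """Unpack a nibble list back into an integer."""
    return int("".join(str(d) for d in nibbles[::-1]))
```

Adding 478 and 655 this way carries digit by digit through each nibble, roughly the pattern a 4-bit BCD calculator datapath executes in hardware.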
With calculators being yet another app on our smartphones or computers, there seems to be less use for standalone calculators outside of education in 2026. But if you are a calculator user, there’s nothing like a calculator you made yourself, and with a CPU of your own design it has few equals. We like this project almost as much as we like the Flapulator!
Artificial intelligence didn’t roll out slowly. In fact, at times it feels like it landed all at once.
In just a few years, systems that began as internal experiments are now embedded in customer support, fraud detection, software development, and even IT infrastructure operations.
AI is now part of the operational backbone of modern enterprises.
But there’s a problem.
Anand Kashyap
CEO and co-founder, Fortanix.
While AI capabilities have advanced, the way we secure them hasn’t kept up.
Most organizations are still applying traditional security models to a fundamentally different kind of workload, and it’s leaving a critical gap at runtime, or the exact moment when AI systems do their work.
The Illusion of Coverage
For years, enterprise security has focused on two primary states of data: when it’s stored and when it’s moving. Encryption for data at rest and in transit, with identity and access controls for both.
These controls still matter. But there’s a third state that’s far more complex and far less protected: data in use.
When an AI model runs, sensitive data is actively processed in memory. Model weights, which are often the most valuable intellectual property an organization owns, are loaded into memory. Prompts, responses and contextual data are generated and transformed in real time.
In most environments, all of that becomes visible to the underlying system. The uncomfortable reality is that even well-secured environments can expose their most valuable assets at the moment they’re being used.
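A toy example makes the gap concrete. In the sketch below (illustrative only; XOR with a random key stands in for real encryption), data protected at rest must be decrypted into ordinary process memory before anything can compute on it, and at that moment it is visible to whatever can read that memory:

```python
import os

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy stand-in for real encryption: XOR with a repeating key.
    # XOR is its own inverse, so the same call encrypts and decrypts.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = os.urandom(16)
record = b"salary=185000"

at_rest = xor_cipher(record, key)   # protected form, safe to store on disk
assert at_rest != record            # the stored bytes are scrambled

in_use = xor_cipher(at_rest, key)   # decrypted so the workload can process it...
assert in_use == record             # ...and now it sits in plaintext in memory
```

Encryption at rest and in transit does its job here; the exposure arrives only in the last two lines, which is exactly the data-in-use state the article describes.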
Where AI Security Actually Breaks
When security teams investigate AI-related risks, the root cause rarely traces back to perimeter defenses. The issues tend to emerge deeper in the lifecycle across three key phases:
1. Training: When data quietly leaks into models. Training pipelines span storage systems, shared compute environments, orchestration layers and debugging tools. They can be messy: data moves constantly, intermediate artifacts are created and cached, and logs accumulate quickly.
In this environment, sensitive information might surface in unexpected places. Models themselves may unintentionally retain elements of the sensitive data they were trained on. And model weights, which encapsulate that learning, are often handled more casually than they should be.
This all creates a subtle but serious risk where exposure doesn’t always come from a direct attack. Sometimes it comes from normal development practices.
2. Inference: An overlooked exposure layer. Once a model is deployed, attention shifts to inference, or the point at which inputs become outputs.
On the surface, it looks simple. But in practice, inference workflows involve multiple streams of sensitive data, including user prompts and queries, generated responses, internal enterprise data retrieved to ground outputs, and the model itself.
Much of this data is processed through monitoring tools, logging systems and debugging pipelines, often in plaintext.
Even without a breach, sensitive information can be exposed through routine operations. Troubleshooting dashboards might capture more than intended, or logs could persist longer than expected. Shared infrastructure also introduces more potential for leakage.
Inference security isn’t only about blocking access. It’s about controlling what happens during execution, and most organizations aren’t doing that yet.
3. Runtime: The blind spot in modern security. The most critical yet least protected phase is the runtime phase. This is where models actually execute, encrypted data is decrypted, and model weights exist in memory. And it’s precisely where traditional security models fall short.
Even in environments with strong identity management controls and encryption policies, runtime assumes a certain level of trust in the underlying system. If that system is compromised, or even simply misconfigured, the protections around it don’t matter because keys are still released, workloads still run, and sensitive assets are still exposed.
This is why runtime is currently the weakest link, and why it has emerged as the true security boundary for AI systems.
Why the Problem Becomes Worse at Scale
As organizations expand their use of AI tools, the risks don’t just increase. They multiply. AI workloads are rarely isolated: they typically run across distributed environments, shared accelerators, and multi-tenant infrastructure. They interact with internal systems and external services, and they operate continuously, not intermittently.
This creates a compounding effect:
1. More data flowing through more systems.
2. More models deployed across more environments.
3. More opportunities for exposure during execution.
At the same time, the value of what’s being processed is rising sharply. Proprietary models are becoming core business assets, and sensitive enterprise data is being used to fine-tune outputs and drive decisions.
In this context, a single weak point at runtime becomes a major systemic risk.
Top Priority: Rethinking Trust in AI Systems
The core issue isn’t a lack of security tools. It’s a mismatch in assumptions when it comes to trusting the infrastructure AI runs on.
With traditional security, the assumption has always been that once a workload is inside a trusted environment, it can be relied upon to behave securely. But AI changes this because these systems are dynamic. They process sensitive data continuously, rely on complex stacks that are difficult to fully validate, and often run in environments that organizations don’t fully control.
In other words, crossing the perimeter isn’t the hard part anymore. Staying secure after crossing it is.
To address this, security needs to move closer to the workload itself. So, instead of focusing only on protecting access to systems, organizations need to protect what happens inside them, particularly during execution. That means:
1. Ensuring that data remains protected even while it’s being processed,
2. Preventing unauthorized access to model weights during runtime,
3. Verifying that workloads are running in trusted environments before allowing them to execute.
This is where approaches like Confidential Computing and hardware-based isolation are making a difference. By creating protected execution environments and tying access to cryptographic verification, the industry is moving security from assumption-based trust to proof-based trust.
In simple terms: don’t trust the system. Make it prove it’s secure.
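The proof-based gate can be sketched in a few lines. Real confidential-computing stacks verify a hardware-signed attestation quote from the TEE; this simplified sketch only simulates the control flow, comparing a reported workload measurement against an expected value before a key is released (all names and the measurement scheme here are hypothetical):

```python
import hashlib
import hmac

# Hypothetical expected measurement of the approved workload image.
EXPECTED_MEASUREMENT = hashlib.sha256(b"approved-workload-v1").hexdigest()

def release_key(reported_measurement: str, key: bytes) -> bytes:
    """Release a decryption key only if the workload proves it matches the
    expected measurement. Uses a constant-time comparison so the check
    itself doesn't leak information. Real systems verify a signed TEE
    quote instead of a bare hash; this only sketches the gate."""
    if not hmac.compare_digest(reported_measurement, EXPECTED_MEASUREMENT):
        raise PermissionError("attestation failed; key withheld")
    return key
```

The design point is that the default is refusal: a compromised or misconfigured system that cannot produce the expected measurement never receives the keys, which is the inversion of assumption-based trust the article describes.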
Security Has Moved to the Moment of Use
For years, organizations have invested in securing where data lives and how it moves. But with AI, the most important moment is when the model runs, and data, logic and decision-making converge in real time.
That’s where the real risks are, and that’s where security needs to be focused.
The organizations that recognize this shift early will set themselves up to scale AI safely. Those that don’t may find that their most advanced systems, built on outdated trust models, are highly vulnerable.
In modern AI, security isn’t defined by the perimeter. It’s defined by what happens inside it.
This article was produced as part of TechRadar Pro Perspectives, our channel to feature the best and brightest minds in the technology industry today.
The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/pro/perspectives-how-to-submit
An anonymous reader quotes a report from the Financial Times (via Ars Technica): Amazon employees are using an internal AI tool to automate non-essential tasks in a bid to show managers they are using the technology more frequently. The Seattle-based group has started to widely deploy its in-house “MeshClaw” product in recent weeks, allowing employees to create AI agents that can connect to workplace software and carry out tasks on a user’s behalf, according to three people familiar with the matter. Some employees said colleagues were using the software to automate additional, unnecessary AI activity to increase their consumption of tokens — units of data processed by models. They said the move reflected pressure to adopt the technology after Amazon introduced targets for more than 80 percent of developers to use AI each week, and earlier this year began tracking AI token consumption on internal leaderboards.
“There is just so much pressure to use these tools,” one Amazon employee told the FT. “Some people are just using MeshClaw to maximize their token usage.” Amazon has told employees that the AI token statistics would not be used in performance evaluations. But several staff members said they believed managers were monitoring the data. “Managers are looking at it,” said another current employee. “When they track usage it creates perverse incentives and some people are very competitive about it.”