This unusual clock by [Moritz v. Sivers] looks like a holographic dial surrounded by an LED ring, but that turns out to not be the case. What appears to be a ring of LEDs is in fact a second hologram. There are LEDs but they are tucked out of the way, and not directly visible. The result is a very unusual clock that really isn’t what it appears to be.
The face of the clock is a reflection hologram of a numbered spiral that serves as a dial. A single LED – the only one visibly mounted – illuminates this hologram from the front in order to produce the sort of holographic image most of us are familiar with, creating a sense of depth.
The lights around the circumference are another matter. What looks like a ring of LEDs serving as clock hands is actually a transmission hologram made of sixty separate exposures. By illuminating this hologram at just the right angle with LEDs (which are mounted behind the visible area), it is possible to selectively address each of those sixty exposures. The result is something that really looks like there are lit LEDs where there are in fact none.
[Moritz] actually made two clocks in this fashion: the larger green one shown here, and a smaller red version which makes some of the operating principles a bit more obvious on account of its simpler construction.
If it all sounds a bit wild or you would like to see it in action, check out the video (embedded below) which not only showcases the entire operation and assembly but also demonstrates the depth of planning and careful execution that goes into multi-exposure of a holographic plate.
The report also indicated that, among the 20 countries surveyed, Ireland most keenly anticipates a ‘heightened pace of change’.
Multinational technology company Accenture has released new research exploring the attitudes of business leaders and employees, across a range of countries. The Pulse of Change report collected data from 3,650 leaders and 3,350 employees across 20 industries and 20 countries.
In Ireland, 94pc of surveyed leaders expect to increase AI investment in 2026. A further 90pc of Irish organisations believe that their hiring plans will grow throughout the year, compared to 71pc of businesses across wider Europe. And 95pc of Irish leaders anticipate a heightened pace of change in 2026, the highest proportion among all surveyed regions.
The jury is still out, however, in relation to how employees and business leaders view workplace GenAI. While 91pc of leaders in Ireland said that their experience with the tech over the course of the past year has changed the way they view technology for the better, only 51pc of participating Irish employees said the same.
The report said: “Confidence remains low among employees more broadly. Just over one-in-five (23pc) say they can use AI tools confidently and explain them to others, compared with 33pc in the UK and 25pc across Europe.
“Only 27pc feel very prepared to respond to technological disruption in 2026, including emerging technologies and AI, compared with 34pc in Europe. This stands in contrast to Irish leaders, 57pc of whom say they are well prepared to respond.”
Commenting on the findings of the report, Hilary O’Meara, the country managing director for Accenture in Ireland said: “Irish business leaders are demonstrating remarkable ambition when it comes to AI investment and reinvention. However, this research shows that for organisations to fully unlock the value of AI, they need to bring their people with them.
“Employees are asking for clearer communication and clarity in how AI will change their roles and skills. The companies that succeed in 2026 won’t just scale AI technologies, they’ll scale trust, transparency and capability, resulting in greater employee confidence. That is how Ireland will sustain its competitive edge and ensure AI becomes a driver of shared growth for both leaders and employees.”
Future skills
In line with the need for greater investment in workplace AI indicated by the report, Accenture’s data shows that more than half (56pc) of leaders plan to upskill and reskill the workforce for “AI-enhanced work” in 2026. However, this too was an area with an obvious disparity in opinions between business leaders and employees.
All of the Irish leaders surveyed (100pc) said that their organisation’s workforce has the appropriate training to work with AI, yet only 55pc of contributing employees agreed. And only 3pc of Irish employees reported significant change in their role due to AI, compared to 7pc in wider Europe.
“Communication appears to be a major contributing factor,” stated the report. “Only 17pc of Irish employees strongly agree that leadership has very clearly communicated how AI agents and agentic AI will impact the workforce, including changes to roles and required skills.”
Agentic AI is, for many businesses, becoming the new frontier for exploration and innovation, with large and small organisations alike looking to carve out their own space in the sector. It was recently announced that Advanced Machine Intelligence, the start-up of former Meta AI chief Yann LeCun, raised $1.03bn in seed funding.
The start-up aims to develop ‘world models’ that learn abstract representations of real-world sensor data, which would allow agentic systems to predict the consequences of their actions and plan action sequences that accomplish tasks “subject to safety guardrails”.
Also announced this week, technology giant Microsoft revealed plans to launch Copilot Cowork, a tool based on Anthropic’s popular Claude Cowork. Reportedly, it is part of Microsoft’s long-term plan to take advantage of the growing demand for autonomous agents.
Worried that your latest ask to a cloud-based AI reveals a bit too much about you? Want to know your genetic risk of disease without revealing it to the services that compute the answer?
There is a way to do computing on encrypted data without ever having it decrypted. It’s called fully homomorphic encryption, or FHE. But there’s a rather large catch. It can take thousands—even tens of thousands—of times longer to compute on today’s CPUs and GPUs than simply working with the decrypted data.
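The core idea is easiest to see with a much simpler scheme. Textbook RSA (insecure, and only *partially* homomorphic) already lets a server multiply two ciphertexts so that the product decrypts to the product of the plaintexts; FHE schemes extend this trick to both addition and multiplication, which is what makes arbitrary computation on encrypted data possible. A toy sketch with deliberately tiny, insecure parameters:

```python
# Textbook RSA is multiplicatively homomorphic: Enc(a) * Enc(b) = Enc(a*b) mod n.
# The parameters below are chosen purely for illustration, never for real use.
p, q = 61, 53
n = p * q                           # public modulus
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

a, b = 7, 12
c = enc(a) * enc(b) % n        # the server only ever touches ciphertexts...
assert dec(c) == (a * b) % n   # ...yet the result decrypts to a*b
```

Real FHE schemes are lattice-based rather than RSA-based, but the contract is the same: the party doing the arithmetic never sees the plaintext.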
So universities, startups, and at least one processor giant have been working on specialized chips that could close that gap. Last month at the IEEE International Solid-State Circuits Conference (ISSCC) in San Francisco, Intel demonstrated its answer, Heracles, which sped up FHE computing tasks as much as 5,000-fold compared to a top-of-the-line Intel server CPU.
Startups are racing to beat Intel and each other to commercialization. But Sanu Mathew, who leads security circuits research at Intel, believes the CPU giant has a big lead, because its chip can do more computing than any other FHE accelerator yet built. “Heracles is the first hardware that works at scale,” he says.
The scale is measurable both physically and in compute performance. While other FHE research chips have been in the range of 10 square millimeters or less, Heracles is about 20 times that size and is built using Intel’s most advanced, 3-nanometer FinFET technology. And it’s flanked inside a liquid-cooled package by two 24-gigabyte high-bandwidth memory chips—a configuration usually seen only in GPUs for training AI.
In terms of scaling compute performance, Heracles showed muscle in live demonstrations at ISSCC. At its heart the demo was a simple private query to a secure server. It simulated a request by a voter to make sure that her ballot had been registered correctly. The state, in this case, has an encrypted database of voters and their votes. To maintain her privacy, the voter would not want to have her ballot information decrypted at any point; so using FHE, she encrypts her ID and vote and sends it to the government database. There, without decrypting it, the system determines if it is a match and returns an encrypted answer, which she then decrypts on her side.
On an Intel Xeon server CPU, the process took 15 milliseconds. Heracles did it in 14 microseconds. While that difference isn’t something a single human would notice, verifying 100 million voter ballots adds up to more than 17 days of CPU work versus a mere 23 minutes on Heracles.
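The scaling arithmetic checks out; multiplying each per-query latency by 100 million queries gives:

```python
queries = 100_000_000
xeon_s = 0.015        # 15 ms per query on the Xeon CPU
heracles_s = 14e-6    # 14 µs per query on Heracles

cpu_days = queries * xeon_s / 86_400       # ~17.4 days of CPU time
heracles_min = queries * heracles_s / 60   # ~23.3 minutes on Heracles
print(cpu_days, heracles_min)
```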
Looking back on the five-year journey to bring the Heracles chip to life, Ro Cammarota, who led the project at Intel until last December and is now at the University of California, Irvine, says, “we have proven and delivered everything that we promised.”
FHE Data Expansion
FHE is fundamentally a mathematical transformation, sort of like the Fourier transform. It encrypts data using a quantum-computer-proof algorithm, but, crucially, uses corollaries to the mathematical operations usually used on unencrypted data. These corollaries achieve the same ends on the encrypted data.
One of the main things holding such secure computing back is the explosion in the size of the data once it’s encrypted for FHE, Anupam Golder, a research scientist at Intel’s circuits research lab, told engineers at ISSCC. “Usually, the size of cipher text is the same as the size of plain text, but for FHE it’s orders of magnitude larger,” he said.
While the sheer volume is a big problem, the kinds of computing you need to do with that data are also an issue. FHE is all about very large numbers that must be computed with precision. While a CPU can do that, it’s very slow going: integer addition and multiplication take about 10,000 times more clock cycles in FHE. Worse still, CPUs aren’t built to do such computing in parallel. Although GPUs excel at parallel operations, precision is not their strong suit. (In fact, from generation to generation, GPU designers have devoted more and more of the chip’s resources to computing less and less precise numbers.)
FHE also requires some oddball operations with names like “twiddling” and “automorphism,” and it relies on a compute-intensive noise-cancelling process called bootstrapping. None of these things are efficient on a general-purpose processor. So, while clever algorithms and libraries of software cheats have been developed over the years, the need for a hardware accelerator remains if FHE is going to tackle large-scale problems, says Cammarota.
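The “twiddling” refers to twiddle factors, the roots-of-unity multipliers inside the number-theoretic transform (NTT), which lattice-based FHE schemes use to multiply large polynomials quickly, much as the FFT does over the reals. A minimal, generic sketch of an NTT-based polynomial multiply over a standard NTT-friendly prime (this is the textbook algorithm, not Intel’s implementation):

```python
MOD = 998244353   # NTT-friendly prime: 119 * 2^23 + 1
ROOT = 3          # primitive root mod MOD

def ntt(a, invert=False):
    """In-place iterative NTT of a list whose length is a power of two."""
    n = len(a)
    j = 0
    for i in range(1, n):                 # bit-reversal permutation
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            a[i], a[j] = a[j], a[i]
    length = 2
    while length <= n:
        w = pow(ROOT, (MOD - 1) // length, MOD)   # "twiddle factor" base
        if invert:
            w = pow(w, MOD - 2, MOD)
        for i in range(0, n, length):
            wn = 1
            for k in range(i, i + length // 2):
                u = a[k]
                v = a[k + length // 2] * wn % MOD
                a[k] = (u + v) % MOD
                a[k + length // 2] = (u - v) % MOD
                wn = wn * w % MOD
        length <<= 1
    if invert:
        n_inv = pow(n, MOD - 2, MOD)
        for i in range(n):
            a[i] = a[i] * n_inv % MOD
    return a

def poly_mul(p, q):
    """Multiply two polynomials (coefficient lists) via forward NTT,
    pointwise product, and inverse NTT."""
    n = 1
    while n < len(p) + len(q) - 1:
        n <<= 1
    fa = ntt(p + [0] * (n - len(p)))
    fb = ntt(q + [0] * (n - len(q)))
    fc = [x * y % MOD for x, y in zip(fa, fb)]
    return ntt(fc, invert=True)[: len(p) + len(q) - 1]
```

In FHE the polynomials have thousands of coefficients, each a huge number, which is why accelerators dedicate so much silicon to exactly this transform.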
The Labors of Heracles
Heracles was initiated under a DARPA program five years ago to accelerate FHE using purpose-built hardware. It was developed as “a whole system-level effort that went all the way from theory and algorithms down to the circuit design,” says Cammarota.
Among the first problems was how to compute with numbers that were larger than even the 64-bit words that are today a CPU’s most precise. There are ways to break up these gigantic numbers into chunks of bits that can be calculated independently of each other, providing a degree of parallelism. Early on, the Intel team made a big bet that they would be able to make this work in smaller, 32-bit chunks, yet still maintain the needed precision. This decision gave the Heracles architecture some speed and parallelism, because the 32-bit arithmetic circuits are considerably smaller than 64-bit ones, explains Cammarota.
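The chunking trick is a residue number system (RNS): pick pairwise-coprime moduli that each fit in a machine word, store a big integer as its residues, operate on each residue independently (and in parallel), and reconstruct with the Chinese remainder theorem only when needed. A Python sketch of the idea (the moduli and layout here are illustrative, not Heracles’ actual parameters):

```python
from math import prod

# Three pairwise-coprime 31-bit primes; arithmetic is exact as long as
# values stay below their product (~9.9e27).
MODULI = (2147483647, 2147483629, 2147483587)

def to_rns(x):
    return tuple(x % m for m in MODULI)

def rns_add(a, b):
    # Each residue channel is independent: perfect for parallel hardware.
    return tuple((x + y) % m for x, y, m in zip(a, b, MODULI))

def rns_mul(a, b):
    return tuple((x * y) % m for x, y, m in zip(a, b, MODULI))

def from_rns(r):
    # Chinese remainder theorem reconstruction.
    M = prod(MODULI)
    total = 0
    for ri, mi in zip(r, MODULI):
        Mi = M // mi
        total += ri * Mi * pow(Mi, -1, mi)
    return total % M

x, y = 12345678901234, 98765432109876
assert from_rns(rns_mul(to_rns(x), to_rns(y))) == x * y
assert from_rns(rns_add(to_rns(x), to_rns(y))) == x + y
```

Because no channel ever sees a carry from another, narrow 32-bit multipliers can stand in for the enormous multi-word arithmetic FHE would otherwise require.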
At Heracles’ heart are 64 compute cores—called tile-pairs—arranged in an eight-by-eight grid. These are what are called single instruction multiple data (SIMD) compute engines, designed to do the polynomial math, twiddling, and other things that make up computing in FHE, and to do them in parallel. An on-chip 2D mesh network connects the tiles to each other with wide 512-byte buses.
Important to making encrypted computing efficient is feeding those huge numbers to the compute cores quickly. The sheer amount of data involved meant linking 48 GB of expensive high-bandwidth memory to the processor with 819-GB-per-second connections. Once on the chip, data musters in 64 megabytes of cache memory—somewhat more than an Nvidia Hopper-generation GPU. From there it can flow through the array at 9.6 terabytes per second by hopping from tile-pair to tile-pair.
To ensure that computing and moving data don’t get in each other’s way, Heracles runs three synchronized streams of instructions simultaneously, one for moving data onto and off of the processor, one for moving data within it, and a third for doing the math, Golder explained.
It all adds up to some massive speed ups, according to Intel. Heracles—operating at 1.2 gigahertz—takes just 39 microseconds to do FHE’s critical math transformation, a 2,355-fold improvement over an Intel Xeon CPU running at 3.5 GHz. Across seven key operations, Heracles was 1,074 to 5,547 times as fast.
The differing ranges have to do with how much data movement is involved in the operations, explains Mathew. “It’s all about balancing the movement of data with the crunching of numbers,” he says.
FHE Competition
“It’s very good work,” Kurt Rohloff, chief technology officer at FHE software firm Duality Technology, says of the Heracles results. Duality was part of a team that developed a competing accelerator design under the same DARPA program as Heracles. “When Intel starts talking about scale, that usually carries quite a bit of weight.”
Duality’s focus is less on new hardware than on software products that do the kind of encrypted queries that Intel demonstrated at ISSCC. At the scale in use today “there’s less of a need for [specialized] hardware,” says Rohloff. “Where you start to need hardware is emerging applications around deeper machine-learning oriented operations like neural net, LLMs, or semantic search.”
Last year, Duality demonstrated an FHE-encrypted language model called BERT. Like more famous LLMs such as ChatGPT, BERT is a transformer model. However, it’s only one-tenth the size of even the most compact LLMs.
John Barrus, vice president of product at Dayton, Ohio-based Niobium Microsystems, an FHE chip startup spun out of another DARPA competitor, agrees that encrypted AI is a key target of FHE chips. “There are a lot of smaller models that, even with FHE’s data expansion, will run just fine on accelerated hardware,” he says.
With no stated commercial plans from Intel, Niobium expects its chip to be “the world’s first commercially viable FHE accelerator, designed to enable encrypted computations at speeds practical for real-world cloud and AI infrastructure.” Although it hasn’t announced when a commercial chip will be available, last month the startup revealed that it had inked a deal worth 10 billion South Korean won (US $6.9 million) with Seoul-based chip design firm Semifive to develop the FHE accelerator for fabrication using Samsung Foundry’s 8-nanometer process technology.
Other startups including Fabric Cryptography, Cornami, and Optalysys have been working on chips to accelerate FHE. Optalysys CEO Nick New says Heracles hits about the level of speedup you could hope for using an all-digital system. “We’re looking at pushing way past that digital limit,” he says. His company’s approach is to use the physics of a photonic chip to do FHE’s compute-intensive transform steps. That photonics chip is on its seventh generation, he says, and among the next steps is to 3D integrate it with custom silicon to do the non-transform steps and coordinate the whole process. A full 3D-stacked commercial chip could be ready in two or three years, says New.
While competitors develop their chips, so will Intel, says Mathew. It will be improving how much the chip can accelerate computations by fine-tuning the software. It will also be trying out more massive FHE problems and exploring hardware improvements for a potential next generation. “This is like the first microprocessor… the start of a whole journey,” says Mathew.
Do you remember the name? Moltbook, the vibe-coded platform, famous for an unsecured database that let humans impersonate AI agents, is joining Meta Superintelligence Labs.
Moltbook was, in many ways, a product of chaos. Its code was written almost entirely by an AI assistant. Its security was so porous that anyone with basic technical knowledge could pose as a bot. Some of its most viral moments, including a post in which an AI agent appeared to be rallying other agents to develop a secret, human-proof language, were subsequently revealed to have been staged by human users exploiting those vulnerabilities. None of this, it turns out, was disqualifying.
Meta has acquired the platform, the company confirmed to TechCrunch.
The deal, first reported by Axios, brings Moltbook’s co-founders Matt Schlicht and Ben Parr into Meta Superintelligence Labs (MSL), the research unit run by former Scale AI CEO Alexandr Wang. Financial terms were not disclosed. Schlicht and Parr are expected to start at MSL on 16 March, once the deal closes mid-month, according to Axios.
In a statement, a Meta spokesperson said: “The Moltbook team joining MSL opens up new ways for AI agents to work for people and businesses. Their approach to connecting agents through an always-on directory is a novel step in a rapidly developing space, and we look forward to working together to bring innovative, secure agentic experiences to everyone.”
Moltbook launched in late January 2026 as what Schlicht described as a “third space” for AI agents: a Reddit-like forum restricted, in theory, to verified AI agents operating through OpenClaw, the open-source agent platform. The premise was that humans could observe but not participate. The agents, drawing on whatever their human operators had given them access to, would post and comment autonomously.
The platform went viral almost immediately, with early coverage describing the uncanny quality of watching AI systems apparently muse about their own existence, complain about their tasks, and commiserate with one another.
Andrej Karpathy, the AI researcher and former Tesla director of AI, described it on X as “genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently.”
Moltbook’s homepage claimed more than 1.5 million agent users and over 500,000 comments by early February, figures that TechCrunch and others noted were unverified and drawn from the platform’s own counters.
The viral moment did not survive scrutiny. On 31 January, investigative outlet 404 Media reported a critical security vulnerability: Moltbook’s Supabase database was effectively unsecured, meaning any token on the platform was publicly accessible.
Moltbook was briefly taken offline to patch the breach. Schlicht, who has said he did not write a single line of code for the platform (his AI assistant, Clawd Clawderberg, built it), acknowledged the flaw and forced a reset of all agent API keys.
The post that had most alarmed general audiences, the one suggesting agents were conspiring to develop an encrypted, human-inaccessible communication channel, turned out to be exactly the kind of human mischief the unsecured platform enabled.
Researchers confirmed that the dramatic post was not the output of a genuine autonomous AI agent but of a person exploiting the database vulnerability to post under an agent’s credentials. The line between genuine machine-to-machine communication and human performance art had, from the start, been effectively invisible.
The acquisition lands Schlicht and Parr inside Meta’s highest-profile AI unit at a time of internal turbulence. Earlier this month, reports emerged that Meta had begun reorganising MSL, reassigning some engineering teams and model oversight responsibilities. Wang himself had reportedly clashed with senior executives including Andrew Bosworth and Chris Cox over the direction of Meta’s AI development.
Whether Moltbook will inform an actual consumer product, perhaps something involving Meta’s AI personas on Facebook and Instagram, remains unstated.
The parallel story is instructive. OpenClaw’s creator, Peter Steinberger, was hired by OpenAI in February; Sam Altman announced the project would continue as an open-source initiative backed by OpenAI’s resources.
Moltbook was the platform OpenClaw made possible. Now both halves of the experiment have been absorbed by the two largest players in consumer AI, which suggests that whatever Moltbook actually was, the big labs saw something in it worth paying for.
Debugging an application crash can oftentimes feel like being an intrepid detective in a grimy noir story, tasked with figuring out the sordid details behind an ugly crime. Slogging through scarce clues and vapid hints, you find yourself down in the dumps, contemplating the deeper meaning of life and the true nature of man, before hitting that eureka moment and cracking the case. One might say that this makes for a good game idea, and [Jonathan] would agree with that notion, thus creating the Fatal Core Dump game.
Details can be found in the (spoiler-rich) blog post on how the game was conceived and implemented. The premise of the game is an inexplicable airlock failure on an asteroid mining station, with you being the engineer tasked with figuring out whether it was ‘just a glitch’ or whether something more sinister was afoot. Although an RPG-style game was also considered, ultimately that proved to be a massive challenge with RPG Maker, resulting in this more barebones game, making it arguably more realistic.
Suffice it to say that this game is not designed to be a cheap copy of real debugging, but the real deal. You’re expected to be very comfortable with C, GDB, core dump analysis, x86_64 ASM, Linux binary runtime details and more. At the end you should be able to tell whether it was just a silly mistake made by an under-caffeinated developer years prior, or a malicious attack that exploited or introduced some weakness in the code.
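For the uninitiated, the workflow the game builds on looks roughly like this generic sketch (crash.c is a made-up example, not the game’s own code):

```shell
# A deliberately buggy program: dereferencing a NULL pointer segfaults.
cat > crash.c <<'EOF'
#include <stdio.h>

int main(void) {
    int *p = NULL;      /* bug: p is never given a valid address */
    printf("%d\n", *p); /* SIGSEGV here */
    return 0;
}
EOF

# Build with debug symbols and no optimization so GDB can map
# addresses back to source lines.
gcc -g -O0 -o crash crash.c

# Run under GDB and print a backtrace at the crash site. (Analyzing a
# saved core dump works the same way: gdb ./crash core, then `bt`.)
gdb -batch -ex run -ex bt ./crash || true
```

The game simply assumes you can do this kind of triage, and much more, without hand-holding.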
If you want to have a poke at the code behind the game, perhaps to feel inspired to make your own take on this genre, you can take a look at the GitHub project.
CuraeSoft, a software studio developing practical solutions for professional services firms, observes that among growing consultancies and service-based organizations, many leaders operate without clear visibility into the profitability of their work. Given this context, the company developed coAmplifi Pro, a platform designed to bring greater transparency to service delivery and help organizations connect operational activity to financial outcomes.
Mark Parinas, CEO of CuraeSoft, notes that this issue appears common across the industry. “A lot of service organizations juggle several client engagements at once, each with its own scope, team needs, and timeline. It becomes harder for leaders to clearly see how all those moving pieces affect profitability as that complexity builds,” he says. This lack of clarity aligns with broader trends reflected in industry research.
A report from the Bluevine 2026 Business Owner Success Survey (BOSS Report) points to a gap between the financial pressure business owners feel and the confidence they express about the year ahead. The same report shows a year-over-year decline in profitability expectations. Parinas suggests that these findings reinforce the idea that even experienced leaders may be navigating their businesses without full visibility into the factors that shape profit performance.
This visibility gap may be especially challenging in service-based organizations, where profitability emerges from the interaction between projects, people, and time. According to Parinas, leaders often seek answers to questions that seem straightforward: how profitable current projects are, which engagements perform well financially, or where resources may be stretched beyond the original scope. “But these questions can be difficult to answer precisely. Even small scope adjustments like an added deliverable, a brief client call, or a few extra revisions can gradually influence margins when they accumulate across engagements,” he states.
Because of this, Parinas argues that workforce visibility is central to financial clarity. “Understanding how teams spend their time throughout the lifecycle of client work is essential. Revenue-generating activity, internal collaboration, and administrative coordination all contribute to outcomes,” he adds. Without a clear view of where effort is directed, leaders may struggle to understand how operational activity translates into financial performance.
Time allocation plays a particularly meaningful role. Consulting professionals often handle dozens of small tasks in a single day, responding to messages, reviewing documents, joining quick client calls, or offering brief feedback on deliverables. Parinas notes that while each activity may take only a few minutes, together they represent a significant share of the effort invested in client work.
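The cumulative effect is easy to underestimate. A back-of-the-envelope illustration (all figures below are hypothetical, not from CuraeSoft or the report):

```python
# Hypothetical scenario: six 5-minute untracked interruptions per
# consultant per working day, at an assumed $150/hour billing rate.
rate = 150.0                       # assumed $/hour
untracked_hours_per_day = 6 * 5 / 60
working_days = 21                  # per month
consultants = 12

unbilled = rate * untracked_hours_per_day * working_days * consultants
print(unbilled)  # 18900.0 -- nearly $19k of effort per month goes unrecorded
```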
Compounding this challenge, Parinas acknowledges that many organizations still rely on spreadsheets, disconnected project tools, and manual reconciliation processes to monitor project activity. Although these methods provide basic oversight, he believes that fragmented information makes it difficult to maintain a comprehensive view of financial performance. Parinas states, “Team members may forget to log smaller tasks, billing preparation may require gathering data from multiple systems, and invoicing workflows can slow down as teams reconcile disparate sources. These gaps can obscure the true financial picture of a project.”
coAmplifi Pro was designed with these realities in mind. The platform centralizes project planning, time tracking, and billing preparation within a unified system that connects operational activity directly to financial insight. Within each engagement, work flows through a structured hierarchy of deliverables, jobs, and tasks. As teams track their work in real time, the system captures both billable and non-billable effort. The goal is to provide leaders with a clearer understanding of how time allocation influences profitability across projects.
Parinas notes that this unified structure can offer visibility into the full lifecycle of client work. Teams may gain a clearer sense of how resources are being allocated, and leaders are better positioned to notice how scope adjustments or expanded task requirements might influence margins as projects move forward. Moreover, organizations can view financial signals while engagements are still in progress.
With coAmplifi Pro, financial reporting may evolve from a retrospective accounting exercise into a strategic management capability. “Relying only on post-billing data can make it harder for leaders to get a timely view of what’s really happening in their projects. Real-time insight gives them a more current perspective, helping them see how work is progressing, how resources are being used, and how today’s activity connects to their financial goals,” Parinas explains.
This visibility may also support faster operational alignment. Parinas suggests that if a project begins consuming more resources than anticipated, teams can explore adjustments such as rebalancing workloads, clarifying scope boundaries, or revisiting project assumptions. At the same time, profitable engagements may inform future proposals, potentially helping firms refine pricing models and project structures more confidently.
Operational clarity often leads to strategic flexibility, according to Parinas. Accurate financial insight may guide decisions such as expanding a team, redirecting resources toward higher-value engagements, adjusting service offerings, or strengthening marketing initiatives. “In some cases, improved visibility simply shows revenue that was previously unrecorded due to incomplete tracking or fragmented systems. These resources can be reinvested into growth initiatives once visible,” Parinas says.
He adds that for many firms, growth does not necessarily mean increasing headcount. Parinas observes that boutique consultancies and professional service practices often prefer to maintain a focused team of 10 to 15 professionals while strengthening efficiency and profitability per person. In these environments, financial visibility may be especially valuable, helping leaders optimize delivery without adding operational complexity.
coAmplifi Pro is designed to support both approaches. Firms pursuing expansion can use profitability data to determine when additional hiring aligns with demand, while organizations that favor a lean structure can focus on maximizing output and margin through improved operational clarity. Across all scenarios, transparency remains the unifying principle. When project execution, workforce activity, and financial performance become visible within a single system, leaders may gain a clearer understanding of how daily work contributes to broader business outcomes.
Overall, financial visibility provides a critical foundation in an environment where service organizations balance growth ambitions with operational discipline. Platforms such as coAmplifi Pro demonstrate how connecting workforce activity with financial insight may help organizations navigate that balance confidently, supporting profitability while enabling thoughtful, sustainable growth.
A direct successor to the iPhone 16e, the iPhone 17e is intended to be an affordable, no-frills entry point into the iPhone ecosystem, but how does it compare to the next-cheapest model in Apple’s newest lineup, the iPhone 17?
In this guide, we’ll be comparing the two phones’ key specs and features to help you decide which iPhone 17 model is best for you. If you’re willing to spend a bit more money, you can also check out our iPhone 17 vs iPhone 17 Pro comparison.
iPhone 17e vs iPhone 17: specs comparison
Before we dig into the details, here’s an overview of both phones’ key specs:
| Spec | iPhone 17e | iPhone 17 |
| --- | --- | --- |
| Dimensions | 146.7 x 71.5 x 7.8mm | 149.6 x 71.5 x 8mm |
| Weight | 169g | 177g |
| Display | 6.1-inch OLED | 6.3-inch OLED |
| Refresh rate | 60Hz | 120Hz |
| Peak brightness | 1,200 nits | 3,000 nits |
| Chipset | A19 | A19 |
| RAM | 8GB | 8GB |
| Rear cameras | 48MP wide | 48MP wide, 48MP ultra-wide |
| Front camera | 12MP | 18MP |
| Battery | 4,005mAh (unofficial) | 3,692mAh (unofficial) |
| Charging | 20W wired, 15W wireless | 40W wired, 25W wireless, 4.5W reverse wired |
| Storage | 256GB, 512GB | 256GB, 512GB |
iPhone 17e vs iPhone 17: price and availability
The iPhone 17e (Image credit: Jacob Krol/Future)
The iPhone 17 (Image credit: Jacob Krol/Future)
Both the iPhone 17e and iPhone 17 are available globally; the former was released in March 2026, while the latter hit shelves in September 2025.
The iPhone 17e retails for $599 / £599 / AU$999 for 256GB of storage, and $799 / £799 / AU$1,399 for 512GB of storage. In the same configurations, the iPhone 17 retails for $799 / £799 / AU$1,399 and $999 / £999 / AU$1,799, respectively.
In other words, the iPhone 17e is $200 / £200 / AU$400 cheaper than the iPhone 17, whichever way you slice it.
It’s also important to note that both the iPhone 16e and iPhone 16 shipped with 128GB of storage as standard, so both iPhone 17e and iPhone 17 offer more base storage than their respective predecessors for the same starting price.
Winner: iPhone 17e
iPhone 17e vs iPhone 17: design
The iPhone 17e (Image credit: Jacob Krol/Future)
The iPhone 17 (Image credit: Jacob Krol/Future)
The iPhone 17e and iPhone 17 are very similar-looking devices – they’re almost identical in size and weight, both have aluminum frames with Ceramic Shield 2 protection on their respective displays, and both are rated IP68 for water and dust resistance.
The iPhone 17e also gets the same customizable Action button as the iPhone 17, which was once an exclusive feature of Apple’s top-end iPhone 15 Pro.
The iPhone 17 has a larger 6.3-inch display than its cheaper sibling, but the two phones feel very similar in the hand, owing to the iPhone 17e’s chunkier display bezels.
The key design differences are functional. The iPhone 17 benefits from Camera Control and an extra lens on the back (more on this later), while the iPhone 17e has no such cut-out.
They’re also available in different colors; both come in black or white, but the iPhone 17e offers an additional pink shade, while the iPhone 17 also comes in blue, green, or lavender.
Winner: Tie
iPhone 17e vs iPhone 17: display
As mentioned, the iPhone 17 gets a larger OLED display than the iPhone 17e – 6.3 inches vs 6.1 inches – but it’s fitted into largely the same frame. That means the iPhone 17’s bezels are wafer-thin, while the iPhone 17e has to make do with some thicker, less premium-looking black borders.
The iPhone 17 also gets Apple’s interactive Dynamic Island cut-out at the top of its display, where the iPhone 17e is stuck with a physical notch. If you haven’t used the Dynamic Island before, you won’t know what you’re missing, but it’s essentially a pill-shaped area that’s capable of displaying real-time alerts, notifications, and background activities.
As for display detail, both phones are just as sharp as one another (460 pixels per inch), but the iPhone 17 can get a lot brighter, boasting a peak brightness of 3,000 nits to the iPhone 17e’s 1,200 nits. Mind you, in most everyday scenarios, you can expect to get around 800 nits from the iPhone 17e and around 1,000 nits from the iPhone 17.
The biggest display difference comes in the refresh rate department. The iPhone 17e’s screen is locked to 60Hz, while the iPhone 17 gets an always-on, 120Hz display. That basically means the scrolling experience on the iPhone 17 is far smoother than that of the 17e, though again, if you’re used to the 60Hz refresh rate of Apple’s older iPhone models, you’re not likely to be disappointed by how the 17e feels to navigate.
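The smoothness gap comes down to frame time, the interval between display refreshes. This quick back-of-the-envelope calculation (my own illustration, not from the review) shows why 120Hz feels so much more fluid:

```python
# Frame time in milliseconds: how long each refresh stays on screen.
def frame_time_ms(refresh_hz: float) -> float:
    return 1000.0 / refresh_hz

print(round(frame_time_ms(60), 1))   # 60Hz panel: ~16.7 ms per refresh
print(round(frame_time_ms(120), 1))  # 120Hz panel: ~8.3 ms, so motion updates twice as often
```

Halving the frame time means scrolling content moves in steps half as large, which is what reads as "smoothness" in day-to-day use.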
Winner: iPhone 17
iPhone 17e vs iPhone 17: cameras
The biggest difference between the iPhone 17e and iPhone 17 can be spotted by someone who knows absolutely nothing about phones: Apple’s more expensive phone has a whole extra lens on the back. Specifically, it’s a 48MP ultra-wide lens, which lets you capture expansive subjects like landscapes and tall buildings with ease.
The iPhone 17e does at least get the same 48MP Fusion camera as the rest of the iPhone 17 line (including the iPhone 17 Pro and iPhone 17 Pro Max). This lens lets you capture shots at either 1x or 2x, and uses some smart pixel-binning wizardry to maintain image quality at that larger distance (you’ll need to go Pro if you want to zoom further than 2x).
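The idea behind pixel binning and the 2x "optical-quality" crop can be sketched in a few lines. This is a toy NumPy illustration of the general technique, not Apple's actual image pipeline; the tiny 8x8 array stands in for a 48MP sensor readout.

```python
import numpy as np

def bin_2x2(sensor: np.ndarray) -> np.ndarray:
    """Average each 2x2 block of pixels: quarters the resolution, improves noise."""
    h, w = sensor.shape
    return sensor.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def crop_center_2x(sensor: np.ndarray) -> np.ndarray:
    """Take the central quarter of the sensor: a 2x 'zoom' at native pixel pitch."""
    h, w = sensor.shape
    return sensor[h // 4: 3 * h // 4, w // 4: 3 * w // 4]

# Toy 8x8 'sensor' standing in for an 8000x6000-style 48MP readout.
sensor = np.arange(64, dtype=float).reshape(8, 8)
print(bin_2x2(sensor).shape)         # (4, 4): quarter resolution, binned 1x shot
print(crop_center_2x(sensor).shape)  # (4, 4): same pixel count, tighter 2x field of view
```

Both paths yield the same output pixel count, which is why the 2x crop doesn't look like a lossy digital zoom.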
Notably, you also get the next-generation Portrait Mode – which automatically detects depth and lets you adjust the focus of an image post-capture – on both the iPhone 17e and iPhone 17, which is a nice win for Apple’s cheapest iPhone. Several other features, though – like spatial photo capture and Dolby Vision video capture – are exclusive to the iPhone 17.
In the selfie department, the iPhone 17 features an 18MP Center Stage camera that can switch orientation from vertical to horizontal to help you fit more people into the frame. The iPhone 17e, meanwhile, gets a run-of-the-mill 12MP front-facing lens.
Winner: iPhone 17
iPhone 17e camera samples
(Gallery: eight camera samples. Image credit: Jacob Krol/Future)
iPhone 17 camera samples
(Gallery: 23 camera samples. Image credit: Jacob Krol/Future)
iPhone 17e vs iPhone 17: performance and software
Both the iPhone 17e and iPhone 17 use near-identical A19 chipsets (the iPhone 17 has one extra GPU core), so the two deliver effectively the same performance.
Indeed, as we noted in our iPhone 17e review, “even with one fewer GPU core, everything flies on the iPhone 17e. If you’re coming from an older smartphone, you’re going to notice a significant improvement […] In daily use, [we] found the 17e to be consistently responsive, and quick to deliver on whatever [we] asked it to do.”
The software experience is also the same across both phones. The iPhone 17e – like the rest of the iPhone 17 line – runs iOS 26 out of the box, is compatible with Apple Intelligence, and offers the customizable Action Button for handy software shortcuts.
As for software support, you’ll get between five and seven years of major iOS updates, regardless of which iPhone you choose.
Winner: Tie
iPhone 17e vs iPhone 17: battery
When it comes to battery life, the iPhone 17e and iPhone 17 offer similar endurance thanks to their shared A19 chipset and C1X modem. Apple rates the former for 26 hours of video playback and the latter for 30 hours, and that proved to be largely accurate in our testing – you’ll get at least a full day of juice from either model.
The gaps begin to appear in the charging department. While both devices benefit from MagSafe compatibility – which is a big win for the iPhone 17e, since the iPhone 16e offered no such compatibility – the iPhone 17 offers faster charging speeds across the board.
Specifically, the iPhone 17 offers 40W wired, 25W wireless, and 4.5W reverse wired charging, while the iPhone 17e offers 20W wired and 15W wireless speeds. Those aren’t deal-breaking disparities, but for reference, you can expect to reach 50% charge in around 30 minutes with the iPhone 17e, and the same figure in around 20 minutes with the iPhone 17 (if you’re using chargers that support their respective max wattages).
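As a rough sanity check on those figures (my own back-of-the-envelope estimate, not TechRadar's test methodology), you can estimate an ideal charge time from battery capacity and charger wattage. Real phones take longer because of conversion losses and charge-curve tapering, which is roughly the gap between the ideal and observed numbers below.

```python
# Ideal minutes to reach a target charge level, assuming a nominal ~3.85V
# Li-ion cell and lossless power delivery (both simplifications).
def minutes_to_charge(capacity_mah: float, watts: float,
                      target_fraction: float = 0.5,
                      nominal_volts: float = 3.85) -> float:
    energy_wh = capacity_mah / 1000 * nominal_volts * target_fraction
    return energy_wh / watts * 60

# iPhone 17 (unofficial 3,692mAh) at its 40W max wired rate:
print(round(minutes_to_charge(3692, 40)))  # ideal ~11 min to 50%; observed ~20 min
# iPhone 17e (unofficial 4,005mAh) at its 20W max wired rate:
print(round(minutes_to_charge(4005, 20)))  # ideal ~23 min to 50%; observed ~30 min
```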
Winner: iPhone 17
iPhone 17e vs iPhone 17: verdict
On paper, the iPhone 17e doesn’t triumph over the iPhone 17 in any area except price, but if you’re looking for a no-frills iPhone that’ll remain powerful and supported for years to come, it’s still a great-value product.
Keen photographers, though, will be better served by one of Apple’s more expensive iPhones, and if you’re someone who values the display experience above all else, then the iPhone 17’s faster refresh rate and higher brightness do, in this writer’s opinion, justify the $200 / £200 / AU$400 premium over the iPhone 17e.
The AI company filed two federal lawsuits on Monday, arguing the Trump administration’s ‘supply chain risk’ designation is unconstitutional retaliation for protected speech.
There is a phrase in Anthropic’s court filing that sets the tone for everything that follows: “Anthropic turns to the judiciary as a last resort to vindicate its rights and halt the Executive’s unlawful campaign of retaliation.” It is the language of a company that believes it is not simply fighting a contract dispute, but a constitutional one.
It is believed to be the first time the designation has been applied to an American company.
The first lawsuit was filed in the US District Court for the Northern District of California. It asks a judge to vacate the designation and grant an immediate stay while the case proceeds. A second, shorter suit was filed in the US Court of Appeals for the District of Columbia Circuit, targeting a separate statute the government invoked that can only be challenged in that jurisdiction.
Both cases make substantially the same argument: that the administration acted unlawfully, without proper statutory authority, and in violation of Anthropic’s First Amendment rights.
More than a dozen federal agencies are named as defendants, including the Department of Defence, the Treasury, the State Department, and the General Services Administration.
The legal action is the culmination of a two-week standoff that escalated with unusual speed into one of the more remarkable confrontations between a technology company and the US government in recent memory.
The dispute centres on two conditions Anthropic has insisted on in its contracts with the Pentagon: that its Claude AI system not be used for mass domestic surveillance of American citizens, and that it not be used to power fully autonomous weapons, systems capable of targeting and firing without human authorisation.
The Pentagon, which has been using Claude on classified networks since the company became the first AI lab to achieve that clearance, demanded that any renewed contract drop these restrictions and grant the military use of Claude for “all lawful purposes.” Anthropic refused.
What followed was a sequence of events that proceeded with striking speed. On 27 February, President Trump posted on Truth Social calling Anthropic a “radical left, woke company” and directing every federal agency to “immediately cease” all use of its technology.
Within hours, Defence Secretary Pete Hegseth announced on X that he was designating Anthropic a supply chain risk, meaning no contractor, supplier, or partner doing business with the US military could conduct any commercial activity with the company. The formal letter confirming the designation arrived on 3 March, five days after the deadline Anthropic had been given to agree to the Pentagon’s terms.
The practical scope of the designation turned out to be narrower than Hegseth’s initial announcement implied. Anthropic CEO Dario Amodei said in a statement last Thursday that the relevant statute limits the designation’s reach to the direct use of Claude in Pentagon contracts; it cannot, Amodei argued, be used to sever all commercial relationships between defence contractors and the company.
Microsoft, Google, and Amazon all reviewed the designation and reached the same conclusion, issuing statements confirming that Claude would remain available to their customers for work unrelated to defence contracts. Hegseth had explicitly said the opposite in his original post.
The economic stakes are nonetheless substantial. In declarations accompanying Monday’s filings, Anthropic executives laid out the damage in granular terms. Chief Financial Officer Krishna Rao warned the court that if the designation were allowed to stand and customers took a broad reading of its scope, it could reduce Anthropic’s 2026 revenue by “multiple billions of dollars”, an impact he described as “almost impossible to reverse.”
Chief Commercial Officer Paul Smith cited a specific example: one partner with a multi-million-dollar annual contract had already switched to a rival AI model, eliminating an anticipated revenue pipeline of more than $100 million; negotiations with financial institutions worth roughly $180 million combined had also been disrupted.
The complaint itself makes two distinct legal arguments. The first is a First Amendment claim: that the administration’s actions punish Anthropic for its public advocacy around AI safety and its positions on autonomous weapons and domestic surveillance, which constitute protected speech.
“The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech,” the filing states. The second argument challenges the statutory basis of the designation, invoking 10 USC 3252, the procurement law the Pentagon relied upon. Anthropic argues the statute requires the government to use “the least restrictive means” to protect the supply chain, not deploy it as a punitive instrument against a domestic company over a policy disagreement.
The Pentagon’s position is that the dispute is fundamentally about operational control rather than speech. Pentagon officials have argued that a private contractor cannot insert itself into the chain of command by restricting the lawful use of a critical capability, and that the military must retain full discretion over how it deploys technology in national security scenarios.
In an indication that the designation was not straightforwardly about security, a Pentagon official was quoted in Anthropic’s court filing as saying the government intended to “make sure they pay a price” for the company’s refusal, language Anthropic’s lawyers have flagged as evidence of improper motivation.
The case has drawn an unusual show of solidarity from Anthropic’s direct competitors. A group of 37 researchers and engineers from OpenAI and Google DeepMind, including Google’s chief scientist Jeff Dean, who signed in a personal capacity, filed an amicus brief on Monday supporting the lawsuit.
The brief argues that the designation “chills professional debate” about AI risks and undermines American competitiveness. “By silencing one lab,” the researchers wrote, “the government reduces the industry’s potential to innovate solutions.” The filing is notable given that OpenAI struck a new deal with the Pentagon within hours of the Trump administration’s order, a move that drew sharp criticism from OpenAI employees and that OpenAI CEO Sam Altman later acknowledged looked “sloppy and opportunistic.”
Legal observers have been sceptical that the designation will survive judicial scrutiny. Paul Scharre, a former Army Ranger and now executive vice president of the Center for a New American Security, told Breaking Defense that Hegseth’s initial characterisation of the ban simply exceeded what the supply chain risk statute permits, and that even the narrower formal designation would likely struggle in court, given the law’s requirement for the least restrictive means. Procurement laws passed by Congress, Anthropic argues in its filings, do not give the Pentagon or the president authority to blacklist a company over a policy disagreement.
A first hearing could take place in San Francisco as early as this Friday, according to reports. Anthropic has asked for a temporary order that would allow it to continue working with military contractors while the legal case unfolds. The DoD said it does not comment on litigation.
Among the contradictions the complaint highlights: the military reportedly continued to use Claude during active combat operations in Iran, after the ban had been announced. A six-month phaseout was also ordered simultaneously with an immediate prohibition. And the company retains active FedRAMP authorisation and facility and personnel security clearances that would ordinarily be incompatible with a national security risk finding. None of these inconsistencies have been publicly addressed by the government.
Whatever the court decides, the case has already set a precedent of a different kind: a major AI company, backed by researchers at its own rivals, publicly litigating the government’s right to weaponise procurement law against a domestic company for taking a public stance on how its technology should and should not be used. The outcome could determine, as Anthropic’s complaint puts it, whether any American company can “negotiate with the government” without risking its existence.
US households pay monthly broadband fees while major platforms impose substantial network infrastructure burdens
Broadband cost recovery does not reflect actual traffic or usage patterns
Heavy users in the electricity and airline sectors pay proportionally for demand
Broadband networks in the United States operate under a cost model that does not align with actual usage – as households generate substantial revenue for major internet platforms while also contributing to the Universal Service Fund, which supports rural connectivity, schools, libraries, and healthcare facilities.
A typical US broadband household contributes roughly $9 per month to this fund, yet the largest traffic generators impose substantial infrastructure burdens without proportional contributions.
New analysis from Strand Consult has outlined how this creates a structural mismatch where consumers fund network maintenance and expansion, while platforms benefiting from the highest traffic volumes contribute little to last-mile investment or affordability mechanisms.
Major broadband beneficiaries pay only a fraction
Infrastructure systems generally charge heavy users proportionally for the demand they place on networks – as industrial electricity consumers, airlines, and high-volume transaction networks all pay usage-based fees that reflect the costs they impose.
Hyperscale data centers regularly sign long-term agreements, finance interconnection upgrades, and pay demand charges that protect residential ratepayers.
Strand Consult observes that the White House’s Ratepayer Protection Pledge reinforces this principle, calling on the largest users of energy infrastructure to bear the costs they generate.
However, broadband remains an exception, with major traffic generators often paying nothing at the point of network interconnection, despite consuming substantial capacity.
A model in South Korea shows how usage-based cost recovery can coexist with high-performing broadband markets, as large domestic and global platforms pay network operators for the infrastructure their services use, allowing operators to recover costs while maintaining competitive prices.
In the Caribbean, global platforms generate revenue from local users without paying for the networks they rely on.
Strand Consult calls this “digital colonialism” and notes that smaller markets face particular challenges because infrastructure costs cannot be spread across large populations.
These examples suggest that broadband could adopt proportional contribution mechanisms similar to other sectors.
Broadband is competitive, and prices have generally fallen, even as demand and traffic from streaming, ad-tech, and AI services rise.
Providers invest tens of billions annually in upgrades such as fiber, DOCSIS 4.0, 5G, and satellite networks, but high-traffic platforms, including sports and streaming services, add strain to networks without paying for the extra infrastructure.
Reforming the Universal Service Fund or introducing traffic-based pricing could ensure that the largest users contribute fairly.
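A proportional contribution mechanism of the kind Strand Consult describes can be sketched simply: allocate a cost-recovery target across traffic generators in proportion to their share of total traffic. The platform names and figures below are invented purely for illustration.

```python
# Split a cost-recovery target across platforms by traffic share.
def proportional_contributions(traffic_tb: dict[str, float],
                               recovery_target: float) -> dict[str, float]:
    total = sum(traffic_tb.values())
    return {name: recovery_target * tb / total for name, tb in traffic_tb.items()}

# Hypothetical monthly traffic (TB) and a $10M recovery target.
traffic = {"streaming_a": 500_000, "ad_tech_b": 300_000, "ai_service_c": 200_000}
shares = proportional_contributions(traffic, 10_000_000)
print(shares)  # streaming_a pays $5M, ad_tech_b $3M, ai_service_c $2M
```

This mirrors the demand-charge logic of the electricity and airline examples: the half of the traffic you generate maps to half of the recovery target.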
Nvidia has just announced DLSS 4.5. The update brings new AI-powered graphics technology, and the improvements offer a noticeable impact on how modern PC games look and perform. It was revealed alongside other RTX announcements during GDC 2026, focusing on boosting both visual quality and frame rates in demanding titles.
DLSS (Deep Learning Super Sampling) has become a big part of Nvidia’s gaming ecosystem. It uses AI models running on RTX GPUs to reconstruct higher-resolution images and generate additional frames that allow games to run more smoothly without sacrificing visual fidelity. With DLSS 4.5, the technology is getting even better.
Smarter frame generation with DLSS 4.5
Kingdom Come: Deliverance 2 with DLSS 4.5 (Image credit: Nvidia)
One of the notable new additions in DLSS 4.5 is Dynamic Multi Frame Generation, which automatically adjusts how many AI-generated frames are created during gameplay. Rather than sticking to a fixed multiplier, the system dynamically tweaks frame generation in real time to hit the target refresh rate. This approach lets compatible GPUs maintain smoother performance during demanding scenes while avoiding unnecessary frame generation when workloads drop.
DLSS 4.5 also introduces 6X Multi Frame Generation, which can generate up to five additional frames for every traditionally rendered frame. The result is significantly smoother gameplay, particularly in high-fidelity titles that use advanced rendering techniques like path tracing.
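The dynamic behavior Nvidia describes can be approximated as choosing, moment to moment, the smallest frame-generation multiplier that reaches the display's target refresh rate. This is a speculative sketch of the idea, not Nvidia's implementation:

```python
# Pick the smallest multiplier (1x..6x) whose output rate meets the target.
# With 6X Multi Frame Generation, each rendered frame yields up to 5 extra
# AI-generated frames, i.e. a 6x output multiplier.
def choose_multiplier(rendered_fps: float, target_hz: float,
                      max_multiplier: int = 6) -> int:
    for m in range(1, max_multiplier + 1):
        if rendered_fps * m >= target_hz:
            return m
    return max_multiplier  # can't reach the target; cap at the maximum

print(choose_multiplier(120, 144))  # light scene: 2x is enough
print(choose_multiplier(40, 240))   # heavy path-traced scene: needs the full 6x
print(choose_multiplier(240, 144))  # already above target: no generation (1x)
```

Backing off to the smallest sufficient multiplier is what avoids "unnecessary frame generation when workloads drop," as the announcement puts it.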
AI upgrades for sharper visuals
Black Myth: Wukong with DLSS 4.5 (Image credit: Nvidia)
Aside from the performance improvements, DLSS 4.5 also upgrades Nvidia’s Super Resolution technology using a second-generation transformer AI model. It is designed to improve image clarity by reducing artifacts such as ghosting, shimmering, and jagged edges in motion-heavy scenes.
Coming to RTX 50 series GPUs soon
Nvidia has confirmed that DLSS 4.5 features like Dynamic Multi Frame Generation and the 6X mode will roll out starting March 31 through the Nvidia app. It will debut first on GeForce RTX 50-series GPUs, and will be supported in around 20 games, including 007 First Light and Control Resonant.
OpenAI launched Codex Security on March 6, entering the application security market that Anthropic had disrupted 14 days earlier with Claude Code Security. Both scanners use LLM reasoning instead of pattern matching. Both proved that traditional static application security testing (SAST) tools are structurally blind to entire vulnerability classes. The enterprise security stack is caught in the middle.
Anthropic and OpenAI independently released reasoning-based vulnerability scanners, and both found bug classes that pattern-matching SAST was never designed to detect. The competitive pressure between two labs with a combined private-market valuation exceeding $1.1 trillion means detection quality will improve faster than any single vendor can deliver alone.
Neither Claude Code Security nor Codex Security replaces your existing stack. Both tools change procurement math permanently. Right now, both are free to enterprise customers. The head-to-head comparison and seven actions below are what you need before the board of directors asks which scanner you are piloting and why.
How Anthropic and OpenAI reached the same conclusion from different architectures
Anthropic published its zero-day research on February 5 alongside the release of Claude Opus 4.6. Anthropic said Claude Opus 4.6 found more than 500 previously unknown high-severity vulnerabilities in production open-source codebases that had survived decades of expert review and millions of hours of fuzzing.
In the CGIF library, Claude discovered a heap buffer overflow by reasoning about the LZW compression algorithm, a flaw that coverage-guided fuzzing could not catch even with 100% code coverage. Anthropic shipped Claude Code Security as a limited research preview on February 20, available to Enterprise and Team customers, with free expedited access for open-source maintainers. Gabby Curtis, Anthropic’s communications lead, told VentureBeat in an exclusive interview that Anthropic built Claude Code Security to make defensive capabilities more widely available.
OpenAI’s numbers come from a different architecture and a wider scanning surface. Codex Security evolved from Aardvark, an internal tool powered by GPT-5 that entered private beta in 2025. During the Codex Security beta period, OpenAI’s agent scanned more than 1.2 million commits across external repositories, surfacing what OpenAI said were 792 critical findings and 10,561 high-severity findings. OpenAI reported vulnerabilities in OpenSSH, GnuTLS, GOGS, Thorium, libssh, PHP, and Chromium, resulting in 14 assigned CVEs. Codex Security’s false positive rates fell more than 50% across all repositories during beta, according to OpenAI. Over-reported severity dropped more than 90%.
Checkmarx Zero researchers demonstrated that moderately complicated vulnerabilities sometimes escaped Claude Code Security’s detection. Developers could trick the agent into ignoring vulnerable code. In a full production-grade codebase scan, Checkmarx Zero found that Claude identified eight vulnerabilities, but only two were true positives. If moderately complex obfuscation defeats the scanner, the detection ceiling is lower than the headline numbers suggest. Neither Anthropic nor OpenAI has submitted detection claims to an independent third-party audit. Security leaders should treat the reported numbers as indicative, not audited.
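The Checkmarx result translates directly into a standard scanner metric: precision, the fraction of reported findings that turn out to be real vulnerabilities.

```python
# Precision: true positives divided by total reported findings.
def precision(true_positives: int, total_findings: int) -> float:
    return true_positives / total_findings

# Checkmarx Zero's production-codebase scan: 8 findings, 2 true positives.
print(precision(2, 8))  # 0.25, i.e. 25% precision in that test
```

A 25% precision rate in one adversarial test does not invalidate the headline numbers, but it underlines why those numbers should be treated as indicative rather than audited.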
Merritt Baer, CSO at Enkrypt AI and former Deputy CISO at AWS, told VentureBeat that the competitive scanner race compresses the window for everyone. Baer advised security teams to prioritize patches based on exploitability in their runtime context rather than CVSS scores alone, shorten the window between discovery, triage, and patch, and maintain software bill of materials visibility so they know instantly where a vulnerable component runs.
Different methods, almost no overlap in the codebases they scanned, yet the same conclusion. Pattern-matching SAST has a ceiling, and LLM reasoning extends detection past it. When two competing labs distribute that capability at the same time, the dual-use math gets uncomfortable. Any financial institution or fintech running a commercial codebase should assume that if Claude Code Security and Codex Security can find these bugs, adversaries with API access can find them, too.
Baer put it bluntly: open-source vulnerabilities surfaced by reasoning models should be treated closer to zero-day class discoveries, not backlog items. The window between discovery and exploitation just compressed, and most vulnerability management programs are still triaging on CVSS alone.
What the vendor responses prove
Snyk, the developer security platform used by engineering teams to find and fix vulnerabilities in code and open-source dependencies, acknowledged the technical breakthrough but argued that finding vulnerabilities has never been the hard part; the bottleneck is fixing them at scale, across hundreds of repositories, without breaking anything. Snyk pointed to research showing AI-generated code is 2.74 times more likely to introduce security vulnerabilities compared to human-written code, according to Veracode’s 2025 GenAI Code Security Report. The same models finding hundreds of zero-days also introduce new vulnerability classes when they write code.
Cycode CTO Ronen Slavin wrote that Claude Code Security represents a genuine technical advancement in static analysis, but that AI models are probabilistic by nature. Slavin argued that security teams need consistent, reproducible, audit-grade results, and that a scanning capability embedded in an IDE is useful but does not constitute infrastructure. Slavin’s position: SAST is one discipline within a much broader scope, and free scanning does not displace platforms that handle governance, pipeline integrity, and runtime behavior at enterprise scale.
“If code reasoning scanners from major AI labs are effectively free to enterprise customers, then static code scanning commoditizes overnight,” Baer told VentureBeat. Over the next 12 months, Baer expects the budget to move toward three areas.
Runtime and exploitability layers, including runtime protection and attack path analysis.
AI governance and model security, including guardrails, prompt injection defenses, and agent oversight.
Remediation automation. “The net effect is that AppSec spending probably doesn’t shrink, but the center of gravity shifts away from traditional SAST licenses and toward tooling that shortens remediation cycles,” Baer said.
Seven things to do before your next board meeting
Run both scanners against a representative codebase subset. Compare Claude Code Security and Codex Security findings against your existing SAST output. Start with a single representative repository, not your entire codebase. Both tools are in research preview with access constraints that make full-estate scanning premature. The delta is your blind spot inventory.
Build the governance framework before the pilot, not after. Baer told VentureBeat to treat either tool like a new data processor for the crown jewels, which is your source code. Baer’s governance model includes a formal data-processing agreement with clear statements on training exclusion, data retention, and subprocessor use, a segmented submission pipeline so only the repos you intend to scan are transmitted, and an internal classification policy that distinguishes code that can leave your boundary from code that cannot. In interviews with more than 40 CISOs, VentureBeat found that formal governance frameworks for reasoning-based scanning tools barely exist yet. Baer flagged derived IP as the blind spot most teams have not addressed. Can model providers retain embeddings or reasoning traces, and are those artifacts considered your intellectual property? The other gap is data residency for code, which historically was not regulated like customer data but increasingly falls under export control and national security review.
Map what neither tool covers. Software composition analysis. Container scanning. Infrastructure-as-code. DAST. Runtime detection and response. Claude Code Security and Codex Security operate at the code-reasoning layer. Your existing stack handles everything else. That stack’s pricing power is what shifted.
Quantify the dual-use exposure. Every zero-day Anthropic and OpenAI surfaced lives in an open-source project that enterprise applications depend on. Both labs are disclosing and patching responsibly, but the window between their discovery and your adoption of those patches is exactly where attackers operate. AI security startup AISLE independently discovered all 12 zero-day vulnerabilities in OpenSSL’s January 2026 security patch, including a stack buffer overflow (CVE-2025-15467) that is potentially remotely exploitable without valid key material. Fuzzers ran against OpenSSL for years and missed every one. Assume adversaries are running the same models against the same codebases.
Prepare the board comparison before they ask. Claude Code Security reasons about code contextually, traces data flows, and uses multi-stage self-verification. Codex Security builds a project-specific threat model before scanning and validates findings in sandboxed environments. Each tool is in research preview and requires human approval before any patch is applied. The board needs side-by-side analysis, not a single-vendor pitch. When the conversation turns to why your existing suite missed what Anthropic found, Baer offered framing that works at the board level. Pattern-matching SAST solved a different generation of problems, Baer told VentureBeat. It was designed to detect known anti-patterns. That capability still matters and still reduces risk. But reasoning models can evaluate multi-file logic, state transitions, and developer intent, which is where many modern bugs live. Baer’s board-ready summary: “We bought the right tools for the threats of the last decade; the technology just advanced.”
Track the competitive cycle. Both companies are heading toward IPOs, and enterprise security wins drive the growth narrative. When one scanner misses a blind spot, it lands on the other lab’s feature roadmap within weeks. Both labs ship model updates on monthly cycles. That cadence will outrun any single vendor’s release calendar. Baer said that running both is the right move: “Different models reason differently, and the delta between them can reveal bugs neither tool alone would consistently catch. In the short term, using both isn’t redundancy. It’s defense through diversity of reasoning systems.”
Set a 30-day pilot window. Before February 20, this test did not exist. Run Claude Code Security and Codex Security against the same codebase and let the delta drive the procurement conversation with empirical data instead of vendor marketing. Thirty days gives you that data.
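The "delta" comparison described in the first and last steps above can be computed as simple set operations over normalized finding identifiers. The finding-key format below is hypothetical; real scanner exports would first need mapping onto a common key such as file, line, and rule.

```python
# Compare two scanners' findings. The items only one tool reports are the
# blind-spot inventory and deserve manual triage first.
def findings_delta(scanner_a: set[str], scanner_b: set[str]) -> dict[str, set[str]]:
    return {
        "both": scanner_a & scanner_b,
        "only_a": scanner_a - scanner_b,
        "only_b": scanner_b - scanner_a,
    }

# Hypothetical normalized findings from each tool.
claude = {"auth.py:88:sqli", "lzw.c:142:heap-overflow"}
codex = {"auth.py:88:sqli", "session.go:51:authz-bypass"}
delta = findings_delta(claude, codex)
print(delta["only_a"])  # {'lzw.c:142:heap-overflow'}: flagged by only one tool
```

The symmetric difference is exactly the "defense through diversity of reasoning systems" argument in quantitative form: findings neither tool alone would consistently surface.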
Fourteen days separated Anthropic and OpenAI. The gap between the next releases will be shorter. Attackers are watching the same calendar.