OpenAI on Monday released a new desktop application for its Codex artificial intelligence coding system, a tool the company says transforms software development from a collaborative exercise with a single AI assistant into something more akin to managing a team of autonomous workers.
The Codex app for macOS functions as what OpenAI executives describe as a “command center for agents,” allowing developers to delegate multiple coding tasks simultaneously, automate repetitive work, and supervise AI systems that can run for up to 30 minutes independently before returning completed code.
“This is the most loved internal product we’ve ever had,” Sam Altman, OpenAI’s chief executive, told VentureBeat in a press briefing ahead of Monday’s launch. “It’s been totally an amazing thing for us to be using recently at OpenAI.”
The release arrives at a pivotal moment for the enterprise AI market. According to a survey of 100 Global 2000 companies published last week by venture capital firm Andreessen Horowitz, 78% of enterprise CIOs now use OpenAI models in production, though competitors Anthropic and Google are gaining ground rapidly. Anthropic posted the largest share increase of any frontier lab since May 2025, growing 25% in enterprise penetration, with 44% of enterprises now using Anthropic in production.
The timing of OpenAI’s Codex app launch — with its focus on professional software engineering workflows — appears designed to defend the company’s position in what has become the most contested segment of the AI market: coding tools.
Why developers are abandoning their IDEs for AI agent management
The Codex app introduces a fundamentally different approach to AI-assisted coding. While previous tools like GitHub Copilot focused on autocompleting lines of code in real-time, the new application enables developers to “effortlessly manage multiple agents at once, run work in parallel, and collaborate with agents over long-running tasks.”
Alexander Embiricos, the product lead for Codex, explained the evolution during the press briefing by tracing the product’s lineage back to 2021, when OpenAI first introduced a model called Codex that powered GitHub Copilot.
“Back then, people were using AI to write small chunks of code in their IDEs,” Embiricos said. “GPT-5 in August last year was a big jump, and then 5.2 in December was another massive jump, where people started doing longer and longer tasks, asking models to do work end to end. So what we saw is that developers, instead of working closely with the model, pair coding, they started delegating entire features.”
The shift has been so profound that Altman said he recently completed a substantial coding project without ever opening a traditional integrated development environment.
“I was astonished by this…I did this fairly big project in a few days earlier this week and over the weekend. I did not open an IDE during the process. Not a single time,” Altman said. “I did look at some code, but I was not doing it the old-fashioned way, and I did not think that was going to be happening by now.”
How skills and automations extend AI coding beyond simple code generation
The Codex app introduces several new capabilities designed to extend AI coding beyond writing lines of code. Chief among these are “Skills,” which bundle instructions, resources, and scripts so that Codex can “reliably connect to tools, run workflows, and complete tasks according to your team’s preferences.”
The app includes a dedicated interface for creating and managing skills, and users can explicitly invoke specific skills or allow the system to automatically select them based on the task at hand. OpenAI has published a library of skills for common workflows, including tools to fetch design context from Figma, manage projects in Linear, deploy web applications to cloud hosts like Cloudflare and Vercel, generate images using GPT Image, and create professional documents in PDF, spreadsheet, and Word formats.
To demonstrate the system’s capabilities, OpenAI asked Codex to build a racing game from a single prompt. Using an image generation skill and a web game development skill, Codex built the game by working independently using more than 7 million tokens with just one initial user prompt, taking on “the roles of designer, game developer, and QA tester to validate its work by actually playing the game.”
The company has also introduced “Automations,” which allow developers to schedule Codex to work in the background on an automatic schedule. “When an Automation finishes, the results land in a review queue so you can jump back in and continue working if needed.”
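In concept, an Automation is a scheduled task whose output lands in a queue for later human review. The sketch below models that loop generically in Python; it is not Codex's actual implementation, and the function names are invented.

```python
import queue
import time

# Generic model of the Automations pattern: a task runs on a schedule and
# each result lands in a review queue for a human. Not Codex's actual API.
review_queue: queue.Queue = queue.Queue()

def triage_issues() -> str:
    # Stand-in for real work such as daily issue triage or CI-failure summaries.
    return "triage report: 3 new issues labeled"

def run_automation(task, interval_s: float, runs: int) -> None:
    for _ in range(runs):
        review_queue.put(task())  # result waits in the queue for human review
        time.sleep(interval_s)

run_automation(triage_issues, interval_s=0.0, runs=2)
print(review_queue.qsize())  # 2 results awaiting review
```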
Thibault Sottiaux, who leads the Codex team at OpenAI, described how the company uses these automations internally: “We’ve been using Automations to handle the repetitive but important tasks, like daily issue triage, finding and summarizing CI failures, generating daily release briefs, checking for bugs, and more.”
The app also includes built-in support for “worktrees,” allowing multiple agents to work on the same repository without conflicts. “Each agent works on an isolated copy of your code, allowing you to explore different paths without needing to track how they impact your codebase.”
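The effect of worktree isolation can be modeled with plain directory copies: each agent edits a private copy of the code, so parallel changes never collide. Real Git worktrees share a single object store rather than copying files; the sketch below only demonstrates the isolation property.

```python
import pathlib
import shutil
import tempfile

# Toy model of worktree-style isolation: each agent edits a private copy of
# the repository. Real Git worktrees are more efficient; the point here is
# only that parallel edits cannot conflict.
repo = pathlib.Path(tempfile.mkdtemp()) / "repo"
repo.mkdir()
(repo / "app.py").write_text("print('v1')\n")

def spawn_agent_copy(agent: str) -> pathlib.Path:
    dest = repo.parent / f"worktree-{agent}"
    shutil.copytree(repo, dest)
    return dest

a = spawn_agent_copy("agent-a")
b = spawn_agent_copy("agent-b")
(a / "app.py").write_text("print('agent A change')\n")

# Agent B's copy and the original are untouched by agent A's edit.
print((b / "app.py").read_text() == (repo / "app.py").read_text())  # True
```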
OpenAI battles Anthropic and Google for control of enterprise AI spending
The launch comes as enterprise spending on AI coding tools accelerates dramatically. According to the Andreessen Horowitz survey, average enterprise AI spend on large language models has risen from approximately $4.5 million to $7 million over the last two years, with enterprises expecting growth of another 65% this year to approximately $11.6 million.
Leadership in the enterprise AI market varies significantly by use case. OpenAI dominates “early, horizontal use cases like general purpose chatbots, enterprise knowledge management and customer support,” while Anthropic leads in “software development and data analysis, where CIOs consistently cite rapid capability gains since the second half of 2024.”
When asked during the press briefing how Codex differentiates from Anthropic’s Claude Code, which has been described as having its “ChatGPT moment,” Sottiaux emphasized OpenAI’s focus on model capability for long-running tasks.
“One of the things that our models are extremely good at—they really sit at the frontier of intelligence and doing reliable work for long periods of time,” Sottiaux said. “This is also what we’re optimizing this new surface to be very good at, so that you can start many parallel agents and coordinate them over long periods of time and not get lost.”
Altman added that while many tools can handle “vibe coding front ends,” OpenAI’s 5.2 model remains “the strongest model by far” for sophisticated work on complex systems.
“Taking that level of model capability and putting it in an interface where you can do what Thibault was saying, we think is going to matter quite a bit,” Altman said. “That’s probably, at least listening to users and looking at the chatter on social, the single biggest differentiator.”
The surprising constraint on AI progress: how fast humans can type
The philosophical underpinning of the Codex app reflects a view that OpenAI executives have been articulating for months: that human limitations — not AI capabilities — now constitute the primary constraint on productivity.
In a December appearance on Lenny’s Podcast, Embiricos described human typing speed as “the current underappreciated limiting factor” to achieving artificial general intelligence. The logic: if AI can perform complex coding tasks but humans can’t write prompts or review outputs fast enough, progress stalls.
The Codex app attempts to address this by enabling what the team calls an “abundance mindset” — running multiple tasks in parallel rather than perfecting single requests. During the briefing, Embiricos described how power users at OpenAI work with the tool.
“Last night, I was working on the app, and I was making a few changes, and all of these changes are able to run in parallel together. And I was just sort of going between them, managing them,” Embiricos said. “Behind the scenes, all these tasks are running on something called Git worktrees, which means that the agents are running independently, and you don’t have to manage them.”
In the Sequoia Capital podcast “Training Data,” Embiricos elaborated on this mindset shift: “The mindset that works really well for Codex is, like, kind of like this abundance mindset and, like, hey, let’s try anything. Let’s try anything even multiple times and see what works.” He noted that when users run 20 or more tasks in a day or an hour, “they’ve probably understood basically how to use the tool.”
Building trust through sandboxes: how OpenAI secures autonomous coding agents
OpenAI has built security measures into the Codex architecture from the ground up. The app uses “native, open-source and configurable system-level sandboxing,” and by default, “Codex agents are limited to editing files in the folder or branch where they’re working and using cached web search, then asking for permission to run commands that require elevated permissions like network access.”
Embiricos elaborated on the security approach during the briefing, noting that OpenAI has open-sourced its sandbox technology.
“Codex has this sandbox that we’re actually incredibly proud of, and it’s open source, so you can go check it out,” Embiricos said. The sandbox “basically ensures that when the agent is working on your computer, it can only make writes in a specific folder that you want it to make writes into, and it doesn’t access the network without permission.”
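The write-confinement rule can be expressed as a simple path policy: a write is permitted only if the resolved target sits inside the agent's working folder. The check below is a toy version of that policy, with an arbitrary workspace path; Codex's actual sandbox enforces this at the operating-system level.

```python
import pathlib

# Toy version of one sandbox rule: writes are only permitted inside the
# agent's working folder. Real enforcement happens at the OS level; the
# workspace path here is an arbitrary example.
WORKDIR = pathlib.Path("/tmp/agent-workspace").resolve()

def write_allowed(target: str) -> bool:
    path = pathlib.Path(target).resolve()
    return path.is_relative_to(WORKDIR)  # requires Python 3.9+

print(write_allowed("/tmp/agent-workspace/src/main.py"))  # True
print(write_allowed("/etc/passwd"))                       # False
```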
The system also includes a granular permission model that allows users to configure persistent approvals for specific actions, avoiding the need to repeatedly authorize routine operations. “If the agent wants to do something and you find yourself annoyed that you’re constantly having to approve it, instead of just saying, ‘All right, you can do everything,’ you can just say, ‘Hey, remember this one thing — I’m actually okay with you doing this going forward,’” Embiricos explained.
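That "remember this one thing" flow amounts to a per-action approval cache. A minimal sketch of the idea, with invented class and method names rather than Codex's actual permission API, might look like this:

```python
# Minimal sketch of the persistent-approval pattern described above. The
# class and method names are invented for illustration; this is not
# Codex's permission API.
class PermissionStore:
    def __init__(self) -> None:
        self._approved: set[str] = set()

    def remember(self, action: str) -> None:
        # "I'm actually okay with you doing this going forward."
        self._approved.add(action)

    def needs_prompt(self, action: str) -> bool:
        return action not in self._approved

store = PermissionStore()
print(store.needs_prompt("run: npm test"))   # True: first time, ask the user
store.remember("run: npm test")
print(store.needs_prompt("run: npm test"))   # False: approved going forward
print(store.needs_prompt("network: fetch"))  # True: unrelated actions still ask
```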
Altman emphasized that the permission architecture signals a broader philosophy about AI safety in agentic systems.
“I think this is going to be really important. I mean, it’s been so clear to us using this, how much you want it to have control of your computer, and how much you need it,” Altman said. “And the way the team built Codex such that you can sensibly limit what’s happening and also pick the level of control you’re comfortable with is important.”
He also acknowledged the dual-use nature of the technology. “We do expect to get to our internal cybersecurity high moment of our models very soon. We’ve been preparing for this. We’ve talked about our mitigation plan,” Altman said. “A real thing for the world to contend with is going to be defending against a lot of capable cybersecurity threats using these models very quickly.”
The same capabilities that make Codex valuable for fixing bugs and refactoring code could, in the wrong hands, be used to discover vulnerabilities or write malicious software—a tension that will only intensify as AI coding agents become more capable.
From Android apps to research breakthroughs: how Codex transformed OpenAI’s own operations
Perhaps the most compelling evidence for Codex’s capabilities comes from OpenAI’s own use of the tool. Sottiaux described how the system has accelerated internal development.
“The Sora Android app is an example of that, where four engineers shipped it internally in only 18 days, and then within the month we gave access to the world,” Sottiaux said. “I had never seen such speed at this scale before.”
Beyond product development, Sottiaux described how Codex has become integral to OpenAI’s research operations.
“Codex is really involved in all parts of the research — making new data sets, investigating its own training runs,” he said. “When I sit in meetings with researchers, they all send Codex off to do an investigation while we’re having a chat, and then it will come back with useful information, and we’re able to debug much faster.”
The tool has also begun contributing to its own development. “Codex also is starting to build itself,” Sottiaux noted. “There’s no screen within the Codex engineering team that doesn’t have Codex running on multiple, six, eight, ten, tasks at a time.”
When asked whether this constitutes evidence of “recursive self-improvement” — a concept that has long concerned AI safety researchers — Sottiaux was measured in his response.
“There is a human in the loop at all times,” he said. “I wouldn’t necessarily call it recursive self-improvement, but it’s a glimpse into the future.”
Altman offered a more expansive view of the research implications.
“There are two parts to what people talk about when they talk about automating research, to a degree where you can imagine that happening,” Altman said. “One is, can you write extremely complex infrastructure software to run training jobs across hundreds of thousands of GPUs and babysit them. And the second is, can you come up with new scientific ideas that make algorithms more efficient.”
He noted that OpenAI is “seeing early but promising signs on both of those.”
The end of technical debt? AI agents take on the work engineers hate most
One of the more unexpected applications of Codex has been addressing technical debt — the accumulated maintenance burden that plagues most software projects.
Altman described how AI coding agents excel at the unglamorous work that human engineers typically avoid.
“The kind of work that human engineers hate to do — go refactor this, clean up this code base, rewrite this, write this test — this is where the model doesn’t care. The model will do anything, whether it’s fun or not,” Altman said.
He reported that some infrastructure teams at OpenAI that “had sort of like, given up hope that you were ever really going to long term win the war against tech debt, are now like, we’re going to win this, because the model is going to constantly be working behind us, making sure we have great test coverage, making sure that we refactor when we’re supposed to.”
The observation speaks to a broader theme that emerged repeatedly during the briefing: AI coding agents don’t experience the motivational fluctuations that affect human programmers. As Altman noted, a team member recently observed that “the hardest mental adjustment to make about working with these sort of AI coding teammates, unlike a human, is that the models just don’t run out of dopamine. They keep trying. They don’t run out of motivation. They don’t lose energy when something’s not working. They just keep going, and they figure out how to get it done.”
What the Codex app costs and who can use it starting today
The Codex app launches today on macOS and is available to anyone with a ChatGPT Plus, Pro, Business, Enterprise, or Edu subscription. Usage is included in ChatGPT subscriptions, with the option to purchase additional credits if needed.
In a promotional push, OpenAI is temporarily making Codex available to ChatGPT Free and Go users “to help more people try agentic workflows.” The company is also doubling rate limits for existing Codex users across all paid plans during this promotional period.
The pricing strategy reflects OpenAI’s determination to establish Codex as the default tool for AI-assisted development before competitors can gain further traction. More than a million developers have used Codex in the past month, and usage has nearly doubled since the launch of GPT-5.2-Codex in mid-December, building on more than 20x usage growth since August 2025.
Customers using Codex include large enterprises like Cisco, Ramp, Virgin Atlantic, Vanta, Duolingo, and Gap, as well as startups like Harvey, Sierra, and Wonderful. Individual developers have also embraced the tool: Peter Steinberger, creator of OpenClaw, built the project entirely with Codex and reports that since fully switching to the tool, his productivity has roughly doubled across more than 82,000 GitHub contributions.
OpenAI’s ambitious roadmap: Windows support, cloud triggers, and continuous background agents
OpenAI outlined an aggressive development roadmap for Codex. The company plans to make the app available on Windows, continue pushing “the frontier of model capabilities,” and roll out faster inference.
Within the app, OpenAI will “keep refining multi-agent workflows based on real-world feedback” and is “building out Automations with support for cloud-based triggers, so Codex can run continuously in the background—not just when your computer is open.”
The company also announced a new “plan mode” feature that allows Codex to read through complex changes in read-only mode, then discuss with the user before executing. “This means that it lets you build a lot of confidence before, again, sending it to do a lot of work by itself, independently, in parallel to you,” Embiricos explained.
Additionally, OpenAI is introducing customizable personalities for Codex. “The default personality for Codex has been quite terse. A lot of people love it, but some people want something more engaging,” Embiricos said. Users can access the new personalities using the /personality command.
Altman also hinted at future integration with ChatGPT’s broader ecosystem.
“There will be all kinds of cool things we can do over time to connect people’s ChatGPT accounts and leverage sort of all the history they’ve built up there,” Altman said.
Microsoft still dominates enterprise AI, but the window for disruption is open
The Codex app launch occurs as most enterprises have moved beyond single-vendor strategies. According to the Andreessen Horowitz survey, “81% now use three or more model families in testing or production, up from 68% less than a year ago.”
Despite the proliferation of AI coding tools, Microsoft continues to dominate enterprise adoption through its existing relationships. “Microsoft 365 Copilot leads enterprise chat though ChatGPT has closed the gap meaningfully,” and “GitHub Copilot is still the coding leader for enterprises.” The survey found that “65% of enterprises noted they preferred to go with incumbent solutions when available,” citing trust, integration, and procurement simplicity.
However, the survey also suggests significant opportunity for challengers: “Enterprises consistently say they value faster innovation, deeper AI focus, and greater flexibility paired with cutting edge capabilities that AI native startups bring.”
OpenAI appears to be positioning Codex as a bridge between these worlds. “Codex is built on a simple premise: everything is controlled by code,” the company stated. “The better an agent is at reasoning about and producing code, the more capable it becomes across all forms of technical and knowledge work.”
The company’s ambition extends beyond coding. “We’ve focused on making Codex the best coding agent, which has also laid the foundation for it to become a strong agent for a broad range of knowledge work tasks that extend beyond writing code.”
When asked whether AI coding tools could eventually move beyond early adopters to become mainstream, Altman suggested the transition may be closer than many expect.
“Can it go from vibe coding to serious software engineering? That’s what this is about,” Altman said. “I think we are over the bar on that. I think this will be the way that most serious coders do their job — and very rapidly from now.”
He then pivoted to an even bolder prediction: that code itself could become the universal interface for all computer-based work.
“Code is a universal language to get computers to do what you want. And it’s gotten so good that I think, very quickly, we can go not just from vibe coding silly apps but to doing all the non-coding knowledge work,” Altman said.
At the close of the briefing, Altman urged journalists to try the product themselves: “Please try the app. There’s no way to get this across just by talking about it. It’s a crazy amount of power.”
For developers who have spent careers learning to write code, the message was clear: the future belongs to those who learn to manage the machines that write it for them.
In Project Glasswing, announced Tuesday, the company is giving a select group of major tech and financial firms access to Claude Mythos Preview, a frontier model that has already uncovered thousands of previously unknown software vulnerabilities. Anthropic says the model is too dangerous to release to the general public.
Elon Musk’s X is continuing its push to bake AI deeper into the platform with two new Grok-powered features aimed at helping users reach a wider audience and edit images seamlessly.
What’s new on X?
The company has rolled out automatic translation for posts worldwide, allowing users to instantly read content in their preferred language without needing to tap on the translation option. The feature, powered by xAI’s Grok models, is designed to give posts a broader global reach while reducing friction for cross-language conversations. Users who prefer the original text can still toggle translations off on a per-language basis.
We’re rolling out auto-translate worldwide to give posts in any language global reach on X.
The translations are powered by Grok and have improved substantially over the last couple months.
If you prefer to read in the original language, you can always turn off auto-translate…
Alongside translation, X has also introduced a new in-app photo editor on iOS. The tool gives users access to basic editing options like drawing, text overlays, and blur controls for hiding sensitive information, such as faces or personal details.
Ladies and gentlemen, we’re launching a brand new Photo Editor in our post composer.
It has long-overdue features like drawing & text. But we also included special add-ons that are unique to X:
• Edit with words, powered by Grok
• Add a blur to redact parts of the photo…
The editor also utilizes AI to help users edit images with natural language prompts. According to X’s head of product, Nikita Bier, users can ask Grok to transform images in specific ways. For example, they can ask Grok to turn a regular photo into something styled like a painting. For now, the feature is limited to X’s iOS app, but Android support is coming soon.
What does this mean for users?
With these additions, X is trying to get users to spend more time inside its app instead of relying on third-party tools. Other social media platforms have released similar AI-driven translation features, and X is now joining the fray to make Grok a core part of how people create and engage on the platform.
Whether this push pays off will ultimately come down to execution. If these tools feel genuinely useful and intuitive, they could make posting and discovery smoother. If not, they risk blending into the background as features more users ignore, adding complexity without meaningfully improving the experience.
‘Agentic commerce’ is seen as a natural consequence of AI-powered search, which already makes up more than half of global search engine volume. McKinsey trend analysis finds this number could rise significantly over the coming years.
McKinsey found that by 2030, agentic commerce could orchestrate up to $5trn globally. But while Morgan Stanley earlier this year noted that only 1pc of shoppers currently choose the agentic route, newer research elsewhere finds that AI agents could account for a significant portion of a business’s customers in the coming years.
In the background, infrastructure work to make agentic commerce possible is underway at fintechs such as Revolut, Stripe, Visa, Mastercard and PayPal. More are expected to follow.
Did you mean to buy that?
A growing number of users say they would trust AI systems to place orders and execute payments on their behalf. But such a combination of trust and automation will end up creating a whole new category of purchase disputes that companies are yet to get ahead of, says Monica Eaton, the founder and CEO of Chargebacks 911.
“The infrastructure for agentic commerce is being built quickly, but the safeguards need to evolve at the same pace,” she says.
In the era of agentic commerce, both customers and businesses will find it hard to define intent – or a lack thereof – when purchases are made by AI agents. It is easier to determine intent when humans make a deliberate choice to press ‘buy’, but agentic commerce removes that moment in the transaction. And currently, there aren’t many ways to dispute an agentic AI-made purchase, Eaton notes.
“Most customers do not have access to detailed records of the instructions they gave, the permissions in place, or how the agent reached its decision. In many cases, the transaction is technically authorised, which makes it difficult to challenge,” she adds.
To solve this, platforms need to prioritise transparency before a transaction occurs. The AI agent in question must be able to show what it is about to do and why, and ensure it has customer authorisation before going forward with a transaction. An audit trail for agentic purchases will provide an added layer of protection, says Eaton.
Meanwhile, clear permission frameworks that define where and what agents can purchase, and how much they can spend, will further protect customers.
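Taken together, the audit trail and permission framework Eaton describes could be as simple as a capped wallet that logs every attempted purchase, approved or not. The sketch below uses invented names throughout and touches no real payment infrastructure.

```python
from datetime import datetime, timezone

# Illustrative sketch of agentic-commerce safeguards: a spend cap, a merchant
# allow-list, and an audit trail recorded for every attempt. All names are
# invented; no real payment infrastructure is involved.
class AgentWallet:
    def __init__(self, limit: float, allowed_merchants: set[str]) -> None:
        self.limit = limit
        self.spent = 0.0
        self.allowed = allowed_merchants
        self.audit_log: list[dict] = []

    def purchase(self, merchant: str, amount: float, reason: str) -> bool:
        ok = merchant in self.allowed and self.spent + amount <= self.limit
        # Record intent and outcome whether or not the purchase proceeds,
        # giving the customer something concrete to dispute later.
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "merchant": merchant,
            "amount": amount,
            "reason": reason,
            "approved": ok,
        })
        if ok:
            self.spent += amount
        return ok

wallet = AgentWallet(limit=100.0, allowed_merchants={"grocer"})
print(wallet.purchase("grocer", 40.0, "weekly shop"))    # True: within limits
print(wallet.purchase("grocer", 80.0, "restock"))        # False: exceeds the cap
print(wallet.purchase("gadgets", 10.0, "not requested")) # False: merchant not allowed
print(len(wallet.audit_log))                             # 3: every attempt logged
```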
This may only work in the short term, says Eaton. Longer term protections would involve platforms providing transparency and access to activity logs, while dispute processes will need to evolve to recognise when an agent’s decision does not align with the customer’s intent.
Shift in responsibility
This new category of purchase dispute lies somewhere between fraud and ‘buyer’s remorse’, and current systems are not equipped to handle this anomaly, says Eaton.
“In an agentic environment, platforms need to take greater responsibility for how instructions are captured, interpreted and executed”, and merchants should not be expected to absorb this liability by default, she explains.
Moreover, if effective frameworks are not built ahead of time, customers could end up in a situation where they are arguing with an AI customer service bot about an unauthorised purchase made by a personal AI agent.
There is still time to get ahead of this eventuality, but the window is narrowing, Eaton says. “Businesses need to treat agentic commerce as a fundamentally different transaction environment, not just a faster version of existing e-commerce.”
It is important not to wait for regulation to catch up, Eaton warns. “Businesses that build trust into agentic commerce early will be in a much stronger position than those that react later.
“As for the future of customer service, it does not have to become AI versus AI. The key is to keep the human at the centre of the process. Agentic commerce should reflect and support human intent. If that principle is lost, trust will follow.”
Don’t miss out on the knowledge you need to succeed. Sign up for the Daily Brief, Silicon Republic’s digest of need-to-know sci-tech news.
Valve has released a native Steam Link beta for Apple Vision Pro, letting users stream their existing Steam games onto a large virtual screen in visionOS. It supports up to 4K resolution and will let you dynamically adjust the curve of the display. The Mac Observer reports: Steam Link does not support VR titles in this beta, and Valve clearly states that the app is limited to 2D game streaming, but this still opens up a large library of games that users can play on a massive virtual screen inside Vision Pro.
At the same time, Vision Pro already handles 2D media very well, and this update builds on that strength by turning the headset into a portable gaming display that connects directly to your existing setup without needing extra hardware.
You can join the Steam Link beta through TestFlight right now, and this early release shows how Apple Vision Pro continues to expand beyond media into more practical and everyday use cases like gaming.
The new OPPO F33 series is just around the corner, and the Chinese smartphone maker has shared a lot more about the upcoming phones ahead of its India launch on April 15, 2026. The headline features are the all-new ultrawide selfie camera and a more polished design. Here’s everything we know so far.
New Cameras
As with the new iPhone 17, the highlight of the OPPO F33 Pro is its 50MP ultra-wide front camera with a 100° field of view. That’s significantly wider than what most phones in this segment offer, and OPPO says it can capture up to 30% more area in group selfies. To make that useful in real-world scenarios, the phone also includes an “AI Groupfie Expert” system. It can automatically switch to a wider 0.6x view when more people enter the frame and correct facial distortion for up to six faces at once.
On the back, the F33 series uses a 50MP main sensor paired with a 2MP depth sensor. While that setup isn’t groundbreaking on paper, one of the new features is AI Portrait Glow, which adjusts lighting in real time depending on the scene. It offers multiple lighting styles, including Natural, Rim, and Studio modes, to improve portraits in tricky lighting conditions. Another interesting addition is the Colorful Front Fill Light, which replaces the usual harsh white flash with softer, adjustable tones to make selfies look more natural, especially at night. The phone also introduces creative features like Popout, which lets users create layered photos with a sense of depth directly from the camera, and Dual-View Video.
Redesigned Build
Beyond cameras, OPPO is also making noticeable changes to the design. The F33 Pro introduces a new “Starry Sea” camera module with a cleaner layout and a more prominent lens design. The phone uses a one-piece back panel made from a thicker composite material, which OPPO claims improves durability without adding the fragility of glass. It’s also using a CNC carving process to create a mix of glossy and matte finishes on the same surface.
The F33 Pro will be available in three finishes: Misty Forest, Starry Blue, and Passion Red, each with a slightly different texture and visual style. The device features a 6.57-inch flat display and weighs 194 grams, keeping things relatively slim and manageable.
Demand for Apple’s Mac Mini has skyrocketed, particularly in China, as the small computer has become an ideal platform for experimenting with autonomous AI agents like OpenClaw and others. Now, a company called Astropad is building out a remote desktop solution specifically for this use case.
On Tuesday, Astropad CEO Matt Ronge introduced Astropad Workbench, a remote desktop solution for Apple devices that he pitches as made “for the AI era.”
While an AI agent running on a Mac Mini may not need a screen, its operator (the human) will want to log in at times to see what’s happening in order to check logs, monitor outputs, or restart stuck tasks, he says.
The new remote desktop solution offers a variety of features, including high-fidelity streaming; the ability to dictate prompts and commands with your voice; plus support for other input methods like the keyboard, Apple Pencil, or touch; and clients for both the iPad and iPhone — the latter essentially putting the remote desktop solution into your pocket for on-the-go access.
If you’re running AI agents across multiple Macs, Workbench offers a device chooser so you can move between them.
The idea came about because it was something the team at Astropad had wanted for themselves, as had their friends.
“We have heavily adopted AI at Astropad, and we’ve been using agents. And sometimes, you have an agent running on a long task, and you want to check on it,” says Ronge. “There’s not a great way to do this…there were existing remote desktop tools, but nothing built specifically for this,” he continues. “There have also been ways where you can use a terminal, or there are things like Telegram chats, but they’re limited. I mean, there are times you’ve got to see what’s happening on your Mac. You’ve got to approve a dialog or save something, or just visually see what’s happening.”
Workbench also leverages the company’s proprietary, low-latency display protocol, called LIQUID, which supports the workflows creative professionals use. It retains full fidelity, even at Retina resolutions, Astropad claims, without blurring lines or pixelating data. The protocol already powers Astropad’s other products, like Luna Display, which turns your iPad into a second display, and Astropad Studio, which lets you use an iPad as a professional drawing tablet.
While monitoring an AI agent may not always need a high-fidelity solution, Ronge points out that it’s something that’s nice to have — especially if you’re approving designs or mock-ups your AI agent made.
Of course, remote desktop software has existed for some time, meaning Astropad has well-established rivals like Jump Desktop, RustDesk, AnyDesk, Parsec, VNC-based solutions, and many more.
But Ronge suggests that those weren’t designed for the specific needs of using remote desktop software to keep tabs on AI agents. With Workbench, it’s easy to check on the status of logs to see your AI agents’ progress in order to spot issues, restart stalled jobs, and make other changes, but what’s more, you can do this from your iPhone or iPad.
“We’ve been doing iPad stuff for years — it’s been, like, our whole company for the past 10 years. So we have a lot of experience in making good iPad apps,” Ronge says. “We know how to make good iOS apps…so we did that, and then we also added a voice model.”
The tech uses Apple’s voice model so you can talk to your phone and direct your AI agent to do something with a press of the microphone button.
“It’s a very natural way to work with agents. That’s the kind of feature that existing remote desktop [apps] just don’t have — they’re built for more traditional, enterprise-style remote desktop.”
As with any new release, some bugs and polishing remain, but the team is continuing to work on the product. Next up, it plans to launch Windows and Linux support and refine the iPhone app.
The new software runs on macOS 15 and up and iOS 26, and is available as a free download offering 20 minutes of access per day. For unlimited access, the cost is $10 per month, or $50 per year.
Astropad, a bootstrapped and profitable small tech business, has over 100,000 customers, including those who have bought its iPad hardware accessories and its software. With Workbench, Ronge believes the company has the potential to reach both AI enthusiasts and businesses as remote support for AI agents becomes more common.
“I totally think businesses are gonna buy it. I mean, just the productivity gains I’m seeing from it myself — this is totally headed to businesses. It’s just too powerful,” he notes.
Kyle McGinley graduated from high school in 2018 and, like many teenagers, he was unsure what career he wanted to pursue. Recuperating from a sports injury led him to consider becoming a physical therapist for athletes. But he was skilled at repairing cars and fixing things around the house, so he thought about becoming an engineer, like his father.
McGinley, who lives in Sellersville, Pa., took some classes at Montgomery County Community College in Blue Bell, while also working. During his years at the college, he took a variety of courses and was drawn to electrical engineering and computing, he says. He left to pursue a bachelor’s degree in electrical and computer engineering in Philadelphia at Temple University, where he is currently a junior.
Kyle McGinley
MEMBER GRADE
Student member
UNIVERSITY
Temple, in Philadelphia
MAJOR
Electrical and computer engineering
The 26-year-old is also a teaching assistant and a research assistant at Temple. His research focuses on applying artificial intelligence to electrical hardware and robotics. He helped build an AI-integrated android companion to assist in-home caregivers.
Temple recognized McGinley’s efforts last year with its Butz scholarship, which is awarded annually to an electrical and computer engineering undergraduate with an interest in software development, AI development systems, health education software, or a similar field.
An IEEE student member, he is active within the university’s student branch.
“My career ambition after I graduate is to gain real-world experience in the engineering industry to learn skills outside of academia,” he says. “Long term, I want to do project management or work in a technical lead role, with the primary goal of creating impactful projects that I can be proud of.”
Building a robot aide
McGinley is a teaching assistant for his digital circuit design course. In a class of 35 students, it can be a struggle for some to digest the professor’s words, he says.
“My job is to answer students’ questions if they are having problems following the professor’s lecture or are confused about any of the topics,” he says. “In the lab, I help students debug code or with hardware issues they have on the FPGA [field-programmable gate array] boards.”
He also conducts research for the university’s Computer Fusion Lab under the supervision of IEEE Senior Member Li Bai, a professor of electrical and computer engineering. McGinley writes software programs at the lab.
“I realized the need for this with my grandmother, when she was taking care of my grandfather,” he says. “It was a lot for her, trying to remember everything.”
Using the latest software and hardware, he and three classmates rebuilt an older lab robot. They installed an operating system and used Python and C++ for its control, perception, and behavior, he says. The students also incorporated Google’s Gemini AI to help with routine tasks such as scheduling medication reminders and setting alarms for upcoming doctor visits.
The AI-integrated android was intended to assist, not replace, the caregivers by handling the mental load of remembering tasks, he says.
“This was one of the cool things that drew me to working in the robotics field,” he says. “Something where AI could be used to help caregivers do simple tasks.”
The benefits of a student branch
McGinley joined Temple’s IEEE student branch last year after one of his professors offered extra credit to students who did so. After attending meetings and participating in a few workshops, he found he really liked the club, he says, adding that he made new friends and enjoyed the camaraderie with other engineering students.
After the student branch’s board members got to know McGinley better, they asked him to become the club’s historian and manage its social media account. He also helps with event planning, creating and posting fliers, taking pictures, and shooting videos of the gatherings.
The branch has benefited from McGinley’s involvement, but he says it’s a two-way street.
“The biggest things I’ve learned are being held accountable and being reliable,” he says. “I am responsible for other people knowing what’s going on.”
Being an active volunteer has improved his communication skills, he says.
“Learning to clearly communicate with other people to make sure everyone is on the same page is important,” he says. “In school, they don’t teach you how to communicate with people. They only teach you how to remember stuff. Working well with people is one of the most underrated skills that a lot of students don’t understand is important.”
“I know it can be scary because you might not know anyone, but it honestly can’t hurt you; it could actually benefit you,” he says. “Being active is going to help you with a lot of skills that you need.
“You’ll definitely get opportunities that you would have never known about, like a scholarship or working in the research lab. I would have never gotten these opportunities if I hadn’t shown up. Joining IEEE and being active is the best thing you can do for your career.”
In short: Intel has signed on as the primary foundry partner for Elon Musk’s Terafab, a $25 billion joint venture between Tesla, SpaceX, and xAI targeting a terawatt of AI compute per year, handing the struggling chip giant the marquee customer it has been searching for since pivoting to a foundry-first strategy.
On 7 April 2026, Intel announced it is joining the Terafab project, becoming the foundry partner for the most ambitious semiconductor facility ever proposed in the United States. The announcement came two weeks after Musk first unveiled Terafab at the North Campus of Giga Texas in Austin, a joint venture between Tesla, SpaceX, and xAI that claims it will produce one terawatt of AI compute every year. Intel’s role is to contribute its most advanced process node, packaging expertise, and manufacturing scale to make that claim real. For Intel chief executive Lip-Bu Tan, who has spent the past year attempting to rebuild Intel around an external foundry business, the deal is the most significant external customer win the company has landed since he took the job.
What Terafab is claiming to build
Terafab is designed as a vertically integrated semiconductor complex, covering chip design, lithography, fabrication, memory production, advanced packaging, and testing under a single roof, with a stated goal of producing between 100 billion and 200 billion custom AI and memory chips per year. The initial buildout targets 100,000 wafer starts per month, with ambitions to eventually scale to one million wafer starts per month at full capacity. The project involves two separate facilities on the Giga Texas campus: one dedicated to chips for automotive and humanoid robotics applications, including Tesla’s Full Self-Driving system, its Cybercab robotaxi programme, and the Optimus robot line; and a second for high-performance AI data centre infrastructure and specialised processors for orbital deployments.
That orbital component is central to the project’s rationale. SpaceX, which completed its acquisition of xAI in an all-stock deal in February 2026, creating a combined entity valued at approximately $1.25 trillion, is building out a constellation of space-based AI satellites internally designated AI Sat Mini. Musk has said 80% of Terafab’s compute output will be directed toward that orbital infrastructure, with the remaining 20% for ground-based applications. The full cost of the project has been cited as between $20 billion and $25 billion, though independent analysts have been sharply sceptical of whether that figure is remotely sufficient to meet the stated production targets. A note from Bernstein Research estimated the true capital required to hit one terawatt of annual compute at approximately $5 trillion, more than 70% of the total annual United States federal budget.
Intel will contribute its 18A process node, the company’s most advanced logic manufacturing technology, currently ramping to high-volume production at Intel’s fabrication plants in Arizona and Oregon. Intel’s 18A is a 1.8-nanometre-class node, placing it in the same tier as the most advanced processes currently entering commercial production globally, and it represents the most sophisticated semiconductor capability manufactured entirely within the United States. Intel’s statement on joining Terafab was direct: “Intel is proud to join the Terafab project with SpaceX, xAI, and Tesla to help refactor silicon fab technology.” The company added: “Our ability to design, fabricate, and package ultra-high-performance chips at scale will help accelerate Terafab’s aim to produce 1 TW/year of compute to power future advances in AI and robotics.”
Tan’s post on X was more personal in its framing. “Elon has a proven track record of reimagining entire industries,” he wrote. “This is exactly what is needed in semiconductor manufacturing today. Terafab represents a step change in how silicon logic, memory and packaging will get built in the future. Intel is proud to be a partner.” Intel’s shares rose approximately 4% on the announcement, closing at $52.91. The market reaction reflects how significant the deal is for Intel’s foundry ambitions: in its most recent full year, Intel Foundry generated just $307 million in external customer revenue, a figure that makes the company a distant also-ran against Taiwan Semiconductor Manufacturing Company, which generates tens of billions annually from external customers. Terafab, if even partially realised, would transform Intel Foundry’s commercial profile entirely.
Intel’s recovery, and what this bet requires
Tan inherited an Intel in acute crisis. The company had lost ground to TSMC and AMD across almost every major product category, its own manufacturing roadmap had slipped repeatedly, and its foundry business, the effort to manufacture chips for external customers as TSMC does, had attracted little meaningful interest beyond government-supported contracts under the US CHIPS and Science Act. Tan’s restructuring has been aggressive: thousands of redundancies, a sharper focus on Intel’s 18A and 14A process nodes as the foundation of the foundry pitch, and a deliberate effort to position Intel’s domestic manufacturing capability as a geopolitical differentiator at a moment when US policymakers are intensely focused on reducing dependence on Taiwanese chipmaking.
Terafab is the clearest expression yet of where that pitch lands. The CHIPS Act tailwinds, the Trump administration’s desire to see advanced semiconductor production in the United States, and the specific demand Musk’s companies represent for high-volume, US-manufactured chips at the leading edge all converge in this partnership. Whether Intel’s 18A can deliver at the yields and volumes Terafab’s targets require is a separate question. The node has been in development for several years and is only now entering volume ramp; the gap between a controlled high-volume manufacturing ramp and the production scales Terafab envisions remains very large. Chipmakers building the largest foundries in the world require several years of construction and billions of dollars before the first wafer is processed. The scale of capital commitments now characterising AI infrastructure investment gives some context for what serious execution at Terafab’s claimed targets would actually require.
The credibility problem Terafab has not solved
The scepticism around Terafab is structural, not merely financial. Building a 2nm-class fabrication facility capable of 100,000 wafer starts per month costs roughly $25-35 billion on its own, according to Tom’s Hardware’s analysis of Bernstein’s research, meaning the entire stated Terafab budget is roughly enough to build a single fab operating at a fraction of the claimed full-capacity scale. Reaching one million wafer starts per month would require dozens of such facilities. The $20-25 billion figure appears to represent initial construction capital for the first phase, rather than the cost of the stated ambition.
There is also the question of the companies at the table. SpaceX-xAI’s internal situation has been turbulent: all 11 of xAI’s original co-founders have now left the company since the SpaceX acquisition, a rate of attrition that has raised questions about the organisation’s technical continuity. Musk’s companies have a documented history of announcing timelines for facilities and products that subsequently stretch by years. Tesla’s Cybertruck, Optimus, and Full Self-Driving have each missed multiple committed dates without affecting the company’s willingness to make new commitments. None of this disqualifies Terafab; Musk’s companies have also delivered on goals that were widely dismissed, most notably SpaceX’s orbital launch programme. But it does establish why analysts are not taking the one-terawatt headline at face value.
What the partnership means for the chip industry
Intel’s arrival at Terafab lands at a moment when the chip industry is navigating a broader restructuring of who makes what and for whom. The rise of custom AI silicon (Amazon’s Trainium, Google’s TPUs, Microsoft’s Maia) has been eating into the share of AI workloads that run on Nvidia hardware. Nvidia’s response has been to open its NVLink Fusion interconnect to third-party silicon, including Marvell’s custom AI accelerators, a strategy designed to keep custom chip buyers inside Nvidia’s ecosystem even as they move off pure Nvidia hardware. Terafab represents something different: a vertically integrated attempt to produce custom silicon at a scale that has no precedent outside of the established foundry giants. If the project proceeds anywhere near its stated ambitions, it would add a third major domestic US semiconductor manufacturing ecosystem to a landscape currently dominated by TSMC’s Arizona expansion and Samsung’s Texas operations.
For Intel, the strategic logic is clear. As hyperscalers and technology companies increasingly pilot non-Nvidia chips for AI training and inference workloads, the market for foundry services from a domestically situated, leading-edge manufacturer is growing precisely when Intel has positioned itself to serve it. Whether Terafab is the vehicle that finally validates that positioning, or another ambitious announcement that tests the distance between Musk’s projections and physical reality, will become clearer as construction begins and wafer starts are counted rather than promised. The capital flowing into AI infrastructure at this scale has a way of turning implausible timelines into achieved ones, and Intel, for the first time in years, is positioned to benefit if this one does.
We still don’t know what the TV looks like (though we assume it’ll look like any other Sony TV), and there’s no word on pricing yet, but we do know there will be multiple TVs, as the press alert refers to “Bravia TVs”. On top of that, you won’t have too long to wait: they’re set for release this spring.
Decades in the making
This latest release provides a nugget more of information, coming just a few days after Sony and TCL came to an agreement over the new TV venture they’ve established together, rather nicely called Bravia Inc.
Sony comments that its True RGB technology intends to set a new benchmark for RGB LED picture performance. Unlike conventional approaches to the technology, True RGB is said to use independently controlled red, green, and blue light sources (diodes) that can apparently deliver “purer colour, greater brightness, and the largest colour volume ever achieved in Sony’s home TV history”.
“True RGB” is the name Sony has given to the proprietary display technology powering its upcoming televisions, and the company bills it as a breakthrough. By combining individual RGB LEDs with the strengths of both Mini LED and OLED in one TV, we’re potentially looking at the ultimate TV viewing experience.
Sony’s hope with its True RGB technology is that picture quality looks more natural, more three dimensional, and more accurate, whether you’re viewing in a bright living room or otherwise.
What makes Sony’s True RGB tick is the “proprietary optical structure and precision backlight control” that’s driven by a new RGB backlight driver. You can add “faithful colour reproduction from wider viewing angles” to the list of plusses that Sony’s True RGB backlight is bringing to the table.
Sony says that its True RGB is the culmination of more than 20 years of its “innovation in LED control”, from the first RGB light sources introduced in the QUALIA 005 in 2004, through to the flagship and much-praised Backlight Master Drive that launched in 2016.
Much like how James Bond will return, additional details will be shared in the “near future”. Trusted Reviews has been invited to a Sony Home Cinema event in May, where it looks likely we’ll be seeing the future of Sony’s TVs. Will it be a brighter, more colourful one?
“Anthropic has unveiled Claude Mythos, a new AI model capable of discovering critical vulnerabilities at scale,” writes Slashdot reader wiredmikey. “It’s already powering Project Glasswing, a joint effort with major tech firms to secure critical software. But the same capabilities could also accelerate offensive cyber operations.” SecurityWeek reports: Mythos is not an incremental improvement but a step change in performance over Anthropic’s current range of frontier models: Haiku (smallest), Sonnet (middle ground), and Opus (most powerful). Mythos sits in a fourth tier named Copybara, and Anthropic describes it as superior to any other existing AI frontier model. It embraces the current trend in AI: agentic operation. “The powerful cyber capabilities of Claude Mythos Preview are a result of its strong agentic coding and reasoning skills… the model has the highest scores of any model yet developed on a variety of software coding tasks,” notes Anthropic in a blog titled Project Glasswing — Securing critical software for the AI era.
In the last few weeks, Mythos Preview has identified thousands of zero-day vulnerabilities, with many classified as critical. Several are 10 or 20 years old; the oldest found so far is a 27-year-old bug in OpenBSD. Elsewhere, a 16-year-old vulnerability found in video software had survived five million hits from other automated testing tools without ever being discovered. And it autonomously found and chained together several vulnerabilities in the Linux kernel, allowing an attacker to escalate from ordinary user access to complete control of the machine. […] Anthropic is concerned that Mythos’ capabilities could unleash cyberattacks too fast and too sophisticated for defenders to block. It hopes that Mythos can be used to improve cybersecurity generally before malicious actors can get access to it.
To this end, the firm has announced the next stage of this preparation as Project Glasswing, powered by Mythos Preview. Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely. “Project Glasswing is a starting point. No one organization can solve these cybersecurity problems alone: frontier AI developers, other software companies, security researchers, open-source maintainers, and governments across the world all have essential roles to play.” Claude Mythos Preview is described as a general-purpose, unreleased frontier model from Anthropic that has nevertheless completed its training phase. The firm does not plan to make Mythos Preview generally available. The implication is that ‘Preview’ is a term used solely to describe the current state of Mythos and the market’s readiness to receive it, and will be dropped when the firm gets closer to general release.