OpenAI on Monday released a new desktop application for its Codex artificial intelligence coding system, a tool the company says transforms software development from a collaborative exercise with a single AI assistant into something more akin to managing a team of autonomous workers.
The Codex app for macOS functions as what OpenAI executives describe as a “command center for agents,” allowing developers to delegate multiple coding tasks simultaneously, automate repetitive work, and supervise AI systems that can run for up to 30 minutes independently before returning completed code.
“This is the most loved internal product we’ve ever had,” Sam Altman, OpenAI’s chief executive, told VentureBeat in a press briefing ahead of Monday’s launch. “It’s been totally an amazing thing for us to be using recently at OpenAI.”
The release arrives at a pivotal moment for the enterprise AI market. According to a survey of 100 Global 2000 companies published last week by venture capital firm Andreessen Horowitz, 78% of enterprise CIOs now use OpenAI models in production, though competitors Anthropic and Google are gaining ground rapidly. Anthropic posted the largest share increase of any frontier lab since May 2025, growing 25% in enterprise penetration, with 44% of enterprises now using Anthropic in production.
The timing of OpenAI’s Codex app launch — with its focus on professional software engineering workflows — appears designed to defend the company’s position in what has become the most contested segment of the AI market: coding tools.
Why developers are abandoning their IDEs for AI agent management
The Codex app introduces a fundamentally different approach to AI-assisted coding. While previous tools like GitHub Copilot focused on autocompleting lines of code in real-time, the new application enables developers to “effortlessly manage multiple agents at once, run work in parallel, and collaborate with agents over long-running tasks.”
Alexander Embiricos, the product lead for Codex, explained the evolution during the press briefing by tracing the product’s lineage back to 2021, when OpenAI first introduced a model called Codex that powered GitHub Copilot.
“Back then, people were using AI to write small chunks of code in their IDEs,” Embiricos said. “GPT-5 in August last year was a big jump, and then 5.2 in December was another massive jump, where people started doing longer and longer tasks, asking models to do work end to end. So what we saw is that developers, instead of working closely with the model, pair coding, they started delegating entire features.”
The shift has been so profound that Altman said he recently completed a substantial coding project without ever opening a traditional integrated development environment.
“I was astonished by this…I did this fairly big project in a few days earlier this week and over the weekend. I did not open an IDE during the process. Not a single time,” Altman said. “I did look at some code, but I was not doing it the old-fashioned way, and I did not think that was going to be happening by now.”
How skills and automations extend AI coding beyond simple code generation
The Codex app introduces several new capabilities designed to extend AI coding beyond writing lines of code. Chief among these are “Skills,” which bundle instructions, resources, and scripts so that Codex can “reliably connect to tools, run workflows, and complete tasks according to your team’s preferences.”
The app includes a dedicated interface for creating and managing skills, and users can explicitly invoke specific skills or allow the system to automatically select them based on the task at hand. OpenAI has published a library of skills for common workflows, including tools to fetch design context from Figma, manage projects in Linear, deploy web applications to cloud hosts like Cloudflare and Vercel, generate images using GPT Image, and create professional documents in PDF, spreadsheet, and Word formats.
To demonstrate the system’s capabilities, OpenAI asked Codex to build a racing game from a single prompt. Using an image generation skill and a web game development skill, Codex built the game by working independently using more than 7 million tokens with just one initial user prompt, taking on “the roles of designer, game developer, and QA tester to validate its work by actually playing the game.”
The company has also introduced “Automations,” which let developers schedule Codex to run recurring work in the background. “When an Automation finishes, the results land in a review queue so you can jump back in and continue working if needed.”
Thibault Sottiaux, who leads the Codex team at OpenAI, described how the company uses these automations internally: “We’ve been using Automations to handle the repetitive but important tasks, like daily issue triage, finding and summarizing CI failures, generating daily release briefs, checking for bugs, and more.”
The app also includes built-in support for “worktrees,” allowing multiple agents to work on the same repository without conflicts. “Each agent works on an isolated copy of your code, allowing you to explore different paths without needing to track how they impact your codebase.”
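The isolation model described here mirrors Git's native worktrees, where each checkout lives in its own directory but shares a single underlying repository. OpenAI hasn't published the app's internals, so the sketch below is purely illustrative of that underlying Git feature (the repository layout, branch names, and file names are invented for the example):

```python
import pathlib
import subprocess
import tempfile

def run(*args, cwd):
    """Run a git command, raising on failure."""
    subprocess.run(args, cwd=cwd, check=True, capture_output=True)

# Set up a throwaway repository with one committed file.
parent = pathlib.Path(tempfile.mkdtemp())
repo = parent / "repo"
repo.mkdir()
run("git", "init", "-q", cwd=repo)
run("git", "config", "user.email", "dev@example.com", cwd=repo)
run("git", "config", "user.name", "dev", cwd=repo)
(repo / "app.txt").write_text("v1\n")
run("git", "add", "app.txt", cwd=repo)
run("git", "commit", "-qm", "initial", cwd=repo)

# Two isolated worktrees from the same repository, one per "agent",
# each on its own branch.
run("git", "worktree", "add", "-q", str(parent / "agent-a"), "-b", "feature-a", cwd=repo)
run("git", "worktree", "add", "-q", str(parent / "agent-b"), "-b", "feature-b", cwd=repo)

# A change in one worktree leaves the other untouched.
(parent / "agent-a" / "app.txt").write_text("change from agent A\n")
print((parent / "agent-b" / "app.txt").read_text())  # still "v1"
```

Because each worktree is a separate directory with its own checked-out branch, two agents can edit the "same" file simultaneously without conflicting until someone chooses to merge.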
OpenAI battles Anthropic and Google for control of enterprise AI spending
The launch comes as enterprise spending on AI coding tools accelerates dramatically. According to the Andreessen Horowitz survey, average enterprise AI spend on large language models has risen from approximately $4.5 million to $7 million over the last two years, with enterprises expecting growth of another 65% this year to approximately $11.6 million.
Leadership in the enterprise AI market varies significantly by use case. OpenAI dominates “early, horizontal use cases like general purpose chatbots, enterprise knowledge management and customer support,” while Anthropic leads in “software development and data analysis, where CIOs consistently cite rapid capability gains since the second half of 2024.”
When asked during the press briefing how Codex differentiates from Anthropic’s Claude Code, which has been described as having its “ChatGPT moment,” Sottiaux emphasized OpenAI’s focus on model capability for long-running tasks.
“One of the things that our models are extremely good at—they really sit at the frontier of intelligence and doing reliable work for long periods of time,” Sottiaux said. “This is also what we’re optimizing this new surface to be very good at, so that you can start many parallel agents and coordinate them over long periods of time and not get lost.”
Altman added that while many tools can handle “vibe coding front ends,” OpenAI’s 5.2 model remains “the strongest model by far” for sophisticated work on complex systems.
“Taking that level of model capability and putting it in an interface where you can do what Thibault was saying, we think is going to matter quite a bit,” Altman said. “That’s probably the, at least listening to users and sort of looking at the chatter on social that’s that’s the single biggest differentiator.”
The surprising constraint on AI progress: how fast humans can type
The philosophical underpinning of the Codex app reflects a view that OpenAI executives have been articulating for months: that human limitations — not AI capabilities — now constitute the primary constraint on productivity.
In a December appearance on Lenny’s Podcast, Embiricos described human typing speed as “the current underappreciated limiting factor” to achieving artificial general intelligence. The logic: if AI can perform complex coding tasks but humans can’t write prompts or review outputs fast enough, progress stalls.
The Codex app attempts to address this by enabling what the team calls an “abundance mindset” — running multiple tasks in parallel rather than perfecting single requests. During the briefing, Embiricos described how power users at OpenAI work with the tool.
“Last night, I was working on the app, and I was making a few changes, and all of these changes are able to run in parallel together. And I was just sort of going between them, managing them,” Embiricos said. “Behind the scenes, all these tasks are running on something called Git worktrees, which means that the agents are running independently, and you don’t have to manage them.”
In the Sequoia Capital podcast “Training Data,” Embiricos elaborated on this mindset shift: “The mindset that works really well for Codex is, like, kind of like this abundance mindset and, like, hey, let’s try anything. Let’s try anything even multiple times and see what works.” He noted that when users run 20 or more tasks in a day or an hour, “they’ve probably understood basically how to use the tool.”
Building trust through sandboxes: how OpenAI secures autonomous coding agents
OpenAI has built security measures into the Codex architecture from the ground up. The app uses “native, open-source and configurable system-level sandboxing,” and by default, “Codex agents are limited to editing files in the folder or branch where they’re working and using cached web search, then asking for permission to run commands that require elevated permissions like network access.”
Embiricos elaborated on the security approach during the briefing, noting that OpenAI has open-sourced its sandbox technology.
“Codex has this sandbox that we’re actually incredibly proud of, and it’s open source, so you can go check it out,” Embiricos said. The sandbox “basically ensures that when the agent is working on your computer, it can only make writes in a specific folder that you want it to make writes into, and it doesn’t access the network without permission.”
The system also includes a granular permission model that allows users to configure persistent approvals for specific actions, avoiding the need to repeatedly authorize routine operations. “If the agent wants to do something and you find yourself annoyed that you’re constantly having to approve it, instead of just saying, ‘All right, you can do everything,’ you can just say, ‘Hey, remember this one thing — I’m actually okay with you doing this going forward,’” Embiricos explained.
Altman emphasized that the permission architecture signals a broader philosophy about AI safety in agentic systems.
“I think this is going to be really important. I mean, it’s been so clear to us using this, how much you want it to have control of your computer, and how much you need it,” Altman said. “And the way the team built Codex such that you can sensibly limit what’s happening and also pick the level of control you’re comfortable with is important.”
He also acknowledged the dual-use nature of the technology. “We do expect to get to our internal cybersecurity high moment of our models very soon. We’ve been preparing for this. We’ve talked about our mitigation plan,” Altman said. “A real thing for the world to contend with is going to be defending against a lot of capable cybersecurity threats using these models very quickly.”
The same capabilities that make Codex valuable for fixing bugs and refactoring code could, in the wrong hands, be used to discover vulnerabilities or write malicious software—a tension that will only intensify as AI coding agents become more capable.
From Android apps to research breakthroughs: how Codex transformed OpenAI’s own operations
Perhaps the most compelling evidence for Codex’s capabilities comes from OpenAI’s own use of the tool. Sottiaux described how the system has accelerated internal development.
“A Sora Android app is an example of that, where four engineers shipped it in only 18 days internally, and then within the month we gave access to the world,” Sottiaux said. “I had never seen such speed at this scale before.”
Beyond product development, Sottiaux described how Codex has become integral to OpenAI’s research operations.
“Codex is really involved in all parts of the research — making new data sets, investigating its own screening runs,” he said. “When I sit in meetings with researchers, they all send Codex off to do an investigation while we’re having a chat, and then it will come back with useful information, and we’re able to debug much faster.”
The tool has also begun contributing to its own development. “Codex also is starting to build itself,” Sottiaux noted. “There’s no screen within the Codex engineering team that doesn’t have Codex running six, eight, ten tasks at a time.”
When asked whether this constitutes evidence of “recursive self-improvement” — a concept that has long concerned AI safety researchers — Sottiaux was measured in his response.
“There is a human in the loop at all times,” he said. “I wouldn’t necessarily call it recursive self-improvement, a glimpse into the future there.”
Altman offered a more expansive view of the research implications.
“There’s two parts of what people talk about when they talk about automating research to a degree where you can imagine that happening,” Altman said. “One is, can you write software, extremely complex infrastructure software, to run training jobs across hundreds of thousands of GPUs and babysit them? And the second is, can you come up with the new scientific ideas that make algorithms more efficient?”
He noted that OpenAI is “seeing early but promising signs on both of those.”
The end of technical debt? AI agents take on the work engineers hate most
One of the more unexpected applications of Codex has been addressing technical debt — the accumulated maintenance burden that plagues most software projects.
Altman described how AI coding agents excel at the unglamorous work that human engineers typically avoid.
“The kind of work that human engineers hate to do — go refactor this, clean up this code base, rewrite this, write this test — this is where the model doesn’t care. The model will do anything, whether it’s fun or not,” Altman said.
He reported that some infrastructure teams at OpenAI that “had sort of like, given up hope that you were ever really going to long term win the war against tech debt, are now like, we’re going to win this, because the model is going to constantly be working behind us, making sure we have great test coverage, making sure that we refactor when we’re supposed to.”
The observation speaks to a broader theme that emerged repeatedly during the briefing: AI coding agents don’t experience the motivational fluctuations that affect human programmers. As Altman noted, a team member recently observed that “the hardest mental adjustment to make about working with these sort of like AI coding teammates, unlike a human, is the models just don’t run out of dopamine. They keep trying. They don’t run out of motivation. They don’t get, you know, they don’t lose energy when something’s not working. They just keep going and, you know, they figure out how to get it done.”
What the Codex app costs and who can use it starting today
The Codex app launches today on macOS and is available to anyone with a ChatGPT Plus, Pro, Business, Enterprise, or Edu subscription. Usage is included in ChatGPT subscriptions, with the option to purchase additional credits if needed.
In a promotional push, OpenAI is temporarily making Codex available to ChatGPT Free and Go users “to help more people try agentic workflows.” The company is also doubling rate limits for existing Codex users across all paid plans during this promotional period.
The pricing strategy reflects OpenAI’s determination to establish Codex as the default tool for AI-assisted development before competitors can gain further traction. More than a million developers have used Codex in the past month, and usage has nearly doubled since the launch of GPT-5.2-Codex in mid-December, building on more than 20x usage growth since August 2025.
Customers using Codex include large enterprises like Cisco, Ramp, Virgin Atlantic, Vanta, Duolingo, and Gap, as well as startups like Harvey, Sierra, and Wonderful. Individual developers have also embraced the tool: Peter Steinberger, creator of OpenClaw, built the project entirely with Codex and reports that since fully switching to the tool, his productivity has roughly doubled across more than 82,000 GitHub contributions.
OpenAI’s ambitious roadmap: Windows support, cloud triggers, and continuous background agents
OpenAI outlined an aggressive development roadmap for Codex. The company plans to make the app available on Windows, continue pushing “the frontier of model capabilities,” and roll out faster inference.
Within the app, OpenAI will “keep refining multi-agent workflows based on real-world feedback” and is “building out Automations with support for cloud-based triggers, so Codex can run continuously in the background—not just when your computer is open.”
The company also announced a new “plan mode” feature that allows Codex to read through complex changes in read-only mode, then discuss with the user before executing. “This means that it lets you build a lot of confidence before, again, sending it to do a lot of work by itself, independently, in parallel to you,” Embiricos explained.
Additionally, OpenAI is introducing customizable personalities for Codex. “The default personality for Codex has been quite terse. A lot of people love it, but some people want something more engaging,” Embiricos said. Users can access the new personalities using the /personality command.
Altman also hinted at future integration with ChatGPT’s broader ecosystem.
“There will be all kinds of cool things we can do over time to connect people’s ChatGPT accounts and leverage sort of all the history they’ve built up there,” Altman said.
Microsoft still dominates enterprise AI, but the window for disruption is open
The Codex app launch occurs as most enterprises have moved beyond single-vendor strategies. According to the Andreessen Horowitz survey, “81% now use three or more model families in testing or production, up from 68% less than a year ago.”
Despite the proliferation of AI coding tools, Microsoft continues to dominate enterprise adoption through its existing relationships. “Microsoft 365 Copilot leads enterprise chat though ChatGPT has closed the gap meaningfully,” and “GitHub Copilot is still the coding leader for enterprises.” The survey found that “65% of enterprises noted they preferred to go with incumbent solutions when available,” citing trust, integration, and procurement simplicity.
However, the survey also suggests significant opportunity for challengers: “Enterprises consistently say they value faster innovation, deeper AI focus, and greater flexibility paired with cutting edge capabilities that AI native startups bring.”
OpenAI appears to be positioning Codex as a bridge between these worlds. “Codex is built on a simple premise: everything is controlled by code,” the company stated. “The better an agent is at reasoning about and producing code, the more capable it becomes across all forms of technical and knowledge work.”
The company’s ambition extends beyond coding. “We’ve focused on making Codex the best coding agent, which has also laid the foundation for it to become a strong agent for a broad range of knowledge work tasks that extend beyond writing code.”
When asked whether AI coding tools could eventually move beyond early adopters to become mainstream, Altman suggested the transition may be closer than many expect.
“Can it go from vibe coding to serious software engineering? That’s what this is about,” Altman said. “I think we are over the bar on that. I think this will be the way that most serious coders do their job — and very rapidly from now.”
He then pivoted to an even bolder prediction: that code itself could become the universal interface for all computer-based work.
“Code is a universal language to get computers to do what you want. And it’s gotten so good that I think, very quickly, we can go not just from vibe coding silly apps but to doing all the non-coding knowledge work,” Altman said.
At the close of the briefing, Altman urged journalists to try the product themselves: “Please try the app. There’s no way to get this across just by talking about it. It’s a crazy amount of power.”
For developers who have spent careers learning to write code, the message was clear: the future belongs to those who learn to manage the machines that write it for them.
Researchers are warning about the risks posed by a low-cost device that can give insiders and hackers unusually broad powers in compromising networks.
The devices, which typically sell for $30 to $100, are known as IP KVMs. Administrators often use them to remotely access machines on networks. The devices, not much bigger than a deck of cards, allow the machines to be accessed at the BIOS/UEFI level, the firmware that runs before the loading of the operating system.
This provides power and convenience to admins, but in the wrong hands, the capabilities can often torpedo what might otherwise be a secure network. Risks are posed when the devices—which are exposed to the Internet—are deployed with weak security configurations or surreptitiously connected to by insiders. Firmware vulnerabilities also leave them open to remote takeover.
No exotic zero-days here
On Tuesday, researchers from security firm Eclypsium disclosed a total of nine vulnerabilities in IP KVMs from four manufacturers. The most severe flaws allow unauthenticated hackers to gain root access or run malicious code on them.
“These are not exotic zero-days requiring months of reverse engineering,” Eclypsium researchers Paul Asadoorian and Reynaldo Vasquez Garcia wrote. “These are fundamental security controls that any networked device should implement. Input validation. Authentication. Cryptographic verification. Rate limiting. We are looking at the same class of failures that plagued early IoT devices a decade ago, but now on a device class that provides the equivalent of physical access to everything it connects to.”
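To make one of those missing controls concrete, here is a minimal token-bucket rate limiter of the kind a networked device could apply to its login endpoint. This is a generic sketch, not code from any KVM vendor; the class name, rates, and parameters are all invented for illustration:

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: allow short bursts, then throttle.

    Tokens refill continuously at `rate_per_sec`, capped at `burst`.
    Each allowed request consumes one token.
    """

    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec
        self.capacity = burst
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill tokens based on time elapsed since the last check.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# Example: allow a burst of 3 login attempts, then refill at 1/sec.
bucket = TokenBucket(rate_per_sec=1.0, burst=3)
results = [bucket.allow() for _ in range(5)]
print(results)  # first 3 attempts pass, the next 2 are throttled
```

Even a sketch this small defeats naive online credential brute-forcing, which is part of why its absence on internet-exposed management hardware is notable.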
Arizona Attorney General Kris Mayes has filed criminal charges against prediction market platform Kalshi, for allegedly operating an illegal gambling business in the state without a license and for election wagering.
The 20-count complaint, filed in Maricopa County court on Tuesday, accuses the company of engaging in unlicensed gambling activities, claiming that the site “accepted bets from Arizona residents on a wide range of events,” including state elections, a practice that is illegal in Arizona. The complaint charged Kalshi with four counts of election wagering for accepting bets from Arizona residents on the 2028 presidential race, the 2026 Arizona gubernatorial race, the 2026 Arizona Republican gubernatorial primary, and the 2026 Arizona Secretary of State race.
This is the first time a state has pursued such charges against the company, according to the Arizona Mirror, and marks a significant escalation in the battle between states and the prediction market industry.
“Kalshi may brand itself as a ‘prediction market,’ but what it’s actually doing is running an illegal gambling operation and taking bets on Arizona elections, both of which violate Arizona law,” Attorney General Mayes said in a statement. “No company gets to decide for itself which laws to follow.”
It’s worth noting that the charges are technically misdemeanors. They follow a small surge of cease-and-desist letters, lawsuits, and other official actions from states over Kalshi’s activities, in which numerous officials have complained that the company is skirting state gambling laws.
Conversely, prediction sites like Kalshi have argued that they are not in violation of state law because they are subject to federal regulation via the Commodity Futures Trading Commission.
Kalshi may be getting attacked left, right, and center, but the company has also taken its own, often preemptive, legal action.
Kalshi sued Arizona’s Department of Gaming in federal court on March 12. The company’s lawsuit argued that Arizona’s regulatory attempts were intruding “into the federal government’s exclusive authority to regulate derivatives trading on exchanges.” Kalshi also recently sued Iowa and Utah on similar grounds.
Mayes’ office argues the company is merely trying to avoid accountability.
“Kalshi is making a habit of suing states rather than following their laws. In the last three weeks alone, the company has filed lawsuits against Iowa and Utah, and now Arizona,” Mayes said in a statement. “Rather than work within the legal frameworks that states like Arizona have established, Kalshi is running to federal court to try to avoid accountability.”
Elisabeth Diana, Kalshi’s head of communications, called the Arizona criminal charges “seriously flawed” and a matter of “gamesmanship” related to the company’s own litigation against the state.
“Four days after Kalshi filed suit in federal court, these charges were filed to circumvent federal court and short-circuit the normal judicial process,” Diana said. “They attempt to prevent federal courts from evaluating the case based on the merits – whether Kalshi is subject to exclusive federal jurisdiction. These charges are meritless, and we look forward to fighting them in court.”
Federal officials have signaled that they’re on the prediction industry’s side, setting up a potential regulatory showdown between states and the federal bureaucracy. Mike Selig, chair of the Commodity Futures Trading Commission, recently published an op-ed in the Wall Street Journal in which he accused state governments of having “waged legal attacks on the CFTC’s authority to regulate” such sites. Selig also claimed that his agency would no longer “sit idly by while overzealous state governments” undermined the agency’s “exclusive jurisdiction” over the industry.
Researchers at RMIT University in Australia built a small robot shaped like a dolphin. About the size of a sneaker, the machine glides across the surface of polluted water and gathers oil with a pump mounted at the front. A filter inside separates the oil from everything else, sending only the slick into an onboard tank while the water flows away untouched.
The filter draws its clever design from sea urchins. Microscopic spikes coat the sponge-like surface, too small to see without an electron microscope. Those spikes hold pockets of air that push water aside so it beads up and rolls off. Oil, on the other hand, spreads across the spikes and soaks in right away. The coating mixes oleic acid-treated barium carbonate with thin sheets of reduced graphene oxide. No fluorine or silane chemicals go into it, which keeps the whole setup safer for the environment than many older filters.
Lab tests put the robot through its paces using blue kerosene as a stand-in for real oil. It collected about two milliliters every minute, and the liquid that ended up in the tank measured more than 95 percent pure. The filter never clogged or soaked up water. One full battery charge keeps the machine running for roughly 15 minutes. The same material can absorb between 15 and 65 times its own weight in oil, then release most of it when squeezed and return to work with over 97 percent of its original performance intact. Salt water does not corrode it, and stray contaminants rinse away easily.
Dr. Ataur Rahman, who leads the project at RMIT’s School of Engineering, described the thinking behind the build. Oil spills bring heavy costs to nature and to economies everywhere. The team wanted a device that deploys fast, steers with precision, and reaches places too dangerous for crews on boats. PhD researcher Surya Kanta Ghadei, who developed the filter material, shared what drove his part of the work. Growing up in India, he watched spills harm marine life, especially turtles. That memory pushed him to find a way for responders to act quicker and shield wildlife from harm.
Right now the robot answers to a Wi-Fi remote. A larger version, closer to the actual size of a dolphin, sits in the plans. Its exact scale will depend on the pump and the tank it carries. In that future form the machine will run without anyone steering it. It will vacuum oil from the surface, head back to a base station to empty the tank and recharge, then return to the spill and start again. The cycle keeps going until the area clears.
Engineers see clear advantages over systems that simply float in place and wait for oil to drift their way. This robot moves through the slick on its own, collecting as it goes. The filter stays dry and ready for repeated use, so crews avoid the constant swaps and messy disposal that older setups demand. Next steps include scaling up the filter area, strengthening the pump, running field trials, and checking long-term durability in open water. [Source]
The new Center Stage selfie camera is one of the best features of Apple’s iPhone 17 series — but why settle for 18MP snaps when 48MP selfies are possible?
That’s the question posed by Kickstarter case brand Dockcase, whose latest offering, the Selfix case, adds a touchscreen to the back of your iPhone 17 Pro for seamless, main camera-quality selfies.
The Selfix case features a Camera Control cutout which, when pressed, activates both the rear touchscreen and the iPhone camera via USB-C, giving you access to Apple’s full suite of rear camera features in a front-facing orientation: the 48MP main lens, 48MP ultra-wide lens, Night Mode, Portrait Mode — the works.
And if that’s not good enough, the Selfix case also features a microSD card slot supporting up to 2TB, making it a smart alternative to bulky (and expensive) external USB-C drives, even if you’re not a keen selfie snapper.
The Selfix selfie case in Satin White (Image credit: Future)
Dockcase sent me a sample to try for the iPhone 17 Pro, and while it’s definitely a hefty piece of kit — it effectively doubles the thickness of your iPhone — it does work exactly as described. The rear touchscreen makes it easy to frame up 48MP snaps, and you can navigate your phone as normal from behind with minimal lag (though you do have to switch on AssistiveTouch in the Accessibility settings menu, since the case is considered an adaptive accessory).
Beyond that setting, though, Dockcase doesn’t ask you to download any apps or sit through a complicated setup process, which makes the Selfix case easy enough to use straight out of the box.
As above, I definitely have reservations about the size of the case, and while it features a built-in magnetic ring around the screen for MagSafe grips and mounts, it’s not compatible with wireless charging accessories. That makes it, for me, a tough sell in practical terms, but having access to 4K/120fps ProRes video in a front-facing capacity might appeal to creators.
The Selfix case also addresses a problem that was perhaps more prevalent in the iPhone 16 Pro and older iPhones than it is in the iPhone 17 Pro. Apple’s new Center Stage camera features automatic subject framing and gives you the ability to switch between portrait and landscape orientations without physically turning your iPhone. It’s leagues ahead of previous iPhone selfie cameras, and I haven’t found myself wishing for a bigger front-facing sensor in my time with either the iPhone 17 Pro or iPhone Air.
The selfies I’ve been able to capture with the Selfix case are more detailed than those I’d get with the standard 18MP sensor, sure, but not to the extent that I’d sacrifice the portability of my iPhone in its naked state, nor my ability to charge wirelessly.
A 24MP selfie (binned from 48MP) taken with the main camera (Image credit: Future)
A standard 18MP selfie taken with the selfie camera (Image credit: Future)
Still, if the Selfix case calls to you, it’s available for the iPhone 17 Pro and iPhone 17 Pro Max in three colors: Satin White, Morganite Pink, and Onyx Black. Pricing starts at $79 (around £60 / AU$110) for early backers.
Dockcase’s Kickstarter campaign for the Selfix case has only just gone live, and as with any Kickstarter project, backers may encounter unpredictable delays or design compromises, so exercise caution. Mind you, we have covered Dockcase products in the past, and my unit works as described.
If you’re in the market for a more traditional iPhone case, check out our roundup of the best iPhone 17 cases.
Kyle Orland writes via Ars Technica: Since deep-learning super-sampling (DLSS) launched on 2018’s RTX 2080 cards, gamers have been generally bullish on the technology as a way to effectively use machine-learning upscaling techniques to increase resolutions or juice frame rates in games. With yesterday’s tease of the upcoming DLSS 5, though, Nvidia has crossed a line from mere upscaling into complete lighting and texture overhauls influenced by “generative AI.” The result is a bland, uncanny gloss that has received an instant and overwhelmingly negative reaction from large swaths of gamers and the industry at large.
While previous DLSS releases rendered upscaled frames or created entirely new ones to smooth out gaps, Nvidia calls DLSS 5 — which it plans to launch in Autumn — “a real-time neural rendering model” that can “deliver a new level of photoreal computer graphics previously only achieved in Hollywood visual effects.” Nvidia CEO Jensen Huang said explicitly that the technology melds “generative AI” with “handcrafted rendering” for “a dramatic leap in visual realism while preserving the control artists need for creative expression.”
Unlike existing generative video models, which Nvidia notes are “difficult to precisely control and often lack predictability,” DLSS 5 uses a game’s internal color and motion vectors “to infuse the scene with photoreal lighting and materials that are anchored to source 3D content and consistent from frame to frame.” That underlying game data helps the system “understand complex scene semantics such as characters, hair, fabric and translucent skin, along with environmental lighting conditions like front-lit, back-lit or overcast,” the company says. Nvidia’s announcement video and detailed Digital Foundry breakdown can be found at their respective links.
Thomas Was Alone developer Mike Bithell said the technology seems designed “for when you absolutely, positively, don’t want any art direction in your gaming experience.”
Gunfire Games Senior Concept Artist Jeff Talbot added that “in every shot the art direction was taken away for the senseless addition of ‘details.’ Each DLSS 5 shot looked worse and had less character than the original. This is just a garbage AI Filter.”
DLSS 5’s “AI dogshit is actually depressing,” said New Blood Interactive founder and CEO Dave Oshry, adding that future generations “won’t even know this looks ‘bad’ or ‘wrong’ because to them it’ll be normal.”
Most enterprise AI projects fail not because companies lack the technology, but because the models they’re using don’t understand their business. The models are often trained on the internet, rather than decades of internal documents, workflows, and institutional knowledge.
That gap is where Mistral, the French AI startup, sees opportunity. On Tuesday, the company announced Mistral Forge, a platform that lets enterprises build custom models trained on their own data. The announcement came at GTC, Nvidia’s annual technology conference, which this year is focused heavily on AI and agentic models for the enterprise.
It’s a pointed move for Mistral, a company that has built its business on corporate clients while rivals OpenAI and Anthropic have soared ahead in terms of consumer adoption. CEO Arthur Mensch says Mistral’s laser focus on the enterprise is working: the company is on track to surpass $1 billion in annual recurring revenue this year.
A big part of doubling down on enterprise is giving companies more control over their data and their AI systems, Mistral says.
“What Forge does is it lets enterprises and governments customize AI models for their specific needs,” Elisa Salamanca, Mistral’s head of product, told TechCrunch.
Several companies in the enterprise AI space already claim to offer similar capabilities, but most focus on fine-tuning existing models or layering proprietary data on top through techniques like retrieval augmented generation (RAG). These approaches don’t fundamentally retrain models; instead, they adapt or query them at runtime using company data.
Mistral, by contrast, says it is enabling companies to train models from scratch. In theory, this could address some of the limitations of more common approaches — for example, better handling of non-English or highly domain-specific data, and greater control over model behavior. It could also allow companies to train agentic systems using reinforcement learning and reduce reliance on third-party model providers, avoiding risks like model changes or deprecation.
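To make the distinction concrete, here is a minimal sketch of the runtime-adaptation approach described above, not Mistral’s implementation: a RAG system scores a company’s documents against a query and prepends the best match to the prompt, leaving the underlying model untouched. The bag-of-words scoring here is a deliberate simplification; production systems use learned embeddings.

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two bag-of-words vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    # Rank internal documents by similarity to the query.
    q = Counter(query.lower().split())
    scored = sorted(docs, key=lambda d: cosine(q, Counter(d.lower().split())), reverse=True)
    return scored[:k]

docs = [
    "Expense reports must be filed within 30 days.",
    "The API gateway requires mutual TLS for internal services.",
]
context = retrieve("how do I file an expense report", docs)
# The retrieved context is prepended to the prompt; the base model never changes,
# which is exactly the limitation training-from-scratch is meant to address.
prompt = f"Context: {context[0]}\nQuestion: how do I file an expense report"
```

Because the model weights are never updated, this approach can struggle with domain-specific vocabulary the base model has never seen, which is the gap Mistral says full retraining closes.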
Forge customers can build their custom models using Mistral’s wide library of open-weight AI models, which includes small models such as the recently introduced Mistral Small 4. According to Mistral co-founder and chief technology officer Timothée Lacroix, Forge can help unlock more value from the company’s existing models.
“The trade-offs that we make when we build smaller models is that they just cannot be as good on every topic as their larger counterparts, and so the ability to customize them lets us pick what we emphasize and what we drop,” Lacroix said.
Mistral advises on which models and infrastructure to use, but both decisions stay with the customer, Lacroix said. And for teams that need more than guidance, Forge comes with Mistral’s team of forward-deployed engineers who embed directly with customers to surface the right data and adapt to their needs — a model borrowed from the likes of IBM and Palantir.
“As a product, Forge already comes with all the tooling and infrastructure so you can generate synthetic data pipelines,” Salamanca said. “But understanding how to build the right evals and making sure that you have the right amount of data is something that enterprises usually don’t have the right expertise for, and that’s what the FDEs bring to the table.”
Mistral has already made Forge available to partners including Ericsson, the European Space Agency, Italian consulting company Reply, and Singapore’s DSO and HTX. Early adopters also include ASML, the Dutch semiconductor equipment maker that led Mistral’s Series C round last September at an €11.7 billion valuation (approximately $13.8 billion at the time).
These partnerships are emblematic of what Mistral expects Forge’s main use cases to be. According to Mistral’s chief revenue officer Marjorie Janiewicz, these include governments who need to tailor models for their language and culture; financial players with high compliance requirements; manufacturers with customization needs; and tech companies that need to tune models to their code base.
Kit, Firefox’s first mascot, has just made his debut. Mozilla combined fox and red panda features with some searing flame elements to create a one-of-a-kind creature that stands out. Kit’s tail always seems to be on the move, even when he’s relaxing, and his body language, posture, and eyes all work together to nail the mood.
Illustrator Marco Palmieri created the final design, starting with some pencil drawings to get a feel for the ideas and ensure they were strong before going on to other tools. Design agency JKR then stepped in and collaborated with Mozilla to take the project to the next level by delving into what makes Firefox tick, including the logo colors and the fox itself.
According to Amy Bebbington of Mozilla, Kit is the browser’s BFF for the internet era, serving as a gentle reminder that Firefox has users’ backs. This comes at a time when the web is undergoing significant changes and people are increasingly concerned about what happens to their data and whom to trust. Firefox is responding by keeping users’ personal information private and letting them opt in or out of its artificial intelligence features.
Kit is also present in quiet moments, such as when you first log in, try something new, or do something nice while browsing, and you can even use him as a wallpaper for new browser tabs under the customization menu. You may also see him on the official website, social media, and during meetups. Overall, these subtle touches make the sign-in process feel like reconnecting with an old friend. Kit is quite understated, but he provides just the right amount of personality to remind you that the browser is only there to help (not get in the way).
Anker earbuds and headphones may not have the premium status of Apple, Bose and Sony, but the brand’s value-priced products have a loyal following. Anker aficionados have been waiting for the company to release the Pro version of its $100 Soundcore Liberty 5 earbuds.
According to NotebookCheck, via leaker AnkerInsider, whose X account appears suspended, the release is near. Two versions of Anker’s new flagship earbuds are due to arrive in the coming months: the Liberty 5 Pro and Liberty 5 Pro Max. Both will reportedly be powered by a new AI chip called the Anker Thus.
The new models don’t look anything like the current Liberty 5 buds, which have a traditional stem design. Both new Pro models will feature upgraded noise canceling (Anker’s new Adaptive ANC 4.0), Bluetooth 6.1, an IP55 dust- and water-resistant rating, Dolby Atmos spatial audio, Bluetooth multipoint and an AI-powered audio upscaling feature.
While both the Liberty 5 Pro and Liberty 5 Pro Max have a touchscreen built into their cases, the Max’s case also doubles as a voice recorder with built-in microphones. The recorder will reportedly be able to recognize your voice thanks to voiceprint recognition.
Anker has apparently developed its own AI chip for its flagship earbuds.
Screenshot by David Carnoy/CNET
The upcoming buds are expected to be officially announced in late May, with the Liberty 5 Pro priced at $170 and the Liberty 5 Pro Max retailing for $230 (the Max already has a shell listing on Best Buy that notes the voice recorder). Both have a battery life of around 6.5 hours with noise cancellation turned on.
AI voice recorders have been proliferating in recent months (you might have seen an ad for one on Facebook or Instagram). Anker is shipping its coin-sized Soundcore Work wearable AI note-taking voice recorder for $129, with a $39-off coupon code available. Presumably, some of the same technology found in the wearable recorder will make its way over to the Liberty 5 Pro Max.
The Liberty 5 Pro Max won’t be the first pair of earbuds to have a microphone in their case. Nothing’s Ear (3) flagship earbuds have a Super Mic in their case, which had me talking to my hand when making calls. It’s a clear sign that as earbud performance plateaus, brands are getting creative with extra features to help their products stand out from the pack.
Talking to the Nothing Ear (3) case while making a call on the streets of New York. More earbud cases appear set to have built-in microphones.
A Background Security Improvement in iOS 26.3.1 fixes a WebKit issue in Safari that could break one of the web’s most important safety rules.
Apple has fixed a WebKit bug for Safari and other browsers
Apple released a Background Security Improvement on March 17 for iOS 26.3.1, iPadOS 26.3.1, macOS 26.3.1, and macOS 26.3.2. The update fixes a WebKit flaw that could let a malicious website bypass a key browser security rule. The company said the issue was caused by a cross-origin problem in the Navigation API and assigned it CVE-2026-20643. Apple addressed the flaw by improving input validation to stop harmful web content from breaking the browser’s protections.
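The safety rule at stake is the browser’s same-origin policy: two URLs belong to the same origin only when their scheme, host, and port all match, and a cross-origin bypass lets one site act on another’s content. As a rough conceptual sketch (in Python, not WebKit’s actual implementation), the origin comparison looks like this:

```python
from urllib.parse import urlsplit

def same_origin(url_a: str, url_b: str) -> bool:
    # Two URLs share an origin only if scheme, host, and port all match.
    a, b = urlsplit(url_a), urlsplit(url_b)
    default = {"http": 80, "https": 443}
    port_a = a.port or default.get(a.scheme)
    port_b = b.port or default.get(b.scheme)
    return (a.scheme, a.hostname, port_a) == (b.scheme, b.hostname, port_b)

# A bypass like the one Apple patched would let a page behave as if this
# check had passed for a cross-origin URL when it should have failed.
print(same_origin("https://example.com/a", "https://example.com:443/b"))  # True
print(same_origin("https://example.com", "https://attacker.example"))     # False
```

Note that an explicit `:443` on an HTTPS URL still matches the default port, which is why the first comparison passes.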
The closures come nine years after the brand opened its first outlet here
Australian premium tea retailer T2 Tea is set to close all three of its outlets and exit Singapore, according to a report from The Business Times.
Its stores at 313@Somerset, Suntec City, and VivoCity are currently running clearance sales with discounts of up to 30%.
When the publication visited the Suntec City branch yesterday (Mar 16), most of its stock had been cleared from the shelves. The store is expected to cease operations on March 25.
Despite the closures, customers can still purchase products via T2 Tea’s online store.
T2 Tea closed all of its UK outlets in 2023
T2 Tea was founded in 1996 in Melbourne, with retail stores in Australia, New Zealand and Singapore. As of June 2025, it reportedly had 62 stores across these markets.
A T2 Tea store at Melbourne Central. (Image credit: Ian via Google Reviews)
T2 Tea was acquired by Unilever in 2013 and later sold to private equity group CVC Capital Partners in 2021 for about S$6.6 billion.
T2 Tea entered Singapore in 2017 with a flagship outlet at 313@Somerset, marking its first expansion into Asia. The store offered more than 100 tea blends, ranging from classic options like English Breakfast to signature creations such as Melbourne Breakfast. It also launched a Singapore-exclusive blend inspired by kaya toast.
In recent years, however, the company has faced challenges.
In 2023, it exited the UK market, closing all stores and its online platform there, citing “unprecedented changes” at the time. It had said it would refocus on markets closer to home, including New Zealand and Singapore.
Vulcan Post has reached out to T2 Tea for more information.
Read other articles we’ve written on Singaporean businesses here.
Featured Image Credit: Gemma Chin via Google Reviews/ T2 Tea Singapore via Instagram