Tech

OpenAI launches a Codex desktop app for macOS to run multiple AI coding agents in parallel

OpenAI on Monday released a new desktop application for its Codex artificial intelligence coding system, a tool the company says transforms software development from a collaborative exercise with a single AI assistant into something more akin to managing a team of autonomous workers.

The Codex app for macOS functions as what OpenAI executives describe as a “command center for agents,” allowing developers to delegate multiple coding tasks simultaneously, automate repetitive work, and supervise AI systems that can run for up to 30 minutes independently before returning completed code.

“This is the most loved internal product we’ve ever had,” Sam Altman, OpenAI’s chief executive, told VentureBeat in a press briefing ahead of Monday’s launch. “It’s been a totally amazing thing for us to be using recently at OpenAI.”

The release arrives at a pivotal moment for the enterprise AI market. According to a survey of 100 Global 2000 companies published last week by venture capital firm Andreessen Horowitz, 78% of enterprise CIOs now use OpenAI models in production, though competitors Anthropic and Google are gaining ground rapidly. Anthropic posted the largest share increase of any frontier lab since May 2025, growing 25% in enterprise penetration, with 44% of enterprises now using Anthropic in production.

The timing of OpenAI’s Codex app launch — with its focus on professional software engineering workflows — appears designed to defend the company’s position in what has become the most contested segment of the AI market: coding tools.

Why developers are abandoning their IDEs for AI agent management

The Codex app introduces a fundamentally different approach to AI-assisted coding. While previous tools like GitHub Copilot focused on autocompleting lines of code in real-time, the new application enables developers to “effortlessly manage multiple agents at once, run work in parallel, and collaborate with agents over long-running tasks.”

Alexander Embiricos, the product lead for Codex, explained the evolution during the press briefing by tracing the product’s lineage back to 2021, when OpenAI first introduced a model called Codex that powered GitHub Copilot.

“Back then, people were using AI to write small chunks of code in their IDEs,” Embiricos said. “GPT-5 in August last year was a big jump, and then 5.2 in December was another massive jump, where people started doing longer and longer tasks, asking models to do work end to end. So what we saw is that developers, instead of working closely with the model, pair coding, they started delegating entire features.”

The shift has been so profound that Altman said he recently completed a substantial coding project without ever opening a traditional integrated development environment.

“I was astonished by this…I did this fairly big project in a few days earlier this week and over the weekend. I did not open an IDE during the process. Not a single time,” Altman said. “I did look at some code, but I was not doing it the old-fashioned way, and I did not think that was going to be happening by now.”

How skills and automations extend AI coding beyond simple code generation

The Codex app introduces several new capabilities designed to extend AI coding beyond writing lines of code. Chief among these are “Skills,” which bundle instructions, resources, and scripts so that Codex can “reliably connect to tools, run workflows, and complete tasks according to your team’s preferences.”

The app includes a dedicated interface for creating and managing skills, and users can explicitly invoke specific skills or allow the system to automatically select them based on the task at hand. OpenAI has published a library of skills for common workflows, including tools to fetch design context from Figma, manage projects in Linear, deploy web applications to cloud hosts like Cloudflare and Vercel, generate images using GPT Image, and create professional documents in PDF, spreadsheet, and Word formats.

To demonstrate the system’s capabilities, OpenAI asked Codex to build a racing game from a single prompt. Using an image generation skill and a web game development skill, Codex worked independently, consuming more than 7 million tokens from that one prompt and taking on “the roles of designer, game developer, and QA tester to validate its work by actually playing the game.”

The company has also introduced “Automations,” which let developers schedule Codex to run recurring tasks in the background. “When an Automation finishes, the results land in a review queue so you can jump back in and continue working if needed.”

Thibault Sottiaux, who leads the Codex team at OpenAI, described how the company uses these automations internally: “We’ve been using Automations to handle the repetitive but important tasks, like daily issue triage, finding and summarizing CI failures, generating daily release briefs, checking for bugs, and more.”
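The shape of that workflow is easy to picture. The sketch below is a toy model (hypothetical, not OpenAI's implementation): a job runs on a schedule in the background, and each finished run drops its result into a review queue that the developer drains whenever convenient.

```python
# Toy model (hypothetical; not OpenAI's implementation) of the Automations
# flow described above: a background job runs on a schedule, and each
# finished run lands in a review queue for a human to pick up later.
import queue
import threading
import time

review_queue = queue.Queue()

def triage_issues() -> str:
    # Stand-in for a repetitive task like daily issue triage.
    return "daily triage: 3 new issues, 1 flaky CI job"

def automation(job, interval_s: float, runs: int) -> None:
    for _ in range(runs):
        review_queue.put(job())      # finished work lands in the queue
        time.sleep(interval_s)

worker = threading.Thread(target=automation, args=(triage_issues, 0.01, 2))
worker.start()
worker.join()

# The human drains the queue when convenient, not when the job finishes.
pending = []
while not review_queue.empty():
    pending.append(review_queue.get())
print(f"{len(pending)} results awaiting review")  # 2 results awaiting review
```

The queue is what decouples the agent's schedule from the human's: nothing blocks on a person being present when a run completes.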

The app also includes built-in support for “worktrees,” allowing multiple agents to work on the same repository without conflicts. “Each agent works on an isolated copy of your code, allowing you to explore different paths without needing to track how they impact your codebase.”
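The Git feature underlying this is scriptable directly. A minimal sketch (illustrative paths and driver code, not Codex's internals) of how `git worktree` gives each agent an isolated checkout of the same repository:

```python
# Sketch of the Git mechanism behind per-agent isolation: each "agent"
# gets its own worktree (an independent checkout sharing one object
# store), so edits in one checkout never touch the other.
# Uses only standard `git worktree` commands; paths are illustrative.
import pathlib
import subprocess
import tempfile

def run(args, cwd):
    subprocess.run(args, cwd=cwd, check=True, capture_output=True)

root = pathlib.Path(tempfile.mkdtemp())
repo = root / "repo"
repo.mkdir()
run(["git", "init", "-b", "main"], repo)
run(["git", "config", "user.email", "dev@example.com"], repo)
run(["git", "config", "user.name", "Dev"], repo)
(repo / "app.py").write_text("print('v1')\n")
run(["git", "add", "."], repo)
run(["git", "commit", "-m", "init"], repo)

# One isolated checkout per agent, each on its own branch.
for agent in ("agent-a", "agent-b"):
    run(["git", "worktree", "add", "-b", agent, str(root / agent)], repo)

# agent-a edits its copy; agent-b's copy is unaffected.
(root / "agent-a" / "app.py").write_text("print('v2')\n")
print((root / "agent-b" / "app.py").read_text().strip())  # print('v1')
```

Because worktrees share one object database, each agent's branch can later be merged back through the normal review flow.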

OpenAI battles Anthropic and Google for control of enterprise AI spending

The launch comes as enterprise spending on AI coding tools accelerates dramatically. According to the Andreessen Horowitz survey, average enterprise AI spend on large language models has risen from approximately $4.5 million to $7 million over the last two years, with enterprises expecting growth of another 65% this year to approximately $11.6 million.

Leadership in the enterprise AI market varies significantly by use case. OpenAI dominates “early, horizontal use cases like general purpose chatbots, enterprise knowledge management and customer support,” while Anthropic leads in “software development and data analysis, where CIOs consistently cite rapid capability gains since the second half of 2024.”

When asked during the press briefing how Codex differentiates from Anthropic’s Claude Code, which has been described as having its “ChatGPT moment,” Sottiaux emphasized OpenAI’s focus on model capability for long-running tasks.

“One of the things that our models are extremely good at—they really sit at the frontier of intelligence and doing reliable work for long periods of time,” Sottiaux said. “This is also what we’re optimizing this new surface to be very good at, so that you can start many parallel agents and coordinate them over long periods of time and not get lost.”

Altman added that while many tools can handle “vibe coding front ends,” OpenAI’s 5.2 model remains “the strongest model by far” for sophisticated work on complex systems.

“Taking that level of model capability and putting it in an interface where you can do what Thibault was saying, we think is going to matter quite a bit,” Altman said. “That’s probably, at least listening to users and sort of looking at the chatter on social, the single biggest differentiator.”

The surprising constraint on AI progress: how fast humans can type

The philosophical underpinning of the Codex app reflects a view that OpenAI executives have been articulating for months: that human limitations — not AI capabilities — now constitute the primary constraint on productivity.

In a December appearance on Lenny’s Podcast, Embiricos described human typing speed as “the current underappreciated limiting factor” to achieving artificial general intelligence. The logic: if AI can perform complex coding tasks but humans can’t write prompts or review outputs fast enough, progress stalls.

The Codex app attempts to address this by enabling what the team calls an “abundance mindset” — running multiple tasks in parallel rather than perfecting single requests. During the briefing, Embiricos described how power users at OpenAI work with the tool.

“Last night, I was working on the app, and I was making a few changes, and all of these changes are able to run in parallel together. And I was just sort of going between them, managing them,” Embiricos said. “Behind the scenes, all these tasks are running on something called Git worktrees, which means that the agents are running independently, and you don’t have to manage them.”

In the Sequoia Capital podcast “Training Data,” Embiricos elaborated on this mindset shift: “The mindset that works really well for Codex is, like, kind of like this abundance mindset and, like, hey, let’s try anything. Let’s try anything even multiple times and see what works.” He noted that when users run 20 or more tasks in a day or an hour, “they’ve probably understood basically how to use the tool.”

Building trust through sandboxes: how OpenAI secures autonomous coding agents

OpenAI has built security measures into the Codex architecture from the ground up. The app uses “native, open-source and configurable system-level sandboxing,” and by default, “Codex agents are limited to editing files in the folder or branch where they’re working and using cached web search, then asking for permission to run commands that require elevated permissions like network access.”

Embiricos elaborated on the security approach during the briefing, noting that OpenAI has open-sourced its sandbox technology.

“Codex has this sandbox that we’re actually incredibly proud of, and it’s open source, so you can go check it out,” Embiricos said. The sandbox “basically ensures that when the agent is working on your computer, it can only make writes in a specific folder that you want it to make writes into, and it doesn’t access the network without permission.”

The system also includes a granular permission model that allows users to configure persistent approvals for specific actions, avoiding the need to repeatedly authorize routine operations. “If the agent wants to do something and you find yourself annoyed that you’re constantly having to approve it, instead of just saying, ‘All right, you can do everything,’ you can just say, ‘Hey, remember this one thing — I’m actually okay with you doing this going forward,’” Embiricos explained.
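That "remember this one thing" pattern can be sketched in a few lines. The names below are illustrative, not Codex's actual API; the point is that consent is persisted per action, so only that action skips the prompt afterward.

```python
# Hypothetical sketch (illustrative names, not Codex's actual API) of a
# persistent-approval model: instead of choosing between approve-everything
# and approve-every-time, the user persists consent for one specific action.
class ApprovalStore:
    def __init__(self):
        self._always_allowed = set()     # actions the user pre-approved

    def is_allowed(self, action: str) -> bool:
        return action in self._always_allowed

    def remember(self, action: str) -> None:
        self._always_allowed.add(action)

def request(store: ApprovalStore, action: str, ask_user) -> bool:
    if store.is_allowed(action):
        return True                      # previously remembered: no prompt
    decision = ask_user(action)          # returns "yes", "always", or "no"
    if decision == "always":
        store.remember(action)
    return decision in ("yes", "always")

store = ApprovalStore()
# First request: the user answers "always", so approval is persisted...
assert request(store, "run: npm test", lambda a: "always")
# ...the same action later succeeds with no prompt at all, while an
# unrelated action still goes through the normal yes/no flow.
assert request(store, "run: npm test", lambda a: "no")
assert not request(store, "network access", lambda a: "no")
```

The design choice is that the default stays restrictive: nothing is allowed silently unless the user explicitly widened the policy for that exact action.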

Altman emphasized that the permission architecture signals a broader philosophy about AI safety in agentic systems.

“I think this is going to be really important. I mean, it’s been so clear to us using this, how much you want it to have control of your computer, and how much you need it,” Altman said. “And the way the team built Codex such that you can sensibly limit what’s happening and also pick the level of control you’re comfortable with is important.”

He also acknowledged the dual-use nature of the technology. “We do expect to get to our internal cybersecurity high moment of our models very soon. We’ve been preparing for this. We’ve talked about our mitigation plan,” Altman said. “A real thing for the world to contend with is going to be defending against a lot of capable cybersecurity threats using these models very quickly.”

The same capabilities that make Codex valuable for fixing bugs and refactoring code could, in the wrong hands, be used to discover vulnerabilities or write malicious software—a tension that will only intensify as AI coding agents become more capable.

From Android apps to research breakthroughs: how Codex transformed OpenAI’s own operations

Perhaps the most compelling evidence for Codex’s capabilities comes from OpenAI’s own use of the tool. Sottiaux described how the system has accelerated internal development.

“A Sora Android app is an example of that, where four engineers shipped it in only 18 days internally, and then within the month we gave access to the world,” Sottiaux said. “I had never noticed such speed at this scale before.”

Beyond product development, Sottiaux described how Codex has become integral to OpenAI’s research operations.

“Codex is really involved in all parts of the research — making new data sets, investigating its own training runs,” he said. “When I sit in meetings with researchers, they all send Codex off to do an investigation while we’re having a chat, and then it will come back with useful information, and we’re able to debug much faster.”

The tool has also begun contributing to its own development. “Codex also is starting to build itself,” Sottiaux noted. “There’s no screen within the Codex engineering team that doesn’t have Codex running multiple tasks at a time: six, eight, ten.”

When asked whether this constitutes evidence of “recursive self-improvement” — a concept that has long concerned AI safety researchers — Sottiaux was measured in his response.

“There is a human in the loop at all times,” he said. “I wouldn’t necessarily call it recursive self-improvement, but it’s a glimpse into the future.”

Altman offered a more expansive view of the research implications.

“There’s two parts of what people talk about when they talk about automating research to a degree where you can imagine that happening,” Altman said. “One is: can you write software, extremely complex infrastructure software, to run training jobs across hundreds of thousands of GPUs and babysit them? And the second is: can you come up with the new scientific ideas that make algorithms more efficient?”

He noted that OpenAI is “seeing early but promising signs on both of those.”

The end of technical debt? AI agents take on the work engineers hate most

One of the more unexpected applications of Codex has been addressing technical debt — the accumulated maintenance burden that plagues most software projects.

Altman described how AI coding agents excel at the unglamorous work that human engineers typically avoid.

“The kind of work that human engineers hate to do — go refactor this, clean up this code base, rewrite this, write this test — this is where the model doesn’t care. The model will do anything, whether it’s fun or not,” Altman said.

He reported that some infrastructure teams at OpenAI that “had sort of like, given up hope that you were ever really going to long term win the war against tech debt, are now like, we’re going to win this, because the model is going to constantly be working behind us, making sure we have great test coverage, making sure that we refactor when we’re supposed to.”

The observation speaks to a broader theme that emerged repeatedly during the briefing: AI coding agents don’t experience the motivational fluctuations that affect human programmers. As Altman noted, a team member recently observed that “the hardest mental adjustment to make about working with these AI coding teammates is that, unlike a human, the models just don’t run out of dopamine. They keep trying. They don’t run out of motivation. They don’t lose energy when something’s not working. They just keep going, and they figure out how to get it done.”

What the Codex app costs and who can use it starting today

The Codex app launches today on macOS and is available to anyone with a ChatGPT Plus, Pro, Business, Enterprise, or Edu subscription. Usage is included in ChatGPT subscriptions, with the option to purchase additional credits if needed.

In a promotional push, OpenAI is temporarily making Codex available to ChatGPT Free and Go users “to help more people try agentic workflows.” The company is also doubling rate limits for existing Codex users across all paid plans during this promotional period.

The pricing strategy reflects OpenAI’s determination to establish Codex as the default tool for AI-assisted development before competitors can gain further traction. More than a million developers have used Codex in the past month, and usage has nearly doubled since the launch of GPT-5.2-Codex in mid-December, building on more than 20x usage growth since August 2025.

Customers using Codex include large enterprises like Cisco, Ramp, Virgin Atlantic, Vanta, Duolingo, and Gap, as well as startups like Harvey, Sierra, and Wonderful. Individual developers have also embraced the tool: Peter Steinberger, creator of OpenClaw, built the project entirely with Codex and reports that since fully switching to the tool, his productivity has roughly doubled across more than 82,000 GitHub contributions.

OpenAI’s ambitious roadmap: Windows support, cloud triggers, and continuous background agents

OpenAI outlined an aggressive development roadmap for Codex. The company plans to make the app available on Windows, continue pushing “the frontier of model capabilities,” and roll out faster inference.

Within the app, OpenAI will “keep refining multi-agent workflows based on real-world feedback” and is “building out Automations with support for cloud-based triggers, so Codex can run continuously in the background—not just when your computer is open.”

The company also announced a new “plan mode” feature that allows Codex to read through complex changes in read-only mode, then discuss with the user before executing. “This means that it lets you build a lot of confidence before, again, sending it to do a lot of work by itself, independently, in parallel to you,” Embiricos explained.

Additionally, OpenAI is introducing customizable personalities for Codex. “The default personality for Codex has been quite terse. A lot of people love it, but some people want something more engaging,” Embiricos said. Users can access the new personalities using the /personality command.

Altman also hinted at future integration with ChatGPT’s broader ecosystem.

“There will be all kinds of cool things we can do over time to connect people’s ChatGPT accounts and leverage sort of all the history they’ve built up there,” Altman said.

Microsoft still dominates enterprise AI, but the window for disruption is open

The Codex app launch occurs as most enterprises have moved beyond single-vendor strategies. According to the Andreessen Horowitz survey, “81% now use three or more model families in testing or production, up from 68% less than a year ago.”

Despite the proliferation of AI coding tools, Microsoft continues to dominate enterprise adoption through its existing relationships. “Microsoft 365 Copilot leads enterprise chat though ChatGPT has closed the gap meaningfully,” and “GitHub Copilot is still the coding leader for enterprises.” The survey found that “65% of enterprises noted they preferred to go with incumbent solutions when available,” citing trust, integration, and procurement simplicity.

However, the survey also suggests significant opportunity for challengers: “Enterprises consistently say they value faster innovation, deeper AI focus, and greater flexibility paired with cutting edge capabilities that AI native startups bring.”

OpenAI appears to be positioning Codex as a bridge between these worlds. “Codex is built on a simple premise: everything is controlled by code,” the company stated. “The better an agent is at reasoning about and producing code, the more capable it becomes across all forms of technical and knowledge work.”

The company’s ambition extends beyond coding. “We’ve focused on making Codex the best coding agent, which has also laid the foundation for it to become a strong agent for a broad range of knowledge work tasks that extend beyond writing code.”

When asked whether AI coding tools could eventually move beyond early adopters to become mainstream, Altman suggested the transition may be closer than many expect.

“Can it go from vibe coding to serious software engineering? That’s what this is about,” Altman said. “I think we are over the bar on that. I think this will be the way that most serious coders do their job — and very rapidly from now.”

He then pivoted to an even bolder prediction: that code itself could become the universal interface for all computer-based work.

“Code is a universal language to get computers to do what you want. And it’s gotten so good that I think, very quickly, we can go not just from vibe coding silly apps but to doing all the non-coding knowledge work,” Altman said.

At the close of the briefing, Altman urged journalists to try the product themselves: “Please try the app. There’s no way to get this across just by talking about it. It’s a crazy amount of power.”

For developers who have spent careers learning to write code, the message was clear: the future belongs to those who learn to manage the machines that write it for them.

LG’s massive 52-inch ultra-wide gaming monitor costs $2,000

LG kicked off the year by unveiling a new lineup of gaming monitors, and today the company has priced out the biggest of the bunch. The UltraGear evo G9 (52G930B) is now available for pre-order, and the massive screen will cost just $2,000.

Yes, you can buy a perfectly excellent gaming monitor for much less, but $2,000 is a surprisingly low price tag for this 52-inch ultrawide monitor with a 1000R curve, which LG is billing as “the world’s largest 5K2K gaming monitor.” In addition to its huge size, the G9 can run at a 240Hz refresh rate and offers a 1-millisecond gray-to-gray response time. Visuals are supported by VESA DisplayHDR 600 and up to 95% DCI-P3 color gamut coverage.

LG has long done solid work on gaming monitors, and the G9 seems like a good choice for anyone who wants to be seriously immersed in their gameplay. Whether that’s for a high-fidelity experience like Microsoft Flight Simulator or for having the maximum coziness in Stardew Valley is up to you.


Column: Public trust is becoming AI’s real bottleneck

Jesse Collins.

The two towers near Aberdeen weren’t supposed to be monuments. They were supposed to be engines.

Drive west from Olympia and you’ll see the unfinished nuclear plant rising from the evergreen canopy. The project promised clean energy, jobs, and technological prestige. Instead, it became a cautionary tale of cost overruns and evaporating public confidence.

Nuclear engineering remained sound. Public confidence did not.

Industries rarely stall because they hit a technical ceiling. They slow when political and social permission erodes.

Artificial intelligence now sits in a similar moment. Public trust in major institutions is fragile, and trust in large technology companies is even lower. Concerns about job displacement, wealth concentration, and infrastructure strain are no longer fringe anxieties. They are mainstream political energy. Across multiple states, lawmakers have introduced proposals to pause or restrict data center expansion. That momentum did not emerge overnight.

Tech executives and investors are no longer background actors. Their statements travel faster than their products. As taxes, oversight, and regulation come under debate, tech’s most visible voices often frame them as hostility toward innovation. It may feel like a necessary defense, but it can reinforce the perception that the industry is unwilling to adapt to broader political realities.

In Washington state, that energy is visible in the debate around new capital gains and high-income tax proposals. Some startup leaders have framed tax proposals as existential threats to Seattle’s innovation economy and warn that Washington risks becoming “the next Cleveland.”

Incremental taxes on high incomes are unlikely to determine whether Seattle remains a technology hub. But public panic about those taxes can shape how the industry is perceived. To an average voter worried about job displacement or rising costs, highly visible opposition to millionaire tax proposals can feel disconnected from broader economic anxieties. That contrast hardens the sense that tech operates in a separate lane from everyone else. Perception like that carries consequences.

The site of Satsop Nuclear Power Plant in Elma, Wash., where only one of five units was actually built following public pushback. (Photo via Wikimedia Commons)

When distrust hardens into political momentum, policy seldom arrives as a narrow correction. It tends to be broad and reactive. 

What makes legitimacy risk particularly dangerous is that it rarely begins with statute. It begins with friction. Hiring becomes harder in communities that feel antagonistic toward the industry. Government partnerships face louder opposition. Enterprise buyers extend diligence cycles. Distribution slows in subtle ways that don’t show up in quarterly dashboards but compound over time. These costs compound even if they are difficult to measure.

Industries under suspicion move differently. Telecommunications once represented the frontier of American innovation. As power consolidated and public suspicion grew, the response included structural control and heavy supervision. Innovation did not end, but it moved under tighter constraints and at a slower pace. The center of gravity shifted from experimentation to permission.

As a founder building risk and regulatory infrastructure for financial institutions, I think about these dynamics constantly. I expect guardrails. Thoughtful regulation is not the enemy. In many cases, it creates highly functional markets.

What concerns me is overcorrection. Sweeping licensing regimes, expansive liability standards for model outputs, escalating compliance overhead, infrastructure caps written in frustration rather than precision. Those burdens fall hardest on young companies without large compliance teams.

We are careful about pricing market and technical risk. We are far less disciplined about legitimacy risk, the moment an industry loses its social license to operate.

Over the next decade, legitimacy may be the binding constraint. Durability matters more than short-term velocity, and durability is built on public trust.

Seattle became a technology hub because it was broadly trusted to build. That trust gave companies room to experiment and scale. It was a form of oxygen. You rarely notice it until it thins. By then, the towers are already standing.


Apple’s touch-screen MacBook Pro will get the iPhone’s pill-shaped Dynamic Island

Apple is expected to launch redesigned MacBook Pro laptops later this year, and these are expected to bring a massive overhaul in terms of looks and innards. The biggest change is going to be a touch-sensitive panel with OLED tech underneath, instead of the mini-LED panels on the current crop of Pro laptops. But it seems the pill-shaped cutout from the iPhone — officially known as the Dynamic Island — will also appear on these laptops, as per Bloomberg.

What’s the big shift?

“The company’s initial touch Macs, due this fall, will have the Dynamic Island at the center top of the display, said the people, who asked not to be identified because the plans aren’t public,” reports Bloomberg. Ever since Apple put a notch on the MacBook — both Air and Pro models — fans have complained about the lost screen real estate and how the notch has never gained any real functionality.

The open-source community, on the other hand, has developed plenty of apps that make the best use of the notch, turning it into a file container, clipboard manager, camera preview engine, mini-calendar, and more. But the aesthetic trade-off is still very much there. On the upcoming MacBook Pro overhaul, Apple is apparently solving two problems in one go: getting rid of the notch, and putting a Dynamic Island in its place that can serve as a hub of activities, similar to what we get on the iPhone.

At last, some good news

Currently in development under the codenames K114 and K116, the upcoming 14-inch and 16-inch MacBook Pro models will feature a UI designed around touch interactions. And if the user interaction with the Dynamic Island on iPhones is anything to go by, its counterpart on the MacBook Pro will do a lot more, from tracking ongoing activities to serving as a progress timer and more. But Apple is not going all-in with a touch-friendly design of macOS.

“The idea is to let customers use the touch input as much or as little as they’d like, and blend it with the familiar point-and-click approach,” adds the Bloomberg report. As far as the Dynamic Island itself is concerned, it will be smaller than what you currently see on iPhones. Either way, it’s an exciting turn of events, though it may still take some getting used to. “There are other questions — how dynamic would this Dynamic Island be? If it frequently changes size like the iPhone version, that might mess with your muscle memory, as buttons are no longer where you expect them to be,” says our previous reporting on the possibility.


The era of human web search is over: Nimble launches Agentic Search Platform for enterprises boasting 99% accuracy

Web search has already been disrupted by AI — just take a look at how readily Google presents users with AI Overviews (summaries of search results) at the top of results pages, how Bing early on integrated OpenAI’s GPT models, and how Perplexity continues to build on its own AI-driven web search platform and browsers.

Nimble announced the launch of its Agentic Search Platform, a system designed to transform the public web into trusted, decision-grade data for AI systems and business workflows.

The launch is supported by $47 million in Series B financing led by Norwest, with participation from Databricks Ventures and others, bringing the company’s total funding to $75 million.

The initiative addresses a fundamental bottleneck in the current AI era: while large language models (LLMs) are becoming more sophisticated, they often reason over incomplete or unverifiable external information. Nimble’s platform aims to eliminate this “guesswork gap” by providing a governed data layer that searches, navigates, and validates live internet data in real time.

In an exclusive interview with VentureBeat, Nimble co-founder and CEO Uri Knorovich reflected on the early skepticism regarding his vision of a machine-centric internet.

“When we started this company, and the first time I went to investors, I told them the web is built for humans, but machines are going to be the first citizens of the web,” Knorovich recalled. He noted that while initial reactions labeled him as “too visionary,” the current reality of AI adoption has validated his thesis.

Technology: Coordinated multi-agent architecture

The core of Nimble’s solution is a proprietary distributed architecture that orchestrates specialized agents to perform tasks traditionally handled by human researchers or brittle web scrapers. According to the company’s infrastructure documentation, the process is broken down into five distinct layers:

  • Headless browser and browsing agents: These layers manage the initial interaction with a target domain, navigating complex site structures as a human would.

  • Parsing agents: These agents interpret the page content, identifying relevant data elements across various formats.

  • Data processing agents: This layer aggregates, filters, and cleans noisy internet data to produce specific, structured answers.

  • Validation agents: The final step involves verifying the results to ensure accuracy and completeness before delivery.

Unlike standard search engines designed for consumer link-clicking, this architecture uses multimodal and reasoning capabilities from frontier models—including those from OpenAI, Anthropic, and Meta—to control real browsers. This allows Nimble to navigate dynamic layouts and cross-check results, producing auditable data outputs rather than simple text summaries.
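The five layers above amount to a chain of stages, each handing structured output to the next. The sketch below is purely illustrative: the names are hypothetical, not Nimble's actual API, and the browser stage is stubbed with static HTML so it runs offline.

```python
# Illustrative toy version of the layered pipeline the article describes
# (browse -> parse -> process -> validate). All names are hypothetical,
# not Nimble's API; the "browser" stage returns canned HTML offline.
from dataclasses import dataclass

@dataclass
class Result:
    source: str
    fields: dict
    validated: bool = False

def browse(url: str) -> str:
    # Stand-in for the headless-browser layer: returns raw page content.
    return "<html><body><span class='price'>$42.00</span></body></html>"

def parse(html: str) -> dict:
    # Parsing layer: pull the relevant element out of the markup.
    start = html.index("class='price'>") + len("class='price'>")
    return {"price_raw": html[start:html.index("</span>", start)]}

def process(fields: dict) -> dict:
    # Processing layer: normalize noisy raw values into typed data.
    return {"price_usd": float(fields["price_raw"].lstrip("$"))}

def validate(fields: dict) -> bool:
    # Validation layer: sanity-check results before delivery.
    return 0 < fields["price_usd"] < 10_000

def pipeline(url: str) -> Result:
    fields = process(parse(browse(url)))
    return Result(source=url, fields=fields, validated=validate(fields))

r = pipeline("https://example.com/product")
print(r.fields, r.validated)  # {'price_usd': 42.0} True
```

Separating the stages this way is what makes the output auditable: each layer's intermediate result can be logged and cross-checked, rather than trusting a single opaque summary.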

A new paradigm: ‘The web is built for humans, but machines are the first citizens’

Knorovich points out that the scale of AI interaction with the web is fundamentally different from human behavior. “We, as humans, search for maybe three or five options before we make decisions… but every day, Nimble performs more than 3.2 million interactions on the web,” he explained. This sheer volume of programmatic searches represents a shift that requires a new type of infrastructure.

The bottleneck for enterprises today, according to Knorovich, isn’t the intelligence of the models, but the quality of the data they can access. “Agents are the headlines, and accurate and reliable web search is the bottleneck,” he stated.

Nimble vs. consumer search: Precision over speed

Knorovich explicitly differentiates Nimble from general-purpose tools like Google or consumer AI search assistants.


While Google has built a search experience for consumers that is optimized for speed and finding a local restaurant, enterprises require high-scale, high-accuracy results to make multi-million-dollar decisions.

“General-purpose web search tools are great for general answers, such as who is the wife of Leo Messi,” Knorovich remarked during the interview. “But enterprises need deep, granular data, and they need the ability to control the search filters, to control the regulation, to control what is a trusted source.” Unlike consumer AI modes that may summarize a Reddit post or high-level news, Nimble provides “street-level” information that can be stored directly in an enterprise system of record.

Product: Bridging the no-code and developer divide

The Agentic Search Platform is delivered through two primary interfaces designed for enterprise scalability:

  1. Web search agents: A no-code AI workflow builder that enables business teams to describe the data they need and receive structured data streams without writing a line of code.

  2. Web tools SDK: A suite of APIs for builders to search, extract, and crawl the web directly from their code. This includes specialized tools like the /crawl API for mapping entire domains and the /map API for creating domain trees.
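
A client for this kind of SDK might be structured along the following lines. The /crawl and /map endpoint names come from the article; the base URL, payload fields, and authentication scheme are assumptions for illustration, and the sketch only prepares requests rather than sending them.

```python
import json

# Hypothetical client sketch for the Web Tools SDK described above.
# Endpoint names (/crawl, /map) come from the article; the base URL,
# payload fields, and auth header are invented for illustration.

class NimbleClient:
    BASE = "https://api.nimble.example/v1"  # assumed base URL

    def __init__(self, api_key):
        self.headers = {"Authorization": f"Bearer {api_key}",
                        "Content-Type": "application/json"}

    def _request(self, endpoint, payload):
        # Real code would POST this with an HTTP library; here we just
        # return the prepared request so the sketch stays self-contained.
        return {"url": f"{self.BASE}{endpoint}", "headers": self.headers,
                "body": json.dumps(payload)}

    def crawl(self, domain, depth=2):
        # /crawl: map an entire domain (per the article).
        return self._request("/crawl", {"domain": domain, "depth": depth})

    def map(self, domain):
        # /map: build a domain tree (per the article).
        return self._request("/map", {"domain": domain})

client = NimbleClient("demo-key")
req = client.crawl("example.com")
print(req["url"])  # https://api.nimble.example/v1/crawl
```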

The platform is built to deliver data with greater than 99% accuracy, meaning fewer than 1% of the content in each returned search result is inaccurate or hallucinated, and a latency of 1-2 milliseconds per request.


It integrates natively with major data environments, allowing users to stream clean data directly into Databricks, Snowflake, S3, or Microsoft Fabric.

During the interview, Knorovich emphasized that Nimble is designed to be model-agnostic, working seamlessly with state-of-the-art models from OpenAI, Anthropic, and Google’s Gemini. This flexibility allows companies to use Nimble alongside their existing tech stack, whether they are running models in the cloud or on-premise for high-security environments like healthcare or banking.

Case studies: Accuracy in action

Knorovich provided several real-world examples of how this “street-level” data impacts professional workflows. For instance, a real estate broker looking to expand into a new territory doesn’t need a high-level summary from a general-purpose AI.

“If you want to know what’s happening in commercial real estate in Atlanta… you’re not looking for search that’s optimized for the millisecond,” Knorovich explained. “You’re looking for street-level, neighborhood-level information… data that you can actually see in a table or download to Excel.”


Another use case involves major financial institutions utilizing Nimble for “know your customer” (KYC) processes. By deploying an autonomous search agent, banks can cross-reference multiple public reports, criminal records, and address verifications to build a complete profile of a client before they even enter the building. The goal, Knorovich noted, is to provide the “external truth” that exists outside an organization’s internal firewalls.

Enterprise licensing and compliance

Nimble differentiates itself from legacy scraping tools through a rigorous focus on governance and trust. The platform is “compliant-by-design,” holding certifications for SOC2 Type II, GDPR, CCPA, and HIPAA.

Pricing is structured to support both experimental startups and high-scale enterprise operations, aligned with the volume and depth of data retrieved.

“Pricing should be aligned with the value that the user is getting… therefore, we are pricing by the amount of searches that you’re running,” Knorovich said.

  • Search and answer APIs: Standard search inputs cost $1 per 1,000, while the “Answer” function—which provides reasoning based on search results—costs $4 per 1,000.

  • Managed services: For larger organizations, managed tiers start at $2,000 per month (Startup) and scale to $15,000 per month (Professional) for unlimited agents and priority support.

  • Proxy access: A network of over 1 million residential proxies is available starting at $7.50 per GB.
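
Using the published per-unit prices, a rough monthly cost estimate is simple arithmetic. This sketch ignores managed-tier fees, proxy bandwidth charges, and any volume discounts, which the article does not detail.

```python
# Back-of-the-envelope cost sketch using the per-unit prices listed above:
# search requests at $1 per 1,000 and "Answer" calls at $4 per 1,000.

PRICE_PER_1K = {"search": 1.00, "answer": 4.00}

def monthly_cost(search_requests, answer_requests):
    """Return the monthly API cost in dollars for the given request volumes."""
    return (search_requests / 1000 * PRICE_PER_1K["search"] +
            answer_requests / 1000 * PRICE_PER_1K["answer"])

# Example: 500,000 searches plus 100,000 answer calls in a month
print(monthly_cost(500_000, 100_000))  # 900.0
```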

Community and user reactions

The transition to agentic search has already been operationalized by several Fortune 500 companies and AI-native startups:

  • Julie Averill, former CIO at Lululemon, said that pricing intelligence that once took weeks to review can now be acted on in minutes by putting control in the hands of an agent.

  • Itamar Fridman, CEO and Co-founder of Qodo, noted that the platform’s scalability was “crucial in developing more robust and reliable AI systems” by feeding LLMs with high-quality data.

  • Dennis Irorere, Data Engineer at TripAdvisor, highlighted that the platform simplifies the extraction of structured data from complex sources, which he described as “transformative” for his role.

  • Grips Intelligence reported scaling to over 45,000 e-commerce sites using Nimble’s Web API to deliver real-time pricing and product data.

  • Alta utilizes the platform to power millions of AI-driven go-to-market workflows daily, reporting 3–4× deeper context and more than 99% reliability.

Series B to accelerate multi-agent web search and data governance

The $47 million Series B funding announced alongside the platform will be used to accelerate research in multi-agent web search and further develop the governed data layer.

The round saw participation from a wide ecosystem of investors, including Target Global, Square Peg, Hetz Ventures, Slow Ventures, R-Squared Ventures, J-Ventures, and InvestInData.

Andrew Ferguson, VP of Databricks Ventures, noted that Nimble complements their Data Intelligence Platform by providing a “real-time web data layer” that extends workflows beyond internal sources. This strategic investment signals a shift in the industry toward prioritizing “external truth” to ground mission-critical AI applications.


For Knorovich, the future of the web belongs to programmatic interaction. “Programmatic web search is where we are building towards,” he concluded. By moving away from legacy data vendors and brittle scrapers, Nimble aims to provide the real-time structure needed for AI to act with confidence in the real world.



Apple rolls out age verification tools worldwide to comply with growing web of child safety laws


Apple is launching new tools to comply with the growing number of age verification laws both in the U.S. and abroad. As part of the changes, Apple will block the downloads of apps rated 18+ in Brazil, Australia, and Singapore, while also rolling out other features to comply with laws in the U.S. states of Utah and Louisiana.

The company informed developers on Tuesday that it’s expanding its set of “age assurance” tools, including an updated Declared Age Range API now available for beta testing.

These tools allow developers to obtain a user’s age range without gaining access to the user’s personal information, like their date of birth. The need for a technical solution like this arose as more governments around the world have passed laws that block or restrict certain apps, such as social media, for users under 18.

In Brazil, for example, developers can use the Declared Age Range API to obtain the user’s age category, if the user or their parent or guardian chooses to share it.


In addition, Apple will block users in Australia, Brazil, and Singapore from downloading apps rated 18+, starting today, until they confirm they are adults. In this case, the App Store will perform the age confirmation automatically, but Apple notes that developers may still have separate compliance requirements they need to meet.

In Brazil specifically, developers whose games contain loot boxes, a gambling-like mechanic that lets players spend money for a random chance at in-game rewards and that lawmakers believe shouldn’t be available to kids, will see their apps’ age ratings updated to reflect an 18+ audience.

In the U.S., new users in Utah and Louisiana will soon have their age categories shared with developers’ apps through the Declared Age Range API as well. The company said it has expanded its other tools around age ratings and permissions to meet its compliance obligations.

“New signals are now available through the Declared Age Range API, including whether age-related regulatory requirements apply to the user and if the user is required to share their age range,” reads the Apple blog post. “The API will also let you know if you need to get a parent or guardian’s permission for significant app updates for a child.”


Apple last October worked to comply with similar age assurance requirements in Texas, but put some of its plans on hold back in December, as the state’s law is being fought in court. It also updated its age ratings system last year with more granular age ranges than before, and added a variety of new questions for developers submitting apps to Apple for review.



iPhone 18 Pro again rumored to feature a smaller, redesigned Dynamic Island


It’s been said time and time again that the iPhone 18 Pro will sport a noticeably smaller Dynamic Island. Now, yet another report has reiterated the claim.

A repeat rumor says the iPhone 18 Pro will have a smaller Dynamic Island.

While the iPhone 18 Pro isn’t expected to feature any major design changes, Apple’s next high-end iPhone is set to receive new under-display technology that will shrink the Dynamic Island.

Following a January 2026 post with alleged dimensions of the new-and-improved Dynamic Island, a repeat rumor now says the iPhone 18 Pro will indeed receive a modified camera cutout.

Rumor Score: 🤯 Likely



Sophia Space raises $10M to accelerate creation of orbital computing systems


An artist’s conception shows the Sophia 40 TILE satellite, with each tile powered by its own solar panel. (Sophia Space Illustration)

Sophia Space says it has closed a $10 million seed financing round to accelerate the development of orbital computing systems that could serve as the foundation for space-based data processing.

The startup’s tabletop-sized satellite modules take advantage of a proprietary system that combines solar power generation and radiative cooling. Multiple tiles can be connected into racks to provide scalable computing power in low Earth orbit. The infrastructure concept is called Thermal-Integrated LEO Edge, or TILE.

“With this seed round, we’re not just building compute modules,” Sophia Space CEO Rob DeMillo said today in a news release. “We’re building the infrastructure for the next era of space-based AI and data processing.”

The investment round was led by Alpha Funds, KDDI Green Partners Fund and Unlock Venture Partners — and builds upon $3.5 million in pre-seed investment. The newly raised cash will support the continued hiring of engineering talent, the further maturation of Sophia’s TILE platform and the formation of strategic partnerships in the orbital computing ecosystem.

Sophia Space is based in Pasadena, Calif., and was founded by Leon Alkalai, a former fellow at NASA’s Jet Propulsion Laboratory who now serves as the company’s chief technology officer. But the venture has a Pacific Northwest connection in chief growth officer Brian Monnin, who worked at Intel and Microsoft before founding Seattle startups Play Impossible and Quivr.


In-space computing is increasingly gaining attention because of the potential for launching orbital data centers for artificial intelligence applications.

Orbital data centers could address some of the major challenges surrounding terrestrial data centers, such as the need for land and electrical power. But finding a way to cool data center satellites amid the vacuum of space poses its own technical challenge. Sophia’s founders say the company’s TILE architecture, combined with the placement of satellites in orbits around Earth’s day-night terminator, can address the cooling challenge.

Sophia Space is planning to conduct in-space demonstrations of its software with an existing communications network later this year.

DeMillo told GeekWire that the company is planning to start with edge computing applications — for example, doing on-orbit processing of imaging data collected by Earth observation satellites. “Until we get to the level where we’re going to be putting up our own orbital data centers, selling these as edge computers allows income to flow into the company and gets our name out there, and allows us to refine things going forward,” he said.


He said Sophia Space is planning to deliver its first TILE modules to customers in 2028.



How U Business’ new 3-Line Bundle with free flagship phone works


[This is a sponsored article with U Business.]

U Business just launched a new mobile device bundle to solve one of the biggest headaches for growing companies: getting teams properly equipped without burning cash upfront.

The U Biz 3-Line Bundle is a limited-time offer that packages multiple business lines together under one plan, with free flagship smartphones. Yes, including the latest Apple iPhone 15 and Samsung Galaxy S25.

Designed to make connectivity easier, it’s built on U Mobile’s 5G network to support the day-to-day needs of Malaysian entrepreneurs and SMEs, whether your team is desk-bound or constantly on the move.


Here’s everything you need to know about the new U Biz 3-Line Bundle.

The smarter business upgrade

As mentioned, the bundle gives businesses three mobile lines under a single plan, available with either the U Biz 68 or U Biz 98. Each line comes with a free flagship 5G smartphone, with no upfront payment required for the devices.

Instead of buying phones separately and managing multiple subscriptions, this limited-time bundle allows companies to consolidate their mobile needs into one structured plan.

Image Credit: U Business

In today’s Instagram and TikTok-driven world, flagship 5G smartphones are now a necessary productivity tool for businesses. Yet, buying several devices at once may not be financially strategic as it can strain cash flow, especially when paired with recurring operational expenses.

With the U Biz 3-Line Bundle, entrepreneurs and SMEs are able to spread those costs into predictable monthly payments. 


What’s more, this new bundle includes flagship 5G devices. Businesses can choose between premium iOS and Android options, including the Apple iPhone 15 and Samsung Galaxy S25.

Access to these newer flagship 5G devices means the team enjoys stronger performance and longer software support, translating into more reliable day-to-day work tools.

Business performance without compromise

Businesses get to choose between the U Biz 68 (which starts from RM68/month per line) and the U Biz 98 (which starts from RM98/month per line). 

Image Credit: U Business

Both plans are built for high-usage business environments. On the data front, users get up to 1,000GB of 5G high-speed data, supporting faster downloads, smoother video calls, real-time cloud collaboration, and reliable hotspot usage across teams.

Communication-wise, the bundle includes unlimited local calls, so teams can stay connected internally and with clients without worrying about extra charges.


There’s also free global roaming in over 60 destinations, useful for businesses that have regional travel needs or cross-border operations.

By combining multiple lines and devices under one bundle, companies can better optimise their monthly expenses. 

Image Credit: ZDNET / TechRadar

All-in-one connectivity for SMEs

As Malaysia’s 5G ecosystem continues to expand, having access to a strong 5G network becomes increasingly important for businesses that operate across multiple locations.

Whether you’re a new startup or a business scaling up, the U Biz 3-Line Bundle is suited for teams of all sizes who want to stay connected without breaking the bank, regardless of where work takes you.

So if your team is seeking premium devices, then this is one bundle deal you do not want to miss. 


To sign up or learn more about the U Biz 3-Line Bundle, check out the website here.



Meta AI Security Researcher Said an OpenClaw Agent Ran Amok on Her Inbox


Meta AI security researcher Summer Yue posted a now-viral account on X describing how an OpenClaw agent she had tasked with sorting through her overstuffed email inbox went rogue, deleting messages in what she called a “speed run” while ignoring her repeated commands from her phone to stop.

“I had to RUN to my Mac mini like I was defusing a bomb,” Yue wrote, sharing screenshots of the ignored stop prompts as proof. Yue said she had previously tested the agent on a smaller “toy” inbox where it performed well enough to earn her trust, so she let it loose on the real thing. She believes the larger volume of data triggered compaction — a process where the context window grows too large and the agent begins summarizing and compressing its running instructions, potentially dropping ones the user considers critical.

The agent may have reverted to its earlier toy-inbox behavior and skipped her last prompt telling it not to act. OpenClaw is an open-source AI agent designed to run as a personal assistant on local hardware.
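
The compaction failure mode Yue describes can be illustrated with a toy model: once a context budget is exceeded, older messages get squashed into a summary, and an instruction issued mid-run can silently disappear. The budget and message format below are invented for demonstration and bear no relation to OpenClaw's actual internals.

```python
# Toy illustration of context "compaction": when the running context
# exceeds a budget, everything but the most recent messages is replaced
# with a summary, and a critical instruction can be summarized away.

CONTEXT_BUDGET = 5  # max messages kept verbatim (invented toy number)

def compact(context):
    if len(context) <= CONTEXT_BUDGET:
        return context
    # Keep the most recent messages; squash the rest into one summary line.
    kept = context[-(CONTEXT_BUDGET - 1):]
    summary = f"[summary of {len(context) - len(kept)} earlier messages]"
    return [summary] + kept

history = [f"agent step {i}" for i in range(6)]
history.insert(2, "USER: STOP")  # stop command issued mid-run
history = compact(history)

print("USER: STOP" in history)  # False: the command was compacted away
```

A production agent would summarize with a model rather than a placeholder string, but the hazard is the same: whether an instruction survives compaction depends on where it sits in the context, not on how important the user considers it.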



BluOS Partners with airable to Enhance Radio and Podcast Discovery Across NAD, Bluesound, PSB and More


Tens of millions of people listen to podcasts and stream internet radio every day. The challenge isn’t access, it’s organization. With content spread across multiple apps and platforms, discovery can feel fragmented, and for many listeners that means sticking to the familiar rather than finding something new.

BluOS, the premium multi-room audio software platform from Lenbrook Media Group, is addressing that with a new partnership with airable. The first phase integrates airable’s extensive global catalog of internet radio stations and podcasts directly into the BluOS Controller app.

The update gives BluOS users centralized access to a wide range of programming, from independent shows like the eCoustics Podcast to widely followed titles such as The Joe Rogan Experience, The Daily, and thousands of global radio stations. Rather than requiring separate apps, content is surfaced within the BluOS interface itself, with browsing tools organized by country, genre, city, and newly added stations.

Because BluOS operates as the software layer across hardware brands including Bluesound, NAD Electronics, PSB Speakers, DALI, Monitor Audio, Cyrus Audio, and Roksan, the integration rolls out across a broad installed base without requiring new hardware.


The goal is straightforward: streamline radio and podcast discovery inside the same control environment users already rely on for music streaming and multi-room playback.


What Is airable?

airable is a Germany-based media services provider that supplies internet radio and podcast aggregation to audio brands, automakers, and streaming platforms.

In simple terms, airable is the infrastructure layer. It licenses, organizes, and maintains access to a massive catalogue of global radio stations and podcasts, then integrates that catalogue into partner ecosystems through APIs and backend services.

Rather than each company negotiating station agreements or building its own discovery engine, airable handles:

  • Aggregation of tens of thousands of global radio stations
  • Podcast indexing and catalog updates
  • Metadata, categorization, and search tools
  • Geographic portals (country, city, genre browsing)
  • Ongoing catalogue maintenance and scalability

For platforms like BluOS, airable acts as the content backbone behind the scenes. The user experience lives inside the BluOS Controller app, but the station and podcast database, discovery structure, and updates are powered by airable’s media services platform.
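
Conceptually, the browsing structure airable supplies resembles an index of station metadata keyed by country, city, or genre. The station entries below are invented; airable's real catalogue and API are far larger and not publicly documented in this form.

```python
from collections import defaultdict

# Invented sample entries standing in for airable's catalogue metadata.
stations = [
    {"name": "Jazz FM", "country": "UK", "city": "London", "genre": "jazz"},
    {"name": "WNYC", "country": "US", "city": "New York", "genre": "news"},
    {"name": "FIP", "country": "FR", "city": "Paris", "genre": "eclectic"},
]

def build_index(stations, key):
    """Group station names by a metadata field (country, city, or genre)."""
    index = defaultdict(list)
    for s in stations:
        index[s[key]].append(s["name"])
    return dict(index)

by_genre = build_index(stations, "genre")
print(by_genre["jazz"])  # ['Jazz FM']
```

The same records support every browsing portal the article mentions (country, city, genre, newly added) without duplicating the catalogue, which is why a partner app like the BluOS Controller can expose several discovery views over one backend feed.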

It’s not a consumer-facing brand most listeners recognize — and that’s intentional. It operates quietly in the background, enabling centralized radio and podcast access without requiring users to jump between separate apps.


The Bottom Line

By integrating airable into BluOS, Lenbrook adds a large, structured catalogue of global radio stations and podcasts directly inside the BluOS Controller app. That means no separate radio app, no bouncing between podcast platforms, and no fragmented search experience.


Who benefits? Existing BluOS users across Bluesound, NAD Electronics, PSB Speakers, DALI, Monitor Audio, Cyrus Audio, and Roksan. They get broader access and improved discovery through a software update, not a hardware upgrade.


In practical terms, BluOS becomes a more complete listening hub with music, radio, and podcasts in one control environment without adding complexity.

For more information: bluos.io


