Google finds that AI agents learn to cooperate when trained against unpredictable opponents


Training standard AI models against a diverse pool of opponents — rather than building complex hardcoded coordination rules — is enough to produce cooperative multi-agent systems that adapt to each other on the fly. That’s the finding from Google’s Paradigms of Intelligence team, which argues the approach offers a scalable and computationally efficient blueprint for enterprise multi-agent deployments without requiring specialized scaffolding.

The technique works by training an LLM agent via decentralized reinforcement learning against a mixed pool of opponents — some actively learning, some static and rule-based. Instead of hardcoded rules, the agent uses in-context learning to read each interaction and adapt its behavior in real time.

Why multi-agent systems keep fighting each other

The AI landscape is rapidly shifting away from isolated systems toward fleets of agents that must negotiate, collaborate, and operate in shared spaces simultaneously. In multi-agent systems, the success of a task depends on the interactions and behaviors of multiple entities rather than on a single agent.

The central friction in these multi-agent systems is that their interactions frequently involve competing goals. Because these autonomous agents are designed to maximize their own specific metrics, ensuring they don’t actively undermine one another in these mixed-motive scenarios is incredibly difficult.

Multi-agent reinforcement learning (MARL) tries to address this problem by training multiple AI agents that operate, interact, and learn in the same shared environment at the same time. However, in real-world enterprise architectures, a single centralized system rarely has visibility into, or control over, every moving part. Developers must instead rely on decentralized MARL, where individual agents figure out how to interact with others while having access only to their own limited, local data and observations.

Multi-agent reinforcement learning

One of the main problems with decentralized MARL is that the agents frequently get stuck in suboptimal states as they try to maximize their own specific rewards. The researchers refer to it as “mutual defection,” based on the Prisoner’s Dilemma puzzle used in game theory. For example, think of two automated pricing algorithms locked in a destructive race to the bottom. Because each agent optimizes strictly for its own selfish reward, they arrive at a stalemate where the broader enterprise loses.

Another problem is that traditional training frameworks are designed for stationary environments, meaning the rules of the game and the behavior of the environment are relatively fixed. In a multi-agent system, from the perspective of any single agent, the environment is fundamentally unpredictable and constantly shifting because the other agents are simultaneously learning and adapting their own policies.

While enterprise developers currently rely on frameworks that use rigid state machines, these methods often hit a scalability wall in complex deployments.

“The primary limitation of hardcoded orchestration is its lack of flexibility,” Alexander Meulemans, co-author of the paper and Senior Research Scientist on Google’s Paradigms of Intelligence team, told VentureBeat. “While rigid state machines function adequately in narrow domains, they can fail to scale as the scope and complexity of agent deployments broaden. Our in-context approach complements these existing frameworks by fostering adaptive social behaviors that are deeply embedded during the post-training phase.”

What this means for developers using LangGraph, CrewAI, or AutoGen

Frameworks like LangGraph require developers to explicitly define agents, state transitions, and routing logic as a graph. LangChain describes this approach as equivalent to a state machine, where agent nodes and their connections represent states and transition matrices. Google’s approach inverts that model: rather than hardcoding how agents should coordinate, it produces cooperative behavior through training, leaving the agents to infer coordination rules from context.
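To make that contrast concrete, hardcoded orchestration boils down to an explicit transition table: every legal hand-off between agents enumerated by a developer up front. The following is a generic Python sketch of the state-machine pattern, not LangGraph's actual API; the node and event names are invented for illustration.

```python
# Hardcoded orchestration: every legal hand-off between agents is enumerated
# up front by a developer. (Generic sketch; the node and event names are
# hypothetical, and this is not LangGraph's API.)
TRANSITIONS = {
    ("router", "billing_question"): "billing_agent",
    ("router", "tech_question"):    "support_agent",
    ("billing_agent", "resolved"):  "end",
    ("support_agent", "resolved"):  "end",
}

def next_node(current: str, event: str) -> str:
    """Return the next agent node, or fail loudly on an unplanned hand-off."""
    key = (current, event)
    if key not in TRANSITIONS:
        # This is the scalability wall: any interaction the designer did not
        # anticipate simply has no edge in the graph.
        raise KeyError(f"no transition from {current!r} on {event!r}")
    return TRANSITIONS[key]

print(next_node("router", "billing_question"))  # billing_agent
```

In Google's training-first approach, a table like this is never written: the policy learned during post-training infers the hand-off from the interaction context instead.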

The researchers show that developers can achieve advanced, cooperative multi-agent systems using the same standard sequence-modeling and reinforcement learning techniques that already power today's foundation models.

The team validated the concept using a new method called Predictive Policy Improvement (PPI), though Meulemans notes the underlying principle is model-agnostic.

“Rather than training a small set of agents with fixed roles, teams should implement a ‘mixed pool’ training routine,” Meulemans said. “Developers can reproduce these dynamics using standard, out-of-the-box reinforcement learning algorithms (such as GRPO).”

By having agents train against diverse co-players (i.e., co-players varying in system prompts, fine-tuned parameters, or underlying policies), teams create a robust learning environment. This produces strategies that are resilient when interacting with new partners and ensures that multi-agent learning leads toward stable, long-term cooperative behaviors.
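The "mixed pool" routine can be sketched with a toy example. This is deliberately not the paper's PPI or GRPO setup: it swaps in simple tabular Q-learning, and the hand-written opponent strategies (tit-for-tat, always-defect, always-cooperate) are illustrative choices. The point is the shape of the loop, sampling a diverse opponent each episode and updating from local observations only.

```python
import random

# Iterated Prisoner's Dilemma payoffs for (my_move, their_move); C = cooperate, D = defect.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

# A small, diverse pool of static rule-based co-players. (The paper's pool also
# mixes in actively learning opponents; omitted here to keep the sketch short.)
OPPONENT_POOL = {
    "tit_for_tat":      lambda my_last: my_last or "C",  # mirrors my previous move
    "always_defect":    lambda my_last: "D",
    "always_cooperate": lambda my_last: "C",
}

def train(episodes=2000, steps=20, eps=0.1, alpha=0.2, gamma=0.9, seed=0):
    rng = random.Random(seed)
    # Tabular policy keyed on a purely local observation: the co-player's last move.
    q = {(s, a): 0.0 for s in ("start", "C", "D") for a in ("C", "D")}
    for _ in range(episodes):
        opponent = OPPONENT_POOL[rng.choice(sorted(OPPONENT_POOL))]  # sample from the pool
        state, my_last = "start", None
        for _ in range(steps):
            # Epsilon-greedy action selection over the learner's own Q-values.
            action = rng.choice("CD") if rng.random() < eps else max(
                "CD", key=lambda a: q[(state, a)])
            their = opponent(my_last)   # co-player reacts to my previous move
            reward = PAYOFF[(action, their)]
            nxt = their                 # next local observation
            q[(state, action)] += alpha * (
                reward + gamma * max(q[(nxt, "C")], q[(nxt, "D")]) - q[(state, action)])
            state, my_last = nxt, action
        # No centralized critic, no shared state: each update uses only what this
        # agent observed, which is the decentralized-MARL constraint.
    return q
```

Because the opponent is re-sampled every episode, the learner cannot overfit to any single co-player's policy, which is the robustness property the researchers attribute to mixed-pool training.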

How the researchers proved it works

To build agents that can successfully deduce a co-player’s strategy, the researchers created a decentralized training setup where the AI is pitted against a highly diverse, mixed pool of opponents composed of actively learning models and static, rule-based programs. This forced diversity requires the agent to dynamically figure out who it is interacting with and adapt its behavior on the fly, entirely from the context of the interaction.

Diverse multi-agent training

For enterprise developers, the phrase “in-context learning” often triggers concerns about context window bloat, API costs, and latency, especially when windows are already packed with retrieval-augmented generation (RAG) data and system prompts. However, Meulemans clarifies that this technique focuses on efficiency rather than token count. “Our method focuses on optimizing how agents utilize their available context during post-training, rather than strictly demanding larger context windows,” he said. By training agents to parse their interaction history to infer strategies, they use their allocated context more adaptively without requiring longer context windows than existing applications.

Using the Iterated Prisoner’s Dilemma (IPD) as a benchmark, the researchers achieved robust, stable cooperation without any of the traditional crutches. There are no artificial separations between meta and inner learners, and no need to hardcode assumptions about how the opponent’s algorithm functions. Because the agent adapts in real time while also updating its core foundation model weights over many interactions, it effectively occupies both roles simultaneously. In fact, the agents performed better when given no information about their adversaries and forced to adapt to their behavior through trial and error.
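The IPD benchmark itself is easy to state: each round, two players simultaneously cooperate or defect. In the one-shot game, defection dominates; over repeated rounds, mutual cooperation pays more, which is what makes it a useful probe for emergent cooperation. A quick check with the conventional payoff matrix (the paper's exact reward values may differ):

```python
# Standard IPD payoffs for (my_move, their_move): C = cooperate, D = defect.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def score(my_strategy, their_strategy, rounds):
    """Total payoff for the first player over `rounds` simultaneous moves."""
    total = 0
    for _ in range(rounds):
        mine, theirs = my_strategy(), their_strategy()
        total += PAYOFF[(mine, theirs)]
    return total

# One-shot: defecting dominates regardless of what the other player does...
assert PAYOFF[("D", "C")] > PAYOFF[("C", "C")]
assert PAYOFF[("D", "D")] > PAYOFF[("C", "D")]

# ...but over 10 rounds, mutual cooperation (30) beats mutual defection (10).
print(score(lambda: "C", lambda: "C", 10), score(lambda: "D", lambda: "D", 10))  # 30 10
```

That tension between the one-shot incentive and the repeated-game optimum is exactly the "mutual defection" trap the researchers describe.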

Multi-agent training works best when given a pool of diverse agents and allowed to explore the rules by themselves (source: arXiv)

The developer’s role shifts from rule writer to architect

The researchers say that their work bridges the gap between multi-agent reinforcement learning and the training paradigms of modern foundation models. “Since foundation models naturally exhibit in-context learning and are trained on diverse tasks and behaviors, our findings suggest a scalable and computationally efficient path for the emergence of cooperative social behaviors using standard decentralized learning techniques,” they write.

As relying on in-context behavioral adaptation becomes the standard over hardcoding strict rules, the human element of AI engineering will fundamentally shift. “The AI application developer’s role may evolve from designing and managing individual interaction rules to designing and providing high-level architectural oversight for training environments,” Meulemans said. This transition elevates developers from writing narrow rulebooks to taking on a strategic role, defining the broad parameters that ensure agents learn to be helpful, safe, and collaborative in any situation.



Ctrl-Alt-Speech: Writing Some Wrongs | Techdirt


from the ctrl-alt-speech dept

Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation’s Ben Whitelaw.

Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.

In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:

Play along with Ctrl-Alt-Speech’s 2026 Bingo Card and get in touch if you win!

Filed Under: ai, artificial intelligence, child safety, content moderation, molly russell, publicity rights, trust and safety

Companies: grammarly, meta, whatsapp


Professional Community Investment Yields Big Returns


Engineering is so much more than solving problems or writing efficient code. It is about creating solutions that affect billions of lives and contributing to a profession built on innovation, responsibility, and collaboration. Although technical skills remain critical, what will truly accelerate the growth of the next generation of engineers is community and professional involvement.

Learning from communities

University programs provide a strong foundation in theory and practice, but they cannot capture the complexity of real-world engineering. As an IEEE senior member, I believe professional communities such as IEEE can help bridge the gap by offering:

I have served as a mentor and judge for a variety of hackathons across different age groups, including the high school competitions United Hacks and NextStep Hacks, as well as graduate-level events such as HackHarvard.

These experiences demonstrate how transformative community-driven opportunities can be for young engineers. They provide exposure to teamwork, innovation, and the realities of solving problems at scale.

The power of mentorship

Engineers don’t develop skills in isolation. Mentorship, whether formal or informal, plays a pivotal role in shaping careers. Senior professionals who invest in guiding students and early-career engineers pass on more than technical knowledge. They share decision-making approaches, ethical considerations, and strategies for navigating careers, thereby expanding the engineering field.

As a keynote speaker at conferences, I have seen how sharing real-world experiences can ignite students’ curiosity and confidence. What they often value most is not a lecture on technology but candid insights into how to be resilient, grow their career, and learn about the different engineering paths.

Building ethical awareness

With the rise of artificial intelligence, biotechnology, and other high-impact innovations, engineers’ ethical responsibilities are more important than ever. Professional organizations such as IEEE and ACM emphasize codes of ethics and standards to help ensure that technology is developed responsibly.

Through my work as a peer reviewer and committee member for IEEE and ACM conferences, including those at the university level, I have seen how these organizations promote rigor and accountability.

When students engage with such communities early, they can not only expand their technical knowledge but also build an understanding of responsible innovation.

Networking as a catalyst for innovation

Engineering breakthroughs often emerge at the intersections of different fields. Professional communities create the space for such interactions. A student working on computer vision, for example, might discover health care applications by collaborating with biomedical engineers.

While reviewing papers for conferences, I have seen how interdisciplinary ideas spark promising innovations.

I bring the same perspective to my role as an IEEE Collabratec mentor, connecting with innovators across different disciplines and industries.


By collaborating on projects and expanding your reach, you can find the mentors or partners you need to inspire your next breakthrough.

Participating in forums allows students and professionals alike to broaden their horizons and explore solutions that go beyond traditional boundaries.

Giving back shapes leadership

Community involvement is not only about what you gain. It is also about what you give. Engineers who volunteer for educational programs, STEM initiatives, and professional committees can develop leadership skills that extend beyond technical expertise. They can learn to inspire, organize, and guide others.

Judging hackathons and mentoring student teams reminds me that leadership often begins with service. When experienced professionals actively invest in the growth of others, they help create a culture wherein learning and leadership are passed forward.

Preparing for a lifelong journey

Learning how to be an engineer doesn’t end when you earn your degree. It is a lifelong journey of learning, adapting, and contributing. By engaging with communities and professional networks early, students and graduates can develop habits that serve them throughout their career. They can stay current with emerging trends, build trusted professional relationships, and gain resilience through shared challenges.

Community involvement can transform engineers from problem-solvers into change agents.

Investing in the community

The future of engineering depends not only on technological advancement but also on the collective strength of its communities. By fostering mentorship, encouraging collaboration, and embedding ethical responsibility, professional and community involvement can ensure that the next generation of engineers is prepared to meet tomorrow’s challenges with competence and character.

My journey as a mentor, judge, keynote speaker, and peer reviewer has reinforced a clear truth: When we invest in the community, we invest in the future of engineering. The students and young professionals we support today will be the ones building the world we live in tomorrow.


How to watch Jensen Huang’s Nvidia GTC 2026 keynote


Nvidia kicks off its annual GTC developer conference in San Jose, California, next week with CEO Jensen Huang’s keynote scheduled for Monday at 11am PT / 2pm ET.

GTC — which stands for GPU Technology Conference — is Nvidia’s flagship annual event, where the chipmaker typically uses the spotlight to announce new products, champion partnerships, and lay out its vision for the future of computing. Huang’s keynote will focus on Nvidia’s role in the future of computing and AI. You can watch the two-hour address in person at the SAP Center or livestream the talk on the event’s website.

The broader three-day event is focused on what’s coming next for AI across industries including healthcare, robotics, and autonomous vehicles, among others.

On the software side, it’s rumored that Nvidia will release an open source platform for enterprise AI agents, dubbed NemoClaw, as originally reported by Wired. The platform would give businesses a structured way to build and deploy AI agents (software that can carry out multi-step tasks autonomously) and would position Nvidia to mirror similar offerings from companies like OpenAI.

On the hardware side, the company is also rumored to be releasing a new chip designed to accelerate the AI inference process — the process by which an AI model applies what it has learned to generate responses or make decisions, as distinct from the initial training process, which requires far more computing power. Faster, cheaper inference is widely seen as one of the last bottlenecks to scaling AI applications broadly. The chip, if confirmed, would represent Nvidia’s latest bid to dominate not just the training market, where it already commands an estimated 80% share, but the inference market as well, where competition from custom chips built by Google, Amazon and others is fast intensifying.

Kevin Cook, a senior equity strategist at Zacks Investment Research, told TechCrunch that attendees should also expect to learn what the company plans to do with its relationship with Groq, the inference company whose technology Nvidia reportedly paid $20 billion late last year to license. There’s a lot of curiosity around this tie-up, given that Jonathan Ross, Groq’s founder, Sunny Madra, Groq’s president, and other members of the Groq team agreed to join Nvidia to help advance and scale that licensed tech.

There will, of course, also be a range of partnership announcements and demonstrations showcasing Nvidia’s AI capabilities across industries.



John Solly Is the DOGE Operative Accused of Planning to Take Social Security Data to His New Job


John Solly, a software engineer and former member of the so-called Department of Government Efficiency (DOGE), is the DOGE operative reportedly accused in a whistleblower complaint of telling colleagues that he stored sensitive Social Security Administration (SSA) data on a thumb drive and wanted to share the information with his new employer, multiple sources tell WIRED.

Since October, according to a copy of his résumé, Solly has worked as the chief technology officer for the health IT division of a government contractor called Leidos, which has already received millions in SSA contracts and could receive up to $1.5 billion in contracts with SSA based on a five-year deal it signed in 2023. Solly’s personal website and LinkedIn have been taken offline as of this week.

Responding to a request for comment, Solly, through his legal counsel, denied engaging in any wrongdoing. A spokesperson for Leidos also said the company found no evidence supporting the whistleblower’s claims against Solly.

Solly was one of 12 DOGE team members at SSA, where, according to the résumé on his personal website, he supported “other DOGE engineers on initiatives including Digital SSN, Death Master File cleanup,” and “SSN verification API (EDEN 2.0).” The “death master file” is an SSA database containing millions of Social Security records of deceased people and is maintained so that their identities can’t be used for fraud. An API, or application programming interface, allows different programs to talk to each other, including pulling data and information from each other. In this case, it could allow Social Security data to be accessed by agencies and institutions outside of SSA.

The allegation was revealed in a complaint filed to SSA’s internal watchdog first reported earlier this week by The Washington Post, which did not name Solly or Leidos. According to the Post, the complaint was filed with the SSA’s Office of the Inspector General earlier this year and alleges that the former DOGE employee told coworkers he took copies of the SSA’s Numerical Identification System, or NUMIDENT, as well as the “death master file.” NUMIDENT is a master SSA database containing all information included in a Social Security number application, including full names, birth dates, race, and more personally identifiable information.

In the complaint, according to the Post, a whistleblower alleges that the former DOGE employee sought help transferring a set of data from a thumb drive to a personal computer so he could “sanitize” it before uploading it for use at a private-sector company. The former DOGE employee allegedly said that he expected to receive a presidential pardon if his actions were unlawful, the complaint reportedly stated.

Solly “did not share, access, or view any personally identifiable information (PII) maintained by SSA, including SSA’s Death Master File (DMF) and Numerical Identification System (Numident). The allegations made by a supposedly anonymous source are patently false and slanderous. Mr. Solly will take all appropriate steps to clear his good name and stellar reputation,” says Seth Waxman, who is representing Solly. “He is certain that any fair review of the facts and circumstances surrounding these spurious allegations will fully exonerate him.”

Leidos is a major contractor for SSA. Between 2010 and 2018, the company brought in millions of dollars in SSA IT contracts. In 2018, Leidos was awarded contracts potentially worth up to $639 million for IT support services and processing disability claims. In 2023, the company announced that it had been awarded an estimated $1.5 billion IT contract with the agency. As part of DOGE’s blitz into the US government in early 2025, Leidos, like many government contractors, saw some of its contracts cut.


Wayve, Uber, Nissan to launch robotaxis in Tokyo

Published

on

A roll-out is planned later this year, with safety riders initially deployed in all robotaxis.

UK start-up Wayve, Uber and Nissan are collaborating to deploy robotaxi services in Tokyo.

In a joint press release, the companies said that the deal will see Nissan’s Leaf electric vehicles equipped with Wayve’s AI technology, made available to customers via Uber’s platform. A roll-out is planned later this year.

During the initial phase, safety riders will be seated in all robotaxis, the three said. Last September, Nissan said that it was testing a driver assistance system that uses Wayve’s technology, with a planned launch in Japan in 2027.

“We have been testing our technology throughout Japan since early 2025, building extensive experience in the country’s unique road environments,” said Alex Kendall, the co-founder and CEO of Wayve.

“Partnering with Uber and Nissan to begin pilot deployment of robotaxi[s] allows us to introduce this technology in a responsible way, while continuing to learn and expand.”

This is Uber’s first robotaxi partnership in Japan. The company recently announced its plans for an international roll-out that also includes London, Madrid, Munich, Hong Kong, and a number of US cities.

Uber’s London roll-out this spring is in partnership with Wayve, a company it backs. The ride-hailing platform recently announced its intentions to become the leading provider of robotaxi services by 2029.

“Autonomous mobility is becoming an increasingly important part of the Uber platform,” said Dara Khosrowshahi, the CEO of Uber, of the three-way partnership.

“Following our planned pilot deployment in London, we look forward to expanding into Tokyo and introducing new, modern ways to travel in some of the world’s largest cities … Our goal is to give riders more ways to move with seamless access through the Uber app.”

Ivan Espinosa, the president and CEO of Nissan, said: “Our work with Wayve to integrate advanced AI technology across our consumer vehicle portfolio has laid strong foundations, and we are excited to take this partnership further with a pilot deployment of robotaxi[s] in Tokyo, bringing together Wayve’s AI technology, Uber’s network and Nissan vehicles.”

Nissan supported Wayve in a $1.2bn Series D round announced in February. Big-name backers Nvidia and SoftBank also participated in the round.


The who, what, and why of the attack that has shut down Stryker’s Windows network


What else is known about Handala Hack?

The group has existed since at least 2023. It takes its name from a character in the political cartoons of Palestinian artist Naji al-Ali. The group’s logo depicts a small Palestinian boy who is a symbol associated with Palestinian resistance.

Check Point and other security firms have said Handala Hack is affiliated with Iran’s Ministry of Intelligence and Security and maintains multiple online personas. Compared to other nation-state-sponsored hacking groups, Handala Hack has kept a comparatively lower profile. Still, it has carried out a series of destructive wiping attacks and influence operations over the years.

Around the same time the Stryker attack came to light, posts to a Telegram account and website controlled by Handala Hack took credit for the takedown. Handala posts cited last week’s killing of 165 civilians at a girls’ school in Iran by an American Tomahawk missile, as well as past hacking operations that the US and Israel have perpetrated against Iran.

What is the point of striking a corporation in retaliation for airstrikes carried out by the US and Israel?

Such actions are taken for their psychological effects, which are often disproportionately larger than the resources required to bring them about. With limited means for Iran to strike back militarily, the Stryker disruption allows an alternative means for the country and its allies to retaliate. The success is intended to demonstrate that pro-Iranian forces can still exact a price that has a material effect on large populations in the US, Israel, and countries allied with them.

As a major supplier of lifesaving medical devices relied on throughout the US and its allies, Stryker plays a strategic and symbolic role in their security, researchers at Flashpoint said Thursday. “By operating behind a persona styled as a grassroots, pro-Palestinian resistance movement, Iranian state-nexus actors are able to conduct destructive cyber operations against Western organizations while maintaining a degree of plausible deniability.”


Agents need vector search more than RAG ever did


What’s the role of vector databases in the agentic AI world? That’s a question that organizations have been coming to terms with in recent months.

The narrative had real momentum. As large language models scaled to million-token context windows, a credible argument circulated among enterprise architects: purpose-built vector search was a stopgap, not infrastructure. Agentic memory would absorb the retrieval problem. Vector databases were a RAG-era artifact.

The production evidence is running the other way.

Qdrant, the Berlin-based open source vector search company, announced a $50 million Series B on Thursday, two years after a $28 million Series A. The timing is not incidental. The company is also shipping version 1.17 of its platform. Together, they reflect a specific argument: The retrieval problem did not shrink when agents arrived. It scaled up and got harder.

“Humans make a few queries every few minutes,” Andre Zayarni, Qdrant’s CEO and co-founder, told VentureBeat. “Agents make hundreds or even thousands of queries per second, just gathering information to be able to make decisions.”

That shift changes the infrastructure requirements in ways that RAG-era deployments were never designed to handle.

Why agents need a retrieval layer that memory can’t replace

Agents operate on information they were never trained on: proprietary enterprise data, current information, millions of documents that change continuously. Context windows manage session state. They don’t provide high-recall search across that data, maintain retrieval quality as it changes, or sustain the query volumes autonomous decision-making generates.

“The majority of AI memory frameworks out there are using some kind of vector storage,” Zayarni said. 

The implication is direct: even the tools positioned as memory alternatives rely on retrieval infrastructure underneath.

Three failure modes surface when that retrieval layer isn’t purpose-built for the load. At document scale, a missed result is not a latency problem — it is a quality-of-decision problem that compounds across every retrieval pass in a single agent turn. Under write load, relevance degrades because newly ingested data sits in unoptimized segments before indexing catches up, making searches over the freshest data slower and less accurate precisely when current information matters most. Across distributed infrastructure, a single slow replica pushes latency across every parallel tool call in an agent turn — a delay a human user absorbs as inconvenience but an autonomous agent cannot.

Qdrant’s 1.17 release addresses each directly. A relevance feedback query improves recall by adjusting similarity scoring on the next retrieval pass using lightweight model-generated signals, without retraining the embedding model. A delayed fan-out feature queries a second replica when the first exceeds a configurable latency threshold. A new cluster-wide telemetry API replaces node-by-node troubleshooting with a single view across the entire cluster.
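Of those three, the delayed fan-out is the easiest to picture in code. The sketch below is a generic hedged-request pattern in plain Python with simulated replicas, not Qdrant's implementation or API, and the threshold values are arbitrary: fire the query at one replica, and only if it has not answered within the latency threshold spend a second query on another replica, returning whichever finishes first.

```python
import concurrent.futures as cf
import time

def hedged_query(primary, backup, fanout_delay):
    """Query `primary`; if it exceeds `fanout_delay` seconds, also query
    `backup` and return whichever finishes first. (Generic sketch of a
    delayed fan-out, not Qdrant's implementation.)"""
    pool = cf.ThreadPoolExecutor(max_workers=2)
    first = pool.submit(primary)
    try:
        return first.result(timeout=fanout_delay)  # fast path: no extra load
    except cf.TimeoutError:
        second = pool.submit(backup)               # slow path: hedge the request
        done, _ = cf.wait({first, second}, return_when=cf.FIRST_COMPLETED)
        return done.pop().result()
    finally:
        pool.shutdown(wait=False)                  # don't block on the straggler

# Simulated replicas: the primary is stalled, the backup is healthy.
slow = lambda: (time.sleep(0.5), "slow replica")[-1]
fast = lambda: (time.sleep(0.01), "fast replica")[-1]
print(hedged_query(slow, fast, fanout_delay=0.05))  # fast replica
```

The trade-off the pattern encodes: a healthy cluster pays no extra query cost, while a single slow replica costs only the configured delay rather than the full straggler latency.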

Why Qdrant doesn’t want to be called a vector database anymore

Nearly every major database now supports vectors as a data type — from hyperscalers to traditional relational systems. That shift has changed the competitive question. The data type is now table stakes. What remains specialized is retrieval quality at production scale.

That distinction is why Zayarni no longer wants Qdrant called a vector database.

“We’re building an information retrieval layer for the AI age,” he said. “Databases are for storing user data. If the quality of search results matters, you need a search engine.”

His advice for teams starting out: use whatever vector support is already in your stack. The teams that migrate to purpose-built retrieval do so when scale forces the issue.

“We see companies come to us every day saying they started with Postgres and thought it was good enough — and it’s not.”

Qdrant’s architecture, written in Rust, gives it memory efficiency and low-level performance control that higher-level languages don’t match at the same cost. The open source foundation compounds that advantage — community feedback and developer adoption are what allow a company at Qdrant’s scale to compete with vendors that have far larger engineering resources.

“Without it, we wouldn’t be where we are right now at all,” Zayarni said.

How two production teams found the limits of general-purpose databases

The companies building production AI systems on Qdrant are making the same argument from different directions: agents need a retrieval layer, and conversational or contextual memory is not a substitute for it.

GlassDollar helps enterprises including Siemens and Mahle evaluate startups. Search is the core product: a user describes a need in natural language and gets back a ranked shortlist from a corpus of millions of companies. The architecture runs query expansion on every request – a single prompt fans out into multiple parallel queries, each retrieving candidates from a different angle, before results are combined and re-ranked. That is an agentic retrieval pattern, not a RAG pattern, and it requires purpose-built search infrastructure to sustain it at volume.
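The fan-out-and-combine step can be sketched with reciprocal rank fusion, a standard way to merge ranked lists from parallel queries. Everything here is illustrative rather than GlassDollar's actual pipeline: the query variants and document IDs are invented, and RRF is one common merging choice, not necessarily theirs.

```python
def reciprocal_rank_fusion(result_lists, k=60):
    """Merge ranked result lists: documents that rank well across many query
    variants float to the top (standard RRF scoring, 1 / (k + rank))."""
    scores = {}
    for results in result_lists:
        for rank, doc in enumerate(results, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# One prompt fans out into several query variants (hypothetical expansions),
# each retrieving candidates from a different angle...
variants_results = [
    ["acme_vision", "bolt_ai", "cam_corp"],  # variant: "computer vision startups"
    ["bolt_ai", "cam_corp", "dex_labs"],     # variant: "image recognition vendors"
    ["bolt_ai", "acme_vision", "eye_q"],     # variant: "visual inspection AI"
]
# ...and the combined ranking rewards consensus across variants.
print(reciprocal_rank_fusion(variants_results)[0])  # bolt_ai
```

Each variant is an independent retrieval pass, which is why the pattern multiplies query volume: one user prompt becomes several searches against the index before a single shortlist comes back.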

The company migrated from Elasticsearch as it scaled toward 10 million indexed documents. After moving to Qdrant it cut infrastructure costs by roughly 40%, dropped a keyword-based compensation layer it had maintained to offset Elasticsearch’s relevance gaps, and saw a 3x increase in user engagement.

“We measure success by recall,” Kamen Kanev, GlassDollar’s head of product, told VentureBeat. “If the best companies aren’t in the results, nothing else matters. The user loses trust.” 

Agentic memory and extended context windows aren’t enough to absorb the workload that GlassDollar needs, either.

“That’s an infrastructure problem, not a conversation state management task,” Kanev said. “It’s not something you solve by extending a context window.”

Another Qdrant user is &AI, which is building infrastructure for patent litigation. Its AI agent, Andy, runs semantic search across hundreds of millions of documents spanning decades and multiple jurisdictions. Patent attorneys will not act on AI-generated legal text, which means every result the agent surfaces has to be grounded in a real document.

“Our whole architecture is designed to minimize hallucination risk by making retrieval the core primitive, not generation,” Herbie Turner, &AI’s founder and CTO, told VentureBeat. 

For &AI, the agent layer and the retrieval layer are distinct by design.

“Andy, our patent agent, is built on top of Qdrant,” Turner said. “The agent is the interface. The vector database is the ground truth.”

Three signals it’s time to move off your current setup

The practical starting point: use whatever vector capability is already in your stack. The evaluation question isn’t whether to add vector search — it’s when your current setup stops being adequate. Three signals mark that point: retrieval quality is directly tied to business outcomes; query patterns involve expansion, multi-stage re-ranking, or parallel tool calls; or data volume crosses into the tens of millions of documents.

At that point the evaluation shifts to operational questions: how much visibility your current setup gives you into what’s happening across a distributed cluster, and how much performance headroom it has when agent query volumes increase.

“There’s a lot of noise right now about what replaces the retrieval layer,” Kanev said. “But for anyone building a product where retrieval quality is the product, where missing a result has real business consequences, you need dedicated search infrastructure.”

Wonderful raises $150M Series B

The Amsterdam-headquartered startup has been out of stealth for just eight months, but it already has 350 staff, production deployments across four continents, and a valuation reportedly approaching $1.7 billion.

There is a problem that every major enterprise AI deployment eventually runs into: the gap between a convincing demo and a working system in production. Models hallucinate. Integrations break. Compliance requirements differ by country.

Local languages do not behave the way US-centric training data assumes. The organisations best placed to close this gap, the argument goes, are not those with the best models but those with the most people on the ground.

That thesis is the foundation of Wonderful, the enterprise AI agent platform founded in early 2025 by Bar Winkler and Roey Lalazar.

The company has raised $150 million in a Series B round led by Insight Partners, with participation from existing backers Index Ventures, IVP, Bessemer Venture Partners, and Vine Ventures. 

The raise brings Wonderful’s total disclosed funding to $286 million, a striking figure for a company that only emerged from stealth in mid-2025 with a $34 million seed round, then raised a $100 million Series A in November of the same year.

Wonderful is headquartered in Amsterdam, with Israeli founders and a model built on local deployment teams embedded inside client organisations. The company says it now operates in more than 30 countries across Europe, the Middle East, Asia-Pacific, and Latin America, serving enterprises in telecoms, financial services, manufacturing, and healthcare.

It will use the new capital to grow headcount from 350 to approximately 900 by year-end.

The company’s core product is an enterprise AI agent platform that is model-agnostic by design, continuously benchmarking and selecting AI models for each use case.

The agents handle customer-facing workflows across voice, chat, and email, as well as internal workflows such as employee onboarding, compliance, and IT support.

What distinguishes Wonderful’s model is the deployment layer: rather than selling software and leaving clients to integrate it themselves, the company embeds local teams inside enterprise environments to manage rollout, integration, and post-deployment optimisation.

“In 2026, enterprises will be deciding who to partner with to operationalize AI across their organizations, and those decisions will hinge on who can deliver deep integrations across complex infrastructures and tailor solutions to each organization’s unique environment,” said Bar Winkler, CEO and Co-founder of Wonderful.

“We built our platform and operating model around that reality, and the demand we’re seeing globally reflects it.”

Wonderful attributes its customer expansion to building a shared architecture across an enterprise’s core systems from the outset. Once that foundation is in place, the company says, activating new use cases becomes progressively faster.

Wonderful also claims measurable operational results from production deployments: reductions in handling times of up to 60%, containment rates above 80%, and multi-million-dollar annual efficiency gains for individual clients. These figures are not independently audited.

“Over 70% of enterprises that begin with a single use case expand into additional workflows within the first three months,” Winkler added. “That expansion is possible because we built a shared foundation across core systems from day one.”

“Wonderful is establishing trust and deep partnerships inside complex enterprises at a critical moment for the market,” said Jeff Horing, managing director at Insight Partners. “We believe that the team’s combination of platform strength and execution positions Wonderful as a strong enterprise partner in today’s ecosystem.”

Lalazar, the company’s CTO, framed the ambition in broader terms. “We’re deploying agents across every business function, while pioneering the next generation of application layers that will transform how organisations operate,” he said.

The enterprise AI agent market is crowded, and growing more so. Salesforce’s Agentforce, ServiceNow’s AI platform, and a wave of better-funded standalone startups are all pursuing the same budget line.

Wonderful’s differentiation rests on a bet that local deployment teams and multilingual agents will be decisive in markets where US-centric platforms struggle: that the structural complexity of global enterprise is, in effect, its moat.

Eight months out of stealth, the bet appears to be attracting capital. Whether it holds at scale is the question this round is funding.

Seattle’s downtown paradox: Commercial engine sputters amid improved safety and visitor growth

Mayor Katie Wilson speaks at the Downtown Seattle Association’s annual event on Wednesday. (GeekWire Photo / Lisa Stiffler)

Seattle is witnessing a curious role reversal in its economic narrative. While the city finally gains ground on perennial challenges like crime and transportation, its traditional growth engine — the tech sector and downtown employment — is beginning to sputter.

The city has for years been a tech, retail and arts hub, but its total downtown jobs peaked in 2019 with more than 340,000 workers. Since the pandemic, that number has been creeping downwards, hitting approximately 317,000 jobs — which is roughly on par with 2018 numbers, according to a new report from the Downtown Seattle Association (DSA).

“We’re going in the wrong direction,” said Jon Scholes, DSA president and CEO, at the organization’s annual State of Downtown event on Wednesday.

“Over this period where we’ve seen a decrease in jobs, we’ve seen a record increase in taxes that employers in the city of Seattle are paying — that employers aren’t paying in Bellevue and other cities in our region,” he added. “We have become an outlier when it comes to the cost of doing business in our city.”

Those costs include the city’s JumpStart tax, which targets the payrolls of large employers with high‑earning employees, as well as last year’s restructuring of Seattle’s tax on gross revenue that shifted the burden from smaller businesses to large ones. Also on the horizon is the new state income tax on wealthier individuals that lawmakers just passed.

Taxes are taking a lot of the blame, but other major forces are at work as well. Across the country, companies are cutting headcount as AI tools replace some roles, economic uncertainty lingers, and leaders move to trim what they see as pandemic-era corporate “bloat.”

That said, key elected leaders on Wednesday acknowledged concerns about rising taxes and government budgets.

“I very much appreciate that it is not ideal for our tax environment for businesses to be wildly out of step with neighboring jurisdictions,” Mayor Katie Wilson told the packed hall at the Seattle Convention Center.

Wilson and King County Executive Girmay Zahilay both pledged to scrutinize their governments’ budgets. Wilson said she expects to make “significant” cuts and Zahilay plans to build the county’s spending plans “from the ground up” rather than following the model of rolling past budgets forward.

Jon Scholes, Downtown Seattle Association president and CEO, speaking at the Seattle Convention Center. (GeekWire Photo / Lisa Stiffler)

The fiscal caution comes even as the city’s social metrics trend upward. The 2025 DSA report highlighted several bright spots:

  • Crime: Incidents and violent crimes have decreased downtown since a 2021 peak.
  • Residential Growth: The number of downtown residents has reached nearly 110,000 — an 80% increase over the past 25 years.
  • Visitors: More than 15.3 million unique visitors came to downtown — an increase from 2019, but flat compared to the year before. People are also visiting more frequently.
  • Transit: Light rail boardings at downtown stations jumped 23% over 2024.

And yet that residential and visitor energy hasn’t yet translated into a full-scale recovery of the Monday-through-Friday workforce. Despite return-to-office mandates, daily worker foot traffic averages just 145,000 — still well below the 226,000 workers on average who filled downtown streets each day in 2019, according to DSA.

Amazon has helped with the rebound, but multiple rounds of layoffs have dampened the effect.

Once Seattle’s largest employer, Amazon recently lost that crown to the University of Washington, the Seattle Times reported. The company had a peak of about 60,000 workers in the city in 2020, but that headcount has slumped to fewer than 50,000. It could dip further as Amazon vacates a seven-story, 251,000-square-foot leased space downtown this spring.

A display at the Downtown Seattle Association’s annual event. (GeekWire Photo / Lisa Stiffler)

Beyond the tech giants, the broader commercial landscape is struggling with a growing volume of empty office spaces. Downtown vacancies reached a new high of 34.7% in the last quarter of 2025, according to CBRE. Before the pandemic, that number was hovering around 8%.

Despite these headwinds, the contractions aren’t universal. Some firms are doubling down on the city’s core: Impinj recently renewed and increased its downtown office space while DAT Solutions and Docker both took sublease space along the city’s waterfront.

In an interview after the event, Scholes emphasized that the health of the entire economic ecosystem depends on these major anchors.

“We need big employers in the city,” he said. “I was with some small businesses earlier this week, and they said, ‘You know, our best customers are big employers. They are our lifeblood … If you’re a restaurant, if you’re a barbershop downtown, you’re relying on people in those upper floors.’”

This web app lets you ‘channel surf’ YouTube like a ’90s kid watching cable

Many of us remember the halcyon days of being a kid in the ’90s, spending a weekend afternoon with remote control in hand and a seemingly endless well of stuff to watch on TV. Now you can relive the experience thanks to the appropriately named Channel Surfer web app. It’s essentially a YouTube discovery tool that surfaces interesting videos, but presents them in a retro homage to the cable channel screen.

Channel Surfer is the work of developer Steven Irby. He has 40 channels on the app right now, mostly grouping content by theme. There are channels for typical cable fare like news and sports, but also music, movies and a number of more tailored tech subjects like AI, gaming, gadgets and space.

“I built Channel Surfer because I’m tired of the algorithms and indecision fatigue,” he told TechCrunch, which is where we discovered the app. “I miss channel surfing and not having to decide what to watch. I want to just sit and tune into what’s on and not think about what to watch next.”

It seems Irby isn’t alone: he posted on X that Channel Surfer passed 10,000 views on its first day.
