“Free” Surveillance Tech Still Comes At A High And Dangerous Cost

from the no-such-thing-as-a-free-surveillance-tech dept

Surveillance technology vendors, federal agencies, and wealthy private donors have long helped provide local law enforcement “free” access to surveillance equipment that bypasses local oversight. The result is predictable: serious accountability gaps and data pipelines to other entities, including Immigration and Customs Enforcement (ICE), that expose millions of people to harm.

The cost of “free” surveillance tools — like automated license plate readers (ALPRs), networked cameras, face recognition, drones, and data aggregation and analysis platforms — is measured not in tax dollars, but in the erosion of civil liberties. 

The collection and sharing of our data quietly generates detailed records of people’s movements and associations that can be exposed, hacked, or repurposed without their knowledge or consent. Those records weaken sanctuary and First Amendment protections while facilitating the targeting of vulnerable people.   

Cities can and should use their power to reject federal grants, vendor trials, donations from wealthy individuals, or participation in partnerships that facilitate surveillance and experimentation with spy tech. 

If these projects are greenlit, oversight is imperative. Mechanisms like public hearings, competitive bidding, public records transparency, and city council supervision help ensure these acquisitions include basic safeguards — like use policies, audits, and consequences for misuse — to protect the public from abuse and from creeping contracts that grow into whole suites of products.

Clear policies and oversight mechanisms must be in place before using any surveillance tools, free or not, and communities and their elected officials must be at the center of every decision about whether to bring these tools in at all.

Here are some of the most common ways “free” surveillance tech makes its way into communities.

Trials and Pilots

Police departments are regularly offered free access to surveillance tools and software through trials and pilot programs that often aren’t accompanied by appropriate use policies. In many jurisdictions, trials do not trigger the same requirements to go before decision-makers outside the police department. This means the public may have no idea that a pilot program for surveillance technology is happening in their city. 

In Denver, Colorado, the police department is running trials of possible unmanned aerial vehicles (UAVs) for a drone-as-first-responder (DFR) program from two competing drone vendors: Flock Safety Aerodome drones (through August 2026) and drones from the company Skydio, partnering with Axon, the multi-billion dollar police technology company behind tools like Tasers and AI-generated police reports. Drones create unique issues given their vantage for capturing private property and unsuspecting civilians, as well as their capacity to make other technologies, like ALPRs, airborne. 

Functional, Even Without Funding 

We’ve seen cities decide not to fund a tool, or run out of funding for it, only to have a company continue providing it in the hope that money will turn up. This happened in Fall River, Massachusetts, where the police department decided not to fund ShotSpotter’s $90,000 annual cost and its frequent false alarms, but continued using the system when the company provided free access. 

In May 2025, Denver’s city council unanimously rejected a $666,000 contract extension for Flock Safety ALPR cameras after weeks of public outcry over mass surveillance data sharing with federal immigration enforcement. But Mayor Mike Johnston’s office allowed the cameras to keep running through a “task force” review, effectively extending the program even after the contract was voted down. In response, the Denver Taskforce to Reimagine Policing and Public Safety and Transforming Our Communities Alliance launched a grassroots campaign demanding the city “turn Flock cameras off now,” a reminder that when surveillance starts as a pilot or time‑limited contract, communities often have to fight not just to block renewals but to shut the systems off.

Importantly, police technology companies are developing more features and subscription-based models, so what’s “free” today frequently results in taxpayers footing the bill later.

Gifts from Police Foundations and Wealthy Donors

Police foundations and the wealthy have pushed surveillance-driven agendas in their local communities by donating equipment and making large monetary gifts, another means of acquiring these tools without public oversight or buy-in.

In Atlanta, the Atlanta Police Foundation (APF) attempted to use its position as a private entity to circumvent transparency. Following a court challenge from the Atlanta Community Press Collective and Lucy Parsons Labs, a Georgia court determined that the APF must comply with public records laws related to some of its actions and purchases on behalf of law enforcement.

In San Francisco, billionaire Chris Larsen has financially supported a supercharging of the city’s surveillance infrastructure, donating $9.4 million to fund the San Francisco Police Department’s (SFPD) Real-Time Investigation Center, where a menu of surveillance technologies and data come together to surveil the city’s residents. This move comes after the billionaire backed a ballot measure, which passed in March 2024, eroding the city’s surveillance technology law and allowing the SFPD free rein to use new surveillance technologies for a full year without oversight.

Free Tech for Federal Data Pipelines

Federal grants and Department of Homeland Security funding are another way surveillance technology appears free to municipalities, only to lock them into long‑term data‑sharing and recurring costs.

Through the Homeland Security Grant Program, which includes the State Homeland Security Program (SHSP) and the Urban Area Security Initiative (UASI), and Department of Justice programs like Byrne JAG, the federal government reimburses states and cities for “homeland security” equipment and software, including law‑enforcement surveillance tools, analytics platforms, and real‑time crime centers. Grant guidance and vendor marketing materials make clear that these funds can be used for automated license plate readers, integrated video surveillance and analytics systems, and centralized command‑center software—in other words, purchases framed as counterterrorism investments but deployed in everyday policing.

Vendors have learned to design products around this federal money, pitching ALPR networks, camera systems, and analytic platforms as “grant-ready” solutions that can be acquired with little or no upfront local cost. Motorola Solutions, for example, advertises how SHSP and UASI dollars can be used for “law enforcement surveillance equipment” and “video surveillance, warning, and access control” systems. Flock Safety, partnering with Lexipol, a company that writes use policies for law enforcement, offers a “License Plate Readers Grant Assistance Program” that helps police departments identify federal and state grants and tailor their applications to fund ALPR projects. 

Grant assistance programs let police chiefs fast‑track new surveillance: the paperwork is outsourced, the grant eats the upfront cost, and even when there is a formal paper trail, the practical checks from residents, councils, and procurement rules often get watered down or bypassed.

On paper, these systems arrive “for free” through a federal grant; in practice, they lock cities into recurring software, subscription, and data‑hosting fees that quietly turn into permanent budget lines—and a lasting surveillance infrastructure—as soon as police and prosecutors start to rely on them. In Santa Cruz, California, the police department explicitly sought to use a DHS-funded SHSP grant to pay for a new citywide network of Flock ALPR cameras at the city’s entrances and exits, with local funds covering additional cameras. In Sumner, Washington, a $50,000 grant was used to cover the entire first year of a Flock system — including installation and maintenance — after which the city is on the hook for roughly $39,000 every year in ongoing fees. The free grant money opens the door, but local governments are left with years of financial, political, and permanent surveillance entanglements they never fully vetted.

The most dangerous cost of this “free” funding is not just budgetary; it is the way it ties local systems into federal data pipelines. Since 9/11, DHS has used these grant streams to build a nationwide network of roughly 80 state and regional fusion centers that integrate and share data from federal, state, local, tribal, and private partners. Research shows that state fusion centers rely heavily on the DHS Homeland Security Grant Program (especially SHSP and UASI) to “mature their capabilities,” with some centers reporting that 100 percent of their annual expenditures are covered by these grants.

Civil rights investigations have documented how this funding architecture creates a backdoor channel for ICE and other federal agencies to access local surveillance data for their own purposes. A recent report by the Surveillance Technology Oversight Project (S.T.O.P.) describes ICE agents using a Philadelphia‑area fusion center to query the city’s ALPR network to track undocumented drivers in a self‑described sanctuary city.

Ultimately, federal grants follow the same script as trials and foundation gifts: what looks “free” ends up costing communities their data, their sanctuary protections, and their power over how local surveillance is used.

Protecting Yourself Against “Free” Technology

The most important protection against “free” surveillance technology is to reject it outright. Cities do not have to accept federal grants, vendor trials, or philanthropic donations. Saying no to “free” tech is not just a policy choice; it is a political power that local governments possess and can exercise. Communities and their elected officials can and should refuse surveillance systems that arrive through federal grants, vendor pilots, or private donations, regardless of how attractive the initial price tag appears. 

For those cities that have already accepted surveillance technology, the imperative is equally clear: shut it down. When a community has rejected use of a spying tool, the capabilities, equipment, and data collected from that tool should be shut off immediately. Full stop.

And for any surveillance technology that remains in operation, even temporarily, there must be clear rules: when and how equipment is used, how that data is retained and shared, who owns data and how companies can access and use it, transparency requirements, and consequences for any misuse and abuse. 

“Free” surveillance technology is never free. Someone profits or gains power from it. Police technology vendors, federal agencies, and wealthy donors do not offer these systems out of generosity; they offer them because surveillance serves their interests, not ours. That is the real cost of “free” surveillance.

Originally posted to EFF’s Deeplinks blog.

Filed Under: alpr, dhs, drones, facial recognition, grants, law enforcement, surveillance

Companies: flock, flock safety, lexipol, motorola

Professional Community Investment Yields Big Returns

Engineering is so much more than solving problems or writing efficient code. It is about creating solutions that affect billions of lives and contributing to a profession built on innovation, responsibility, and collaboration. Although technical skills remain critical, what will truly accelerate the growth of the next generation of engineers is community and professional involvement.

Learning from communities

University programs provide a strong foundation in theory and practice, but they cannot capture the complexity of real-world engineering. As an IEEE senior member, I believe professional communities such as IEEE can help bridge that gap.

I have served as a mentor and judge for a variety of hackathons across different age groups, including high school competitions United Hacks and NextStep Hacks, as well as graduate-level events such as HackHarvard.

These experiences demonstrate how transformative community-driven opportunities can be for young engineers. They provide exposure to teamwork, innovation, and the realities of solving problems at scale.

The power of mentorship

Engineers don’t develop skills in isolation. Mentorship, whether formal or informal, plays a pivotal role in shaping careers. Senior professionals who invest in guiding students and early-career engineers pass on more than technical knowledge. They share decision-making approaches, ethical considerations, and strategies for navigating careers, thereby expanding the engineering field.

As a keynote speaker at conferences, I have seen how sharing real-world experiences can ignite students’ curiosity and confidence. What they often value most is not a lecture on technology but candid insights into how to be resilient, grow their career, and learn about the different engineering paths.

Building ethical awareness

With the rise of artificial intelligence, biotechnology, and other high-impact innovations, engineers’ ethical responsibilities are more important than ever. Professional organizations such as IEEE and ACM emphasize codes of ethics and standards to help ensure that technology is developed responsibly.

Through my work as a peer reviewer and committee member for IEEE and ACM conferences, including those at the university level, I have seen how these organizations promote rigor and accountability.

When students engage with such communities early, they can not only expand their technical knowledge but also build an understanding of responsible innovation.

Networking as a catalyst for innovation

Engineering breakthroughs often emerge at the intersections of different fields. Professional communities create the space for such interactions. A student working on computer vision, for example, might discover health care applications by collaborating with biomedical engineers.

While reviewing papers for conferences, I have seen how interdisciplinary ideas spark promising innovations.

I bring the same perspective to my role as an IEEE Collabratec mentor, connecting with innovators across different disciplines and industries.

“When we invest in the community, we invest in the future of engineering.”

By collaborating on projects and expanding your reach, you can find the mentors or partners you need to inspire your next breakthrough.

Participating in forums allows students and professionals alike to broaden their horizons and explore solutions that go beyond traditional boundaries.

Giving back shapes leadership

Community involvement is not only about what you gain. It is also about what you give. Engineers who volunteer for educational programs, STEM initiatives, and professional committees can develop leadership skills that extend beyond technical expertise. They can learn to inspire, organize, and guide others.

Judging hackathons and mentoring student teams reminds me that leadership often begins with service. When experienced professionals actively invest in the growth of others, they help create a culture wherein learning and leadership are passed forward.

Preparing for a lifelong journey

Learning how to be an engineer doesn’t end when you earn your degree. It is a lifelong journey of learning, adapting, and contributing. By engaging with communities and professional networks early, students and graduates can develop habits that serve them throughout their career. They can stay current with emerging trends, build trusted professional relationships, and gain resilience through shared challenges.

Community involvement can transform engineers from problem-solvers into change agents.

Investing in the community

The future of engineering depends not only on technological advancement but also on the collective strength of its communities. By fostering mentorship, encouraging collaboration, and embedding ethical responsibility, professional and community involvement can ensure that the next generation of engineers is prepared to meet tomorrow’s challenges with competence and character.

My journey as a mentor, judge, keynote speaker, and peer reviewer has reinforced a clear truth: When we invest in the community, we invest in the future of engineering. The students and young professionals we support today will be the ones building the world we live in tomorrow.

How to watch Jensen Huang’s Nvidia GTC 2026 keynote

Nvidia kicks off its annual GTC developer conference in San Jose, California, next week with CEO Jensen Huang’s keynote scheduled for Monday at 11am PT / 2pm ET.

GTC — which stands for GPU Technology Conference — is Nvidia’s flagship annual event, where the chipmaker typically uses the spotlight to announce new products, champion partnerships, and lay out its vision for the future of computing. Huang’s keynote will focus on Nvidia’s role in the future of computing and AI. You can watch the two-hour address in person at the SAP Center or livestream the talk on the event’s website.

The broader three-day event is focused on what’s coming next for AI across industries including healthcare, robotics, and autonomous vehicles, among others.

On the software side, it’s rumored that Nvidia will release an open source platform for enterprise AI agents, dubbed NemoClaw, as originally reported by Wired. The platform would give businesses a structured way to build and deploy AI agents (software that can carry out multi-step tasks autonomously) and would position Nvidia to mirror similar offerings from companies like OpenAI.

On the hardware side, the company is also rumored to be releasing a new chip designed to accelerate the AI inference process — the process by which an AI model applies what it has learned to generate responses or make decisions, as distinct from the initial training process, which requires far more computing power. Faster, cheaper inference is widely seen as one of the last bottlenecks to scaling AI applications broadly. The chip, if confirmed, would represent Nvidia’s latest bid to dominate not just the training market, where it already commands an estimated 80% share, but the inference market as well, where competition from custom chips built by Google, Amazon and others is fast intensifying.

Kevin Cook, a senior equity strategist at Zacks Investment Research, told TechCrunch that attendees should also expect to learn what the company plans to do with its relationship with Groq, the inference company Nvidia reportedly paid $20 billion late last year to license its technology. There’s a lot of curiosity around this tie-up, given that Groq founder Jonathan Ross, Groq president Sunny Madra, and other members of the Groq team agreed to join Nvidia to help advance and scale that licensed tech.

There will, of course, also be a range of partnership announcements and demonstrations showcasing Nvidia’s AI capabilities across industries.

John Solly Is the DOGE Operative Accused of Planning to Take Social Security Data to His New Job

John Solly, a software engineer and former member of the so-called Department of Government Efficiency (DOGE), is the DOGE operative reportedly accused in a whistleblower complaint of telling colleagues that he stored sensitive Social Security Administration (SSA) data on a thumb drive and wanted to share the information with his new employer, multiple sources tell WIRED.

Since October, according to a copy of his résumé, Solly has worked as the chief technology officer for the health IT division of a government contractor called Leidos, which has already received millions in SSA contracts and could receive up to $1.5 billion in contracts with SSA based on a five-year deal it signed in 2023. Solly’s personal website and LinkedIn have been taken offline as of this week.

Responding to a request for comment, Solly, through his legal counsel, denied engaging in any wrongdoing. A spokesperson for Leidos also said the company found no evidence supporting the whistleblower’s claims against Solly.

Solly was one of 12 DOGE team members at SSA, where, according to the résumé on his personal website, he supported “other DOGE engineers on initiatives including Digital SSN, Death Master File cleanup,” and “SSN verification API (EDEN 2.0).” The “death master file” is an SSA database containing millions of Social Security records of deceased people and is maintained so that their identities can’t be used for fraud. An API, or application programming interface, allows different programs to talk to each other, including pulling data and information from each other. In this case, it could allow Social Security data to be accessed by agencies and institutions outside of SSA.

The allegation was revealed in a complaint filed to SSA’s internal watchdog first reported earlier this week by The Washington Post, which did not name Solly or Leidos. According to the Post, the complaint was filed with the SSA’s Office of the Inspector General earlier this year and alleges that the former DOGE employee told coworkers he took copies of the SSA’s Numerical Identification System, or NUMIDENT, as well as the “death master file.” NUMIDENT is a master SSA database containing all information included in a Social Security number application, including full names, birth dates, race, and more personally identifiable information.

In the complaint, according to the Post, a whistleblower alleges that the former DOGE employee sought help transferring a set of data from a thumb drive to a personal computer so he could “sanitize” it before uploading it for use at a private-sector company. The former DOGE employee allegedly said that he expected to receive a presidential pardon if his actions were unlawful, the complaint reportedly stated.

Solly “did not share, access, or view any personally identifiable information (PII) maintained by SSA, including SSA’s Death Master File (DMF) and Numerical Identification System (Numident). The allegations made by a supposedly anonymous source are patently false and slanderous. Mr. Solly will take all appropriate steps to clear his good name and stellar reputation,” says Seth Waxman, who is representing Solly. “He is certain that any fair review of the facts and circumstances surrounding these spurious allegations will fully exonerate him.”

Leidos is a major contractor for SSA. Between 2010 and 2018, the company brought in millions of dollars in SSA IT contracts. In 2018, Leidos was awarded contracts potentially worth up to $639 million for IT support services and processing disability claims. In 2023, the company announced that it had been awarded an estimated $1.5 billion IT contract with the agency. As part of DOGE’s blitz into the US government in early 2025, Leidos, like many government contractors, saw some of its contracts cut.

Wayve, Uber, Nissan to launch robotaxis in Tokyo

A roll-out is planned later this year, with safety riders initially deployed in all robotaxis.

UK start-up Wayve, Uber and Nissan are collaborating to deploy robotaxi services in Tokyo.

In a joint press release, the companies said that the deal will see Nissan’s Leaf electric vehicles equipped with Wayve’s AI technology, made available to customers via Uber’s platform. A roll-out is planned later this year.

During the initial phase, safety riders will be seated in all robotaxis, the three said. Last September, Nissan said that it was testing a driver assistance system that uses Wayve’s technology, with a planned launch in Japan in 2027.

“We have been testing our technology throughout Japan since early 2025, building extensive experience in the country’s unique road environments,” said Alex Kendall, the co-founder and CEO of Wayve.

“Partnering with Uber and Nissan to begin pilot deployment of robotaxi[s] allows us to introduce this technology in a responsible way, while continuing to learn and expand.”

This is Uber’s first robotaxi partnership in Japan. The company recently announced its plans for an international roll-out that also includes London, Madrid, Munich, Hong Kong, and a number of US cities.

Uber’s London roll-out this spring is in partnership with Wayve, a company it backs. The ride-hailing platform recently announced its intentions to become the leading provider of robotaxi services by 2029.

“Autonomous mobility is becoming an increasingly important part of the Uber platform,” said Dara Khosrowshahi, the CEO of Uber, of the three-way partnership.

“Following our planned pilot deployment in London, we look forward to expanding into Tokyo and introducing new, modern ways to travel in some of the world’s largest cities … Our goal is to give riders more ways to move with seamless access through the Uber app.”

Ivan Espinosa, the president and CEO of Nissan, said: “Our work with Wayve to integrate advanced AI technology across our consumer vehicle portfolio has laid strong foundations, and we are excited to take this partnership further with a pilot deployment of robotaxi[s] in Tokyo, bringing together Wayve’s AI technology, Uber’s network and Nissan vehicles.

Nissan supported Wayve in a $1.2bn Series D round announced in February. Big-name backers Nvidia and SoftBank also participated in the round.

The who, what, and why of the attack that has shut down Stryker’s Windows network

What else is known about Handala Hack?

The group has existed since at least 2023. It takes its name from a character in the political cartoons of Palestinian artist Naji al-Ali. The group’s logo depicts a small Palestinian boy who is a symbol associated with Palestinian resistance.

Check Point and other security firms have said Handala Hack is affiliated with Iran’s Ministry of Intelligence and Security and maintains multiple online personas. Compared to other nation-state-sponsored hacking groups, Handala Hack has kept a comparatively lower profile. Still, it has carried out a series of destructive wiping attacks and influence operations over the years.

Around the same time the Stryker attack came to light, posts to a Telegram account and website controlled by Handala Hack took credit for the takedown. Handala posts cited last week’s killing of 165 civilians at a girls’ school in Iran by an American Tomahawk missile and past hacking operations that the US and Israel have perpetrated against Iran.

What is the point of striking a corporation in retaliation for airstrikes carried out by the US and Israel?

Such actions are taken for their psychological effects, which are often disproportionately larger than the resources required to bring them about. With limited means for Iran to strike back militarily, the Stryker disruption allows an alternative means for the country and its allies to retaliate. The success is intended to demonstrate that pro-Iranian forces can still exact a price that has a material effect on large populations in the US, Israel, and countries allied with them.

As a major supplier of lifesaving medical devices relied on throughout the US and its allies, Stryker plays a strategic and symbolic role in their security, researchers at Flashpoint said Thursday. “By operating behind a persona styled as a grassroots, pro-Palestinian resistance movement, Iranian state-nexus actors are able to conduct destructive cyber operations against Western organizations while maintaining a degree of plausible deniability.”

Agents need vector search more than RAG ever did

What’s the role of vector databases in the agentic AI world? That’s a question that organizations have been coming to terms with in recent months.

The narrative had real momentum. As large language models scaled to million-token context windows, a credible argument circulated among enterprise architects: purpose-built vector search was a stopgap, not infrastructure. Agentic memory would absorb the retrieval problem. Vector databases were a RAG-era artifact.

The production evidence is running the other way.

Qdrant, the Berlin-based open source vector search company, announced a $50 million Series B on Thursday, two years after a $28 million Series A. The timing is not incidental. The company is also shipping version 1.17 of its platform. Together, they reflect a specific argument: The retrieval problem did not shrink when agents arrived. It scaled up and got harder.

“Humans make a few queries every few minutes,” Andre Zayarni, Qdrant’s CEO and co-founder, told VentureBeat. “Agents make hundreds or even thousands of queries per second, just gathering information to be able to make decisions.”

That shift changes the infrastructure requirements in ways that RAG-era deployments were never designed to handle.

Why agents need a retrieval layer that memory can’t replace

Agents operate on information they were never trained on: proprietary enterprise data, current information, millions of documents that change continuously. Context windows manage session state. They don’t provide high-recall search across that data, maintain retrieval quality as it changes, or sustain the query volumes autonomous decision-making generates.
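The high-recall retrieval described here boils down to nearest-neighbor search over embedding vectors. A toy sketch of the idea (the filenames, vectors, and three-dimensional embeddings below are invented for illustration; production systems use approximate indexes such as HNSW rather than this brute-force scan):

```python
import math

def cosine(a, b):
    # Cosine similarity: angle-based closeness of two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def top_k(query, corpus, k=2):
    # Brute-force scan: score every stored vector, return the k best document ids.
    ranked = sorted(corpus, key=lambda doc: cosine(query, corpus[doc]), reverse=True)
    return ranked[:k]

corpus = {
    "policy.pdf":  [0.9, 0.1, 0.0],
    "memo.txt":    [0.8, 0.2, 0.1],
    "invoice.csv": [0.1, 0.9, 0.1],
}
print(top_k([1.0, 0.0, 0.0], corpus))  # documents nearest the query vector
```

An agent issuing hundreds of such queries per second, over millions of continuously changing documents, is what turns this simple lookup into an infrastructure problem.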

“The majority of AI memory frameworks out there are using some kind of vector storage,” Zayarni said. 

The implication is direct: even the tools positioned as memory alternatives rely on retrieval infrastructure underneath.

Three failure modes surface when that retrieval layer isn’t purpose-built for the load. At document scale, a missed result is not a latency problem — it is a quality-of-decision problem that compounds across every retrieval pass in a single agent turn. Under write load, relevance degrades because newly ingested data sits in unoptimized segments before indexing catches up, making searches over the freshest data slower and less accurate precisely when current information matters most. Across distributed infrastructure, a single slow replica pushes latency across every parallel tool call in an agent turn — a delay a human user absorbs as inconvenience but an autonomous agent cannot.

Qdrant’s 1.17 release addresses each directly. A relevance feedback query improves recall by adjusting similarity scoring on the next retrieval pass using lightweight model-generated signals, without retraining the embedding model. A delayed fan-out feature queries a second replica when the first exceeds a configurable latency threshold. A new cluster-wide telemetry API replaces node-by-node troubleshooting with a single view across the entire cluster.
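Qdrant’s delayed fan-out is implemented server-side, but the general pattern it names, often called request hedging, can be sketched in a few lines. This is a conceptual sketch, not Qdrant’s code: the replica names, simulated latencies, and threshold are all invented for illustration.

```python
import time
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

def query_replica(name, delay, hits):
    time.sleep(delay)  # simulated network + search latency
    return name, hits

def hedged_search(replicas, hedge_after=0.05):
    # Ask the primary first; if it hasn't answered within `hedge_after`
    # seconds, fan out to a second replica and take whichever answers first.
    with ThreadPoolExecutor(max_workers=len(replicas)) as pool:
        futures = [pool.submit(query_replica, *replicas[0])]
        done, _ = wait(futures, timeout=hedge_after, return_when=FIRST_COMPLETED)
        if not done:  # primary is slow: hedge to the next replica
            futures.append(pool.submit(query_replica, *replicas[1]))
            done, _ = wait(futures, return_when=FIRST_COMPLETED)
        return next(iter(done)).result()

replicas = [("primary", 0.5, ["doc-a"]), ("replica-2", 0.01, ["doc-a"])]
name, hits = hedged_search(replicas)
print(name)  # the fast second replica answers while the primary stalls
```

The point of the threshold is that the second query fires only when the first is slow, which bounds tail latency without doubling load on every request.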

Why Qdrant doesn’t want to be called a vector database anymore

Nearly every major database now supports vectors as a data type — from hyperscalers to traditional relational systems. That shift has changed the competitive question. The data type is now table stakes. What remains specialized is retrieval quality at production scale.

That distinction is why Zayarni no longer wants Qdrant called a vector database.

“We’re building an information retrieval layer for the AI age,” he said. “Databases are for storing user data. If the quality of search results matters, you need a search engine.”

His advice for teams starting out: use whatever vector support is already in your stack. The teams that migrate to purpose-built retrieval do so when scale forces the issue.

“We see companies come to us every day saying they started with Postgres and thought it was good enough — and it’s not.”

Qdrant’s architecture, written in Rust, gives it memory efficiency and low-level performance control that higher-level languages don’t match at the same cost. The open source foundation compounds that advantage — community feedback and developer adoption are what allow a company at Qdrant’s scale to compete with vendors that have far larger engineering resources.

“Without it, we wouldn’t be where we are right now at all,” Zayarni said.

How two production teams found the limits of general-purpose databases

The companies building production AI systems on Qdrant are making the same argument from different directions: agents need a retrieval layer, and conversational or contextual memory is not a substitute for it.

GlassDollar helps enterprises including Siemens and Mahle evaluate startups. Search is the core product: a user describes a need in natural language and gets back a ranked shortlist from a corpus of millions of companies. The architecture runs query expansion on every request – a single prompt fans out into multiple parallel queries, each retrieving candidates from a different angle, before results are combined and re-ranked. That is an agentic retrieval pattern, not a RAG pattern, and it requires purpose-built search infrastructure to sustain it at volume.
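The fan-out-and-combine step can be sketched concretely. The snippet below is illustrative only: the expanded queries and company names are invented, GlassDollar's actual pipeline is not public, and a production system would run each retrieval as a parallel vector search rather than a hard-coded dict. Reciprocal rank fusion (RRF) is a standard way to merge the ranked lists that parallel queries return:

```python
from collections import defaultdict

def rrf_fuse(result_lists, k=60):
    """Merge several ranked candidate lists with reciprocal rank fusion:
    each appearance contributes 1 / (k + rank + 1) to a candidate's score."""
    scores = defaultdict(float)
    for results in result_lists:
        for rank, doc_id in enumerate(results):
            scores[doc_id] += 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

# One natural-language request, expanded into three query "angles".
# Each list stands in for the ranked output of one parallel vector search.
expanded = {
    "battery recycling startups": ["li-cycle", "redwood", "ascend"],
    "EV battery second-life use": ["redwood", "b2u", "li-cycle"],
    "lithium recovery technology": ["ascend", "li-cycle", "lilac"],
}

shortlist = rrf_fuse(expanded.values())
print(shortlist[0])  # prints "li-cycle": surfaced by all three angles
```

RRF rewards candidates that rank well across many query angles, which is how fan-out lifts recall above any single phrasing of the request, at the cost of multiplying the query volume the retrieval layer must sustain.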

The company migrated from Elasticsearch as it scaled toward 10 million indexed documents. After moving to Qdrant it cut infrastructure costs by roughly 40%, dropped a keyword-based compensation layer it had maintained to offset Elasticsearch’s relevance gaps, and saw a 3x increase in user engagement.

“We measure success by recall,” Kamen Kanev, GlassDollar’s head of product, told VentureBeat. “If the best companies aren’t in the results, nothing else matters. The user loses trust.” 

Agentic memory and extended context windows can't absorb the workload GlassDollar handles, either.

“That’s an infrastructure problem, not a conversation state management task,” Kanev said. “It’s not something you solve by extending a context window.”

Another Qdrant user is &AI, which is building infrastructure for patent litigation. Its AI agent, Andy, runs semantic search across hundreds of millions of documents spanning decades and multiple jurisdictions. Patent attorneys will not act on AI-generated legal text, which means every result the agent surfaces has to be grounded in a real document.

“Our whole architecture is designed to minimize hallucination risk by making retrieval the core primitive, not generation,” Herbie Turner, &AI’s founder and CTO, told VentureBeat. 

For &AI, the agent layer and the retrieval layer are distinct by design.

“Andy, our patent agent, is built on top of Qdrant,” Turner said. “The agent is the interface. The vector database is the ground truth.”

Three signals it’s time to move off your current setup

The practical starting point: use whatever vector capability is already in your stack. The evaluation question isn’t whether to add vector search — it’s when your current setup stops being adequate. Three signals mark that point: retrieval quality is directly tied to business outcomes; query patterns involve expansion, multi-stage re-ranking, or parallel tool calls; or data volume crosses into the tens of millions of documents.

At that point the evaluation shifts to operational questions: how much visibility does your current setup give you into what’s happening across a distributed cluster, and how much performance headroom does it have when agent query volumes increase?

“There’s a lot of noise right now about what replaces the retrieval layer,” Kanev said. “But for anyone building a product where retrieval quality is the product, where missing a result has real business consequences, you need dedicated search infrastructure.”


Tech

Wonderful raises $150M Series B


The Amsterdam-headquartered startup has been out of stealth for just eight months, but it already has 350 staff, production deployments across four continents, and a valuation reportedly approaching $1.7 billion.


There is a problem that every major enterprise AI deployment eventually runs into: the gap between a convincing demo and a working system in production. Models hallucinate. Integrations break. Compliance requirements differ by country.

Local languages do not behave the way US-centric training data assumes. The organisations best placed to close this gap, the argument goes, are not those with the best models; they are those with the most people on the ground.

That thesis is the foundation of Wonderful, the enterprise AI agent platform founded in early 2025 by Bar Winkler and Roey Lalazar.

The company has raised $150 million in a Series B round led by Insight Partners, with participation from existing backers Index Ventures, IVP, Bessemer Venture Partners, and Vine Ventures. 


The raise brings Wonderful’s total disclosed funding to $286 million, a striking figure for a company that only emerged from stealth in mid-2025 with a $34 million seed round, then raised a $100 million Series A in November of the same year.

Wonderful is headquartered in Amsterdam, with Israeli founders and a model built on local deployment teams embedded inside client organisations. The company says it now operates in more than 30 countries across Europe, the Middle East, Asia-Pacific, and Latin America, serving enterprises in telecoms, financial services, manufacturing, and healthcare.

It will use the new capital to grow headcount from 350 to approximately 900 by year-end.

The company’s core product is an enterprise AI agent platform, model-agnostic by design, continuously benchmarking and selecting AI models for each use case.

The agents handle customer-facing workflows across voice, chat, and email, as well as internal workflows such as employee onboarding, compliance, and IT support.

What distinguishes Wonderful’s model is the deployment layer: rather than selling software and leaving clients to integrate it themselves, the company embeds local teams inside enterprise environments to manage rollout, integration, and post-deployment optimisation.

“In 2026, enterprises will be deciding who to partner with to operationalize AI across their organizations, and those decisions will hinge on who can deliver deep integrations across complex infrastructures and tailor solutions to each organization’s unique environment,” said Bar Winkler, CEO and Co-founder of Wonderful.

“We built our platform and operating model around that reality, and the demand we’re seeing globally reflects it.”

The company says more than 70% of enterprises that begin with a single use case expand into additional workflows within three months, a retention dynamic that Bar Winkler attributes to Wonderful’s practice of building a shared architecture across an enterprise’s core systems from the outset. Once that foundation is in place, activating new use cases becomes progressively faster.

Wonderful also claims measurable operational results from production deployments: reductions in handling times of up to 60%, containment rates above 80%, and multi-million-dollar annual efficiency gains for individual clients. These figures are not independently audited.

“Over 70% of enterprises that begin with a single use case expand into additional workflows within the first three months,” Winkler added. “That expansion is possible because we built a shared foundation across core systems from day one.”

“Wonderful is establishing trust and deep partnerships inside complex enterprises at a critical moment for the market,” said Jeff Horing, managing director at Insight Partners. “We believe that the team’s combination of platform strength and execution position Wonderful as a strong enterprise partner in today’s ecosystem.”

Lalazar, the company’s CTO, framed the ambition in broader terms. “We’re deploying agents across every business function, while pioneering the next generation of application layers that will transform how organisations operate,” he said.

The enterprise AI agent market is crowded, and growing more so. Salesforce’s Agentforce, ServiceNow’s AI platform, and a wave of better-funded standalone startups are all pursuing the same budget line.

Wonderful’s differentiation rests on a bet that local deployment teams and multilingual agents will be decisive in markets where US-centric platforms struggle, that the structural complexity of global enterprise is, in effect, its moat.

Eight months out of stealth, the bet appears to be attracting capital. Whether it holds at scale is the question this round is funding.


Tech

Seattle’s downtown paradox: Commercial engine sputters amid improved safety and visitor growth


Mayor Katie Wilson speaks at the Downtown Seattle Association’s annual event on Wednesday. (GeekWire Photo / Lisa Stiffler)

Seattle is witnessing a curious role reversal in its economic narrative. While the city finally gains ground on perennial challenges like crime and transportation, its traditional growth engine — the tech sector and downtown employment — is beginning to sputter.

The city has for years been a tech, retail and arts hub, but its total downtown jobs peaked in 2019 with more than 340,000 workers. Since the pandemic, that number has been creeping downwards, hitting approximately 317,000 jobs — which is roughly on par with 2018 numbers, according to a new report from the Downtown Seattle Association (DSA).

“We’re going in the wrong direction,” said Jon Scholes, DSA president and CEO, at the organization’s annual State of Downtown event on Wednesday.

“Over this period where we’ve seen a decrease in jobs, we’ve seen a record increase in taxes that employers in the city of Seattle are paying — that employers aren’t paying in Bellevue and other cities in our region,” he added. “We have become an outlier when it comes to the cost of doing business in our city.”

Those costs include the city’s JumpStart tax, which targets the payrolls of large employers with high‑earning employees, as well as last year’s restructuring of Seattle’s tax on gross revenue that shifted the burden from smaller businesses to large ones. Also on the horizon is the new state income tax on wealthier individuals that lawmakers just passed.

Taxes are taking a lot of the blame, but other major forces are at work as well. Across the country, companies are cutting headcount as AI tools replace some roles, economic uncertainty lingers, and leaders move to trim what they see as pandemic-era corporate “bloat.”

That said, key elected leaders on Wednesday acknowledged concerns about rising taxes and government budgets.

“I very much appreciate that it is not ideal for our tax environment for businesses to be wildly out of step with neighboring jurisdictions,” Mayor Katie Wilson told the packed hall at the Seattle Convention Center.

Wilson and King County Executive Girmay Zahilay both pledged to scrutinize their governments’ budgets. Wilson said she expects to make “significant” cuts and Zahilay plans to build the county’s spending plans “from the ground up” rather than following the model of rolling past budgets forward.

Jon Scholes, Downtown Seattle Association president and CEO, speaking at the Seattle Convention Center. (GeekWire Photo / Lisa Stiffler)

The fiscal caution comes even as the city’s social metrics trend upward. The 2025 DSA report highlighted several bright spots:

  • Crime: Incidents and violent crimes have decreased downtown since a 2021 peak.
  • Residential Growth: The number of downtown residents has reached nearly 110,000 — an 80% increase over the past 25 years.
  • Visitors: More than 15.3 million unique visitors came to downtown — an increase from 2019, but flat compared to the year before. People are also visiting more frequently.
  • Transit: Light rail boardings at downtown stations jumped 23% over 2024.

And yet that residential and visitor energy hasn’t yet translated into a full-scale recovery of the Monday-through-Friday workforce. Despite return-to-office mandates, daily worker foot traffic averages just 145,000 — still well below the 226,000 workers on average who filled downtown streets each day in 2019, according to DSA.

Amazon has helped with the rebound, but multiple rounds of layoffs have dampened the effect.

Once Seattle’s largest employer, Amazon recently lost that crown to the University of Washington, the Seattle Times reported. The company had a peak of about 60,000 workers in the city in 2020, but that headcount has slumped to fewer than 50,000. The figure could dip further as Amazon vacates a seven-story, 251,000-square-foot leased space downtown this spring.

A display at the Downtown Seattle Association’s annual event. (GeekWire Photo / Lisa Stiffler)

Beyond the tech giants, the broader commercial landscape is struggling with a growing volume of empty office spaces. Downtown vacancies reached a new high of 34.7% in the last quarter of 2025, according to CBRE. Before the pandemic, that number was hovering around 8%.

Despite these headwinds, the contractions aren’t universal. Some firms are doubling down on the city’s core: Impinj recently renewed and increased its downtown office space while DAT Solutions and Docker both took sublease space along the city’s waterfront.

In an interview after the event, Scholes emphasized that the health of the entire economic ecosystem depends on these major anchors.

“We need big employers in the city,” he said. “I was with some small businesses earlier this week, and they said, ‘You know, our best customers are big employers. They are our lifeblood … If you’re a restaurant, if you’re a barbershop downtown, you’re relying on people in those upper floors.’”


Tech

This web app lets you ‘channel surf’ YouTube like a ’90s kid watching cable


Many of us remember the halcyon days of being a kid in the ’90s, spending a weekend afternoon with remote control in hand and a seemingly endless well of stuff to watch on TV. Now you can relive the experience thanks to the appropriately named Channel Surfer web app. It’s essentially a YouTube discovery tool that surfaces interesting videos, but presented in a retro homage to the cable channel screen.

Channel Surfer is the work of developer Steven Irby. He has 40 channels on the app right now, mostly grouping content by theme. There are channels for typical cable fare like news and sports, but also music, movies and a number of more tailored tech subjects like AI, gaming, gadgets and space.

“I built Channel Surfer because I’m tired of the algorithms and indecision fatigue,” he told TechCrunch, which is where we discovered the app. “I miss channel surfing and not having to decide what to watch. I want to just sit and tune into what’s on and not think about what to watch next.”

It seems Irby isn’t alone: he posted on X that Channel Surfer’s view count broke 10,000 on its first day.


Tech

Allen Institute for AI CEO Ali Farhadi steps down as nonprofit navigates shifting AI landscape


Ali Farhadi has been CEO of the Allen Institute for AI since July 2023. (GeekWire File Photo / Todd Bishop)

Ali Farhadi is stepping down as the CEO of the Allen Institute for AI (Ai2), after a two-and-a-half-year tenure that brought growing recognition to the Seattle-based nonprofit research institute as a key player in the world of open-source artificial intelligence.

He will be replaced on an interim basis by Peter Clark, a founding member of Ai2, as the board begins a search for a permanent successor. Clark served in the same interim role after the departure of founding CEO Oren Etzioni in 2022. Farhadi’s last day is Friday.

The announcement was made late Thursday morning to the roughly 200-person Ai2 team, said board chair Bill Hilf, in an interview with GeekWire shortly after the internal meeting.

Hilf said he and Farhadi had been discussing the transition for about six months. Farhadi wants to pursue his research ambitions at the frontier of large-scale AI, where for-profit companies are spending billions of dollars a year on computing horsepower, Hilf said.

Asked why Farhadi couldn’t pursue that work at Ai2, Hilf cited the financial realities of competing against tech giants at the largest scale of AI model development as a nonprofit. He said the board has to weigh whether philanthropic dollars are best spent trying to keep pace.

“The cost to do extreme-scale open model research is extraordinary,” Hilf said, adding that it’s “really hard to do extreme-scale model work inside of a nonprofit.”

Hilf said Ai2 will continue its work on areas including OLMo, its open-source AI models, while also citing its focus on applying AI to real-world problems in areas such as climate, conservation, and health.

A computer vision specialist, Farhadi had deep roots at Ai2. He joined the institute in 2015 and co-founded the Ai2 spinout Xnor.ai, which Apple acquired in 2020 for an estimated $200 million in one of the institute’s biggest commercial successes. 

He led machine learning efforts at Apple before returning to lead Ai2 in July 2023.

Farhadi has not said where he might go next. He is expected to remain a professor at the University of Washington’s Allen School of Computer Science and Engineering.

“Leading Ai2 has been a true privilege,” Farhadi said in a statement, citing the Ai2 team’s release of more than 300 models and artifacts with more than 33 million downloads. 

He pointed to advances in health, science, and environmental research, and cited investments from the NSF and Nvidia and initiatives such as the Cancer AI Alliance as evidence of its impact.

“Ai2 is entering its next phase from a position of real strength, with growing global adoption of our work and an extraordinary team driving innovation,” Farhadi said. “I’m excited to see them continue pushing the boundaries of what AI can achieve for humanity.”

Farhadi will leave the Ai2 board. Chief Operating Officer Sophie Lebrecht is also leaving. Lebrecht worked alongside Farhadi at Xnor.ai and at Apple before joining him at Ai2. 

Hilf noted that all programs planned for 2026 are fully funded and that Farhadi wanted to ensure that stability before stepping down. 

Existing commitments are not affected, Hilf said, including a $152 million, five-year initiative backed by the National Science Foundation and Nvidia to build open AI models for scientific research, and Ai2’s role in the Cancer AI Alliance led by Seattle’s Fred Hutch Cancer Center.

Ai2 was founded in 2014 by the late Microsoft co-founder Paul Allen. It receives major funding from the Foundation for Science and Technology, an Allen entity. Jody Allen is on the Ai2 board.

Clark, the interim CEO, said in a statement that he is committed to a smooth transition. 

“Our mission remains unchanged: advancing AI research and engineering for the common good, and turning our open breakthroughs into lasting, real-world impact,” he said.

Hilf said the board is looking for a new CEO who combines scientific depth with nonprofit management experience and a passion for open science, acknowledging that the combination is rare and that building an open community is harder than people think.

