Tech

Meta Is Sued Over Scam Ads on Facebook and Instagram

On Tuesday, the nonprofit Consumer Federation of America filed a lawsuit against Meta, alleging that the way the social networking giant handles scammers on its platforms violates Washington, DC’s consumer protection laws.

While many online scams involve direct outreach to victims by scammers (who are often themselves human trafficking victims trapped in scam compounds), CFA’s lawsuit focuses on fraudulent advertising that CFA alleges Meta profited from and allowed to “proliferate on its platforms,” despite publicly promising that it takes cracking down on fraud and scams seriously.

In its complaint, CFA points to ads found in Meta’s ads library that CFA claims are types of well-known scams, including several that appear to target people by their birth year and tout $1,400 checks, as well as others that advertise free government iPhones.

Speaking with WIRED, Ben Winters, CFA’s director of AI and data privacy, says others can find more dubious ads just by searching Meta’s ad library using key words like “free phone” and “stimulus check.” WIRED’s quick perusal of the ads library on Monday shows more live ads for “secret tax checks” that lead to a website that promises to reveal “Wall Street’s recession-proof investing strategy.”

Meta did not immediately respond to a request for comment.

CFA is seeking to recover damages and what it says are illegal profits from Meta, in addition to business reforms. Winters says that there’s more to be done to take down repeat violators and scrutinize ads that promise things like free government programs that don’t exist before they’re put in front of consumers.

Meta has faced particular scrutiny because Facebook, Instagram, and WhatsApp—which are all owned by Meta—are among the most widely used online platforms by Americans, according to a recent Pew Research Center report. In late 2025, Reuters reported on a set of internal Meta documents that detailed how the company dealt with fraudulent and prohibited user activity, including a May 2025 presentation that estimated that its platforms were involved with a third of all successful scams in the US. Another presentation cited by Reuters alleged that an internal Meta review found it “is easier to advertise scams on Meta platforms than Google.”

One Meta document from 2024 that Reuters cited estimated that the company would earn 10.1 percent of its revenue that year—around $16 billion—from ads that were actually scams or other types of prohibited content. To put that figure in perspective, the FBI estimated that in 2024, Americans lost $16 billion to all internet crimes. At the time, a Meta spokesperson called the estimate “rough and overly inclusive” and said that the set of documents Reuters reported on “distorts Meta’s approach to fraud and scams” and that the actual revenue was lower, but declined to tell Reuters by how much.

In June 2025, a bipartisan coalition of state attorneys general urged Meta to crack down on Facebook ads that led consumers to WhatsApp groups that were used for carrying out investment scams. The letter, which was signed by New York AG Letitia James, said that Meta’s solutions were not working and that investigators in New York kept seeing scam advertisements months after submitting reports to Meta.

Since then, the US Virgin Islands attorney general’s office filed a lawsuit against Meta that, among other things, alleged that the company not only failed to crack down on scam advertising but charged advertisers higher rates to run ads flagged as likely to be fraudulent. That lawsuit is ongoing.

Though the federal government and many states have consumer protection laws similar to the DC law that CFA alleges Meta violated, Winters says he’s not holding his breath for the federal government to take action, and while he appreciates the work of state attorneys general, he believes consumers need relief now.

“We appreciate their work and think it’s absolutely critical, but we can’t wait for them to act when we haven’t seen them able to act as quickly as we need to,” Winters says. “This is why nonprofits and civil society exist in the idealized world, right? To fill in gaps where there are gaps.”


Tech

Apple will not buy Disney, no matter how often it hears that it will

Tenth time still isn’t the charm. One day after Tim Cook announced that he was handing the reins to John Ternus, an analyst who has beaten this drum before is again saying today that a sale of Disney to Apple can and must happen. That sale is even less likely to happen now than it was the last nine times we’ve updated this story.

It may have made Apple Park, but Apple is not going to take over Disney’s magic kingdom

The rumor that Apple will buy Disney is as old as the iPod, and it has lasted through a couple of Disney CEOs now. You’d think that analysts would have figured out that it isn’t going to happen.
Or at least they should have begun to see that clickbait headlines about why Apple must buy Disney have to be losing their pull as the years go by and Apple keeps on doing nothing of the sort.
Continue Reading on AppleInsider | Discuss on our Forums


Tech

A rare particle decay at the LHC is behaving strangely, and physicists are starting to question whether their most trusted theory holds


  • A rare decay exposes cracks in physics that refuse easy explanation
  • The Standard Model shows strain under one of its toughest tests
  • Four-sigma anomaly hints something subtle may be missing in physics

Scientists at the Large Hadron Collider (LHC) have found something strange inside a particle decay process called an electroweak penguin decay, which could signal a major problem for modern physics.

The LHC is a 27-kilometer circular tunnel buried under the French-Swiss border where proton beams smash together at nearly the speed of light, recreating conditions similar to those just after the Big Bang.
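For context on the “four-sigma” language above: physicists quote the size of an anomaly in standard deviations of a normal distribution, and four sigma corresponds to a small but not negligible chance of a pure statistical fluke. A minimal sketch of that conversion, using only the Python standard library (this is illustrative, not code from the experiment):

```python
from math import erfc, sqrt

def sigma_to_p(sigma: float) -> float:
    """One-sided tail probability of a standard normal beyond `sigma`."""
    return 0.5 * erfc(sigma / sqrt(2))

# A 4-sigma anomaly: the odds of a fluctuation this large by chance alone
print(f"{sigma_to_p(4.0):.1e}")  # about 3.2e-05, roughly 1 in 31,000
# The conventional 5-sigma discovery threshold is far stricter
print(f"{sigma_to_p(5.0):.1e}")  # about 2.9e-07
```

This is why a four-sigma result is taken seriously but not yet called a discovery: particle physics conventionally demands five sigma, in part because many measurements are scanned at once, so unlikely flukes turn up somewhere eventually.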


Tech

OpenAI takes Codex into enterprise software shops worldwide

OpenAI is building a systems integrator channel for Codex, enlisting large consulting firms to carry the coding agent into organisations it cannot reach through direct sales. Cognizant and CGI are the first named SI partners in the programme, announced on the same day. Codex has grown 6x among ChatGPT Business and Enterprise users since January.


OpenAI has launched a formal partner programme for Codex, its AI coding and software development agent, enlisting a select group of global systems integrators to deploy the product inside enterprise clients that lack the internal capability to implement and govern it themselves.

The first named partners, Cognizant (NASDAQ: CTSH) and CGI (NYSE: GIB), each announced their inclusion in the programme on 21 April, coinciding with OpenAI’s own blog post setting out the enterprise push.

Both firms describe being part of “a select group” of SIs chosen for their track record in deploying AI at enterprise scale. The programme is a distribution bet as much as a product one.


OpenAI’s direct sales organisation can reach technology-forward enterprises with dedicated engineering teams, but large-scale rollouts into complex, regulated, or legacy-heavy environments require the change management, systems integration, and industry-specific compliance expertise that consulting firms carry at scale.

Cognizant, with $21.1 billion in annual revenue and operations across financial services, healthcare, and manufacturing, is embedding Codex into its Software Engineering Group as a standardised capability, both for its own delivery and as a tool it takes to clients.

CGI, whose engineers already use Codex in volume across government, public safety, and commercial sectors, gains early access to new Codex capabilities as part of the expanded agreement.

OpenAI’s chief revenue officer, Denise Dresser, framed the partnership in terms of the gap between early Codex adoption and repeatable deployment at scale.

“As enterprises move quickly to put Codex to work, we’re working with leading partners like Cognizant to help more organisations move from early usage to repeatable deployment,” she said.

The programme extends Codex’s scope beyond code generation: both partners are positioning it for legacy code modernisation, vulnerability detection, code review automation, and broader agentic workflow use cases beyond software development.

The backdrop to the announcement is a pattern of rapid enterprise adoption that has strained the product’s earlier model of direct-access usage. Codex now has 3 million weekly active developers, up from 2 million in mid-March and 1.6 million at the time of the desktop app launch in February.

Within ChatGPT Business and Enterprise, the number of Codex users grew 6x between January and April. OpenAI’s enterprise segment now accounts for more than 40% of its revenue and is on track to reach parity with consumer revenue by the end of 2026.

Named enterprise users include Notion, Ramp, Braintrust, GitHub, Nextdoor, Wonderful, Cisco, and Nvidia, among others.

The Codex partner programme builds on a broader enterprise alliance strategy OpenAI announced in February, when it unveiled Frontier Alliances with McKinsey, Boston Consulting Group, Accenture, and Capgemini, oriented around its Frontier agent platform rather than Codex specifically.

The distinction matters: Frontier Alliances are positioned as strategy-and-deployment partnerships for OpenAI’s enterprise agent infrastructure, while the Codex partner programme is a more targeted engineering-and-delivery play aimed at software teams.

Both tracks reflect the same underlying ambition: to use incumbent consulting relationships to accelerate adoption in the parts of the enterprise market that are slow to self-serve.

The dynamics of this channel push are uncomfortable for some established software vendors. Fortune has reported that investors in SaaS companies including Salesforce, Workday, and ServiceNow have repriced their stakes in part on the concern that enterprises will use AI coding agents such as Codex and Anthropic’s Claude Code to build bespoke software, eliminating the need for standard SaaS products.

Enlisting the same SI firms those vendors have historically depended on for sales and implementation accelerates that dynamic.

Accenture, Capgemini, Cognizant, and CGI each serve large incumbent software vendors and AI-native platforms simultaneously; the degree to which they tilt their Codex workloads away from existing enterprise software implementations will be the commercial signal to watch.


Tech

YouTube is coming for celebrity deepfakes with new AI likeness detection tech

YouTube is cracking down on celebrity deepfakes, and this time around, it is not just talking about the problem in vague platform-safety terms. In a new blog post, YouTube announced that it is expanding its likeness detection technology to the entertainment industry.

So now, the tools will be accessible to talent agencies and management companies for the celebrities they represent. This tool works in a way that is similar to Content ID, but rather than matching copyrighted media, it looks for AI-generated content using a person’s likeness and gives eligible participants the ability to find that content and request removal.

Why this is YouTube’s answer to AI celebrity fakes

The Content ID comparison here is key, since that is exactly how YouTube wants people to think about this. If the system works well, it could give high-profile people a much faster way to spot fake videos using their face before those clips spread too far.

And yes, this is clearly about celebrity fakes first. YouTube’s expanded program is aimed at the entertainment industry right now, with support from major talent agencies and management companies, including CAA, UTA, WME, and Untitled Management.

The company has worked with those groups to refine how the tool should serve talent, which suggests this has been shaped around the practical needs of public figures rather than launched as a generic moderation experiment.

One notable detail in the announcement is that celebrities and entertainers are eligible to access the tool even if they do not have a YouTube channel. In other words, it isn’t just a creator perk and functions more like a platform-wide control system. Deepfake scams, fake endorsements, and manipulated celebrity clips are no longer fringe internet weirdness. They’re a real part of online dangers.

How far is YouTube taking this

As of right now, the announcement is focused on the entertainment industry. YouTube did not announce a broad public rollout that protects regular users. We also have no details regarding how fast the detection system is or how proactive the company will be against these deepfakes.


Tech

Garmin Expands Retail Presence in India

This year, Garmin has been working quite hard to improve its offline presence. And keeping with that momentum, the company has opened a new exclusive brand store in New Delhi. Unlike a typical retail outlet, Garmin’s new store focuses on giving customers a hands-on feel of its products. Visitors can try out a wide range of GPS-enabled smartwatches, including the Fenix series for endurance users, Forerunner models for training insights, Instinct for rugged outdoor use, and the premium MARQ collection.

The store also features wellness-focused options like the Venu and Vívoactive series, catering to users looking for everyday fitness tracking alongside advanced health insights.

More Than Just Smartwatches

Garmin is also using this space to showcase its broader ecosystem. This includes golf tech like launch monitors and simulators, indoor cycling solutions from the Tacx lineup (including the Neo Bike Plus), and handheld GPS devices built for navigation in challenging environments.

The idea is simple: instead of just selling devices, Garmin wants users to see how its products work together across different use cases—from fitness tracking to outdoor exploration and sports performance.

With this launch, Garmin continues to expand its offline footprint in India, complementing its presence across multi-brand outlets such as Just in Time, Helios, Reliance Digital, and Malabar Watches. The company is also active online through its website, Amazon, and Flipkart.


Tech

Job Cuts Driven By AI Are Rising On Wall Street

Firms like Bank of America, Citi, Wells Fargo, and others are reporting strong profits while reducing head count and automating more work. “All of them credited A.I. to some degree … in areas ranging from the so-called back office, where tens of thousands of employees fill out paperwork to comply with various laws and regulations, to the front office, where seven-figure salaried professionals put together complicated financial transactions for corporate clients,” reports the New York Times. From the report: Less than four months ago, Bank of America’s chief executive, Brian T. Moynihan, volunteered in a TV interview what he would say to his 210,000 employees about the chance of artificial intelligence replacing human work. “You don’t have to worry,” he said. “It’s not a threat to their jobs.” Last week, after Bank of America reported $8.6 billion in profit for the first quarter — $1.6 billion more than the same period a year earlier — Mr. Moynihan struck a different tone. The bank’s bottom line, he said, was helped by shedding 1,000 jobs through attrition by “eliminating work and applying technology,” which he repeatedly specified was artificial intelligence. He predicted more of that in the months and years to come. “A.I. gives us places to go we haven’t gone,” Mr. Moynihan said.

The veneer of Wall Street’s longstanding assertion — that A.I. will enhance human work, not replace it — is rapidly peeling away, as evidenced by the current quarterly earnings season. JPMorgan Chase, Citi, Bank of America, Goldman Sachs, Morgan Stanley and Wells Fargo racked up $47 billion in collective profits, up 18 percent, while shedding 15,000 employees. All of them credited A.I. to some degree with helping cut jobs and automate work in areas ranging from the so-called back office, where tens of thousands of employees fill out paperwork to comply with various laws and regulations, to the front office, where seven-figure salaried professionals put together complicated financial transactions for corporate clients.

Unlike executives in Silicon Valley, few major financial figures are stating outright that A.I. is eliminating jobs. Citi, for example, has pledged to shrink its work force by 20,000 people through what one executive described to financial analysts last week as the company’s “productivity and efficiency journey.” The bank is paying for A.I. software from Anthropic, Google, Microsoft and OpenAI, to automatically read legal documents, approve account openings, send invoices for trades and organize sensitive customer data, among other tasks, according to public statements by bank executives and two people familiar with Citi’s systems. Among the recent job cuts at Citi were scores of employees who were part of the bank’s “A.I. Champions and Accelerators” program, according to the two people, who were not permitted by the bank to speak publicly. The program involves Citi employees who perform their day jobs while also working to persuade their colleagues to adopt A.I. technologies.


Tech

Judge Acquits Penis Costume-Wearing Grandma While Saying Some Dumb Stuff About Probable Cause

from the first-amendment-isn’t-subjective dept

Last fall, an Alabama police officer decided he wasn’t going to allow a 62-year-old woman to exercise her First Amendment rights — not if she was going to do so from inside an inflatable penis costume.

Yes, these are sentences we actually have to write here at Techdirt — things that seem so implausible you’d expect them to be generated from the sloppiest of AI prompts. It’s a real thing, though. It happened to Fairhope, Alabama resident Renea Gamble. It was inflicted by Fairhope PD officer Andrew Babb, who apparently took personal offense at Gamble’s inflatable penis costume and the “No Dick-Tator” sign she carried during a “No Kings” protest.

You can watch the arrest in all of its ingloriousness below. It’s alternately comical and horrifying. Horrifying, because it involves officers assaulting a 62-year-old grandmother. Comical, because multiple attempts are made to fit the person and costume into a police cruiser before deciding it might be easier if the person and costume were separated… which then leads to an officer discovering it’s kind of difficult to shove a non-resisting inflatable penis costume into the trunk of a police car.

This arrest and resulting prosecution gained national attention. Rather than encourage the city to drop the prosecution, it seemingly emboldened it. Prosecutors waited until people had moved on to the next outrage before dropping additional charges on Renea Gamble, including “disturbing the peace” and “giving a false name to law enforcement.” (The latter charge stemmed from Gamble telling the arresting officers her name was “Auntie Fa.”)

Officer Babb — as captured by his own recording — presented a very subjective take on the First Amendment when arresting Gamble. He not only demanded Gamble explain what he was supposed to tell his own kids if they happened to see her costume (wtaf?), but said her particular form of expression was inherently unlawful because Fairhope was “a family town.”

The officer was as wrong about free speech as the town officials who supported this arrest and prosecution. Fairhope Mayor Sherry Sullivan called the costume an “obscene display.” City council president Jack Burrell said the costume “violated community standards,” without bothering to assess what the community’s standards actually were.

Fortunately/unfortunately for him, a local radio station did exactly that, arriving at the opposite conclusion:

In December, a Mobile-based talk radio station held a listener poll to choose its annual Alabamian of the Year, with “Inflatable Fairhope Protest Penis” receiving the most votes.

Much more legitimately fortunate is the disposition of Renea Gamble’s criminal case. As AL.com reports, it has been tossed by municipal judge Haymes Snedeker. However, Snedeker’s acquittal comes with some caveats that will make it a bit more difficult for Gamble to pursue a civil rights lawsuit in this particular venue:

Judge Haymes Snedeker, after a trial lasting more than two hours, said he did not believe Fairhope Police Cpl. Andrew Babb was attempting to suppress 62-year-old Renea Gamble’s free speech rights during their encounter at the anti-Trump protest. He also said there may have been enough probable cause for Babb to arrest her.

However, Snedeker said he was not 99.9% certain that Gamble should be convicted of crimes stemming from the actions that led to her arrest. She was found not guilty of misdemeanor charges of disorderly conduct and resisting arrest, as well as a municipal violation for disturbing the peace and giving a false name to law enforcement.

Snedeker gives the officer too much credit, especially when his own statements during the arrest made it clear he was singling Gamble out because he didn’t agree with her particular form of free expression. The recording shows Babb wanted to manhandle this penis because he was employed by a “family town” and didn’t want to have to explain to his kids what this costume might represent. He didn’t present anything approaching legal justification prior to pinning Gamble to the ground and handcuffing her.

The judge said all of this despite the officer’s testimony being completely undercut by the recording of the arrest.

Babb testified that he was using de-escalation techniques he was trained to employ as a police officer. He said he was concerned about safety and viewed Gamble’s costume as an “obstruction.” He said he did not arrest her because he was personally offended by the costume or her anti-Trump message.

[…]

[Gamble’s lawyer David] Gespass disagreed, arguing that body camera footage revealed the true nature of the arrest. In the footage, Babb tells Gamble that her costume would not be tolerated in a town that “has values.”

“That’s all he talked about when he was confronting her was, ‘I am not going to put up with this in my town,’” Gespass said. “He said nothing about her causing any problems with traffic. Certainly, if you watch the video, he is not de-escalating anything. He approached her aggressively.”

That wasn’t the only stupid thing said by the government. Here’s the prosecutor attempting to salvage an obviously bogus prosecution:

“There is no constitutional right to wear a total erect penis on the side of the road,” he said. “I’m sorry.”

Hmm. Seems wrong. Pretty sure in this context it’s protected speech. And all of these qualifiers suggest no prosecution would be happening if Gamble had simply let a little bit of the air out of the costume to appear a bit more flaccid.

Both the cop and the prosecutor (Marcus McDowell) are welcome to say dumb things in their own defense during testimony. For the judge to suggest this arrest might have been supported by probable cause demands a better explanation than what was given here. If the standard is only that one cop felt something violated the law, the First Amendment is meaningless. It’s the sort of thing that tells citizens their rights only matter once they’re violated… and even then, they still may not mean much. The judge blew the call here and the local cops know it. Gamble still has a target on her back and the cops have the judicial leeway to keep arresting protesters they personally don’t like.

Filed Under: alabama, andrew babb, david gespass, fairhope pd, free speech, haymes snedeker, no kings, penis, renea gamble, trump administration


Tech

Trump says Anthropic Pentagon deal is ‘possible’

The US president told CNBC on Tuesday that Anthropic is ‘shaping up’ following a White House meeting last Friday at which the company’s CEO Dario Amodei discussed its Mythos AI model with Chief of Staff Susie Wiles and Treasury Secretary Scott Bessent.

The Pentagon’s blacklisting of Anthropic remains in legal limbo, with a federal appeals court and a San Francisco district court having reached conflicting conclusions.


President Donald Trump told CNBC’s Squawk Box on Tuesday that a deal allowing Anthropic’s AI models to be used within the Department of Defense is “possible,” describing the company as “shaping up.”

“They came to the White House a few days ago, and we had some very good talks with them, and I think they’re shaping up,” Trump said.

“They’re very smart, and I think they can be of great use.” The comments mark a striking rhetorical reversal from a president who, in late February, posted on Truth Social ordering all federal agencies to “IMMEDIATELY CEASE all use of Anthropic’s technology” and declared that his administration would “not do business with them again.”

Trump’s remarks follow a White House meeting on Friday 18 April at which Anthropic CEO Dario Amodei met Chief of Staff Susie Wiles and Treasury Secretary Scott Bessent to discuss the company’s new Mythos model, a frontier AI system Anthropic has described as highly capable at cybersecurity tasks and has so far made available only to a small group of organisations.

The White House described the conversation as “productive and constructive.” Anthropic said Amodei had a “productive discussion” with administration officials about how the company and the US government can “work together on key shared priorities such as cybersecurity, America’s lead in the AI race, and AI safety.”

When reporters asked Trump about the meeting on a runway in Phoenix, he responded “Who?” and said he had “no idea” Amodei had been there.

The meeting took place against the backdrop of a dispute that has few precedents in the relationship between Washington and the technology industry.

In July 2025, Anthropic signed a $200 million contract with the Pentagon, becoming the first AI lab to have its models approved for use on the DOD’s classified networks.

But as negotiations over Claude’s deployment on the department’s GenAI.mil platform began in September, talks broke down. The Pentagon demanded that Anthropic grant unfettered access to its models for all lawful purposes.

Anthropic drew two firm lines: its AI would not be used in fully autonomous weapons systems that select targets without human intervention, and it would not be used for domestic mass surveillance of Americans.

Defense Secretary Pete Hegseth responded by designating Anthropic a “supply chain risk to national security” in late February 2026, a label previously reserved for companies associated with foreign adversaries.

The formal designation, confirmed to Anthropic’s leadership on 5 March, required defense contractors to certify they were not using Anthropic’s models in work with the military. Trump amplified the measure with his Truth Social directive.

The designation was, as Anthropic argued in subsequent litigation, unprecedented: as US District Judge Rita Lin noted in a stinging 43-page ruling that granted Anthropic a preliminary injunction in late March, it appeared to be directed not at a genuine national security threat but at punishing the company for “bringing public scrutiny to the government’s contracting position”: “classic illegal First Amendment retaliation,” she wrote.

The legal situation remains split. A federal appeals court in Washington DC denied Anthropic’s request to temporarily block the supply chain risk designation on 8 April. Judge Lin’s preliminary injunction in San Francisco, from a separate but related case, bars enforcement of Trump’s Truth Social ban on Claude across the rest of the government.

The practical effect is that Anthropic is excluded from Pentagon contracts but can continue working with other government agencies while both cases proceed. The DOD has continued to use Claude during the US-Iran war, which began before the blacklisting took effect.

What appears to have shifted the White House’s posture is Mythos. Parts of the intelligence community and CISA, the Cybersecurity and Infrastructure Security Agency, have been testing the model.

The White House Office of Management and Budget is setting up protocols to allow federal agencies to access a controlled version.

Treasury Secretary Bessent’s presence at Friday’s meeting was read by sources close to the negotiations as a signal that the economic and financial security arguments for Mythos access had reached the most senior levels of the administration.

As one administration source told Axios: “It would be grossly irresponsible for the US government to deprive itself of the technological leaps that the new model presents. It would be a gift to China.”

Whether any resumption of the Anthropic–Pentagon relationship is possible remains uncertain. Trump’s Tuesday comments refer to talks that have been promising but did not produce a deal.

The appeals court ruling on the supply chain risk designation still stands. Hegseth has not withdrawn his position. Anthropic, meanwhile, has engaged Ballard Partners, the lobbying firm where Wiles previously worked, for advocacy around Department of War procurement, a move that signals it understands the political dynamics as well as the legal ones.

The company’s annualised revenue has reached $30 billion and it is considering an IPO; the supply-chain risk designation damages enterprise credibility even where it does not block commercial deals.


Tech

AI research lab NeoCognition lands $40M seed to build agents that learn like humans

Investors are aggressively courting AI researchers to build startups that can make AI more reliable and efficient.

Yu Su, an Ohio State professor leading an AI agent lab, said he initially resisted the pressure from VCs to commercialize his work. He finally took the leap last year and spun out his work into a startup when he saw that foundational model advances could make agents truly personalized.

NeoCognition, a startup Su describes as a research lab developing self-learning AI agents, has just emerged from stealth with $40 million in seed funding. The round was co-led by Cambium Capital and Walden Catalyst Ventures, with participation from Vista Equity Partners and angels, including Intel CEO Lip-Bu Tan and Databricks co-founder Ion Stoica.

“Today’s agents are generalists,” Su told TechCrunch. “Every time you ask them to do a task, you take a leap of faith.”

According to Su, the issue lies in a lack of consistency. Current agents, whether from Claude Code, OpenClaw or Perplexity’s computer tools, successfully complete tasks as intended only about 50% of the time, he said.

Since agents are still so unreliable, they are not ready to be trusted, independent workers, Su told TechCrunch. NeoCognition intends to change that by developing an agent system that can self-learn to become an expert in any domain, similar to how humans learn.

Su argues that while human intelligence is broad, its real power is our ability to specialize. When we enter a new environment or profession, we can rapidly master its unique rules, relationships, and consequences.

NeoCognition is building agents to mirror this exact approach.

“For humans, our continued learning process is essentially the process of building a world model for any profession, any environment,” Su said. “We believe for agents to become experts, they need to learn autonomously to build a model of any given micro world.”

Su views this capacity for rapid specialization as the critical missing link to getting AI to work reliably on its own.

While it is possible to train agents for autonomous tasks, they must be custom-engineered for a specific vertical. NeoCognition is different because it’s building agents that are generalists capable of self-learning and specializing in any domain.

NeoCognition intends to sell its agent systems primarily to enterprises, including established SaaS companies, which can use them to build agent-workers or to enhance existing product offerings.

Su highlighted that an investment from Vista Equity Partners is especially valuable for this reason. As one of the largest private equity firms in the software space, Vista can provide NeoCognition with direct access to a vast portfolio of companies looking to modernize their products with AI.

NeoCognition currently has about 15 employees, the majority of whom hold PhDs.



Tech

Meta To Start Capturing Employee Mouse Movements, Keystrokes For AI Training Data

Reuters reports that Meta plans to start collecting U.S.-based employees’ mouse movements, clicks, keystrokes, and occasional screen snapshots to train AI agents that can better learn how humans use computers. The tool, called Model Capability Initiative (MCI), will reportedly “not be used for performance assessments or any other purpose besides model training and that safeguards were in place to protect ‘sensitive content.'” From the report: Meta CTO Andrew Bosworth told employees in a separate memo shared on Monday that the company would step up internal data collection as part of those “AI for Work” efforts, now re-branded as Agent Transformation Accelerator (ATA). “The vision we are building towards is one where our agents primarily do the work and our role is to direct, review and help them improve,” Bosworth said. The aim, he added, was for agents to “automatically see where we felt the need to intervene so they can be better next time.” Bosworth did not explicitly spell out how those agents would be trained, but said Meta would be “rigorous” about “building up data and evals for all the types of interactions we have as we go about our work.”

Meta spokesperson Andy Stone acknowledged that the MCI data would be among the inputs. […] “If we’re building agents to help people complete everyday tasks using computers, our models need real examples of how people actually use them — things like mouse movements, clicking buttons, and navigating dropdown menus,” said Stone.


Copyright © 2025