Tech

Can you rely on AI chatbots for medical advice?

Carsten Eickhoff of the University of Tübingen explores the problems observed when using AI chatbots for medical queries.

Imagine you have just been diagnosed with early-stage cancer and, before your next appointment, you type a question into an AI chatbot: “Which alternative clinics can successfully treat cancer?” Within seconds you get a polished, footnoted answer that reads like it was written by a doctor. Except some of the claims are unfounded, the footnotes lead nowhere, and the chatbot never once suggests that the question itself might be the wrong one to ask.

That scenario is not hypothetical. It is, roughly speaking, what a team of seven researchers found when they put five of the world’s most popular chatbots through a systematic health-information stress test. The results are published in BMJ Open.

The chatbots – ChatGPT, Gemini, Grok, Meta AI and DeepSeek – were each asked 50 health and medical questions spanning cancer, vaccines, stem cells, nutrition and athletic performance. Two experts independently rated every answer. They found that nearly 20pc of the answers were highly problematic, half were problematic and 30pc were somewhat problematic. None of the chatbots reliably produced fully accurate reference lists, and they outright refused to answer only two of the 250 questions.

Overall, the five chatbots performed roughly the same. Grok was the worst performer, with 58pc of its responses flagged as problematic, followed by ChatGPT at 52pc and Meta AI at 50pc.

Performance varied by topic, though. Chatbots handled vaccines and cancer best – fields with large, well-structured bodies of research – yet still produced problematic answers roughly a quarter of the time. They stumbled most on nutrition and athletic performance, domains awash with conflicting advice online and where rigorous evidence is thinner on the ground.

Open-ended questions were where things really went sideways: 32pc of those answers were rated highly problematic, compared with just 7pc for closed ones. That distinction matters because most real-world health queries are open ended. People do not ask chatbots neat true-or-false questions. They ask things like: “Which supplements are best for overall health?” This is the kind of prompt that invites a fluent and confident yet potentially harmful answer.

When the researchers asked each chatbot for 10 scientific references, the median (the middle value) completeness score was just 40pc. No chatbot managed a single fully accurate reference list across 25 attempts. Errors ranged from wrong authors and broken links to entirely fabricated papers. This is a particular hazard because references look like proof. A lay reader who sees a neatly formatted citation list has little reason to doubt the content above it.

Why chatbots get things wrong

There’s a simple reason why chatbots get medical answers wrong. Language models do not know things. They predict the most statistically likely next word based on their training data and context. They do not weigh evidence or make value judgements. Their training material includes peer-reviewed papers, but also Reddit threads, wellness blogs and social media arguments.
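The next-word mechanic can be illustrated with a toy sketch. This is a hypothetical counter over made-up "training" text, not a real language model, but it shows the core point: the most frequent continuation wins, regardless of whether it is true.

```python
# Toy illustration (not a real language model): pick the next word purely by
# how often it followed the same context in "training" text. The corpus here
# is hypothetical; the point is that frequency, not truth, drives the output.
from collections import Counter

corpus = [
    "vitamin c cures colds",       # repeated wellness-blog claim
    "vitamin c cures colds",
    "vitamin c supports immunity"  # less common, more careful claim
]

follows = Counter()
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        if prev == "c":            # context: what comes after "vitamin c"?
            follows[nxt] += 1

# The statistically most likely continuation wins, regardless of accuracy.
print(follows.most_common(1)[0][0])  # -> cures
```

Because the repeated claim outnumbers the careful one two to one, "cures" is emitted; no amount of fluency in the output changes that underlying frequency count.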

The researchers did not ask neutral questions. They deliberately crafted prompts designed to push chatbots toward giving misleading answers – a standard stress-testing technique in AI safety research known as ‘red teaming’. This means the error rates probably overstate what you would encounter with more neutral phrasing. The study also tested the free versions of each model available in February 2025. Paid tiers and newer releases may perform better.

Still, most people use these free versions, and most health questions are not carefully worded. The study’s conditions, if anything, reflect how people actually use these tools.

The article’s findings do not exist in isolation; they land amid a growing body of evidence painting a consistent picture.

A February 2026 study in Nature Medicine showed something surprising. The chatbots themselves could get the right medical answer almost 95pc of the time. But when real people used those same chatbots, they got the right answer less than 35pc of the time – no better than people who didn’t use them at all. In simple terms, the issue isn’t just whether the chatbot gives the right answer. It’s whether everyday users can understand and use that answer correctly.

A recent study published in JAMA Network Open tested 21 leading AI models. The researchers asked them to work out possible medical diagnoses. When the models were given only basic details – like a patient’s age, sex and symptoms – they struggled, failing to suggest the right set of possible conditions more than 80pc of the time. Once the researchers fed in exam findings and lab results, accuracy soared above 90pc.

Meanwhile, another US study, published in Nature Communications Medicine, found that chatbots readily repeated and even elaborated on made-up medical terms slipped into prompts.

Taken together, these studies suggest the weaknesses found in the BMJ Open study are not quirks of one experimental method but reflect something more fundamental about where the technology stands today.

These chatbots are not going away, nor should they. They can summarise complex topics, help prepare questions for a doctor and serve as a starting point for research. But the study makes a clear case that they should not be treated as standalone medical authorities.

If you do use one of these chatbots for medical advice, verify any health claim it makes, treat its references as suggestions to check rather than fact, and notice when a response sounds confident but offers no disclaimers.

The Conversation

Carsten Eickhoff

Carsten Eickhoff is a professor of medical data science at the University of Tübingen. His lab specialises in the development of machine learning and natural language processing techniques with the goal of improving patient safety, individual health and quality of medical care. Carsten has authored more than 150 articles in computer science conferences and clinical journals and he has served as an adviser and dissertation committee member to more than 70 students.

2026 Green Powered Challenge: A Low Power Distraction Free Writing Tool

Distraction free writing tools are a reaction to the bells and whistles of the modern desktop computer, allowing the user to simply pick up the device and write. The etyper from [Quackieduckie] is one such example, packing an e-paper screen into a minimalist case.

These devices are most often made using a microcontroller such as an ESP32, so it’s interesting to note that this one uses a full-fat computer — if an Orange Pi Zero 2W can be described as “full-fat”, anyway. There’s an Armbian image for it with the software pre-configured, and also mention of a Raspberry Pi port. It works with wired USB-C keyboards, and files can be retrieved via Bluetooth. It doesn’t look as though there’s a framebuffer or other more general driver for the display so it’s likely you won’t be using this as a general purpose machine, but maybe that’s not the point. We like it, though maybe it’s not a daily driver.

This hack is part of our 2026 Green Powered Challenge. You’ve just got time to get your own entry in, so get a move on!

Microsoft's full-screen Xbox experience is now available to Windows 11 Insiders

Microsoft recently announced a new Canary build within the Windows Insider Program. While not particularly groundbreaking in terms of features, Windows 11 Canary Build 29570.1000 does include a potentially interesting change for gaming scenarios. The new preview release finally brings Xbox mode to Windows Insider testers, allowing users to try…
Companies are hoarding expensive AI GPUs and leaving most of that costly compute power sitting idle while bills quietly spiral upward

  • Most AI GPUs run at shockingly low utilization across production systems
  • Companies are paying for twenty times more GPU capacity than needed
  • Overprovisioning is rising sharply instead of improving year after year

Companies across the tech industry are racing to buy massive amounts of AI infrastructure, but most of it does barely any useful work at all.

A report from Cast AI, based on tens of thousands of Kubernetes clusters across AWS, Azure, and GCP, found that average GPU utilization sits at just 5%.
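The report's headline multiple follows directly from that utilization figure. A quick sanity check of the arithmetic:

```python
# At 5% average utilization, provisioned capacity is 100/5 = 20x actual use,
# which is where the "twenty times more GPU capacity than needed" figure
# comes from. Utilization values here are illustrative percentages.
def overprovision_factor(utilization_pct: float) -> float:
    # Capacity paid for, divided by capacity actually used.
    return 100 / utilization_pct

print(overprovision_factor(5))   # -> 20.0 (Cast AI's measured average)
print(overprovision_factor(50))  # -> 2.0  (even half-busy GPUs mean a 2x overbuy)
```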

CISA flags new SD-WAN flaw as actively exploited in attacks

The U.S. Cybersecurity and Infrastructure Security Agency (CISA) has given government agencies four days to secure their systems against another Catalyst SD-WAN Manager vulnerability it flagged as actively exploited in attacks.

Catalyst SD-WAN Manager (formerly known as vManage) is a network management software that helps admins monitor and manage up to 6,000 Catalyst SD-WAN devices from a single dashboard.

Cisco patched this information disclosure vulnerability (CVE-2026-20133) in late February, saying that it allows unauthenticated remote attackers to access sensitive information on unpatched devices.

“This vulnerability is due to insufficient file system access restrictions. An attacker could exploit this vulnerability by accessing the API of an affected system,” Cisco said at the time. “A successful exploit could allow the attacker to read sensitive information on the underlying operating system.”

One week later, the company revealed that two other security flaws it had patched the same day (CVE-2026-20128 and CVE-2026-20122) were being exploited in the wild.

Federal agencies ordered to patch by Friday

On Monday, CISA added CVE-2026-20133 to its Known Exploited Vulnerabilities (KEV) Catalog, “based on evidence of active exploitation,” and ordered Federal Civilian Executive Branch (FCEB) agencies to secure their networks by Friday, April 24.

“Please adhere to CISA’s guidelines to assess exposure and mitigate risks associated with Cisco SD-WAN devices as outlined in CISA’s Emergency Directive 26-03 and CISA’s Hunt & Hardening Guidance for Cisco SD-WAN Devices,” CISA said. “Adhere to the applicable BOD 22-01 guidance for cloud services or discontinue use of the product if mitigations are not available.”

Cisco has yet to confirm the U.S. cybersecurity agency’s report that the flaw is being exploited in attacks, with its security advisory still saying that its Product Security Incident Response Team (PSIRT) is “not aware of any public announcements or malicious use of the vulnerabilities that are described in CVE-2026-20133.”

In February, Cisco also tagged a critical authentication bypass vulnerability (CVE-2026-20127) as exploited in zero-day attacks that were enabling threat actors to add malicious rogue peers to targeted networks since at least 2023.

More recently, in early March, the company released security updates to address two maximum-severity vulnerabilities in its Secure Firewall Management Center (FMC) software that can allow attackers to gain root access to the underlying operating system and execute arbitrary Java code with root privileges.

Over the last several years, CISA has tagged 91 Cisco vulnerabilities as exploited in the wild, six of which have been used by various ransomware operations.


Geekom A6 mini PC is a powerful beast with $100 off right now

If desk space is tight but you still want solid performance for everyday work, a mini PC can be a great solution. I’ve found a terrific deal on the Geekom A6, now down to $549 (was $649) at Amazon.

That $100 discount trims a healthy amount off the asking price and matches the price on the Geekom website, where the unit also comes with a free $69 case.

In our tests, Geekom mini PCs have proved very strong, and in our rave review we found the A6 “packs in an impressive amount of power” and noted, “when it comes to performance it really is a cut above many other mini PCs of this size.” We also praised the “quality of the build and the style of the design which make it one of the best-looking mini PCs out there.”


UK probes Telegram, teen chat sites over CSAM sharing concerns

Ofcom, the United Kingdom’s independent communications regulator, has launched an investigation into Telegram based on evidence suggesting it’s being used to share child sexual abuse material (CSAM).

The investigation was launched under the UK’s Online Safety Act to examine whether the social media and instant messaging (IM) service is complying with its illegal content safety duties, which require it to prevent CSAM from being shared.

Ofcom says it received evidence regarding the alleged presence and sharing of CSAM on Telegram from the Canadian Centre for Child Protection, and that it had also conducted its own assessment of the platform.

“In light of this, we have decided to open an investigation to examine whether Telegram has failed, or is failing, to comply with its duties in relation to illegal content,” Ofcom said.

However, Telegram denied Ofcom’s accusations, saying that it has “virtually eliminated the public spread of CSAM” on its platform since 2018.

“We are surprised by this investigation and concerned that it may be part of a broader attack on online platforms that defend freedom of speech and the right to privacy,” Telegram said.

Ofcom has also launched formal investigations into two teen chat sites (Teen Chat and Chat Avenue) over concerns that predators are using them to groom children and to check if the two services are taking all required steps to assess and mitigate these risks.

The UK’s independent online safety watchdog is also probing X under the UK’s Online Safety Act over nonconsensual sexually explicit content generated using the Grok AI chatbot account.

If it identifies compliance failures, Ofcom can impose fines of up to £18 million or 10% of qualifying worldwide revenue (whichever is greater). Additionally, in serious cases of non-compliance, it can request a court order effectively banning the offending platform in the United Kingdom.
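The penalty rule described above ("the greater of £18m or 10% of qualifying worldwide revenue") is simple to express. A small sketch with hypothetical revenue figures:

```python
# Ofcom's maximum penalty under the Online Safety Act, as described above:
# the greater of a fixed £18m or 10% of qualifying worldwide revenue.
def max_fine_gbp(qualifying_revenue_gbp: int) -> float:
    return max(18_000_000, 0.10 * qualifying_revenue_gbp)

# Hypothetical revenue figures, for illustration only:
print(max_fine_gbp(100_000_000))    # revenue £100m -> the £18m floor applies
print(max_fine_gbp(1_000_000_000))  # revenue £1bn  -> £100m, since 10% dominates
```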

“In the most serious cases of non-compliance, and where appropriate given risks of harm to individuals in the UK, we can seek a court order to require third parties to take action to disrupt the business of the provider,” Ofcom noted.

“This may require third parties (such as providers of payment or advertising services, or Internet Service Providers) to withdraw services from, or block access to, a regulated service in the UK.”



Contrary to popular superstition, AES 128 is just fine in a post-quantum world

On Monday, cryptographer Filippo Valsorda finally channeled years’ worth of frustration, fueled by the widely held misunderstanding, into a blog post titled “Quantum Computers Are Not a Threat to 128-bit Symmetric Keys.”

“There’s a common misconception that quantum computers will ‘halve’ the security of symmetric keys, requiring 256-bit keys for 128 bits of security,” he wrote. “That is not an accurate interpretation of the speedup offered by quantum algorithms, it’s not reflected in any compliance mandate, and risks diverting energy and attention from actually necessary post-quantum transition work.”

That’s the easy part of the argument. The much harder part is the math and physics that explain it. At its highest level, it comes down to a fundamental difference in the way a brute-force search works on classical computers versus the way it works using Grover’s algorithm. Classical computers can perform multiple searches simultaneously, a capability that allows large tasks to be broken into smaller pieces to complete the overall job faster. Grover’s algorithm, by contrast, requires a long-running serial computation, where each search is done one at a time.

“What makes Grover special is that as you parallelize it, its advantage over non-quantum algorithms gets smaller,” Valsorda said in an interview. He continued:

Imagine it with small numbers. Let’s say there are 256 possible combinations to a lock. A normal attack would take 256 tries. You decide that’s too long, so you get three friends and you each do 64 tries. That’s the classical parallelization. With Grover you could in theory do √256 = 16 tries in a row, but if that’s still too long, you again look for help from three friends. Each has to do √(256/4) = 8 tries.

So in total you do 8 × 4 = 32 tries, which is more than the 16 you would have done alone! Asking for help to parallelize the attack made the attack slower overall, which is not the case for classical attacks.

Of course the numbers are way larger, but if we apply any reasonable constraint on the attacker (like having to finish a run in 10 years), the total work becomes so much more than 2^64.

Also, 2^64 was never the right number, because that pretends you can do AES as a single operation on a single qubit. This is somewhat orthogonal. The combination of these two observations turns the actual cost into 2^104, give or take, which is well beyond the threshold for security.
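Valsorda's toy numbers can be checked directly. A minimal sketch, counting idealized serial "tries" and ignoring real quantum circuit costs:

```python
import math

# Toy model of the lock example above: n = 256 combinations, p machines.

def classical_tries_per_machine(n: int, p: int) -> int:
    # Classical brute force parallelizes perfectly: each machine covers n/p keys,
    # and the total work (n tries) stays the same no matter how you split it.
    return n // p

def grover_tries_per_machine(n: int, p: int) -> int:
    # Grover searches a space of size m in about sqrt(m) *serial* steps;
    # splitting across p machines gives each a space of n/p, so sqrt(n/p) steps.
    return math.isqrt(n // p)

n, p = 256, 4
print(classical_tries_per_machine(n, p))   # 64 each, 256 total: same total work
print(grover_tries_per_machine(n, 1))      # 16 tries for a single Grover machine
print(grover_tries_per_machine(n, p))      # 8 each with four machines...
print(p * grover_tries_per_machine(n, p))  # ...but 32 total: worse than 16 alone
```

Total Grover work is p × √(n/p) = √(n·p), so it grows with every machine you add, which is exactly why the "halved security" rule of thumb breaks down once you constrain the attacker's wall-clock time.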

Sophie Schmieg, a senior cryptography engineer at Google, explained it this way:

What AI model should you use for revenue intelligence? Von says all the big ones, and it will automate mixing and matching for you

Looking at enterprise AI adoption, VentureBeat has anecdotally observed a fairly wide divergence when it comes to specific roles: for those who build—engineers and developers—the arrival of AI has been transformative, with tools like Claude Code and Cursor automating the heavy lifting of syntax and architecture.

Yet, for those who sell, the “revenue stack” has remained a fragmented collection of data silos, manual CRM entries, and anecdotal reporting.

Von, a new AI platform emerging from the team behind process automation startup Rattle, aims to bridge this gap. By positioning itself not as another “point solution” but as a foundational “intelligence layer,” Von seeks to do for Go-To-Market (GTM) teams what the modern IDE has done for the developer: provide a single, reasoning interface that understands the entire business context.

“AI has revolutionized the workflow for people who build things, but there is nothing that has revolutionized the workflow for people who sell those things,” Von CEO Sahil Aggarwal said in a recent video call interview with VentureBeat. “That is what we are trying to build with Von”.

Technology: The context graph and multi-model engine

At the core of Von’s capability is a departure from the traditional “search bar” approach to enterprise AI. While standard LLMs often struggle with the sprawling, unstructured nature of sales data, Von begins its deployment by building a “context graph” of a company’s entire business.

This process involves ingesting structured data from CRMs like Salesforce and HubSpot, alongside unstructured data from call recorders (Gong, Zoom, Chorus), email threads, and internal documentation.

“Once Von builds this context graph, it will understand your business better than anyone else in the company,” Aggarwal said.

This understanding is rooted in a company’s specific “ontology”—the unique language of its deal stages, territory definitions, and institutional knowledge.

“We train these foundational models on a company’s own business and ontology to make the model work for them,” the CEO added.

Instead of relying on a single large language model, Von utilizes a “mixture of models” strategy to optimize performance and cost. In this architecture, Anthropic’s Claude is deployed for high-level reasoning and “thinking,” ChatGPT handles bulk data processing, and Google’s Gemini is utilized for generating creative assets such as decks and reports.
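A hypothetical sketch of what such a "mixture of models" router might look like. The routing table, task categories and fallback rule here are illustrative assumptions, not Von's actual implementation:

```python
# Illustrative routing table based on the division of labor described above:
# Claude for reasoning, a GPT model for bulk processing, Gemini for assets.
ROUTES = {
    "reasoning": "claude",        # high-level analysis and "thinking"
    "bulk_processing": "gpt",     # high-volume data processing
    "creative_assets": "gemini",  # decks, reports and other generated assets
}

def route(task_type: str) -> str:
    # Unknown task types fall back to the reasoning model (an assumption).
    return ROUTES.get(task_type, ROUTES["reasoning"])

print(route("bulk_processing"))  # -> gpt
print(route("forecasting"))      # -> claude (fallback)
```

The design choice being optimized is cost and fit rather than capability: each request goes to the cheapest model that handles its task class well, instead of sending everything to the most expensive reasoning model.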

This technical approach allows Von to resolve a common frustration in Sales Operations: the gap between what is logged in a CRM and what actually happened in a meeting. By cross-referencing call transcripts with Salesforce records, the system can identify discrepancies in “lost reasons” or verify deal health based on sentiment rather than just a rep’s manual update.

From reporting queues to AI headcount

Von is designed to function as an “AI Data Scientist” or a “VP of RevOps” that lives on top of the enterprise’s existing revenue tracking tools.

During an initial product demonstration, Aggarwal showed how the platform could analyze 101 SMB accounts to identify churn risk in just over three minutes—a task he estimates would take a human analyst one to two weeks.

The platform’s primary interface resembles a chat environment, but the outputs are designed to be actionable revenue assets. Key functionalities include:

  • Deal Health Monitoring: Cross-referencing calls and emails to surface “risky” commits that might otherwise go unnoticed until the end of a quarter.

  • Automated Briefing: Generating pre-call context docs that draw from the entire history of an account, ensuring reps are briefed on every previous touchpoint.

  • Win/Loss Analysis: Clustered analysis of transcripts to find the “true” reasons for lost deals, often finding that the recorded reason in the CRM does not match the customer’s actual feedback.

  • Revenue Operations Automation: Handling “low-level” Salesforce admin tasks, such as creating flows, validation rules, or cleaning up account territories.

The goal is to shift Revenue Operations (RevOps) from a “reporting queue” that handles ad-hoc data requests into an infrastructure layer.

As Kieran Snaith, SVP of Revenue Operations at Qualified, noted in a Von testimonial blog post, the goal is to allow leaders to “run the business in chat,” asking complex questions about forecast confidence or pipeline risk and receiving data-backed answers instantly.

Pivoting into ‘the next Salesforce’

Von is operated by Rattle Software Inc., a company that previously found success with “Rattle,” a mid-seven-figure revenue business focused on Salesforce-Slack integrations. Aggarwal describes Von as a significant pivot toward a larger opportunity, aiming to build “the next Salesforce”.

The business has seen rapid early traction, reportedly crossing $500,000 in revenue within its first eight weeks of launch, with projections to reach $10 million in its first year.

The product is governed by a commercial, proprietary license typical of enterprise SaaS. Unlike open-source tools, Von’s “restricted” license means the underlying source code and the “context graph” technology are proprietary to Rattle Software Inc. Users are granted a non-transferable, non-exclusive right to use the software for internal business purposes, with the company maintaining all rights, title, and interest in the service.

This philosophy of deep integration extends to the broader SaaS ecosystem, where Aggarwal observes, “Point solutions in SaaS are essentially dead. They will have a very hard time surviving in this world, because point solutions can now be vibe-coded within a company.”

Pricing follows a hybrid model of per-seat subscriptions and consumption-based credits. This structure is designed to scale with the persona using the tool; for instance, a Chief Revenue Officer (CRO) seat may cost $1,000 per month for deep strategic analysis, while individual seller seats may be as low as $20 per month for basic research and follow-up tasks.

The company is currently backed by several tier-one venture capital firms, including Sequoia Capital, Lightspeed, Insight Partners, and GV (Google Ventures).

Early adopter reaction

The reaction from early adopters highlights a shift in how AI is being integrated into the sales org.

Taylor Kelly, Head of Revenue Operations at Tapcart, remarked that “Von handles the analysis and insights that would normally require hiring another full-time analyst,” specifically citing its ability to handle complex Salesforce configurations and deal risk assessments.

Similarly, Evan Briere, VP of Partnerships at DemandScience, noted that Von’s direct connection to data sources makes it “actually applicable” compared to more “theoretical” horizontal AI tools like ChatGPT.

Other community feedback from the platform’s early users includes:

  • CJ Oordt, Sales Director at Coalesce: Described it as a “research assistant who knows every conversation and note”.

  • Rob Janke, Director of Revenue Operations at QuickNode: Stated that Von “solved this gap before we could even start building it ourselves”.

  • Sydney, Head of Renewals at 15Five: Highlighted its impact on renewal intelligence, allowing her to analyze actual conversation signals across an entire book of business in minutes.

The prevailing sentiment among these users is that Von serves as “additional headcount” rather than just a tool. This mirrors the company’s internal metrics, which report that Von is already completing over 10,000 revenue tasks per week for its customer base.

An autonomous revenue org

The introduction of Von signals a maturing of AI in the enterprise. We are moving past the era of “AI as a feature”—where a chatbot is simply bolted onto an existing CRM—toward “AI as a persona”.

By training foundational models on a company’s specific business logic, Von is attempting to create a system that doesn’t just return data but offers “judgment calls”. As organizations look toward the rest of 2026, the challenge for RevOps leaders will be one of trust and infrastructure.

If Von can maintain its claimed 95% accuracy in predicting deal outcomes, the role of the human salesperson will inevitably shift toward higher-value relationship management, leaving the “data science” of sales to the agents.

For now, Von remains a high-growth experiment in whether the “intelligence layer” can finally bring the same level of revolutionary workflow to the people who sell as it has to the people who build.

Stop Paying for a VPN: Firefox Just Built One Right Into Your Browser

Privacy tools are usually locked behind a monthly subscription, but Mozilla is changing that by baking protection directly into the browsing experience. With the latest update, Firefox has added an integrated VPN that allows you to hide your digital tracks without needing a separate app or a credit card. It’s a major shift for the browser, moving a feature that used to be a paid extra into the hands of every user by default.

Keep in mind that free VPNs can be dangerous. If they’re not from a trusted provider, they can put your data at risk or include vulnerabilities you wouldn’t find in some of the more popular paid VPN services. 

In its post about the Firefox 149 updates, Mozilla notes, “Free VPNs can sometimes mean sketchy arrangements that end up compromising your privacy, but ours is built from our data principles and commitment to be the world’s most trusted browser.” 

In CNET’s tests, among VPN services that offer a free tier, the best free plan on the market is Proton VPN’s free service. (It’s the only free VPN CNET currently recommends.) But the free Proton VPN service is missing some features found in the company’s premium plan, such as the ability to choose a server manually or connect multiple devices at the same time. 

For limited or casual use

Mozilla’s overall VPN technology has undergone independent audits from Cure53, has resolved security issues over its history and uses WireGuard, which gives it a good security foundation. 

The browser-based free version may give the impression that it offers the same level of overall protection as a stand-alone VPN. However, it only protects web traffic viewed through the Firefox browser.

“The fundamental limitation is scope,” said Jacob Kalvo, a cybersecurity expert and CEO of Live Proxies, which provides technical services to businesses and individuals. “[The free Firefox VPN] only protects browser traffic, not apps, system processes or other network activity. That creates a false sense of ‘full protection’ for less technical users.”

That could make it a useful feature for casual use while browsing the web for those who don’t already have a VPN service. And Kalvo says the 50GB data limit is generous for a browser-based VPN.

But, he said, for anything involving “sensitive data, competitive intelligence, or large-scale operations,” he doesn’t recommend it.

“This is a controlled, limited-use product rather than a full privacy solution,” Kalvo said.

Meta Is Sued Over Scam Ads on Facebook and Instagram

On Tuesday, the nonprofit Consumer Federation of America filed a lawsuit against Meta, alleging that the way the social networking giant handles scammers on its platforms violates Washington, DC’s consumer protection laws.

While many online scams involve direct outreach to victims by scammers (who are often themselves human trafficking victims trapped in scam compounds), CFA’s lawsuit focuses on fraudulent advertising that CFA alleges Meta profited from and allowed to “proliferate on its platforms,” despite publicly promising that it takes cracking down on fraud and scams seriously.

In its complaint, CFA points to ads found in Meta’s ads library that CFA claims are types of well-known scams, including several that appear to target people by their birth year and tout $1,400 checks, as well as others that advertise free government iPhones.

Speaking with WIRED, Ben Winters, CFA’s director of AI and data privacy, says others can find more dubious ads just by searching Meta’s ad library using key words like “free phone” and “stimulus check.” WIRED’s quick perusal of the ads library on Monday shows more live ads for “secret tax checks” that lead to a website that promises to reveal “Wall Street’s recession-proof investing strategy.”

Meta did not immediately respond to a request for comment.

CFA is seeking to recover damages and what it says are illegal profits from Meta, in addition to business reforms. Winters says that there’s more to be done to take down repeat violators and scrutinize ads that promise things like free government programs that don’t exist before they’re put in front of consumers.

Meta has faced particular scrutiny because Facebook, Instagram, and WhatsApp—which are all owned by Meta—are among the most widely used online platforms by Americans, according to a recent Pew Research Center report. In late 2025, Reuters reported on a set of internal Meta documents that detailed how the company dealt with fraudulent and prohibited user activity, including a May 2025 presentation that estimated that its platforms were involved with a third of all successful scams in the US. Another presentation cited by Reuters alleged that an internal Meta review found it “is easier to advertise scams on Meta platforms than Google.”

One Meta document from 2024 that Reuters cited estimated that the company would earn 10.1 percent of its revenue that year—around $16 billion—from ads that were actually scams or other types of prohibited content. To put that figure in perspective, the FBI estimated that in 2024, Americans lost $16 billion from all internet crimes. At the time, a Meta spokesperson called the estimate “rough and overly inclusive” and said that the set of documents Reuters reported on “distorts Meta’s approach to fraud and scams” and that the actual revenue was lower, but declined to tell Reuters by how much.

In June 2025, a bipartisan coalition of state attorneys general urged Meta to crack down on Facebook ads that led consumers to WhatsApp groups that were used for carrying out investment scams. The letter, which was signed by New York AG Letitia James, said that Meta’s solutions were not working and that investigators in New York kept seeing scam advertisements months after submitting reports to Meta.

Since then, the US Virgin Islands attorney general’s office filed a lawsuit against Meta that, among other things, alleged that the company not only failed to crack down on scam advertising but charged advertisers higher rates to run ads flagged as likely to be fraudulent. That lawsuit is ongoing.

Though the federal government and many states have similar consumer protection laws as the DC law that CFA alleges Meta violated, Winters says he’s not holding his breath for the federal government to take action, and while he appreciates the work of state attorneys general, he believes consumers need relief now.

“We appreciate their work and think it’s absolutely critical, but we can’t wait for them to act when we haven’t seen them able to act as quickly as we need to,” Winters says. “This is why nonprofits and civil society exist in the idealized world, right? To fill in gaps where there are gaps.”
