Connect with us

Tech

OpenAI and Google employees rush to Anthropic’s defense in DOD lawsuit

Published

on

More than 30 OpenAI and Google DeepMind employees filed a statement Monday supporting Anthropic’s lawsuit against the U.S. Defense Department after the federal agency labeled the AI firm a supply-chain risk, according to court filings.

“The government’s designation of Anthropic as a supply chain risk was an improper and arbitrary use of power that has serious ramifications for our industry,” reads the brief, whose signatories include Google DeepMind chief scientist Jeff Dean.

Late last week, the Pentagon labeled Anthropic a supply-chain risk — usually reserved for foreign adversaries — after the AI firm refused to allow the Department of Defense (DOD) to use its technology for mass surveillance of Americans or autonomously firing weapons. The DOD had argued that it should be able to use AI for any “lawful” purpose and not be constrained by a private contractor.

The amicus brief in support of Anthropic showed up on the docket a few hours after the Claude maker filed two lawsuits against the DOD and other federal agencies. Wired was first to report the news.

Advertisement

In the court filing, the Google and OpenAI employees make the point that if the Pentagon was “no longer satisfied with the agreed-upon terms of its contract with Anthropic,” the agency could have “simply canceled the contract and purchased the services of another leading AI company.”

The DOD did, in fact, sign a deal with OpenAI within moments of designating Anthropic a supply-chain risk — a move many of the ChatGPT maker’s employees protested.

“If allowed to proceed, this effort to punish one of the leading U.S. AI companies will undoubtedly have consequences for the United States’ industrial and scientific competitiveness in the field of artificial intelligence and beyond,” the brief reads. “And it will chill open deliberation in our field about the risks and benefits of today’s AI systems.”

Techcrunch event

Advertisement

San Francisco, CA
|
October 13-15, 2026

The filing also affirms that Anthropic’s stated red lines are legitimate concerns warranting strong guardrails. Without public law to govern AI use, it argues, the contractual and technical restrictions developers impose on their systems are a critical safeguard against catastrophic misuse.

Advertisement

Many of the employees who signed the statement also signed open letters over the last couple of weeks urging the DOD to withdraw the label and calling on the leaders of their companies to support Anthropic and refuse unilateral use of their AI systems.

Source link

Continue Reading
Click to comment

Leave a Reply

Your email address will not be published. Required fields are marked *

Tech

Real Consequences: Trump’s Bullshit Claim About Tylenol Is Seeing Real World Results

Published

on

from the fever-dream dept

There’s this insane subset of people who, when they talk about Donald Trump, I’ll never understand. It’s the ones who claim that taking what Donald Trump says seriously is a mistake that most people are unlikely to make. It’s also expressed by the crowd that claims something to the effect of: you shouldn’t take Trump literally, but you should take him seriously.

That this is said about the most powerful single individual on the planet is bonkers. This is typically how I’ve talked about my own kids when they were toddlers. Inevitably, one of my kids would be trying to say something entirely innocuous, only to have what came out of their mouth be some horrible word or swear or something. And I would hand-wave that away. C’mon, I’d tell people, you know that’s not what he meant to say.

Donald Trump is, unfortunately, the President of the United States of America. When he speaks, people listen. And a percentage of those listening will take him both literally and seriously. And when Donald Trump told American women last year to not take Tylenol, or give it to their young children, because it would give their kids autism, well, they listened.

Researchers found that in the wake of that batshit crazy announcement, use of Tylenol and its generic equivalents dropped significantly in use in emergency rooms and prescriptions written for children.

Advertisement

For nearly three months after that, new research found, Tylenol orders for pregnant women showing up in emergency rooms dropped and prescriptions of the generic drug for children rose. This happened despite sharp criticism of the president’s message from doctor groups saying that the drug, leucovorin, shouldn’t be broadly used for autism and Tylenol is safe during pregnancy.

“It just shows that in our country right now, health care has been politicized in a way that political messages are driving and impacting care — and not always for good,” said Dr. Susan Sirota, a pediatrician in Highland Park, Illinois, who wasn’t involved with the research.

The research suggested something like a 10% drop in measurable use of acetaminophen or paracetamol in the wake of Trump’s announcement. That doesn’t tell the whole story, of course, since so much of the use of Tylenol occurs through over the counter purchases at drug stores and the like. Based on market research, however, Tylenol specifically saw a nearly identical 11% or so drop in OTC sales as well back in November.

But that isn’t all. With all of this attention on a common drug supposedly giving children autism, parental anxiety about the condition has shot up as well. And, as a result, parents are turning toward experimental drugs for that that defy expert recommendations. That’s where leucovorin comes in.

Leucovorin is a derivative of folic acid used for, among other things, reducing the toxic side effects of certain chemotherapy drugs and treating a rare blood disorder. It has also been studied for a neurological condition known as cerebral folate deficiency and for a subset of autistic children, according to the American Academy of Pediatrics.

The pediatrics group doesn’t recommend routine use of the drug for autistic children. Early, small-scale studies have explored its use, “and some findings suggest potential benefit in carefully selected cases,” the group said.

Advertisement

Still, after the federal announcement about the drug, Sirota said some families in her practice asked about getting it for their autistic children. She educated them about the evidence, told them about the potential for side effects and didn’t prescribe it. Potential side effects include irritability, nausea and vomiting and skin issues like dermatitis.

This may sound melodramatic, but there is real psychological harm being done to those just starting families in this country. For most parents, their children become their entire world. Their raison d’etre. And if you scare the shit out of them about Tylenol giving their kids a disorder, they’re going to stop taking the common drug and turn to any hair-brained lifeline they can find to try to keep their children from that disorder.

Does leucovorin do anything at all for anyone with autism? I don’t have the slightest clue. And neither does the Trump administration. I’m quite confident that there is no current reason to see Tylenol as a danger to the general populace, however, and that didn’t stop Trump from going on television and playing doctor.

“It feels like a pattern with our government, right? They keep building on these houses of cards that just fall down,” she said. “This politicizing of medicine just in general, and moving away from science, has been so challenging.”

The consequences of this sort of thing are going to span decades. Let that sink in.

Advertisement

Filed Under: donald trump, health, rfk jr., tylenol

Companies: kenvue

Source link

Advertisement
Continue Reading

Tech

Satya Nadella denies Xbox death rumors, insists Microsoft is "long on gaming"

Published

on


Speaking with newly appointed Microsoft Gaming head Asha Sharma, Nadella dismissed speculation that the company might abandon gaming in favor of Windows, Azure, and AI. He described gaming as one of Microsoft’s “main identities” over the past couple of decades and said it will remain a core part of the…
Read Entire Article
Source link

Continue Reading

Tech

OpenAI and Google Workers File Amicus Brief in Support of Anthropic Against the US Government

Published

on

More than 30 employees from OpenAI and Google, including Google DeepMind chief scientist Jeff Dean, filed an amicus brief on Monday in support of Anthropic in its legal fight against the US government.

“If allowed to proceed, this effort to punish one of the leading US AI companies will undoubtedly have consequences for the United States’ industrial and scientific competitiveness in the field of artificial intelligence and beyond,” the employees wrote.

The brief was filed just hours after Anthropic sued the Department of Defense and other federal agencies over the Pentagon’s decision to designate the company a “supply-chain risk.” The sanction, which severely limits Anthropic’s ability to work with military contractors, went into effect after Anthropic’s negotiations with the Pentagon fell apart. The AI startup is seeking a temporary restraining order to continue its work with military partners as the lawsuit progresses. This brief specifically supports this motion.

Signatories of the brief include Google DeepMind researchers Zhengdong Wang, Alexander Matt Turner, and Noah Siegel, as well as OpenAI researchers Gabriel Wu, Pamela Mishkin, and Roman Novak, among others. Amicus briefs are legal filings submitted by parties that are not directly involved in a court case but that have expertise relevant to it. The employees signed in a personal capacity and don’t represent the views of their companies, according to the brief.

Advertisement

OpenAI and Google did not immediately respond to WIRED’s request for comment.

The amicus brief says that the Pentagon’s decision to blacklist Anthropic “introduces an unpredictability in [their] industry that undermines American innovation and competitiveness” and “chills professional debate on the benefits and risks of frontier AI systems.” It notes that the Pentagon could have simply dropped Anthropic’s contract if it no longer wished to be bound by its terms.

The brief also says that the red lines Anthropic claims it requested, including that its AI wouldn’t be used for mass domestic surveillance and the development of autonomous lethal weapons, are legitimate concerns and require sufficient guardrails. “In the absence of public law, the contractual and technological requirements that AI developers impose on the use of their systems represent a vital safeguard against their catastrophic misuse,” the brief says.

Several other AI leaders have also publicly questioned the Pentagon’s decision to label Anthropic a supply-chain risk. OpenAI CEO Sam Altman said in a post on social media that “enforcing the SCR [supply-chain risk] designation on Anthropic would be very bad for our industry and our country.” He added that “this is a very bad decision from the DoW and I hope they reverse it.” As Anthropic’s relationship with the Pentagon soured, OpenAI quickly signed its own contract with the US military, a decision some people criticized as opportunistic.

Advertisement

Source link

Continue Reading

Tech

Worried about Australian age verification measures? I’ve found the cheapest way to secure your personal data for years to come

Published

on

Australia’s new age verification legislation has left Australians raising their eyebrows.

Similar to enforcements in the UK and US, citizens will be required to verify that they are over 18 to access adult content. But, with this comes concerns that people’s most sensitive data will be put at risk by communications with third parties.

Advertisement

Source link

Continue Reading

Tech

New York lawmakers move to block AI chatbots from giving legal or medical advice

Published

on


  • A proposed New York bill would ban AI chatbots from providing legal or medical advice
  • The legislation would allow users to sue companies if their chatbots impersonate licensed professionals
  • Lawmakers say the measure is meant to protect the public as AI tools become more widely used

AI chatbots have spent the past few years answering nearly every kind of question imaginable, but New York lawmakers are preparing to draw a firm line around at least a couple of categories of conversation. A bill advancing through the state legislature would prohibit AI chatbots from providing legal or medical advice and would allow users to sue the companies behind those systems if they cross that boundary.

The proposal, Senate Bill S7263, would apply to AI chatbots that mimic or impersonate licensed professionals such as lawyers or physicians. The heart of the bill applies the same principle about how individuals cannot practice law or medicine without the appropriate licenses to AI. That rule is meant to ensure that people receive guidance from trained professionals who can be held accountable for their advice.

Source link

Advertisement
Continue Reading

Tech

The open-source AI red-teaming tool used by Fortune 500 companies is now part of OpenAI

Published

on

The acquisition of Promptfoo, which counts more than 125,000 developers and 30-plus Fortune 500 companies among its users, is OpenAI’s most direct move yet into AI application security. Its technology will go into Frontier, the company’s enterprise agent platform launched just a month ago.

When Ian Webster was leading the LLM engineering team at Discord, shipping AI products to 200 million users, he noticed something the security industry had not yet caught up with: the tools his team relied on to keep those products safe were built for a different era. Traditional vulnerability scanners could not reason about prompt injection. Static analysis had nothing to say about a model that promised a user something it had no authority to deliver. The testing infrastructure for AI applications, he concluded, simply did not exist.

So he built it himself, nights and weekends, as an open-source project. That project became Promptfoo. On Monday, OpenAI announced it is acquiring the company.

The deal, terms of which were not disclosed, will see Promptfoo’s technology integrated into OpenAI Frontier, the enterprise agent management platform that OpenAI launched in early February. In a post on X, OpenAI said the acquisition would “strengthen agentic security testing and evaluation capabilities” within Frontier, and pledged that Promptfoo would remain open source under its current licence, with continued support for existing customers.

Advertisement

The 💜 of EU tech

The latest rumblings from the EU tech scene, a story from our wise ol’ founder Boris, and some questionable AI art. It’s free, every week, in your inbox. Sign up now!

Promptfoo, which Webster co-founded with Michael D’Angelo – a former VP of engineering and head of AI at identity verification firm Smile Identity – launched commercially in 2024 with $5 million in seed funding from Andreessen Horowitz. The seed round attracted backing from a notable roster of angels, including Shopify CEO Tobi Lütke, Discord CTO Stanislav Vishnevskiy, and Okta co-founder Frederic Kerrest. By July 2025, the company had raised an $18.4 million Series A led by Insight Partners, with a16z again participating. Total funding ahead of the acquisition was approximately $23.4 million.

At the time of the Series A, Promptfoo said it had more than 125,000 developers using its open-source framework and over 30 Fortune 500 companies running its enterprise platform in production. Customers span retail, telecoms, financial services, and media, sectors with acute exposure to the regulatory and reputational risks of AI failures.

Advertisement

The product works by acting as an automated adversary. Rather than relying on manual penetration testing, Promptfoo’s platform talks directly to a customer’s AI application, through its chat interface or APIs, using specialised models and agents that behave like users, or specifically like attackers. When an attack succeeds, the platform records it, analyses why it worked, and iterates through an agentic reasoning loop to refine the test and expose deeper vulnerabilities. Risks the platform targets include prompt injection, data leakage, jailbreaks, and what Webster has called “application-level” failures: AI systems that promise users things they cannot deliver, or that reveal database contents to a customer service query, or that stray into political opinion in a homework tutor.

It is precisely those application-level risks that make Promptfoo’s acquisition a strategic fit for OpenAI’s current direction. Frontier, which OpenAI has described as an attempt to create “AI coworkers” for the enterprise, is designed to give AI agents access to production systems, CRM platforms, data warehouses, internal ticketing tools, and to execute workflows with real-world consequences. Agents operating at that level of access create a correspondingly enlarged attack surface. Early customers named by OpenAI for Frontier include Uber, State Farm, Intuit, and Thermo Fisher Scientific: organisations for whom a misbehaving agent is not an inconvenience but a liability.

OpenAI has been building out Frontier at speed. Since launching the platform on 5 February, the company has announced Frontier Alliances with Accenture, Boston Consulting Group, Capgemini, and McKinsey, enlisting the consulting firms to drive enterprise deployment. Separately, the company has been rolling out Codex Security, an AI-powered application security agent for software repositories, formerly known internally as Aardvark, which entered wider availability on the same day as the Promptfoo acquisition announcement.

Promptfoo is not the only AI security product entering broader availability this month. Anthropic launched Claude Code Security in February, targeting similar vulnerability scanning use cases. The convergence suggests that as AI agents move into production at scale, the question of who secures them, and how,  is fast becoming one of the defining commercial battlegrounds in enterprise AI.

Advertisement

For Promptfoo’s open-source community, OpenAI’s commitment to keeping the project open source under its current licence will be the line to watch. The project has over 248 contributors, and its adoption by developers at companies across the AI industry – including, according to Promptfoo’s own website, teams at Anthropic and Google – was built on the premise that the tool belonged to the developer community rather than to any one vendor. That promise now sits alongside a commercial integration into one of the most powerful enterprise AI platforms in the market.

Source link

Advertisement
Continue Reading

Tech

Microsoft Teams phishing targets employees with A0Backdoor malware

Published

on

Microsoft Teams phishing targets employees with backdoors

Hackers contacted employees at financial and healthcare organizations over Microsoft Teams to trick them into granting remote access through Quick Assist and deploy a new piece of malware called A0Backdoor.

The attacker relies on social engineering to gain the employee’s trust by first flooding their inbox with spam and then contacting them over Teams, pretending to be the company’s IT staff, offering assistance with the unwanted messages.

To obtain access to the target machine, the threat actor instructs the user to start a Quick Assist remote session, which is used to deploy a malicious toolset that includes digitally signed MSI installers hosted in a personal Microsoft cloud storage account.

According to researchers at cybersecurity company BlueVoyant, the malicious MSI files masquerade as Microsoft Teams components and the CrossDeviceService, a legitimate Windows tool used by the Phone Link app.

Advertisement
Commandline argument for CrossDeviceService.exe
Command line argument to install the malicious CrossDeviceService.exe
Source: BlueVoyant

Using the DLL sideloading technique with legitimate Microsoft binaries, the attacker deploys a malicious library (hostfxr.dll) that contains compressed or encrypted data. Once loaded in memory, the library decrypts the data into shellcode and transfers execution to it.

The researchers say that the malicious library also uses the CreateThread function to prevent analysis. BlueVoyant explains that the excessive thread creation could cause a debugger to crash, but it does not have a significant impact under normal execution.

The shellcode performs sandbox detection and then generates a SHA-256-derived key, which it uses to extract the A0Backdoor, which is encrypted using the AES algorithm.

Encrypted payload in the shellcode
Encrypted payload in the shellcode
Source: BlueVoyant

The malware relocates itself into a new memory region, decrypts its core routines, and relies on Windows API calls (e.g., DeviceIoControl, GetUserNameExW, and GetComputerNameW) to collect information about the host and fingerprint it.

Communication with the command-and-control (C2) is hidden in DNS traffic, with the malware sending DNS MX queries with encoded metadata in high-entropy subdomains to public recursive resolvers. The DNS servers respond with MX records containing encoded command data.

Captured DNS communication
Captured DNS communication
Source: BlueVoyant

“The malware extracts and decodes the leftmost label to recover command/configuration data, then proceeds accordingly,” explains BlueVoyant.

“Using DNS MX records helps the traffic blend in and can evade controls tuned to detect TXT-based DNS tunneling, which may be more commonly monitored.”

Advertisement

BlueVoyant states that two of the targets of this campaign are a financial institution in Canada and a global healthcare organization.

The researchers assess with moderate-to-high confidence that the campaign is an evolution of tactics, techniques and procedures associated with the BlackBasta ransomware gang, which has dissolved after the internal chat logs of the operation were leaked.

While there are plenty of overlaps, BlueVoyant notes that the use of signed MSIs and malicious DLLs, the A0Backdoor payload, and using DNS MX-based C2 communication are new elements.

Malware is getting smarter. The Red Report 2026 reveals how new threats use math to detect sandboxes and hide in plain sight.

Download our analysis of 1.1 million malicious samples to uncover the top 10 techniques and see if your security stack is blinded.

Advertisement

Source link

Continue Reading

Tech

Microsoft’s new Copilot Cowork integrates Anthropic’s Claude in rollout of new E7 licensing tier

Published

on

Microsoft is leveraging its new Anthropic partnership to bolster Copilot adoption among businesses. (GeekWire Photo / Todd Bishop)

Microsoft unveiled Copilot Cowork, a new AI assistant that can run tasks in the background, create documents, and work across Microsoft 365 apps, the company announced Monday.

The product integrates technology from Anthropic’s Claude family of models into Microsoft’s existing Copilot assistant, the latest example of Microsoft expanding beyond its tight partnership with OpenAI. Anthropic already offers Claude Cowork through its own platform.

It comes as Microsoft tries to boost adoption of Copilot, which remains a relatively small fraction of its commercial user base amid big investments in AI infrastructure.

Copilot Cowork is part of what Microsoft is calling Wave 3 of Microsoft 365 Copilot. The company also announced a new $99-per-user Microsoft 365 E7 tier launching May 1 — a new level of its technology licensing program for businesses — which bundles Copilot, identity management tools, and a new $15 Agent 365 product for managing AI agents.

The E7 tier costs 65% more than the current $60 E5 subscription.

Advertisement

“Customers have told us E5 alone is no longer enough; they do not want multiple tools stitched together, they want one trusted solution,” Judson Althoff, CEO of Microsoft’s commercial business, wrote in a blog post.

Microsoft says Copilot Cowork can handle multiple tasks simultaneously, pulling from a user’s calendar, email, and files to complete work without constant supervision.

“Copilot Chat already makes it easy to research topics and think through ideas, and Copilot Cowork allows you to take action and complete activities in the background so you can get more work done on a regular basis,” said Charles Lamanna, Microsoft’s president of Business Applications & Agents, in a demo video.

In the video, Lamanna showed Copilot Cowork analyzing a month of meetings with direct reports, compiling customer notes from a business trip, and generating a competitive analysis with accompanying Word document and Excel spreadsheet. 

The company emphasized the role of Work IQ, its intelligence layer that connects Copilot to a user’s work patterns, relationships, and content across Microsoft 365.

Copilot Cowork runs within Microsoft 365’s security and compliance boundaries, with actions and outputs auditable by default. Microsoft is pitching its multi-model approach as a differentiator, saying it will choose the right model for each task regardless of provider.

The announcement drew mixed reactions. Ethan Mollick, a Wharton professor and author of “Co-Intelligence” who studies AI adoption, raised questions on LinkedIn.

Advertisement

“Will it continue to use lower-end models or older models without telling you the way Copilot does?” Mollick wrote. He also asked whether Microsoft would keep the product updated, noting that Anthropic’s standalone Cowork product “was built in a couple of weeks using Claude Code and is being updated and evolving quickly.” 

Microsoft, he added, “has a tendency to launch a leading product and then let it sit for awhile,” noting that he was “curious about whether their pacing will change.”

Copilot Cowork is available in limited research preview and will roll out to Microsoft’s Frontier program later this month.

[Editor’s Note: Charles Lamanna will be among the speakers at GeekWire’s upcoming AI event, Agents of Transformation, March 24. More info and tickets.]

Advertisement

Source link

Continue Reading

Tech

What is the release date for The Pitt season 2 episode 10 on HBO Max?

Published

on

For good reason, Dr. Robby (Noah Wyle) is on the verge of quitting again in The Pitt season 2. He’s already due to go on a three-month sabbatical when this hellish Fourth of July shift ends, but now that’s starting to feel like a permanent leave of absence.

In his defence, I don’t blame him. The ER is currently under digital lockdown to prevent a cyber attack, meaning no computer records can be accessed, the number of patients practically doubles every five seconds, and replacement Dr. Al-Hashimi (Sepideh Moafi) isn’t making life easier for anyone.

Advertisement

Source link

Continue Reading

Tech

Microsoft adding Anthropic’s AI technology to its Copilot service

Published

on

The organisation is aiming to tap into the growing demand for autonomous agents.

Tech giant Microsoft has announced plans to launch Copilot Cowork, which is a tool based on Anthropic’s popular Claude Cowork. Reportedly, it is part of a larger initiative to take advantage of the growing demand for autonomous agents.

The news comes two months after Anthropic launched its Cowork model, which it described as a “simpler version of Claude Code”. This prompted concerns among those heavily invested in ‘traditional’ software companies resulting in a strong sell-off in US and European software. According to Reuters, Microsoft’s own shares fell nearly 9pc in February.

Currently, ​Copilot Cowork is in the testing phase and will be ​available to early-access ⁠users in later March. The organisation has not disclosed the pricing structure, but has revealed that some usage would be included ​in its $30-per-user, per-month M365 Copilot offering for enterprises.

Advertisement

Jared Spataro, the chief marketing officer of AI at Work at Microsoft said: “Frontier transformation starts with a simple idea: AI must do more than optimise what already exists. It must unlock new levels of creativity, innovation, and growth. And it must show up inside real work, grounded in real context and solve real problems for people and organisations. 

“We’ve found that to do this, the two most important elements are intelligence and trust. Intelligence ensures AI is contextual, relevant and grounded. Trust ensures AI can scale safely, securely and responsibly. Our announcements today (9 March) show how intelligence and trust together turn AI from experimentation into durable, enterprise-wide value.”

Following the reveal of Microsoft’s Copilot Cowork, Forrester vice-president and principal analyst JP Gownder said: “Microsoft’s launch of Copilot Cowork signals a strategic shift in its AI approach, showing the company moving Copilot away from reliance on OpenAI alone and toward a multi-model architecture that includes partners such as Anthropic. 

“The move also highlights the current limitations of Microsoft’s existing Copilot agents: while the company has talked extensively about autonomous ‘agents’, they have so far struggled to take meaningful action compared with newer agentic systems such as Anthropic’s. 

Advertisement

“At the same time, Copilot Cowork clearly taps into the growing hype around Anthropic’s Claude Cowork concept, but significantly extends it by embedding the capability across Microsoft 365 applications rather than keeping it as a desktop-centric tool.”

Don’t miss out on the knowledge you need to succeed. Sign up for the Daily Brief, Silicon Republic’s digest of need-to-know sci-tech news.

Source link

Advertisement
Continue Reading

Trending

Copyright © 2025