Spend about ten minutes walking the floor at CanJam NYC 2026 and something becomes painfully obvious: wired IEMs are not quietly fading away into the Bluetooth sunset. If anything, they’re staging a very loud comeback. That only their users can hear but you understand where I’m going with this.
CanJam NYC 2026
We counted roughly three dozen IEM brands showing new models this year and not just niche boutique outfits either. Meze Audio, Campfire Audio, Noble, 64 Audio, Astell&Kern, FiiO, Melody, Final DUNU, and a handful of smaller boutique builders were all pushing new wired designs that ranged from “somewhat affordable” to “this probably requires a second mortgage.”
This is happening at the exact same moment society has fully embraced streaming and wireless convenience. Walk down any street, get on the subway, or sit in a coffee shop and you’ll see the same thing: people glued to their phones with AirPods, Sony, or Bose wireless earbuds jammed in their ears while Spotify algorithmically decides what they should listen to next. Convenience won that war years ago.
But here’s the part that makes the CanJam floor so interesting. Despite the dominance of wireless listening, there appears to be a growing group of listeners who still care about sound quality enough to deal with the dreaded cable. And unlike full size headphones, wired IEMs solve a problem that a lot of audiophile gear doesn’t: they’re small, portable, and actually practical to carry around.
Advertisement
Pair them with one of the many dongle DACs or portable DAC/amps that have flooded the market over the past few years from companies like FiiO, Questyle, iFi, and Astell&Kern, and suddenly you have something that delivers far better sound than most wireless earbuds while still fitting in your pocket.
So yes, the rest of the world may be happily living in a wireless ecosystem fueled by streaming and convenience. But if the crowds around the IEM tables at CanJam NYC 2026 were any indication, there are still plenty of people willing to deal with a cable if it means their music actually sounds better.
And based on the sheer number of new wired IEMs launching right now, manufacturers seem to think that number is growing — not shrinking.
Which raises an uncomfortable question for the wireless everything crowd:
Advertisement
What if wired IEMs never actually went away? They just waited for everyone to remember what better sound actually feels like.
A Wall of IEMs: More Choice Than Ever for Listeners
Campfire Audio at CanJam NYC 2026
I’ve never been the world’s biggest IEM fan. The whole “shove this into your ear canal and enjoy” concept never really worked for me. Some people swear by it. I usually spend the first five minutes wondering why my ears feel like they’re being fitted for dental molds.
The over-ear cable loop was always the least offensive part of the experience. It kept things relatively secure and avoided that lovely moment when someone brushes past you on a train platform or airport concourse and suddenly your headphones are being violently introduced to Newton’s laws of motion. Anyone who owned early fixed-cable IEMs knows the feeling: one snag, one sharp tug, and that cable is done.
Advertisement. Scroll to continue reading.
So a sincere thank you to whoever finally realized detachable cables were not a luxury feature but basic survival equipment. The pro audio world figured that out decades ago. You can’t exactly be onstage in front of 90,000 people and have your monitor connection disappear because someone stepped on a cable. Robust connectors and replaceable cables were inevitable — and long overdue.
Advertisement
Yes, wireless will replace most of this eventually. Convenience tends to win those battles. But what makes IEMs fascinating in 2026 isn’t the cable debate — it’s the sheer level of innovation packed into something smaller than a pinky ring. Balanced armatures, planar drivers, electrostatic elements, hybrid designs mixing multiple technologies, and configurations that stack five, eight, ten drivers or more inside a headshell that looks like it belongs on a piece of jewelry.
It’s absurd engineering in miniature. And judging by what we saw at CanJam NYC 2026, the people building these things are just getting started.
Campfire Audio Andromeda 10
One of the reasons we brought columnist Aaron Sigal back to cover wired and wireless IEMs is simple: the traffic is there. The demand is there. Our recent reviews of the Campfire Audio Andromeda 10, DUNU DN142, Apos x Community Rock Lobster, SIVGA Nightingale Pro, and Beyerdynamic DT 7x IE Series have all pulled in highly focused readership. Not casual drive by traffic either. The kind of readers who actually care about what driver configuration is inside the shell and whether the tuning leans reference or warm.
Is a lot of that coming from the Head-Fi crowd? Maybe. They can circle the wagons and argue endlessly about tips, cables, and impedance curves like it’s a graduate seminar in ear canal acoustics. But the interest is real, and the traffic numbers back that up.
Advertisement
Walking the floor at CanJam NYC 2026 made that even harder to ignore. There were so many tables dedicated to wired IEMs that it almost discouraged me from trying to cover them all. Full size headphones are still where my personal interest leans, and frankly that’s where a lot of our readers tend to focus as well. But the reality on the show floor told a different story.
The IEM tables were packed. Constantly.
Yes, the entire show was a sea of people moving from booth to booth, but the crowds hovering over those tiny display trays full of in ear monitors never really thinned out. People waiting for a chance to listen. Swapping tips. Plugging into portable DACs. Comparing notes.
Based on what we saw, it’s hard to imagine that the companies building wired IEMs didn’t have a very good weekend in New York. There’s just no way those tables were that busy if nobody was buying.
Advertisement
So here’s the question for readers.
Do you actually use wired IEMs? And if you do, why?
Is it about sound quality? Portability? Isolation on planes and trains? Or are you pairing them with a dongle DAC or portable player because you simply refuse to let Bluetooth compression have the final say in how your music sounds?
Advertisement. Scroll to continue reading.
Advertisement
And let’s address the elephant sitting in the display case: price. The number of wired IEMs that now cost well into the thousands of dollars is…kind of insane. Universal or custom, it doesn’t seem to matter anymore. Some of these models cost as much as a very good stereo system or a pair of flagship headphones.
Does that discourage you? Or do you see them as the most practical way to get reference level sound in a portable format?
We’re genuinely curious where people land on this.
Speaking with newly appointed Microsoft Gaming head Asha Sharma, Nadella dismissed speculation that the company might abandon gaming in favor of Windows, Azure, and AI. He described gaming as one of Microsoft’s “main identities” over the past couple of decades and said it will remain a core part of the… Read Entire Article Source link
More than 30 employees from OpenAI and Google, including Google DeepMind chief scientist Jeff Dean, filed an amicus brief on Monday in support of Anthropic in its legal fight against the US government.
“If allowed to proceed, this effort to punish one of the leading US AI companies will undoubtedly have consequences for the United States’ industrial and scientific competitiveness in the field of artificial intelligence and beyond,” the employees wrote.
The brief was filed just hours after Anthropic sued the Department of Defense and other federal agencies over the Pentagon’s decision to designate the company a “supply-chain risk.” The sanction, which severely limits Anthropic’s ability to work with military contractors, went into effect after Anthropic’s negotiations with the Pentagon fell apart. The AI startup is seeking a temporary restraining order to continue its work with military partners as the lawsuit progresses. This brief specifically supports this motion.
Signatories of the brief include Google DeepMind researchers Zhengdong Wang, Alexander Matt Turner, and Noah Siegel, as well as OpenAI researchers Gabriel Wu, Pamela Mishkin, and Roman Novak, among others. Amicus briefs are legal filings submitted by parties that are not directly involved in a court case but that have expertise relevant to it. The employees signed in a personal capacity and don’t represent the views of their companies, according to the brief.
Advertisement
OpenAI and Google did not immediately respond to WIRED’s request for comment.
The amicus brief says that the Pentagon’s decision to blacklist Anthropic “introduces an unpredictability in [their] industry that undermines American innovation and competitiveness” and “chills professional debate on the benefits and risks of frontier AI systems.” It notes that the Pentagon could have simply dropped Anthropic’s contract if it no longer wished to be bound by its terms.
The brief also says that the red lines Anthropic claims it requested, including that its AI wouldn’t be used for mass domestic surveillance and the development of autonomous lethal weapons, are legitimate concerns and require sufficient guardrails. “In the absence of public law, the contractual and technological requirements that AI developers impose on the use of their systems represent a vital safeguard against their catastrophic misuse,” the brief says.
Several other AI leaders have also publicly questioned the Pentagon’s decision to label Anthropic a supply-chain risk. OpenAI CEO Sam Altman said in a post on social media that “enforcing the SCR [supply-chain risk] designation on Anthropic would be very bad for our industry and our country.” He added that “this is a very bad decision from the DoW and I hope they reverse it.” As Anthropic’s relationship with the Pentagon soured, OpenAI quickly signed its own contract with the US military, a decision some people criticized as opportunistic.
Australia’s new age verification legislation has left Australians raising their eyebrows.
Similar to enforcements in the UK and US, citizens will be required to verify that they are over 18 to access adult content. But, with this comes concerns that people’s most sensitive data will be put at risk by communications with third parties.
Luckily, this is where a VPN can help. The best VPNs secure your online traffic, leaving no trace of your IP that snoopers can find, and securing data you transfer to age verification providers against malicious actors trying to intercept your information. What’s more, doing this can cost under 3 AU$ per month.
My top pick for achieving this is Surfshark. It gives you unlimited simultaneous connections, it’s the fastest VPN I’ve ever used, and you can choose to add a bunch of additional features if you want to completely secure your online presence.
Advertisement
While it’s not quite as high-performing as NordVPN, our best Australia VPN overall, it’s a superb option if you’re new to VPNs and looking for something cheap and simple to use that’s still powerful.
While a Surfshark Starter plan is the cheapest way to protect yourself during age verification checks, a One adds a full security suite, including antivirus and real-time data breach alerts. For a small monthly increase, this bundle provides broader digital protection than the base plans of many competitors.
NordVPN and Surfshark both offer 5 Australia locations. However, NordVPN offers more advanced security credentials and more consistent content unblocking capabilities.
Advertisement
Having tested both myself, you can rest assured that either choice will protect you should you choose to use a VPN when verifying your age. But, no matter if you choose Surfshark or NordVPN, make sure to turn off auto-renewals after signing up, as both enforce large price jumps at the end of your plan if you don’t.
A proposed New York bill would ban AI chatbots from providing legal or medical advice
The legislation would allow users to sue companies if their chatbots impersonate licensed professionals
Lawmakers say the measure is meant to protect the public as AI tools become more widely used
AI chatbots have spent the past few years answering nearly every kind of question imaginable, but New York lawmakers are preparing to draw a firm line around at least a couple of categories of conversation. A bill advancing through the state legislature would prohibit AI chatbots from providing legal or medical advice and would allow users to sue the companies behind those systems if they cross that boundary.
The proposal, Senate Bill S7263, would apply to AI chatbots that mimic or impersonate licensed professionals such as lawyers or physicians. The heart of the bill applies the same principle about how individuals cannot practice law or medicine without the appropriate licenses to AI. That rule is meant to ensure that people receive guidance from trained professionals who can be held accountable for their advice.
If an AI chatbot responds in a way that effectively substitutes for licensed legal or medical advice, the developers could be in violation of the law. The bill, which includes other AI safety measures, recently passed out of the New York Senate’s Internet and Technology Committee with unanimous support.
Article continues below
Advertisement
Chatbot providers would also have to clearly inform users that they are interacting with an artificial intelligence system rather than a human professional. Even if a chatbot displays a warning that it is not a doctor or lawyer, that disclaimer would not protect the company from liability if the system still provides prohibited advice.
But it’s also part of a larger effort to regulate AI chatbots in New York. Other bills focus on protecting minors who interact with AI chatbots or strengthening transparency requirements for generative AI systems and synthetic media.
“People deserve real care from real people,” State Senator Kristen Gonzalez, who introduced the bill, said in a statement. “They deserve transparency, accountability, and the promise that their data is secure while utilizing technology.”
Advertisement
AI advice
To enforce the law, individuals could file civil lawsuits against companies whose AI chatbots violate the rule. Users could seek damages and recover legal fees if they successfully prove that a chatbot provided unauthorized professional advice.
Sign up for breaking news, reviews, opinion, top tech deals, and more.
When millions of people use AI chatbots for drafting emails and answering questions on topics ranging from cooking to tax policy, it’s not surprising that many may treat AI answers as genuine advice. That is precisely the situation lawmakers hope to avoid in areas where mistakes could carry serious consequences.
Advertisement
Educational explanations about general concepts would still be allowed. What lawmakers want to avoid is the scenario in which a chatbot confidently instructs someone how to treat a medical condition or interpret a legal contract. But there are always ambiguous situations. For instance, a chatbot might explain the symptoms of a medical condition by summarizing publicly available information. Yet the same explanation could influence a user’s health decisions, making it resemble medical advice in practice.
Despite those concerns, the broader trend toward regulating artificial intelligence appears unlikely to slow. AI’s growing influence has prompted lawmakers to ask whether the technology should face rules similar to those that govern traditional professions.
Technology regulation often spreads from one jurisdiction to another. Laws enacted in large states frequently become models for similar legislation elsewhere. So, for AI developers, the New York proposal offers a preview of the kinds of questions that governments will increasingly ask, and that they want AI chatbots not to answer.
The acquisition of Promptfoo, which counts more than 125,000 developers and 30-plus Fortune 500 companies among its users, is OpenAI’s most direct move yet into AI application security. Its technology will go into Frontier, the company’s enterprise agent platform launched just a month ago.
When Ian Webster was leading the LLM engineering team at Discord, shipping AI products to 200 million users, he noticed something the security industry had not yet caught up with: the tools his team relied on to keep those products safe were built for a different era. Traditional vulnerability scanners could not reason about prompt injection. Static analysis had nothing to say about a model that promised a user something it had no authority to deliver. The testing infrastructure for AI applications, he concluded, simply did not exist.
So he built it himself, nights and weekends, as an open-source project. That project became Promptfoo. On Monday, OpenAI announced it is acquiring the company.
The deal, terms of which were not disclosed, will see Promptfoo’s technology integrated into OpenAI Frontier, the enterprise agent management platform that OpenAI launched in early February. In a post on X, OpenAI said the acquisition would “strengthen agentic security testing and evaluation capabilities” within Frontier, and pledged that Promptfoo would remain open source under its current licence, with continued support for existing customers.
Advertisement
The 💜 of EU tech
The latest rumblings from the EU tech scene, a story from our wise ol’ founder Boris, and some questionable AI art. It’s free, every week, in your inbox. Sign up now!
Promptfoo, which Webster co-founded with Michael D’Angelo – a former VP of engineering and head of AI at identity verification firm Smile Identity – launched commercially in 2024 with $5 million in seed funding from Andreessen Horowitz. The seed round attracted backing from a notable roster of angels, including Shopify CEO Tobi Lütke, Discord CTO Stanislav Vishnevskiy, and Okta co-founder Frederic Kerrest. By July 2025, the company had raised an $18.4 million Series A led by Insight Partners, with a16z again participating. Total funding ahead of the acquisition was approximately $23.4 million.
At the time of the Series A, Promptfoo said it had more than 125,000 developers using its open-source framework and over 30 Fortune 500 companies running its enterprise platform in production. Customers span retail, telecoms, financial services, and media, sectors with acute exposure to the regulatory and reputational risks of AI failures.
Advertisement
The product works by acting as an automated adversary. Rather than relying on manual penetration testing, Promptfoo’s platform talks directly to a customer’s AI application, through its chat interface or APIs, using specialised models and agents that behave like users, or specifically like attackers. When an attack succeeds, the platform records it, analyses why it worked, and iterates through an agentic reasoning loop to refine the test and expose deeper vulnerabilities. Risks the platform targets include prompt injection, data leakage, jailbreaks, and what Webster has called “application-level” failures: AI systems that promise users things they cannot deliver, or that reveal database contents to a customer service query, or that stray into political opinion in a homework tutor.
It is precisely those application-level risks that make Promptfoo’s acquisition a strategic fit for OpenAI’s current direction. Frontier, which OpenAI has described as an attempt to create “AI coworkers” for the enterprise, is designed to give AI agents access to production systems, CRM platforms, data warehouses, internal ticketing tools, and to execute workflows with real-world consequences. Agents operating at that level of access create a correspondingly enlarged attack surface. Early customers named by OpenAI for Frontier include Uber, State Farm, Intuit, and Thermo Fisher Scientific: organisations for whom a misbehaving agent is not an inconvenience but a liability.
OpenAI has been building out Frontier at speed. Since launching the platform on 5 February, the company has announced Frontier Alliances with Accenture, Boston Consulting Group, Capgemini, and McKinsey, enlisting the consulting firms to drive enterprise deployment. Separately, the company has been rolling out Codex Security, an AI-powered application security agent for software repositories, formerly known internally as Aardvark, which entered wider availability on the same day as the Promptfoo acquisition announcement.
Promptfoo is not the only AI security product entering broader availability this month. Anthropic launched Claude Code Security in February, targeting similar vulnerability scanning use cases. The convergence suggests that as AI agents move into production at scale, the question of who secures them, and how, is fast becoming one of the defining commercial battlegrounds in enterprise AI.
Advertisement
For Promptfoo’s open-source community, OpenAI’s commitment to keeping the project open source under its current licence will be the line to watch. The project has over 248 contributors, and its adoption by developers at companies across the AI industry – including, according to Promptfoo’s own website, teams at Anthropic and Google – was built on the premise that the tool belonged to the developer community rather than to any one vendor. That promise now sits alongside a commercial integration into one of the most powerful enterprise AI platforms in the market.
Hackers contacted employees at financial and healthcare organizations over Microsoft Teams to trick them into granting remote access through Quick Assist and deploy a new piece of malware called A0Backdoor.
The attacker relies on social engineering to gain the employee’s trust by first flooding their inbox with spam and then contacting them over Teams, pretending to be the company’s IT staff, offering assistance with the unwanted messages.
To obtain access to the target machine, the threat actor instructs the user to start a Quick Assist remote session, which is used to deploy a malicious toolset that includes digitally signed MSI installers hosted in a personal Microsoft cloud storage account.
According to researchers at cybersecurity company BlueVoyant, the malicious MSI files masquerade as Microsoft Teams components and the CrossDeviceService, a legitimate Windows tool used by the Phone Link app.
Advertisement
Command line argument to install the malicious CrossDeviceService.exe Source: BlueVoyant
Using the DLL sideloading technique with legitimate Microsoft binaries, the attacker deploys a malicious library (hostfxr.dll) that contains compressed or encrypted data. Once loaded in memory, the library decrypts the data into shellcode and transfers execution to it.
The researchers say that the malicious library also uses the CreateThread function to prevent analysis. BlueVoyant explains that the excessive thread creation could cause a debugger to crash, but it does not have a significant impact under normal execution.
The shellcode performs sandbox detection and then generates a SHA-256-derived key, which it uses to extract the A0Backdoor, which is encrypted using the AES algorithm.
Encrypted payload in the shellcode Source: BlueVoyant
The malware relocates itself into a new memory region, decrypts its core routines, and relies on Windows API calls (e.g., DeviceIoControl, GetUserNameExW, and GetComputerNameW) to collect information about the host and fingerprint it.
Communication with the command-and-control (C2) is hidden in DNS traffic, with the malware sending DNS MX queries with encoded metadata in high-entropy subdomains to public recursive resolvers. The DNS servers respond with MX records containing encoded command data.
Captured DNS communication Source: BlueVoyant
“The malware extracts and decodes the leftmost label to recover command/configuration data, then proceeds accordingly,” explains BlueVoyant.
“Using DNS MX records helps the traffic blend in and can evade controls tuned to detect TXT-based DNS tunneling, which may be more commonly monitored.”
Advertisement
BlueVoyant states that two of the targets of this campaign are a financial institution in Canada and a global healthcare organization.
The researchers assess with moderate-to-high confidence that the campaign is an evolution of tactics, techniques and procedures associated with the BlackBasta ransomware gang, which has dissolved after the internal chat logs of the operation were leaked.
While there are plenty of overlaps, BlueVoyant notes that the use of signed MSIs and malicious DLLs, the A0Backdoor payload, and using DNS MX-based C2 communication are new elements.
Malware is getting smarter. The Red Report 2026 reveals how new threats use math to detect sandboxes and hide in plain sight.
Download our analysis of 1.1 million malicious samples to uncover the top 10 techniques and see if your security stack is blinded.
Microsoft is leveraging its new Anthropic partnership to bolster Copilot adoption among businesses. (GeekWire Photo / Todd Bishop)
Microsoft unveiled Copilot Cowork, a new AI assistant that can run tasks in the background, create documents, and work across Microsoft 365 apps, the company announced Monday.
The product integrates technology from Anthropic’s Claude family of models into Microsoft’s existing Copilot assistant, the latest example of Microsoft expanding beyond its tight partnership with OpenAI. Anthropic already offers Claude Cowork through its own platform.
It comes as Microsoft tries to boost adoption of Copilot, which remains a relatively small fraction of its commercial user base amid big investments in AI infrastructure.
Copilot Cowork is part of what Microsoft is calling Wave 3 of Microsoft 365 Copilot. The company also announced a new $99-per-user Microsoft 365 E7 tier launching May 1 — a new level of its technology licensing program for businesses — which bundles Copilot, identity management tools, and a new $15 Agent 365 product for managing AI agents.
The E7 tier costs 65% more than the current $60 E5 subscription.
Advertisement
“Customers have told us E5 alone is no longer enough; they do not want multiple tools stitched together, they want one trusted solution,” Judson Althoff, CEO of Microsoft’s commercial business, wrote in a blog post.
Microsoft says Copilot Cowork can handle multiple tasks simultaneously, pulling from a user’s calendar, email, and files to complete work without constant supervision.
“Copilot Chat already makes it easy to research topics and think through ideas, and Copilot Cowork allows you to take action and complete activities in the background so you can get more work done on a regular basis,” said Charles Lamanna, Microsoft’s president of Business Applications & Agents, in a demo video.
Great to see the excitement around Copilot Cowork today. I have been using it in my own work for the past few weeks, and the best way to understand it is to see it in action. Sharing a short demo from my day to day here. pic.twitter.com/Rxf6wkaLTk
In the video, Lamanna showed Copilot Cowork analyzing a month of meetings with direct reports, compiling customer notes from a business trip, and generating a competitive analysis with accompanying Word document and Excel spreadsheet.
The company emphasized the role of Work IQ, its intelligence layer that connects Copilot to a user’s work patterns, relationships, and content across Microsoft 365.
Copilot Cowork runs within Microsoft 365’s security and compliance boundaries, with actions and outputs auditable by default. Microsoft is pitching its multi-model approach as a differentiator, saying it will choose the right model for each task regardless of provider.
The announcement drew mixed reactions. Ethan Mollick, a Wharton professor and author of “Co-Intelligence” who studies AI adoption, raised questions on LinkedIn.
Advertisement
“Will it continue to use lower-end models or older models without telling you the way Copilot does?” Mollick wrote. He also asked whether Microsoft would keep the product updated, noting that Anthropic’s standalone Cowork product “was built in a couple of weeks using Claude Code and is being updated and evolving quickly.”
Microsoft, he added, “has a tendency to launch a leading product and then let it sit for awhile,” noting that he was “curious about whether their pacing will change.”
Copilot Cowork is available in limited research preview and will roll out to Microsoft’s Frontier program later this month.
[Editor’s Note: Charles Lamanna will be among the speakers at GeekWire’s upcoming AI event, Agents of Transformation, March 24. More info and tickets.]
For good reason, Dr. Robby (Noah Wyle) is on the verge of quitting again in The Pitt season 2. He’s already due to go on a three-month sabbatical when this hellish Fourth of July shift ends, but now that’s starting to feel like a permanent leave of absence.
In his defence, I don’t blame him. The ER is currently under digital lockdown to prevent a cyber attack, meaning no computer records can be accessed, the number of patients practically doubles every five seconds, and replacement Dr. Al-Hashimi (Sepideh Moafi) isn’t making life easier for anyone.
I need a holiday just reading that, let alone watching. But when will The Pitt season 2 episode 10 air on HBO Max?
Advertisement
Article continues below
Advertisement
What time can I watch The Pitt season 2 episode 10 on HBO Max?
The Pitt Season 2 | Official Trailer | HBO Max – YouTube
For US viewers, The Pitt season 2 episode 10 will drop on Thursday, March 12 at 6pm PT/ 9pm ET. As always, it’ll come out on HBO Max, too.
Internationally, you’re looking out for these timings:
US – 6pm PT / 9pm ET
Canada – 6pm PT / 9pm ET
India – Friday, March 13 at 7:30am IST
Singapore – Friday, March 13 at 10am SGT
Australia – Friday, March 13 at 1pm AEDT
New Zealand – Friday, March 13 at 3pm NZDT
You’ll notice that I’ve not included the UK here. That’s because HBO Max doesn’t launch in the UK until March 26. It hasn’t been released on Sky or Now TV either, which are the usual homes to HBO Originals on British shores.
In short: you’ll have to wait until HBO Max makes its UK debut to binge both seasons, but at least you should be able to watch the season finale with everyone else on April 16.
Sign up for breaking news, reviews, opinion, top tech deals, and more.
Advertisement
When do new episodes of The Pitt season 2 come out?
The main man. (Image credit: HBO)
New episodes of The Pitt will make landfall every Thursday in the US, and Fridays everywhere else. Here are the all-important dates you need to know about:
And of course, you can also follow TechRadar on YouTube and TikTok for news, reviews, unboxings in video form, and get regular updates from us on WhatsApp too.
The organisation is aiming to tap into the growing demand for autonomous agents.
Tech giant Microsoft has announced plans to launch Copilot Cowork, which is a tool based on Anthropic’s popular Claude Cowork. Reportedly, it is part of a larger initiative to take advantage of the growing demand for autonomous agents.
The news comes two months after Anthropic launched its Cowork model, which it described as a “simpler version of Claude Code”. This prompted concerns among those heavily invested in ‘traditional’ software companies resulting in a strong sell-off in US and European software. According to Reuters, Microsoft’s own shares fell nearly 9pc in February.
Currently, Copilot Cowork is in the testing phase and will be available to early-access users in later March. The organisation has not disclosed the pricing structure, but has revealed that some usage would be included in its $30-per-user, per-month M365 Copilot offering for enterprises.
Advertisement
Jared Spataro, the chief marketing officer of AI at Work at Microsoft said: “Frontier transformation starts with a simple idea: AI must do more than optimise what already exists. It must unlock new levels of creativity, innovation, and growth. And it must show up inside real work, grounded in real context and solve real problems for people and organisations.
“We’ve found that to do this, the two most important elements are intelligence and trust. Intelligence ensures AI is contextual, relevant and grounded. Trust ensures AI can scale safely, securely and responsibly. Our announcements today (9 March) show how intelligence and trust together turn AI from experimentation into durable, enterprise-wide value.”
Following the reveal of Microsoft’s Copilot Cowork, Forrester vice-president and principal analyst JP Gownder said: “Microsoft’s launch of Copilot Cowork signals a strategic shift in its AI approach, showing the company moving Copilot away from reliance on OpenAI alone and toward a multi-model architecture that includes partners such as Anthropic.
“The move also highlights the current limitations of Microsoft’s existing Copilot agents: while the company has talked extensively about autonomous ‘agents’, they have so far struggled to take meaningful action compared with newer agentic systems such as Anthropic’s.
Advertisement
“At the same time, Copilot Cowork clearly taps into the growing hype around Anthropic’s Claude Cowork concept, but significantly extends it by embedding the capability across Microsoft 365 applications rather than keeping it as a desktop-centric tool.”
Don’t miss out on the knowledge you need to succeed. Sign up for the Daily Brief, Silicon Republic’s digest of need-to-know sci-tech news.
People can block the xAI’s Grok chatbot from creating modifications of their uploaded images on social network X. Neither X or xAI, both Elon Musk-owned businesses, have made a public announcement about this feature, which users began noticing on the iOS app within the image/video upload menu over the past few days.
This option is likely a response to Grok’s latest scandal, which began at the start of 2026 when the addition of image generation tools to the chatbot saw about 3 million sexualized or nudified images created. An estimated 23,000 of the images made in that 11-day period contained sexualized images of children, according to the Center for Countering Digital Hate. Grok is now facing two separateinvestigations by regulators in the EU over the issue.
The positive side of the recent feature addition is that X and xAI have taken a step toward limiting inappropriate uses of Grok. This block is a simple toggle and it hasn’t been buried in the UI. So that’s nice.
The negative side, however, is that this token gesture that doesn’t amount to any serious improvement to how Grok works or can be used. It’s great that the chatbot won’t alter the file uploaded by one person, but as reported by The Verge, the block only limits tagging Grok in a reply to create an image edit. There are plenty of workarounds for those dedicated individuals who insist on being able to use generative AI to undress people without their consent or knowledge.
Advertisement
Hopefully xAI has more powerful protective tools in the works. The limitations Grok on putting real people in scanty clothing that X announced in January seem to have had only partial success at best. If this additional and narrow use case is all the company offers, then the claims of being a zero-tolerance space for nonconsensual nudity are going to ring hollow. Especially since, as we noted at the time, xAI could stop allowing image generation at all until the issue is properly and thoroughly fixed.