However, rendered here in Motorola’s Watch app, everything looks fun and easy! Motorola (and Polar, I guess) uses Apple’s “close your rings” approach, with active minutes, steps, and calories. I particularly like that you can now use Polar’s sleep tracking with a cheaper Android watch. Polar takes into account sleep time, solidity (whether or not your sleep was interrupted), and regeneration to give you a Nightly Recharge Status.
You can still click through and see your ANS, but there’s a lot more context surrounding it. Also, the graphs are prettier. I compared the sleep, heart rate, and stress measurements to my Oura Ring 4, and I found no big discrepancies. The Moto Watch tended to be a little more generous with my sleep and activity measurements (7 hours and 21 minutes of sleep instead of 7 hours and 13 minutes, or 3,807 steps as compared to 3,209), but that’s usual for lower-end fitness trackers that have fewer and less-sensitive sensors.
On that note, I do have one major hardware gripe. Onboard GPS is meant to make it easier to just run out the door and start your watch. I didn’t find this to be the case. Whatever processor is in the watch (Motorola has conveniently chosen not to reveal this), it’s just really slow to connect to satellites and iffy whenever it does. This isn’t a huge deal when I’m just walking my dog or lifting weights in my living room, but it constantly cuts out when I’m outside and doesn’t have the ability to fill in the blanks, as another, more expensive fitness tracker would do.
It’s just really annoying to constantly get pinged about satellite loss and to have a quarter-mile or a half-mile cut out of your runs. That’s how I know the speaker works—it was constantly telling me it lost satellite connection during activities.
Finally, the screen and buttons are really sensitive. It does give you an option to lock the screen, but even then, I found myself accidentally unlocking it from time to time and turning the recording off when I didn’t mean to.
As I write this, I have seven different smartwatches from different brands sitting on my desk. If you’re looking for a cheap, attractive, and effective Android-compatible smartwatch, I would say that the CMF Watch 3 Pro is your best choice. However, I do think the integration with Polar was well done, and the price point is not that bad. I’m definitely keeping an eye out for what Motorola might have to offer in the future.
X is experimenting with a new way for AI to write Community Notes. The company is testing a new “collaborative notes” feature that allows human writers to request an AI-written Community Note.
It’s not the first time the platform has experimented with AI in Community Notes. The company started a pilot program last year to allow developers to create dedicated AI note writers. But the latest experiment sounds like a more streamlined process.
According to the company, when an existing Community Note contributor requests a note on a post, the request “now also kicks off creation of a Collaborative Note.” Contributors can then rate the note or suggest improvements. “Collaborative Notes can update over time as suggestions and ratings come in,” X says. “When considering an update, the system reviews new input from contributors to make the note as helpful as possible, then decides whether the new version is a meaningful improvement.”
X doesn’t say whether it’s using Grok or another AI tool to actually generate the fact check. If it was using Grok, that would be in line with how a lot of X users currently invoke the AI on threads with replies like “@grok is this true?”
Community Notes has often been criticized for moving too slowly, so adding AI into the mix could help speed up the process of getting notes published. Keith Coleman, who oversees Community Notes at X, wrote in a post that the update also provides “a new way to make models smarter in the process (continuous learning from community feedback).” On the other hand, we don’t have to look very far to find examples of Grok losing touch with reality or worse.
According to X, only Community Note Contributors with a “top writer” status will be able to initiate a collaborative note to start, though it expects to expand availability “over time.”
Amazon has refreshed its Fire TV lineup in the UK, with three new ranges available to buy right now.
The updated Fire TV 2-Series, Fire TV 4-Series, and Fire TV Omni QLED promise slimmer designs, faster performance and smarter picture tech. All of this is aimed at getting you to your shows quicker.
Leading this current crop is the Fire TV Omni QLED, available in 50-, 55- and 65-inch sizes. Amazon says the new panel is 60% brighter than previous models, with double the local dimming zones for punchier highlights and deeper blacks. Dolby Vision and HDR10+ Adaptive are on board. In addition, the TV can automatically adjust colour and brightness based on your room lighting.
The Omni QLED also leans heavily into smart features. OmniSense uses presence detection to wake the TV when you enter the room and power it down when you leave. Meanwhile, Interactive Art reacts to movement, turning the screen into something closer to a living display than a black rectangle on the wall.
Further down the range, the redesigned Fire TV 2-Series and Fire TV 4-Series cover screen sizes from 32 to 55 inches. The 2-Series sticks to HD resolution, while the 4-Series steps up to 4K. Both benefit from ultra-thin bezels and a new quad-core processor that Amazon says makes them 30% faster than before. It’s a modest upgrade on paper, but one that should make everyday navigation feel noticeably snappier.
All three ranges run Fire TV OS, with Amazon continuing to push its content-first approach. It surfaces apps, live TV and recommendations as soon as you turn the screen on.
The new Fire TV models are available now in the UK, with introductory pricing running until 10 February 2026.
With faster internals and a brighter flagship model, Amazon’s latest Fire TVs look like a solid refresh, especially if you’re after a big screen without a premium TV price tag.
For years, we’ve been subjected to an endless parade of hyperventilating claims about the Biden administration’s supposed “censorship industrial complex.” We were told, over and over again, that the government was weaponizing its power to silence conservative speech. The evidence for this? Some angry emails from White House staffers that Facebook ignored. That was basically it. The Supreme Court looked at it and said there was no standing because there was no evidence of coercion (and even suggested that some of the plaintiffs’ claims were unsupported by reality).
But now we have actual, documented cases of the federal government using its surveillance apparatus to track down and intimidate Americans for nothing more than criticizing government policy. And wouldn’t you know it, the same people who spent years screaming about censorship are suddenly very quiet.
If any of the following stories had happened under the Biden administration, you’d hear screams from the likes of Matt Taibbi, Bari Weiss, and Michael Shellenberger about the crushing boot of the government trying to silence speech.
But somehow… nothing. Weiss is otherwise occupied—busy stripping CBS News for parts to please King Trump. And the dude bros who invented the “censorship industrial complex” out of their imaginations? Pretty damn quiet about stories like the following.
Taibbi is spending his time trying to play down the Epstein files and claiming that Meta blocking ICE apps at the direct request of DHS isn’t censorship, because he hasn’t seen any evidence that the federal government was behind it. Dude. Pam Bondi publicly stated she called Meta to have them removed. Shellenberger, who is now somehow a “free speech professor” at Bari Weiss’ collapsing fake university, seems to just be posting non-stop conspiracy theory nonsense from cranks.
Let’s start with the case that should make your blood boil. The Washington Post reports that a 67-year-old retired Philadelphia man — a naturalized U.S. citizen originally from the UK — found himself in the crosshairs of the Department of Homeland Security after he committed the apparently unforgivable sin of… sending a polite email to a government lawyer asking for mercy in a deportation case.
Here’s what he wrote to a prosecutor who was trying to deport an Afghan man who feared the Taliban would take his life if he were sent there. The Philadelphia resident found the prosecutor’s email address and sent the following:
“Mr. Dernbach, don’t play Russian roulette with H’s life. Err on the side of caution. There’s a reason the US government along with many other governments don’t recognise the Taliban. Apply principles of common sense and decency.”
That’s it. That’s the email that triggered a federal response. Within hours — hours — of sending this email, Google notified him that DHS had issued an administrative subpoena demanding his personal information. Days later, federal agents showed up at his door.
Showed. Up. At. His. Door.
A retired guy sends a respectful email asking the government to be careful with someone’s life, and within the same day, the surveillance apparatus is mobilized against him.
The tool being weaponized here is the administrative subpoena (something we’ve been calling out for well over a decade, under administrations of both parties) which is a particularly insidious instrument because it doesn’t require a judge’s approval. Unlike a judicial subpoena, where investigators have to show a judge enough evidence to justify the search, administrative subpoenas are essentially self-signed permission slips. As TechCrunch explains:
Unlike judicial subpoenas, which are authorized by a judge after seeing enough evidence of a crime to authorize a search or seizure of someone’s things, administrative subpoenas are issued by federal agencies, allowing investigators to seek a wealth of information about individuals from tech and phone companies without a judge’s oversight.
While administrative subpoenas cannot be used to obtain the contents of a person’s emails, online searches, or location data, they can demand information specifically about the user, such as what time a user logs in, from where, using which devices, and revealing the email addresses and other identifiable information about who opened an online account. But because administrative subpoenas are not backed by a judge’s authority or a court’s order, it’s largely up to a company whether to give over any data to the requesting government agency.
The Philadelphia retiree’s case would be alarming enough if it were a one-off. It’s not. Bloomberg has reported on at least five cases where DHS used administrative subpoenas to try to unmask anonymous Instagram accounts that were simply documenting ICE raids in their communities. One account, @montcowatch, was targeted merely for sharing resources about immigrant rights in Montgomery County, Pennsylvania. The justification? A claim that ICE agents were being “stalked” — for which there was no actual evidence.
The ACLU, which is now representing several of these targeted individuals, isn’t mincing words:
“It doesn’t take that much to make people look over their shoulder, to think twice before they speak again. That’s why these kinds of subpoenas and other actions—the visits—are so pernicious. You don’t have to lock somebody up to make them reticent to make their voice heard. It really doesn’t take much, because the power of the federal government is so overwhelming.”
This is textbook chilling effects on speech.
Remember, it was just a year and a half ago, in Murthy v. Missouri, that the Supreme Court found no First Amendment violation when the Biden administration sent emails to social media platforms—in part because the platforms felt entirely free to say no. The platforms weren’t coerced; they could ignore the requests and did.
Now consider the Philadelphia retiree. He sends one polite email. Within hours, DHS has mobilized to unmask him. Days later, federal agents are at his door. Does that sound like someone who’s free to speak his mind without consequence?
Even if you felt that what the Biden admin did was inappropriate, it didn’t involve federal agents showing up at people’s homes.
That is what actual government suppression of speech looks like. Not mean tweets from press secretaries that platforms ignored, but federal agents showing up at your door because you sent a (perfectly nice) email the government didn’t like.
So we have DHS mobilizing within hours to identify a 67-year-old retiree who sent a polite email. We have agents showing up at citizens’ homes to interrogate them about their protected speech. We have the government trying to unmask anonymous accounts that are documenting law enforcement activities — something that is unambiguously protected under the First Amendment.
Recording police, sharing that recording, and doing so anonymously is legal. It’s protected speech. And the government is using administrative subpoenas to try to identify and intimidate the people doing it.
For years, we heard that government officials sending emails to social media companies — emails the companies ignored — constituted an existential threat to the First Amendment. But when the government actually uses its coercive power to track down, identify, and intimidate citizens for their speech?
Crickets.
This is what a real threat to free speech looks like. Not “jawboning” that platforms can easily refuse, but the full weight of federal surveillance being deployed against anyone who dares to criticize the administration. The chilling effect here is the entire point.
As the ACLU noted, this appears to be “part of a broader strategy to intimidate people who document immigration activity or criticize government actions.”
If you spent the last few years warning about government censorship, this is your moment. This is the actual thing you claimed to be worried about. But, of course, all those who pretended to care about free speech really only meant they cared about their own team’s speech. Watching the government actually suppress critics? No big deal. They probably deserved it.
Elon Musk told podcast host Dwarkesh Patel and Stripe co-founder John Collison that space will become the most economically compelling location for AI data centers in less than 36 months, a prediction rooted not in some exotic technical breakthrough but in the basic math of electricity supply: chip output is growing exponentially, and electrical output outside China is essentially flat.
Solar panels in orbit generate roughly five times the power they do on the ground because there is no day-night cycle, no cloud cover, and no atmospheric loss. The system economics are even more favorable because space-based operations eliminate the need for batteries entirely, making the effective cost roughly 10 times cheaper than terrestrial solar, Musk said. The terrestrial bottleneck is already real.
Musk said powering 330,000 Nvidia GB300 chips — once you account for networking hardware, storage, peak cooling on the hottest day of the year, and reserve margin for generator servicing — requires roughly a gigawatt at the generation level. Gas turbines are sold out through 2030, and the limiting factor is the casting of turbine vanes and blades, a process handled by just three companies worldwide.
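Musk’s gigawatt figure is easy to sanity-check with back-of-envelope arithmetic. The per-chip draw and overhead multiplier below are illustrative assumptions (neither was disclosed), chosen only to show that the numbers are in the right ballpark:

```python
# Rough check of the ~1 GW figure cited for 330,000 GB300 chips.
# Both constants below are assumptions for illustration, not disclosed values.
chips = 330_000
gpu_watts = 1_400                  # assumed all-in draw per GB300-class accelerator
it_load = chips * gpu_watts        # ~462 MW of raw accelerator load
overhead = 2.2                     # assumed multiplier for networking, storage,
                                   # worst-case cooling, and generator reserve
generation_watts = it_load * overhead

print(f"IT load: {it_load / 1e6:.0f} MW")
print(f"Generation needed: {generation_watts / 1e9:.2f} GW")  # ~1.02 GW
```

With those assumptions, the required generation lands almost exactly on the "roughly a gigawatt" Musk describes.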
Five years from now, Musk predicted, SpaceX will launch and operate more AI compute annually than the cumulative total on Earth, expecting at least a few hundred gigawatts per year in space. Patel estimated that 100 gigawatts alone would require on the order of 10,000 Starship launches per year, a figure Musk affirmed. SpaceX is gearing up for 10,000 launches a year, Musk said, and possibly 20,000 to 30,000.
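Patel’s launch estimate implies a simple per-launch capacity figure, which is worth making explicit:

```python
# Implication of Patel's estimate: 100 GW of space compute per year via
# ~10,000 Starship launches means each launch must deliver ~10 MW of
# combined compute and solar capacity.
target_gw = 100
launches_per_year = 10_000
mw_per_launch = target_gw * 1_000 / launches_per_year

print(f"{mw_per_launch:.0f} MW of capacity per launch")  # 10 MW
```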
From 6-22 February, the 2026 Winter Olympics in Milan-Cortina d’Ampezzo, Italy will feature not just the world’s top winter athletes but also some of the most advanced sports technologies today. At the first Cortina Olympics in 1956, the Swiss company Omega—based in Biel/Bienne—introduced electronic ski starting gates and launched the first automated timing tech of its kind.
At this year’s Olympics, Swiss Timing, sister company to Omega under the parent Swatch Group, unveils a new generation of motion analysis and computer vision technology. The new technologies on offer include photofinish cameras that capture up to 40,000 images per second.
“We work very closely with athletes,” says Swiss Timing CEO Alain Zobrist, who has overseen Olympic timekeeping since the winter games of 2006 in Torino. “They are the primary customers of our technology and services, and they need to understand how our systems work in order to trust them.”
Using high-resolution cameras and AI algorithms tuned to skaters’ routines, Milan-Cortina Olympic officials expect new figure skating tech to be a key highlight of the games. Omega
Figure Skating Tech Completes the Rotation
Figure skating, the Winter Olympics’ biggest TV draw, is receiving a substantial upgrade at Milano Cortina 2026.
Fourteen 8K resolution cameras positioned around the rink will capture every skater’s movement. “We use proprietary software to interpret the images and visualize athlete movement in a 3D model,” says Zobrist. “AI processes the data so we can track trajectory, position, and movement across all three axes—X, Y, and Z.”
The system measures jump heights, air times, and landing speeds in real time, producing heat maps and graphic overlays that break down each program—all instantaneously. “The time it takes for us to measure the data, until we show a matrix on TV with a graphic, this whole chain needs to take less than 1/10 of a second,” Zobrist says.
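The jump metrics the system reports are linked by basic projectile motion: for a ballistic jump, peak height follows from hang time alone. This is textbook physics offered as an illustration, not Swiss Timing’s proprietary pipeline:

```python
# Illustrative physics behind the jump metrics: for a ballistic jump,
# peak height h = g * t^2 / 8, where t is total air time.
G = 9.81  # gravitational acceleration, m/s^2

def jump_height(air_time_s: float) -> float:
    """Peak height (m) of a ballistic jump with the given hang time."""
    return G * air_time_s ** 2 / 8

# A typical quad jump hangs roughly 0.7 s in the air:
print(f"{jump_height(0.7):.2f} m")  # ~0.60 m
```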
A range of different AI models helps the broadcasters and commentators process each skater’s every move on the ice.
“There is an AI that helps our computer vision system do pose estimation,” he says. “So we have a camera that is filming what is happening, and an AI that helps the camera understand what it’s looking at. And then there is a second type of AI, which is more similar to a large language model that makes sense of the data that we collect.”
Among the features Swiss Timing’s new systems provide is blade angle detection, which gives judges precise data to augment their technical and aesthetic decisions. Zobrist says future versions will also determine whether a given rotation is complete: “If the rotation is 355 degrees, there is going to be a deduction.”
This builds on technology Omega unveiled at the 2024 Paris Olympics for diving, where cameras measured distances between a diver’s head and the board to help judges assess points and penalties to be awarded.
At the 2026 Winter Olympics, ski jumping will feature both camera-based and sensor-based technologies to make the aerial experience more immediate and real-time. Omega
Ski Jumping Tech Finds Make-or-Break Moments
Unlike figure skating’s camera-based approach, ski jumping also relies on physical sensors.
“In ski jumping, we use a small, lightweight sensor attached to each ski, one sensor per ski, not on the athlete’s body,” Zobrist says. The sensors broadcast data on a skier’s speed, acceleration, and positioning in the air. The technology also correlates performance data with wind conditions, revealing environmental factors’ influence on each jump.
High-speed cameras also track each ski jumper, while a stroboscopic camera provides body-position time-lapses throughout the jump.
“The first 20 to 30 meters after takeoff are crucial as athletes move into a V position and lean forward,” Zobrist says. “And both the timing and precision of this movement strongly influence performance.”
The system reveals biomechanical characteristics in real time, he adds, showing how athletes position their bodies during every moment of the takeoff process. The most common mistake in flight position, over-rotation or under-rotation, can now be detailed and diagnosed with precision on every jump.
Bobsleigh: Pushing the Line on the Photo Finish
This year’s Olympics will also feature a “virtual photo finish,” providing comparison images of when different sleds cross the finish line over previous runs.
Omega’s cameras will provide virtual photo finishes at the 2026 Winter Olympics. Omega
“We virtually build a photo finish that shows different sleds from different runs on a single visual reference,” says Zobrist.
After each run, composite images show the margins separating performances. However, more tried-and-true technology still generates official results. A Swiss Timing score, he says, still comes courtesy of photoelectric cells, devices that emit light beams across the finish line and stop the clock when broken. The company offers its virtual photo finish, by contrast, as a visualization tool for spectators and commentators.
In bobsleigh, as in every timed Winter Olympic event, the line between triumph and heartbreak is sometimes measured in milliseconds, or in even shorter intervals. Such precision will, Zobrist says, stem from Omega’s Quantum Timer.
“We can measure time to the millionth of a second, so 6 digits after the comma, with a deviation of about 23 nanoseconds over 24 hours,” Zobrist explained. “These devices are constantly calibrated and used across all timed sports.”
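The quoted figures can be restated as a fractional stability, which is how clock accuracy is usually compared:

```python
# The Quantum Timer's quoted drift of about 23 ns over 24 hours works out
# to a fractional stability of roughly 2.7e-13 (parts in ten trillion),
# far tighter than its one-microsecond display resolution requires.
drift_s = 23e-9          # 23 nanoseconds
day_s = 24 * 3600        # seconds in 24 hours
fractional = drift_s / day_s

print(f"Fractional stability: {fractional:.1e}")  # ~2.7e-13
```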
We may receive a commission on purchases made from links.
With some product segments, two rival companies dominate the market space. Boeing and Airbus sometimes even use the same engines, and chances are good you’re reading this on a mobile phone running software created by either Apple or Google. When it comes to alkaline batteries, the two most recognizable premium brands are Duracell and Energizer. Consumer Reports tested 15 different AA batteries and rated the Duracell Quantum AA highest among alkaline batteries and equal in performance to Energizer Ultimate lithium batteries. Rayovac Fusion Advanced AA batteries also performed well, coming in just ahead of Energizer Advanced lithium and Duracell Coppertop alkaline cells. Energizer EcoAdvanced and Max+ PowerSeal alkaline batteries scored a little lower, in the company of retailer-branded batteries from Amazon, CVS, Walgreens, and Rite Aid.
Most tests (including this one from Consumer Reports) find Duracell and Energizer batteries to be more or less equal in terms of performance, although the Duracell Quantum was a top performer here. Energizer Ultimate lithium batteries more or less matched the Quantum’s performance benchmarks, and Energizer Advanced lithium cells tested slightly better than Duracell Coppertop alkalines. Both brands are highly recommended by SlashGear and Consumer Reports with impressive performance that separates them from the rest of the pack, so buy either with confidence.
How to make batteries last longer
Consumer Reports tested this battery of batteries by measuring runtime in toys and flashlights, but use across a variety of devices might return different results. For example, a TV remote control doesn’t require that much power to function and is used intermittently, so drain on batteries is low. That’s why you may not need to change them for months, or even years. In contrast, Xbox controllers drain batteries very quickly because features like haptic feedback and wireless connectivity draw lots of power.
There are a couple of things you can do to get the most out of your batteries. First, pay attention to the stamped expiration date if you’re buying them in a store. If the batteries aren’t going straight to use in a device, keep them in their original packaging in a cool, dry place. Contrary to a common myth, storing batteries in the fridge or freezer is a bad idea; condensation can form inside the packaging. Put them in an interior closet or water-resistant toolbox, and remove batteries from devices you don’t plan on using for a while. Be careful not to store batteries loose in a box or drawer with other batteries or metal objects; short-circuits could drain your batteries or even cause a fire. A plastic battery organizer will protect batteries when not in use and can be stored in a closet or cabinet. Stacking batteries upright will prevent terminal-to-terminal contact, and always recycle used batteries according to local guidelines.
Anthropic on Thursday released Claude Opus 4.6, a major upgrade to its flagship artificial intelligence model that the company says plans more carefully, sustains longer autonomous workflows, and outperforms competitors including OpenAI’s GPT-5.2 on key enterprise benchmarks — a release that arrives at a tumultuous moment for the AI industry and global software markets.
The launch comes just three days after OpenAI released its own Codex desktop application in a direct challenge to Anthropic’s Claude Code momentum, and amid a $285 billion rout in software and services stocks that investors attribute partly to fears that Anthropic’s AI tools could disrupt established enterprise software businesses.
For the first time, Anthropic’s Opus-class models will feature a 1 million token context window, allowing the AI to process and reason across vastly more information than previous versions. The company also introduced “agent teams” in Claude Code — a research preview feature that enables multiple AI agents to work simultaneously on different aspects of a coding project, coordinating autonomously.
“We’re focused on building the most capable, reliable, and safe AI systems,” an Anthropic spokesperson told VentureBeat about the announcements. “Opus 4.6 is even better at planning, helping solve the most complex coding tasks. And the new agent teams feature means users can split work across multiple agents — one on the frontend, one on the API, one on the migration — each owning its piece and coordinating directly with the others.”
Why OpenAI and Anthropic are locked in an all-out war for enterprise developers
The release intensifies an already fierce competition between Anthropic and OpenAI, the two most valuable privately held AI companies in the world. OpenAI on Monday released a new desktop application for its Codex artificial intelligence coding system, a tool the company says transforms software development from a collaborative exercise with a single AI assistant into something more akin to managing a team of autonomous workers.
AI coding assistants have exploded in popularity over the last year, and OpenAI said more than 1 million developers have used Codex in the past month. The new Codex app is part of OpenAI’s ongoing effort to lure users and market share away from rivals like Anthropic and Cursor.
The timing of Anthropic’s release — just 72 hours after OpenAI’s Codex launch — underscores the breakneck pace of competition in AI development tools. OpenAI faces intensifying competition from Anthropic, which posted the largest share increase of any frontier lab since May 2025, according to a recent Andreessen Horowitz survey. Forty-four percent of enterprises now use Anthropic in production, driven by rapid capability gains in software development since late 2024. The desktop launch is a strategic counter to Claude Code’s momentum.
According to Anthropic’s announcement, Opus 4.6 achieves the highest score on Terminal-Bench 2.0, an agentic coding evaluation, and leads all other frontier models on Humanity’s Last Exam, a complex multi-discipline reasoning test. On GDPval-AA — a benchmark measuring performance on economically valuable knowledge work tasks in finance, legal and other domains — Opus 4.6 outperforms OpenAI’s GPT-5.2 by roughly 144 Elo points, which translates to obtaining a higher score about 70% of the time.
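The conversion from a 144-point gap to a ~70% head-to-head rate follows from the standard Elo win-expectancy formula, which can be verified directly:

```python
# Standard Elo win-expectancy formula: the expected score of the
# higher-rated model given its rating advantage.
def elo_win_probability(rating_gap: float) -> float:
    return 1 / (1 + 10 ** (-rating_gap / 400))

# A 144-point gap corresponds to winning roughly 70% of comparisons:
print(f"{elo_win_probability(144):.1%}")  # ~69.6%
```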
Claude Opus 4.6 leads or matches competitors across most benchmark categories, according to Anthropic’s internal testing. The model showed particular strength in agentic tasks, office work and novel problem-solving. (Source: Anthropic)
Inside Claude Code’s $1 billion revenue milestone and growing enterprise footprint
The stakes are substantial. Asked about Claude Code’s financial performance, the Anthropic spokesperson noted that in November, the company announced that Claude Code reached $1 billion in run rate revenue only six months after becoming generally available in May 2025.
The spokesperson highlighted major enterprise deployments: “Claude Code is used by Uber across teams like software engineering, data science, finance, and trust and safety; wall-to-wall deployment across Salesforce’s global engineering org; tens of thousands of devs at Accenture; and companies across industries like Spotify, Rakuten, Snowflake, Novo Nordisk, and Ramp.”
That enterprise traction has translated into skyrocketing valuations. Earlier this month, Anthropic signed a term sheet for a $10 billion funding round at a $350 billion valuation. Bloomberg reported that Anthropic is simultaneously working on a tender offer that would allow employees to sell shares at that valuation, offering liquidity to staffers who have watched the company’s worth multiply since its 2021 founding.
How Opus 4.6 solves the ‘context rot’ problem that has plagued AI models
One of Opus 4.6’s most significant technical improvements addresses what the AI industry calls “context rot”—the degradation of model performance as conversations grow longer. Anthropic says Opus 4.6 scores 76% on MRCR v2, a needle-in-a-haystack benchmark testing a model’s ability to retrieve information hidden in vast amounts of text, compared to just 18.5% for Sonnet 4.5.
“This is a qualitative shift in how much context a model can actually use while maintaining peak performance,” the company said in its announcement.
The model also supports outputs of up to 128,000 tokens — enough to complete substantial coding tasks or documents without breaking them into multiple requests.
For developers, Anthropic is introducing several new API features alongside the model: adaptive thinking, which allows Claude to decide when deeper reasoning would be helpful rather than requiring a binary on-off choice; four effort levels (low, medium, high, max) to control intelligence, speed and cost tradeoffs; and context compaction, a beta feature that automatically summarizes older context to enable longer-running tasks.
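To make the shape of these options concrete, here is a sketch of what a request using them might look like. The model identifier and the parameter names (`effort`, `context_management`) are assumptions based on the announcement, not confirmed API fields; consult Anthropic’s API documentation for the actual names:

```python
# Hypothetical request payload illustrating the announced features.
# Field names marked below are assumptions, not verified API parameters.
request = {
    "model": "claude-opus-4-6",       # hypothetical model identifier
    "max_tokens": 128_000,            # new extended output ceiling
    "effort": "high",                 # assumed name; one of low/medium/high/max
    "context_management": {           # assumed name for beta context compaction
        "compaction": True,
    },
    "messages": [
        {"role": "user", "content": "Refactor the payment module."},
    ],
}

print(sorted(request.keys()))
```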
Opus 4.6 dramatically outperformed its predecessor on tests measuring how well models retrieve information buried in long documents — a key capability for enterprise coding and research tasks. (Source: Anthropic)
Anthropic’s delicate balancing act: Building powerful AI agents without losing control
Anthropic, which has built its brand around AI safety research, emphasized that Opus 4.6 maintains alignment with its predecessors despite its enhanced capabilities. On the company’s automated behavior audit measuring misaligned behaviors such as deception, sycophancy, and cooperation with misuse, Opus 4.6 “showed a low rate” of problematic responses while also achieving “the lowest rate of over-refusals — where the model fails to answer benign queries — of any recent Claude model.”
When asked how Anthropic thinks about safety guardrails as Claude becomes more agentic, particularly with multiple agents coordinating autonomously, the spokesperson pointed to the company’s published framework: “Agents have tremendous potential for positive impacts in work but it’s important that agents continue to be safe, reliable, and trustworthy. We outlined our framework for developing safe and trustworthy agents last year which shares core principles developers should consider when building agents.”
The company said it has developed six new cybersecurity probes to detect potentially harmful uses of the model’s enhanced capabilities, and is using Opus 4.6 to help find and patch vulnerabilities in open-source software as part of defensive cybersecurity efforts.
Anthropic says its newest model exhibits the lowest rate of problematic behaviors — including deception and sycophancy — of any Claude version tested, even as capabilities have increased. (Source: Anthropic)
Sam Altman vs. Dario Amodei: The Super Bowl ad battle that exposed AI’s deepest divisions
The rivalry between Anthropic and OpenAI has spilled into consumer marketing in dramatic fashion. Both companies will feature prominently during Sunday’s Super Bowl. Anthropic is airing commercials that mock OpenAI’s decision to begin testing advertisements in ChatGPT, with the tagline: “Ads are coming to AI. But not to Claude.”
OpenAI CEO Sam Altman responded by calling the ads “funny” but “clearly dishonest,” posting on X that his company would “obviously never run ads in the way Anthropic depicts them” and that “Anthropic wants to control what people do with AI” while serving “an expensive product to rich people.”
The exchange highlights a fundamental strategic divergence: OpenAI has moved to monetize its massive free user base through advertising, while Anthropic has focused almost exclusively on enterprise sales and premium subscriptions.
The $285 billion stock selloff that revealed Wall Street’s AI anxiety
The launch occurs against a backdrop of historic market volatility in software stocks. A new AI automation tool from Anthropic PBC sparked a $285 billion rout in stocks across the software, financial services and asset management sectors on Tuesday as investors raced to dump shares with even the slightest exposure. A Goldman Sachs basket of US software stocks sank 6%, its biggest one-day decline since April’s tariff-fueled selloff.
The trigger was Anthropic's launch on Friday of plug-ins for its Claude Cowork agent, which enable automated tasks across legal, sales, marketing, and data analysis. The move underscored the AI industry's growing push into industries that can unlock the lucrative enterprise revenue needed to fund massive investments in the technology.
Thomson Reuters plunged 15.83% on Tuesday, its biggest single-day drop on record, and LegalZoom.com sank 19.68%. European legal software providers, including Wolters Kluwer and RELX, owner of LexisNexis, experienced their worst single-day performances in decades.
Not everyone agrees the selloff is warranted. Nvidia CEO Jensen Huang said on Tuesday that fears AI would replace software and related tools were “illogical” and “time will prove itself.” Mark Murphy, head of U.S. enterprise software research at JPMorgan, said in a Reuters report it “feels like an illogical leap” to say a new plug-in from an LLM would “replace every layer of mission-critical enterprise software.”
What Claude’s new PowerPoint integration means for Microsoft’s AI strategy
Among the more notable product announcements: Anthropic is releasing Claude in PowerPoint in research preview, allowing users to create presentations using the same AI capabilities that power Claude’s document and spreadsheet work. The integration puts Claude directly inside a core Microsoft product — an unusual arrangement given Microsoft’s 27% stake in OpenAI.
The Anthropic spokesperson framed the move pragmatically in an interview with VentureBeat: “Microsoft has an official add-in marketplace for Office products with multiple add-ins available to help people with slide creation and iteration. Any developer can build a plugin for Excel or PowerPoint. We’re participating in that ecosystem to bring Claude into PowerPoint. This is about participating in the ecosystem and giving users the ability to work with the tools that they want, in the programs they want.”
Claude’s new PowerPoint integration, shown here analyzing a market research slide, places Anthropic’s AI directly inside a flagship Microsoft product — despite Microsoft’s major investment in rival OpenAI. (Source: Anthropic)
The data behind enterprise AI adoption: Who’s winning and who’s losing ground
Data from a16z’s recent enterprise AI survey suggests both Anthropic and OpenAI face an increasingly competitive landscape. While OpenAI remains the most widely used AI provider in the enterprise, with approximately 77% of surveyed companies using it in production in January 2026, Anthropic’s adoption is rising rapidly — from near-zero in March 2024 to approximately 40% using it in production by January 2026.
The survey data also shows that 75% of Anthropic's enterprise customers use it in production, with 89% either testing or in production. Those figures exceed the comparable rates among OpenAI's customer base: 46% in production and 73% testing or in production.
Enterprise spending on AI continues to accelerate. Average enterprise LLM spend reached $7 million in 2025, up 180% from $2.5 million in 2024, with projections suggesting $11.6 million in 2026 — a 65% increase year-over-year.
OpenAI remains the dominant AI provider in enterprise settings, but Anthropic’s share has surged from near zero in early 2024 to roughly 40 percent of companies using it in production by January 2026. (Source: Andreessen Horowitz survey, January 2026)
Pricing, availability, and what developers need to know about Claude Opus 4.6
Opus 4.6 is available immediately on claude.ai, the Claude API, and major cloud platforms. Developers can access it via claude-opus-4-6 through the API. Pricing remains unchanged at $5 per million input tokens and $25 per million output tokens, with premium pricing of $10/$37.50 for prompts exceeding 200,000 tokens using the 1 million token context window.
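The pricing above is easy to turn into a back-of-envelope cost estimate. The sketch below applies the quoted rates ($5/$25 per million input/output tokens, rising to $10/$37.50 once a prompt exceeds 200,000 tokens); it assumes the premium applies to the whole request, so treat it as an approximation rather than Anthropic's exact billing logic.

```python
# Back-of-envelope cost estimate for a single Claude Opus 4.6 API request,
# using the prices quoted in this article. Assumes the long-context premium
# applies to the entire request once the prompt passes 200,000 tokens.

def opus_46_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in USD for one request."""
    premium = input_tokens > 200_000           # long-context threshold
    in_rate = 10.00 if premium else 5.00       # $ per million input tokens
    out_rate = 37.50 if premium else 25.00     # $ per million output tokens
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# A 50k-token prompt with a 4k-token reply costs about 35 cents:
print(opus_46_cost(50_000, 4_000))
```

At these rates, even a long agentic session with a few hundred thousand output tokens stays in the single-digit-dollar range, which helps explain the enterprise spend figures cited earlier.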
For users who find Opus 4.6 “overthinking” simpler tasks — a characteristic Anthropic acknowledges can add cost and latency — the company recommends adjusting the effort parameter from its default high setting to medium.
The recommendation captures something essential about where the AI industry now stands. These models have grown so capable that their creators must now teach customers how to make them think less. Whether that represents a breakthrough or a warning sign depends entirely on which side of the disruption you’re standing on — and whether you remembered to sell your software stocks before Tuesday.
The bundle includes Mario Kart World and Donkey Kong Bananza, two exclusive Switch 2 titles that show off what the console can do this Presidents' Day. I've bought and played both games (not for $80), and I can say they serve as perfect introductions to the Switch ecosystem, especially if you haven't played many Nintendo games.
While I still own my Lenovo Legion Go S SteamOS handheld for a bigger library of games, the Nintendo Switch 2 is the ideal device if you don't want to constantly tweak power or graphics settings. With Nvidia's DLSS enabled on the custom T239 processor, games are surprisingly smooth to play, with consistent frame rates, which is a big step up from the original Nintendo Switch.
A price hike might be close
(Image credit: Future / Isaiah Williams)
It's not a big discount, but the Nintendo Switch 2 is another game console at risk of a price increase. Multiple reports suggest the threat comes from a combination of tariffs and the ongoing RAM crisis. With Valve also hesitant to announce launch pricing for the Steam Machine because of high memory costs, Nintendo may well announce a Switch 2 price increase soon.
That makes now the best time to make a move if you've been considering a Switch 2, and I can attest to it being worth every cent, especially since its life cycle has only just begun. Expect several exclusives to launch further down the line; The Duskbloods has already caught my eye.
Macropads can be as simple as a few buttons hooked up to a microcontroller to do the USB HID dance and talk to a PC. However, you can go a lot further, too. [CNCDan] demonstrates this well with his sleek macropad build, which throws haptic feedback into the mix.
The build features six programmable macro buttons, situated on either side of a 128×64 OLED display. This setup lets the OLED show icons explaining each button's function. There's also a nice large rotary knob, surrounded by 20 addressable WS2811 LEDs for visual feedback. Underneath the knob lives an encoder, along with a brushless motor of the sort typically used in gimbal builds, driven by a TMC6300 motor driver board. Everything is laced up to a Waveshare RP2040 Plus devboard, which runs the show: it drives the motor, reads the knob and switches, and speaks USB to the PC it's plugged into.
It’s a compact device that nonetheless should prove to be a good productivity booster on the bench. We’ve featured [CNCDan’s] work before, too, such as this nifty DIY VR headset.
Anthropic’s most powerful Claude model is leveling up, with the company saying in a blog post Thursday that Claude Opus 4.6 will be even better at coding and creating projects on the first go.
Claude Opus 4.5 is already a powerful coding model, with its release in November setting up Claude Code’s viral vibe-coding moment over the holidays. Claude’s proven coding prowess and new Cowork feature have Wall Street anxious, with many tech stocks falling in recent weeks, over concerns that people won’t need software products in the future.
Anthropic said the new model focuses its effort on the hardest parts of a problem, like the inner workings of complex apps, while moving through the easy steps more quickly. As a reasoning model, Opus 4.6 breaks a request down into the steps it needs to take and puts together a plan before starting. It will also go back and check its work on those steps, sometimes making multiple attempts without being asked.
Sometimes the model can spend too much effort on a task, which Anthropic said can be resolved by reducing its effort level from the default “high” setting.
The Claude Opus models are available to paying Claude users on the Pro, Max, Team, and Enterprise plans. The cheapest, Pro, costs $20 a month ($17 a month if billed annually). The Pro plan carries usage limits for Opus, which users can hit after a few hours of vibe coding, after which they must wait several hours for the limit to reset.
Aside from Opus, Anthropic has smaller, less powerful models in Sonnet 4.5 and Haiku 4.5.