Tech

The trap Anthropic built for itself

Friday afternoon, just as this interview was getting underway, a news alert flashed across my computer screen: the Trump administration was severing ties with Anthropic, the San Francisco AI company founded in 2021 by Dario Amodei. Defense Secretary Pete Hegseth had invoked a national security law to blacklist the company from doing business with the Pentagon after Amodei refused to allow Anthropic’s tech to be used for mass surveillance of U.S. citizens or for autonomous armed drones that could select and kill targets without human input.

It was a jaw-dropping sequence. Anthropic stands to lose a contract worth up to $200 million and will be barred from working with other defense contractors after President Trump posted on Truth Social directing every federal agency to “immediately cease all use of Anthropic technology.” (Anthropic has since said it will challenge the Pentagon in court.)

Max Tegmark has spent the better part of a decade warning that the race to build ever-more-powerful AI systems is outpacing the world’s ability to govern them. The MIT physicist founded the Future of Life Institute in 2014 and helped organize an open letter — ultimately signed by more than 33,000 people, including Elon Musk — calling for a pause in advanced AI development.

His view of the Anthropic crisis is unsparing: the company, like its rivals, has sown the seeds of its own predicament. Tegmark’s argument doesn’t begin with the Pentagon but with a decision made years earlier — a choice, shared across the industry, to resist binding regulation. Anthropic, OpenAI, Google DeepMind and others have long promised to govern themselves responsibly. Anthropic this week even dropped the central tenet of its own safety pledge — its promise not to release increasingly powerful AI systems until the company was confident they wouldn’t cause harm.

Now, in the absence of rules, there’s not a lot to protect these players, says Tegmark. Here’s more from that interview, edited for length and clarity. You can hear the full conversation this coming week on TechCrunch’s StrictlyVC Download podcast.

When you saw this news just now about Anthropic, what was your first reaction?

The road to hell is paved with good intentions. It’s so interesting to think back a decade ago, when people were so excited about how we were going to make artificial intelligence to cure cancer, to grow the prosperity in America and make America strong. And here we are now where the U.S. government is pissed off at this company for not wanting AI to be used for domestic mass surveillance of Americans, and also not wanting to have killer robots that can autonomously — without any human input at all — decide who gets killed.

Anthropic has staked its entire identity on being a safety-first AI company, and yet it was collaborating with defense and intelligence agencies [dating back to at least 2024]. Do you think that’s at all contradictory?

It is contradictory. If I can give a little cynical take on this — yes, Anthropic has been very good at marketing themselves as all about safety. But if you actually look at the facts rather than the claims, what you see is that Anthropic, OpenAI, Google DeepMind and xAI have all talked a lot about how they care about safety. None of them has come out supporting binding safety regulation the way we have in other industries. And all four of these companies have now broken their own promises. First we had Google — this big slogan, ‘Don’t be evil.’ Then they dropped that. Then they dropped another longer commitment that basically said they promised not to do harm with AI. They dropped that so they could sell AI for surveillance and weapons. OpenAI just dropped the word safety from their mission statement. xAI shut down their whole safety team. And now Anthropic, earlier in the week, dropped their most important safety commitment — the promise not to release powerful AI systems until they were sure they weren’t going to cause harm.

How did companies that made such prominent safety commitments end up in this position?

All of these companies, especially OpenAI and Google DeepMind but to some extent also Anthropic, have persistently lobbied against regulation of AI, saying, ‘Just trust us, we’re going to regulate ourselves.’ And they’ve successfully lobbied. So we right now have less regulation on AI systems in America than on sandwiches. You know, if you want to open a sandwich shop and the health inspector finds 15 rats in the kitchen, he won’t let you sell any sandwiches until you fix it. But if you say, ‘Don’t worry, I’m not going to sell sandwiches, I’m going to sell AI girlfriends for 11-year-olds, and they’ve been linked to suicides in the past, and then I’m going to release something called superintelligence which might overthrow the U.S. government, but I have a good feeling about mine’ — the inspector has to say, ‘Fine, go ahead, just don’t sell sandwiches.’

There’s food safety regulation and no AI regulation.

And this, I feel, all of these companies really share the blame for. Because if they had taken all these promises that they made back in the day for how they were going to be so safe and goody-goody, and gotten together, and then gone to the government and said, ‘Please take our voluntary commitments and turn them into U.S. law that binds even our most sloppy competitors’ — this would have happened instead. We’re in a complete regulatory vacuum. And we know what happens when there’s a complete corporate amnesty: you get thalidomide, you get tobacco companies pushing cigarettes on kids, you get asbestos causing lung cancer. So it’s sort of ironic that their own resistance to having laws saying what’s okay and not okay to do with AI is now coming back and biting them.

There is no law right now against building AI to kill Americans, so the government can just suddenly ask for it. If the companies themselves had earlier come out and said, ‘We want this law,’ they wouldn’t be in this pickle. They really shot themselves in the foot.

The companies’ counter-argument is always the race with China — if American companies don’t do this, Beijing will. Does that argument hold?

Let’s analyze that. The most common talking point from the lobbyists for the AI companies — they’re now better funded and more numerous than the lobbyists from the fossil fuel industry, the pharma industry and the military-industrial complex combined — is that whenever anyone proposes any kind of regulation, they say, ‘But China.’ So let’s look at that. China is in the process of banning AI girlfriends outright. Not just age limits — they’re looking at banning all anthropomorphic AI. Why? Not because they want to please America but because they feel this is screwing up Chinese youth and making China weak. Obviously, it’s making American youth weak, too.

And when people say we have to race to build superintelligence so we can win against China — when we don’t actually know how to control superintelligence, so that the default outcome is that humanity loses control of Earth to alien machines — guess what? The Chinese Communist Party really likes control. Who in their right mind thinks that Xi Jinping is going to tolerate some Chinese AI company building something that overthrows the Chinese government? No way. It’s clearly really bad for the American government too if it gets overthrown in a coup by the first American company to build superintelligence. This is a national security threat.

That’s compelling framing — superintelligence as a national security threat, not an asset. Do you see that view gaining traction in Washington?

I think if people in the national security community listen to Dario Amodei describe his vision — he’s given a famous speech where he says we’ll soon have a country of geniuses in a data center — they might start thinking: wait, did Dario just use the word ‘country’? Maybe I should put that country of geniuses in a data center on the same threat list I’m keeping tabs on, because that sounds threatening to the U.S. government. And I think fairly soon, enough people in the U.S. national security community are going to realize that uncontrollable superintelligence is a threat, not a tool. This is totally analogous to the Cold War. There was a race for dominance — economic and military — against the Soviet Union. We Americans won that one without ever engaging in the second race, which was to see who could put the most nuclear craters in the other superpower. People realized that was just suicide. No one wins. The same logic applies here.

What does all of this mean for the pace of AI development more broadly? How close do you think we are to the systems you’re describing?

Six years ago, almost every expert in AI I knew predicted we were decades away from having AI that could master language and knowledge at human level — maybe 2040, maybe 2050. They were all wrong, because we already have that now. We’ve seen AI progress quite rapidly from high school level to college level to PhD level to university professor level in some areas. Last year, AI won the gold medal at the International Mathematical Olympiad, which is about as difficult as human tasks get. I wrote a paper together with Yoshua Bengio, Dan Hendrycks, and other top AI researchers just a few months ago giving a rigorous definition of AGI. According to this, GPT-4 was 27% of the way there. GPT-5 was 57% of the way there. So we’re not there yet, but going from 27% to 57% that quickly suggests it might not be that long.

When I lectured to my students yesterday at MIT, I told them that even if it takes four years, that means when they graduate, they might not be able to get any jobs anymore. It’s certainly not too soon to start preparing for it.

Anthropic is now blacklisted. I’m curious to see what happens next — will the other AI giants stand with them and say, we won’t do this either? Or does someone like xAI raise their hand and say, Anthropic didn’t want that contract, we’ll take it? [Editor’s note: Hours after the interview, OpenAI announced its own deal with the Pentagon.]

Last night, Sam Altman came out and said he stands with Anthropic and has the same red lines. I admire him for the courage of saying that. Google, as of when we started this interview, had said nothing. If they just stay quiet, I think that’s incredibly embarrassing for them as a company, and a lot of their staff will feel the same. We haven’t heard anything from xAI yet either. So it’ll be interesting to see. Basically, there’s this moment where everybody has to show their true colors.

Is there a version of this where the outcome is actually good?

Yes, and this is why I’m actually optimistic in a strange way. There’s such an obvious alternative here. If we just start treating AI companies like any other companies — drop the corporate amnesty — they would clearly have to do something like a clinical trial before they released something this powerful, and demonstrate to independent experts that they know how to control it. Then we get a golden age with all the good stuff from AI, without the existential angst. That’s not the path we’re on right now. But it could be.

Tech

I wish Apple made this sleek wireless power bank, but it works just fine with the iPhone

A new magnetic wireless power bank from Xiaomi is gaining attention – not because it’s an Android accessory, but because it feels like something Apple should have made. Its compact design, strong magnetic grip, and clean aesthetic make it look and behave like a premium iPhone-compatible accessory, offering a sleeker, more polished experience than many MagSafe alternatives.

And yes – it works flawlessly with the Apple iPhone, despite not being an Apple product.

A premium magnetic power bank that feels like it belongs to Apple’s ecosystem

Xiaomi’s new magnetic wireless power bank instantly stands out because of its ultra-thin profile, polished finish, and minimalist design. It clips onto the back of an iPhone with a firm, MagSafe-compatible lock, delivering wireless charging without wobbling or shifting in your hand.

While Apple’s official MagSafe Battery Pack was discontinued and third-party options vary in quality, Xiaomi’s take feels refined – almost intentional – with edges and materials that mimic Apple’s industrial design language more than typical Android-centric accessories.

Users who prefer pocketable designs will appreciate how easily it slips into a bag or pocket without adding bulk. The lightweight build makes it ideal for travel, commuting, or extended outdoor use, especially for iPhone models with aging batteries.

Beyond looks, the power bank is surprisingly capable

Xiaomi equips the unit with a 5000mAh battery, offering enough power to recharge most iPhones. The wireless charging surface delivers stable output, and the magnets ensure the phone stays aligned during use – a key issue for many cheaper MagSafe clones.
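A quick back-of-envelope check shows why a 5,000 mAh bank is roughly a single full iPhone charge rather than two. The figures below other than the 5,000 mAh rating are assumptions, not from the article: a 3.7 V nominal cell voltage, roughly 65% end-to-end wireless efficiency, and an iPhone battery of about 12.9 Wh.

```python
# Back-of-envelope: can a 5,000 mAh bank fully recharge an iPhone wirelessly?
# All figures except the 5,000 mAh rating are illustrative assumptions.

BANK_MAH = 5000
CELL_V = 3.7                      # assumed nominal Li-ion cell voltage
WIRELESS_EFFICIENCY = 0.65        # assumed magnetic wireless transfer efficiency
IPHONE_BATTERY_WH = 12.9          # assumed, roughly an iPhone 15-class battery

bank_wh = BANK_MAH / 1000 * CELL_V            # energy stored in the bank
delivered_wh = bank_wh * WIRELESS_EFFICIENCY  # energy left after wireless losses
full_charges = delivered_wh / IPHONE_BATTERY_WH

print(f"Stored energy:    {bank_wh:.1f} Wh")
print(f"Delivered (est.): {delivered_wh:.1f} Wh")
print(f"Full charges:     {full_charges:.2f}")
```

Under those assumptions the bank delivers around 12 Wh, or a bit over 90% of one full charge — which is why banks in this class are marketed as “enough to recharge most iPhones” rather than multiple times over.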

There’s also a wired output option for faster, cable-based charging when needed, giving it versatility for users who switch between devices. Xiaomi also includes safety layers for temperature control, foreign object detection, and overvoltage protection, making it feel dependable for all-day use.

What sets it apart is the attention to ergonomic usability. You can comfortably hold the phone while it charges, use it while gaming or streaming, or leave it in a pocket – and it still stays aligned.

Why this accessory matters in the broader market

With the iPhone’s shift to USB-C and the growing popularity of magnetic charging accessories, users are now looking for power banks that are not just functional but designed to blend seamlessly with their device. Apple’s exit from the MagSafe battery category left a gap that accessory makers are trying to fill. Xiaomi’s new wireless power bank stands out by offering a level of design polish and efficiency rare in the Android-first accessory landscape.

This also reflects a wider industry trend: top OEMs are expanding beyond traditional ecosystems. Accessories once thought to be Android-exclusive or Apple-exclusive are now intentionally designed with cross-device compatibility in mind.

For consumers, it means more options and better value without sacrificing design or performance.

Why you may care, even if you’re deep in Apple’s ecosystem

If you own an iPhone and need a reliable wireless power bank that looks premium, charges consistently, and doesn’t cost a fortune, this accessory is one of the best new options available globally. It’s especially appealing for users of the iPhone 13, 14, and 15 series, where battery life naturally declines over time.

It also appeals to travelers, students, creators, or anyone who needs clean, cable-free charging on the move. Given its slim profile, it could fit seamlessly into an existing Apple setup without feeling out of place.

Should you buy it?

Xiaomi’s new wireless power bank is already rolling out globally through its online store and regional partners. As the magnetic charging category continues to grow – especially with Apple expected to expand Qi2 support across future devices – more brands will likely release premium, iPhone-friendly accessories with similar design polish.

For now, this sleek wireless power bank stands as one of the nicest options you can buy for your iPhone, even if it didn’t come from Cupertino.

Tech

Xcode with vibecoding AI agents to help build apps is now available

Apple has released Xcode 26.3 with support for autonomous coding agents that can directly analyze projects, modify files, and assist developers inside the official development environment.

Xcode now runs with AI agents

Xcode, Apple’s central tool for building apps across various devices, is expanding its role with version 26.3. AI agents can actively participate in development, offering suggestions and documentation help.
The release includes Swift 6.2.3 and updated SDKs, but the defining change is agentic coding. Xcode is now a platform where AI helps developers plan, write, and maintain software.
Continue Reading on AppleInsider | Discuss on our Forums

Tech

Viral ad shows aged Musk, Altman, and Bezos using jobless humans to power AI


The ad, set in 2036, sees Musk, Altman, and Bezos talking about their co-founded company, Energym. The eerily accurate AI-generated versions talk about how 80% of people had lost their jobs by 2030, leaving them with no money or purpose – but plenty of free time.

Tech

Blender shelves iPad app, says it's focusing on Android tablets first

Blender’s long-anticipated native iPad app has been placed on hold as developers shift tablet priorities elsewhere.

A previous mockup of the potential Blender for iPad app

In June 2025, Blender announced that it would be creating a native iPad version of its popular 3D creation software. According to the team, the app would be released for the iPad Pro, though they did not provide a timeline for release.
Unfortunately, it doesn’t seem like we’ll be getting one anytime soon, either.
Continue Reading on AppleInsider | Discuss on our Forums

Tech

16-inch M4 Pro MacBook Pro vs. Acer Predator Helios Neo 14: Apple's never been so far behind

Apple’s 16-inch MacBook Pro has been my notebook of choice in recent years, but it isn’t perfect, and that led me to check out the competition — namely, Acer’s $2,000 Predator Helios Neo 14 AI laptop.

Two very different notebooks at very similar price points

My current notebook is an M4 Pro 16-inch MacBook Pro, and that replaced my M1 Pro 16-inch MacBook Pro after it took an unfortunate tumble from a table. As you might expect, I’m very comfortable with macOS, and switching to Windows full-time isn’t in the cards.
But that doesn’t mean that there isn’t room for a Windows PC in my life. Because as much as Apple might try to tell you otherwise, Mac gaming just doesn’t quite cut it.
Continue Reading on AppleInsider | Discuss on our Forums

Tech

Vend-o-Vision: Trading Quarters For Watching TV In Public

The timer mechanism of the Vend-o-Vision. (Credit: SpaceTime Junction, YouTube)

There was a time before portable TVs and personal media players when the idea of putting coin-operated TVs everywhere, from restaurants to airports and laundromats, would have seemed like a solid business model. Thus was born the Vend-o-Vision by Mini-TV USA, which presented itself as a cash earner for businesses and a way to make their customers even happier. One of these new-in-box units recently made its way over to [Mark] of the SpaceTime Junction YouTube channel.

This unit is very simple, with what appears to be an off-the-shelf Panasonic black-and-white TV with UHF and VHF reception capability, inside a metal box that contains the timer mechanism, which is linked to the coin mechanism. Depending on a physical slider with three positions, you get anywhere from 10 to 20 minutes per quarter, with the customer having to tune into the station themselves using the TV’s controls. A counter mechanism is provided as an option.

Time to enjoy your favorite TV shows. (Credit: SpaceTime Junction, YouTube)

As would be expected from a new-in-box unit, after chiseling off the 30-odd-year-old Styrofoam packaging, it fires right up and works fine. Of course, it’s a small black-and-white TV, so it’s not incredibly useful, and clearly wasn’t even back in 1989 when the Vend-o-Vision first appeared.

After some finagling with adapters, [Mark] gets everyone’s favorite movie playing on the tiny screen, giving us the first glimpse of what it would have been like to gaze at this miracle of technology back around the early 1990s in a noisy laundromat or restaurant. One can hardly imagine why it didn’t catch on.

We can see a patent for this appear in a 1990 scan of the USPTO’s gazette, where it’s listed as first entering commercial operation on November 29, 1989. The system was short-lived, however: in 1995 the FTC settled with the company over deceptive practices, as it had overinflated the projected earnings per TV when it started flaunting the system at trade shows in 1990. A few years prior, Mini TV USA appears to have already ceased operations, making these remaining Vend-o-Vision units quite rare indeed. These types of coin-operated TVs were usually found in public places or hotels, but we’ve seen coin-operated TVs that briefly appeared in homes, too.

Tech

Running A Desktop PC Off AA Alkaline Cells

Everyone is probably familiar with the concept of battery-powered devices, but generally, this involves a laptop with a beefy battery pack and hardware optimized for low power draw. You could also do the complete opposite and try to run a desktop PC off alkaline AA cells, as [ScuffedBits] recently did out of morbid curiosity. Exactly how many alkaline cells does it take to run a desktop PC for any reasonable amount of time?

One nice thing about using batteries with a desktop PC is that you can ditch the entire AC-DC power conversion step and instead use a DC-DC adapter like the well-known PicoATX and its many clones. These just take in 12 VDC and tend to have a fairly wide input voltage range, which is useful when your batteries begin to run out of juice. In this case, just above 10 VDC seemed to be the cut-off point for the used DC-DC adapter.

In the end, [ScuffedBits] used what looks like 56 alkaline AA cells connected in both parallel and series, along with two series-connected 6,800 µF, 40V electrolytic capacitors to buffer the spikes in power demand, after early experiments showed that the cells just cannot provide power that quickly, although admittedly the initial thin wiring didn’t help either. With alkaline rather than carbon-zinc AA cells, improved wiring, and some buffer capacitors, it turns out that you can indeed run a desktop PC off AA cells, if only just about long enough for a small game of Minesweeper.
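A quick sketch of the pack arithmetic shows why this is theoretically plausible on paper yet so short-lived in practice. The wiring below is an assumption — the exact series/parallel split isn’t specified here — using 8 cells per string for roughly 12 V nominal, 7 strings in parallel to reach 56 cells, an optimistic 2,000 mAh usable per cell, and a hypothetical 60 W system load.

```python
# Estimating the paper capacity of a 56-cell AA pack feeding a 12 V DC-DC
# ATX adapter. All figures are illustrative assumptions, not measurements.

SERIES = 8          # assumed cells per string -> 8 * 1.5 V = 12 V nominal
PARALLEL = 7        # assumed strings in parallel (8 * 7 = 56 cells total)
CELL_V = 1.5        # nominal alkaline cell voltage
CELL_MAH = 2000     # optimistic; alkaline capacity drops sharply at high drain

pack_v = SERIES * CELL_V                 # nominal pack voltage
pack_ah = PARALLEL * CELL_MAH / 1000     # nominal pack capacity
pack_wh = pack_v * pack_ah               # ideal stored energy

LOAD_W = 60                              # hypothetical desktop PC draw
runtime_min = pack_wh / LOAD_W * 60      # ideal-case runtime in minutes

print(f"Pack: {SERIES * PARALLEL} cells, {pack_v:.0f} V, {pack_wh:.0f} Wh")
print(f"Ideal runtime at {LOAD_W} W: {runtime_min:.0f} min")
```

On paper that’s a couple of hours, but the gap between that figure and “one game of Minesweeper” is exactly the point: alkaline internal resistance collapses the usable capacity under a bursty high-current load, which is why the buffer capacitors were needed at all.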

Amusingly, the small LCD monitor used in the experiment drew so little power that it happily ran on eight NiMH cells for much longer, highlighting just how important power conservation is for battery-powered devices. We wonder if you could marry this project to a battery project we saw and end up with something practically portable?

Tech

Silicon Valley’s Ideas Mocked Over Penchant for Favoring Young Entrepreneurs with ‘Agency’

In a 9,000-word exposé, a writer for Harper’s visited San Francisco’s young entrepreneurs in September to mockingly profile “tech’s new generation and the end of thinking.”

There’s Cluely founder Roy Lee. (“His grand contribution to the world was a piece of software that told people what to do.”) And the Rationalist movement’s Scott Alexander, who “would probably have a very easy time starting a suicide cult…”

Alexander’s relationship with the AI industry is a strange one. “In theory, we think they’re potentially destroying the world and are evil and we hate them,” he told me. In practice, though, the entire industry is essentially an outgrowth of his blog’s comment section… “Many of them were specifically thinking, I don’t trust anybody else with superintelligence, so I’m going to create it and do it well.” Somehow, a movement that believes AI is incredibly dangerous and needs to be pursued carefully ended up generating a breakneck artificial arms race.

There’s a fascinating story about teenaged founder Eric Zhu (who only recently turned 18):

Clients wanted to take calls during work hours, so he would speak to them from his school bathroom. “I convinced my counselor that I had prostate issues… I would buy hall passes from drug dealers to get out of class, to have business meetings.” Soon he was taking Zoom calls with a U.S. senator to discuss tech regulation… Next, he built his own venture-capital fund, managing $20 million. At one point cops raided the bathroom looking for drug dealers while Eric was busy talking with an investor. Eventually, the school got sick of Eric’s misuse of the facilities and kicked him out. He moved to San Francisco.

Eric made all of this sound incredibly easy. You hang out in some Discord servers, make a few connections with the right people; next thing you know, you’re a millionaire… Eric didn’t think there was anything particularly special about himself. Why did he, unlike any of his classmates, start a $20 million VC fund? “I think I was just bored. Honestly, I was really bored.” Did he think anyone could do what he did? “Yeah, I think anyone genuinely can.”
The article concludes Silicon Valley’s investors are rewarding young people with “agency”. Although “As far as I could tell, being a highly agentic individual had less to do with actually doing things and more to do with constantly chasing attention online.” Like X.com user Donald Boat, who successfully baited Sam Altman into buying him a gaming PC in “a brutally simplified miniature of the entire VC economy.” (After which “People were giving him stuff for no reason except that Altman had already done it, and they didn’t want to be left out of the trend.”)

Shortly before I arrived at the Cheesecake Factory, [Donald Boat] texted to let me know that he’d been drinking all day, so when I met him I thought he was irretrievably wasted. In fact, it turned out, he was just like that all the time… He seemed to have a constant roster of projects on the go. He’d sent me occasional photos of his exploits. He went down to L.A. to see Oasis and ended up in a poker game with a group of weapons manufacturers. “I made a bunch of jokes about sending all their poker money to China,” he said, “and they were not pleased….”

“I don’t use that computer and I think video games are a waste of time. I spent all the money I made from going viral on Oasis tickets.” As far as he was concerned, the fact that tech people were tripping over themselves to take part in his stunt just confirmed his generally low impression of them. “They have too much money and nothing going on…” Ever since his big viral moment, he’d been suddenly inundated with messages from startup drones who’d decided that his clout might be useful to them. One had offered to fly him out to the French Riviera.
The author’s conclusion? “It did not seem like a good idea to me that some of the richest people in the world were no longer rewarding people for having any particular skills, but simply for having agency.”

Tech

Hackers can now track your car's location through tire pressure sensors

The device in many automobiles that warns drivers when their tire pressure is low transmits the data in unencrypted cleartext and carries a unique identifier for each vehicle. Researchers from IMEDA Networks and several European universities recently discovered that relatively inexpensive wireless devices can track Tire Pressure Monitoring System (TPMS)…
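The core of the attack is simple correlation: because each sensor broadcasts a static identifier in cleartext, a passive receiver only needs to log who it heard, where, and when, then join those sightings on the ID. The toy sketch below illustrates the idea; the packet fields and IDs are invented for illustration and do not reflect any real TPMS frame layout.

```python
# Toy illustration of why an unencrypted, per-vehicle TPMS identifier
# enables tracking. Sensor IDs and receiver names are made up.

from collections import defaultdict

# Hypothetical cleartext broadcasts captured by fixed roadside receivers,
# each logged as (sensor ID, receiver location, timestamp in seconds).
sightings = [
    {"sensor_id": "0xA1B2C3", "receiver": "bridge-1", "t": 100},
    {"sensor_id": "0x99F0E1", "receiver": "bridge-1", "t": 104},
    {"sensor_id": "0xA1B2C3", "receiver": "garage-entrance", "t": 1830},
    {"sensor_id": "0xA1B2C3", "receiver": "bridge-1", "t": 30210},
]

# Group sightings by the static ID: each vehicle's movement history
# falls out directly, with no cryptography to break.
tracks = defaultdict(list)
for s in sightings:
    tracks[s["sensor_id"]].append((s["t"], s["receiver"]))

for sensor, path in tracks.items():
    stops = " -> ".join(loc for _, loc in sorted(path))
    print(f"{sensor}: {stops}")
```

Randomizing or encrypting the identifier would break this join; with a fixed cleartext ID, every receiver the car passes adds another point to its track.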

Tech

Alaska could be the next state to crack down on AI-generated CSAM and restrict kids’ social media use

Alaska’s House of Representatives unanimously passed HB47, a bill that imposes sweeping limits on when and how minors use social media apps, along with bans on generating or distributing harmful deepfakes of children.

The bill’s original form focused on prohibiting the possession and distribution of AI-generated sexually explicit images of children, but Alaska lawmakers added amendments imposing social media restrictions. The proposed limits include a statewide curfew on social media use between 10:30 PM and 6:30 AM, a ban on “addictive design features,” and a requirement that platforms verify user ages and obtain parental consent for minors.

While the House bill saw 39 votes in favor and zero against, the amendments offered some hints at potential upcoming revisions. Before the bill went to a vote, some of the House representatives expressed concern about adding such broad rules on social media without consulting the companies behind them first.

The bill still has to make its way through the Alaska State Senate, which has already introduced a companion bill, and then past the governor. Alaska is following in the footsteps of many other states; the House even modeled HB47’s social media amendments after Utah’s. While Utah was the first to propose social media restrictions for kids, its law was later met with a preliminary injunction.

