The trap Anthropic built for itself

Friday afternoon, just as this interview was getting underway, a news alert flashed across my computer screen: the Trump administration was severing ties with Anthropic, the San Francisco AI company founded in 2021 by Dario Amodei. Defense Secretary Pete Hegseth had invoked a national security law to blacklist the company from doing business with the Pentagon after Amodei refused to allow Anthropic’s tech to be used for mass surveillance of U.S. citizens or for autonomous armed drones that could select and kill targets without human input.

It was a jaw-dropping sequence. Anthropic stands to lose a contract worth up to $200 million and will be barred from working with other defense contractors after President Trump posted on Truth Social directing every federal agency to “immediately cease all use of Anthropic technology.” (Anthropic has since said it will challenge the Pentagon in court.)

Max Tegmark has spent the better part of a decade warning that the race to build ever-more-powerful AI systems is outpacing the world’s ability to govern them. The MIT physicist founded the Future of Life Institute in 2014 and helped organize an open letter — ultimately signed by more than 33,000 people, including Elon Musk — calling for a pause in advanced AI development.

His view of the Anthropic crisis is unsparing: the company, like its rivals, has sown the seeds of its own predicament. Tegmark’s argument doesn’t begin with the Pentagon but with a decision made years earlier — a choice, shared across the industry, to resist binding regulation. Anthropic, OpenAI, Google DeepMind and others have long promised to govern themselves responsibly. Anthropic this week even dropped the central tenet of its own safety pledge — its promise not to release increasingly powerful AI systems until the company was confident they wouldn’t cause harm.

Now, in the absence of rules, there’s not a lot to protect these players, says Tegmark. Here’s more from that interview, edited for length and clarity. You can hear the full conversation this coming week on TechCrunch’s StrictlyVC Download podcast.

When you saw this news just now about Anthropic, what was your first reaction?

The road to hell is paved with good intentions. It’s so interesting to think back a decade ago, when people were so excited about how we were going to make artificial intelligence to cure cancer, to grow the prosperity in America and make America strong. And here we are now where the U.S. government is pissed off at this company for not wanting AI to be used for domestic mass surveillance of Americans, and also not wanting to have killer robots that can autonomously — without any human input at all — decide who gets killed.

Anthropic has staked its entire identity on being a safety-first AI company, and yet it was collaborating with defense and intelligence agencies [dating back to at least 2024]. Do you think that’s at all contradictory?

It is contradictory. If I can give a little cynical take on this — yes, Anthropic has been very good at marketing themselves as all about safety. But if you actually look at the facts rather than the claims, what you see is that Anthropic, OpenAI, Google DeepMind and xAI have all talked a lot about how they care about safety. None of them has come out supporting binding safety regulation the way we have in other industries. And all four of these companies have now broken their own promises. First we had Google — this big slogan, ‘Don’t be evil.’ Then they dropped that. Then they dropped another longer commitment that basically said they promised not to do harm with AI. They dropped that so they could sell AI for surveillance and weapons. OpenAI just dropped the word safety from their mission statement. xAI shut down their whole safety team. And now Anthropic, earlier in the week, dropped their most important safety commitment — the promise not to release powerful AI systems until they were sure they weren’t going to cause harm.

How did companies that made such prominent safety commitments end up in this position?

All of these companies, especially OpenAI and Google DeepMind but to some extent also Anthropic, have persistently lobbied against regulation of AI, saying, ‘Just trust us, we’re going to regulate ourselves.’ And they’ve successfully lobbied. So we right now have less regulation on AI systems in America than on sandwiches. You know, if you want to open a sandwich shop and the health inspector finds 15 rats in the kitchen, he won’t let you sell any sandwiches until you fix it. But if you say, ‘Don’t worry, I’m not going to sell sandwiches, I’m going to sell AI girlfriends for 11-year-olds, and they’ve been linked to suicides in the past, and then I’m going to release something called superintelligence which might overthrow the U.S. government, but I have a good feeling about mine’ — the inspector has to say, ‘Fine, go ahead, just don’t sell sandwiches.’

There’s food safety regulation and no AI regulation.

And this, I feel, all of these companies really share the blame for. Because if they had taken all these promises that they made back in the day for how they were going to be so safe and goody-goody, and gotten together, and then gone to the government and said, ‘Please take our voluntary commitments and turn them into U.S. law that binds even our most sloppy competitors’ — this would have happened instead. We’re in a complete regulatory vacuum. And we know what happens when there’s a complete corporate amnesty: you get thalidomide, you get tobacco companies pushing cigarettes on kids, you get asbestos causing lung cancer. So it’s sort of ironic that their own resistance to having laws saying what’s okay and not okay to do with AI is now coming back and biting them.

There is no law right now against building AI to kill Americans, so the government can just suddenly ask for it. If the companies themselves had earlier come out and said, ‘We want this law,’ they wouldn’t be in this pickle. They really shot themselves in the foot.

The companies’ counter-argument is always the race with China — if American companies don’t do this, Beijing will. Does that argument hold?

Let’s analyze that. The most common talking point from the lobbyists for the AI companies — they’re now better funded and more numerous than the lobbyists from the fossil fuel industry, the pharma industry and the military-industrial complex combined — is that whenever anyone proposes any kind of regulation, they say, ‘But China.’ So let’s look at that. China is in the process of banning AI girlfriends outright. Not just age limits — they’re looking at banning all anthropomorphic AI. Why? Not because they want to please America but because they feel this is screwing up Chinese youth and making China weak. Obviously, it’s making American youth weak, too.

And when people say we have to race to build superintelligence so we can win against China — when we don’t actually know how to control superintelligence, so that the default outcome is that humanity loses control of Earth to alien machines — guess what? The Chinese Communist Party really likes control. Who in their right mind thinks that Xi Jinping is going to tolerate some Chinese AI company building something that overthrows the Chinese government? No way. It’s clearly really bad for the American government too if it gets overthrown in a coup by the first American company to build superintelligence. This is a national security threat.

That’s compelling framing — superintelligence as a national security threat, not an asset. Do you see that view gaining traction in Washington?

I think if people in the national security community listen to Dario Amodei describe his vision — he’s given a famous speech where he says we’ll soon have a country of geniuses in a data center — they might start thinking: wait, did Dario just use the word ‘country’? Maybe I should put that country of geniuses in a data center on the same threat list I’m keeping tabs on, because that sounds threatening to the U.S. government. And I think fairly soon, enough people in the U.S. national security community are going to realize that uncontrollable superintelligence is a threat, not a tool. This is totally analogous to the Cold War. There was a race for dominance — economic and military — against the Soviet Union. We Americans won that one without ever engaging in the second race, which was to see who could put the most nuclear craters in the other superpower. People realized that was just suicide. No one wins. The same logic applies here.

What does all of this mean for the pace of AI development more broadly? How close do you think we are to the systems you’re describing?

Six years ago, almost every expert in AI I knew predicted we were decades away from having AI that could master language and knowledge at human level — maybe 2040, maybe 2050. They were all wrong, because we already have that now. We’ve seen AI progress quite rapidly from high school level to college level to PhD level to university professor level in some areas. Last year, AI won the gold medal at the International Mathematical Olympiad, which is about as difficult as human tasks get. I wrote a paper together with Yoshua Bengio, Dan Hendrycks, and other top AI researchers just a few months ago giving a rigorous definition of AGI. According to this, GPT-4 was 27% of the way there. GPT-5 was 57% of the way there. So we’re not there yet, but going from 27% to 57% that quickly suggests it might not be that long.

When I lectured to my students yesterday at MIT, I told them that even if it takes four years, that means when they graduate, they might not be able to get any jobs anymore. It’s certainly not too soon to start preparing for it.

Anthropic is now blacklisted. I’m curious to see what happens next — will the other AI giants stand with them and say, we won’t do this either? Or does someone like xAI raise their hand and say, Anthropic didn’t want that contract, we’ll take it? [Editor’s note: Hours after the interview, OpenAI announced its own deal with the Pentagon.]

Last night, Sam Altman came out and said he stands with Anthropic and has the same red lines. I admire him for the courage of saying that. Google, as of when we started this interview, had said nothing. If they just stay quiet, I think that’s incredibly embarrassing for them as a company, and a lot of their staff will feel the same. We haven’t heard anything from xAI yet either. So it’ll be interesting to see. Basically, there’s this moment where everybody has to show their true colors.

Is there a version of this where the outcome is actually good?

Yes, and this is why I’m actually optimistic in a strange way. There’s such an obvious alternative here. If we just start treating AI companies like any other companies — drop the corporate amnesty — they would clearly have to do something like a clinical trial before they released something this powerful, and demonstrate to independent experts that they know how to control it. Then we get a golden age with all the good stuff from AI, without the existential angst. That’s not the path we’re on right now. But it could be.

Silicon Valley’s Ideas Mocked Over Penchant for Favoring Young Entrepreneurs with ‘Agency’

In a 9,000-word exposé, a writer for Harper’s visited San Francisco’s young entrepreneurs in September to mockingly profile “tech’s new generation and the end of thinking.”

There’s Cluely founder Roy Lee. (“His grand contribution to the world was a piece of software that told people what to do.”) And the Rationalist movement’s Scott Alexander, who “would probably have a very easy time starting a suicide cult…”

Alexander’s relationship with the AI industry is a strange one. “In theory, we think they’re potentially destroying the world and are evil and we hate them,” he told me. In practice, though, the entire industry is essentially an outgrowth of his blog’s comment section… “Many of them were specifically thinking, I don’t trust anybody else with superintelligence, so I’m going to create it and do it well.” Somehow, a movement that believes AI is incredibly dangerous and needs to be pursued carefully ended up generating a breakneck artificial arms race.

There’s a fascinating story about teenaged founder Eric Zhu (who only recently turned 18):

Clients wanted to take calls during work hours, so he would speak to them from his school bathroom. “I convinced my counselor that I had prostate issues… I would buy hall passes from drug dealers to get out of class, to have business meetings.” Soon he was taking Zoom calls with a U.S. senator to discuss tech regulation… Next, he built his own venture-capital fund, managing $20 million. At one point cops raided the bathroom looking for drug dealers while Eric was busy talking with an investor. Eventually, the school got sick of Eric’s misuse of the facilities and kicked him out. He moved to San Francisco.

Eric made all of this sound incredibly easy. You hang out in some Discord servers, make a few connections with the right people; next thing you know, you’re a millionaire… Eric didn’t think there was anything particularly special about himself. Why did he, unlike any of his classmates, start a $20 million VC fund? “I think I was just bored. Honestly, I was really bored.” Did he think anyone could do what he did? “Yeah, I think anyone genuinely can.”

The article concludes that Silicon Valley’s investors are rewarding young people with “agency,” although “As far as I could tell, being a highly agentic individual had less to do with actually doing things and more to do with constantly chasing attention online.” Like X.com user Donald Boat, who successfully baited Sam Altman into buying him a gaming PC in “a brutally simplified miniature of the entire VC economy.” (After which “People were giving him stuff for no reason except that Altman had already done it, and they didn’t want to be left out of the trend.”)

Shortly before I arrived at the Cheesecake Factory, [Donald Boat] texted to let me know that he’d been drinking all day, so when I met him I thought he was irretrievably wasted. In fact, it turned out, he was just like that all the time… He seemed to have a constant roster of projects on the go. He’d sent me occasional photos of his exploits. He went down to L.A. to see Oasis and ended up in a poker game with a group of weapons manufacturers. “I made a bunch of jokes about sending all their poker money to China,” he said, “and they were not pleased….”

“I don’t use that computer and I think video games are a waste of time. I spent all the money I made from going viral on Oasis tickets.” As far as he was concerned, the fact that tech people were tripping over themselves to take part in his stunt just confirmed his generally low impression of them. “They have too much money and nothing going on…” Ever since his big viral moment, he’d been suddenly inundated with messages from startup drones who’d decided that his clout might be useful to them. One had offered to fly him out to the French Riviera.

The author’s conclusion? “It did not seem like a good idea to me that some of the richest people in the world were no longer rewarding people for having any particular skills, but simply for having agency.”

Hackers can now track your car's location through tire pressure sensors

The device in many automobiles that warns drivers when their tire pressure is low transmits its data in cleartext and carries a unique identifier for each vehicle. Researchers from IMEDA Networks and several European universities recently discovered that relatively inexpensive wireless devices can track Tire Pressure Monitoring System (TPMS)…
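
The tracking risk follows directly from those two properties: because each sensor’s ID is static and broadcast in the clear, anyone logging packets at a few fixed receivers can correlate sightings into a route. A minimal sketch of that correlation step in Python (the sensor IDs, locations, and log format are invented for illustration; real TPMS frames vary by vendor):

```python
from collections import defaultdict

# Hypothetical roadside receivers logging TPMS broadcasts.
# Each observation: (sensor_id, receiver_location, timestamp).
# The static per-sensor ID is the part that enables tracking.
observations = [
    (0xA1B2C3D4, "Main St & 1st", 100),
    (0xDEADBEEF, "Main St & 1st", 102),
    (0xA1B2C3D4, "Main St & 5th", 160),
    (0xA1B2C3D4, "Highway 9 on-ramp", 300),
]

def build_tracks(obs):
    """Group sightings by sensor ID to reconstruct each vehicle's route."""
    tracks = defaultdict(list)
    for sensor_id, where, t in sorted(obs, key=lambda o: o[2]):
        tracks[sensor_id].append((t, where))
    return dict(tracks)

tracks = build_tracks(observations)
# The car carrying sensor 0xA1B2C3D4 can now be followed across receivers:
print(tracks[0xA1B2C3D4])
```

No decryption is involved at any point; the attack is pure bookkeeping, which is why cheap receivers suffice.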

Alaska could be the next state to crack down on AI-generated CSAM and restrict kids’ social media use

Alaska’s House of Representatives unanimously passed HB47, a bill that imposes sweeping limits on when and how minors use social media apps, along with bans on generating or distributing harmful deepfakes of children.

The bill’s original form focused on prohibiting the possession and distribution of AI-generated sexually explicit images of children, but Alaska lawmakers added amendments imposing social media restrictions as well. The proposed limitations include a statewide curfew on social media use between 10:30 PM and 6:30 AM, a ban on “addictive design features,” and a requirement that platforms verify user ages and obtain parental consent for users who are minors.

While the House bill passed with 39 votes in favor and zero against, the debate over the amendments offered some hints at potential upcoming revisions. Before the bill went to a vote, some House representatives expressed concern about adding such broad social media rules without first consulting the companies behind the platforms.

The bill still has to make its way through the Alaska State Senate, which has already introduced a companion bill, and past the governor. Alaska is following in the footsteps of many other states; the House even modeled HB47’s social media amendments on Utah’s law. Utah was the first to propose social media restrictions for kids, though its law was later met with a preliminary injunction.

Alphabet-owned robotics software company Intrinsic joins Google

The move is part of Google’s strategy to push further into the physical AI space.

Intrinsic, an Alphabet-owned software and AI company, is joining Google. The company, which was established in 2021 as one of Alphabet’s ‘other bets’ under the ‘moonshots’ research and development segment X Development, builds AI models and software designed to make industrial robots more accessible.

In joining Google, Intrinsic will continue to operate as a distinct entity; however, it will work closely with Google DeepMind and tap into Google’s Gemini AI models and cloud services. Thus far, Alphabet has declined to share information regarding funding or the purchase price.

Commenting on the news, Wendy Tan White, the CEO of Intrinsic, said: “The Intrinsic team has been working for years to enable access to intelligent robotics through a democratised platform, so more people can build and benefit from robotics applications.

“Combined with Google’s incredible AI and infrastructure, we’re going to unlock the promise of physical AI for a much broader set of manufacturing businesses and developers. This will fundamentally shift production, from its economics to operations and enable truly advanced manufacturing.”

Hiroshi Lockheimer, the chief product officer of Other Bets, added: “At Google, we see the immense opportunity in bridging the gap between the digital and physical world; that is also true for intelligent robotics in industries like manufacturing and logistics. We’re excited to welcome the Intrinsic team to Google, so we can bring breakthrough AI to more businesses and industries, at scale.”

In other Alphabet news, Alphabet and Google found themselves in hot water earlier this month at the centre of a new antitrust complaint filed by the European Publishers Council with the European Commission on 10 February.

The complaint alleged that Google and Alphabet are abusing their dominant position in general search services via the use of AI overviews and AI mode embedded within Google Search.

Have You Ever Used A Tick Stick?

Picture this: you have an irregular opening you need to fabricate a piece to fill. Maybe it’s the stonework of a fireplace; maybe it’s the curved bulkhead of a ship. How do you get that shape? The most “Hackaday” answer would be to 3D scan the area, create a CAD model based on the point cloud, and route the shape with CNC. Of course, none of those were options for the entirety of human history. So how do you do it if you don’t have such high-tech toys? With a stick, as [Essential Craftsman] takes great pains to show us in the video below.

It’s not just any stick, of course. Call it a “tick stick”, a “speil stick”, or a “joggle stick” — whatever you call it, it’s just an irregularly shaped piece of wood. The irregular shape is key to the whole process. How you use it is simple: get some kind of storyboard — cardboard, MDF, whatever — that fits inside your irregular void. Thanks to the magic of the stick, it need not fit flush to the edges of the hole. You put the tick stick on the storyboard, press the pointy end against a reference point on the side of the hole, and trace the stick. The irregular shape means you’re going to be able to get that reference point back exactly later. Number the outline you just made, and rinse and repeat until you’ve got a single-plane “point cloud” made of tick stick outlines.

Your storyboard is probably going to look mighty confusing, but that’s what the numbers are for. Bring your storyboard and your tick stick to the workbench, along with whatever you want to cut out (plywood, cardboard, 1/4″ steel armor plate, you name it), and simply repeat the process. Put the tick stick inside outline #1 and mark where the pointy end lands on the material. Then do it again for the other outlines, reproducing the points you measured on the original piece. After that, it’s just a game of ‘connect the dots’ and cutting with whatever methodology works for your substrate. A sharp knife will work for cardboard, but you’ll probably want something more substantial for steel plate.
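
The trick works because tracing the stick’s outline records its full 2D pose (position plus rotation), so setting the stick back into a traced outline reproduces the tip point exactly. A toy sketch of that geometry in Python (the tip offset and poses are invented for illustration):

```python
import math

TIP = (10.0, 0.0)  # the stick's pointy end, in the stick's own coordinate frame

def tip_position(pose):
    """World position of the tip for a stick pose (x, y, angle in radians)."""
    x, y, theta = pose
    tx, ty = TIP
    return (x + tx * math.cos(theta) - ty * math.sin(theta),
            y + tx * math.sin(theta) + ty * math.cos(theta))

# "Tracing the stick" on the storyboard = recording its pose for each
# reference point pressed against the edge of the hole.
recorded_poses = [
    (0.0, 0.0, 0.0),            # outline #1
    (2.0, 3.0, math.pi / 4),    # outline #2
    (-1.0, 5.0, math.pi / 2),   # outline #3
]

# Later, on the workpiece: set the stick back into each traced outline
# and mark where the tip lands; the original edge points reappear.
edge_points = [tip_position(p) for p in recorded_poses]
for i, pt in enumerate(edge_points, 1):
    print(f"point {i}: ({pt[0]:.3f}, {pt[1]:.3f})")
```

The irregular shape matters because it makes each outline fit the stick in exactly one pose; a symmetric stick could be replaced ambiguously.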

It’s not often you’re going to need the tick stick; the [Craftsman] reports only needing it a few times over the course of a decades-long career, but when you do need it, there’s not much else that will do the job. Well, unless you have a 3D scanner handy, that is.

This LED Strip Clock Aims To Make Your Next One Easier, Too

At first glance, it may look like [Rybitski]’s 7-segment RGB LED clock is something that’s been done before, but look past the beautiful mounting. It’s not just stylishly framed; the back end is just as attentively executed. It’s got a built-in web UI, MQTT support (so Home Assistant integration is a snap), and remote OTA updates, so software changes don’t require taking the thing down and plugging in a cable.

A slick web interface allows configuring which LEDs belong to which segments without code changes.

Pixel Clock is code for the Wemos D1 Mini microcontroller board and WS2812/WS2812B RGB LED strips, but it’s made to be flexible enough to support different implementations. For example, altering which LEDs in the strip belong to which segments on which digits can be configured entirely from the web interface. Naturally, one could build an LED strip clock using the same layout [Rybitski] did and require no changes at all — but it’s very nice to see that different wiring layouts are supported without needing to edit any code. There’s even automatic brightness adjustment if one adds an LDR (light-dependent resistor), which is a nice touch.
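
The configurable mapping boils down to two lookup tables: numeral to lit segments, and segment to whatever strip indices your wiring happens to use. A minimal sketch of the idea in Python (the actual Pixel Clock firmware targets the D1 Mini and differs; the segment layout, LED counts, and function names here are invented):

```python
# Standard 7-segment truth table: which of segments a-g light for each digit.
SEGMENTS = {
    0: "abcdef", 1: "bc",     2: "abdeg",   3: "abcdg", 4: "bcfg",
    5: "acdfg",  6: "acdefg", 7: "abc",     8: "abcdefg", 9: "abcdfg",
}

# Wiring config: which strip indices belong to each segment of a digit.
# In Pixel Clock this kind of mapping is edited from the web UI, not in code.
LAYOUT = {
    "a": [0, 1], "b": [2, 3], "c": [4, 5], "d": [6, 7],
    "e": [8, 9], "f": [10, 11], "g": [12, 13],
}
LEDS_PER_DIGIT = 14

def leds_for_digit(digit, position):
    """Strip indices to light so that `digit` shows at digit slot `position`."""
    offset = position * LEDS_PER_DIGIT
    return sorted(offset + i
                  for seg in SEGMENTS[digit]
                  for i in LAYOUT[seg])

print(leds_for_digit(1, 0))  # digit "1" in the first slot: segments b and c
```

Supporting a different wiring order then means editing only LAYOUT, which is exactly what pushing that table into a web UI buys you.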

[Rybitski]’s enclosure is CNC-routed MDF, framed and given a marble finish. The number segments are capped with laser-cut frosted white acrylic, which serves as both a diffuser for the LEDs and an attractive complement to the marble finish at the front. MDF is dense and opaque enough that no additional baffles or louvers are needed between segments.

With this code and an RGB LED strip, you can implement your own 7-segment clock any way you like, focusing on an artful presentation instead of re-inventing the wheel in software. Of course, there’s nothing that says one must use 7-segment numerals; some say your LED clock need not display numbers at all.

This 6K monitor delivers extreme resolution, multitasking tools, and full connectivity for home and office users without blowing the budget

  • JapanNext 31.5-inch 6K panel increases pixel density for sharper interface elements
  • 60Hz refresh and 8ms response focus on productivity usage
  • 500 nit brightness and 1500:1 contrast suit standard office lighting

JapanNext has released the JN-IPS326K-HSPC9, a 31.5-inch IPS monitor with a 6016 x 3384 resolution aimed primarily at home and office users.

This resolution exceeds the 3840 x 2160 pixel count commonly associated with 4K displays, resulting in a pixel pitch of 0.1159mm on this panel size.
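
That pitch figure is easy to verify from the published specs: 6016 x 3384 reduces exactly to 16:9, so the pitch follows from the 31.5-inch diagonal alone. A quick check in Python:

```python
import math

diag_in = 31.5
w_px, h_px = 6016, 3384           # reduces exactly to 16:9

diag_px = math.hypot(w_px, h_px)  # screen diagonal measured in pixels
pitch_mm = diag_in * 25.4 / diag_px
ppi = diag_px / diag_in

print(f"pixel pitch: {pitch_mm:.4f} mm ({ppi:.0f} PPI)")  # 0.1159 mm, 219 PPI
```

At roughly 219 PPI, that is close to the density Apple ships on its 5K and 6K displays, which is what makes interface elements look sharper at typical desktop viewing distances.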

I’m a robot vacuum expert, and these are the 8 biggest misconceptions people have

I’ve been reviewing robot vacuums professionally for a couple of years now, and as a result I’ve been drawn into conversations about these handy home helpers on a regular basis. Everyone I’ve met outside of a work context seems intrigued by the idea of a robot vacuum, but there are some misconceptions about what they can and can’t do. In many cases, people are underestimating modern robot vacuums’ capabilities.

So let’s set the record straight. Here are eight common robot vacuum misunderstandings, and some information on what you can actually expect…

Sam Altman Answers Questions on X.com About Pentagon Deal, Threats to Anthropic

Saturday afternoon Sam Altman announced he’d start answering questions on X.com about OpenAI’s work with America’s Department of War — and all the developments over the past few days. (After that department’s negotiations with Anthropic had failed, they announced they’d stop using Anthropic’s technology and threatened to designate it a “Supply-Chain Risk to National Security”. Then they reached a deal for OpenAI’s technology — though Altman says it includes OpenAI’s own similar prohibitions against using their products for domestic mass surveillance and requiring “human responsibility” for the use of force in autonomous weapon systems.)

Altman said Saturday that enforcing that “Supply-Chain Risk” designation on Anthropic “would be very bad for our industry and our country, and obviously their company. We said [that] to the Department of War before and after. We said that part of the reason we were willing to do this quickly was in the hopes of de-escalation…. We should all care very much about the precedent… To say it very clearly: I think this is a very bad decision from the Department of War and I hope they reverse it. If we take heat for strongly criticizing it, so be it.”

Altman also said that for a long time, OpenAI was planning to do “non-classified work only,” but this week found the Department of War “flexible on what we needed…”

Sam Altman: The reason for rushing is an attempt to de-escalate the situation. I think the current path things are on is dangerous for Anthropic, healthy competition, and the U.S. We negotiated to make sure similar terms would be offered to all other AI labs.

Advertisement

I know what it’s like to feel backed into a corner, and I think it’s worth some empathy to the Department of War. They are… a very dedicated group of people with, as I mentioned, an extremely important mission. I cannot imagine doing their work. Our industry tells them “The technology we are building is going to be the high order bit in geopolitical conflict. China is rushing ahead. You are very behind.” And then we say “But we won’t help you, and we think you are kind of evil.” I don’t think I’d react great in that situation. I do not believe unelected leaders of private companies should have as much power as our democratically elected government. But I do think we need to help them.

Question: Are you worried at all about the potential for things to go really south during a possible dispute over what’s legal or not later on and be deemed a supply chain risk…?

Sam Altman: Yes, I am. If we have to take on that fight we will, but it clearly exposes us to some risk. I am still very hopeful this is going to get resolved, and part of why we wanted to act fast was to help increase the chances of that…

Question: Why the rush to sign the deal? Obviously the optics don’t look great.

Sam Altman: It was definitely rushed, and the optics don’t look good. We really wanted to de-escalate things, and we thought the deal on offer was good.

If we are right and this does lead to a de-escalation between the Department of War and the industry, we will look like geniuses, and a company that took on a lot of pain to do things to help the industry. If not, we will continue to be characterized as rushed and uncareful. I don’t know where it’s going to land, but I have already seen promising signs. I think a good relationship between the government and the companies developing this technology is critical over the next couple of years…

Question: What was the core difference? Why do you think the Department of War accepted OpenAI but not Anthropic?

Sam Altman: […] We believe in a layered approach to safety: building a safety stack, deploying FDEs [embedded Forward Deployed Engineers] and having our safety and alignment researchers involved, deploying via cloud, working directly with the Department of War. Anthropic seemed more focused on specific prohibitions in the contract rather than citing applicable laws, which we felt comfortable with. We feel it’s very important to build safe systems, and although documents are also important, I’d clearly rather rely on technical safeguards if I only had to pick one…

I think Anthropic may have wanted more operational control than we did…

Question: Were the terms that you accepted the same ones Anthropic rejected?

Sam Altman: No, we had some different ones. But our terms would now be available to them (and others) if they wanted.

Question: Will you turn off the tool if they violate the rules?

Sam Altman: Yes, we will turn it off in that very unlikely event, but we believe the U.S. government is an institution that does its best to follow law and policy. What we won’t do is turn it off because we disagree with a particular (legal military) decision. We trust their authority.

Questions were also answered by OpenAI’s head of National Security Partnerships (who at one point posted that they’d managed the White House response to the Snowden disclosures and helped write the post-Snowden policies constraining surveillance during the Obama years). And they stressed that with OpenAI’s deal with the Department of War, “We control how we train the models and what types of requests the models refuse.”

Question: Are employees allowed to opt out of working on Department of War-related projects?

Answer: We won’t ask employees to support Department of War-related projects if they don’t want to.

Question: How much is the deal worth?

Answer: It’s a few million $, completely inconsequential compared to our $20B+ in revenue, and definitely not worth the cost of a PR blowup. We’re doing it because it’s the right thing to do for the country, at great cost to ourselves, not because of revenue impact…

Question: Can you explicitly state which specific technical safeguard OpenAI has that allowed you to sign what Anthropic called a ‘threat to democratic values’?

Answer: We think the deal we made has more guardrails than any previous agreement for classified AI deployments, including Anthropic's. Other AI labs (including Anthropic) have reduced or removed their safety guardrails and relied on usage policies as their primary safeguard in national security deployments. Usage policies, on their own, are not a guarantee of anything. Any responsible deployment of AI in classified environments should involve layered safeguards, including a prudent safety stack, limits on deployment architecture, and the direct involvement of AI experts in consequential use cases. These are the terms we negotiated into our contract.
They also detailed OpenAI’s position on LinkedIn:

Deployment architecture matters more than contract language. Our contract limits our deployment to cloud API. Autonomous systems require inference at the edge. By limiting our deployment to cloud API, we can ensure that our models cannot be integrated directly into weapons systems, sensors, or other operational hardware…

Instead of hoping contract language will be enough, our contract allows us to embed forward-deployed engineers, commits to giving us visibility into how models are being used, and lets us iterate on safety safeguards over time. If our team sees that our models aren't refusing queries they should, or that there's more operational risk than we expected, the contract allows us to make modifications at our discretion. That gives us far more influence over outcomes (and insight into possible abuse) than a static contract provision ever could.

U.S. law already constrains the worst outcomes. We accepted the “all lawful uses” language proposed by the Department, but required them to define the laws that constrained them on surveillance and autonomy directly in the contract. And because laws can change, having this codified in the contract protects against changes in law or policy that we can’t anticipate.



Tech

Unihertz’s Titan 2 Elite Arrives Just as Physical Keyboards Refuse to Fade Away


Unihertz plans to debut the new Titan 2 Elite on Kickstarter early next month, and by all accounts it will be a natural evolution of the company's previous Titan handsets, albeit one that shrinks everything into a much more pocket-friendly package. You'll still get the characteristic QWERTY keyboard that recalls the good old days of BlackBerry phones; after all, some people still prefer pressing actual buttons to swiping at a glass screen.



On the front is a 4.03-inch AMOLED display running at a smooth 120Hz, far clearer and smoother than the LCD screens in earlier Titans. For now there are only two color options: a standard black finish and a more eye-catching orange. The overall design is fairly elegant, with impressively slim bezels and a small punch-hole cutout for the front-facing camera.



Physical keyboards, like those on the Titan line, are what set Unihertz apart, and the Titan 2 Elite keeps the same four-row QWERTY layout, which is ideal for typing out emails, chats, or notes without relying on an on-screen keyboard. Touch-sensitive keys support a variety of useful gestures and custom shortcuts, which is a nice touch.


Under the hood, it's very similar to the existing Titan 2, with a MediaTek Dimensity 7300 processor that handles typical tasks with ease, plus 5G connectivity, so you're ready for almost anything. Combined with 12GB of RAM and 512GB of storage, the device should handle multitasking, apps, and all of your media with no trouble. Battery life should also be good, reportedly around 5,000 mAh, though exact figures will have to wait until the actual launch.


The Elite's camera setup is straightforward: a dual-lens system with a 50MP main sensor for primary images and auxiliary lenses for wide-angle or depth effects. Not exactly cutting-edge, but it should be more than adequate for the occasional quick snap or video call. It will ship with Android 16 out of the box, and Unihertz has committed to updates through Android 20, plus security patches until 2031. That's impressive long-term support, which is all too rare in devices that typically slip off the radar after a few years.

The Titan 2 Elite will launch on Kickstarter first, like previous Titan campaigns that were favorably received by keyboard fans. As for pricing, we'll have to wait and see: the standard Titan 2 runs around $400, so the Elite may cost roughly the same, or somewhat more for all the enhancements.



Copyright © 2025