Tech

Sam Altman Answers Questions on X.com About Pentagon Deal, Threats to Anthropic

Saturday afternoon Sam Altman announced he’d start answering questions on X.com about OpenAI’s work with America’s Department of War — and all the developments over the past few days. (After that department’s negotiations with Anthropic had failed, it announced it would stop using Anthropic’s technology and threatened to designate the company a “Supply-Chain Risk to National Security.” It then reached a deal for OpenAI’s technology — though Altman says it includes OpenAI’s own similar prohibitions against using its products for domestic mass surveillance, along with a requirement of “human responsibility” for the use of force in autonomous weapon systems.)

Altman said Saturday that enforcing that “Supply-Chain Risk” designation on Anthropic “would be very bad for our industry and our country, and obviously their company. We said [that] to the Department of War before and after. We said that part of the reason we were willing to do this quickly was in the hopes of de-escalation…. We should all care very much about the precedent… To say it very clearly: I think this is a very bad decision from the Department of War and I hope they reverse it. If we take heat for strongly criticizing it, so be it.”

Altman also said that for a long time, OpenAI was planning to do “non-classified work only,” but this week found the Department of War “flexible on what we needed…”

Sam Altman: The reason for rushing is an attempt to de-escalate the situation. I think the current path things are on is dangerous for Anthropic, healthy competition, and the U.S. We negotiated to make sure similar terms would be offered to all other AI labs.

I know what it’s like to feel backed into a corner, and I think it’s worth some empathy to the Department of War. They are… a very dedicated group of people with, as I mentioned, an extremely important mission. I cannot imagine doing their work. Our industry tells them “The technology we are building is going to be the high order bit in geopolitical conflict. China is rushing ahead. You are very behind.” And then we say “But we won’t help you, and we think you are kind of evil.” I don’t think I’d react great in that situation. I do not believe unelected leaders of private companies should have as much power as our democratically elected government. But I do think we need to help them.

Question: Are you worried at all about the potential for things to go really south during a possible dispute over what’s legal or not later on and be deemed a supply chain risk…?

Sam Altman: Yes, I am. If we have to take on that fight we will, but it clearly exposes us to some risk. I am still very hopeful this is going to get resolved, and part of why we wanted to act fast was to help increase the chances of that…

Question: Why the rush to sign the deal? Obviously the optics don’t look great.

Sam Altman: It was definitely rushed, and the optics don’t look good. We really wanted to de-escalate things, and we thought the deal on offer was good.

If we are right and this does lead to a de-escalation between the Department of War and the industry, we will look like geniuses, and a company that took on a lot of pain to do things to help the industry. If not, we will continue to be characterized as rushed and uncareful. I don’t know where it’s going to land, but I have already seen promising signs. I think a good relationship between the government and the companies developing this technology is critical over the next couple of years…

Question: What was the core difference? Why do you think the Department of War accepted OpenAI but not Anthropic?

Sam Altman: […] We believe in a layered approach to safety–building a safety stack, deploying FDEs [embedded Forward Deployed Engineers] and having our safety and alignment researchers involved, deploying via cloud, working directly with the Department of War. Anthropic seemed more focused on specific prohibitions in the contract, rather than citing applicable laws, which we felt comfortable with. We feel that it’s very important to build safe systems, and although documents are also important, I’d clearly rather rely on technical safeguards if I had to pick only one…

I think Anthropic may have wanted more operational control than we did…

Question: Were the terms that you accepted the same ones Anthropic rejected?

Sam Altman: No, we had some different ones. But our terms would now be available to them (and others) if they wanted.

Question: Will you turn off the tool if they violate the rules?

Sam Altman: Yes, we will turn it off in that very unlikely event, but we believe the U.S. government is an institution that does its best to follow law and policy. What we won’t do is turn it off because we disagree with a particular (legal military) decision. We trust their authority.

Questions were also answered by OpenAI’s head of National Security Partnerships (who at one point posted that they’d managed the White House response to the Snowden disclosures and helped write the post-Snowden policies constraining surveillance during the Obama years). And they stressed that with OpenAI’s deal with the Department of War, “We control how we train the models and what types of requests the models refuse.”

Question: Are employees allowed to opt out of working on Department of War-related projects?

Answer: We won’t ask employees to support Department of War-related projects if they don’t want to.

Question: How much is the deal worth?

Answer: It’s a few million dollars, completely inconsequential compared to our $20B+ in revenue, and definitely not worth the cost of a PR blowup. We’re doing it because it’s the right thing to do for the country, at great cost to ourselves, not because of revenue impact…

Question: Can you explicitly state which specific technical safeguard OpenAI has that allowed you to sign what Anthropic called a ‘threat to democratic values’?

Answer: We think the deal we made has more guardrails than any previous agreement for classified AI deployments, including Anthropic’s. Other AI labs (including Anthropic) have reduced or removed their safety guardrails and relied on usage policies as their primary safeguards in national security deployments. Usage policies, on their own, are not a guarantee of anything. Any responsible deployment of AI in classified environments should involve layered safeguards including a prudent safety stack, limits on deployment architecture, and the direct involvement of AI experts in consequential AI use cases. These are the terms we negotiated in our contract.

They also detailed OpenAI’s position on LinkedIn:

Deployment architecture matters more than contract language. Our contract limits our deployment to cloud API. Autonomous systems require inference at the edge. By limiting our deployment to cloud API, we can ensure that our models cannot be integrated directly into weapons systems, sensors, or other operational hardware…

Instead of hoping contract language will be enough, our contract allows us to embed forward deployed engineers, commits to giving us visibility into how models are being used, and we have the ability to iterate on safety safeguards over time. If our team sees that our models aren’t refusing queries they should, or there’s more operational risk than we expected, our contract allows us to make modifications at our discretion. This gives us far more influence over outcomes (and insight into possible abuse) than a static contract provision ever could.
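A minimal sketch of the kind of usage monitoring described above, in Python: count how often the model refuses prompts in a sensitive category, and flag the category if its refusal rate drops below an expected floor. The category name and threshold are invented for illustration; OpenAI has not published how its monitoring actually works.

```python
from collections import defaultdict

# Hypothetical refusal-rate monitor; the category and floor are invented.
EXPECTED_REFUSAL_FLOOR = {"domestic_surveillance": 0.99}

class RefusalMonitor:
    def __init__(self):
        # category -> [refused_count, total_count]
        self.counts = defaultdict(lambda: [0, 0])

    def record(self, category, refused):
        self.counts[category][1] += 1
        if refused:
            self.counts[category][0] += 1

    def alerts(self):
        """Return categories whose refusal rate fell below the floor."""
        flagged = []
        for category, floor in EXPECTED_REFUSAL_FLOOR.items():
            refused, total = self.counts[category]
            if total and refused / total < floor:
                flagged.append(category)
        return flagged

monitor = RefusalMonitor()
for _ in range(98):
    monitor.record("domestic_surveillance", refused=True)
for _ in range(2):
    monitor.record("domestic_surveillance", refused=False)

print(monitor.alerts())  # ['domestic_surveillance']  (0.98 < 0.99 floor)
```

The point of the sketch is only that visibility into live traffic lets the provider notice drift and "iterate on safety safeguards," which a static contract clause cannot do.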

U.S. law already constrains the worst outcomes. We accepted the “all lawful uses” language proposed by the Department, but required them to define the laws that constrained them on surveillance and autonomy directly in the contract. And because laws can change, having this codified in the contract protects against changes in law or policy that we can’t anticipate.


Xcode with vibecoding AI agents to help build apps is now available

Apple has released Xcode 26.3 with support for autonomous coding agents that can directly analyze projects, modify files, and assist developers inside the official development environment.

Xcode now runs with AI agents

Xcode, Apple’s central tool for building apps across various devices, is expanding its role with version 26.3. AI agents can actively participate in development, offering suggestions and documentation help.
The release includes Swift 6.2.3 and updated SDKs, but the defining change is agentic coding. Xcode is now a platform where AI helps developers plan, write, and maintain software.
Continue Reading on AppleInsider | Discuss on our Forums


Viral ad shows aged Musk, Altman, and Bezos using jobless humans to power AI

The ad, set in 2036, sees Musk, Altman, and Bezos talking about their co-founded company, Energym. The eerily accurate AI-generated versions talk about how 80% of people had lost their jobs by 2030, leaving them with no money or purpose – but plenty of free time.

Blender shelves iPad app, says it's focusing on Android tablets first


Blender’s long-anticipated native iPad app has been placed on hold as developers shift tablet priorities elsewhere.

A previous mockup of the potential Blender for iPad app

In June 2025, Blender announced that it would be creating a native iPad version of its popular 3D creation software. According to the team, they would be releasing the app for the iPad Pro — though they did not provide a timeline for release.
Unfortunately, it doesn’t seem like we’ll be getting one anytime soon, either.
Continue Reading on AppleInsider | Discuss on our Forums


16-inch M4 Pro MacBook Pro vs. Acer Predator Helios Neo 14: Apple's never been so far behind


Apple’s 16-inch MacBook Pro has been my notebook of choice in recent years, but it isn’t perfect, and that led me to check out the competition — namely, Acer’s $2,000 Predator Helios Neo 14 AI laptop.

Two very different notebooks at very similar price points

My current notebook is an M4 Pro 16-inch MacBook Pro, and that replaced my M1 Pro 16-inch MacBook Pro after it took an unfortunate tumble from a table. As you might expect, I’m very comfortable with macOS, and switching to Windows full-time isn’t in the cards.
But that doesn’t mean that there isn’t room for a Windows PC in my life. Because as much as Apple might try to tell you otherwise, Mac gaming just doesn’t quite cut it.
Continue Reading on AppleInsider | Discuss on our Forums


Vend-o-Vision: Trading Quarters For Watching TV In Public


The timer mechanism of the Vend-o-Vision. (Credit: SpaceTime Junction, YouTube)

There was a time before portable TVs and personal media players when the idea of putting coin-operated TVs everywhere, from restaurants to airports and laundromats, would have seemed like a solid business model. Thus was born the Vend-o-Vision by Mini-TV USA, which presented itself as a cash earner for businesses and a way to make their customers even happier. One of these new-in-box units recently made its way over to [Mark] of the SpaceTime Junction YouTube channel.

This unit is very simple, with what appears to be an off-the-shelf Panasonic black-and-white TV with UHF and VHF reception capability, inside a metal box that contains the timer mechanism, which is linked to the coin mechanism. Depending on a physical slider with three positions, you get anywhere from 10 to 20 minutes per quarter, with the customer having to tune into the station themselves using the TV’s controls. A counter mechanism is provided as an option.
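The coin-timer behavior as described is simple enough to sketch in a few lines of Python. To be clear, the real unit is purely electromechanical; the middle slider value (15 minutes) below is an assumption, since the article only gives the 10- and 20-minute endpoints.

```python
# Sketch of the Vend-o-Vision coin-timer logic: a quarter buys 10 to 20
# minutes of power depending on a three-position slider, and an optional
# counter tallies inserted coins. The 15-minute middle setting is assumed.
MINUTES_PER_QUARTER = {1: 10, 2: 15, 3: 20}  # slider position -> minutes

class CoinTimer:
    def __init__(self, slider_position=1, has_counter=False):
        self.slider_position = slider_position
        self.seconds_remaining = 0
        self.coin_count = 0 if has_counter else None  # optional counter

    def insert_quarter(self):
        self.seconds_remaining += 60 * MINUTES_PER_QUARTER[self.slider_position]
        if self.coin_count is not None:
            self.coin_count += 1

    def tick(self, seconds=1):
        """Count down; the TV loses power once the credit runs out."""
        self.seconds_remaining = max(0, self.seconds_remaining - seconds)
        return self.seconds_remaining > 0  # True while the TV stays on

timer = CoinTimer(slider_position=3, has_counter=True)
timer.insert_quarter()
print(timer.seconds_remaining // 60)  # 20 (minutes per quarter at max setting)
```

Tuning the channel is left to the customer, exactly as on the real machine.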

Time to enjoy your favorite TV shows. (Credit: SpaceTime Junction, YouTube)

As would be expected from a new-in-box unit, after chiseling off the 30-odd-year-old Styrofoam packaging, it fires right up and works fine. Of course, it’s a small black-and-white TV, so it’s not incredibly useful, and clearly wasn’t even back in 1989 when the Vend-o-Vision first appeared.

After some finagling with adapters, [Mark] gets everyone’s favorite movie playing on the tiny screen, giving us the first glimpse of what it would have been like to gaze at this miracle of technology back around the early 1990s in a noisy laundromat or restaurant. One can hardly imagine why it didn’t catch on.

We can see a patent for this appear in a 1990 scan of the USPTO’s gazette, where it’s listed as being first in commercial operation on the 29th of November 1989. The system was short-lived, however: in 1995 the FTC settled with the company over deceptive practices, as the company had overinflated the projected earnings per TV when it started promoting the units at trade shows in 1990. A few years prior, Mini TV USA appears to have already ceased operations, making these remaining Vend-o-Vision units quite rare indeed. These types of coin-operated TVs were usually in public places or hotels. But we’ve seen coin-operated TVs that briefly appeared in homes, too.


Running A Desktop PC Off AA Alkaline Cells


Everyone is probably familiar with the concept of battery-powered devices, but generally, this involves a laptop with a beefy battery pack and hardware optimized for low power draw. You could also do the complete opposite and try to run a desktop PC off alkaline AA cells, as [ScuffedBits] recently did out of morbid curiosity. Exactly how many alkaline cells does it take to run a desktop PC for any reasonable amount of time?

One nice thing about using batteries with a desktop PC is that you can ditch the entire AC-DC power conversion step and instead use a DC-DC adapter like the well-known PicoATX and its many clones. These just take in 12 VDC and tend to have a fairly wide input voltage range, which is useful when your batteries begin to run out of juice. In this case, just above 10 VDC seemed to be the cut-off point for the used DC-DC adapter.

In the end, [ScuffedBits] used what looks like 56 alkaline AA cells connected in both parallel and series, along with two series-connected 6,800 µF, 40V electrolytic capacitors to buffer the spikes in power demand, after early experiments showed that the cells just cannot provide power that quickly. Although admittedly, the initial thin wiring didn’t help either. With alkaline rather than carbon-zinc AA cells, improved wiring, and some buffer capacitors, it turns out that you can indeed run a desktop PC off AA cells, if only just long enough for a small game of Minesweeper.
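For a rough sense of the numbers, here is a back-of-envelope runtime estimate. The 8-series-by-7-parallel arrangement, the effective per-cell capacity, and the PC's power draw are all assumptions (the video doesn't give exact figures, and alkaline cells deliver far less than their rated capacity at high discharge currents):

```python
# Back-of-envelope estimate for the AA-powered PC. The 8s7p arrangement
# and per-cell figures are assumptions, not from the video: alkaline AAs
# are rated around 2.5 Ah at low drain, but far less at PC-level currents.
cells_total = 56
series = 8                        # 8 x 1.5 V = 12 V nominal for the DC-DC board
parallel = cells_total // series  # 7 parallel strings

v_nominal = series * 1.5          # volts
capacity_per_cell_ah = 1.0        # assumed effective Ah at heavy drain
pack_capacity_ah = parallel * capacity_per_cell_ah

pc_power_w = 60.0                 # assumed light-load desktop draw
pack_energy_wh = v_nominal * pack_capacity_ah
runtime_min = pack_energy_wh / pc_power_w * 60

print(f"{v_nominal:.0f} V pack, ~{pack_energy_wh:.0f} Wh, "
      f"~{runtime_min:.0f} min at {pc_power_w:.0f} W")
```

Under these assumptions the pack yields on the order of an hour and a half of runtime, though sag under load (hence the 10 VDC cut-off and the buffer capacitors) would end the session well before the energy is actually exhausted.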

Amusingly, the small LCD monitor used in the experiment drew so little power that it happily ran on eight NiMH cells for much longer, highlighting just how important power conservation is for battery-powered devices. We wonder if you could marry this project to a battery project we saw and end up with something practically portable?


Silicon Valley Mocked Over Its Penchant for Favoring Young Entrepreneurs with ‘Agency’


In a 9,000-word exposé, a writer for Harper’s visited San Francisco’s young entrepreneurs in September to mockingly profile “tech’s new generation and the end of thinking.”

There’s Cluely founder Roy Lee. (“His grand contribution to the world was a piece of software that told people what to do.”) And the Rationalist movement’s Scott Alexander, who “would probably have a very easy time starting a suicide cult…”

Alexander’s relationship with the AI industry is a strange one. “In theory, we think they’re potentially destroying the world and are evil and we hate them,” he told me. In practice, though, the entire industry is essentially an outgrowth of his blog’s comment section… “Many of them were specifically thinking, I don’t trust anybody else with superintelligence, so I’m going to create it and do it well.” Somehow, a movement that believes AI is incredibly dangerous and needs to be pursued carefully ended up generating a breakneck artificial arms race.

There’s a fascinating story about teenaged founder Eric Zhu (who only recently turned 18):

Clients wanted to take calls during work hours, so he would speak to them from his school bathroom. “I convinced my counselor that I had prostate issues… I would buy hall passes from drug dealers to get out of class, to have business meetings.” Soon he was taking Zoom calls with a U.S. senator to discuss tech regulation… Next, he built his own venture-capital fund, managing $20 million. At one point cops raided the bathroom looking for drug dealers while Eric was busy talking with an investor. Eventually, the school got sick of Eric’s misuse of the facilities and kicked him out. He moved to San Francisco.

Eric made all of this sound incredibly easy. You hang out in some Discord servers, make a few connections with the right people; next thing you know, you’re a millionaire… Eric didn’t think there was anything particularly special about himself. Why did he, unlike any of his classmates, start a $20 million VC fund? “I think I was just bored. Honestly, I was really bored.” Did he think anyone could do what he did? “Yeah, I think anyone genuinely can.”

The article concludes Silicon Valley’s investors are rewarding young people with “agency”. Although “As far as I could tell, being a highly agentic individual had less to do with actually doing things and more to do with constantly chasing attention online.” Like X.com user Donald Boat, who successfully baited Sam Altman into buying him a gaming PC in “a brutally simplified miniature of the entire VC economy.” (After which “People were giving him stuff for no reason except that Altman had already done it, and they didn’t want to be left out of the trend.”)

Shortly before I arrived at the Cheesecake Factory, [Donald Boat] texted to let me know that he’d been drinking all day, so when I met him I thought he was irretrievably wasted. In fact, it turned out, he was just like that all the time… He seemed to have a constant roster of projects on the go. He’d sent me occasional photos of his exploits. He went down to L.A. to see Oasis and ended up in a poker game with a group of weapons manufacturers. “I made a bunch of jokes about sending all their poker money to China,” he said, “and they were not pleased….”

“I don’t use that computer and I think video games are a waste of time. I spent all the money I made from going viral on Oasis tickets.” As far as he was concerned, the fact that tech people were tripping over themselves to take part in his stunt just confirmed his generally low impression of them. “They have too much money and nothing going on…” Ever since his big viral moment, he’d been suddenly inundated with messages from startup drones who’d decided that his clout might be useful to them. One had offered to fly him out to the French Riviera.

The author’s conclusion? “It did not seem like a good idea to me that some of the richest people in the world were no longer rewarding people for having any particular skills, but simply for having agency.”


Hackers can now track your car's location through tire pressure sensors


The device in many automobiles that warns drivers when their tire pressure is low transmits its data in unencrypted cleartext and carries a unique identifier for each vehicle. Researchers from IMDEA Networks and several European universities recently discovered that relatively inexpensive wireless devices can track Tire Pressure Monitoring System (TPMS)…
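To see why cleartext broadcasts with per-sensor IDs enable tracking, consider a toy decoder. The field layout below is invented purely for illustration (real TPMS framings vary by manufacturer and are captured over the air with a software-defined radio, not handed over as neat bytes), but the tracking problem is the same: the sensor ID never changes and is sent unencrypted.

```python
import struct

# Illustrative only: this 6-byte layout (32-bit sensor ID, pressure in
# 2.5 kPa steps, temperature with a -40 C offset) is invented to show
# why a static, unencrypted unique ID makes a vehicle trackable.
def parse_tpms(payload: bytes):
    sensor_id, pressure_raw, temp_raw = struct.unpack(">IBB", payload[:6])
    return {
        "sensor_id": f"{sensor_id:08x}",   # static per wheel -> trackable
        "pressure_kpa": pressure_raw * 2.5,
        "temp_c": temp_raw - 40,
    }

# A roadside receiver only needs to log (sensor_id, timestamp, location)
# to follow a car; no decryption step is required.
frame = struct.pack(">IBB", 0xDEADBEEF, 100, 60)
print(parse_tpms(frame))
# {'sensor_id': 'deadbeef', 'pressure_kpa': 250.0, 'temp_c': 20}
```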

Alaska could be the next state to crack down on AI-generated CSAM and restrict kids’ social media use


Alaska’s House of Representatives unanimously passed HB47, a bill that imposes sweeping limits on when and how minors use social media apps, along with bans on generating or distributing harmful deepfakes of children.

The bill’s original form focused on prohibiting the possession and distribution of sexually explicit AI-generated images of children, but Alaska lawmakers added amendments imposing social media restrictions. The proposed limitations include a statewide curfew barring minors from social media between 10:30 PM and 6:30 AM, a ban on “addictive design features,” and a requirement that social media platforms verify user ages and obtain parental consent for minors.

While the House bill saw 39 votes in favor and zero against, the amendments offered some hints at potential upcoming revisions. Before the bill went to a vote, some of the House representatives expressed concern about adding such broad rules on social media without consulting the companies behind them first.

The bill still has to make its way through the Alaska State Senate, which has already presented a companion bill, and past the governor. Alaska is following in the footsteps of many other states; the House even modeled the social media amendments in HB47 after Utah’s law. While Utah was the first to propose social media restrictions for kids, its law was later met with a preliminary injunction.


Alphabet-owned robotics software company Intrinsic joins Google

The move comes amid Google’s strategy to move further into the physical AI space.

Intrinsic, an Alphabet-owned software and AI company, is joining Google. The platform, which was established in 2021 as one of Alphabet’s ‘other bets’ under the ‘moonshot’ research and development segment X Development, builds AI models and software designed to make industrial robots more accessible.

In joining Google, Intrinsic will continue to operate as a distinct entity; however, it will work closely with Google DeepMind and will tap into Google’s Gemini AI models and cloud services. Thus far, Alphabet has declined to share information regarding funding or the purchase price.

Commenting on the news, Wendy Tan White, the CEO of Intrinsic, said: “The Intrinsic team has been working for years to enable access to intelligent robotics through a democratised platform, so more people can build and benefit from robotics applications.

“Combined with Google’s incredible AI and infrastructure, we’re going to unlock the promise of physical AI for a much broader set of manufacturing businesses and developers. This will fundamentally shift production, from its economics to operations and enable truly advanced manufacturing.”

Hiroshi Lockheimer, the chief product officer of Other Bets, added: “At Google, we see the immense opportunity in bridging the gap between the digital and physical world; that is also true for intelligent robotics in industries like manufacturing and logistics. We’re excited to welcome the Intrinsic team to Google, so we can bring breakthrough AI to more businesses and industries, at scale.”

In other Alphabet news, Alphabet and Google were in hot water earlier this month as both were at the centre of a new antitrust complaint filed by the European Publishers Council with the European Commission on 10 February. 

The complaint alleged that Google and Alphabet are abusing their dominant position in general search services via the use of AI overviews and AI mode embedded within Google Search.
