
Do you want to build a robot snowman?


Nvidia’s GTC conference had everything: trillion-dollar sales projections, graphics technology that can yassify video games, grand declarations that every company needs an OpenClaw strategy, and even a robot version of the beloved snowman Olaf from Disney’s “Frozen.”

On the latest episode of TechCrunch’s Equity podcast, TechCrunch’s Kirsten Korosec, Sean O’Kane, and I recapped CEO Jensen Huang’s keynote and debated what it means for Nvidia’s future. And yes, a big part of our discussion focused on poor Olaf, whose microphone had to be turned off when he started rambling.

Even if the demo had gone flawlessly, Sean might still have had some reservations, as he noted these presentations always focus on “the engineering challenges” and not the “really messy gray areas” on the social side.

“But what happens when a kid kicks Olaf over?” Sean asked. “And then every other kid who sees Olaf get kicked or knocked over has their whole trip to Disney ruined and it ruins the brand?”


Read a preview of our conversation, edited for length and clarity, below.

Anthony: [CEO Jensen Huang] was basically saying that every company needs to have an OpenClaw strategy now. I think that is just a very grand statement that’s meant to be attention grabbing; I think it’s also interesting coming at this kind of transitional moment for OpenClaw. 

The founder has gone to OpenAI. So it’s now this open source project that potentially can flourish and evolve beyond its creator, or it could languish. If companies like Nvidia are investing a lot into it, then [it’s] more likely that it’ll continue to evolve. But it’ll be interesting to see a year from now, whether that looks like a prescient statement or everyone’s like, “Open what?”


Kirsten: In the case of Nvidia, it costs them nothing in the grand scheme of things to launch what they call NemoClaw, an open source project they built with the OpenClaw creator. But if they don’t do something, they have a lot to lose. So the way I translated that message when Jensen said, “Every enterprise needs to have an OpenClaw strategy,” was: “Nvidia needs to have a solution or strategy for enterprises, because if it’s successful, it is another pathway for Nvidia to be part of numerous other companies.” So doing nothing is a greater risk than doing something that doesn’t go anywhere.


Sean: The real question here is why have we not talked about what is clearly the end game for Nvidia, and the thing that is going to turn it into the first $100 trillion company, which is an Olaf robot.

Anthony: How could I forget?

Kirsten: Anthony, just go to the end of the two and a half hours to watch this.

So, the Olaf robot comes out, and this is something that Jensen loves to do. He loves to have these demos, and some of them go better than others. It’s also a way to demonstrate Nvidia’s robotics technology, and I don’t know if Olaf was actually speaking in real time or if it was programmed — it felt a little programmed, or like it had specific keywords that it used.


But the greatest part about it is that they had to cut its mic at the end because it just started rambling and speaking to the crowd. And then it went over to its little passageway and was slowly lowered. And you could see it on the video. It was still talking, but no mic.

Sean: Now we just need to give this little robot a wheelbase. And I know the perfect founder who can provide it. 

I mean, these demos are always silly. I don’t want to get up on my soapbox, because I know that we’ve talked about this a little bit earlier this week, but this was an impressive demo up until the moment where it fell a little bit short.

This is another really good example, though, of [how] robotics is a really interesting engineering problem and a really interesting physics problem and a really interesting integration problem, and all of this stuff, but this was presented as, in partnership with Disney, and it’s supposed to be the future of Disney parks and things like that: You’re going to be able to walk around and see Olaf from “Frozen” and take pictures of them and everything.


But these efforts never consider — or certainly don’t put front and center at events like this — all the other things you have to think about when you roll stuff like this out. There’s a really good YouTuber, Defunctland, who did a great video about this — four hours long, not too long — about the history of Disney trying to get these kinds of robotics, these automatons, into their parks.

The engineering challenges are really interesting and it’s fun to see that history, but it always comes back to the same question of: Okay, but what happens when a kid kicks Olaf over? And then every other kid who sees Olaf get kicked or knocked over has their whole trip to Disney ruined and it ruins the brand?

There’s just so much on the social side of this. And that sounds silly, but this is the question that we’re kind of asking about humanoid robots, too. There’s so much hype about all this other stuff and we just don’t really hear as much conversation about the really messy gray areas on the social side of these things, and also just integrating them into people’s lives. We only ever really hear about the engineering challenges — which again, are really impressive.

Kirsten: I have a counterpoint and then we have to get to our next [topic]. This is a job creator, because Olaf will have to have a human babysitter in Disneyland, probably dressed up as Elsa or something else. You can imagine that actually, what we’re doing is creating jobs [with] this engineering experiment.


Quordle hints and answers for Monday, May 4 (game #1561)


Looking for a different day?

A new Quordle puzzle appears at midnight each day for your time zone – which means that some people are always playing ‘today’s game’ while others are playing ‘yesterday’s’. If you’re looking for Sunday’s puzzle instead then click here: Quordle hints and answers for Sunday, May 3 (game #1560).

Quordle was one of the original Wordle alternatives and is still going strong now more than 1,400 games later. It offers a genuine challenge, though, so read on if you need some Quordle hints today – or scroll down further for the answers.


AI best practices: If at first you don’t succeed, prompt, prompt again


An AI prompt screen, as reimagined by Google Gemini.

[Editor’s Note: This is the third in a series by Oren Etzioni about AI usage and best practices. See also “AI Coach or AI Ghostwriter? The Choice Is Yours,”  and “How to read with AI.”]

A friend asked ChatGPT for input on a professional matter and received a banal, lackluster response. I suggested she try a different approach: ask for 15 different ideas, scan them, pick the two that felt most promising, and then ask ChatGPT to refine them. She came back overjoyed. ChatGPT had not gotten smarter; she had gotten better at prompting.

This is my favorite gambit: ask AI for many options, delve deeper into the promising ones, and most importantly, if at first you don’t succeed, prompt, prompt again!

What follows is practical advice on how to use AI as a power tool rather than a slot machine. For a simple request, it’s overkill, but if you’re serious about prompting, read on.

Anthropic’s own guidance for prompting Claude contains a helpful hint: treat the model as a brilliant but literal-minded new employee on their first day. They are capable. They are also new. They will do exactly what you ask, so you have to ask exactly what you want.


The Anthropic team’s golden rule is to show your prompt to a colleague with no context and ask whether they could follow it. If the answer is no, the model can’t either. This principle generates a handful of habits that lift output quality immediately, before any of the more advanced techniques come into play.

One caveat from me, though: don’t think of the model as a person. It’s not. The “brilliant new employee” framing is a useful starting point, but it’s a metaphor, not reality. A new hire asks follow-up questions, remembers what you said yesterday, and notices when an instruction is dumb. Claude does none of that by default. Lean on the metaphor to remember to be specific and provide context, but drop it the moment you start to expect human judgment that just isn’t there.

Here’s the playbook, organized as a list for easy reference and periodic review.

Be specific about format, length, audience, and constraints.


Vague prompts produce vague output. The fix is to say what you actually want.

  • Before: Write about marketing trends.
  • After: Analyze the three most significant B2B SaaS marketing trends from the past six months. For each, give one company example and a one-sentence assessment of whether the trend will accelerate or plateau. Write it as a 400-word brief for a non-technical board.

Improving prompt quality is often simply a matter of stating constraints. Vague prompts produce safe, hedged, encyclopedic answers because the model has no signal about what to optimize for and defaults to coverage. Specific prompts produce opinionated, useful answers because the constraints eliminate the safe-but-useless options. Asking for “three” instead of “some” forces ranking. Asking for “accelerate or plateau” forces a call. Asking for “a board brief” determines what gets cut. Each constraint you add is a decision the model no longer gets to dodge.

Provide a few examples.

This is the highest-leverage move in everyday prompting. Models pick up patterns from examples faster than from descriptions.

  • Before: Turn these meeting notes into action items.
  • After: Turn these meeting notes into action items. Match this format: Example 1: Note: “Sarah will look into the pricing question and get back to us next week.” Action item: Sarah → research pricing options → due next Friday. Example 2: Note: “We agreed to push the launch.” Action item: Team → revise launch timeline → due before Monday’s standup. Now do the same for these notes: [paste]
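For repeated tasks, the pattern above is easy to script. Here is a minimal sketch in Python that assembles a few-shot prompt from worked examples; the helper name and the example pairs are illustrative, not part of any SDK.

```python
# Build a few-shot prompt by prepending worked examples to the task.
# The example pairs and template below are illustrative only.

FEW_SHOT_EXAMPLES = [
    ("Sarah will look into the pricing question and get back to us next week.",
     "Sarah -> research pricing options -> due next Friday"),
    ("We agreed to push the launch.",
     "Team -> revise launch timeline -> due before Monday's standup"),
]

def build_few_shot_prompt(task: str, notes: str) -> str:
    """Assemble a prompt that shows the model the desired output format."""
    lines = [task, "Match this format:"]
    for i, (note, action) in enumerate(FEW_SHOT_EXAMPLES, start=1):
        lines.append(f'Example {i}: Note: "{note}" Action item: {action}')
    lines.append(f"Now do the same for these notes: {notes}")
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Turn these meeting notes into action items.",
    "Priya will draft the Q3 budget by Thursday.",
)
print(prompt)
```

The point is that the examples carry the format; the model imitates the pattern rather than parsing a description of it.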

Tell the model what to do, not what not to do.

Negative instructions are easier to violate than positive ones. Reframing in the affirmative gets you cleaner results.

  • Before: Don’t be too formal. Don’t use jargon. Don’t make it boring.
  • After: Write in a warm, conversational tone, the way a smart colleague would explain this over coffee. Use plain English and short sentences.

Match the style of your prompt to the style of the output you want.

This one surprises some people. If your prompt is full of bullets and bold text, the model will return bullets and bold text. If you want flowing prose, write in flowing prose.

These habits sound modest. But applied together, they take prompts from the level my friend was operating at, where ChatGPT seemed unhelpful, to a level where AI yields dividends left and right. The advanced techniques in the rest of this piece build on this foundation, but they won’t rescue a prompt that fails the basics.

Beyond the basics, here is a set of effective habits that show up in guidance from OpenAI, Google, working developers, and the people who build production AI systems for a living. These are not techniques so much as workflow disciplines.

Iterate; treat prompting as test-driven.


Your first prompt is a draft. The most experienced practitioners build small sets of test cases (the inputs they care about), run their prompt across them, and refine until the output is consistently good. Several open-source toolkits exist to formalize this loop.

  • Before: Write the prompt. Try it on one example. Looks good. Ship it.
  • After: Write the prompt. Pick five inputs, including the awkward edge cases. Run the prompt on all five. Where it fails, change one thing in the prompt and retest. Keep the version that works on the most cases.
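The before/after loop above can be sketched in a few lines. This is a toy harness: `call_model` and `passes` are stand-ins for a real LLM call and a real acceptance check, stubbed out here so the structure of the loop is runnable without credentials.

```python
# Sketch of a test-driven prompting loop: score candidate prompts
# against a fixed set of test inputs and keep the best performer.

def call_model(prompt: str, case: str) -> str:
    # Stub: a real implementation would send prompt + case to an LLM.
    return f"summary of {case}" if "Summarize" in prompt else case

def passes(output: str) -> bool:
    # Your acceptance check: format, length, required fields, etc.
    return output.startswith("summary of")

test_cases = ["routine memo", "empty thread", "angry customer email",
              "non-English notes", "500-page attachment"]

candidate_prompts = ["Handle this input.", "Summarize this input in one line."]

scores = {}
for prompt in candidate_prompts:
    scores[prompt] = sum(passes(call_model(prompt, c)) for c in test_cases)

best = max(scores, key=scores.get)
print(best, scores[best])  # keep the version that passes the most cases
```

When a prompt fails a case, change one thing and rerun the whole set, so you know what helped and never regress on inputs that used to work.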

Specify a definition of done.

OpenAI’s own guidance for GPT-5 stresses telling the model what counts as a finished answer. Without that, the model decides for itself, often by stopping at the first plausible-looking response.

  • Before: Help me debug this Python error.
  • After: Help me debug this Python error. You are done when: (1) you have identified the root cause, (2) you have proposed a specific fix with the corrected code, and (3) you have explained why the original failed. If you are not confident on any of those three, say so explicitly rather than guessing.

Calibrate effort to the task.

Modern reasoning models have effort or thinking dials. Low effort for extraction and triage; high for synthesis and strategy. Most users leave them on default and pay for it on hard problems.

  • Before: Summarize this 80-page report. 
  • After: Set thinking effort to high. Read the entire report. Identify the three most important findings, the two weakest claims, and the one question I should ask the authors. Cite page numbers.
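As a concrete sketch, here is what the two requests might look like as payloads. The field names follow OpenAI's Responses API as I understand it (a `reasoning` object with an `effort` level), and the model name is a placeholder; treat the exact shape as an assumption and check your provider's documentation.

```python
# Build request payloads for low- vs. high-effort runs of the same model.
# Payload shape is an assumption modeled on OpenAI's Responses API.

def make_request(task: str, effort: str) -> dict:
    assert effort in {"low", "medium", "high"}
    return {
        "model": "gpt-5",                 # placeholder model name
        "reasoning": {"effort": effort},  # the "thinking dial"
        "input": task,
    }

triage = make_request("Extract the invoice number from this email.", "low")
deep = make_request(
    "Identify the three most important findings, the two weakest claims, "
    "and the one question I should ask the authors. Cite page numbers.",
    "high",
)
print(triage["reasoning"], deep["reasoning"])
```

Low effort keeps extraction cheap and fast; high effort buys deliberation where the task actually needs it.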

Inject current or proprietary context directly.

Models don’t have access to your internal documents, so paste in the relevant material. And be careful to avoid jargon and abbreviations the model may not know (instead of the acronym PMO, say “Project Management Office”).

  • Before: How should I structure a related work section comparing my framework to prior agent governance proposals?
  • After: Below is my current draft related work section, plus PDFs of the three papers I am positioning against (pasted). Based only on these sources, identify points of overlap I have not yet acknowledged and any claims in my draft that the cited papers would not actually support.

Build a personal prompt library.

This is a power move for a pro. The patterns that worked yesterday are likely to work tomorrow. Stop rewriting them from scratch. Save the prompts that consistently produce good results, organized by task type. Treat them as living documents, not one-off attempts.

  • Before: Open a new chat. Type out the framing, the constraints, the examples, and the question from memory. Watch yourself forget two of them.
  • After: Open your prompt library. Copy the “draft a memo for my manager” template. Paste in today’s specific topic and source material. Run.
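A prompt library doesn't need tooling; a dictionary of templates gets you most of the way. This sketch uses hypothetical template names and plain `str.format` placeholders.

```python
# A minimal personal prompt library: named templates with placeholders,
# stored as plain data you can keep in a text file or gist.

PROMPT_LIBRARY = {
    "memo_for_manager": (
        "Write a 300-word memo for my manager about {topic}. "
        "Lead with the recommendation, then give three supporting points. "
        "Source material: {source}"
    ),
    "meeting_actions": (
        "Turn these meeting notes into action items in the form "
        "'owner -> task -> due date': {source}"
    ),
}

def render(name: str, **fields: str) -> str:
    """Fill a saved template; fails loudly if a placeholder is missing."""
    return PROMPT_LIBRARY[name].format(**fields)

prompt = render("memo_for_manager",
                topic="the Q3 infrastructure budget",
                source="[paste budget spreadsheet summary]")
print(prompt)
```

Each template encodes the framing, constraints, and format you already know work, so the only thing you type fresh is today's topic and source material.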

Here are some key don’ts:

Don’t tell reasoning models to “think step by step.”

Models like OpenAI’s o-series and GPT-5 thinking already do that internally. Adding the instruction can hurt rather than help. Save it for the everyday models.

Don’t lean on “do not” or “never” instructions for everything.


Models, especially Gemini, can over-index on broad negative constraints and degrade on basic reasoning. Prefer positive framing: tell the model what to do.

Don’t trust polished prose as evidence of correctness.

Hallucinations are most dangerous when they are well-written. As I pointed out in How to Read with AI, you have to carefully verify AI output.

Don’t use aggressive language (“CRITICAL: You MUST…”).


Modern models are highly responsive to ordinary instructions. Aggressive phrasing can produce overcautious output and trigger refusals. Use normal language.

Don’t include undefined acronyms in your prompt.

They measurably degrade output. For research on the impact of prompt changes, see this recent paper on Brittlebench.

Don’t change three things at once when iterating.


When a prompt isn’t working, change one variable, test, then change the next. Otherwise you don’t know what helped.

Don’t assume that the same prompt works across models.

Different model families need different prompting. The same instruction can help one and hurt another. The temperature and effort settings that work for GPT are not the ones that work for Claude or Gemini.

Don’t treat the first answer as the final one.


Failing to iterate is a common failure mode in everyday AI use. Here’s a trick for making AI better at multi-step tasks: after each attempt, have the AI write a short critique of what went wrong and tuck that note into its memory for the next try. No fancy mechanics, just the model “talking to itself” in plain English. On the next attempt, it reads its own past reflections and adjusts. This loop can produce meaningful gains over one-shot prompts.
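The reflection loop is simple enough to sketch. In this toy version, `attempt` and `critique` are stubs standing in for two model calls; the point is the shape of the loop, where each critique is appended to a running memory that the next attempt can read.

```python
# Self-critique loop: after each attempt, the model writes a short note
# about what went wrong; the note is fed back into the next attempt.

def attempt(task: str, reflections: list[str]) -> str:
    # Stub: a real call would include the reflections in the prompt.
    return task if not reflections else task + " (revised: " + reflections[-1] + ")"

def critique(output: str) -> str:
    # Stub: a real call would ask the model to review its own output.
    return "" if "revised" in output else "too generic; name concrete steps"

memory: list[str] = []
output = ""
for _ in range(3):                      # bounded retries
    output = attempt("draft a rollout plan", memory)
    note = critique(output)
    if not note:                        # empty critique = good enough
        break
    memory.append(note)                 # tuck the note into memory
print(output)
```

No fancy mechanics: the "memory" is just accumulated plain-English notes the model wrote about its own earlier attempts.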

The people who get the most out of AI aren’t the ones with the best prompt templates. They’re the ones who treat the model as a powerful tool for advancing their work. You don’t need to show up with perfect clarity about what you want. A good dialog can get you there, surfacing options and questions you’d have missed on your own. What it can’t do is recognize the right answer when it appears. That part is still on you.


Editor’s note: GeekWire publishes guest opinions to foster informed discussion and highlight a diversity of perspectives on issues shaping the tech and startup community. If you’re interested in submitting a guest column, email us at tips@geekwire.com. Submissions are reviewed by our editorial team for relevance and editorial standards.


TechCrunch Mobility: How do you issue a ticket to a robotaxi?


Welcome back to TechCrunch Mobility — your central hub for news and insights on the future of transportation. To get this in your inbox, sign up here for free — just click TechCrunch Mobility!

We’re going to do a bit of a deep dive today, which may make this newsletter look a little different than normal. There is a reason! 

This newsletter is not region-specific, but sometimes there are policies at the state level that have widespread implications for tech companies and startups alike. Which brings me to California and the new autonomous vehicle testing and deployment rules issued this week by the state’s Department of Motor Vehicles. 

There are two new sets of rules — collectively 100 pages long — that cover requirements for the testing and deployment of AVs. I spent the past few days speaking to engineers and policy folks at AV companies and discovered that they have strong opinions, though few want to voice them publicly. But thanks to the public comment period on these regulations, we have some insight into what the industry supported and what it did not.


The regulations include new, more robust requirements for data collection and sharing, training, and operations. Here are a few items that stuck out and what insiders told me.

How do you ticket a robotaxi? Under these new rules, law enforcement can cite AV companies for traffic violations committed by their vehicles. The rule, called “Notice of Autonomous Vehicle Noncompliance,” requires the manufacturer (meaning the robotaxi company) to report the violation to the DMV within 72 hours of receiving it from law enforcement. 

I’ve heard a number of interpretations of this rule and how it will be implemented, but it appears there is not a monetary fine attached to these violations. Instead, these violations are another piece of data that the DMV can use to identify problems and take action if needed.


Insiders told me that the data is actionable and more important than a monetary fine. My question: Why not both? 


The good news for industry: The DMV will now allow heavy-duty vehicles equipped with autonomous vehicle tech to test and eventually deploy on public roads. Self-driving truck companies are happy with this outcome. Daniel Goff, VP of external affairs at Kodiak, told me the company is already working on the required documentation to apply for a permit. 

The burden for the industry: The word that came up in every conversation I had with someone in the AV industry was “burdensome.” And it was always used in reaction to the new data collection and sharing regulations. 

Goodbye, disengagement reports; hello, malfunctions: Others were happy to see annual disengagement reporting disappear. Disengagement reports, which detailed instances when human drivers had to take over control due to technology failures or safety concerns, have been controversial because companies use varying standards. This has made it impossible to compare the results or rate the proficiency of autonomous vehicle technology. 

That entire section has been removed and replaced with a requirement to report “dynamic driving task performance relevant system failure.” This may seem like semantics — trading one jargony phrase for another. Insiders tell me that while it is not a perfect metric, it is clearer than its predecessor. That doesn’t mean it is beloved either. 


There is a lot more in these documents, including a requirement to provide annual updates to first responder interaction plans, access to manual vehicle override systems, two-way communication links with 30-second response times, and updated training requirements to ensure safe and timely interactions with first responders.

My question for you, reader: Do these rules go too far, or are they appropriate, providing the kind of reporting and data collection needed to hold these companies accountable? Sign up for the Mobility newsletter to vote in our polls!

A little bird


We had a lot of little birds talk to us about the new California AV rules, so nothing new to add here. But remember, you can always send us tips. Here’s how.

Got a tip for us? Email Kirsten Korosec at kirsten.korosec@techcrunch.com or my Signal at kkorosec.07, or email Sean O’Kane at sean.okane@techcrunch.com.

Deals!


BMW i Ventures launched a new $300 million fund with a timely thesis: AI will reshape how the automotive industry operates. The fund will invest in early-stage through Series B startups in North America and Europe that are working on agentic AI and physical AI as well as industrial software, advanced materials, and manufacturing and supply-chain technologies. This third fund brings the firm’s total capital under management to $1.1 billion. 

Other deals that got my attention …


Sereact, a German robotics startup, raised $110 million in a Series B funding round led by VC Headline. Other investors include Bullhound Capital, Felix Capital, Daphni, Air Street Capital, Creandum, and Point Nine.

Spirit Airlines is preparing to shut down after failing to secure a $500 million lifeline from the government, the WSJ reports. The company is expected to cease operations around 3 a.m. ET Saturday.

Notable reads and other tidbits


China suspended issuing new licenses for autonomous vehicles after dozens of Baidu’s Apollo Go robotaxis suddenly stopped last month, Bloomberg reported.

Google‘s Gemini AI assistant is hitting the road in millions of vehicles.

Faraday Future paid around $7.5 million to a company controlled by its founder, Jia Yueting, in 2025, senior reporter Sean O’Kane discovered in a recent SEC filing. 


Rivian reported earnings this week and one item that stood out to us — and to many others — was the downsizing of its DOE loan from $6.6 billion to $4.5 billion. That loan restructuring comes with changes to its Georgia factory. Instead of two 200,000-vehicle capacity structures on the Georgia site, Rivian will now build a 300,000-vehicle capacity factory and leave the adjacent “pad” untouched and ready for future development. Analysts didn’t necessarily view this as negative but did position this as rightsizing. Barclays, for instance, views the modification as Rivian adjusting to the current EV environment, according to a research note published Friday. Barclays also stated it didn’t believe Rivian currently plans to build the second plant at Georgia, “at least not until early/mid next decade.”

Tesla launched a Semi-Charging for Business program, which includes a new product called the Basecharger that is designed for depot and overnight use.

Uber has tapped Hertz to clean, charge, and fix its Lucid Motors robotaxis. This announcement left us with a cheeky question: How many companies does it take to launch a robotaxi service?

Uber customers in the United States can now book hotels directly through the app, one of several new features announced this week that pushes far beyond the company’s original ride-hailing purpose and even deeper into its users’ lives. At launch, Uber customers will have access to more than 700,000 hotels worldwide through a partnership with Expedia Group, the travel company that Uber CEO Dara Khosrowshahi led for 12 years.


Vay, a remote driving tech startup, says it has grown its fleet to 175 vehicles on the road and has surpassed 60,000 rides.


$15 Thrift Store Lens Meets Orbiting Space Station


Saveitforparts picked up a heavy Sigma 400mm XQ lens and its matching 2x teleconverter at a local thrift store for $14.99. He carried the whole assembly home, attached a simple adapter, and mounted it on a decade-old Sony NEX-3 digital camera. The goal looked simple on paper: point the bargain rig skyward when the International Space Station passed overhead and record whatever showed up in the frame.



He began by checking the pass predictions on n2yo.com and the NASA Spot the Station app to determine when and where the ISS would cross the sky above his position. Timing was critical since the ISS moves so quickly that one second too late leaves you with nothing but an empty blue sky. He placed the lens on its built-in tripod foot, approximately oriented it along the projected route, and simply waited for the bright little dot formed by sunlight reflecting off the station’s solar panels.


Getting a clear focus was difficult from the start, as dirt had gathered inside the old lens over time, scattering light and washing out fine detail. So he practiced on the moon first, adjusting the focus until the craters looked sharp in the viewfinder. Getting the shutter speed and exposure right required a delicate balance: too slow and the station blurred into a streak, too fast and the small dot vanished against the sky.

The first few passes produced exactly what he expected: a small white speck dead center in each frame. He switched to a Canon HF G70 camcorder with a 2.2x telephoto converter, which produced video rather than stills, but the results were still unimpressive. He was able to re-capture the station as a moving blob, occasionally catching a glimpse of its center body and extended solar arrays when the alignment and illumination conditions were exactly right.

He was fired up and undeterred, so he turned to solar transits next. These events last less than 4 seconds and occur low on the horizon, with the station 12,000 kilometers away. He put a pair of solar viewing glasses over the camcorder lens to cut the sun’s brightness, programmed the camera to capture a rapid burst of photos at 1/250th of a second, and waited for the predicted moment. Morning atmospheric haze added another layer of distortion, but he still managed to capture a few frames that showed the station as a clear black speck traveling across the sun’s disk. Stacking those frames in software confirmed that the timing matched the transit calculation exactly.
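A back-of-the-envelope calculation shows why the window is so tight. The ISS orbital speed and the sun's angular diameter below are standard figures; the slant range is an assumed round number, so plug in your own pass geometry.

```python
import math

# Rough estimate of how long the ISS takes to cross the sun's disk.
# Motion is taken as perpendicular to the line of sight (worst case).

ISS_SPEED_KM_S = 7.66            # typical ISS orbital speed
SUN_ANGULAR_DIAMETER_DEG = 0.53  # apparent size of the sun
slant_range_km = 1500.0          # assumed observer-to-station distance

# Angular rate of the station across the sky, in degrees per second.
angular_rate_deg_s = math.degrees(ISS_SPEED_KM_S / slant_range_km)

transit_s = SUN_ANGULAR_DIAMETER_DEG / angular_rate_deg_s
print(f"{transit_s:.1f} s")
```

Even generous assumptions put the crossing at a couple of seconds, which is why a pre-programmed burst at the predicted moment beats trying to react by hand.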

Every attempt revealed the same limitations. The total focal length reached a whopping 800mm on the still camera and even more on the camcorder, but the station remained too far away to capture any real structural detail. Hand-tracking was a pain: with no motorized mount to speak of, any tiny movement during the limited window could throw the target straight out of view. A full moon transit was cut short by a bank of clouds and a run of bad luck.


NYT Connections hints and answers for Monday, May 4 (game #1058)


Looking for a different day?

A new NYT Connections puzzle appears at midnight each day for your time zone – which means that some people are always playing ‘today’s game’ while others are playing ‘yesterday’s’. If you’re looking for Sunday’s puzzle instead then click here: NYT Connections hints and answers for Sunday, May 3 (game #1057).

Good morning! Let’s play Connections, the NYT’s clever word game that challenges you to group answers in various categories. It can be tough, so read on if you need Connections hints.


OpenAI opens ChatGPT subscriptions to OpenClaw’s 3.2M users as Anthropic blocks Claude access to the AI agent platform


TL;DR

OpenAI has opened ChatGPT subscriptions to OpenClaw, the open-source AI agent framework with 346,000 GitHub stars and 3.2 million users, allowing subscribers to run autonomous agents via GPT-5.4 for $23 per month. The move is the opposite of Anthropic’s decision to block Claude subscriptions from OpenClaw in April, creating a competitive split where OpenAI bets on distribution and Anthropic protects margins.

 


Sam Altman posted on X at 2:33 a.m. on 2 May: “you can sign in to openclaw with your chatgpt account now and use your subscription there! happy lobstering.” The announcement, delivered with the casual register of a founder pushing a minor product update, is anything but minor. OpenAI has made its ChatGPT subscription the authentication and billing layer for OpenClaw, the open-source AI agent framework that became the fastest-growing project in GitHub history, accumulated 346,000 stars in under five months, and is now used by more than three million people. ChatGPT Plus subscribers can log in via OAuth, access GPT-5.4 through the Codex endpoint, and run autonomous AI agents on their own hardware for $23 per month total. OpenAI did not build the most popular AI agent in the world. It hired the developer, backed the foundation, and opened the login.

The lobster

OpenClaw was created in November 2025 by Peter Steinberger, an Austrian developer who had previously sold a software company for $100 million and was experimenting with AI coding tools in a Madrid cafe. The first version was called Clawdbot, a play on Anthropic’s Claude with a lobster mascot. Anthropic filed a trademark complaint. Steinberger renamed it Moltbot, then, because that “never quite rolled off the tongue,” renamed it again to OpenClaw. The lobster stayed.

The product is a locally hosted AI agent that connects to large language models, Claude, GPT, DeepSeek, and others, and operates through the messaging apps people already use: WhatsApp, Telegram, Signal, Discord, Slack, iMessage, Microsoft Teams. It manages calendars, sends emails, organises files, writes code, browses the web, and executes multi-step workflows autonomously. The data stays on the user’s machine. The agent runs continuously in the background. Jensen Huang called it “the most popular open-source project in the history of humanity” at Nvidia’s GTC conference in March. It surpassed React’s ten-year GitHub record in 60 days.

In February, Altman announced that Steinberger was joining OpenAI to “drive the next generation of personal agents” and that OpenClaw would be moved to an independent foundation with OpenAI’s continued support and funding. Sequoia handed out 200 engraved Mac Minis at an AI event, a tribute to an infrastructure layer that venture capitalists could not own. The signal from Silicon Valley’s most influential firms was clear: the agent layer was going to be open, and business models would have to be built around it rather than on top of it.

The opposite bets

On 4 April, Anthropic blocked Claude Pro and Max subscribers from using their flat-rate subscription plans with OpenClaw and other third-party AI agent frameworks. The reason was cost: OpenClaw agents running autonomously can generate thousands of API calls per day, consuming far more compute than a human typing queries into a chat window. Anthropic decided that unlimited subscription access through an agent framework was economically unsustainable and shut it down.

Anthropic’s decision to ban OpenClaw from Claude subscriptions was a defensive move to protect margins. OpenAI’s decision to do the opposite, to open ChatGPT subscriptions to OpenClaw, is an offensive one. By making ChatGPT the default backend for the world’s most popular agent framework, OpenAI is betting that the volume of new subscribers will more than compensate for the increased compute cost per user. The economics only work if OpenClaw converts a significant number of its 3.2 million users into paying ChatGPT subscribers. If it does, OpenAI will have acquired a distribution channel for its subscription product that no amount of marketing could have built.

The competitive dynamics are stark. Anthropic looked at OpenClaw and saw a cost problem. OpenAI looked at the same product and saw a distribution opportunity. One company locked the door. The other opened it and handed out the keys.

The risks

OpenClaw’s rapid growth has been accompanied by equally rapid security failures. In late January, a critical remote code execution vulnerability, CVE-2026-25253, was disclosed: any website a user visited could silently connect to the agent’s local server through an unvalidated WebSocket, chaining a cross-site hijack into full code execution on the user’s machine. Security researchers audited ClawHub, OpenClaw’s skills marketplace, and found 824 confirmed malicious entries out of 10,700 available skills, with 335 traced to a single coordinated attack operation. More than 30,000 OpenClaw instances were found exposed on the public internet without authentication. Moltbook, the social layer for agents, suffered a breach that exposed 1.5 million API tokens and thousands of private conversations.
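The class of bug behind the WebSocket vulnerability, a local server that accepts handshakes from any page the user happens to visit, is conventionally closed by validating the `Origin` header before accepting a connection. A minimal sketch of that check follows; the allowed origins and function name are illustrative, not OpenClaw's actual patch.

```python
# Sketch of Origin validation for a local WebSocket server. A handshake that
# skips this check lets any website the user visits connect to localhost,
# which is the cross-site hijack pattern described above.

ALLOWED_ORIGINS = {"http://localhost:3000", "app://openclaw"}  # illustrative

def origin_allowed(headers: dict[str, str]) -> bool:
    origin = headers.get("Origin")
    # Reject missing or unrecognised origins; never default to allow.
    return origin in ALLOWED_ORIGINS

assert origin_allowed({"Origin": "http://localhost:3000"})
assert not origin_allowed({"Origin": "https://evil.example"})
assert not origin_allowed({})  # no Origin header at all
```

The key design point is the default-deny: an absent header is treated the same as a hostile one.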

The vulnerabilities have been patched in current versions. The problem is that a significant portion of the installed base is running older, unpatched versions. Anything before version 2026.1.30 remains vulnerable to at least some of the disclosed exploits, and attackers are still targeting them. OpenAI’s decision to tie its ChatGPT subscription to OpenClaw means that OpenAI’s brand, its billing system, and its user credentials are now flowing through an open-source platform that has had more security incidents in four months than most enterprise software accumulates in a decade.
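Deciding whether an installed build predates the patched 2026.1.30 release reduces to comparing the dotted, date-style version components numerically. A small sketch, with illustrative function names:

```python
# Compare date-style versions like "2026.1.30" numerically, component by
# component. Python tuples compare element-wise, which gives the right order.

def parse_version(v: str) -> tuple[int, ...]:
    # "2026.1.30" -> (2026, 1, 30)
    return tuple(int(part) for part in v.split("."))

PATCHED = parse_version("2026.1.30")

def predates_patch(installed: str) -> bool:
    return parse_version(installed) < PATCHED

assert predates_patch("2026.1.29")       # older build, still exposed
assert not predates_patch("2026.1.30")   # the patched release itself
assert not predates_patch("2026.2.1")    # newer builds are fine
```

Naive string comparison would get this wrong ("2026.1.9" > "2026.1.30" lexicographically), which is why the components are parsed to integers first.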

The ecosystem

Nvidia turned OpenClaw into an enterprise platform with NemoClaw, adding security hardening, compliance features, and integration with Nvidia’s inference infrastructure. Tencent launched ClawPro, an enterprise AI agent platform built on OpenClaw’s architecture and optimised for the Chinese market. Meta launched Manus AI as a desktop agent, a competing approach that runs as a native application rather than through messaging apps. The agent layer is now a battlefield where every major technology company is staking a position.

The ChatGPT subscription integration positions OpenAI at the centre of this ecosystem without requiring it to own or control the agent framework itself. OpenClaw remains open source, governed by an independent foundation, and compatible with multiple language model providers. But with Anthropic blocking access and OpenAI enabling it, the practical effect is that OpenClaw’s three million users are being funnelled toward ChatGPT as their default model. The foundation structure gives OpenAI deniability. The subscription integration gives it distribution.

The model

The economics are unusual. A ChatGPT Plus subscription costs $20 per month. OpenClaw Launch Lite, a hosted management layer, costs $3 per month. For $23, a user gets access to GPT-5.4 through OpenClaw’s agent framework without per-token API charges. This is substantially cheaper than using the OpenAI API directly, which would cost hundreds of dollars per month at the volume an autonomous agent generates. OpenAI is subsidising agent usage through its subscription tier, betting that the lifetime value of a subscriber who uses ChatGPT through OpenClaw is higher than the compute cost of serving their agent’s requests.
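The subsidy arithmetic can be made concrete with a back-of-envelope comparison. Only the $20 + $3 subscription total comes from the article; the per-token rate and daily volume below are assumptions chosen purely for illustration, not OpenAI's actual pricing.

```python
# Back-of-envelope cost comparison. The subscription figures are from the
# article; the API rate and token volume are hypothetical illustrations.

SUBSCRIPTION_USD = 20 + 3          # ChatGPT Plus + OpenClaw Launch Lite, per month

api_rate_per_1k_tokens = 0.01      # assumed blended API rate, USD
tokens_per_day = 2_000_000         # assumed volume for an always-on agent
days_per_month = 30

api_cost = api_rate_per_1k_tokens * (tokens_per_day / 1000) * days_per_month
print(f"subscription: ${SUBSCRIPTION_USD}/mo vs API equivalent: ${api_cost:.0f}/mo")
```

Under these assumed numbers the flat $23 subscription replaces roughly $600 of metered API spend, which is the order of magnitude ("hundreds of dollars per month") the article describes OpenAI absorbing.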

This is the same logic that drove mobile carriers to subsidise smartphones: give away the hardware economics to lock in the subscription revenue. OpenAI is giving away the agent access to lock in the ChatGPT subscription. If the bet works, ChatGPT becomes not just a chatbot but the default intelligence layer for a generation of autonomous AI agents that manage people’s digital lives. If it does not work, OpenAI will have opened its most valuable product to a compute-intensive use case that burns through inference capacity without generating proportional revenue.

Altman’s post was two sentences and a lobster joke. The decision behind it is one of the most consequential distribution bets OpenAI has made since launching ChatGPT. The most popular open-source project in history now runs on your ChatGPT subscription. Whether that is a masterstroke or a margin trap depends entirely on whether three million lobster enthusiasts convert into paying customers, and whether the agent they are running on their laptops is secure enough to deserve the trust that both OpenAI and its subscribers are placing in it.

AI Cameras are Being Deployed Across the Western US for Early Detection of Wildfires

The Associated Press reports:

On a March afternoon, artificial intelligence detected something resembling smoke on a camera feed from Arizona’s Coconino National Forest. Human analysts verified it wasn’t a cloud or dust, then alerted the state’s forest service and largest electric utility. One of dozens of AI cameras installed for the utility Arizona Public Service had spotted early signs of what came to be known as the Diamond Fire. Firefighters raced to the scene and contained the blaze before it grew past 7 acres (2.8 hectares).

As record-breaking heat and an abysmal snowpack raise concerns about severe wildfires, states across the fire-prone West are adding AI to their wildfire detection toolbox, banking on the technology to help save lives and property. Arizona Public Service has nearly 40 active AI smoke-detection cameras and plans to have 71 by summer’s end, and the state’s fire agency has deployed seven of its own. Another utility, Xcel Energy in Colorado, has installed 126 and aims to have cameras in seven of the eight states it serves by year’s end… ALERTCalifornia is a network of some 1,240 AI-enabled cameras across the Golden State that work similarly to the system in Arizona….

Pano AI, whose technology combines high-definition camera feeds, satellite data and AI monitoring, has seen a growing interest in its cameras since launching in 2020. They’ve been deployed in Australia, Canada and 17 U.S. states, including Oregon, Washington and Texas… Last year, its technology detected 725 wildfires in the U.S., the company said… Cindy Kobold, an Arizona Public Service meteorologist, said the technology notifies them about 45 minutes faster on average than the first 911 call.

This $170 retro-style dock gives your Mac mini a tiny screen and upgradeable storage

One of the more distinctive entries in that category comes from Wokyis: a retro-styled dock that adds NVMe storage, extra ports, and a small secondary display, all within a chassis designed to sit directly under the Mac mini.

NYT Strands hints and answers for Monday, May 4 (game #792)

Looking for a different day?

A new NYT Strands puzzle appears at midnight each day for your time zone – which means that some people are always playing ‘today’s game’ while others are playing ‘yesterday’s’. If you’re looking for Sunday’s puzzle instead then click here: NYT Strands hints and answers for Sunday, May 3 (game #791).

Strands is the NYT’s latest word game after the likes of Wordle, Spelling Bee and Connections – and it’s great fun. It can be difficult, though, so read on for my Strands hints.

‘This is fine’ creator says AI startup stole his art

You’ve seen this comic before: An anthropomorphic dog sits smiling, surrounded by flames, and says, “This is fine.”

It’s become one of the most durable memes of the past decade, and now AI startup Artisan seems to have incorporated it into an ad campaign — an ad for which KC Green, the artist who created the comic, said his art was stolen.

A Bluesky post seems to show an ad in a subway station featuring Green’s art, except the dog says, “[M]y pipeline is on fire,” and an overlaid message urges passersby to “Hire Ava the AI BDR.”

Quoting that post, Green said he’s “been getting more folks telling me about this” and that “it’s not anything [I] agreed to.” Instead, he said the ad has “been stolen like AI steals,” and he told followers to “please vandalize it if and when you see it.”

When TechCrunch sent Artisan an email asking about the ad, the company said, “We have a lot of respect for KC Green and his work, and we’re reaching out to him directly.” In a follow-up email, the company said it had scheduled time to speak with him.

Artisan has courted controversy with its ads before, specifically with billboards urging businesses to “Stop hiring humans” — although founder and CEO Jaspar Carmichael-Jack insisted that the message was about “a category of work,” not “humans at large.”

“This is fine” first appeared in Green’s webcomic “Gunshow” in 2013, and while he hasn’t disavowed the smiling-melting dog entirely (he recently turned the comic into a game), it’s clearly escaped from his control. And of course, Green is far from the only artist to see his meme-able art used in ways he finds objectionable.


But some artists have still taken action when their art is monetized or used in commercial ways without their permission, for example when cartoonist Matt Furie sued right-wing conspiracy theory site Infowars for using his character Pepe the Frog in a poster. (Furie and Infowars eventually settled.)

Green told TechCrunch via email that he will be “looking into [legal] representation, as I feel I have to.” Still, he said it “takes the wind out of my sails” that he has to take “time out of my life to try my hand at the American court system instead of putting that back into what I am passionate about, which is drawing comics and stories.”

Green added, “These no-thought A.I. losers aren’t untouchable and memes just don’t come out of thin air.”
