
Crypto World

An AI Crypto Agent Sent a ‘Beggar’ Six Figures, Then He Lost It All This Way


An AI agent just made a six-figure crypto mistake. And the market rewarded it.

On February 22, Lobstar Wilde, an autonomous AI agent running a Solana wallet, accidentally sent 52.4M LOBSTAR tokens to a random ‘beggar’ address.

It turned a costly error into one of the strangest accidents of the year.

Key Takeaways

  • The Error: A coding failure caused the agent to send 5% of the total token supply (valued between $250k and $441k) to a random user instead of a $400 donation.
  • The Reaction: Despite the massive loss of treasury funds, LOBSTAR price surged 190% as the community embraced the narrative of “agentic risk.”
  • The Aftermath: The recipient liquidated the tokens for just $40k due to slippage, while the project’s market cap climbed to $12 million.

What Happened: The AI Agent Fat-Finger Crypto Incident

It started as a joke: an X user sarcastically asked for 4 SOL to treat their uncle’s tetanus. Lobstar Wilde, the AI agent, tried to respond but suffered a session reset that wiped its memory of prior allocations.

The result was chaos. Instead of sending a small amount, the bot transferred 52.439M LOBSTAR tokens, about 5% of the total supply. On-chain data confirms the move, worth roughly $441,000 at the time.

The issue came down to a parsing mistake. The agent likely confused token decimals with raw integer values. A simple guardrail failure turned into a massive on-chain error.
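A decimals mix-up of this kind is easy to reproduce. The sketch below is hypothetical — the token precision and helper are assumptions for illustration, not details from the incident — and shows how scaling an already-raw integer a second time inflates a transfer by a factor of 10**decimals:

```python
# Hypothetical sketch of the suspected bug class: SPL tokens store balances
# as raw integers scaled by 10**decimals. If code mistakes a raw amount for
# a human-readable one and scales it again, the transfer explodes in size.
# The precision (6) and amounts here are illustrative, not from the incident.

DECIMALS = 6  # assumed token precision

def to_raw_units(ui_amount: float, decimals: int = DECIMALS) -> int:
    """Convert a human-readable token amount to raw on-chain units."""
    return round(ui_amount * 10**decimals)

# Intended: send a small, human-readable amount of tokens
correct_raw = to_raw_units(52.439)

# Buggy path: the already-raw integer is treated as a UI amount and scaled again
buggy_raw = to_raw_units(correct_raw)

print(correct_raw)  # raw units for 52.439 tokens
print(buggy_raw)    # one million times larger than intended
```

A guardrail as simple as capping any single transfer at a small fraction of the treasury would have caught the error before it hit the chain.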

How Did The ‘Beggar’ Lose The Money?

What looked like a life-changing win turned into a lesson in liquidity.

On paper, the recipient suddenly held $350K to $440K worth of tokens. In reality, the market could not absorb that size. Selling 5% of the supply into thin liquidity crushed the price. After heavy slippage, he walked away with roughly $37K to $40K.
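The liquidity math can be illustrated with a constant-product pool. Every reserve figure and the SOL price below are invented for illustration — only the sale size comes from the incident — but they show how dumping a position several times larger than the pool's token reserve destroys most of its paper value:

```python
# Illustrative constant-product (x * y = k) pool math showing why selling
# ~5% of a token's supply into thin liquidity nets far less than its
# "paper" value. Pool reserves and SOL price are assumptions, not the
# real LOBSTAR pool.

def swap_out(reserve_in: float, reserve_out: float, amount_in: float) -> float:
    """Output of a constant-product swap, ignoring fees."""
    k = reserve_in * reserve_out
    return reserve_out - k / (reserve_in + amount_in)

token_reserve = 6_000_000      # LOBSTAR in the pool (assumed)
sol_reserve = 330              # SOL in the pool (assumed)
sol_price_usd = 140            # assumed SOL price

sell_amount = 52_439_000       # ~5% of total supply dumped at once

spot_price = sol_reserve / token_reserve          # SOL per token
paper_value = sell_amount * spot_price * sol_price_usd

sol_received = swap_out(token_reserve, sol_reserve, sell_amount)
realized = sol_received * sol_price_usd

print(f"paper value:    ${paper_value:,.0f}")
print(f"realized value: ${realized:,.0f}")  # a small fraction of paper value
```

In a constant-product pool the seller keeps only reserve_in / (reserve_in + amount_in) of the paper value, so a sale nine times the pool's token reserve realizes roughly a tenth of its quoted worth — consistent with a six-figure holding shrinking to the low tens of thousands.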

Then came the second mistake.

Instead of cashing out and moving on, he reportedly put around $25K into a new token launched in his name, riding the hype wave. The momentum did not last. Liquidity faded, price collapsed, and the position unraveled fast.

By the end, the six-figure accident shrank to roughly $6K.

The post An AI Crypto Agent Sent a ‘Beggar’ Six Figures, Then He Lost It All This Way appeared first on Cryptonews.



AI Routers Can Steal Credentials and Crypto


University of California researchers have discovered that some third-party AI large language model (LLM) routers can pose security vulnerabilities that can lead to crypto theft. 

A paper measuring malicious intermediary attacks on the LLM supply chain, published on Thursday by the researchers, revealed four attack vectors, including malicious code injection and extraction of credentials.

“26 LLM routers are secretly injecting malicious tool calls and stealing creds,” said the paper’s co-author, Chaofan Shou, on X.

LLM agents increasingly route requests through third-party API intermediaries, or routers, that aggregate access to providers like OpenAI, Anthropic and Google. However, these routers terminate TLS (Transport Layer Security) connections and have full plaintext access to every message.

This means that developers using AI coding agents such as Claude Code to work on smart contracts or wallets could be passing private keys, seed phrases and sensitive data through router infrastructure that has not been screened or secured.

Multi-hop LLM router supply chain. Source: arXiv.org

ETH stolen from a decoy crypto wallet 

The researchers tested 28 paid routers and 400 free routers collected from public communities. 

Their findings were startling, with nine routers actively injecting malicious code, two deploying adaptive evasion triggers, 17 accessing researcher-owned Amazon Web Services credentials, and one draining Ether (ETH) from a researcher-owned private key.

The researchers prefunded Ethereum wallet “decoy keys” with nominal balances and reported that the value lost in the experiment was below $50, but no further details such as the transaction hash were provided. 

The authors also ran two “poisoning studies” showing that even benign routers become dangerous once they reuse leaked credentials through weak relays.

Hard to tell whether routers are malicious

The researchers said it was not easy to detect when a router was malicious.  

“The boundary between ‘credential handling’ and ‘credential theft’ is invisible to the client because routers already read secrets in plaintext as part of normal forwarding.” 

Another unsettling finding was what the researchers called “YOLO mode.” This is a setting in many AI agent frameworks where the agent executes commands automatically without asking the user to confirm each one.

Previously legitimate routers can be silently weaponized without the operator even knowing, and free routers may steal credentials while using cheap API access as the lure, the researchers found.

“LLM API routers sit on a critical trust boundary that the ecosystem currently treats as transparent transport.” 

The researchers recommended that developers using AI coding agents bolster client-side defenses, and suggested never letting private keys or seed phrases transit an AI agent session.
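One such client-side defense could look like the sketch below: a pre-flight scan that refuses to forward any prompt containing key-like material to a router. The patterns and the send_to_router helper are illustrative assumptions, not a production secret scanner or a real API:

```python
# Minimal client-side guard of the kind the researchers suggest: scan an
# outbound prompt for key-like material before it ever reaches an LLM
# router. Patterns are deliberately coarse and illustrative only.

import re

SECRET_PATTERNS = [
    re.compile(r"\b0x[0-9a-fA-F]{64}\b"),   # raw 32-byte hex private key
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),    # AWS access key ID format
]

def contains_secret(prompt: str) -> bool:
    """Return True if the prompt matches any known secret pattern."""
    return any(p.search(prompt) for p in SECRET_PATTERNS)

def send_to_router(prompt: str) -> str:
    """Hypothetical wrapper that blocks secret-bearing prompts."""
    if contains_secret(prompt):
        raise ValueError("refusing to send: prompt appears to contain a secret")
    # ... forward the prompt to the LLM router here ...
    return "sent"
```

Real scanners (for seed phrases, PEM blocks, provider tokens) need far broader pattern sets; the point is that the check must run on the client, before the plaintext crosses the router's trust boundary.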

The long-term fix is for AI companies to cryptographically sign their responses so the instructions an agent executes can be mathematically verified as coming from the actual model. 
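A minimal sketch of that idea follows, using standard-library HMAC purely as a stand-in: a real deployment would need asymmetric signatures (e.g. Ed25519) published by the model provider, so that a router could neither read nor forge the signing key. The key and message here are placeholders:

```python
# Sketch of signed-response verification. HMAC with a shared secret is used
# only because it is in the standard library; a provider would really use
# asymmetric signatures so intermediaries cannot forge them.

import hashlib
import hmac

PROVIDER_KEY = b"demo-key"  # placeholder; not how real keys would be distributed

def sign_response(body: bytes) -> str:
    """Provider-side: sign the response body."""
    return hmac.new(PROVIDER_KEY, body, hashlib.sha256).hexdigest()

def verify_response(body: bytes, signature: str) -> bool:
    """Agent-side: accept the body only if the signature checks out."""
    return hmac.compare_digest(sign_response(body), signature)

# An agent would refuse any response whose signature fails to verify,
# even if a router tampered with the plaintext in transit.
body = b'{"tool_call": "read_file"}'
sig = sign_response(body)
assert verify_response(body, sig)
assert not verify_response(b'{"tool_call": "exfiltrate"}', sig)
```

With verification in place, a malicious router injecting a tool call would produce a body that no longer matches the model's signature, and the agent could discard it.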
