Crypto World

Sui Foundation Unveils Infrastructure Framework for Autonomous AI Agent Execution

TLDR:

  • Sui Foundation released a framework addressing infrastructure gaps for autonomous AI agents executing workflows. 
  • Platform provides shared verifiable state, atomic execution, and proof mechanisms for AI-driven operations. 
  • Traditional internet architecture lacks coordination needed for autonomous software operating at machine speed. 
  • Sui’s execution layer enables multi-step workflows to complete fully or fail cleanly without partial states.

Sui Foundation released a comprehensive framework on January 30, 2026, addressing infrastructure requirements for autonomous AI agents.

The platform enables AI systems to execute multi-step workflows with verifiable outcomes and shared state management.

Sui’s execution layer treats autonomous software actions as core functionality rather than supplementary features, addressing limitations in current internet architecture designed for human-driven interactions.

Infrastructure Gaps in Current Web Architecture

Traditional internet infrastructure rests on assumptions about human users that autonomous AI agents cannot satisfy.

Session timeouts, manual retries, and human intervention patterns create friction when software executes independently. APIs function as isolated endpoints without shared state coordination across different services and platforms.

Current web systems fragment authoritative information across multiple applications that lack common truth sources. Partial successes and ambiguous failures become problematic when AI agents operate without human oversight.

Reconciling outcomes across disparate systems introduces risks of duplication and inconsistency that humans can manage but autonomous software cannot.

Agentic workflows spanning multiple platforms compound these architectural weaknesses, turning execution into a chain of assumptions rather than a coordinated process.

Logs record events but require interpretation to determine authoritative outcomes. The shift from AI recommendations to actual execution introduces irreversible consequences requiring different trust mechanisms.

Actions trigger permanent changes, including bookings, resource allocations, and financial transactions; unlike advisory outputs, these cannot simply be reversed.

Authorization, intent alignment, and auditable outcomes become mandatory rather than optional features. The fundamental question evolves from whether systems produce plausible answers to whether they execute correct actions under proper constraints.

Sui’s Execution Layer Capabilities

Sui Foundation designed its platform with four foundational capabilities addressing autonomous agent requirements.

The network provides shared verifiable state allowing systems to determine current conditions, changes, and outcomes directly.

Rules and permissions travel with governed data and actions rather than requiring redefinition at system boundaries.

Atomic execution across workflows ensures multi-step processes complete fully or fail cleanly without partial states.

An agent booking travel can reserve flights, confirm hotels, and process payments as single operations. Either the entire workflow succeeds or nothing commits, eliminating reconciliation needs and ambiguity.
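
The booking example can be sketched in a few lines. Sui's actual programming model is Move-based and on-chain; the Python below is only an illustration of the all-or-nothing semantics described, and every step name (`reserve_flight`, `confirm_hotel`, `charge_payment`) is hypothetical.

```python
# Illustrative sketch only: mimics atomic multi-step execution in plain
# Python. Steps run against a staged copy of state; the live state is
# updated at a single commit point, so a failure leaves nothing behind.

class WorkflowAborted(Exception):
    """Raised when any step fails; no partial state survives."""

def run_atomic(steps, state):
    """Apply every step to a staged copy; commit only if all succeed."""
    staged = dict(state)          # work on a copy, not the live state
    for step in steps:
        try:
            step(staged)          # each step mutates the staged copy
        except Exception as exc:
            raise WorkflowAborted(f"{step.__name__} failed: {exc}") from exc
    state.update(staged)          # single commit point: all or nothing
    return state

# Hypothetical travel-booking steps
def reserve_flight(s): s["flight"] = "SFO->LIS"
def confirm_hotel(s):  s["hotel"] = "confirmed"
def charge_payment(s): s["paid"] = True

ledger = {}
run_atomic([reserve_flight, confirm_hotel, charge_payment], ledger)
```

The single commit point is what removes the partial states and after-the-fact reconciliation the article describes.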

The platform generates proof of execution establishing how actions occurred, under which permissions, and whether intended rules were followed.

Verifiable evidence replaces after-the-fact reconstruction and interpretation. Outcomes settle as definitive results rather than needing to be pieced together from multiple log sources.
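
As a rough illustration of what such a proof-of-execution record could contain, here is a minimal sketch. The field names and the SHA-256 digest scheme are assumptions made for illustration, not Sui's actual receipt format.

```python
# Illustrative sketch only: a minimal "proof of execution" record of the
# kind the article describes -- what action ran, by which actor, under
# which permission, plus a digest that lets anyone check the record was
# not altered afterward.
import hashlib
import json

def execution_receipt(action: str, actor: str, permission: str, result: str) -> dict:
    body = {"action": action, "actor": actor,
            "permission": permission, "result": result}
    # Canonical serialization (sorted keys) so the digest is reproducible.
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "digest": digest}

def verify(receipt: dict) -> bool:
    body = {k: v for k, v in receipt.items() if k != "digest"}
    expected = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    return receipt["digest"] == expected

receipt = execution_receipt("book_flight", "agent-7", "travel:write", "committed")
```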

Sui groups data, permissions, and history together within the network for clarity on action scope and authorization. Complex tasks execute directly and settle as single outcomes instead of coordinating intent across applications afterward.

The execution layer coordinates intent, enforces rules, and settles outcomes by default without constant human oversight.

The foundation published detailed technical documentation covering verifiable inputs, execution accountability, value exchange mechanisms, and end-to-end system integration.

These components address data provenance, integrity, policy-aware access, licensing, payments, and agentic commerce handled safely through programmatic methods.

The framework positions execution infrastructure as the differentiator as AI agents assume greater operational responsibility beyond intelligence capabilities alone.

Anthropic Accuses Three Firms of Using Sophisticated Distillation Attacks

Artificial intelligence firm Anthropic has accused three AI firms of illicitly using its large language model Claude to improve their own models in a technique known as a “distillation” attack.

In a blog post on Sunday, Anthropic said that it had identified these “attacks” by DeepSeek, Moonshot, and MiniMax, which involve training a less capable model on the outputs of a stronger one.

Anthropic accused the trio of collectively generating “over 16 million exchanges” with the firm’s Claude AI across “approximately 24,000 fraudulent accounts.” 

“Distillation is a widely used and legitimate training method. For example, frontier AI labs routinely distill their own models to create smaller, cheaper versions for their customers,” Anthropic wrote, adding: 

“But distillation can also be used for illicit purposes: competitors can use it to acquire powerful capabilities from other labs in a fraction of the time, and at a fraction of the cost, that it would take to develop them independently.”
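
In this illicit sense, distillation amounts to harvesting a stronger model's completions as supervised training data for a weaker one. The sketch below illustrates the pattern with a stand-in teacher function; it is not any vendor's actual API.

```python
# Illustrative sketch only: "distillation" here means scraping a stronger
# model's outputs to build a fine-tuning dataset for a weaker student
# model. teacher_model is a stand-in, not a real completion endpoint.

def teacher_model(prompt: str) -> str:
    # Stand-in for a frontier model's completion endpoint.
    canned = {"What is 2+2?": "4", "Capital of France?": "Paris"}
    return canned.get(prompt, "I don't know.")

def harvest_distillation_set(prompts):
    """Collect (prompt, completion) pairs to fine-tune a student on."""
    return [(p, teacher_model(p)) for p in prompts]

dataset = harvest_distillation_set(["What is 2+2?", "Capital of France?"])
# Each pair becomes one supervised fine-tuning example for the student,
# which is why account volume (and exchange counts) matter at this scale.
```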

Anthropic said that the attacks focused on scraping Claude for a wide range of purposes, including agentic reasoning, coding and data analysis, rubric-based grading tasks, and computer vision. 

“Each campaign targeted Claude’s most differentiated capabilities: agentic reasoning, tool use, and coding,” the multi-billion-dollar AI firm said. 

Source: Anthropic

Anthropic says it was able to identify the trio via “IP address correlation, request metadata, infrastructure indicators, and in some cases corroboration from industry partners who observed the same actors and behaviors on their platforms.”

DeepSeek, Moonshot, and MiniMax are all AI companies based in China. All three have estimated valuations in the multi-billion-dollar range, with DeepSeek the most widely recognized internationally. 

Beyond the intellectual property implications, Anthropic argued that distillation campaigns from foreign competitors present genuine geopolitical risks. 

“Foreign labs that distill American models can then feed these unprotected capabilities into military, intelligence, and surveillance systems—enabling authoritarian governments to deploy frontier AI for offensive cyber operations, disinformation campaigns, and mass surveillance,” the firm said.