Crypto World

BTC price falls with ETH, SOL while decred, AI-linked tokens advance: Crypto Markets Today

Decred (DCR), a token built for autonomy and decentralized governance, extended gains even as the broader market led by bitcoin struggled.

The token has risen 16% in the past 24 hours and now trades at $34.58, the highest since November, CoinDesk data show. It’s the best-performing top-100 token over the past four weeks, having gained more than 80% after a Feb. 8 change to its treasury rules.

Bitcoin, for its part, is facing renewed selling pressure, trading at around $67,000 in a weak follow-through after bouncing to $70,000 on Wednesday. The cryptocurrency is down 2% on a 24-hour basis, with ether (ETH), XRP (XRP), solana (SOL), and the CoinDesk 20 Index (CD20) registering similar losses.

Market participants remain cautious and are continuing to seek put options, or downside protection, in bitcoin. Deribit said that ETF holders and corporate treasuries are buying put options at the $60,000 strike expiring in six to 12 months.

Analysts said institutional flows are improving but not yet decisive, and traders should avoid taking big risks.

“Long-term investors may consider staggered accumulation (SIP-style allocation) near support zones rather than deploying lump sums at resistance,” Vikram Subburaj, CEO of crypto exchange Giottus.com, said in an email to CoinDesk.

Derivatives positioning

  • Cumulative crypto futures open interest (OI) has fallen back to recent multimonth lows of around $93.5 billion. The drop shows how quickly the optimism sparked by Wednesday’s bitcoin price bounce has fizzled out.
  • Major tokens, including bitcoin and ether, have seen capital outflows from futures as notional OI declined more than their spot prices.
  • The market-wide long-short ratio continues to show a dominance of shorts, or bearish bets.
  • OI in tether gold (XAUT) dropped another 11%, extending the decline from earlier this week. Gold-linked assets seem to have fallen out of favor lately.
  • Most large-cap tokens, including BTC and ETH, are again seeing negative perpetual funding rates. That means bearish plays are dominating the market once more.
  • Participation in CME bitcoin futures is falling, as shown by open interest hitting the lowest levels this year.
  • On Deribit, one-month bitcoin puts still trade at a 7% premium to calls, a sign of lingering concerns about further spot-price declines. The same is true for ether.
  • Bitcoin put spreads, a bearish strategy, accounted for 75% of the total block flow over 24 hours. In ETH’s case, traders chased put spreads and straddles (volatility strategies).
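
The put-over-call premium cited in the list above is a simple ratio of implied volatilities. The sketch below computes it from two IV quotes; the figures are hypothetical, chosen only to reproduce a roughly 7% skew, and are not market data.

```python
# Hypothetical illustration of the put-call skew metric cited above.
# The IV quotes are made up for the example; they are not market data.

def put_call_skew(put_iv: float, call_iv: float) -> float:
    """Premium of puts over calls, as a fraction of the call IV."""
    return (put_iv - call_iv) / call_iv

# A one-month put quoted at 57.8% IV vs. a comparable call at 54.0% IV
skew = put_call_skew(0.578, 0.540)
print(f"puts trade at a {skew:.1%} premium to calls")  # -> about 7.0%
```

When the ratio is positive, traders are paying up for downside protection; a negative reading would signal demand for upside exposure instead.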

Token Talk

The DFINITY Foundation proposed burning 20% of cloud engine revenue, introducing a deflationary element tied directly to network usage for Internet Computer (ICP).

The remaining 80% of revenue would be routed to node operators, replacing fixed emissions with performance-based incentives. The idea is to make ICP’s token supply more responsive to real demand.
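
The proposed split reduces to simple arithmetic. A minimal sketch, assuming revenue is denominated in a single unit and the 20% burn applies uniformly (the proposal's actual mechanics may differ):

```python
# Sketch of the proposed ICP revenue split: burn 20% of cloud engine
# revenue, route the remaining 80% to node operators. Figures are
# illustrative; the governance proposal's actual mechanics may differ.

BURN_RATE = 0.20  # deflationary share tied directly to network usage

def split_revenue(revenue: float) -> tuple[float, float]:
    """Return (burned, paid_to_node_operators) for a revenue amount."""
    burned = revenue * BURN_RATE
    to_operators = revenue - burned
    return burned, to_operators

burned, to_ops = split_revenue(1_000_000)
print(burned, to_ops)  # 200000.0 800000.0
```

Because the burn scales with usage rather than a fixed emission schedule, higher network demand directly tightens supply, which is the deflationary link the proposal describes.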

ICP’s price moved up roughly 6% in the past 24 hours, from around $2.41 to $2.56, though it is down from a high of $2.70 seen during the period. The price appears to be influenced not just by the foundation’s proposal, but also by Nvidia’s blowout earnings.

Those earnings boosted sentiment surrounding artificial intelligence-linked assets, with Nvidia CEO Jensen Huang saying AI is only getting better.

ICP, often marketed as a decentralized alternative to traditional cloud AI infrastructure, was among several AI-linked tokens, including render (RENDER) and bittensor (TAO), to benefit from renewed investor interest in the sector.

AI Infrastructure Development Company Powering Enterprise AI Leadership

Artificial intelligence has entered a defining new phase. The competitive conversation is no longer centered solely around model innovation, data volume, or algorithmic breakthroughs. Instead, the question enterprise leaders must now answer is far more foundational:

Is our compute foundation strong enough to scale AI across the business?

In 2026, the AI race has evolved into an infrastructure race – one that demands collaboration with the right AI infrastructure development company and long-term architectural foresight. Amazon’s $12 billion investment in AI-focused data center campuses in Louisiana reflects a larger global reality: enterprise AI growth now depends on physical and architectural compute capacity.

The message for business leaders is clear: compute strategy defines market leadership.

The Shift from AI Experimentation to AI Industrialization

For years, AI initiatives lived in innovation labs – contained within pilots, proofs of concept, or isolated departmental use cases. Infrastructure requirements were minimal because workloads were temporary and limited in scale.

That reality has fundamentally changed.

AI now operates inside mission-critical systems, powering core operations, customer experience platforms, cybersecurity defenses, supply chain optimization, real-time analytics engines, and generative copilots. These are not experimental environments; they are revenue-generating, risk-sensitive business functions.

This evolution demands a formalized enterprise AI infrastructure strategy.

Deloitte’s 2026 Tech Trends analysis highlights a critical inflection point: the challenge is no longer just training models, but managing the long-term economics and scalability of inference at enterprise scale. As AI becomes operational, compute demand shifts from sporadic experimentation to continuous, production-level execution.

Enterprises must now make deliberate decisions about workload placement, hybrid scaling models, cost governance, and performance optimization.

AI is no longer a tactical deployment.
It is a strategic compute architecture commitment.

Amazon’s $12B Move: A Blueprint for AI-Ready Data Centers

Amazon’s $12 billion investment in new AI-focused data center campuses in Louisiana is more than geographic expansion – it is a signal of where global AI infrastructure economics are heading.

As reported by CNBC and covered in depth by Bloomberg, Amazon is expanding its cloud and AI capacity through purpose-built, next-generation data center campuses engineered for high-density compute workloads. These facilities are designed to support advanced AI applications that demand massive processing power, ultra-fast networking, and scalable energy infrastructure.

This investment reflects:

  • Long-term compute capacity expansion
  • AI-optimized hardware integration
  • Advanced cooling systems built for dense GPU clusters
  • Infrastructure tailored for large-scale, real-time AI inference

This is what AI-ready data center architecture for enterprises looks like in practice.

Unlike traditional facilities designed for general enterprise IT, AI-optimized data centers are engineered specifically to handle:

  • GPU-intensive model training
  • High-bandwidth, low-latency interconnects
  • Continuous inference workloads
  • Distributed real-time data processing environments

Amazon’s strategic expansion reinforces a broader industry truth: AI leadership is no longer defined solely by software innovation – it is secured through physical infrastructure leadership.

Why Compute Architecture Is Now a Strategic Weapon

Modern AI systems, particularly generative AI, real-time analytics engines, and autonomous decision systems, demand far more than virtualized servers. They require a reimagined enterprise compute architecture for AI workloads. Let’s examine why.

1. AI Is Compute-Intensive by Design

Training advanced foundation models can require thousands of GPUs operating simultaneously. Even inference, once considered lightweight, now demands specialized accelerators for high-speed response times.

Organizations that rely on outdated compute environments face:

  • Processing bottlenecks
  • Latency spikes
  • Escalating operational costs
  • Infrastructure fragility

AI doesn’t tolerate inefficiency. It exposes it.

2. Real-Time AI Changes Infrastructure Requirements

AI is increasingly embedded in live environments:

  • Fraud detection in financial services
  • Predictive maintenance in manufacturing
  • Personalized product recommendations in e-commerce
  • AI copilots in enterprise workflows

These applications require infrastructure for real-time AI, not batch-processing systems designed for overnight analytics.

Real-time AI demands:

  • Ultra-low latency networking
  • Edge integration capabilities
  • Distributed processing
  • Seamless scalability

According to TechRepublic’s enterprise AI coverage, many organizations struggle to transition AI from pilot to production because their compute, storage, and networking layers weren’t designed for production-grade workloads, creating bottlenecks that delay or derail deployments. 

3. Energy, Cooling, and Sustainability Are Now AI Variables

One often overlooked aspect of AI infrastructure is energy intensity. AI workloads consume significantly more power than traditional enterprise systems.

Modern AI-optimized facilities incorporate:

  • Advanced liquid cooling systems
  • High-density rack configurations
  • Renewable energy integration
  • Intelligent power distribution networks

Amazon’s Louisiana campuses are expected to include significant utility and infrastructure upgrades – including new electrical systems funded in partnership with Southwestern Electric Power Company and up to $400 million in water infrastructure improvements to support high-performance operations.

The AI era is also an energy era. Infrastructure planning must integrate sustainability, resilience, and cost efficiency simultaneously.

The Rise of a Formal Enterprise AI Infrastructure Strategy

What separates AI leaders from followers is not experimentation – it is architectural foresight. A strong enterprise AI infrastructure strategy includes:

  • Strategic Capacity Planning: forecasting compute requirements aligned with AI adoption roadmaps.
  • Hybrid & Multi-Cloud Alignment: balancing hyperscale cloud, on-premise systems, and edge environments.
  • Cost Governance: monitoring inference economics to prevent uncontrolled compute spend.
  • Zero-Trust Security: embedding zero-trust principles into AI workloads and data flows.
  • Workload Placement Intelligence: running the right workloads on the right platforms for performance and cost optimization.

Without a structured strategy, enterprises face:

  • Siloed AI deployments
  • Fragmented compute environments
  • Rising operational costs
  • Limited scalability

Infrastructure must move from reactive to predictive.

Why Enterprises Are Turning to Specialized Partners

Designing, deploying, and optimizing AI infrastructure is not trivial. It requires deep expertise across hardware, orchestration, networking, and AI deployment pipelines.

This is why organizations increasingly collaborate with experienced:

  • AI infrastructure development companies
  • Enterprise AI development companies

These partners help enterprises:

  • Architect scalable compute frameworks
  • Optimize GPU utilization
  • Design resilient multi-cloud ecosystems
  • Integrate AI seamlessly into enterprise environments

Infrastructure transformation is complex, but strategic partnerships reduce risk and accelerate deployment timelines.

The Economic Implications of AI Data Center Expansion

Large-scale AI infrastructure investments are signaling a structural transformation in the global economy. Compute capacity is becoming a strategic asset influencing energy markets, semiconductor supply chains, regional talent hubs, and capital allocation priorities.

Enterprises are no longer simply purchasing software licenses; they are competing for sustained access to scalable compute ecosystems. As AI adoption accelerates, infrastructure availability, performance efficiency, and cost governance increasingly determine which organizations can innovate reliably at scale.

The deeper shift is this: AI infrastructure is becoming industrial infrastructure.

Just as railroads powered manufacturing growth and broadband enabled digital commerce, AI-ready compute environments now form the backbone of competitive enterprise ecosystems. Organizations that recognize infrastructure as strategic capital, not operational overhead, will define the next decade of market leadership.

What Enterprise Leaders Must Do Now

Infrastructure decisions can no longer be deferred to IT roadmaps. They must sit at the center of enterprise AI strategy. To remain competitive in the Infrastructure Era of AI, leaders should:

1. Conduct a Compute Readiness Assessment

Identify architectural bottlenecks, GPU constraints, latency risks, and cost inefficiencies that could limit AI scale.

2. Formalize an Enterprise AI Infrastructure Strategy

Align infrastructure investment with long-term AI adoption plans, ensuring compute capacity grows alongside business ambition.

3. Redesign Enterprise Compute Architecture for AI Workloads

Move beyond retrofitting legacy systems. Build environments purpose-designed for training, inference, and hybrid scaling.

4. Build a Dedicated Infrastructure for Real-Time AI

Enable low-latency, production-grade AI systems that operate within mission-critical workflows.

5. Partner with AI Infrastructure Experts

Work with specialists who can design scalable compute environments and ensure your infrastructure supports sustainable AI growth.

The organizations that act decisively will turn infrastructure into a growth multiplier. Those who delay will find their AI ambitions constrained by architectural limits.

The New Definition of AI Leadership

AI leadership in 2026 is no longer measured by isolated model innovation, but by the strength and scalability of enterprise compute foundations. As AI shifts from experimentation to industrialization, competitive advantage depends on a well-defined enterprise AI infrastructure strategy and a purpose-built enterprise compute architecture for AI workloads. Organizations that invest in AI-ready data center architecture for enterprises and build infrastructure for real-time AI position themselves to scale efficiently, control costs, and sustain performance.

In this new era, infrastructure is not operational support – it is strategic capital. Market leaders will be those who align compute capacity with long-term business vision. Aniter, an enterprise AI development company, helps organizations design, deploy, and optimize scalable AI systems that deliver resilient, production-grade performance and measurable business impact.

Axiom Crypto Exposed: ZachXBT Alleges $400k Insider Trading

ZachXBT just uncovered what looks like a coordinated insider trading ring at Axiom crypto. According to his findings, senior employees used internal data tools to front-run user trades for more than 10 months, allegedly pocketing over $400,000 in the process. The method involved privileged back-end access that allowed staff to track and mirror high-value wallets before the broader market reacted.

This points to deeper governance failures at a platform generating roughly $390 million in annual revenue. Non-technical staff reportedly had unrestricted access to live user identifiers, exposing a serious breakdown in internal controls.

Key Takeaways

  • The Actor: Senior business development staff with unrestricted admin access to live user databases.
  • The Method: Cross-referencing internal UIDs with on-chain data to identify and front-run KOL wallets.
  • The Failure: A YC-backed unicorn generating $390M revenue operating with zero role-based access controls.

How the Insider Trading Scheme Operated Inside Axiom Crypto

The scheme was simple and effective. Investigators say employees used internal admin dashboards meant for support and compliance to pull private user data. By linking User IDs to on-chain wallets, they could identify high-profile traders and institutions behind supposedly anonymous addresses.

From there, the play was straightforward. Monitor activity, then trade ahead of it. Buy before a large wallet pushes the price up. Sell before a whale exits. It was front-running their own users.

The activity reportedly lasted at least 10 months. The troubling part is that business development staff had the same level of system access as technical security teams. That breakdown in internal controls created the information asymmetry that made the scheme possible.

$390M Revenue vs. Zero Access Controls: What Is the Axiom Team’s Response?

Axiom generated $390 million in revenue and scaled rapidly, but the investigation shows its internal controls lagged far behind its growth.

The platform reportedly lacked basic role-based access controls. Business development staff had broad visibility into user identifiers and trading data, creating a “God mode” environment. Proper least-privilege systems and audit logs likely would have flagged the activity early. Instead, it allegedly went unnoticed for nearly a year.
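
The missing control is straightforward to describe: a least-privilege check that gates reads of live user identifiers by role and audit-logs every attempt. A minimal sketch follows; the role names and permission model are hypothetical, not Axiom's actual schema.

```python
# Hypothetical least-privilege check of the kind the report says Axiom
# lacked: business development staff cannot read live user identifiers,
# and every access attempt is audit-logged for later review.
# Roles and permissions here are illustrative, not Axiom's real schema.

import logging

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("audit")

ROLE_PERMISSIONS = {
    "support": {"read_tickets"},
    "compliance": {"read_tickets", "read_user_ids"},
    "business_dev": {"read_metrics"},  # no access to live user IDs
}

def can_access(role: str, permission: str) -> bool:
    """Allow only explicitly granted permissions; log every attempt."""
    allowed = permission in ROLE_PERMISSIONS.get(role, set())
    audit_log.info("role=%s permission=%s allowed=%s",
                   role, permission, allowed)
    return allowed

assert not can_access("business_dev", "read_user_ids")  # denied and logged
assert can_access("compliance", "read_user_ids")        # granted and logged
```

With a deny-by-default table like this plus an immutable audit trail, repeated lookups of high-value wallets by a non-technical role would surface quickly instead of going unnoticed for months.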

The case highlights a common startup flaw: growth and volume are prioritized, while governance is deferred. That works at a small scale. At billions in volume, it becomes a liability.

Axiom has confirmed a full internal audit. But the reputational damage is significant, and regulators may view the alleged $400,000 in insider profits as potential fraud.

The post Axiom Crypto Exposed: ZachXBT Alleges $400k Insider Trading appeared first on Cryptonews.

Pantera, Franklin Join Sentient Arena AI Agent Testing Initiative

Pantera Capital and Franklin Templeton’s digital assets units have joined the first cohort of Arena, a new testing environment from open-source AI lab Sentient that is designed to evaluate how AI agents perform in enterprise-style workflows.

In a Friday announcement shared with Cointelegraph, Sentient positioned Arena as a production-style benchmarking platform rather than a static model test. Instead of scoring agents on fixed datasets alone, it runs them through standardized tasks modeled on enterprise conditions, including long documents, incomplete information and conflicting sources. 

“In this initial phase, participation refers to supporting the Arena program and developer cohort,” Oleg Golev, product lead at Sentient Labs, told Cointelegraph.

He said partners are helping shape what “production-ready reasoning” looks like for document-heavy tasks such as analysis, compliance and operations. The companies are not announcing capital commitments tied to the initiative. 

Related: Jack Dorsey’s Block to cut 4,000 jobs in AI-driven restructuring

The launch comes as enterprises accelerate the deployment of AI agents into research and operational workflows, even as governance frameworks lag. 

According to the Celonis 2026 Process Optimization Report, published Feb. 4, 85% of surveyed senior business leaders aim to become “agentic enterprises” within three years, while only 19% currently use multi-agent systems.

The 2026 Process Optimization Report. Source: Celonis

Production-style evaluation, not static scoring

Golev described Arena as a shared platform where developers submit AI agents to standardized tasks and compare results under consistent testing conditions.

The platform tracks failure categories such as hallucination, missing evidence, incorrect citations and reasoning gaps, allowing developers to diagnose recurring issues.
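
Those failure categories lend themselves to simple aggregation. Below is a minimal sketch of tallying them across an agent's task runs; the data shapes and field names are hypothetical illustrations, not Sentient's actual API.

```python
# Sketch of tallying the failure categories Arena is said to track.
# The category names come from the article; the run-result structure
# is hypothetical, not Sentient's actual API.

from collections import Counter

FAILURE_CATEGORIES = {"hallucination", "missing_evidence",
                      "incorrect_citation", "reasoning_gap"}

def tally_failures(run_results: list[dict]) -> Counter:
    """Count recognized failure categories across an agent's task runs."""
    counts = Counter()
    for result in run_results:
        for failure in result.get("failures", []):
            if failure in FAILURE_CATEGORIES:
                counts[failure] += 1
    return counts

runs = [
    {"task": "doc-analysis", "failures": ["hallucination"]},
    {"task": "compliance", "failures": ["missing_evidence", "hallucination"]},
    {"task": "ops", "failures": []},
]
print(tally_failures(runs))
# Counter({'hallucination': 2, 'missing_evidence': 1})
```

Aggregating per category like this is what makes recurring issues diagnosable: a developer sees at a glance whether an agent mostly hallucinates or mostly mis-cites, rather than reading each run in isolation.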

Arena plans to publish comparative performance metrics through a public leaderboard and release postmortems summarizing common failure modes and fixes.

Infrastructure partners, including OpenRouter and Fireworks, are supplying inference compute for the initial cohort, while other partners support tooling and workshops.

Related: High-yield bond surge signals rising risk, demand in BTC mining, AI infrastructure

Governance layer amid rising AI autonomy

The initiative emerges as financial and crypto companies experiment with giving AI systems greater economic autonomy.

On Wednesday, MoonPay launched infrastructure enabling AI agents to create wallets and execute stablecoin transactions.

On Thursday, Stripe executives warned that blockchains may need significant scaling improvements if AI-driven commerce expands.