Key initiatives aimed at quantum-proofing the world’s largest blockchain

Quantum computers capable of breaking the Bitcoin blockchain do not exist today. Developers, however, are already weighing a wave of upgrades to defend against the potential threat, and with good reason: the danger is no longer purely hypothetical.

This week, Google published research suggesting that a sufficiently powerful quantum computer could crack Bitcoin’s core cryptography in under nine minutes — one minute faster than the average Bitcoin block settlement time. Some analysts believe such a threat could become a reality by 2029.

The stakes are high: About 6.5 million bitcoin (BTC), worth hundreds of billions of dollars, sit in addresses a quantum computer could directly target. Some of these coins belong to Bitcoin’s pseudonymous creator, Satoshi Nakamoto. Beyond the direct losses, a compromise would damage Bitcoin’s core tenets: “trust the code” and “sound money.”

Here’s what the threat looks like, along with proposals under consideration to mitigate it.


Two ways a quantum machine could attack Bitcoin

Let’s first understand the vulnerability before discussing the proposals.

Bitcoin’s security is built on a one-way mathematical relationship. When you create a wallet, a private key (a secret number) is generated, from which a public key is derived.

Spending bitcoin tokens requires proving ownership of a private key, not by revealing it, but by using it to generate a cryptographic signature that the network can verify.

This system holds because modern computers would take billions of years to break elliptic curve cryptography — specifically the Elliptic Curve Digital Signature Algorithm (ECDSA) — and reverse-engineer the private key from the public key. The blockchain is therefore said to be computationally infeasible to compromise.
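The one-way relationship can be sketched directly: deriving a public key is a fast multiplication of a point on the secp256k1 curve, while reversing it is the elliptic-curve discrete-log problem. A minimal, illustrative sketch (real curve parameters, toy private key, not for production use):

```python
# Minimal sketch of Bitcoin's one-way key derivation on secp256k1.
# Forward direction (private -> public) is fast; reversing it is the
# elliptic-curve discrete-log problem classical computers cannot solve.

P = 2**256 - 2**32 - 977  # secp256k1 field prime
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def point_add(a, b):
    """Add two points on y^2 = x^3 + 7 over the field of prime P."""
    if a is None:
        return b
    if b is None:
        return a
    (x1, y1), (x2, y2) = a, b
    if x1 == x2 and (y1 + y2) % P == 0:
        return None  # point at infinity
    if a == b:
        m = (3 * x1 * x1) * pow(2 * y1, -1, P) % P  # tangent slope
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, P) % P     # chord slope
    x3 = (m * m - x1 - x2) % P
    return (x3, (m * (x1 - x3) - y1) % P)

def scalar_mult(k, point=G):
    """Public key = k * G, via double-and-add. Fast forward; finding
    k from k*G is believed infeasible for classical computers."""
    result = None
    while k:
        if k & 1:
            result = point_add(result, point)
        point = point_add(point, point)
        k >>= 1
    return result

private_key = 0xC0FFEE  # toy secret for illustration
public_key = scalar_mult(private_key)
x, y = public_key
assert (y * y - (x ** 3 + 7)) % P == 0  # the point lies on the curve
```

Shor’s algorithm on a large quantum computer would invert `scalar_mult` efficiently, which is exactly the break described below.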


But a future quantum computer could turn this one-way street into a two-way street, deriving your private key from your public key and draining your coins.

The public key is exposed in two ways: from coins sitting idle onchain with keys already visible (the long-exposure attack), and from transactions in motion, waiting in the memory pool (the short-exposure attack).

Pay-to-Public-Key (P2PK) addresses (used by Satoshi and early miners) and Taproot (P2TR), the current address format activated in 2021, are vulnerable to the long-exposure attack. Coins in these addresses do not need to move to reveal their public keys; the exposure has already happened and is readable by anyone on Earth, including a future quantum attacker. Roughly 1.7 million BTC sits in old P2PK addresses — including Satoshi’s coins.

The short exposure is tied to the mempool — the waiting room of unconfirmed transactions. While transactions sit there awaiting inclusion in a block, your public key and signature are visible to the entire network.


A quantum computer could access that data, but it would have only a brief window — before the transaction is confirmed and buried under additional blocks — to derive the corresponding private key and act on it.

Initiatives

BIP 360: Removing the public key

As noted earlier, every new Bitcoin address created using Taproot today permanently exposes a public key onchain, giving a future quantum computer a target that never goes away.

The Bitcoin Improvement Proposal (BIP) 360 stops embedding the public key onchain, where it is permanently visible to everyone, by introducing a new output type called Pay-to-Merkle-Root (P2MR).

Recall that a quantum computer studies the public key, reverse-engineers the private key and forges a working signature. If the public key is removed, the attack has nothing to work from. Meanwhile, everything else, including Lightning payments, multi-signature setups and other Bitcoin features, remains the same.
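The core idea of committing to a Merkle root can be sketched in a few lines. This is a conceptual illustration only, assuming hypothetical spending conditions; the actual P2MR construction in BIP 360 differs in its details:

```python
# Conceptual sketch of a Merkle-root commitment in the spirit of P2MR:
# only a 32-byte root goes onchain, so no public key is exposed.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Fold a list of leaves up to a single 32-byte root."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate last hash on odd count
        level = [h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# Hypothetical spending conditions for illustration; a quantum attacker
# scanning the chain sees only the root, never a public key.
conditions = [b"pubkey-A-spend-path", b"timelock-recovery-path"]
root = merkle_root(conditions)
assert len(root) == 32
```

Because a hash reveals nothing about its preimage, an attacker scanning such outputs has no public key to run Shor’s algorithm against until the owner chooses to spend.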


However, if implemented, this proposal protects only new coins going forward. The 1.7 million BTC already sitting in old exposed addresses is a separate problem, addressed by other proposals below.

SPHINCS+ / SLH-DSA: Hash-based post-quantum signatures

SPHINCS+ is a post-quantum signature scheme built on hash functions, avoiding the quantum risks facing elliptic curve cryptography used by Bitcoin. While Shor’s algorithm threatens ECDSA, hash-based designs like SPHINCS+ are not seen as similarly vulnerable.

The scheme was standardized by the National Institute of Standards and Technology (NIST) in August 2024 as FIPS 205 (SLH-DSA) after years of public review.

The tradeoff for security is size. While current bitcoin signatures are 64 bytes, SLH-DSA signatures are 8 kilobytes (KB) or more. As such, adopting SLH-DSA would sharply increase block-space demand and raise transaction fees.
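The size tradeoff is easiest to see in the simplest hash-based scheme, a Lamport one-time signature. This is a teaching sketch, not SPHINCS+ itself, but it shows why signatures built only from hashes grow large:

```python
# Lamport one-time signature: the simplest hash-based scheme, shown to
# illustrate why hash-based signatures are big. NOT SPHINCS+/SLH-DSA
# itself, and each key pair must only ever sign one message.
import hashlib, os

def keygen():
    # 256 pairs of random secrets; the public key is their hashes.
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(256)]
    pk = [(hashlib.sha256(a).digest(), hashlib.sha256(b).digest())
          for a, b in sk]
    return sk, pk

def bits_of(message: bytes):
    digest = hashlib.sha256(message).digest()
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(message: bytes, sk):
    # Reveal one secret per message bit.
    return [sk[i][bit] for i, bit in enumerate(bits_of(message))]

def verify(message: bytes, sig, pk) -> bool:
    return all(hashlib.sha256(sig[i]).digest() == pk[i][bit]
               for i, bit in enumerate(bits_of(message)))

sk, pk = keygen()
sig = sign(b"send 1 BTC", sk)
assert verify(b"send 1 BTC", sig, pk)
assert not verify(b"send 2 BTC", sig, pk)  # forgery fails

# The signature alone is 256 secrets * 32 bytes = 8 KB,
# versus 64 bytes for today's elliptic-curve signatures.
assert sum(len(s) for s in sig) == 8192
```

SPHINCS+ layers many such one-time structures under a Merkle tree to allow reuse, which is why its signatures land in the same multi-kilobyte range the article cites.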


As a result, proposals such as SHRIMPS (another hash-based post-quantum signature scheme) and SHRINCS have already been introduced to reduce signature sizes without sacrificing post-quantum security. Both build on SPHINCS+ while aiming to retain its security guarantees in a more practical, space-efficient form suitable for blockchain use.

Tadge Dryja’s commit/reveal scheme: An emergency brake for the mempool

This proposal, a soft fork suggested by Lightning Network co-creator Tadge Dryja, aims to protect transactions in the mempool from a future quantum attacker. It does so by separating transaction execution into two phases: Commit and Reveal.

Imagine informing a counterparty that you will email them, then actually sending an email. The former is the commit phase, and the latter is the reveal.

On the blockchain, this means you first publish a sealed fingerprint of your intention — just a hash, which reveals nothing about the transaction. The blockchain timestamps that fingerprint permanently. Later, when you broadcast the actual transaction, your public key becomes visible — and yes, a quantum computer watching the network could derive your private key from it and forge a competing transaction to steal your funds.


But that forged transaction is immediately rejected. The network checks: does this spend have a prior commitment registered on-chain? Yours does. The attacker’s does not — they created it moments ago. Your pre-registered fingerprint is your alibi.
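The two-phase flow above can be sketched with a simple first-commitment-wins rule. This is a conceptual illustration under assumed semantics; the actual soft-fork rules in Dryja’s proposal differ in detail:

```python
# Conceptual sketch of a commit/reveal flow: phase 1 registers only a
# hash; phase 2 is valid only if that hash was committed in an earlier
# block. Simplified model, not the actual proposed consensus rules.
import hashlib

chain_commitments = {}  # commitment hash -> block height first seen

def commit_phase(tx: bytes, height: int):
    """Phase 1: publish a sealed fingerprint; reveals nothing about tx."""
    c = hashlib.sha256(tx).digest()
    chain_commitments.setdefault(c, height)  # first commitment wins

def reveal_phase(tx: bytes, current_height: int) -> bool:
    """Phase 2: accept only if the commitment predates this block."""
    c = hashlib.sha256(tx).digest()
    return chain_commitments.get(c, current_height) < current_height

honest_tx = b"alice pays bob 1 BTC, signed"
commit_phase(honest_tx, height=100)

# Later, Alice reveals. A quantum attacker who just derived her key
# forges a competing spend, but it has no prior commitment on record.
forged_tx = b"attacker pays self 1 BTC, forged signature"
assert reveal_phase(honest_tx, current_height=105)      # accepted
assert not reveal_phase(forged_tx, current_height=105)  # rejected
```

The attacker cannot retroactively insert a commitment into an already-mined block, so the timestamped hash acts as the “alibi” described above.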

The issue, however, is the increased cost of splitting each transaction into two phases. The scheme is therefore described as an interim bridge, practical to deploy while the community works on fuller quantum defenses.

Hourglass V2: Slowing the spending of old coins

Proposed by developer Hunter Beast, Hourglass V2 targets the quantum vulnerability tied to roughly 1.7 million BTC held in older, already-exposed addresses.

The proposal accepts that these coins could be stolen in a future quantum attack and seeks to slow the bleeding by limiting spends of these coins to one bitcoin per block, avoiding a catastrophic overnight mass liquidation that could crater the market.
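A back-of-the-envelope calculation shows how much the throttle stretches out a worst-case drain. The cap and coin figures come from the article; the block cadence is Bitcoin’s usual ten-minute average:

```python
# Toy arithmetic for the Hourglass V2 throttle, assuming a cap of one
# vulnerable coin spent per block (figures illustrative, from the text).
VULNERABLE_BTC = 1_700_000
CAP_PER_BLOCK = 1
BLOCKS_PER_DAY = 144  # ~10-minute blocks, 24 * 6

days_to_drain = VULNERABLE_BTC / (CAP_PER_BLOCK * BLOCKS_PER_DAY)
print(f"Draining all exposed coins would take ~{days_to_drain / 365:.1f} years")
# -> Draining all exposed coins would take ~32.3 years
```

Even a fully capable attacker would thus face decades of forced patience, which is the “limit the pace of withdrawals” logic of the bank-run analogy below.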


The analogy is a bank run: you cannot stop people from withdrawing, but you can limit the pace of withdrawals to prevent the system from collapsing overnight. The proposal is controversial because even this limited restriction is seen by some in the Bitcoin community as a violation of the principle that no external party can ever interfere with your right to spend your coins.

Conclusion

These proposals are not yet activated, and Bitcoin’s decentralized governance, spanning developers, miners and node operators, means any upgrade is likely to take time to materialize.

Still, the steady flow of proposals predating this week’s Google report suggests the issue has long been on developers’ radar, which may help temper market concerns.


Crypto policy stakes rise as Anthropic launches PAC amid AI policy rift


Anthropic, the AI safety-focused lab behind several widely used language models, has moved to formalize its political engagement by launching an employee-funded political action committee named AnthroPAC. A filing with the Federal Election Commission shows the organization as a connected entity to Anthropic, organized as a separate segregated fund and aimed at receiving voluntary contributions from employees. The filing outlines the PAC’s intent to participate in federal elections while remaining aligned with the company’s stated interest in AI policy and safety considerations.

Under U.S. campaign finance rules, contributions from a political action committee to a federal candidate are capped at $5,000 per election, with disclosures required through public filings. AnthroPAC’s organizers say the fund is designed to support candidates from both major parties. However, observers and industry watchers are already raising questions about how closely the effort will stay within bipartisan lines, given broader debates over AI regulation, safety standards, and the strategic direction of AI policy in Washington.

The AnthroPAC move lands as Anthropic navigates a fraught relationship with the U.S. government over how its technology should be employed. Separately, the Defense Department in February designated Anthropic as a supply chain risk—an action tied to the company’s stance against the use of its AI in fully autonomous weapons and mass surveillance. Anthropic has challenged that designation in court, contending it constitutes retaliation for a protected position. A federal judge in California has temporarily blocked the measure and paused further restrictions while the dispute unfolds.

Beyond governance and defense concerns, Anthropic has already been active politically this cycle. Notably, the company contributed $20 million to Public First Action, a political committee focused on AI safety and related policy advocacy, underscoring the firm’s broader strategy to influence AI-related regulation and public safety standards.

Advertisement

Meanwhile, Anthropic’s broader ecosystem is drawing capital and infrastructure support that could accelerate its technology roadmap. In a related development, Google is preparing to back a multibillion-dollar data-center project in Texas that would be leased to Anthropic via Nexus Data Centers. The project’s initial phase could exceed $5 billion, with Google expected to provide construction loans and be joined by banks arranging additional financing. The arrangement highlights the growing demand for AI infrastructure capable of supporting expansion in model training, inference, and data storage.

Key takeaways

  • Anthropic formed AnthroPAC, an employee-funded political action committee registered as a separate segregated fund under the company’s umbrella.
  • The PAC is intended to support candidates from both parties, with strict contribution limits and mandatory disclosures under U.S. election law.
  • The move occurs amid fraught relations with the Pentagon over AI use, including a safety-focused designation that Anthropic is challenging in court.
  • Anthropic has a track record of political giving in this cycle, including a $20 million contribution to Public First Action focused on AI safety.
  • Google’s backing of a Texas data-center project for Anthropic signals strong infrastructure demand and potential financing mechanisms that could accelerate AI deployment.

Anthropic’s political engagement and the policy context

The formation of AnthroPAC marks a notable step in how AI firms engage with lawmakers and regulators. By coordinating staff contributions through a dedicated PAC, Anthropic signals a structured approach to influencing elections and policy debates that shape the development and governance of artificial intelligence. The FEC filing describes AnthroPAC as a “connected organization” operating under a separate segregated fund, aligning with typical industry practices for corporate-employee political activity. While the stated aim is bipartisanship, the broader AI policy environment in the United States has become highly polarized, with differing views on liability, safety mandates, data privacy, and government access to AI systems.

Investors and builders watching the space can interpret this as part of a broader trend: major AI developers increasingly engage directly in policy conversations, seeking to frame the regulatory environment in ways that balance innovation with oversight. The implications extend beyond ethics and governance; policy direction can materially affect the regulatory runway for product development, procurement, and collaboration with public sector actors. The presence of a formal PAC also raises questions about how corporate political contributions could influence which AI-safety and governance proposals gain traction on Capitol Hill and in regulatory agencies.

Defense frictions and legal maneuvering

The tension between Anthropic and the Department of Defense centers on how the company’s models should be deployed in sensitive contexts. The Pentagon’s decision to label Anthropic as a supply chain risk stemmed from the company’s public stance against fully autonomous weapons and broad surveillance use. Anthropic has challenged that designation in court, arguing that it amounts to retaliation for a viewpoint it regards as legitimate and protected. A federal judge in California issued a temporary ruling to pause the measure and related restrictions while the case proceeds, illustrating the tension between national-security risk assessments and corporate positions on how AI technology may be used.

For policymakers, the case underscores a core policy question: where should the line be drawn between compelling safety and preserving innovation? If courts narrow how procurement risk designations can be wielded, it could affect how similar technology providers are treated as the government expands its AI procurement and testing programs. Conversely, if the government can justify risk designations on safety grounds, it could strengthen leverage for tighter controls on how AI systems are used in defense contexts.

Advertisement

Political giving and AI-safety advocacy

Anthropic’s political activity isn’t limited to its new PAC. Earlier in the cycle, the company contributed a sizable $20 million to Public First Action, a political arm focused on AI safety and public-interest considerations tied to the development and governance of AI technologies. This level of funding signals a broader strategy to influence public discourse and regulatory design around AI, complementing the PAC’s electoral role with policy advocacy and education efforts. Observers are watching how such funding patterns translate into concrete policy outcomes, particularly in an environment where legislators are weighing landmark AI bills and safety standards that could shape model development, data usage, and transparency requirements.

Infrastructure bets amid AI acceleration

Infrastructure matters are increasingly central to AI strategy, and Google’s involvement in a Texas data-center project for Anthropic is a vivid illustration. The Nexus Data Centers-leased facility, if realized as outlined, could become a cornerstone asset to support large-scale model training and deployment. The project’s initial phase exceeding $5 billion underscores the capital intensity of modern AI initiatives and the financial orchestration that underpins them. Google’s expected role in providing construction loans, alongside competitive financing arrangements from banks, points to the consolidation of AI infrastructure finance as a distinct sub-market within the tech sector. For Anthropic and similar firms, such backing could shorten timelines to deploy more capable models and scale services that demand robust, energy-efficient, and highly reliable data-center capacity.

As policy debates progress, industry participants and investors should monitor both political and practical developments: how much traction new AI safety proposals gain in Congress, how procurement rules evolve in defense programs, and how infrastructure financing evolves to accommodate the next wave of AI workloads. Each of these strands will influence not only which AI products reach market first, but also how quickly the industry can translate research advances into real-world use cases across enterprise, healthcare, and public services.

Readers should stay attentive to any updates on Anthropic’s PAC activity and the Pentagon case outcomes, as both arenas will shape the company’s public-facing strategy and its broader partnerships. The balance between safety-driven governance and aggressive innovation remains a live tension set to define the next phase of AI adoption and investment.

Advertisement



Crypto Token Glut Is Diluting Value And Breaking Investor Returns


The rapid growth in the number of crypto tokens is outpacing the value they generate, creating an “existential” problem for the industry, according to Michael Ippolito, co-founder of Blockworks.

In a series of posts on X, Ippolito noted that while total crypto market capitalization remains relatively strong, the average value per token tells a different story. “The average coin is only slightly higher than where it was in 2020 (!) and down ~50% since 2021,” he wrote.

Median token returns have also deteriorated sharply. Most tokens are down roughly 80% from their highs, suggesting that gains have been concentrated in a narrow set of large-cap assets, while the broader market underperforms, Ippolito claimed.

Median token returns drop. Source: Michael Ippolito

He argued that the imbalance appears to be driven by a rapid expansion in token supply. “We created a TON of new assets and STILL total market cap is flat,” he wrote, adding that this dynamic effectively dilutes value across a growing pool of tokens.



Token prices break from fundamentals

Ippolito also claimed that the relationship between fundamentals and price has weakened. In 2021, token prices closely tracked onchain revenue. Recent data shows that despite a resurgence in protocol revenues, prices have not followed, pointing to a disconnect between usage and investor returns.

He argued that this signals a loss of confidence in tokens as vehicles for capturing value. “The token problem is existential for this industry,” he said, adding that without stronger alignment between fundamentals and price, the sector risks losing its core appeal.

Fundamentals vs price. Source: Michael Ippolito

In a post on X, Arthur Cheong, founder and CEO of DeFiance Capital, said he agrees “with the urgency to fix the current situation of tokens in the crypto industry,” warning that if the market continues to concentrate around a small set of assets like Bitcoin and Ether, the broader crypto ecosystem risks losing relevance.


Capital shifts from tokens to stocks

Investor demand is increasingly moving away from newly launched tokens toward publicly listed crypto firms, as most token launches fail to hold value, February research from DWF Labs found. The report revealed that over 80% of projects trade below their token generation event (TGE) price, with typical losses of 50% to 70% within about three months.


The pattern appears structural rather than cyclical. According to DWF’s Andrei Grachev, most tokens peak within the first month before declining under sustained selling pressure. Factors such as airdrops and early investor unlocks add to the supply overhang, reinforcing downward price trends even for projects with active products or protocols.
