Why AI Needs Sovereign Data Integrity

AI agents dominated ETHDenver 2026, from autonomous finance to on-chain robotics. But as enthusiasm around “agentic economies” builds, a harder question is emerging: can institutions prove what their AI systems were trained on?

Among the startups targeting that problem is Perle Labs, which argues that AI systems require a verifiable chain of custody for their training data, particularly in regulated and high-risk environments. With a focus on building an auditable, credentialed data infrastructure for institutions, Perle has raised $17.5 million to date, with its latest funding round led by Framework Ventures. Other investors include CoinFund, Protagonist, HashKey, and Peer VC. The company reports more than one million annotators contributing over a billion scored data points on its platform.

BeInCrypto spoke with Ahmed Rashad, CEO of Perle Labs, on the sidelines of ETHDenver 2026. Rashad previously held an operational leadership role at Scale AI during its hypergrowth phase. In the conversation, he discussed data provenance, model collapse, adversarial risks and why he believes sovereign intelligence will become a prerequisite for deploying AI in critical systems.

BeInCrypto: You describe Perle Labs as the “sovereign intelligence layer for AI.” For readers who are not inside the data infrastructure debate, what does that actually mean in practical terms?

Ahmed Rashad: “The word sovereign is deliberate, and it carries a few layers.

The most literal meaning is control. If you’re a government, a hospital, a defense contractor, or a large enterprise deploying AI in a high-stakes environment, you need to own the intelligence behind that system, not outsource it to a black box you can’t inspect or audit. Sovereign means you know what your AI was trained on, who validated it, and you can prove it. Most of the industry today cannot say that.

The second meaning is independence: acting without outside interference. This is exactly what institutions like the DoD or a large enterprise require when they’re deploying AI in sensitive environments. You cannot have your critical AI infrastructure dependent on data pipelines you don’t control, can’t verify, and can’t defend against tampering. That’s not a theoretical risk. The NSA and CISA have both issued operational guidance on data supply chain vulnerabilities as a national security issue.

The third meaning is accountability. When AI moves from generating content into making decisions (medical, financial, military), someone has to be able to answer: where did the intelligence come from? Who verified it? Is that record permanent? On Perle, our goal is to have every contribution from every expert annotator recorded on-chain. It can’t be rewritten. That immutability is what makes the word sovereign accurate rather than just aspirational.

In practical terms, we are building a verification and credentialing layer. If a hospital deploys an AI diagnostic system, it should be able to trace each data point in the training set back to a credentialed professional who validated it. That is sovereign intelligence. That’s what we mean.” 
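Rashad’s description of tracing each training data point back to a credentialed validator amounts to a tamper-evident, append-only log. Perle’s actual on-chain format is not described in the interview, so the following is only an illustrative sketch; the function names and record fields are hypothetical. Each record hashes the annotation details together with the hash of the previous record, so rewriting any historical entry invalidates everything after it:

```python
import hashlib
import json

def record_annotation(log, data_point, annotator_id, credential):
    """Append a tamper-evident provenance record to the log.

    Each record commits to the annotated content, the annotator's
    identity and credential, and the previous record's hash.
    """
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {
        "data_sha256": hashlib.sha256(data_point.encode()).hexdigest(),
        "annotator_id": annotator_id,
        "credential": credential,
        "prev_hash": prev_hash,
    }
    # Hash the canonical (sorted-key) JSON form so verification is deterministic.
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    log.append(body)
    return body

def verify_chain(log):
    """Replay the log and confirm no record was altered or reordered."""
    prev = "0" * 64
    for rec in log:
        body = {k: v for k, v in rec.items() if k != "hash"}
        if body["prev_hash"] != prev:
            return False
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if expected != rec["hash"]:
            return False
        prev = rec["hash"]
    return True
```

An auditor, say at a hospital, could replay `verify_chain` over the full log to confirm nothing was edited after the fact; a production system would presumably add on-chain anchoring, digital signatures, and credential verification beyond this sketch.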

BeInCrypto: You were part of Scale AI during its hypergrowth phase, including major defense contracts and the Meta investment. What did that experience teach you about where traditional AI data pipelines break?

Ahmed Rashad: “Scale was an incredible company. I was there during the period when it went from $90M to what is now $29B. All of that was taking shape, and I had a front-row seat to where the cracks form.

The fundamental problem is that data quality and scale pull in opposite directions. When you’re growing 100x, the pressure is always to move fast: more data, faster annotation, lower cost per label. And the casualties are precision and accountability. You end up with opaque pipelines: you know roughly what went in, you have some quality metrics on what came out, but the middle is a black box. Who validated this? Were they actually qualified? Was the annotation consistent? Those questions become almost impossible to answer at scale with traditional models.

The second thing I learned is that the human element is almost always treated as a cost to be minimized rather than a capability to be developed. The transactional model, paying per task and optimizing for throughput, just degrades quality over time. It burns through the best contributors. The people who can give you genuinely high-quality, expert-level annotations are not the same people who will sit through a gamified micro-task system for pennies. You have to build differently if you want that caliber of input.

That realization is what Perle is built on. The data problem isn’t solved by throwing more labor at it. It’s solved by treating contributors as professionals, building verifiable credentialing into the system, and making the entire process auditable end to end.”

BeInCrypto: You’ve reached a million annotators and scored over a billion data points. Most data labeling platforms rely on anonymous crowd labor. What’s structurally different about your reputation model?

Ahmed Rashad: “The core difference is that on Perle, your work history is yours, and it’s permanent. When you complete a task, the record of that contribution, the quality tier it hit, how it compared to expert consensus, is written on-chain. It can’t be edited, can’t be deleted, can’t be reassigned. Over time, that becomes a professional credential that compounds.

Compare that to anonymous crowd labor, where a person is essentially fungible. They have no stake in quality because their reputation doesn’t exist, and each task is disconnected from the last. The incentive structure produces exactly what you’d expect: minimum viable effort.

Our model inverts that. Contributors build verifiable track records. The platform recognizes domain expertise. For example, a radiologist who consistently produces high-quality medical image annotations builds a profile that reflects that. That reputation drives access to higher-value tasks, better compensation, and more meaningful work. It’s a flywheel: quality compounds because the incentives reward it.

We’ve crossed a billion points scored across our annotator network. That’s not just a volume number, it’s a billion traceable, attributed data contributions from verified humans. That’s the foundation of trustworthy AI training data, and it’s structurally impossible to replicate with anonymous crowd labor.”

BeInCrypto: Model collapse gets discussed a lot in research circles but rarely makes it into mainstream AI conversations. Why do you think that is, and should more people be worried?

Ahmed Rashad: “It doesn’t make mainstream conversations because it’s a slow-moving crisis, not a dramatic one. Model collapse, where AI systems trained increasingly on AI-generated data start to degrade, lose nuance, and compress toward the mean, doesn’t produce a headline event. It produces a gradual erosion of quality that’s easy to miss until it’s severe.

The mechanism is straightforward: the internet is filling up with AI-generated content. Models trained on that content are learning from their own outputs rather than genuine human knowledge and experience. Each generation of training amplifies the distortions of the last. It’s a feedback loop with no natural correction.

Should more people be worried? Yes, particularly in high-stakes domains. When model collapse affects a content recommendation algorithm, you get worse recommendations. When it affects a medical diagnostic model, a legal reasoning system, or a defense intelligence tool, the consequences are categorically different. The margin for degradation disappears.

This is why the human-verified data layer isn’t optional as AI moves into critical infrastructure. You need a continuous source of genuine, diverse human intelligence to train against, not AI outputs laundered through another model. We have over a million annotators representing genuine domain expertise across dozens of fields. That diversity is the antidote to model collapse. You can’t fix it with synthetic data or more compute.”
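The feedback loop Rashad describes can be illustrated with a toy simulation, not tied to any real model: fit a Gaussian to a small dataset, generate a fresh synthetic dataset from the fit, refit on that output, and repeat. Because each generation learns only from the previous generation’s samples, the spread of the data collapses over time. A minimal sketch:

```python
import random
import statistics

def fit_and_resample(data, n_samples):
    """Fit a Gaussian to the data, then emit a synthetic dataset drawn
    from the fitted model; the next generation trains only on this."""
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    return [random.gauss(mu, sigma) for _ in range(n_samples)]

def simulate_collapse(generations=400, n_samples=5, seed=0):
    """Return (initial_spread, final_spread) after repeatedly training
    each generation on the previous generation's synthetic output."""
    random.seed(seed)
    data = [random.gauss(0.0, 1.0) for _ in range(n_samples)]
    initial_spread = statistics.stdev(data)
    for _ in range(generations):
        data = fit_and_resample(data, n_samples)
    return initial_spread, statistics.stdev(data)
```

The small sample size exaggerates the effect for illustration, but the direction is the point: with no injection of genuine outside data, the estimated distribution narrows generation after generation, which is the “compression toward the mean” described above.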

BeInCrypto: When AI expands from digital environments into physical systems, what fundamentally changes about risk, responsibility, and the standards applied to its development?

Ahmed Rashad: “The irreversibility changes. That’s the core of it. A language model that hallucinates produces a wrong answer. You can correct it, flag it, move on. A robotic surgical system operating on a wrong inference, an autonomous vehicle making a bad classification, a drone acting on a misidentified target, those errors don’t have undo buttons. The cost of failure shifts from embarrassing to catastrophic.

That changes everything about what standards should apply. In digital environments, AI development has largely been allowed to move fast and self-correct. In physical systems, that model is untenable. You need the training data behind these systems to be verified before deployment, not audited after an incident.

It also changes accountability. In a digital context, it’s relatively easy to diffuse responsibility: was it the model? The data? The deployment? In physical systems, particularly where humans are harmed, regulators and courts will demand clear answers. Who trained this? On what data? Who validated that data and under what standards? The companies and governments that can answer those questions will be the ones allowed to operate. The ones that can’t will face liability they didn’t anticipate.

We built Perle for exactly this transition. Human-verified, expert-sourced, on-chain auditable. When AI starts operating in warehouses, operating rooms, and on the battlefield, the intelligence layer underneath it needs to meet a different standard. That standard is what we’re building toward.”

BeInCrypto: How real is the threat of data poisoning or adversarial manipulation in AI systems today, particularly at the national level?

Ahmed Rashad: “It’s real, it’s documented, and it’s already being treated as a national security priority by people who have access to classified information about it.

DARPA’s GARD program (Guaranteeing AI Robustness Against Deception) spent years specifically developing defenses against adversarial attacks on AI systems, including data poisoning. The NSA and CISA issued joint guidance in 2025 explicitly warning that data supply chain vulnerabilities and maliciously modified training data represent credible threats to AI system integrity. These aren’t theoretical white papers. They’re operational guidance from agencies that don’t publish warnings about hypothetical risks.

The attack surface is significant. If you can compromise the training data of an AI system used for threat detection, medical diagnosis, or logistics optimization, you don’t need to hack the system itself. You’ve already shaped how it sees the world. That’s a much more elegant and harder-to-detect attack vector than traditional cybersecurity intrusions.

The $300 million contract Scale AI holds with the Department of Defense’s CDAO, to deploy AI on classified networks, exists in part because the government understands it cannot use AI trained on unverified public data in sensitive environments. The data provenance question is not academic at that level. It’s operational.

What’s missing from the mainstream conversation is that this isn’t just a government problem. Any enterprise deploying AI in a competitive environment, financial services, pharmaceuticals, critical infrastructure, has an adversarial data exposure they’ve probably not fully mapped. The threat is real. The defenses are still being built.”

BeInCrypto: Why can’t a government or a large enterprise just build this verification layer themselves? What’s the real answer when someone pushes back on that?

Ahmed Rashad: “Some try. And the ones who try learn quickly what the actual problem is.

Building the technology is the easy part. The hard part is the network. Verified, credentialed domain experts, radiologists, linguists, legal specialists, engineers, scientists, don’t just appear because you built a platform for them. You have to recruit them, credential them, build the incentive structures that keep them engaged, and develop the quality consensus mechanisms that make their contributions meaningful at scale. That takes years and it requires expertise that most government agencies and enterprises simply don’t have in-house.

The second problem is diversity. A government agency building its own verification layer will, by definition, draw from a limited and relatively homogeneous pool. The value of a global expert network isn’t just credentialing; it’s the range of perspective, language, cultural context, and domain specialization that you can only get by operating at real scale across real geographies. We have over a million annotators. That’s not something you replicate internally.

The third problem is incentive design. Keeping high-quality contributors engaged over time requires transparent, fair, programmable compensation. Blockchain infrastructure makes that possible in a way that internal systems typically can’t replicate: immutable contribution records, direct attribution, and verifiable payment. A government procurement system is not built to do that efficiently.

The honest answer to the pushback is: you’re not just buying a tool. You’re accessing a network and a credentialing system that took years to build. The alternative isn’t ‘build it yourself’; it’s ‘use what already exists or accept the data quality risk that comes with not having it.’”

BeInCrypto: If AI becomes core national infrastructure, where does a sovereign intelligence layer sit in that stack five years from now?

Ahmed Rashad: “Five years from now, I think it looks like what the financial audit function looks like today: a non-negotiable layer of verification that sits between data and deployment, with regulatory backing and professional standards attached to it.

Right now, AI development operates without anything equivalent to financial auditing. Companies self-report on their training data. There’s no independent verification, no professional credentialing of the process, no third-party attestation that the intelligence behind a model meets a defined standard. We’re in the early equivalent of pre-Sarbanes-Oxley finance, operating largely on trust and self-certification.

As AI becomes critical infrastructure, running power grids, healthcare systems, financial markets, defense networks, that model becomes untenable. Governments will mandate auditability. Procurement processes will require verified data provenance as a condition of contract. Liability frameworks will attach consequences to failures that could have been prevented by proper verification.

Where Perle sits in that stack is as the verification and credentialing layer, the entity that can produce an immutable, auditable record of what a model was trained on, by whom, under what standards. That’s not a feature of AI development five years from now. It’s a prerequisite.

The broader point is that sovereign intelligence isn’t a niche concern for defense contractors. It’s the foundation that makes AI deployable in any context where failure has real consequences. And as AI expands into more of those contexts, the foundation becomes the most valuable part of the stack.”


Will XRP Drop Back to $1.20? Key Support Levels Tested Amid Bearish Pressure

XRP remains under sustained bearish pressure across both its USDT and BTC pairs, with the price structure continuing to print lower highs and lower lows. Despite short-term bounces from support levels, the broader trend favors sellers as the price trades below key moving averages and within a descending structure.

Ripple Price Analysis: The USDT Pair

On the XRP/USDT chart, the price is trading inside a well-defined descending channel, consistently rejecting dynamic resistance from the midline of the channel, the upper trendline, and the 100-day and 200-day moving averages. The recent bounce from the $1.20 demand zone failed to reclaim the $1.80 supply area, reinforcing the bearish structure and confirming that rallies are still corrective in nature.

The RSI also remains below the neutral 50 level and continues to trend weakly, signaling a lack of bullish momentum. As long as XRP stays below the mid-channel resistance and the 100-day and 200-day moving averages, located near $1.90 and $2.30 levels, respectively, the downside risk toward the lower channel boundary remains elevated, with the $1.20 zone acting as critical structural support.
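For readers unfamiliar with the indicator, the RSI readings referenced above follow Wilder’s standard formula: the ratio of smoothed average gains to smoothed average losses over a lookback period, conventionally 14. A minimal sketch, assuming a simple list of closing prices rather than any particular data feed (exact values on a chart will depend on the feed and timeframe):

```python
def rsi(closes, period=14):
    """Wilder-smoothed Relative Strength Index over a list of closes."""
    if len(closes) <= period:
        raise ValueError("need more closes than the lookback period")
    gains, losses = [], []
    for prev, cur in zip(closes, closes[1:]):
        change = cur - prev
        gains.append(max(change, 0.0))
        losses.append(max(-change, 0.0))
    # Seed with simple averages over the first window...
    avg_gain = sum(gains[:period]) / period
    avg_loss = sum(losses[:period]) / period
    # ...then apply Wilder's recursive smoothing for the rest.
    for g, l in zip(gains[period:], losses[period:]):
        avg_gain = (avg_gain * (period - 1) + g) / period
        avg_loss = (avg_loss * (period - 1) + l) / period
    if avg_loss == 0:
        return 100.0
    rs = avg_gain / avg_loss
    return 100.0 - 100.0 / (1.0 + rs)
```

A reading below the neutral 50 line, as described for XRP above, simply means smoothed losses have outweighed smoothed gains over the lookback window.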

The BTC Pair

Against Bitcoin, XRP is also showing relative weakness, trading below both the 100-day and 200-day moving averages, which are both located above the 2,200 sats area, after failing to hold prior breakout gains. The rejection from the 2,200-2,400 sats resistance zone confirms that sellers are defending higher levels, while the price compresses near a key horizontal support band at 2,000 sats.

Momentum on the XRP/BTC pair is neutral-to-bearish, with the RSI struggling to establish sustained strength above 50. A breakdown below the current support region could open the door for further relative underperformance, while reclaiming the moving average cluster would be the first signal that XRP is beginning to regain strength versus BTC.

 

Trump Fires Back After SCOTUS Ruling, Announces 10% Global Tariff

The United States Supreme Court ruled on Friday that President Donald Trump could not use national emergency powers to levy tariffs during peacetime.

US President Donald Trump announced a 10% global tariff on Friday following the Supreme Court’s ruling striking down his authority to levy tariffs under the International Emergency Economic Powers Act (IEEPA).

Trump was critical of the Supreme Court’s decision, calling it “ridiculous” at Friday’s press conference, and said he would levy the tariffs under other legal authorities, including the Trade Expansion Act of 1962 and the Trade Act of 1974. Trump said:

“Effective immediately. All national security tariffs under Section 232 and Section 301 tariffs remain fully in place. And in full force and effect. Today, I will sign an order to impose a 10% Global tariff under Section 122 over and above our normal tariffs already being charged.”

US President Donald Trump announced a 10% global tariff and commented on Friday’s Supreme Court ruling. Source: The White House

Trump’s tariffs have repeatedly caused severe downturns in markets considered high risk, including crypto and equities, as the threat of tariffs fuels uncertainty and shakes investor confidence.

The Supreme Court strikes down Trump’s authority to levy tariffs under emergency powers

Trump levied a 25% tariff on most goods coming in from Canada and Mexico, and a 10% tariff on goods coming in from China under the IEEPA, framing both tariffs as a response to national security threats.

An influx of drugs from foreign countries created a “public health crisis,” Trump said, while trade deficits with China threatened the industrial manufacturing base in the US.

The Supreme Court ruling struck down Trump’s authority to levy tariffs under the IEEPA. Source: The US Supreme Court

However, the Supreme Court rejected both premises as national security threats under the IEEPA and said that the Executive Branch does not have the authority to levy tariffs under the IEEPA during peacetime. 

“In IEEPA’s half-century of existence, no president has invoked the statute to impose any tariffs, let alone tariffs of this magnitude and scope,” the ruling said.

“Article I, Section 8, of the Constitution specifies that ‘The Congress shall have Power To lay and collect Taxes, Duties, Imposts and Excises.’ The Framers recognized the unique importance of this taxing power,” the Supreme Court ruled on Friday.
