Crypto World

Florida executive charged with wire fraud, money laundering in $328M crypto scam


Federal authorities have arrested Christopher Alexander Delgado, the founder and CEO of Goliath Ventures, on federal charges tied to an alleged $328 million cryptocurrency Ponzi scheme, the U.S. Department of Justice announced.

Summary

  • Goliath Ventures CEO Christopher Delgado was arrested on federal wire fraud and money laundering charges tied to a $328 million Ponzi scheme.
  • Prosecutors say investors were promised monthly crypto returns, but funds were diverted to pay earlier investors and support Delgado’s luxury lifestyle.
  • If convicted, Delgado faces up to 30 years in prison; authorities are reaching out to victims under the Crime Victims’ Rights Act.

Goliath Ventures CEO arrested in $328M crypto scam

Delgado, 34, of Apopka, Florida, was taken into custody on a criminal complaint filed in the United States District Court for the Middle District of Florida, where he is charged with wire fraud and money laundering.

If convicted on all counts, he could face up to 30 years in federal prison.


Prosecutors allege Delgado’s scheme ran from January 2023 through January 2026, during which he solicited investors to put money into purported cryptocurrency “liquidity pools” that promised steady monthly returns. In reality, federal officials say only about $1 million of the funds was actually invested in legitimate crypto assets.

The bulk of the more than $300 million collected from victims was used to pay earlier investors and finance Delgado’s lavish lifestyle, including luxury travel, company-sponsored events, and purchases of multi-million-dollar homes in central Florida.

According to court filings, victims were drawn in through personal referrals, slick marketing materials, and high-end networking events aimed at projecting legitimacy. At the scheme’s unraveling, investors seeking withdrawals were met with delays, inconsistent explanations, and restricted access to account information.


Federal law enforcement agencies including IRS Criminal Investigation and Homeland Security Investigations spearheaded the probe. Victims are being notified of their rights under the Crime Victims’ Rights Act, and authorities have invited potentially unidentified victims to come forward.

The arrest marks one of the largest alleged crypto-related fraud cases in recent years and underscores ongoing regulatory and criminal scrutiny of digital asset investment schemes.


Crypto World

OpenAI Wins Defense Contract Hours After Govt Ditches Anthropic


OpenAI has secured a deal to run its AI models on the Pentagon’s classified network, a move announced by OpenAI CEO Sam Altman in a late Friday post on X. The arrangement signals a formal step toward embedding next-generation AI within sensitive military infrastructure, framed by assurances of safety and governance that align with the company’s operating limits. Altman’s message described the department’s approach as one that respects safety guardrails and is willing to work within the company’s boundaries, underscoring a methodical path from civilian deployment to classified environments. The timing places OpenAI at the center of a broader debate about how public institutions should harness artificial intelligence without compromising civil liberties or operational safety, particularly in defense contexts.

The news comes as the White House directs federal agencies to halt use of Anthropic’s technology, initiating a six-month transition for agencies already relying on its systems. The policy demonstrates the administration’s intent to tighten oversight over AI tools used across government while still leaving room for carefully orchestrated, safety-conscious deployments. The juxtaposition between a Pentagon-backed integration and a nationwide pause on a rival platform highlights a government-wide reckoning about how, where, and under what safeguards AI technologies should operate in sensitive domains.

Altman’s remarks emphasized a cautious but constructive stance toward national-security applications. He framed the OpenAI arrangement as one that prioritizes safety while allowing access to powerful capabilities, an argument that aligns with ongoing discussions about responsible AI use in government networks. The Defense Department’s approach—favoring controlled access and rigorous governance—reflects a broader policy impulse to build operational safety into deployments that could otherwise accelerate where and how AI informs critical decisions. The public signaling from both sides suggests a model in which collaboration with defense entities proceeds under strict compliance frameworks rather than broad, unfiltered usage.

Within this regulatory and political backdrop, Anthropic’s situation remains a focal point. The company had been the first AI lab to deploy models across the Pentagon’s classified environment under a $200 million contract signed in July. Negotiations reportedly collapsed after Anthropic sought assurances that its software would not enable autonomous weapons or domestic mass surveillance. The Defense Department, by contrast, insisted that the technology remain available for all lawful military purposes, a stance designed to preserve flexibility for defense needs while maintaining safeguards. The divergence illustrates the delicate balance between enabling cutting-edge capabilities and enforcing guardrails that align with national security and civil-liberties considerations.


Anthropic later stated it was “deeply saddened” by the designation and signaled its intention to challenge the decision in court. The move, if upheld, could set a significant precedent affecting how American technology firms negotiate with government agencies as political scrutiny of AI partnerships intensifies. OpenAI, for its part, has indicated it maintains similar restrictions and has written them into its own agreement framework. Altman noted that OpenAI prohibits domestic mass surveillance and requires human accountability in decisions involving the use of force, including automated weapons systems. These provisions are meant to align with the government’s expectations for responsible AI use in sensitive operations, even as the military explores deeper integration of AI tools into its workflows.

Public reaction to the developments has been mixed. Some observers on social platforms questioned the trajectory of AI governance and the implications for innovation. The discussion touches on broader concerns about how security and civil liberties can be reconciled with the speed and scale of AI deployment in governmental and defense contexts. Nonetheless, the core takeaway is clear: the government is actively experimenting with AI in national-security spaces while simultaneously imposing guardrails to prevent misuse, with the outcomes likely to shape future procurement and collaboration across the tech sector.

Altman’s comments reiterated that OpenAI’s restrictions include a prohibition on domestic mass surveillance and a requirement for human oversight in decisions involving force, including automated weapons systems. Those commitments are framed as prerequisites for access to classified environments, signaling a governance model that seeks to harmonize the power of large-scale AI models with the safeguards demanded by sensitive operations. The broader trajectory suggests a sustained interest among policymakers and defense stakeholders in harnessing AI’s benefits while maintaining tight oversight to prevent overreach or misuse. As this enters a phase of practical implementation, both government agencies and tech providers will be measured against their ability to maintain safety, transparency, and accountability in high-stakes settings.

The unfolding narrative also underscores how procurement and policy decisions around AI will influence the technology’s broader ecosystem. If the Pentagon’s experiments with OpenAI’s models within classified networks prove scalable and secure, they could set a template for future collaborations that blend cutting-edge AI with rigorous governance, a model likely to ripple into adjacent industries, including those exploring AI-assisted analytics and blockchain-based governance mechanisms. At the same time, the Anthropic episode demonstrates how procurement negotiations can hinge on explicit guarantees regarding weaponization and surveillance, an issue that could shape the terms under which startups and incumbents pursue federal contracts.


In parallel, the public discourse around AI policy continues to evolve, with lawmakers and regulators watching closely how private firms respond to national-security demands. The outcome of Anthropic’s intended legal challenge could influence the negotiating playbook for future government partnerships, potentially affecting how terms are drafted, how risk is allocated, and how compliance is verified across different agencies. The OpenAI-aided deployment inside the Pentagon’s classified network remains a test case for balancing the speed and utility of AI with the accountability and safety constraints that define its most sensitive applications.

As the regulatory landscape continues to shift, many in the tech community will be watching for how these developments crystallize into concrete practice—how assessments of risk, security protocols, and governance standards evolve in next-generation AI deployments. The interplay between aggressive capability development and deliberate risk containment is now a central feature of strategic technology planning, with implications that extend beyond defense to other sectors that rely on AI for decision-making, data analysis, and critical operations. The coming months will reveal whether the OpenAI-DoD collaboration can serve as a durable model for secure, responsible AI integration within the state’s most sensitive enclaves.

OpenAI’s late-Friday X post framing the Pentagon deployment, and the Defense Department’s safety-oriented stance toward Anthropic, anchor the narrative in primary statements. The Truth Social post attributed to President Trump further contextualizes the political climate surrounding federal AI policy. On Anthropic’s side, the company’s official statement provides the formal counterpoint to the designation and its legal trajectory. Together, these sources outline a multi-faceted landscape where national security, civil liberties, and commercial interests intersect in real time.

Risk & affiliate notice: Crypto assets are volatile and capital is at risk. This article may contain affiliate links. Read full disclosure


Crypto World

U.S. and Israel Strike Iran, Crypto Market Loses $100M in Minutes


TLDR:

  • BTC dropped below $64,000 within hours of Israel’s confirmed strike on Iran’s presidential HQ.
  • Ethereum fell over 5% to under $1,900 as traders liquidated risk positions across altcoins.
  • Over $100M in long positions were wiped out within 15 minutes of the strike news hitting markets.
  • Polymarket trader Vivaldi007 turned a $385K profit betting on a U.S.-Israel strike against Iran, with wagers dating back to Feb 8.

Explosions rocked Tehran after Israel launched strikes on Iran’s presidential headquarters and Ministry of Intelligence. Sirens blared across Israel as the IDF sent emergency alerts to citizens’ phones. 

Crypto markets responded immediately, shedding over $100 million in long positions within 15 minutes. The joint operation, reportedly involving the United States, sent shockwaves far beyond the Middle East.

Israel-Iran Strike Sends Crypto Prices Into Freefall

Bitcoin dropped roughly 3% within hours of the news breaking. It fell below $64,000 as traders rushed to cut exposure. 

Ethereum took a harder hit, sliding over 5% to under $1,900. The broader crypto market cap lost around 6% in early trading, according to market data. A snapshot from CryptoBubbles showed the market broadly in the red, with most assets recording substantial drops.

Crypto market snapshot. Source: CryptoBubbles

The IDF confirmed sirens sounded throughout Israel shortly before the strikes became public. Citizens received direct cellular alerts to stay near protected spaces. The military framed the alert as a proactive measure. It signaled the scale of what was unfolding.

On-chain tracking platform Lookonchain reported one high-profile casualty of the volatility. Trader Machi, who had deposited $245,000 just four days prior, was liquidated again. His account dropped to only $13,580. The timing proved catastrophic for leveraged long positions across the board.
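For scale, the reported figures imply the account lost roughly 94% of the deposited amount. A quick sketch of that arithmetic, using only the numbers reported above:

```python
# Quick check of the liquidation figures reported by Lookonchain.
deposit = 245_000    # USD deposited four days before the liquidation
remaining = 13_580   # reported account balance after liquidation

loss_pct = (deposit - remaining) / deposit * 100
print(f"Loss: {loss_pct:.1f}% of the deposit")  # roughly 94.5%
```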

Not everyone lost. Lookonchain also flagged Polymarket trader Vivaldi007, who had been betting on a U.S.-Israel strike against Iran since February 8. He placed wagers on nearly every available date and kept losing until now. The strikes pushed his total profit to $385,000.

Geopolitical Risk Reignites Crypto Market Volatility

This pattern is not new. When the U.S. struck Iranian nuclear sites in June 2025, BTC plunged below $100,000 during a 7% market-wide selloff. 


Oil supply fears and global economic uncertainty drove the move. Crypto behaved like a risk asset, not a safe haven.

The April 2024 Israel-Iran exchange produced a similar response. BTC briefly dipped under $60,000 as capital rotated toward gold and the dollar. Markets recovered once tensions cooled. Whether that playbook repeats depends on what comes next.

Iran’s potential response remains the key variable. A closure of the Strait of Hormuz, which handles roughly 20% of global oil, could spike energy prices and reignite inflation fears. 

Central bank tightening in that scenario would add further pressure on risk assets. Past modeling suggests a full escalation could cut crypto valuations by 10 to 20% in the short term.
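As a rough illustration of that modeled 10 to 20% range, a hypothetical drawdown calculation (the starting market cap below is an assumed placeholder, not live market data):

```python
# Illustrative drawdown math for the modeled escalation scenario.
def drawdown(value: float, pct: float) -> float:
    """Return the valuation remaining after a drop of pct percent."""
    return value * (1 - pct / 100)

market_cap = 2.3e12  # assumed total crypto market cap in USD (placeholder)

for pct in (10, 15, 20):
    left = drawdown(market_cap, pct)
    print(f"{pct}% drawdown -> ${left / 1e12:.2f}T remaining")
```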


The IDF has not issued further operational updates. Markets remain on edge.


Crypto World

Bitcoin Crashes as US and Israel Strike Iran, War Begins


Israel and the United States carried out a joint strike on Iran early Saturday, marking a major escalation in regional tensions. Bitcoin reacted sharply to the news, dropping to $63,000 and extending daily losses to nearly 7%.

Israeli Defense Minister Israel Katz described the operation as a “preemptive strike.” The Israeli government declared a nationwide state of emergency, warning of possible Iranian retaliation using drones and ballistic missiles.

US Iran War Officially Starts

According to CNN, the strike was coordinated between Washington and Jerusalem. Officials said the action aimed to counter what they described as an immediate threat.

Details on the specific targets have not yet been fully disclosed. The move follows weeks of rising tensions between the U.S. and Iran. A day earlier, Washington designated Iran a State Sponsor of Wrongful Detention, accusing Tehran of holding American citizens for political leverage.


At the same time, the U.S. increased its military presence in Israel, deploying advanced fighter jets and additional assets across the region.

Bitcoin Crashes, Erasing Weekly Gains

Bitcoin fell sharply following news of the strike. The cryptocurrency dropped more than 6% in 24 hours, sliding to around $63,300.

The decline erased recent recovery attempts and extended broader weakness over the past month. Traders appear to be cutting risk exposure amid fears of a wider regional conflict.

Bitcoin Daily Price Chart. Source: CoinGecko

If Iran retaliates directly against Israeli or U.S. assets, the situation could escalate quickly. Energy markets are also on alert, given Iran’s strategic position in global oil routes.


Crypto World

OpenAI Wins Defense Contract After US Halts Anthropic Use


OpenAI has reached an agreement with the United States Department of Defense to deploy its artificial intelligence models on classified military networks, just hours after the White House ordered federal agencies to stop using technology from rival firm Anthropic.

In a late Friday post on X, OpenAI CEO Sam Altman announced the deal, saying the company would provide its models inside the Pentagon’s “classified network.” He wrote that the department showed “deep respect for safety” and a willingness to work within the company’s operating limits.

The announcement came amid a turbulent week for the AI sector. Earlier the same day, Defense Secretary Pete Hegseth labeled Anthropic a “Supply-Chain Risk to National Security,” a designation typically applied to foreign adversaries. The designation requires defense contractors to certify they are not using the company’s models.

Source: Defense Secretary Pete Hegseth

President Donald Trump simultaneously directed every US federal agency to immediately halt use of Anthropic technology, with a six-month transition period for agencies already relying on its systems.

Related: Crypto VC Paradigm expands into AI, robotics with $1.5B fund: WSJ


Anthropic Pentagon talks collapse over AI use limits

Anthropic was the first AI lab to deploy models across the Pentagon’s classified environment under a $200 million contract signed in July. Negotiations collapsed after the company sought guarantees that its software would not be used for autonomous weapons or domestic mass surveillance. The Defense Department insisted the technology be available for all lawful military purposes.

In a statement, Anthropic said it was “deeply saddened” by the designation and intends to challenge the decision in court. The company warned the move could set a precedent affecting how American technology firms negotiate with government agencies, as political scrutiny of AI partnerships continues to intensify.

Altman said OpenAI maintains similar restrictions and that they were written into the new agreement. According to him, the company prohibits domestic mass surveillance and requires human responsibility in decisions involving the use of force, including automated weapons systems.

Related: Pantera, Franklin Templeton join Sentient Arena to test AI agents


OpenAI faces backlash after deal

Meanwhile, some users on X voiced skepticism. “I just canceled ChatGPT and bought Claude Pro Max,” Christopher Hale, an American Democratic politician, wrote. “One stands up for the God-given rights of the American people. The other folds to tyrants,” he added.

Source: Sreemoy Talukdar

“2019 OpenAI: we will never help build weapons or surveillance tools. 2026 OpenAI: department of War, hold my classified cloud instance. Integrity arc go brrrrrrr,” one crypto user wrote.

Magazine: Bitcoin may take 7 years to upgrade to post-quantum — BIP-360 co-author