Crypto World

Gurhan Kiziloz confirms he has $100b in sight for Nexus International


Disclosure: This article does not represent investment advice. The content and materials featured on this page are for educational purposes only.

Nexus International hits $1.2 billion in revenue as billionaire Gurhan Kiziloz sets his sights on $100 billion in long-term growth.


Summary

  • Nexus International hits $1.2b revenue as founder Gurhan Kiziloz targets $100b without outside investors.
  • After five bankruptcies, Gurhan Kiziloz has built a $1.2b revenue empire while retaining full ownership.
  • Spartans.com’s casino-only strategy powers Nexus growth, avoiding dilution while competing with Stake and bet365.

Gurhan Kiziloz, the self-made billionaire behind Nexus International, is not one to celebrate mid-journey. His company just crossed $1.2 billion in annual revenue for 2025, triple its 2024 performance, and yet he’s already thinking ten steps ahead. “We’re not calling $1.2 billion a milestone,” Kiziloz said in a recent interview. “There’s much more scale to build. I’d call $100 billion a turning point. That’s where we’re going.”

For most founders, that kind of revenue would signal a peak. For Kiziloz, it barely registers as a checkpoint. The entrepreneur who once faced five bankruptcies is now the sole owner of a company that competes with billion-dollar operators, without raising a single dollar in venture capital. And he’s openly stating that $100 billion is the number that will define his long-term ambition.

The numbers are clear. In 2024, Nexus International reported $400 million in revenue. By the end of 2025, that number hit $1.2 billion. The 200% year-on-year increase marks the largest single-period growth in the company’s history and puts it firmly in the league of mid-sized global operators.
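The growth arithmetic in those figures is easy to verify; a quick sketch using only the revenue numbers quoted above:

```python
# Revenue figures quoted in the article (USD).
revenue_2024 = 400_000_000
revenue_2025 = 1_200_000_000

# Year-on-year growth: (new - old) / old, expressed as a percentage.
growth_pct = (revenue_2025 - revenue_2024) / revenue_2024 * 100
multiple = revenue_2025 / revenue_2024

print(f"Growth: {growth_pct:.0f}%")    # Growth: 200%
print(f"Multiple: {multiple:.1f}x")    # Multiple: 3.0x
```

A 200% increase is the same as tripling, so the "200% year-on-year" and "triple its 2024 performance" claims are consistent.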

But what makes Nexus different isn’t just the scale, it’s the structure. The company has no external investors. Every dollar used for growth comes from retained earnings. Kiziloz has maintained full ownership of the parent company throughout this expansion, bypassing the equity dilution that usually follows hypergrowth.

The biggest contributor to Nexus’s revenue explosion is Spartans.com, a casino-only gaming platform that goes head-to-head with names like Stake and bet365. Unlike most competitors, Spartans.com doesn’t combine casino and sportsbook offerings. It’s intentionally focused, designed to dominate the casino niche rather than spread thin across multiple verticals.

In 2025 alone, Spartans.com absorbed $200 million in platform reinvestment, every cent funded internally. This operational discipline has become a hallmark of the Nexus playbook: scale only when the existing product is cash-generative, and never dilute ownership to fuel expansion.

The remaining portfolio includes Megaposta, a licensed Latin American brand, and Lanistar, a platform tailored for Europe. While both contribute to the overall structure, Spartans remains the driving force behind the company’s financial ascent.

What makes Kiziloz’s model unique isn’t just that he avoided venture funding. It’s how he used that constraint as a structural advantage. Without external capital, there’s no boardroom politics, no investor timelines, and no incentive to inflate short-term metrics for the sake of fundraising optics. Decisions are made fast, costs are tightly controlled, and accountability rests entirely with Kiziloz and his internal team.

The numbers reflect that clarity. The company reinvested $200 million in 2025 into tech, compliance, and platform architecture, without tapping into credit lines or private equity. That’s rare in a sector where expansion is almost always debt- or dilution-fueled.

It’s easy to misread Kiziloz’s $100 billion target as bravado. But for him, it’s about building a durable model that doesn’t depend on narrative cycles or temporary hype. The $1.2 billion revenue mark is a milestone, yes, but it’s not the story. The story is that he got there without giving up ownership, without artificial growth, and without compromising execution standards.

“I think the future of high-scale businesses will look more like this,” he said. “You don’t need to raise to grow. You need to build things that work and keep control while doing it.”

That approach stands in contrast to most of today’s unicorns, many of which are propped up by billions in funding with no clear path to profitability. Nexus has already crossed the profitability line. And it’s doing so with a product-first, capital-efficient mindset that remains rare, especially in online gaming.

Nexus has not issued public guidance for 2026, nor has it broken down revenue by platform or geography. Kiziloz’s philosophy is not to speculate forward but to let operational output speak for itself.

But if past performance is any indication, Nexus International is not slowing down. With Spartans.com driving volume, and Megaposta continuing to benefit from early market entry in Brazil, the company’s momentum is clear. And unlike its competitors, Nexus doesn’t have to wait for board approvals or capital calls to deploy that momentum.

The result is a structure that moves faster, adapts more precisely, and scales without compromise.

Gurhan Kiziloz’s story isn’t clean or conventional. He went bankrupt five times before finding the formula that stuck. That formula was simple: eliminate what doesn’t work, double down on what does, and keep ownership at all costs.

Today, with a $1.7 billion personal net worth and a business generating $1.2 billion annually, the math proves that approach works. But for Kiziloz, it’s still early.

Because the goal was never just survival. The goal, as he says, is to reach the turning point. And that number is $100 billion.

This article was prepared in collaboration with BlockDAG. It does not constitute investment advice.


Flow Foundation seeks court order to stop FLOW token delisting in South Korea


Flow Foundation and Dapper Labs have filed a motion with the Seoul Central District Court to suspend the termination of trading support on major South Korean exchanges for the Flow blockchain’s native FLOW token.

Summary

  • Flow Foundation and Dapper Labs have asked a Seoul court to suspend the planned termination of FLOW trading on Upbit, Bithumb, and Coinone.
  • A December 2025 exploit allowed an attacker to duplicate about $3.9 million in tokens, but post-incident reports confirmed user balances were not affected.

According to a March 8 announcement, the firms are asking the court to temporarily halt the delisting decision by Upbit, Bithumb, and Coinone, which announced plans to end trading support for the token on Feb. 12 after a security incident on the layer-1 blockchain.

As previously reported by crypto.news, on Dec. 27, the Flow blockchain suffered a protocol-level exploit that allowed an attacker to mint about $3.9 million in duplicated tokens.

Although post-mortem reports published later confirmed that the incident did not impact user balances, it led to a temporary halt of the network and triggered emergency measures from validators, who paused the chain and worked with exchange partners to freeze and recover funds linked to the attack.

Flow initially proposed a full chain-rollback recovery plan, but ecosystem partners opposed it, warning it could create double balances for users who bridged assets out during the rollback window and losses for those who bridged assets in during the same period.

Developers later opted for an isolated recovery plan by targeting and destroying the duplicate tokens in a bid to preserve legitimate user activity on the network.

Several exchanges suspended FLOW trading and services following the incident, including Upbit, Bithumb, and Coinone, which are at the center of the Flow Foundation’s court motion.

However, after conducting individual reviews of the incident and the remediation measures taken by the project, many of these platforms have since restored full services for the token.

The foundation contends that the FLOW token remains available on major global exchanges, including Binance, Coinbase, and Kraken, while Korbit continues to support FLOW trading in South Korea.

“Given the weight of new evidence, Flow Foundation and Dapper Labs have filed a motion with the Seoul Central District Court requesting a suspension of the trading termination until a thorough review can be completed,” the foundation said.

“This step reflects the responsibility of the Foundation to advocate for the Korean community using every available pathway. The Foundation remains open to constructive conversation with all parties involved,” it added.

The court is set to review the application on March 9 and will determine the next steps in the case.

Further, the foundation said it would continue to pursue additional exchange listings in South Korea and would expand “self-custody access options” for impacted users.

As of the last check, the FLOW token was down 6.4% over the past 24 hours and was trading 99.9% below its all-time high.

AI Infrastructure as a Service (AIaaS): Enterprise AI Deployment Guide


AI Summary

  • Enterprises are pivoting towards large-scale AI deployment, with a focus on robust infrastructure to support advanced AI workloads.
  • As global AI spending is set to reach $2.52 trillion by 2026, organizations are investing heavily in AI foundations.
  • AI Infrastructure-as-a-Service (AIaaS) emerges as a pivotal model, offering on-demand access to essential resources for building AI systems without the burden of managing complex hardware.
  • AI cloud infrastructure is becoming the cornerstone of enterprise AI, providing scalable environments optimized for high-performance computing and large-scale model training.
  • Key architectural components of modern AI infrastructure include high-performance compute layers, data engineering, storage layers, machine learning development environments, and MLOps frameworks.

Artificial intelligence has entered a phase where infrastructure, not algorithms, is becoming the defining factor for enterprise success. Organizations are rapidly shifting their focus from experimentation to large-scale deployment of AI solutions. However, running modern AI workloads requires massive computing power, distributed storage systems, and specialized AI development infrastructure.

Industry research shows that enterprises are dramatically increasing their investments in AI foundations. According to research from Gartner, global AI spending is projected to reach $2.52 trillion by 2026, a 44% increase from 2025. A significant portion of this spending is directed toward AI infrastructure and enterprise AI platforms.

Infrastructure is now the backbone of enterprise AI adoption. Large organizations are investing heavily in high-performance computing clusters, AI cloud infrastructure, and scalable data pipelines to support generative AI and machine learning applications.

As John-David Lovelock, Distinguished VP Analyst at Gartner, explains:

“AI adoption is fundamentally shaped by the readiness of human capital and organizational processes.”

This shift toward infrastructure-led AI adoption has accelerated the rise of AI Infrastructure as a Service (AIaaS), enabling enterprises to build intelligent systems without managing complex underlying hardware.

What Is AI Infrastructure-as-a-Service (AIaaS)? A New Operating Model for Enterprise AI

AI Infrastructure-as-a-Service is a cloud-based delivery model that provides enterprises with on-demand access to computing resources, machine learning environments, and deployment platforms required to build and scale artificial intelligence systems.

Instead of investing in expensive hardware or building AI platforms internally, organizations can leverage managed AI infrastructure services delivered through cloud-based platforms.

An enterprise-grade AI infrastructure platform typically provides:

  • GPU and AI accelerator clusters for large-scale computation
  • Distributed storage for large datasets
  • AI development infrastructure for model training
  • MLOps pipelines for lifecycle management
  • AI deployment and inference environments

This service-based model enables organizations to build advanced AI applications while focusing on innovation rather than infrastructure management.

Industry analysts highlight that AI-optimized infrastructure services are becoming one of the fastest-growing segments of enterprise technology.

According to Gartner research, spending on AI-optimized Infrastructure-as-a-Service is expected to reach $37.5 billion by 2026, driven by the increasing demand for specialized computing hardware such as GPUs and AI accelerators.

The Rise of AI Cloud Infrastructure: Powering the Next Generation of AI Applications

Modern AI systems rely heavily on scalable cloud environments capable of handling massive datasets and complex machine learning workloads. As a result, AI cloud infrastructure has become the foundation of enterprise AI deployment.

Unlike traditional cloud environments, AI cloud infrastructure is optimized for high-performance computing and large-scale model training. It integrates advanced hardware components such as GPUs, tensor processing units, and AI accelerators with distributed storage and networking systems.

Key capabilities of AI cloud infrastructure include:

  • Scalable GPU clusters
  • Distributed computing frameworks
  • High-speed networking for parallel processing
  • Automated model deployment environments

These capabilities allow enterprises to train complex machine learning models, process massive datasets, and deploy AI-driven applications across global markets.

According to reports from Deloitte and Gartner, enterprise spending on AI infrastructure is accelerating as organizations scale generative AI and machine learning deployments. Major technology companies are investing hundreds of billions of dollars into data centers designed specifically for AI workloads.

This growing infrastructure ecosystem is enabling enterprises to build AI systems that can process vast amounts of data in real time.

Building Enterprise AI Infrastructure: Key Architectural Components

A modern enterprise AI infrastructure consists of multiple interconnected layers designed to support the complete lifecycle of AI development.

These layers form the foundation of AI development infrastructure used by data scientists, machine learning engineers, and enterprise technology teams.

High-Performance Compute Layer

AI workloads require specialized hardware capable of handling parallel computations. GPU clusters and AI accelerators enable organizations to train deep learning models and generative AI systems efficiently.

These compute environments are particularly critical for large language models and advanced neural networks that require thousands of parallel operations.

Data Engineering and Storage Layer

AI systems rely on vast volumes of data. Enterprise AI platforms include advanced data pipelines that support data ingestion, storage, transformation, and governance.

These systems allow organizations to process structured and unstructured data at scale while maintaining security and compliance.

Machine Learning Development Environment

AI engineers require sophisticated development environments that allow them to experiment with models, test algorithms, and collaborate across teams.

These environments are an essential component of modern AI development infrastructure.

They typically include:

  • model training frameworks
  • experiment tracking tools
  • collaborative development environments

These capabilities accelerate innovation while ensuring consistency across AI projects.

MLOps and Model Lifecycle Management

As AI systems move into production environments, organizations must manage the entire lifecycle of machine learning models.

MLOps frameworks provide automation for:

  • model deployment
  • monitoring and performance tracking
  • continuous model retraining

These systems ensure that AI applications remain reliable and effective over time.
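As an illustration only (no specific MLOps framework’s API is implied), the "continuous model retraining" step often reduces to a monitoring loop: evaluate a live quality metric and trigger a training job when it drifts below a floor. A minimal stdlib-only Python sketch, where `evaluate` and `retrain` are hypothetical hooks a real pipeline would wire to evaluation and training jobs:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ModelMonitor:
    """Triggers retraining when live model quality drifts below a floor.

    `evaluate` and `retrain` are hypothetical hooks; in a real MLOps
    pipeline they would call an evaluation service and a training job.
    """
    evaluate: Callable[[], float]   # returns the current accuracy
    retrain: Callable[[], None]     # kicks off a retraining job
    threshold: float = 0.90         # minimum acceptable accuracy

    def check(self) -> bool:
        """Run one monitoring cycle; return True if retraining was triggered."""
        accuracy = self.evaluate()
        if accuracy < self.threshold:
            self.retrain()
            return True
        return False

# Usage with stubbed hooks: accuracy 0.85 is below the 0.90 floor,
# so a retrain event is recorded.
events = []
monitor = ModelMonitor(evaluate=lambda: 0.85,
                       retrain=lambda: events.append("retrain"))
triggered = monitor.check()
print(triggered, events)  # True ['retrain']
```

Production systems layer scheduling, alerting, and model registries on top of this core loop, but the trigger logic itself is usually this simple.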

The Role of AI Development Companies in Accelerating Enterprise AI

For many organizations, building AI infrastructure internally can be both technically complex and financially demanding. As a result, enterprises increasingly collaborate with specialized AI development company partners that provide expertise in building scalable AI ecosystems.

An experienced AI development company can help enterprises:

  • Design scalable AI infrastructure platforms
  • Implement AI cloud infrastructure environments
  • Build custom AI models and data pipelines
  • Deploy AI applications across enterprise systems

By combining infrastructure expertise with advanced AI engineering capabilities, these companies enable organizations to accelerate AI adoption while minimizing operational risks.

Business Advantages of AI Infrastructure-as-a-Service

Adopting AI infrastructure as a service provides multiple strategic benefits for enterprises looking to scale AI initiatives.

AIaaS eliminates infrastructure bottlenecks, allowing organizations to focus on building intelligent applications rather than managing hardware.

  • Scalable Computing Resources

Enterprises can dynamically scale computing resources based on demand, enabling them to handle large AI workloads efficiently.

  • Reduced Capital Investment

Organizations avoid large upfront investments in specialized hardware such as GPU clusters and AI accelerators.

  • Improved Operational Efficiency

Managed AI infrastructure services reduce operational complexity and simplify the management of AI environments.

  • Faster Deployment of AI Applications

AIaaS platforms accelerate the development and deployment of AI solutions across enterprise systems.
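To make the "scale based on demand" benefit concrete, here is a hedged sketch of how a scheduler might size a GPU cluster from queued work. The function and parameter names are illustrative, not any provider’s actual API:

```python
import math

def nodes_needed(pending_gpu_hours: float,
                 gpus_per_node: int = 8,
                 hours_window: float = 1.0) -> int:
    """Number of nodes required to clear the queued GPU-hours
    within the target time window (illustrative sizing logic)."""
    concurrent_gpus = pending_gpu_hours / hours_window
    return math.ceil(concurrent_gpus / gpus_per_node)

# 120 GPU-hours of queued training jobs, to finish within 2 hours,
# on 8-GPU nodes: 60 concurrent GPUs -> 8 nodes (7.5 rounded up).
print(nodes_needed(120, gpus_per_node=8, hours_window=2.0))  # 8
```

The point of the on-demand model is that this number can be recomputed and re-provisioned continuously, rather than fixed by an upfront hardware purchase.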

Emerging AI Infrastructure Trends Shaping 2025-2026

The evolution of enterprise AI infrastructure is being shaped by several transformative trends.

  • Generative AI Infrastructure

The rise of generative AI has significantly increased demand for computing power and data processing capabilities. Enterprises are building infrastructure specifically designed to support large language models and multimodal AI systems.

  • AI Supercomputing Clusters

Large-scale AI clusters capable of connecting thousands of GPUs are becoming the backbone of enterprise AI platforms.

  • Edge AI Deployment

Organizations are increasingly deploying AI models closer to data sources to enable real-time processing for applications such as smart manufacturing and autonomous systems.

  • AI Governance and FinOps

As AI adoption grows, enterprises are implementing governance frameworks and financial operations strategies to manage the cost and performance of AI workloads.

Experts highlight that infrastructure readiness is becoming a critical factor for successful AI implementation.

Challenges Enterprises Must Address When Building AI Infrastructure

  • Data Security and Compliance

Enterprises must ensure that sensitive data remains protected when deploying AI workloads in cloud environments.

  • High Computing Costs

Training large AI models can require significant computing resources, increasing operational expenses.

  • Shortage of Specialized Talent

Many organizations struggle to find professionals with expertise in AI infrastructure engineering.

  • Vendor Lock-In

Relying heavily on a single AI cloud provider can create long-term operational dependencies.

Addressing these challenges requires careful planning and a well-defined enterprise AI strategy.

The Future of AI Infrastructure Platforms

AI infrastructure is rapidly evolving as enterprises push the boundaries of machine learning and generative AI technologies.

Future enterprise AI platforms are expected to incorporate:

  • autonomous AI operations
  • distributed AI networks
  • edge computing infrastructure
  • AI-native cloud environments

Researchers predict that the number of AI agents and intelligent systems could increase dramatically over the next decade, placing even greater demands on global computing infrastructure.

This means that scalable AI infrastructure platforms will become essential digital foundations for the next generation of intelligent systems.

Why AIaaS is Becoming the Backbone of Enterprise AI

Artificial intelligence is transforming how organizations operate, compete, and innovate. However, the ability to scale AI initiatives depends heavily on the availability of reliable and high-performance infrastructure. AI Infrastructure-as-a-Service provides enterprises with a powerful solution for building and deploying intelligent systems without the complexity of managing hardware environments. By leveraging scalable computing environments and modern AI platforms, organizations can accelerate innovation, reduce operational complexity, and unlock new opportunities in the AI-driven economy. As AI adoption continues to expand, AIaaS will play a critical role in enabling enterprises to build the intelligent digital ecosystems of the future.

As a trusted AI Development company, Antier helps enterprises design and implement scalable AI environments that support modern AI workloads and intelligent applications. With deep expertise in enterprise AI deployment, Antier empowers organizations to transform ideas into production-ready AI solutions.

AI Use in Workplaces Causing ‘Brain Fry,’ Say Researchers


Excessive use and oversight of artificial intelligence in the workplace are giving workers “AI brain fry,” contrary to promises that the technology would ease job pressures.

Workers who are using AI tools report that the technology is “intensifying rather than simplifying work,” researchers from Boston Consulting Group and the University of California wrote in the Harvard Business Review on Friday.

A study of nearly 1,500 full-time US workers found 14% said they had experienced “mental fatigue that results from excessive use of, interaction with, and/or oversight of AI tools beyond one’s cognitive capacity,” or what the researchers called “AI brain fry.”

Respondents described having a “mental hangover” with a “fog” or “buzzing” and an inability to think clearly, along with headaches, slower decision-making, and difficulty focusing.

Marketing and HR workers reported the highest levels of AI-induced “brain fry.” Source: Harvard Business Review

AI companies have pitched their products as productivity boosters that let workers offload parts of their workloads, a message some employers have embraced, even starting to measure AI use as a performance metric.

Crypto exchange Coinbase CEO Brian Armstrong has said he fired engineers who didn’t want to use AI, and set a goal late last year to have AI generate half of the platform’s code.

“As enterprises use more multi-agent systems, employees find themselves toggling between more tools,” the researchers wrote. “Contrary to the promise of having more time to focus on meaningful work, juggling and multitasking can become the definitive features of working with AI.”

AI carries “significant costs,” but can reduce burnout

The researchers said this AI-induced mental strain “carries significant costs in the form of increased employee errors, decision fatigue, and intention to quit.”

Study respondents who said they had brain fry experienced 33% more decision fatigue compared to those who didn’t, which researchers said could cost large companies millions of dollars a year. Those with AI brain fry were also around 40% more likely to have an active intent to quit.

Those reporting AI brain fry also self-reported making nearly 40% more major errors than those who did not, with a major error defined as one with “serious consequences, such as those that could affect safety, outcomes, or important decisions.” 

The researchers found, however, that the use of AI to replace repetitive and routine tasks decreased burnout, a state of chronic workplace stress that leads to negative feelings about the job and decreased effectiveness.

Respondents who used AI to reduce time spent on routine and repetitive tasks reported their levels of burnout were 15% lower than those who didn’t use AI in such a way.

The researchers said company leaders looking to reduce AI brain fry should “clearly define AI’s purpose in the organization” and explain how workloads will change with the tool.