Crypto World

How NFT games are revolutionizing the industry

by Gonzalo Wangüemert Villalba

4 September 2025

Introduction

The open-source AI ecosystem reached a turning point in August 2025 when Elon Musk’s company xAI released Grok 2.5 and, almost simultaneously, OpenAI launched two new models under the names GPT-OSS-20B and GPT-OSS-120B. While both announcements signalled a commitment to transparency and broader accessibility, the details of these releases highlight strikingly different approaches to what open AI should mean. This article explores the architecture, accessibility, performance benchmarks, regulatory compliance and wider industry impact of these three models. The aim is to clarify whether xAI’s Grok or OpenAI’s GPT-OSS family currently offers more value for developers, businesses and regulators in Europe and beyond.

What Was Released

Grok 2.5, described by xAI as a 270-billion-parameter model, was made available through the release of its weights and tokenizer. These files amount to roughly half a terabyte and were published on Hugging Face. Yet the release lacks critical elements such as training code, detailed architectural notes or dataset documentation. Most importantly, Grok 2.5 comes with a bespoke licence drafted by xAI that has not yet been thoroughly scrutinised by legal or open-source communities. Analysts have noted that its terms could be revocable or carry restrictions that prevent the model from being considered genuinely open source. Elon Musk promised on social media that Grok 3 would be published in the same manner within six months, suggesting this is just the beginning of a broader strategy by xAI to join the open-source race.

By contrast, OpenAI unveiled GPT-OSS-20B and GPT-OSS-120B on 5 August 2025 with a far more comprehensive package. The models were released under the widely recognised Apache 2.0 licence, which is permissive, business-friendly and in line with the requirements of the European Union’s AI Act.
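The "roughly half a terabyte" figure for Grok 2.5's weight files can be sanity-checked with simple arithmetic, assuming the checkpoint is stored in bfloat16 (2 bytes per parameter), a common but here assumed choice for released weights:

```python
# Back-of-envelope check on the size of a 270B-parameter checkpoint.
# Assumes bfloat16 storage (2 bytes per parameter) -- an assumption,
# since xAI has not documented the storage format.
params = 270e9                    # 270 billion parameters, per xAI
bytes_per_param = 2               # bfloat16
size_tb = params * bytes_per_param / 1e12
print(f"{size_tb:.2f} TB")        # about half a terabyte
```

The result, about 0.54 TB, is consistent with the reported download size.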
OpenAI shared not only the weights but also architectural details, training methodology, evaluation benchmarks, code samples and usage guidelines. This represents one of the most transparent releases the company has ever made, after years of criticism for keeping its frontier models proprietary.

Architectural Approach

The architectural differences between these models reveal much about their intended use. Grok 2.5 is a dense transformer with all 270 billion parameters engaged in computation. Without detailed documentation, it is unclear how efficiently it handles scaling or what kinds of attention mechanisms are employed. GPT-OSS-20B and GPT-OSS-120B, meanwhile, use a Mixture-of-Experts design. In practice this means that although the models contain 21 and 117 billion parameters respectively, only a small subset of those parameters is activated for each token: GPT-OSS-20B activates 3.6 billion and GPT-OSS-120B just over 5 billion. This architecture yields far greater efficiency, allowing the smaller of the two to run comfortably on devices with only 16 gigabytes of memory, including Snapdragon laptops and consumer-grade graphics cards. The larger model requires 80 gigabytes of GPU memory, placing it in the range of high-end professional hardware, yet it is still far more efficient than a dense model of similar size. This is a deliberate choice by OpenAI to ensure that open-weight models are not only theoretically available but practically usable.

Documentation and Transparency

The difference in documentation further separates the two releases. OpenAI’s GPT-OSS models include explanations of their sparse attention layers, grouped multi-query attention, and support for extended context lengths of up to 128,000 tokens. These details allow independent researchers to understand, test and even modify the architecture. By contrast, Grok 2.5 offers little more than its weight files and tokenizer, making it effectively a black box.
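The sparse-activation idea behind the Mixture-of-Experts design can be sketched in a few lines: a router scores a set of expert networks per token and only the top-k experts run, so most parameters sit idle on any given token. The sizes below are toy values for illustration, not the real GPT-OSS configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_experts, top_k, d_model = 8, 2, 16

# Router and expert parameters (toy sizes).
router_w = rng.standard_normal((d_model, n_experts))
experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]

def moe_layer(x):
    """x: (d_model,) vector for one token. Runs only the top-k experts."""
    logits = x @ router_w
    chosen = np.argsort(logits)[-top_k:]      # indices of the top-k experts
    weights = np.exp(logits[chosen])
    weights /= weights.sum()                  # softmax over the chosen experts
    # Only the selected experts' parameters participate in this forward pass.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, chosen))

token = rng.standard_normal(d_model)
out = moe_layer(token)
print(out.shape, f"active experts per token: {top_k}/{n_experts}")
```

With 2 of 8 experts active, only a quarter of the expert parameters are touched per token, which is the same mechanism that lets GPT-OSS-20B activate 3.6 of its 21 billion parameters.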
From a developer’s perspective this is crucial: access to weights without knowledge of how the system was trained or structured limits reproducibility and hinders adaptation. Transparency also affects regulatory compliance and community trust, making OpenAI’s approach significantly more robust.

Performance and Benchmarks

Benchmark performance is another area where the GPT-OSS models shine. According to OpenAI’s technical documentation and independent testing, GPT-OSS-120B rivals or exceeds the reasoning ability of the company’s o4-mini model, while GPT-OSS-20B achieves parity with o3-mini. On benchmarks such as MMLU, Codeforces, HealthBench and the AIME mathematics tests from 2024 and 2025, the models perform strongly, especially considering their efficient architecture. GPT-OSS-20B in particular impressed researchers by outperforming much larger competitors such as Qwen3-32B on certain coding and reasoning tasks, despite using less energy and memory. Academic studies published on arXiv in August 2025 reported that the model achieved nearly 32 per cent higher throughput and more than 25 per cent lower energy consumption per 1,000 tokens than rival models. Interestingly, one paper noted that GPT-OSS-20B outperformed its larger sibling GPT-OSS-120B on some human evaluation benchmarks, suggesting that sparse scaling does not always correlate linearly with capability.

In terms of safety and robustness, the GPT-OSS models again appear carefully designed. They perform comparably to o4-mini on jailbreak resistance and bias testing, though they display higher hallucination rates on simple factual question-answering tasks. This transparency allows researchers to target weaknesses directly, which is part of the value of an open-weight release. Grok 2.5, however, lacks publicly available benchmarks altogether. Without independent testing, its actual capabilities remain uncertain, leaving the community with only Musk’s promotional statements to go by.
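The throughput and energy figures above are related: at comparable power draw, roughly 32 per cent higher throughput alone implies close to a quarter less energy per 1,000 tokens. A toy calculation makes the link explicit; the power and throughput numbers here are illustrative placeholders, not measurements:

```python
# Energy per 1,000 tokens = average power / throughput * 1000.
def joules_per_1k_tokens(avg_power_w, tokens_per_s):
    return avg_power_w / tokens_per_s * 1000

# Placeholder figures: same power draw, ~32% higher throughput.
baseline = joules_per_1k_tokens(avg_power_w=300, tokens_per_s=40)
efficient = joules_per_1k_tokens(avg_power_w=300, tokens_per_s=40 * 1.32)

saving = 1 - efficient / baseline
print(f"{saving * 100:.1f}% less energy per 1k tokens")
```

The saving works out to about 24 per cent from throughput alone, so the reported ">25 per cent" figure is plausible once even modest power differences are added.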
Regulatory Compliance

Regulatory compliance is a particularly important issue for organisations in Europe under the EU AI Act. The legislation requires general-purpose AI models to be released under genuinely open licences, accompanied by detailed technical documentation, information on training and testing datasets, and usage reporting. For models that exceed systemic risk thresholds, such as those trained with more than 10²⁵ floating-point operations, further obligations apply, including risk assessment and registration. Grok 2.5, by virtue of its vague licence and lack of documentation, appears non-compliant on several counts. Unless xAI publishes more details or adapts its licensing, European businesses may find it difficult or legally risky to adopt Grok in their workflows. GPT-OSS-20B and 120B, by contrast, appear carefully aligned with the requirements of the AI Act: their Apache 2.0 licence is recognised under the Act, their documentation meets transparency demands, and OpenAI has signalled a commitment to provide usage reporting. From a regulatory standpoint, OpenAI’s releases are the safer bet for integration within the UK and EU.

Community Reception

The reception from the AI community reflects these differences. Developers welcomed OpenAI’s move as a long-awaited recognition of the open-source movement, especially after years of criticism that the company had become overly protective of its models. Some users, however, expressed frustration with the Mixture-of-Experts design, reporting that it can lead to repetitive tool-calling behaviours and less engaging conversational output. Yet most acknowledged that for tasks requiring structured reasoning, coding or mathematical precision, the GPT-OSS family performs exceptionally well. Grok 2.5’s release was greeted with more scepticism.
While some praised Musk for at least releasing the weights, others argued that without a proper licence or documentation it was little more than a symbolic gesture, designed to signal openness while avoiding true transparency.

Strategic Implications

The strategic motivations behind these releases are also worth considering. For xAI, releasing Grok 2.5 may be less about immediate usability and more about positioning in the competitive AI landscape, particularly against Chinese developers and American rivals. For OpenAI, the move appears to be a balancing act: maintaining leadership in proprietary frontier models like GPT-5 while offering credible open-weight alternatives that address regulatory scrutiny and community pressure. This dual strategy could prove effective, enabling the company to compete in both commercial and open-source markets.

Conclusion

Ultimately, the comparison between Grok 2.5 and GPT-OSS-20B and 120B is not merely technical but philosophical. xAI’s release demonstrates a willingness to participate in the open-source movement but stops short of true openness. OpenAI, on the other hand, has set a new standard for what open-weight releases should look like in 2025: efficient architectures, extensive documentation, clear licensing, strong benchmark performance and regulatory compliance. For European businesses and policymakers evaluating open-source AI options, GPT-OSS currently represents the more practical, compliant and capable choice. While both xAI and OpenAI contributed to the momentum of open-source AI in August 2025, the details reveal that not all openness is created equal. Grok 2.5 stands as an important symbolic release, but OpenAI’s GPT-OSS family sets the benchmark for practical usability, compliance with the EU AI Act, and genuine transparency.


Trump’s State of the Union Signals No Relief on Rates, Ignores Crypto


US President Donald Trump delivered a nearly two-hour State of the Union address on Tuesday — the longest in US history — touting economic gains, warning Iran against pursuing nuclear weapons, and defending his tariff agenda after a Supreme Court setback.

Yet in a speech that touched on taxes, AI, housing, and healthcare, digital assets were entirely absent.

All the Trumps Were There, but Not Crypto

The omission is striking. All of Trump’s children were in attendance, including sons Donald Jr. and Eric, who have been deeply involved in crypto ventures such as World Liberty Financial and various token launches.

The president himself has repeatedly pledged to make the US “the crypto capital of the planet.” None of that made it into the address.

Tariff Chaos and Sticky Inflation Keep the Fed on Hold

For crypto markets, the most consequential signals were macro, not legislative.

Trump called the Supreme Court’s ruling striking down his emergency tariffs “very unfortunate” and vowed to maintain them under alternative legal authorities, insisting “congressional action will not be necessary.”

But the rollout quickly turned chaotic. Trump first announced a 10% replacement rate, then revised it to 15% days later. Yet official documents show the lower rate took effect Tuesday with no directive to raise it. The EU suspended ratification of its summer trade deal on Monday; India deferred scheduled talks.

Trump repeated his claim that tariffs could “substantially replace” income taxes. Economists call this implausible. The federal government collected $2.4 trillion in income taxes in 2024 but took in only about $300 billion from tariffs — and must now refund roughly half of that under the court ruling. Also, US importers pay the tariffs, not foreign governments.
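The gap the economists point to can be made concrete with the figures cited above: net tariff revenue, after the court-ordered refunds, covers only a small fraction of income-tax receipts.

```python
# Worked numbers behind the objection, using the figures quoted in the text.
income_tax = 2.4e12               # 2024 federal income-tax collections
tariff_gross = 300e9              # approximate tariff receipts
tariff_net = tariff_gross * 0.5   # roughly half must be refunded per the ruling

coverage = tariff_net / income_tax
print(f"net tariffs cover about {coverage:.1%} of income-tax revenue")
```

On these numbers, tariffs would need to grow more than fifteen-fold to replace the income tax.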

On inflation, Trump claimed core inflation fell to 1.7% in late 2025. The reality is more complicated. The Fed’s preferred gauge — core PCE — accelerated to 3% in December, well above the 2% target.

With inflation sticky and tariff policy unresolved, the Fed is widely expected to hold rates steady for the foreseeable future. The three quarter-point cuts delivered late last year appear to be the last for some time. For risk assets, including crypto, the higher-rate environment persists.

AI Gets Attention, Crypto Does Not

While crypto went unmentioned, AI earned a dedicated segment. Trump announced a “ratepayer protection pledge” requiring tech companies to build their own power plants for data centers, acknowledging the grid “could never handle” surging demand.

First Lady Melania Trump’s AI legislation work was also highlighted — a sign that AI policy occupies a far more prominent place in the administration’s agenda than digital asset regulation.

The Bottom Line

Trump’s record-length address was a midterm election pitch built on economic optimism. But for crypto participants, the takeaways are clear: no legislative momentum for digital assets despite the president’s family being neck-deep in the industry, unresolved tariff turmoil injecting macro uncertainty, and a Fed locked in place by sticky inflation. The conditions weighing on risk assets aren’t likely to change anytime soon.


BTC close to a bottom in price, but bulls will have to be patient


Bitcoin is exhibiting textbook bottom formation characteristics across multiple indicators, trading at levels that historically precede significant recoveries, according to onchain analyst James Check. Time — not price — is, however, likely to be the bigger test for bitcoin bulls.

“Every mean reversion model, from technical to onchain, is trading within bottom formation levels, typically seen after the price capitulation event (which December 2018 and June 2022 were examples of),” wrote Check on Tuesday morning as bitcoin plunged through $63,000, seemingly on its way to testing the Feb. 5 panic low of $60,000.

“Either Bitcoin is dead, will no longer mean revert, and all your models are broken,” Check continued. “Or you should be ignoring the bears … and quietly [be] dollar cost averaging [and] stacking sats from here on.”

Check — who correctly urged caution in 2025 about investing in any of the BTC treasury companies formed to replicate the success of Michael Saylor’s Strategy — acknowledged today that it is possible or even likely that the price of bitcoin will fall further from here. Time, though, will be the more important factor. He pointed back to the brutal 2022 bear market: most remember the price low of around $15,600 in December of that year, but bitcoin essentially bottomed six months earlier at about $17,600. The rest was just waiting, plus a final liquidity flush surrounding the FTX collapse.
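One simple example of the kind of mean-reversion gauge the analysis refers to is price relative to its 200-day moving average (the "Mayer Multiple"); readings well below 1 have historically coincided with the bottom-formation zones Check describes. The sketch below uses a synthetic price series, not market data:

```python
# Mayer Multiple: latest price divided by the 200-day simple moving average.
def mayer_multiple(prices, window=200):
    if len(prices) < window:
        raise ValueError("need at least `window` prices")
    ma = sum(prices[-window:]) / window
    return prices[-1] / ma

# Synthetic series: a flat $70,000 history followed by a drop to $63,000.
prices = [70_000.0] * 200 + [63_000.0]
print(f"{mayer_multiple(prices):.2f}")   # a reading below 1.0
```

A reading around 0.9 means price is trading about 10 per cent below its long-run trend — the sort of level mean-reversion models flag as a potential accumulation zone.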

“This is literally what a de-risked setup looks like for bitcoin,” concluded Check. “If you’re not actively accumulating bitcoin at this stage, then when?”


Anthropic Accuses Three Firms of Using Sophisticated Distillation Attacks


Artificial intelligence firm Anthropic has accused three AI firms of illicitly using its large language model Claude to improve their own models in a technique known as a “distillation” attack.

In a blog post on Sunday, Anthropic said that it had identified these “attacks” by DeepSeek, Moonshot, and MiniMax, which involve training a less capable model on the outputs of a stronger one.

Anthropic accused the trio of generating a combined “over 16 million exchanges” with the firm’s Claude AI across “approximately 24,000 fraudulent accounts.”

“Distillation is a widely used and legitimate training method. For example, frontier AI labs routinely distill their own models to create smaller, cheaper versions for their customers,” Anthropic wrote, adding: 

“But distillation can also be used for illicit purposes: competitors can use it to acquire powerful capabilities from other labs in a fraction of the time, and at a fraction of the cost, that it would take to develop them independently.”
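The mechanics Anthropic describes can be illustrated with a toy example: a "student" model is fitted to a "teacher" model's outputs rather than to ground-truth labels. Real LLM distillation trains on token probabilities at vastly larger scale; everything below is a simplified sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

def teacher(x):
    # Stand-in for a strong model: here, a simple known function.
    return 3.0 * x + 1.0

# "Scraped" teacher outputs on a batch of inputs.
X = rng.uniform(-1, 1, size=200)
y_teacher = teacher(X)

# Fit a student (w, b) by gradient descent on the teacher's outputs --
# no ground-truth labels are ever used.
w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    err = (w * X + b) - y_teacher
    w -= lr * (err * X).mean()
    b -= lr * err.mean()

print(round(w, 2), round(b, 2))  # student recovers roughly (3.0, 1.0)
```

The student ends up mimicking the teacher's behaviour without ever seeing how the teacher was built — which is why distillation at scale can transfer capabilities "in a fraction of the time, and at a fraction of the cost."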

Anthropic said that the attacks focused on scraping Claude for a wide range of purposes, including agentic reasoning, coding and data analysis, rubric-based grading tasks, and computer vision. 

“Each campaign targeted Claude’s most differentiated capabilities: agentic reasoning, tool use, and coding,” the multi-billion-dollar AI firm said. 

Source: Anthropic

Anthropic says it was able to identify the trio via “IP address correlation, request metadata, infrastructure indicators, and in some cases corroboration from industry partners who observed the same actors and behaviors on their platforms.”
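One of those signals, IP-address correlation, amounts to grouping request metadata and flagging infrastructure shared by many accounts. The sketch below uses fabricated log records on documentation IP ranges, purely to show the shape of such a check:

```python
from collections import defaultdict

# Fabricated example records -- not real data.
logs = [
    {"account": "acct_001", "ip": "203.0.113.7"},
    {"account": "acct_002", "ip": "203.0.113.7"},
    {"account": "acct_003", "ip": "203.0.113.7"},
    {"account": "acct_777", "ip": "198.51.100.9"},
]

# Group distinct accounts by source IP.
accounts_by_ip = defaultdict(set)
for rec in logs:
    accounts_by_ip[rec["ip"]].add(rec["account"])

# Flag IPs serving an unusually high number of distinct accounts.
flagged = {ip for ip, accts in accounts_by_ip.items() if len(accts) >= 3}
print(flagged)
```

Real detection would combine many such signals (request timing, payload patterns, infrastructure fingerprints), but the principle — many "independent" accounts collapsing onto shared infrastructure — is the same.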

DeepSeek, Moonshot, and MiniMax are all AI companies based in China. All three have estimated valuations in the multi-billion-dollar range, with DeepSeek the most internationally recognized of the three.

Beyond the intellectual property implications, Anthropic argued that distillation campaigns from foreign competitors present genuine geopolitical risks. 

“Foreign labs that distill American models can then feed these unprotected capabilities into military, intelligence, and surveillance systems—enabling authoritarian governments to deploy frontier AI for offensive cyber operations, disinformation campaigns, and mass surveillance,” the firm said.