Crypto World

Revolutionising Advanced Problem-Solving with AI


by Gonzalo Wangüemert Villalba

4 September 2025

Introduction

The open-source AI ecosystem reached a turning point in August 2025 when Elon Musk’s company xAI released Grok 2.5 and, almost simultaneously, OpenAI launched two new models under the names GPT-OSS-20B and GPT-OSS-120B. While both announcements signalled a commitment to transparency and broader accessibility, the details of these releases highlight strikingly different approaches to what open AI should mean. This article explores the architecture, accessibility, performance benchmarks, regulatory compliance and wider industry impact of these three models. The aim is to clarify whether xAI’s Grok or OpenAI’s GPT-OSS family currently offers more value for developers, businesses and regulators in Europe and beyond.

What Was Released

Grok 2.5, described by xAI as a 270 billion parameter model, was made available through the release of its weights and tokenizer. These files amount to roughly half a terabyte and were published on Hugging Face. Yet the release lacks critical elements such as training code, detailed architectural notes or dataset documentation. Most importantly, Grok 2.5 comes with a bespoke licence drafted by xAI that has not yet been fully scrutinised by legal or open-source communities. Analysts have noted that its terms could be revocable or carry restrictions that prevent the model from being considered genuinely open source. Elon Musk promised on social media that Grok 3 would be published in the same manner within six months, suggesting this is just the beginning of a broader strategy by xAI to join the open-source race.

By contrast, OpenAI unveiled GPT-OSS-20B and GPT-OSS-120B on 5 August 2025 with a far more comprehensive package. The models were released under the widely recognised Apache 2.0 licence, which is permissive, business-friendly and in line with the requirements of the European Union’s AI Act.
OpenAI shared not only the weights but also architectural details, training methodology, evaluation benchmarks, code samples and usage guidelines. This represents one of the most transparent releases the company has ever made, after years of criticism for keeping its frontier models proprietary.

Architectural Approach

The architectural differences between these models reveal much about their intended use. Grok 2.5 is a dense transformer with all 270 billion parameters engaged in computation. Without detailed documentation, it is unclear how efficiently it handles scaling or what kinds of attention mechanisms are employed. GPT-OSS-20B and GPT-OSS-120B, meanwhile, use a Mixture-of-Experts design. In practice this means that although the models contain 21 and 117 billion parameters respectively, only a small subset of those parameters is activated for each token: GPT-OSS-20B activates 3.6 billion and GPT-OSS-120B just over 5 billion. This architecture yields far greater efficiency, allowing the smaller of the two to run comfortably on devices with only 16 gigabytes of memory, including Snapdragon laptops and consumer-grade graphics cards. The larger model requires 80 gigabytes of GPU memory, placing it in the range of high-end professional hardware, yet it is still far more efficient than a dense model of similar size. This is a deliberate choice by OpenAI to ensure that open-weight models are not only theoretically available but practically usable.

Documentation and Transparency

The difference in documentation further separates the two releases. OpenAI’s GPT-OSS models include explanations of their sparse attention layers, grouped multi-query attention, and support for extended context lengths of up to 128,000 tokens. These details allow independent researchers to understand, test and even modify the architecture. By contrast, Grok 2.5 offers little more than its weight files and tokenizer, making it effectively a black box.
From a developer’s perspective this is crucial: having access to weights without knowing how the system was trained or structured limits reproducibility and hinders adaptation. Transparency also affects regulatory compliance and community trust, making OpenAI’s approach significantly more robust.

Performance and Benchmarks

Benchmark performance is another area where the GPT-OSS models shine. According to OpenAI’s technical documentation and independent testing, GPT-OSS-120B rivals or exceeds the reasoning ability of the company’s o4-mini model, while GPT-OSS-20B achieves parity with o3-mini. On benchmarks such as MMLU, Codeforces, HealthBench and the AIME mathematics tests from 2024 and 2025, the models perform strongly, especially considering their efficient architecture. GPT-OSS-20B in particular impressed researchers by outperforming much larger competitors such as Qwen3-32B on certain coding and reasoning tasks, despite using less energy and memory. Academic studies published on arXiv in August 2025 highlighted that the model achieved nearly 32 per cent higher throughput and more than 25 per cent lower energy consumption per 1,000 tokens than rival models. Interestingly, one paper noted that GPT-OSS-20B outperformed its larger sibling GPT-OSS-120B on some human evaluation benchmarks, suggesting that sparse scaling does not always correlate linearly with capability.

In terms of safety and robustness, the GPT-OSS models again appear carefully designed. They perform comparably to o4-mini on jailbreak resistance and bias testing, though they display higher hallucination rates in simple factual question-answering tasks. This transparency allows researchers to target weaknesses directly, which is part of the value of an open-weight release. Grok 2.5, however, lacks publicly available benchmarks altogether. Without independent testing, its actual capabilities remain uncertain, leaving the community with only Musk’s promotional statements to go by.
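The Mixture-of-Experts routing discussed in the architecture section can be illustrated with a toy sketch. Everything below is invented for demonstration (the expert count, hidden size and router are tiny placeholders, not OpenAI's actual configuration); it only shows the core idea that each token exercises a fraction of the stored parameters.

```python
import numpy as np

# Toy sketch of Mixture-of-Experts (MoE) routing. All sizes are
# illustrative; the real GPT-OSS models are far larger. The key idea:
# each token is routed to only k of E expert networks, so most
# parameters stay inactive on any given forward pass.

rng = np.random.default_rng(0)
E, k, d = 8, 2, 16                      # experts, experts per token, hidden size
router = rng.normal(size=(d, E))        # router maps a token to expert scores
experts = [rng.normal(size=(d, d)) for _ in range(E)]

def moe_forward(x):
    """Route token vector x through its top-k experts only."""
    scores = x @ router
    top = np.argsort(scores)[-k:]       # indices of the k highest-scoring experts
    w = np.exp(scores[top])
    w /= w.sum()                        # softmax over the selected experts only
    return sum(wi * (x @ experts[i]) for wi, i in zip(w, top))

token = rng.normal(size=d)
out = moe_forward(token)                # only 2 of the 8 expert matrices ran
active = k * d * d                      # expert parameters used this pass
total = E * d * d                       # expert parameters stored
print(f"active fraction: {active / total:.2f}")  # 0.25
```

This is the mechanism that lets GPT-OSS-20B store 21 billion parameters yet activate only about 3.6 billion per token, keeping memory and energy costs closer to those of a much smaller dense model.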
Regulatory Compliance

Regulatory compliance is a particularly important issue for organisations in Europe under the EU AI Act. The legislation requires general-purpose AI models to be released under genuinely open licences, accompanied by detailed technical documentation, information on training and testing datasets, and usage reporting. For models that exceed systemic risk thresholds, such as those trained with more than 10²⁵ floating point operations, further obligations apply, including risk assessment and registration.

Grok 2.5, by virtue of its vague licence and lack of documentation, appears non-compliant on several counts. Unless xAI publishes more details or adapts its licensing, European businesses may find it difficult or legally risky to adopt Grok in their workflows. GPT-OSS-20B and 120B, by contrast, seem carefully aligned with the requirements of the AI Act: their Apache 2.0 licence is recognised under the Act, their documentation meets transparency demands, and OpenAI has signalled a commitment to provide usage reporting. From a regulatory standpoint, OpenAI’s releases are the safer bet for integration within the UK and EU.

Community Reception

The reception from the AI community reflects these differences. Developers welcomed OpenAI’s move as a long-awaited recognition of the open-source movement, especially after years of criticism that the company had become overly protective of its models. Some users, however, expressed frustration with the Mixture-of-Experts design, reporting that it can lead to repetitive tool-calling behaviours and less engaging conversational output. Yet most acknowledged that for tasks requiring structured reasoning, coding or mathematical precision, the GPT-OSS family performs exceptionally well. Grok 2.5’s release was greeted with more scepticism.
While some praised Musk for at least releasing weights, others argued that without a proper licence or documentation it was little more than a symbolic gesture designed to signal openness while avoiding true transparency.

Strategic Implications

The strategic motivations behind these releases are also worth considering. For xAI, releasing Grok 2.5 may be less about immediate usability and more about positioning in the competitive AI landscape, particularly against Chinese developers and American rivals. For OpenAI, the move appears to be a balancing act: maintaining leadership in proprietary frontier models like GPT-5 while offering credible open-weight alternatives that address regulatory scrutiny and community pressure. This dual strategy could prove effective, enabling the company to dominate both commercial and open-source markets.

Conclusion

Ultimately, the comparison between Grok 2.5 and GPT-OSS-20B and 120B is not merely technical but philosophical. xAI’s release demonstrates a willingness to participate in the open-source movement but stops short of true openness. OpenAI, on the other hand, has set a new standard for what open-weight releases should look like in 2025: efficient architectures, extensive documentation, clear licensing, strong benchmark performance and regulatory compliance. For European businesses and policymakers evaluating open-source AI options, GPT-OSS currently represents the more practical, compliant and capable choice. While both xAI and OpenAI contributed to the momentum of open-source AI in August 2025, the details reveal that not all openness is created equal: Grok 2.5 stands as an important symbolic release, but OpenAI’s GPT-OSS family sets the benchmark for practical usability, compliance with the EU AI Act, and genuine transparency.




BTC slides to $65,000, Solana, XRP, dogecoin down 6%


Bitcoin’s attempt to reclaim $70,000 earlier in the week lasted about 48 hours.

The largest cryptocurrency slid to $65,735 in early Asian hours on Saturday, down 3% over the past day and 2.8% on the week. Wednesday’s rally, which came within touching distance of $70,000, has now given back more than half its gains as broader risk sentiment deteriorated through Thursday and Friday’s U.S. sessions.

Altcoins took a harder hit. Solana dropped 6.7%, ether fell 6.2%, dogecoin shed 5.1%, and XRP lost 4%. The losses pushed most major tokens into the red on a weekly basis, erasing the altcoin outperformance that had been the week’s most encouraging signal. BNB held up better than most, down just 2.5%.

The trigger was familiar. Friday’s U.S. session saw the S&P 500 close down 0.4%, the Nasdaq 100 drop 0.3%, and the Dow fall 1.1%. Nvidia, still digesting its post-earnings reaction, shed another 4.2%.


A hotter-than-expected 0.5% jump in producer prices added fuel, signaling inflationary pressure that may keep the Fed from cutting rates anytime soon. Block Inc.’s massive layoffs fanned broader anxiety that AI is starting to displace jobs across the economy rather than just creating them.

Crypto followed equities lower, but as usual, with amplified magnitude. A 0.4% drop in the S&P became a 3% drop in bitcoin and a more than 6% drop in altcoins. The leverage that re-entered the system during Wednesday’s rally got flushed on the way back down.

The irony is that the institutional flow data this week was actually strong.

U.S. spot bitcoin ETFs added $1.1 billion in three days, putting them on pace for their best week in months. But ETF inflows haven’t been enough to overcome the broader macro headwinds.


“Over-analysis of short-term price movements is misguided,” Dom Harz, co-founder of bitcoin finance firm BOB, said in an email. “Bitcoin’s volatility is no surprise, particularly for early investors who have experienced previous cycles. What’s different this time is the type of capital behind the emerging asset class.”

Meanwhile, CryptoQuant data shows USDT stablecoin reserves on exchanges have fallen from $60 billion to $51.1 billion over the past two months, a decline the firm warned could trigger a “massive sell-off” if reserves drop below $50 billion.

Elsewhere, Strategy shares topped the list of large U.S. companies by short interest volume as markets increasingly question the sustainability of the firm’s debt-funded bitcoin buying program.

And on the Ethereum side, large holders have started selling at a loss, with DAT company ETHZilla officially abandoning its ETH accumulation strategy and rebranding to focus on tokenized real-world assets instead.


Bitcoin is now back in the middle of the $60,000-$70,000 range it has been stuck in since the Feb. 5 crash. Wednesday proved the top of that range is resistance. The question heading into March is whether the bottom still holds.



MetaMask debit card goes live across the U.S.


MetaMask and Mastercard have officially launched the MetaMask Card across the United States, marking a significant step in bringing cryptocurrency spending into everyday commerce.

Summary

  • MetaMask and Mastercard begin offering the self-custodial MetaMask Card in 49 states, including New York.
  • Users spend directly from their wallets, with up to 1% back in mUSD for standard users and up to 3% for premium members.
  • The card works at over 150 million Mastercard merchants and supports Apple Pay and Google Pay.

New MetaMask and Mastercard card lets users spend crypto

The announcement follows successful pilot programs in Europe and the UK, and now brings the self-custodial crypto payment card to 49 U.S. states, including New York for the first time.

The MetaMask Card connects users’ self-custodied digital assets to traditional payment infrastructure, allowing holders to spend crypto directly from their wallets anywhere Mastercard is accepted, online or in physical stores, without needing to pre-load balances onto custodial accounts.


Users retain full control of their funds until the point of sale, where conversion and payment happen seamlessly.

“We designed MetaMask Card to make crypto disappear. Not go away, but become so seamlessly woven into daily life that the line between onchain and offchain fades away entirely,” said Gal Eldar, Product Lead at MetaMask.

Issued by FDIC-insured Cross River Bank and powered by Mastercard’s global network with technology from Monavate (formerly Baanx), the card works with Apple Pay and Google Pay, making it compatible with contactless digital wallets. The rollout follows a year-long U.S. trial that began in late 2024, with broader access now available nationwide.


A key feature of the program is on-chain rewards: standard MetaMask Card holders earn up to 1% back in MetaMask’s stablecoin mUSD on purchases, while premium MetaMask Metal subscribers, available for a $199 annual fee, can earn up to 3% back on the first $10,000 spent each year alongside additional travel and spending benefits.
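As a rough illustration of the reward tiers described above, the small function below estimates annual mUSD earnings. It is hypothetical: anything beyond the article's description, such as the assumption that premium spending past the $10,000 cap earns nothing, is a simplification for the sake of the example.

```python
def card_rewards_musd(annual_spend_usd: float, premium: bool = False) -> float:
    """Estimate annual mUSD rewards under the tiers described above.

    Standard: up to 1% back on purchases.
    Premium (MetaMask Metal, $199/yr): up to 3% back, but only on the
    first $10,000 spent per year (assumed: nothing past the cap).
    """
    if premium:
        return 0.03 * min(annual_spend_usd, 10_000)
    return 0.01 * annual_spend_usd

# A premium member spending $12,000 a year earns at most 3% of $10,000,
# i.e. about $300 in mUSD, versus roughly $120 (1% of $12,000) as a
# standard holder, before accounting for the $199 annual fee.
```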

The launch represents a strategic effort to integrate decentralized finance into traditional payment rails, making crypto use more intuitive for everyday purchases while preserving self-custody principles at the heart of Web3.

It also positions MetaMask alongside other crypto-native payment cards, expanding crypto’s real-world utility.




Bitcoin ETFs Log $1B Inflows During 50% Drawdown


Spot Bitcoin exchange-traded funds pulled in more than $1 billion of net inflows over three trading sessions this week, a reversal that came even as Bitcoin remained well below its peak.

The US-listed spot Bitcoin (BTC) ETFs logged a combined $1.02 billion in inflows from Tuesday to Thursday, according to data from SoSoValue. The funds pulled in $506.51 million on Wednesday, the largest single-day total during the three days.

On Friday, ETF analyst Nate Geraci said in a post on X that investors appeared to be “buying the dip” amid the recent downturn.

He said spot Bitcoin ETFs have seen about $6.5 billion in outflows since Bitcoin’s record high in early October, a figure he described as modest relative to the $55 billion the category has absorbed since January 2024.


Related: Bitcoin’s 100 BTC club edges toward 20K wallets in a ‘bullish sign’

“50% drawdowns are walk in the park for long-time BTC investors,” Geraci wrote. “But appears newer ETF investors aren’t worried either.”

Spot Bitcoin ETF performance year-to-date. Source: SoSoValue

Flows reverse multi-week outflow streak

This week’s inflows follow five consecutive weeks of net withdrawals, with the last two weeks of January recording a combined $2.82 billion in outflows.

The rebound was led by BlackRock’s iShares Bitcoin Trust (IBIT), which logged $275.82 million in net inflows on Thursday alone. Fidelity’s FBTC and Ark 21Shares’ ARKB posted outflows, but were outweighed by gains in other funds including Bitwise’s BITB and Grayscale’s BTC.

Altcoin ETFs have also turned positive in recent trading sessions. Spot Ether (ETH) ETFs added about $173 million over the same three-day period, while Solana funds logged roughly $35 million in inflows. Meanwhile, XRP (XRP) ETFs logged a modest $7 million in inflows. 


Related: Bitcoin bear market not over as BTC fails to reclaim $68K trend line

Analysts flag ETF flows as sentiment gauge

The inflows come as market participants discuss whether the recent selling pressure is easing. On Friday, several analysts said Bitcoin’s roughly 50% drawdown may be approaching exhaustion.

CoinEx chief analyst Jeff Ko previously told Cointelegraph that improvements in spot ETF inflows suggest aggressive selling pressure may be fading. However, he said a sudden V-shaped recovery is unlikely after a steep decline. 

Bitrue research lead Andri Fauzan Adziima similarly pointed to oversold technical indicators and said sustained ETF inflows could serve as a catalyst for stabilization. 
