
Crypto World

Grok 2.5 vs GPT-OSS: Not All Openness Is Created Equal


by Gonzalo Wangüemert Villalba

4 September 2025

Introduction

The open-source AI ecosystem reached a turning point in August 2025 when Elon Musk’s company xAI released Grok 2.5 and, almost simultaneously, OpenAI launched two new models under the names GPT-OSS-20B and GPT-OSS-120B. While both announcements signalled a commitment to transparency and broader accessibility, the details of these releases highlight strikingly different approaches to what open AI should mean. This article explores the architecture, accessibility, performance benchmarks, regulatory compliance and wider industry impact of these three models. The aim is to clarify whether xAI’s Grok or OpenAI’s GPT-OSS family currently offers more value for developers, businesses and regulators in Europe and beyond.

What Was Released

Grok 2.5, described by xAI as a 270-billion-parameter model, was made available through the release of its weights and tokenizer. These files amount to roughly half a terabyte and were published on Hugging Face. Yet the release lacks critical elements such as training code, detailed architectural notes and dataset documentation. Most importantly, Grok 2.5 comes with a bespoke licence drafted by xAI that has not yet been fully scrutinised by legal or open-source communities. Analysts have noted that its terms could be revocable or carry restrictions that prevent the model from being considered genuinely open source. Elon Musk promised on social media that Grok 3 would be published in the same manner within six months, suggesting this is just the beginning of a broader strategy by xAI to join the open-source race.

By contrast, OpenAI unveiled GPT-OSS-20B and GPT-OSS-120B on 5 August 2025 with a far more comprehensive package. The models were released under the widely recognised Apache 2.0 licence, which is permissive, business-friendly and in line with the requirements of the European Union’s AI Act.
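The “half a terabyte” figure for Grok 2.5’s weights is consistent with simple arithmetic: a dense checkpoint occupies roughly parameters × bytes per parameter. The helper below is illustrative back-of-the-envelope arithmetic, not part of any release, and assumes 2-byte (bf16/fp16) storage:

```python
def checkpoint_size_gb(n_params: float, bytes_per_param: int = 2) -> float:
    """Approximate on-disk size of a dense checkpoint in gigabytes,
    assuming bf16/fp16 storage (2 bytes per parameter)."""
    return n_params * bytes_per_param / 1e9

# 270 billion parameters at 2 bytes each: about 540 GB,
# matching the roughly half-terabyte download reported for Grok 2.5.
grok_gb = checkpoint_size_gb(270e9)
```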
OpenAI shared not only the weights but also architectural details, training methodology, evaluation benchmarks, code samples and usage guidelines. This represents one of the most transparent releases the company has ever made, after years of criticism for keeping its frontier models proprietary.

Architectural Approach

The architectural differences between these models reveal much about their intended use. Grok 2.5 is a dense transformer with all 270 billion parameters engaged in computation. Without detailed documentation, it is unclear how efficiently it handles scaling or what kinds of attention mechanisms are employed. Meanwhile, GPT-OSS-20B and GPT-OSS-120B use a Mixture-of-Experts design. In practice this means that although the models contain 21 and 117 billion parameters respectively, only a small subset of those parameters is activated for each token: GPT-OSS-20B activates 3.6 billion and GPT-OSS-120B just over 5 billion. This architecture is far more efficient, allowing the smaller of the two to run comfortably on devices with only 16 gigabytes of memory, including Snapdragon laptops and consumer-grade graphics cards. The larger model requires 80 gigabytes of GPU memory, placing it in the range of high-end professional hardware, yet it is still far more efficient than a dense model of similar size. This is a deliberate choice by OpenAI to ensure that open-weight models are not only theoretically available but practically usable.

Documentation and Transparency

The difference in documentation further separates the two releases. OpenAI’s GPT-OSS models include explanations of their sparse attention layers, grouped multi-query attention, and support for extended context lengths of up to 128,000 tokens. These details allow independent researchers to understand, test and even modify the architecture. By contrast, Grok 2.5 offers little more than its weight files and tokenizer, making it effectively a black box.
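The Mixture-of-Experts idea behind those low active-parameter counts can be sketched in a few lines: a small gating network scores the experts and each token is routed through only its top-k of them, leaving the rest idle. This is a minimal illustrative sketch, not GPT-OSS’s actual implementation:

```python
import numpy as np

def moe_layer(x, experts, gate_w, k=2):
    """Route each token through only its top-k experts; the remaining
    expert parameters stay idle for that token, which is why a model's
    'active' parameter count can be a small fraction of its total."""
    scores = x @ gate_w                                # (tokens, n_experts)
    top = np.argsort(scores, axis=-1)[:, -k:]          # k best experts per token
    sel = np.take_along_axis(scores, top, axis=-1)
    w = np.exp(sel - sel.max(axis=-1, keepdims=True))  # softmax over selected
    w /= w.sum(axis=-1, keepdims=True)
    out = np.zeros_like(x)
    for t in range(x.shape[0]):                        # combine expert outputs
        for j, e in enumerate(top[t]):
            out[t] += w[t, j] * (x[t] @ experts[e])
    return out

rng = np.random.default_rng(0)
d, n_experts = 8, 16
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]
gate_w = rng.normal(size=(d, n_experts))
y = moe_layer(rng.normal(size=(4, d)), experts, gate_w, k=2)
# With k=2 of 16 experts, only 1/8 of the expert parameters touch each token.
```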
From a developer’s perspective this is crucial: access to weights without knowledge of how the system was trained or structured limits reproducibility and hinders adaptation. Transparency also affects regulatory compliance and community trust, making OpenAI’s approach significantly more robust.

Performance and Benchmarks

Benchmark performance is another area where the GPT-OSS models shine. According to OpenAI’s technical documentation and independent testing, GPT-OSS-120B rivals or exceeds the reasoning ability of the company’s o4-mini model, while GPT-OSS-20B achieves parity with o3-mini. On benchmarks such as MMLU, Codeforces, HealthBench and the AIME mathematics tests from 2024 and 2025, the models perform strongly, especially considering their efficient architecture. GPT-OSS-20B in particular impressed researchers by outperforming much larger competitors such as Qwen3-32B on certain coding and reasoning tasks, despite using less energy and memory. Academic studies published on arXiv in August 2025 reported that the model achieved nearly 32 per cent higher throughput and more than 25 per cent lower energy consumption per 1,000 tokens than rival models. Interestingly, one paper noted that GPT-OSS-20B outperformed its larger sibling GPT-OSS-120B on some human evaluation benchmarks, suggesting that sparse scaling does not always correlate linearly with capability.

In terms of safety and robustness, the GPT-OSS models again appear carefully designed. They perform comparably to o4-mini on jailbreak resistance and bias testing, though they display higher hallucination rates in simple factual question-answering tasks. This transparency allows researchers to target weaknesses directly, which is part of the value of an open-weight release. Grok 2.5, however, lacks publicly available benchmarks altogether. Without independent testing, its actual capabilities remain uncertain, leaving the community with only Musk’s promotional statements to go by.
Regulatory Compliance

Regulatory compliance is a particularly important issue for organisations in Europe under the EU AI Act. The legislation requires general-purpose AI models to be released under genuinely open licences, accompanied by detailed technical documentation, information on training and testing datasets, and usage reporting. For models that exceed systemic-risk thresholds, such as those trained with more than 10²⁵ floating-point operations, further obligations apply, including risk assessment and registration.

Grok 2.5, by virtue of its vague licence and lack of documentation, appears non-compliant on several counts. Unless xAI publishes more details or adapts its licensing, European businesses may find it difficult or legally risky to adopt Grok in their workflows. GPT-OSS-20B and 120B, by contrast, appear carefully aligned with the requirements of the AI Act. Their Apache 2.0 licence is recognised under the Act, their documentation meets transparency demands, and OpenAI has signalled a commitment to provide usage reporting. From a regulatory standpoint, OpenAI’s releases are the safer bet for integration within the UK and EU.

Community Reception

The reception from the AI community reflects these differences. Developers welcomed OpenAI’s move as a long-awaited recognition of the open-source movement, especially after years of criticism that the company had become overly protective of its models. Some users, however, expressed frustration with the mixture-of-experts design, reporting that it can lead to repetitive tool-calling behaviours and less engaging conversational output. Yet most acknowledged that for tasks requiring structured reasoning, coding or mathematical precision, the GPT-OSS family performs exceptionally well. Grok 2.5’s release was greeted with more scepticism.
While some praised Musk for at least releasing weights, others argued that without a proper licence or documentation it was little more than a symbolic gesture designed to signal openness while avoiding true transparency.

Strategic Implications

The strategic motivations behind these releases are also worth considering. For xAI, releasing Grok 2.5 may be less about immediate usability and more about positioning in the competitive AI landscape, particularly against Chinese developers and American rivals. For OpenAI, the move appears to be a balancing act: maintaining leadership in proprietary frontier models like GPT-5 while offering credible open-weight alternatives that address regulatory scrutiny and community pressure. This dual strategy could prove effective, enabling the company to dominate both commercial and open-source markets.

Conclusion

Ultimately, the comparison between Grok 2.5 and GPT-OSS-20B and 120B is not merely technical but philosophical. xAI’s release demonstrates a willingness to participate in the open-source movement but stops short of true openness. OpenAI, on the other hand, has set a new standard for what open-weight releases should look like in 2025: efficient architectures, extensive documentation, clear licensing, strong benchmark performance and regulatory compliance. For European businesses and policymakers evaluating open-source AI options, GPT-OSS currently represents the more practical, compliant and capable choice.

Overall, while both xAI and OpenAI contributed to the momentum of open-source AI in August 2025, the details reveal that not all openness is created equal. Grok 2.5 stands as an important symbolic release, but OpenAI’s GPT-OSS family sets the benchmark for practical usability, compliance with the EU AI Act, and genuine transparency.
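The AI Act criteria discussed in the Regulatory Compliance section can be expressed as a simple gap check. The artefact names below are illustrative labels of my own, not the Act’s legal wording:

```python
# Hypothetical artefact labels summarising the transparency points discussed above.
REQUIRED_ARTEFACTS = {
    "recognised_open_licence",   # e.g. Apache 2.0
    "technical_documentation",   # architecture and training details
    "dataset_information",       # training/testing data summaries
    "usage_reporting",
}

def compliance_gaps(release: dict) -> set:
    """Return the transparency artefacts a model release is missing."""
    return {a for a in REQUIRED_ARTEFACTS if not release.get(a, False)}

# As described in the article: GPT-OSS ships all four; Grok 2.5 ships none.
gpt_oss = {a: True for a in REQUIRED_ARTEFACTS}
grok_25 = {"recognised_open_licence": False}   # bespoke licence, no documentation
```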





Crypto World

Poloniex and the $1.3B bitcoin question


Justin Sun-owned Poloniex has announced fee-free trading for any user who enrols in its “Poloniex Super” membership, which currently offers 30 days’ worth of fee-free “spot, margin, and futures trading.”

Poloniex has yet to announce what this membership will cost once the 30-day period has elapsed, though it does mention that “[a]fter the trial period ends, you will be automatically enrolled in the basic Super plan by default.”

This product announcement has led users to ask how Poloniex will make money without fees. Sun quickly explained that Poloniex has no need to make more money because “we already made enough from the bitcoin (BTC) we bought in 2012.”

Poloniex was founded in 2014 and therefore couldn’t possibly purchase any BTC in 2012, so presumably Sun is referring to BTC he purchased.

Sun’s claim that Poloniex can continue to operate on these profits alone brings to the forefront concerns about how Poloniex has managed the BTC in its reserves.

In 2020 Poloniex offered a new product, which it described at different times as “BTC on TRON” and “BTCTRON.”

This initial announcement described BTCTRON as “a type of wrapped BTC token that exists on the TRON blockchain.”


Poloniex’s Help Center provides the contract address for this token, TN3W4H6rK2ce4vX9YnFQHwKENnHjoxb3m9.

Reviewing this contract address reveals that this token currently has a circulating supply of 17,545 BTC, worth approximately $1.3 billion.

Disturbingly, Poloniex’s so-called “proof of reserves” claims that Poloniex has a balance of only 11,090 BTC in its entire reserves and 11,082 of those are “User Balance.”
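Taken together, the two figures above imply a sizeable gap. A quick back-of-the-envelope check (the per-BTC price here is implied by the article’s $1.3 billion figure, not quoted directly):

```python
btctron_supply = 17_545        # circulating BTCTRON, denominated in BTC
poloniex_reserves = 11_090     # total BTC shown in Poloniex's proof of reserves

# Even if every reserve coin backed BTCTRON, thousands of BTC are unaccounted for.
shortfall = btctron_supply - poloniex_reserves     # 6,455 BTC

# The BTC price implied by valuing 17,545 BTC at ~$1.3 billion.
implied_btc_price = 1.3e9 / btctron_supply         # roughly $74,000 per coin
```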


This is insufficient to fully back this tokenized BTC product.

Protos has repeatedly reached out to Poloniex during past reporting on this product, and it has never been willing to provide the addresses that hold the BTC backing it.

We attempted to reach out to Poloniex again; however, it didn’t provide these addresses before publication.

Read more: FTX estate says Justin Sun still owes it millions


Increasing the concern about this product is how deeply it has been integrated into another Sun-owned exchange, HTX.

HTX typically holds more of this opaque BTCTRON product, which offers no transparency, than real BTC.

As of the most recent HTX snapshot, dated March 1, there were a total of 21,362 BTC on HTX. BTCTRON accounted for 10,291 of those.

There are also an additional 1,212 BTC that are in the form of Sun-advised Wrapped Bitcoin.
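Tallying the snapshot figures quoted above shows how little of HTX’s “BTC” is actually native bitcoin:

```python
total_btc_on_htx = 21_362   # all BTC-denominated assets in the March 1 snapshot
btctron = 10_291            # opaque Poloniex-issued BTCTRON
wrapped_btc = 1_212         # Sun-advised Wrapped Bitcoin

# What remains after subtracting both tokenized forms is at most native BTC.
native_btc = total_btc_on_htx - btctron - wrapped_btc   # 9,859 BTC at most
btctron_share = btctron / total_btc_on_htx              # roughly 48%
```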



What this means, taken as a whole, is that Poloniex will not disclose where the $1.3 billion in BTC that is supposed to collateralize this product is located.

Yet despite that fact, HTX is willing to make it a massive portion of its reserves, all while Sun claims that Poloniex can afford to offer “fee-free” trading because of the appreciation in the price of bitcoin.

Perhaps instead of making grandiose claims about the value of his BTC, Sun should instead work on solving the apparent BTC shortfall at the exchanges he owns.


Got a tip? Send us an email securely via Protos Leaks. For more informed news, follow us on X, Bluesky, and Google News, or subscribe to our YouTube channel.



Crypto World

SEC will Consider most Crypto Assets not Securities under Federal Law


In one of its first actions since signing a memorandum of understanding with the Commodity Futures Trading Commission (CFTC), the US Securities and Exchange Commission (SEC) said it would issue an interpretation of how “non-security crypto assets” fall under federal securities laws.

In a Tuesday notice, the SEC said its interpretation of how to address crypto assets would serve as an “important bridge” as lawmakers in the US Congress consider market structure legislation which will codify how financial regulators oversee digital assets. 

The commission said the interpretation would provide a “coherent token taxonomy for digital commodities, digital collectibles, digital tools, stablecoins, and digital securities,” address how a “non-security crypto asset” may or may not be considered an investment contract under the SEC’s purview, and clarify federal securities laws on “airdrops, protocol mining, protocol staking, and the wrapping of a non-security crypto asset.”

“This is what regulatory agencies are supposed to do: draw clear lines in clear terms,” said SEC Chair Paul Atkins. “It also acknowledges what the former administration refused to recognize: that most crypto assets are not themselves securities. And it reflects the reality that investment contracts can come to an end.”

Source: SEC on X

According to Atkins’ prepared remarks for the DC Blockchain Summit on Tuesday, “only one crypto asset class remains subject to the securities laws” under the interpretation, and those were “traditional securities that are tokenized.” The commission called on market participants to review the interpretation to “better understand the regulatory jurisdiction between the SEC and CFTC” on cryptocurrencies. 

Related: SEC, CFTC sign memo to regulate crypto, other markets in harmony

The SEC notice came as lawmakers in the US Senate continue to negotiate terms under which they may reach an agreement on a digital asset market structure bill. The legislation is expected to give the CFTC more authority in overseeing cryptocurrencies.

Shakeup in SEC enforcement leadership draws criticism

On Monday, the SEC announced that its enforcement division director, Margaret Ryan, resigned from the agency. Its principal deputy director, Sam Waldon, was named as acting enforcement director.

In response to Ryan’s departure, former SEC official John Reed Stark said “not a single person on this planet” believed the commission’s claims that the enforcement director prioritized investor protection and “renewed focus on holding individual wrongdoers accountable” at the agency.


“The SEC has abandoned its identity,” said Stark on Monday. “It has transformed from the cop on Wall Street’s beat into something far more troubling, a regulatory body that functions less like a law enforcement agency and more like a concierge service for the largest financial players in the country.”

A 19-year veteran of the regulator, Stark was founder and chief of the SEC’s Office of Internet Enforcement, according to his LinkedIn profile.

Atkins, along with SEC Commissioners Mark Uyeda and Hester Peirce — all Republicans — remain the only three leaders at the agency on a panel intended to consist of a bipartisan group of five members. As of Tuesday, US President Donald Trump had not announced any plans to nominate other commissioners to the SEC or CFTC, which had only one Senate-confirmed member.


Magazine: Clarity Act risks repeat of Europe’s mistakes, crypto lawyer warns