Grok 2.5 vs GPT-OSS: Two Approaches to Open-Source AI

by Gonzalo Wangüemert Villalba

4 September 2025

Introduction

The open-source AI ecosystem reached a turning point in August 2025 when Elon Musk’s company xAI released Grok 2.5 and, almost simultaneously, OpenAI launched two new models under the names GPT-OSS-20B and GPT-OSS-120B. While both announcements signalled a commitment to transparency and broader accessibility, the details of these releases highlight strikingly different approaches to what open AI should mean. This article explores the architecture, accessibility, performance benchmarks, regulatory compliance and wider industry impact of these three models. The aim is to clarify whether xAI’s Grok or OpenAI’s GPT-OSS family currently offers more value for developers, businesses and regulators in Europe and beyond.

What Was Released

Grok 2.5, described by xAI as a 270 billion parameter model, was made available through the release of its weights and tokenizer. These files amount to roughly half a terabyte and were published on Hugging Face. Yet the release lacks critical elements such as training code, detailed architectural notes or dataset documentation. Most importantly, Grok 2.5 comes with a bespoke licence drafted by xAI that has not yet been properly scrutinised by legal or open-source communities. Analysts have noted that its terms could be revocable or carry restrictions that prevent the model from being considered genuinely open source. Elon Musk promised on social media that Grok 3 would be published in the same manner within six months, suggesting this is just the beginning of a broader strategy by xAI to join the open-source race.

By contrast, OpenAI unveiled GPT-OSS-20B and GPT-OSS-120B on 5 August 2025 with a far more comprehensive package. The models were released under the widely recognised Apache 2.0 licence, which is permissive, business-friendly and in line with the requirements of the European Union’s AI Act.
OpenAI shared not only the weights but also architectural details, training methodology, evaluation benchmarks, code samples and usage guidelines. This represents one of the most transparent releases ever made by the company, which has historically faced criticism for keeping its frontier models proprietary.

Architectural Approach

The architectural differences between these models reveal much about their intended use. Grok 2.5 is a dense transformer with all 270 billion parameters engaged in computation. Without detailed documentation, it is unclear how efficiently it handles scaling or what kinds of attention mechanisms are employed. Meanwhile, GPT-OSS-20B and GPT-OSS-120B use a Mixture-of-Experts design. In practice this means that although the models contain 21 and 117 billion parameters respectively, only a small subset of those parameters is activated for each token: GPT-OSS-20B activates 3.6 billion and GPT-OSS-120B activates just over 5 billion. This architecture leads to far greater efficiency, allowing the smaller of the two to run comfortably on devices with only 16 gigabytes of memory, including Snapdragon laptops and consumer-grade graphics cards. The larger model requires 80 gigabytes of GPU memory, placing it in the range of high-end professional hardware, yet it is still far more efficient than a dense model of similar size. This is a deliberate choice by OpenAI to ensure that open-weight models are not only theoretically available but practically usable.

Documentation and Transparency

The difference in documentation further separates the two releases. OpenAI’s GPT-OSS models include explanations of their sparse attention layers, grouped multi-query attention, and support for extended context lengths up to 128,000 tokens. These details allow independent researchers to understand, test and even modify the architecture. By contrast, Grok 2.5 offers little more than its weight files and tokenizer, making it effectively a black box.
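The Mixture-of-Experts routing described in the architecture section can be sketched in a few lines: a router scores the experts for each token and only the top-k of them run, so the active parameter count is a small fraction of the total. This is an illustrative toy, not OpenAI’s implementation; the expert count, per-expert size, and k are invented for the example.

```python
import random

# Toy Mixture-of-Experts router: for each token, only the k
# highest-scoring experts run, so active params << total params.
NUM_EXPERTS = 32          # hypothetical number of experts
PARAMS_PER_EXPERT = 3e9   # hypothetical parameters per expert
TOP_K = 4                 # experts activated per token

def route(token_scores, k=TOP_K):
    """Return the indices of the k highest-scoring experts."""
    ranked = sorted(range(len(token_scores)),
                    key=token_scores.__getitem__, reverse=True)
    return ranked[:k]

total_params = NUM_EXPERTS * PARAMS_PER_EXPERT
active_params = TOP_K * PARAMS_PER_EXPERT
print(f"active fraction: {active_params / total_params:.1%}")  # 12.5%

scores = [random.random() for _ in range(NUM_EXPERTS)]
chosen = route(scores)
assert len(chosen) == TOP_K  # only 4 of 32 experts compute this token
```

With 4 of 32 equal-sized experts active, only 12.5 per cent of parameters compute each token, which is the same order of sparsity the GPT-OSS figures above describe (3.6B active of 21B total).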
From a developer’s perspective, this is crucial: having access to weights without knowing how the system was trained or structured limits reproducibility and hinders adaptation. Transparency also affects regulatory compliance and community trust, making OpenAI’s approach significantly more robust.

Performance and Benchmarks

Benchmark performance is another area where the GPT-OSS models shine. According to OpenAI’s technical documentation and independent testing, GPT-OSS-120B rivals or exceeds the reasoning ability of the company’s o4-mini model, while GPT-OSS-20B achieves parity with o3-mini. On benchmarks such as MMLU, Codeforces, HealthBench and the AIME mathematics tests from 2024 and 2025, the models perform strongly, especially considering their efficient architecture. GPT-OSS-20B in particular impressed researchers by outperforming much larger competitors such as Qwen3-32B on certain coding and reasoning tasks, despite using less energy and memory. Academic studies published on arXiv in August 2025 reported that the model achieved nearly 32 per cent higher throughput and more than 25 per cent lower energy consumption per 1,000 tokens than rival models. Interestingly, one paper noted that GPT-OSS-20B outperformed its larger sibling GPT-OSS-120B on some human evaluation benchmarks, suggesting that sparse scaling does not always correlate linearly with capability.

In terms of safety and robustness, the GPT-OSS models again appear carefully designed. They perform comparably to o4-mini on jailbreak resistance and bias testing, though they display higher hallucination rates in simple factual question-answering tasks. This transparency allows researchers to target weaknesses directly, which is part of the value of an open-weight release. Grok 2.5, however, lacks publicly available benchmarks altogether. Without independent testing, its actual capabilities remain uncertain, leaving the community with only Musk’s promotional statements to go by.
Regulatory Compliance

Regulatory compliance is a particularly important issue for organisations in Europe under the EU AI Act. The legislation requires general-purpose AI models to be released under genuinely open licences, accompanied by detailed technical documentation, information on training and testing datasets, and usage reporting. For models that exceed systemic risk thresholds, such as those trained with more than 10²⁵ floating point operations, further obligations apply, including risk assessment and registration.

Grok 2.5, by virtue of its vague licence and lack of documentation, appears non-compliant on several counts. Unless xAI publishes more details or adapts its licensing, European businesses may find it difficult or legally risky to adopt Grok in their workflows. GPT-OSS-20B and 120B, by contrast, seem carefully aligned with the requirements of the AI Act. Their Apache 2.0 licence is recognised under the Act, their documentation meets transparency demands, and OpenAI has signalled a commitment to provide usage reporting. From a regulatory standpoint, OpenAI’s releases are safer bets for integration within the UK and EU.

Community Reception

The reception from the AI community reflects these differences. Developers welcomed OpenAI’s move as a long-awaited recognition of the open-source movement, especially after years of criticism that the company had become overly protective of its models. Some users, however, expressed frustration with the mixture-of-experts design, reporting that it can lead to repetitive tool-calling behaviours and less engaging conversational output. Yet most acknowledged that for tasks requiring structured reasoning, coding or mathematical precision, the GPT-OSS family performs exceptionally well. Grok 2.5’s release was greeted with more scepticism.
While some praised Musk for at least releasing weights, others argued that without a proper licence or documentation it was little more than a symbolic gesture designed to signal openness while avoiding true transparency.

Strategic Implications

The strategic motivations behind these releases are also worth considering. For xAI, releasing Grok 2.5 may be less about immediate usability and more about positioning in the competitive AI landscape, particularly against Chinese developers and American rivals. For OpenAI, the move appears to be a balancing act: maintaining leadership in proprietary frontier models like GPT-5 while offering credible open-weight alternatives that address regulatory scrutiny and community pressure. This dual strategy could prove effective, enabling the company to dominate both commercial and open-source markets.

Conclusion

Ultimately, the comparison between Grok 2.5 and the GPT-OSS models is not merely technical but philosophical. xAI’s release demonstrates a willingness to participate in the open-source movement but stops short of true openness. OpenAI, on the other hand, has set a new standard for what open-weight releases should look like in 2025: efficient architectures, extensive documentation, clear licensing, strong benchmark performance and regulatory compliance. For European businesses and policymakers evaluating open-source AI options, GPT-OSS currently represents the more practical, compliant and capable choice.

While both xAI and OpenAI contributed to the momentum of open-source AI in August 2025, the details reveal that not all openness is created equal. Grok 2.5 stands as an important symbolic release, but OpenAI’s GPT-OSS family sets the benchmark for practical usability, compliance with the EU AI Act, and genuine transparency.


Cross-Chain Governance Attacks – Smart Liquidity Research

The Governance Exploit Nobody Is Pricing In. Bridges get hacked. That’s old news. We’ve seen the carnage: nine-figure exploits, drained liquidity, emergency shutdowns, Twitter threads filled with “funds are safu” copium.

From Ronin Network to Wormhole, bridge exploits have become a recurring tax on innovation. But here’s the uncomfortable truth. The next systemic risk in crypto probably won’t be a bridge exploit. It’ll be a governance exploit enabled by cross-chain voting power. And almost nobody is pricing it in.

The Shift: From Asset Bridges to Power Bridges

Cross-chain infrastructure has evolved.

We’re no longer just bridging tokens for yield. We’re bridging governance power itself.

Protocols increasingly allow governance tokens to exist on multiple chains simultaneously — often via wrapped representations or omnichain token standards (like those enabled by LayerZero Labs).

This improves capital efficiency and participation.

But it also introduces a new attack surface:

The separation of voting power from finality.

The Core Problem: Governance Is Local. Voting Power Is Not.

Governance contracts typically live on a single “home” chain.

But voting power can be represented across multiple chains.

This creates a dangerous gap:

  1. Tokens are locked on Chain A

  2. Voting power is mirrored on Chain B

  3. Governance decisions are executed on Chain A

If the system relies on cross-chain messaging to sync voting balances, any delay, exploit, or manipulation in that messaging layer becomes a governance vector.
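The three-step gap above can be made concrete with a toy model: if the home chain still counts locked tokens while the mirror chain has already credited the bridged balance, the same tokens appear in both tallies until the sync message lands. The chain names, balance, and flag here are all hypothetical.

```python
# Toy cross-chain vote desync: 1,000 tokens are counted on Chain A
# (still recorded against the lock) AND on Chain B (mirror already
# credited) until the cross-chain message actually settles.
home_votes = {"attacker": 1_000}    # Chain A: lock-based tally
mirror_votes = {"attacker": 0}      # Chain B: mirrored tally

def bridge(amount, synced):
    """Credit the mirror immediately; debit the home tally only
    once the cross-chain sync message has settled."""
    mirror_votes["attacker"] += amount
    if synced:
        home_votes["attacker"] -= amount

bridge(1_000, synced=False)         # message still in flight
total_power = home_votes["attacker"] + mirror_votes["attacker"]
print(total_power)  # 2000: double-counted during the desync window
```

The exploit window is exactly the gap between the mirror credit and the home debit; a real attacker only needs that window to overlap with a vote.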

You don’t need to drain liquidity.

You just need to distort voting power long enough.

And governance proposals often pass with shockingly low turnout.

The Attack Path Nobody Talks About

Let’s walk through a hypothetical.

Step 1: Acquire or Manipulate Voting Power Cross-Chain

An attacker:

  • Borrows governance tokens

  • Bridges them to a secondary chain

  • Exploits a delay in balance updates

  • Or abuses inconsistencies in wrapped token accounting

In poorly designed systems, the same underlying tokens may temporarily influence voting in multiple domains.

Even if briefly.

Even if “just a bug.”

Governance doesn’t need hours. It needs one block.

Step 2: Flash Governance

We’ve already seen governance flash-loan exploits in DeFi.

The most infamous example? The attack on Beanstalk in 2022.

The attacker used flash loans to acquire massive voting power, passed a malicious proposal, and drained ~$182M.

Now imagine that dynamic — but across chains.

Flash-loaned tokens → bridged representation → governance vote → malicious proposal executed → unwind.

All before the watchers even understand what happened.
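The Beanstalk-style sequence can be sketched against a naive governor that reads the voter’s balance at vote time instead of a prior snapshot block, which is the design flaw a flash loan exploits. Every name and number below is hypothetical.

```python
# Naive governor: counts whatever balance the voter holds *right
# now*, which a flash loan can inflate for a single transaction.
balances = {"attacker": 0, "honest": 100}

def flash_loan(amount, action):
    """Borrow, act, and repay within one atomic transaction."""
    balances["attacker"] += amount    # borrow
    result = action()
    balances["attacker"] -= amount    # repay before the tx ends
    return result

def vote_and_execute(proposal):
    # Balance read at vote time: the core design flaw.
    if balances["attacker"] > balances["honest"]:
        return f"executed: {proposal}"
    return "rejected"

outcome = flash_loan(1_000_000, lambda: vote_and_execute("drain treasury"))
print(outcome)               # executed: drain treasury
print(balances["attacker"])  # 0: loan fully repaid, attack complete
```

Snapshotting voting power at a past block defeats this single-chain version; the cross-chain variant described above survives snapshots if the bridged mirror is what gets snapshotted.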

Step 3: Proposal Payloads as Weapons

Governance proposals can upgrade contracts, change protocol parameters, and redirect treasury funds.

If cross-chain voting power is compromised, the proposal payload becomes the exploit.

No bridge drain required.

Just governance “working as designed.”

Why Markets Aren’t Pricing This Risk

Three reasons.

1. Everyone Is Still Fighting the Last War

After major bridge hacks, teams hardened signature validation and multisig thresholds.

But governance-layer risk is subtler.

It doesn’t show up as “TVL at risk” on dashboards.

It shows up as “who controls protocol direction.”

That’s harder to quantify.

2. Voting Participation Is Low

Many DAOs struggle to get 10–20% participation.

Which means:

You don’t need 51%.

You need slightly more than apathy.

Cross-chain voting power distortions don’t need to be massive. They just need to be decisive.

3. Composability Multiplies Complexity

Modern governance stacks combine:

  • Delegation contracts

  • Token wrappers

  • Cross-chain messaging

  • Snapshot systems

  • Execution timelocks

Each layer introduces potential inconsistencies.

And composability means failures cascade.

Where the Real Risk Lives

This isn’t about one protocol.

It’s systemic.

The more governance tokens become bridged, wrapped, and mirrored across chains, the more fragile governance assumptions become.

If a governance token is simultaneously locked on one chain, wrapped on a second, and delegated on a third, you’ve built a multi-dimensional voting derivative.

And derivatives break under stress.

Ask TradFi. They have scars.

The Governance Exploit Nobody Is Pricing In

Markets price:

  • Smart contract risk

  • Bridge exploit risk

  • Oracle manipulation risk

But they do not price:

Cross-domain voting synchronization risk.

No dashboards are tracking:

  • Governance message latency

  • Cross-chain vote desync windows

  • Wrapped-token vote inflation

  • Double-counted delegation

Yet these variables may determine who controls billion-dollar treasuries.

What Builders Should Be Doing (Now)

If you’re designing cross-chain governance:

1. Separate Voting Power from Bridged Liquidity

Avoid naïve 1:1 mirroring without strict finality checks.

2. Introduce Vote Finality Windows

Require:

  • Cross-chain state verification

  • Message settlement delays

  • Proof-of-lock confirmations

Before votes are counted.
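The finality-window idea above can be sketched as a single predicate: a cross-chain vote only counts if its proof-of-lock is verified and has been settled for at least a full window of blocks. The window length and proof shape are invented for illustration.

```python
FINALITY_BLOCKS = 64  # hypothetical settlement delay, in blocks

def vote_is_countable(lock_proof_block, current_block, proof_verified):
    """Count a cross-chain vote only when its proof-of-lock is
    verified AND has settled for a full finality window."""
    if not proof_verified:
        return False
    return current_block - lock_proof_block >= FINALITY_BLOCKS

assert not vote_is_countable(100, 120, True)   # inside the window
assert vote_is_countable(100, 164, True)       # 64 blocks elapsed
assert not vote_is_countable(100, 200, False)  # unverified never counts
```

The cost is latency: legitimate voters wait one window before their bridged tokens count, which is exactly the trade-off against the desync attacks described earlier.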

3. Use Decay or Cooldowns on Newly Bridged Tokens

Voting power shouldn’t activate instantly after bridging.

If tokens just moved chains 5 seconds ago, maybe they shouldn’t decide protocol destiny.
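One way to sketch the cooldown: newly bridged tokens ramp up to full voting power linearly over a cooldown period instead of activating instantly. The one-week period is an arbitrary example, not a recommendation.

```python
COOLDOWN_SECONDS = 7 * 24 * 3600  # hypothetical one-week ramp

def voting_power(amount, seconds_since_bridge):
    """Newly bridged tokens earn voting power linearly over the
    cooldown, so a just-bridged balance decides nothing."""
    ramp = min(seconds_since_bridge / COOLDOWN_SECONDS, 1.0)
    return amount * ramp

assert voting_power(1_000, 0) == 0                     # just bridged
assert voting_power(1_000, COOLDOWN_SECONDS) == 1_000  # fully ramped
```

A step function (zero power until the cooldown elapses) is simpler; the linear ramp avoids a cliff where an attacker times a vote for one second after activation.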

4. Simulate Governance Stress Scenarios

Run adversarial simulations: flash-loaned voting power, stalled sync messages, double-counted delegations.

If your governance model breaks under simulation, it will break in production.

What Investors Should Be Asking

Before allocating to a multi-chain DAO:

  • Where does governance live?

  • How is voting power mirrored?

  • Can voting power be double-counted during bridge latency?

  • What happens if the messaging layer stalls?

  • Is there a time lock between the vote and execution?

If the answers are vague, the risk is real.

And it’s not priced in.

The Inevitable Wake-Up Call

Crypto learns through catastrophe.

  • Smart contract exploits → audits became standard

  • Oracle exploits → TWAP oracles and redundancy became standard

  • Bridge hacks → validator hardening became standard

Governance-layer cross-chain exploits are likely next.

And when it happens, it won’t look like a hack.

It’ll look like a proposal that “passed.”

That’s the scary part.

Final Thought

Cross-chain infrastructure is powerful. It enables capital mobility, global participation, and modular design.

But it also decouples authority from location.

And when authority becomes fluid across chains, attackers don’t need to steal funds.

They just need to win a vote.

That’s the governance exploit nobody is pricing in.

And by the time the market does, it’ll already be too late.


Payoneer Adds to Crypto, Fintech Firms Seeking Bank Charter

Global financial services firm Payoneer is the latest in a growing number of companies that have filed for a national trust banking charter in the US, which could enable it to issue a stablecoin and provide various crypto services.

Payoneer said on Tuesday it filed with the Office of the Comptroller of the Currency to form PAYO Digital Bank, a week after it partnered with stablecoin infrastructure firm Bridge to add stablecoin capabilities to its platform that is mainly focused on cross-border transactions.

Payoneer said that it is seeking to issue a GENIUS Act-compliant stablecoin, PAYO-USD, to serve as the holding currency in Payoneer wallets, in addition to allowing customers to pay and receive stablecoins.

OCC approval would also enable Payoneer to manage PAYO-USD reserves, offer custodial services and enable customers to convert the stablecoin into their local currency.

“We believe stablecoins will play a meaningful role in the future of global trade,” said Payoneer CEO John Caplan.

The OCC gave conditional approval to Crypto.com for a charter on Monday, adding to the banking charters won by crypto companies Circle, Ripple, Fidelity Digital Assets, BitGo and Paxos in December.

Related: Better, Framework Ventures reach $500M stablecoin mortgage financing deal

The Trump family’s World Liberty Financial also applied for one in January to expand the use of its USD1 stablecoin, but is still awaiting a decision.

Crypto trading platform Laser Platform also submitted an application in January, while Coinbase has been awaiting a decision on its application since October.

Stablecoins ideal for business cross-border transfers: Payoneer

Payoneer said OCC approval would allow it to offer its nearly two million customers, which are mostly small and medium-sized businesses, a regulated stablecoin solution to simplify cross-border trade.