
Crypto World

Lessons Learned After a Year of Building with Large Language Models (LLMs)


Over the past year, Large Language Models (LLMs) have reached impressive competence for real-world applications. Their performance continues to improve and costs are falling, with a projected $200 billion investment in artificial intelligence by 2025. Access through provider APIs has democratised these technologies, enabling ML engineers, scientists, and anyone else to integrate intelligence into their products. However, despite the lowered entry barriers, creating effective products with LLMs remains a significant challenge. This is a summary of the original article of the same name at https://applied-llms.org/; please refer to that document for detailed information.

Fundamental Aspects of Working with LLMs


· Prompting Techniques


Prompting is one of the most critical techniques when working with LLMs, and it is essential for prototyping new applications. Although often underestimated, correct prompt engineering can be highly effective.

– Fundamental Techniques: Use methods like n-shot prompts, in-context learning, and chain-of-thought to enhance response quality. N-shot prompts should be representative and varied, and chain-of-thought should be clear to reduce hallucinations and improve user confidence.
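As an illustration, the n-shot plus chain-of-thought pattern amounts to careful prompt assembly. A minimal sketch, with hypothetical example reviews and a toy classification task:

```python
# Sketch: assembling an n-shot prompt with a chain-of-thought instruction.
# The examples and task are hypothetical; any LLM client would receive
# the final `prompt` string as its input.

EXAMPLES = [  # representative, varied demonstrations (the "n shots")
    {"review": "Battery died after two days.", "sentiment": "negative"},
    {"review": "Setup took minutes and it just works.", "sentiment": "positive"},
    {"review": "Does the job, nothing special.", "sentiment": "neutral"},
]

def build_prompt(review: str) -> str:
    shots = "\n".join(
        f"Review: {ex['review']}\nSentiment: {ex['sentiment']}" for ex in EXAMPLES
    )
    # An explicit chain-of-thought instruction asks the model to reason
    # step by step before committing to an answer.
    return (
        "Classify the sentiment of each review.\n\n"
        f"{shots}\n\n"
        f"Review: {review}\n"
        "Think step by step, then answer with one word: "
        "positive, negative, or neutral.\nSentiment:"
    )

prompt = build_prompt("Screen cracked on the first drop.")
```

Keeping the demonstrations varied, as the original advises, matters more than adding many of them.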

– Structuring Inputs and Outputs: Structured inputs and outputs facilitate integration with subsequent systems and enhance clarity. Serialisation formats and structured schemas help the model better understand the information.
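A minimal sketch of validating structured output, assuming the prompt asked the model for a JSON object with illustrative fields (`title`, `tags`, `summary`):

```python
import json

# Sketch: requesting and validating structured JSON output.
# `raw_response` stands in for what an LLM might return when the prompt
# specifies an explicit schema; the field names are illustrative.

SCHEMA_HINT = 'Respond with JSON: {"title": str, "tags": [str], "summary": str}'

raw_response = '{"title": "Q3 report", "tags": ["finance"], "summary": "Revenue grew."}'

def parse_structured(text: str) -> dict:
    data = json.loads(text)
    # Validate that the output matches the schema before downstream use.
    for key in ("title", "tags", "summary"):
        if key not in data:
            raise ValueError(f"missing field: {key}")
    return data

record = parse_structured(raw_response)
```

Validating before handing the record to downstream systems is what makes the structure useful rather than decorative.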


– Simplicity in Prompts: Prompts should be clear and concise. Breaking down complex prompts into more straightforward steps can aid in iteration and evaluation.

– Token Context: It’s crucial to optimise the amount of context sent to the model, removing redundant information and improving structure for clearer understanding.


· Retrieval-Augmented Generation (RAG)


RAG is a technique that improves LLM performance by retrieving relevant documents and supplying them to the model as additional context.


– Quality of Retrieved Documents: The relevance and detail of the retrieved documents impact output quality. Use metrics such as Mean Reciprocal Rank (MRR) and Normalised Discounted Cumulative Gain (NDCG) to assess quality.
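MRR is straightforward to compute from logged retrievals. A sketch, with hypothetical document IDs and relevance judgments:

```python
# Sketch: computing Mean Reciprocal Rank (MRR) for a retriever.
# Each entry pairs the documents a (hypothetical) retriever returned
# with the document judged relevant for that query.

def mean_reciprocal_rank(results):
    total = 0.0
    for retrieved, relevant in results:
        # Rank of the first relevant document, or None if not retrieved.
        rank = next((i + 1 for i, doc in enumerate(retrieved) if doc == relevant), None)
        total += 1.0 / rank if rank else 0.0
    return total / len(results)

queries = [
    (["doc_a", "doc_b", "doc_c"], "doc_a"),  # relevant doc at rank 1 -> 1.0
    (["doc_b", "doc_c", "doc_a"], "doc_a"),  # rank 3 -> 1/3
    (["doc_b", "doc_c"], "doc_a"),           # not retrieved -> 0.0
]
score = mean_reciprocal_rank(queries)  # (1 + 1/3 + 0) / 3
```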


– Use of Keyword Search: Although vector embeddings are useful, keyword search remains relevant for specific queries and is more interpretable.

– Advantages of RAG over Fine-Tuning: RAG is more cost-effective and easier to maintain than fine-tuning, offering more precise control over retrieved documents and avoiding information overload.


Optimising and Tuning Workflows


Optimising workflows with LLMs involves refining and adapting strategies to ensure efficiency and effectiveness. Here are some key strategies:


· Step-by-Step, Multi-Turn Flows


Decomposing complex tasks into manageable steps often yields better results, allowing for more controlled and iterative refinement.


– Best Practices: Ensure each step has a defined goal, use structured outputs to facilitate integration, incorporate a planning phase with predefined options, and validate plans. Experimenting with task architectures, such as linear chains or Directed Acyclic Graphs (DAGs), can optimise performance.
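One way to sketch such a decomposition is as a small DAG executed in dependency order. The step names and the `run_step` stub below are hypothetical placeholders for real model calls:

```python
# Sketch: expressing a multi-step LLM task as a DAG and executing the
# steps in dependency order via the standard library's graphlib.

from graphlib import TopologicalSorter

# Each key depends on the set of steps that must finish first.
dag = {
    "extract_facts": set(),
    "draft_answer": {"extract_facts"},
    "check_citations": {"extract_facts"},
    "final_answer": {"draft_answer", "check_citations"},
}

def run_step(name: str, prior_outputs: dict) -> str:
    # Placeholder: a real pipeline would prompt an LLM here,
    # feeding it the outputs of upstream steps.
    return f"{name}-done"

outputs = {}
for step in TopologicalSorter(dag).static_order():
    outputs[step] = run_step(step, outputs)
```

The same structure makes it easy to swap a linear chain for a DAG, as the text suggests, without changing the execution loop.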


· Prioritising Deterministic Workflows


Ensuring predictable outcomes is crucial for reliability. Use deterministic plans to achieve more consistent results.

– Benefits: Deterministic plans yield controlled, reproducible results and make specific failures easier to trace and fix; DAGs also adapt better to new situations than static prompts.

– Approach: Start with general objectives and develop a plan. Execute the plan in a structured manner and use the generated plans for few-shot learning or fine-tuning.



· Enhancing Output Diversity Beyond Temperature


Increasing temperature can introduce diversity but does not always guarantee a good distribution of outputs. Use additional strategies to improve variety.

– Strategies: Modify prompt elements such as item order, maintain a list of recent outputs to avoid repetitions, and use different phrasings to influence output diversity.
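Two of these levers can be sketched together: shuffling item order and keeping a short memory of recent outputs. The candidate list and window size below are illustrative:

```python
import random
from collections import deque

# Sketch: diversity levers beyond temperature -- shuffling the order of
# items presented and tracking recent outputs to avoid repetition.

rng = random.Random(42)       # seeded only to keep the sketch reproducible
recent = deque(maxlen=3)      # last few outputs we do not want to repeat

def pick_suggestion(candidates):
    shuffled = candidates[:]
    rng.shuffle(shuffled)     # vary the order the items are considered in
    for item in shuffled:
        if item not in recent:
            recent.append(item)
            return item
    return shuffled[0]        # everything was recent; fall back gracefully

picks = [pick_suggestion(["a", "b", "c", "d"]) for _ in range(3)]
```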


· The Underappreciated Value of Caching


Caching is a powerful technique for reducing costs and latency by storing and reusing responses.

– Approach: Use unique identifiers for cacheable items and employ caching techniques similar to search engines.


– Benefits: Reduces costs by avoiding recalculation of responses and serves vetted responses to reduce risks.
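A minimal caching sketch, using a hash of the normalised request as the unique identifier; `fake_llm` stands in for a real model call:

```python
import hashlib

# Sketch: caching LLM responses keyed by a hash of the normalised request,
# so repeat questions are served without recomputation.

cache: dict[str, str] = {}
calls = 0

def fake_llm(prompt: str) -> str:
    global calls
    calls += 1                      # count how often we actually "call" a model
    return f"answer:{prompt}"

def cached_completion(prompt: str) -> str:
    # A stable identifier for the request makes the response cacheable.
    key = hashlib.sha256(prompt.strip().lower().encode()).hexdigest()
    if key not in cache:
        cache[key] = fake_llm(prompt)
    return cache[key]

first = cached_completion("What is RAG?")
second = cached_completion("what is rag?  ")  # normalises to the same key
```

Serving the cached, vetted response for the second request is exactly the risk-reduction benefit described above.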


· When to Fine-Tune


Fine-tuning may be necessary when prompts alone do not achieve the desired performance. Evaluate the costs and benefits of this technique.

– Examples: Honeycomb improved performance in specific language queries through fine-tuning. Rechat achieved consistent formatting by fine-tuning the model for structured data.

– Considerations: Assess if the cost of fine-tuning justifies the improvement and use synthetic or open-source data to reduce annotation costs.



Evaluation and Monitoring


Effective evaluation and monitoring are crucial to ensuring LLM performance and reliability.

· Assertion-Based Unit Tests


Create unit tests with real input/output examples to verify the model’s accuracy according to specific criteria.

– Approach: Define assertions to validate outputs and verify that the generated code performs as expected.
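A sketch of such a test, with a canned response in place of a live model call and illustrative assertion criteria:

```python
# Sketch: assertion-based checks on model outputs. `summarize` returns a
# fixed plausible output here; in production it would call an LLM, and the
# assertions would encode your real acceptance criteria.

def summarize(text: str) -> str:
    # Placeholder for an LLM call.
    return "Revenue grew 12% year over year, driven by subscriptions."

def test_summary_constraints():
    out = summarize("...full earnings report...")
    assert len(out.split()) <= 20, "summary too long"
    assert out.endswith("."), "summary must be a complete sentence"
    assert "revenue" in out.lower(), "must mention the key metric"

test_summary_constraints()
```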


· LLM-as-Judge

Use an LLM to evaluate the outputs of another LLM. Although imperfect, it can provide valuable insights, especially in pairwise comparisons.


– Best Practices: Compare two outputs to determine which is better, mitigate biases by alternating the order of options and allowing ties, and have the LLM explain its decision to improve evaluation reliability.
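The swap-and-compare discipline can be sketched with a deterministic stand-in for the judge. `mock_judge` uses a toy length heuristic where a real system would prompt an LLM and ask it to explain its choice:

```python
# Sketch: pairwise LLM-as-judge with position-bias mitigation.
# Run both orderings and only trust a verdict that survives the swap.

def mock_judge(first: str, second: str) -> str:
    # Toy heuristic standing in for an LLM judge: prefer the longer answer.
    if len(first) > len(second):
        return "A"
    if len(second) > len(first):
        return "B"
    return "tie"

def judge_pair(ans_a: str, ans_b: str) -> str:
    v1 = mock_judge(ans_a, ans_b)   # answer A shown first
    v2 = mock_judge(ans_b, ans_a)   # answer B shown first
    # Map the swapped verdict back into A/B terms before comparing.
    v2_mapped = {"A": "B", "B": "A", "tie": "tie"}[v2]
    return v1 if v1 == v2_mapped else "tie"

verdict = judge_pair("Paris is the capital of France.", "Paris.")
```

Declaring a tie when the two orderings disagree is one simple way to neutralise position bias.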


· The “Intern Test”

Evaluate whether an average university student could complete the task given the input and context provided to the LLM.

– Approach: If the LLM lacks the necessary knowledge, enrich the context or simplify the task. Decompose complex tasks into simpler components and investigate failure patterns to understand model shortcomings.


· Avoiding Overemphasis on Certain Evaluations

Do not focus excessively on specific evaluations that might distort overall performance metrics.


– Example: A needle-in-a-haystack evaluation can help measure recall but does not fully capture real-world performance. Consider practical assessments that reflect real use cases.


Key Takeaways


The lessons learned from building with LLMs underscore the importance of proper prompting techniques, information retrieval strategies, workflow optimisation, and practical evaluation and monitoring methodologies. Applying these principles can significantly enhance your LLM-based applications’ effectiveness, reliability, and efficiency. Stay updated with advancements in LLM technology, continuously refine your approach, and foster a culture of ongoing learning to ensure successful integration and an optimised user experience.


Cross-Chain Governance Attacks – Smart Liquidity Research


The Governance Exploit Nobody Is Pricing In. Bridges get hacked. That’s old news. We’ve seen the carnage: nine-figure exploits, drained liquidity, emergency shutdowns, Twitter threads filled with “funds are safu” copium.

From Ronin Network to Wormhole, bridge exploits have become a recurring tax on innovation. But here’s the uncomfortable truth. The next systemic risk in crypto probably won’t be a bridge exploit. It’ll be a governance exploit enabled by cross-chain voting power. And almost nobody is pricing it in.

The Shift: From Asset Bridges to Power Bridges

Cross-chain infrastructure has evolved.

We’re no longer just bridging tokens for yield; we’re bridging voting power itself.

Protocols increasingly allow governance tokens to exist on multiple chains simultaneously — often via wrapped representations or omnichain token standards (like those enabled by LayerZero Labs).


This improves capital efficiency and participation.

But it also introduces a new attack surface:

The separation of voting power from finality.

The Core Problem: Governance Is Local. Voting Power Is Not.

Governance contracts typically live on a single “home” chain.


But voting power can be represented across multiple chains.

This creates a dangerous gap:

  1. Tokens are locked on Chain A

  2. Voting power is mirrored on Chain B

  3. Governance decisions are executed on Chain A

If the system relies on cross-chain messaging to sync voting balances, any delay, exploit, or manipulation in that messaging layer becomes a governance vector.
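The desync window can be illustrated with a toy model: the lock on chain A still counts in a stale snapshot while the mirror on chain B is credited immediately. All numbers below, and the one-message "lag", are illustrative:

```python
# Sketch: the same underlying stake counted twice during a sync delay.
# Chain A's snapshot has not yet processed the lock-out message, while
# chain B credits the mirrored balance as soon as it arrives.

locked_on_a = {"attacker": 1_000}
mirrored_on_b = {"attacker": 0}

def bridge_with_lag(user: str, amount: int):
    # The mirror is credited instantly on chain B...
    mirrored_on_b[user] += amount
    # ...but the message reducing chain A's voting snapshot is still
    # in flight, so locked_on_a is deliberately left unchanged here.

bridge_with_lag("attacker", 1_000)

votes_on_a = locked_on_a["attacker"]      # stale snapshot on chain A
votes_on_b = mirrored_on_b["attacker"]    # fresh mirror on chain B
total_voting_power = votes_on_a + votes_on_b
```

The attacker deposited 1,000 tokens once, yet during the lag window the system sees 2,000 votes.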

You don’t need to drain liquidity.


You just need to distort voting power long enough.

And governance proposals often pass with shockingly low turnout.

The Attack Path Nobody Talks About

Let’s walk through a hypothetical.

Step 1: Acquire or Manipulate Voting Power Cross-Chain

An attacker:

  • Borrows governance tokens

  • Bridges them to a secondary chain

  • Exploits a delay in balance updates

  • Or abuses inconsistencies in wrapped token accounting

In poorly designed systems, the same underlying tokens may temporarily influence voting in multiple domains.

Even if briefly.

Even if “just a bug.”

Governance doesn’t need hours. It needs one block.


Step 2: Flash Governance

We’ve already seen governance flash-loan exploits in DeFi.

The most infamous example? The attack on Beanstalk in 2022.

The attacker used flash loans to acquire massive voting power, passed a malicious proposal, and drained ~$182M.

Now imagine that dynamic — but across chains.


Flash-loaned tokens → bridged representation → governance vote → malicious proposal executed → unwind.

All before the watchers even understand what happened.

Step 3: Proposal Payloads as Weapons

Governance proposals can execute arbitrary payloads: upgrading contracts, moving treasury funds, or changing protocol parameters.

If cross-chain voting power is compromised, the proposal payload becomes the exploit.


No bridge drain required.

Just governance “working as designed.”

Why Markets Aren’t Pricing This Risk

Three reasons.

1. Everyone Is Still Fighting the Last War

After major bridge hacks, teams hardened signature validation and multisig thresholds.


But governance-layer risk is subtler.

It doesn’t show up as “TVL at risk” on dashboards.

It shows up as “who controls protocol direction.”

That’s harder to quantify.


2. Voting Participation Is Low

Many DAOs struggle to get 10–20% participation.

Which means:

You don’t need 51%.

You need slightly more than apathy.


Cross-chain voting power distortions don’t need to be massive. They just need to be decisive.

3. Composability Multiplies Complexity

Modern governance stacks combine:

  • Delegation contracts

  • Token wrappers

  • Cross-chain messaging

  • Snapshot systems

  • Execution timelocks

Each layer introduces potential inconsistencies.

And composability means failures cascade.


Where the Real Risk Lives

This isn’t about one protocol.

It’s systemic.

The more governance tokens become bridged, wrapped, and mirrored across chains, the more fragile governance assumptions become.


If a governance token is simultaneously locked, wrapped, mirrored, and delegated across chains, you’ve built a multi-dimensional voting derivative.

And derivatives break under stress.

Ask TradFi. They have scars.


The Governance Exploit Nobody Is Pricing In

Markets price:

  • Smart contract risk

  • Bridge exploit risk

  • Oracle manipulation risk

But they do not price:

Cross-domain voting synchronization risk.

No dashboards are tracking:

  • Governance message latency

  • Cross-chain vote desync windows

  • Wrapped-token vote inflation

  • Double-counted delegation

Yet these variables may determine who controls billion-dollar treasuries.

What Builders Should Be Doing (Now)

If you’re designing cross-chain governance:

1. Separate Voting Power from Bridged Liquidity

Avoid naïve 1:1 mirroring without strict finality checks.

2. Introduce Vote Finality Windows

Require:

  • Cross-chain state verification

  • Message settlement delays

  • Proof-of-lock confirmations

Before votes are counted.

3. Use Decay or Cooldowns on Newly Bridged Tokens

Voting power shouldn’t activate instantly after bridging.

If tokens just moved chains 5 seconds ago, maybe they shouldn’t decide protocol destiny.
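A cooldown of this kind can be sketched in a few lines. `COOLDOWN_BLOCKS` and the data model are illustrative; an on-chain version would live in the governance contract itself:

```python
# Sketch: activation cooldown for bridged voting power. Newly bridged
# deposits only start counting after they have aged past the cooldown.

COOLDOWN_BLOCKS = 100          # illustrative waiting period

bridged_deposits = []          # list of (amount, block_bridged) tuples

def record_bridge(amount: int, block: int):
    bridged_deposits.append((amount, block))

def active_voting_power(current_block: int) -> int:
    # Only deposits that have matured past the cooldown may vote.
    return sum(
        amount
        for amount, block in bridged_deposits
        if current_block - block >= COOLDOWN_BLOCKS
    )

record_bridge(500, block=10)
power_too_early = active_voting_power(current_block=50)   # still cooling down
power_after = active_voting_power(current_block=120)      # matured
```

The same structure supports decay instead of a hard cutoff: replace the boolean filter with a weight that ramps from 0 to 1 over the window.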

4. Simulate Governance Stress Scenarios

Run adversarial simulations: flash-loaned voting power, stalled or delayed messages, and desynced cross-chain balances.


If your governance model breaks under simulation, it will break in production.

What Investors Should Be Asking

Before allocating to a multi-chain DAO:

  • Where does governance live?

  • How is voting power mirrored?

  • Can voting power be double-counted during bridge latency?

  • What happens if the messaging layer stalls?

  • Is there a time lock between the vote and execution?

If the answers are vague, the risk is real.

And it’s not priced in.


The Inevitable Wake-Up Call

Crypto learns through catastrophe.

  • Smart contract exploits → audits became standard

  • Oracle exploits → TWAP and redundancy

  • Bridge hacks → validator hardening

Governance-layer cross-chain exploits are likely next.

And when it happens, it won’t look like a hack.

It’ll look like a proposal that “passed.”


That’s the scary part.

Final Thought

Cross-chain infrastructure is powerful. It enables capital mobility, global participation, and modular design.

But it also decouples authority from location.

And when authority becomes fluid across chains, attackers don’t need to steal funds.


They just need to win a vote.

That’s the governance exploit nobody is pricing in.

And by the time the market does, it’ll already be too late.


Payoneer Adds to Crypto, Fintech Firms Seeking Bank Charter


Global financial services firm Payoneer is the latest in a growing number of companies that have filed for a national trust banking charter in the US, which could enable it to issue a stablecoin and provide various crypto services.

Payoneer said on Tuesday it filed with the Office of the Comptroller of the Currency to form PAYO Digital Bank, a week after it partnered with stablecoin infrastructure firm Bridge to add stablecoin capabilities to its platform that is mainly focused on cross-border transactions.

Payoneer said that it is seeking to issue a GENIUS Act-compliant stablecoin, PAYO-USD, to serve as the holding currency in Payoneer wallets, in addition to allowing customers to pay and receive stablecoins.

OCC approval would also enable Payoneer to manage PAYO-USD reserves, offer custodial services, and let customers convert between stablecoins and their local currency.


“We believe stablecoins will play a meaningful role in the future of global trade,” said Payoneer CEO John Caplan.

Source: Payoneer

The OCC gave conditional approval to Crypto.com for a charter on Monday, adding to the banking charters won by crypto companies Circle, Ripple, Fidelity Digital Assets, BitGo and Paxos in December.

Related: Better, Framework Ventures reach $500M stablecoin mortgage financing deal

The Trump family’s World Liberty Financial also applied for one in January to expand the use of its USD1 (USD1) stablecoin, but is still awaiting a decision. 

Crypto trading platform Laser Platform also submitted an application in January, while Coinbase has been awaiting a decision on its application since October.


Stablecoins ideal for business cross-border transfers: Payoneer

Payoneer said OCC approval would allow it to offer its nearly two million customers, which are mostly small and medium-sized businesses, a regulated stablecoin solution to simplify cross-border trade.