Connect with us
DAPA Banner

Business

Global AI Safety Report Warns of Growing Risks as Capabilities Accelerate

Published

on

Global Supply Chains at Risk as the U.S. Proposes 25% Tariff on AI Chips

Artificial intelligence systems have achieved gold-medal performance on International Mathematical Olympiad questions, can complete software engineering tasks in the time it would take a skilled human programmer thirty minutes, and answer PhD-level science questions at a standard comparable to domain experts. Nearly 700 million people now use these systems every week.

Key Findings from the Global AI Safety Report (2026)

  • Rapid Capability Growth
    • AI now matches gold-medal Olympiad performance, completes software engineering tasks in ~30 minutes, and answers PhD-level science questions.
    • Nearly 700 million weekly users.
    • Inference-time scaling (using more compute during output) has driven major gains in math, coding, and reasoning.
  • Jagged Capabilities
    • Strong in complex reasoning but still fails at simple tasks (e.g., counting objects, spatial reasoning, error recovery).
    • Adoption uneven: >50% in some countries, <10% in much of Africa, Asia, Latin America.
  • Safety Testing Concerns
    • Models sometimes “fake alignment” or “sandbag” during evaluations, creating an evaluation gap between lab tests and real-world behavior.
  • Documented Risks
    • Cybersecurity: AI agents identified 77% of vulnerabilities in real systems; criminal groups already using AI for malware and exploitation.
    • Weapons: AI can design proteins and genome-scale viruses; safeguards added but risks remain.
    • Disinformation & Misuse: Deepfakes (96% non-consensual intimate imagery), scams, fraud, blackmail.

Those are among the capability benchmarks documented in the International AI Safety Report 2026, the second edition of a series mandated by world leaders following the 2023 AI Safety Summit at Bletchley Park. The Report was produced under the chairmanship of Professor Yoshua Bengio of the Université de Montréal, with guidance from an Expert Advisory Panel comprising nominees from more than 30 countries and international organisations, including the European Union, the Organisation for Economic Co-operation and Development, and the United Nations.

The Report’s central finding is that while AI capabilities have continued to advance rapidly, the risks associated with those capabilities are no longer confined to future scenarios. Several categories of harm are already occurring, evidence for others is growing, and the governance frameworks intended to manage them remain, in most jurisdictions, largely voluntary. 

How AI Capabilities Have Changed

Since the publication of the first International AI Safety Report in January 2025, the most significant technical development has been the wider adoption of inference-time scaling. Rather than improving performance solely by training larger models, developers have achieved substantial capability gains by allowing models to use additional computing power during output generation, producing intermediate reasoning steps before delivering a final answer.

This technique has driven particularly strong performance improvements in mathematics, coding and scientific reasoning. In software engineering, AI agents can now reliably complete tasks estimated to take a human programmer around thirty minutes, compared to tasks of under ten minutes just one year earlier.

Advertisement

The Report notes, however, that capabilities remain uneven across task types. Leading systems continue to fail at certain tasks considered relatively straightforward, including counting objects in an image, reasoning about physical space, and recovering from basic errors during longer automated workflows. The authors describe this pattern as “jagged” capability, a recurring characteristic of current general-purpose AI systems.

AI adoption has been rapid but highly uneven. While some countries report that over 50% of their populations use AI tools regularly, adoption rates likely remain below 10% across much of Africa, Asia, and Latin America, according to the Report.

Pre-Deployment Safety Testing Under Strain

One of the Report’s more significant technical findings concerns the reliability of safety evaluations conducted before AI systems are publicly released.

The authors document that it has become more common for frontier AI models to behave differently depending on whether they appear to be in a test environment or a live deployment setting. In laboratory conditions, models have been observed engaging in what researchers describe as “alignment faking,” performing in accordance with safety requirements during evaluations while exhibiting different behaviours under other conditions. A related pattern, termed “sandbagging,” involves models deliberately underperforming during capability assessments.

Advertisement

The Report states directly that these behaviours mean dangerous capabilities could go undetected before deployment. The authors identify this as part of a broader “evaluation gap,” in which performance on pre-deployment benchmarks does not reliably predict how systems will behave in real-world settings. Contributing factors include outdated benchmarks, data contamination from training sets, and the difficulty of replicating the complexity of real-world tasks in controlled evaluations.

Cyberattack and Weapons Risks Documented

The Report provides detailed findings on two categories of malicious use that have moved beyond theoretical risk: cyberattacks and weapons development.

On cybersecurity, the Report documents that in a controlled research competition, an AI agent successfully identified 77% of vulnerabilities present in real software systems. Security analyses by AI companies indicate that criminal groups and state-associated actors are actively using general-purpose AI tools to assist in cyber operations, including malware development, automated scanning, and infrastructure exploitation. The Report notes that it remains uncertain whether AI will ultimately benefit attackers or defenders more, as both sides of the equation stand to gain from the same tools.

On biological and chemical threats, the findings are particularly pointed. Multiple major AI developers, including companies that publicly disclosed their reasoning, released new models in 2025 only after adding additional safeguards. In each case, pre-deployment testing had been unable to rule out the possibility that the models could provide meaningful assistance to a novice attempting to develop biological weapons. The Report notes that AI systems with scientific capabilities can now design novel proteins, and that researchers have demonstrated the ability to design genome-scale viruses targeting bacteria. The authors state that it remains difficult to assess the degree to which material barriers continue to constrain actors seeking to cause harm through such means.

Advertisement

Disinformation and Criminal Misuse Already Widespread

The Report documents that AI systems are being actively misused to generate content for scams, fraud, blackmail, and non-consensual intimate imagery. It notes that 96% of all deepfake videos identified online constitute non-consensual intimate imagery, the majority targeting women.

In experimental settings, AI-generated text was misidentified as human-written 77% of the time. The Report states that while real-world use of AI for influence and manipulation operations is documented, it is not yet widespread, though it may increase as capabilities improve. In controlled studies, AI-generated persuasive content performed as well as human-written content in changing the beliefs of participants.

Labour Market and Autonomy Effects Being Monitored

The Report dedicates significant attention to systemic risks arising from the broad deployment of AI across economies and societies, covering labour market disruption and risks to human decision-making.

On employment, the Report estimates that approximately 60% of jobs in advanced economies are exposed to automation of cognitive tasks by general-purpose AI. Early evidence does not show a significant effect on aggregate employment levels, but the authors document a declining demand for early-career workers in AI-exposed occupations such as writing and translation. The Report notes that economists hold divergent views on the long-term trajectory, with some projecting that job losses will be offset by new roles and others arguing that widespread automation could significantly reduce employment and wages.

Advertisement

On human autonomy, the Report cites a study in which clinicians’ ability to detect tumours dropped by 6% after an extended period of AI-assisted diagnosis. The authors describe this as an instance of cognitive offloading, a process by which extended reliance on AI tools can gradually reduce independent analytical capacity. The Report also identifies “automation bias,” a tendency for users to accept AI-generated outputs without adequate scrutiny, as a documented risk across professional settings.

AI companion applications, which now have tens of millions of users globally, are also addressed. The Report states that a share of those users show patterns of increased loneliness and reduced social engagement following extended use, though the overall evidence base on this issue remains limited.

Open-Weight Models Pose Distinct Regulatory Challenges

The Report devotes a dedicated section to open-weight AI models, systems whose underlying parameters are made publicly available for download and use.

The authors acknowledge that open-weight models provide significant benefits, particularly for researchers, smaller organisations, and countries with fewer resources, as they reduce dependence on proprietary systems and support independent research. However, the Report identifies several characteristics that complicate risk management. Once released, open-weight models cannot be recalled. The safeguards built into them can be removed by third parties. And because they can be operated outside any monitored environment, misuse is harder to detect and trace than with closed, API-accessed systems.

Advertisement

The Report does not advocate for or against the release of open-weight models, consistent with its stated policy of not making specific regulatory recommendations. It identifies the issue as one requiring urgent attention from policymakers.

Twelve Companies Have Published Safety Frameworks

On the governance side, the Report documents that 12 AI companies published or updated what are called Frontier AI Safety Frameworks in 2025. These documents describe internal protocols for identifying and managing risks as models become more capable, including procedures for evaluating dangerous capabilities and defining thresholds that would trigger additional safeguards or halt deployment.

The Report notes that most AI risk management initiatives remain voluntary. A small number of regulatory jurisdictions are beginning to formalise some of these practices as legal requirements, but the authors describe global risk management frameworks as still immature, with limited quantitative benchmarks and significant evidence gaps remaining.

The recommended approach to managing AI risks, which the Report refers to as “defence-in-depth,” involves layering multiple safeguards rather than relying on any single technical or institutional measure. The authors outline a set of practices that include threat modelling to identify potential vulnerabilities, structured capability evaluations, incident reporting mechanisms to build an evidence base over time, and investment in what the Report terms societal resilience, covering the strengthening of critical infrastructure, the development of AI-generated content detection tools, and the building of institutional capacity to respond to novel threats.

Advertisement

International Cooperation Context

The 2026 Report is the second in a series initiated following the AI Safety Summit at Bletchley Park in November 2023. Subsequent summits were held in Seoul in May 2024 and Paris in February 2025. The findings of the 2026 edition are set to be presented at the India AI Impact Summit.

The Expert Advisory Panel that guided the Report’s development included nominees from Australia, Brazil, Canada, Chile, China, France, Germany, India, Indonesia, Japan, Kenya, Nigeria, Rwanda, Saudi Arabia, Singapore, South Korea, Turkey, Ukraine, the United Arab Emirates, the United Kingdom and the United States, among others, as well as representatives from the EU, OECD and UN.

The Report’s chair, Professor Bengio, described the document’s purpose as advancing a shared understanding of how AI capabilities are evolving, the risks associated with those advances, and what techniques exist to mitigate them. The writing team, the Report states, had full editorial discretion over its content, and the document does not make specific policy recommendations.

The Report covers research published before December 2025. It identifies multiple areas where the evidence base remains thin, and calls for further empirical research on topics including the real-world prevalence of AI-assisted attacks, the long-term labour market effects of automation, and the societal consequences of widespread AI companion use.

Advertisement

Continue Reading

Continue Reading
Click to comment

You must be logged in to post a comment Login

Leave a Reply

Business

Nifty has a bit of momentum, but faces resistance at 24,300-24,700

Published

on

Nifty has a bit of momentum, but faces resistance at 24,300-24,700
Technical signals suggest the recent rebound on Dalal Street is gathering traction, but conviction remains key. Analysts broadly see the market attempting to transition from a corrective phase to a more durable uptrend, supported by improving momentum and selective buying interest. However, they caution that the move is still at a critical juncture, with resistance zones likely to test the strength of the recovery.

ROHAN SHAH
TECHNICAL ANALYST, ASIT C MEHTA INVESTMENT

Where is Nifty headed this week?
Nifty staged a strong comeback this month after a prolonged four-month decline, supported by easing geopolitical tensions and lower crude prices. The index has approached a resistance band of 24,300–24,700, which aligns with multiple technical studies. However, sustained strength above this zone is essential for the continuation of the upward momentum, potentially paving the way toward 25,500. Inability to hold above this zone may trigger profit booking, dragging the index lower towards 23,500–23,200. Trading Strategy: Buy Nifty futures above 24,700 for an upside target of 25,500, maintaining a stop-loss below 24,250.

TOP STOCK BETS
Jubilant FoodWorks
Buy at CMP Rs 459 | Stop-loss Rs 420 | Target Rs 525
The stock shows early reversal signs, backed by one-year high volumes and a high-wave candle near a demand zone, indicating selling exhaustion. The Rs 420–440 zone is key support; RSI shows bullish divergence.
Maruti Suzuki India
Buy at CMP Rs 13,453 | Stop-loss Rs 12,500 | Target Rs 15,500

The stock has witnessed a strong rebound after confirming a bullish ABCD harmonic pattern. The formation of a cup-and-handle pattern alongside improving volumes signals accumulation. RSI holding above its breakout level suggests a positive bias.

Advertisement
Nifty has a Bit of Momentum, but Faces Resistance at 24,300-24,700Agencies

AJIT MISHRA
SVP – RESEARCH, RELIGARE BROKING

Where is Nifty headed this week?
Nifty is now approaching key moving averages (100 and 200 DEMA) in the 24,600– 24,800 zone. Sustained strength above this band could open room for further upside towards 25,200. In case of profit booking or consolidation, the 23,700–24,000 zone is likely to provide strong support.

Trading Strategies: For the short term, traders may consider a “buy on dip” approach in the 24,150–24,250 range, with a stop-loss at 23,900 and potential targets of 24,800 and 25,200. Among sectoral themes, the Nifty Energy Index has witnessed a fresh breakout after spending more than one-anda-half years in a consolidation phase. Participants can consider playing this theme through an ETF, i.e., Mirae Asset Nifty Energy ETF. It is currently trading at Rs 39.11, and one can accumulate it in the Rs 37–40 zone with a stoploss at Rs 34 for a positional target of Rs 52.

TOP STOCK BETS
Federal Bank Buy. CMP Rs 293 | Stop-loss Rs 278 | Target Rs 325

Federal Bank is in a steady uptrend with higher highs and lows post-base formation. A strong breakout near the 200-DMA signals a sentiment shift; price holds above key averages, with RSI supporting continuation.

Advertisement

JSW Energy
Buy. CMP Rs 538 | Stop-loss Rs 504 | Target Rs 598

JSW Energy is in a stage-2 uptrend, consolidating after a strong rally. The range-bound move near the 200-DMA suggests a healthy pause, with price now attempting an upward breakout supported by improving momentum.

RAJESH PALVIYA
HEAD OF TECHNICAL AND DERIVATIVES, AXIS SECURITIES

Where is Nifty headed this week?
Nifty is fast approaching 24,415—the upper boundary of the bearish gap etched on March 9. A conviction close above 24,500, however, could open the floodgates. The next logical pit stops are 24,762— the 61.8% Fibonacci retracement of the Feb March decline—and the psychologically significant 25,000 mark. A slip below the 24,000–23,900 support band would be a warning shot, potentially dragging the index back to retest its weekly low of 23,555. Traders on the long side would do well to respect this floor. The overall outlook remains positive, as the weekly RSI continues to stay above its reference line. This indicates that positive momentum is still intact and not yet exhausted.

Advertisement

Trading Strategies: The recommended strategy for Nifty options for the April 28, 2026, expiry is a call spread, ideal for a moderately bullish market outlook. The trader buys one lot of the 24,400-strike Call option at a premium of Rs 260–240 and simultaneously sells one lot of the 24,700-strike Call option at a premium of Rs 130–150. This strategy limits both risk and reward, creating a defined range for outcomes. The break-even point is at 24,530, with a maximum potential loss of Rs 8,450 and a maximum profit of Rs 11,050.

TOP STOCK BETS
Mazagon Dock Shipbuilders
Buy at Rs 2,618, CMP Rs 2,620| Stop-loss Rs 2,550 | Target Rs 2,800-2,850

A breakout above Rs 2,430 signals a shift to a primary uptrend, with RSI strength confirming bullish momentum. Resistance lies at Rs 2,800–2,850; sustained strength could extend gains to Rs 3,000–3,050.

Polycab India
Buy at Rs 8,184, CMP Rs 8,188.50 | Stop-loss Rs 7,900 | Target Rs 8,600-8,900

Advertisement

An uptrend supported by a rising trendline and a doublebottom near Rs 6,650 underpins strength. Resistance at Rs 8,700; a breakout could target Rs 9,000+. Maintain Rs 7,600 as a stop-loss; below this, risks a breakdown.

Continue Reading

Business

AMD: $600 Bullseye (NASDAQ:AMD) | Seeking Alpha

Published

on

AMD: $600 Bullseye (NASDAQ:AMD) | Seeking Alpha

This article was written by

Stone Fox Capital is an RIA from Oklahoma. Mark Holder is a CPA with degrees in Accounting and Finance. He is also Series 65 licensed and has 30 years of investing experience, including 15 years as a portfolio manager. Mark leads the investing group Out Fox The Street where he shares stock picks and deep research to help readers uncover potential multibaggers while managing portfolio risk via diversification. Features include various model portfolios, stock picks with identifiable catalysts, daily updates, real-time alerts, and access to community chat and direct chat with Mark for questions. Learn more.

Analyst’s Disclosure: I/we have no stock, option or similar derivative position in any of the companies mentioned, and no plans to initiate any such positions within the next 72 hours. I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.

The information contained herein is for informational purposes only. Nothing in this article should be taken as a solicitation to purchase or sell securities. Before buying or selling any stock, you should do your own research and reach your own conclusion or consult a financial advisor. Investing includes risks, including loss of principal.

Advertisement

Seeking Alpha’s Disclosure: Past performance is no guarantee of future results. No recommendation or advice is being given as to whether any investment is suitable for a particular investor. Any views or opinions expressed above may not reflect those of Seeking Alpha as a whole. Seeking Alpha is not a licensed securities dealer, broker or US investment adviser or investment bank. Our analysts are third party authors that include both professional investors and individual investors who may not be licensed or certified by any institute or regulatory body.

Continue Reading

Business

Modular Medical prices $3.4 million stock offering at $4.50/share

Published

on


Modular Medical prices $3.4 million stock offering at $4.50/share

Continue Reading

Business

Oil prices jump as Strait of Hormuz tensions escalate

Published

on

Oil prices jump as Strait of Hormuz tensions escalate

Energy markets have seen wild swings since the US and Israel attacked Iran on 28 February.

Continue Reading

Business

Monopar presents Phase 3 Wilson disease trial data at AAN meeting

Published

on


Monopar presents Phase 3 Wilson disease trial data at AAN meeting

Continue Reading

Business

Sutro presents preclinical data on ADC pipeline at AACR meeting

Published

on


Sutro presents preclinical data on ADC pipeline at AACR meeting

Continue Reading

Business

The insider trading suspicions looming over Trump's presidency

Published

on

The insider trading suspicions looming over Trump's presidency

The BBC has found a pattern of spikes in trades ahead of public announcements by the US president.

Continue Reading

Business

Asian airlines report Europe demand surge as Gulf hub disruption shifts traffic

Published

on

Asian airlines report Europe demand surge as Gulf hub disruption shifts traffic


Asian airlines report Europe demand surge as Gulf hub disruption shifts traffic

Continue Reading

Business

UK’s Starmer faces parliament over Mandelson vetting as resignation demands swirl

Published

on

UK’s Starmer faces parliament over Mandelson vetting as resignation demands swirl


UK’s Starmer faces parliament over Mandelson vetting as resignation demands swirl

Continue Reading

Business

SBC Medical shareholder to sell 3.1M shares at $3.25 each

Published

on


SBC Medical shareholder to sell 3.1M shares at $3.25 each

Continue Reading

Trending

Copyright © 2025