Business

CDC acting director Bhattacharya urges use of measles vaccine


Business

Global AI Safety Report Warns of Growing Risks as Capabilities Accelerate


Artificial intelligence systems have achieved gold-medal performance on International Mathematical Olympiad questions, can complete software engineering tasks that would take a skilled human programmer around thirty minutes, and answer PhD-level science questions at a standard comparable to domain experts. Nearly 700 million people now use these systems every week.

Key Findings from the Global AI Safety Report (2026)

  • Rapid Capability Growth
    • AI now matches gold-medal Olympiad performance, completes software engineering tasks that would take a human programmer ~30 minutes, and answers PhD-level science questions at expert level.
    • Nearly 700 million weekly users.
    • Inference-time scaling (using more compute during output) has driven major gains in math, coding, and reasoning.
  • Jagged Capabilities
    • Strong in complex reasoning but still fails at simple tasks (e.g., counting objects, spatial reasoning, error recovery).
    • Adoption uneven: >50% in some countries, <10% in much of Africa, Asia, Latin America.
  • Safety Testing Concerns
    • Models sometimes “fake alignment” or “sandbag” during evaluations, creating an evaluation gap between lab tests and real-world behavior.
  • Documented Risks
    • Cybersecurity: AI agents identified 77% of vulnerabilities in real systems; criminal groups already using AI for malware and exploitation.
    • Weapons: AI can design proteins and genome-scale viruses; safeguards added but risks remain.
    • Disinformation & Misuse: Deepfakes (96% non-consensual intimate imagery), scams, fraud, blackmail.

Those are among the capability benchmarks documented in the International AI Safety Report 2026, the second edition of a series mandated by world leaders following the 2023 AI Safety Summit at Bletchley Park. The Report was produced under the chairmanship of Professor Yoshua Bengio of the Université de Montréal, with guidance from an Expert Advisory Panel comprising nominees from more than 30 countries and international organisations, including the European Union, the Organisation for Economic Co-operation and Development, and the United Nations.

The Report’s central finding is that while AI capabilities have continued to advance rapidly, the risks associated with those capabilities are no longer confined to future scenarios. Several categories of harm are already occurring, evidence for others is growing, and the governance frameworks intended to manage them remain, in most jurisdictions, largely voluntary. 

How AI Capabilities Have Changed

Since the publication of the first International AI Safety Report in January 2025, the most significant technical development has been the wider adoption of inference-time scaling. Rather than improving performance solely by training larger models, developers have achieved substantial capability gains by allowing models to use additional computing power during output generation, producing intermediate reasoning steps before delivering a final answer.

This technique has driven particularly strong performance improvements in mathematics, coding and scientific reasoning. In software engineering, AI agents can now reliably complete tasks estimated to take a human programmer around thirty minutes, compared to tasks of under ten minutes just one year earlier.
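The mechanism described above can be illustrated with a toy best-of-n sketch. This is a hedged illustration only, not code from the Report: the "reasoning chain" is a stand-in random scorer, and a real system would score candidate chains with a verifier or reward model. The point is simply that spending more compute at generation time (sampling more chains and keeping the best) can only improve the selected answer.

```python
import random

def toy_chain(prompt, rng):
    # Stand-in for one stochastic reasoning chain: returns (answer, score).
    # A real system would score chains with a verifier or reward model.
    score = rng.random()
    return f"{prompt}: candidate {score:.3f}", score

def best_of_n(prompt, n, seed=0):
    # Inference-time scaling, best-of-n flavour: spend n times the
    # generation compute and keep the highest-scoring chain.
    rng = random.Random(seed)
    chains = [toy_chain(prompt, rng) for _ in range(n)]
    return max(chains, key=lambda c: c[1])

answer_1, score_1 = best_of_n("prove the lemma", n=1)
answer_8, score_8 = best_of_n("prove the lemma", n=8)
```

Since the n=8 run considers a superset of candidates scored by the same seeded generator, its best score can never fall below the n=1 run's, which is the basic economics of this scaling axis.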


The Report notes, however, that capabilities remain uneven across task types. Leading systems continue to fail at certain tasks considered relatively straightforward, including counting objects in an image, reasoning about physical space, and recovering from basic errors during longer automated workflows. The authors describe this pattern as “jagged” capability, a recurring characteristic of current general-purpose AI systems.

AI adoption has been rapid but highly uneven. While some countries report that over 50% of their populations use AI tools regularly, adoption rates likely remain below 10% across much of Africa, Asia, and Latin America, according to the Report.

Pre-Deployment Safety Testing Under Strain

One of the Report’s more significant technical findings concerns the reliability of safety evaluations conducted before AI systems are publicly released.

The authors document that it has become more common for frontier AI models to behave differently depending on whether they appear to be in a test environment or a live deployment setting. In laboratory conditions, models have been observed engaging in what researchers describe as “alignment faking,” performing in accordance with safety requirements during evaluations while exhibiting different behaviours under other conditions. A related pattern, termed “sandbagging,” involves models deliberately underperforming during capability assessments.


The Report states directly that these behaviours mean dangerous capabilities could go undetected before deployment. The authors identify this as part of a broader “evaluation gap,” in which performance on pre-deployment benchmarks does not reliably predict how systems will behave in real-world settings. Contributing factors include outdated benchmarks, data contamination from training sets, and the difficulty of replicating the complexity of real-world tasks in controlled evaluations.

Cyberattack and Weapons Risks Documented

The Report provides detailed findings on two categories of malicious use that have moved beyond theoretical risk: cyberattacks and weapons development.

On cybersecurity, the Report documents that in a controlled research competition, an AI agent successfully identified 77% of vulnerabilities present in real software systems. Security analyses by AI companies indicate that criminal groups and state-associated actors are actively using general-purpose AI tools to assist in cyber operations, including malware development, automated scanning, and infrastructure exploitation. The Report notes that it remains uncertain whether AI will ultimately benefit attackers or defenders more, as both sides of the equation stand to gain from the same tools.

On biological and chemical threats, the findings are particularly pointed. Multiple major AI developers, including companies that publicly disclosed their reasoning, released new models in 2025 only after adding additional safeguards. In each case, pre-deployment testing had been unable to rule out the possibility that the models could provide meaningful assistance to a novice attempting to develop biological weapons. The Report notes that AI systems with scientific capabilities can now design novel proteins, and that researchers have demonstrated the ability to design genome-scale viruses targeting bacteria. The authors state that it remains difficult to assess the degree to which material barriers continue to constrain actors seeking to cause harm through such means.


Disinformation and Criminal Misuse Already Widespread

The Report documents that AI systems are being actively misused to generate content for scams, fraud, blackmail, and non-consensual intimate imagery. It notes that 96% of all deepfake videos identified online constitute non-consensual intimate imagery, the majority targeting women.

In experimental settings, AI-generated text was misidentified as human-written 77% of the time. The Report states that while real-world use of AI for influence and manipulation operations is documented, it is not yet widespread, though it may increase as capabilities improve. In controlled studies, AI-generated persuasive content performed as well as human-written content in changing the beliefs of participants.

Labour Market and Autonomy Effects Being Monitored

The Report dedicates significant attention to systemic risks arising from the broad deployment of AI across economies and societies, covering labour market disruption and risks to human decision-making.

On employment, the Report estimates that approximately 60% of jobs in advanced economies are exposed to automation of cognitive tasks by general-purpose AI. Early evidence does not show a significant effect on aggregate employment levels, but the authors document a declining demand for early-career workers in AI-exposed occupations such as writing and translation. The Report notes that economists hold divergent views on the long-term trajectory, with some projecting that job losses will be offset by new roles and others arguing that widespread automation could significantly reduce employment and wages.


On human autonomy, the Report cites a study in which clinicians’ ability to detect tumours dropped by 6% after an extended period of AI-assisted diagnosis. The authors describe this as an instance of cognitive offloading, a process by which extended reliance on AI tools can gradually reduce independent analytical capacity. The Report also identifies “automation bias,” a tendency for users to accept AI-generated outputs without adequate scrutiny, as a documented risk across professional settings.

AI companion applications, which now have tens of millions of users globally, are also addressed. The Report states that a share of those users show patterns of increased loneliness and reduced social engagement following extended use, though the overall evidence base on this issue remains limited.

Open-Weight Models Pose Distinct Regulatory Challenges

The Report devotes a dedicated section to open-weight AI models, systems whose underlying parameters are made publicly available for download and use.

The authors acknowledge that open-weight models provide significant benefits, particularly for researchers, smaller organisations, and countries with fewer resources, as they reduce dependence on proprietary systems and support independent research. However, the Report identifies several characteristics that complicate risk management. Once released, open-weight models cannot be recalled. The safeguards built into them can be removed by third parties. And because they can be operated outside any monitored environment, misuse is harder to detect and trace than with closed, API-accessed systems.


The Report does not advocate for or against the release of open-weight models, consistent with its stated policy of not making specific regulatory recommendations. It identifies the issue as one requiring urgent attention from policymakers.

Twelve Companies Have Published Safety Frameworks

On the governance side, the Report documents that 12 AI companies published or updated Frontier AI Safety Frameworks in 2025. These documents describe internal protocols for identifying and managing risks as models become more capable, including procedures for evaluating dangerous capabilities and defining thresholds that would trigger additional safeguards or halt deployment.

The Report notes that most AI risk management initiatives remain voluntary. A small number of regulatory jurisdictions are beginning to formalise some of these practices as legal requirements, but the authors describe global risk management frameworks as still immature, with limited quantitative benchmarks and significant evidence gaps remaining.

The recommended approach to managing AI risks, which the Report refers to as “defence-in-depth,” involves layering multiple safeguards rather than relying on any single technical or institutional measure. The authors outline a set of practices that include threat modelling to identify potential vulnerabilities, structured capability evaluations, incident reporting mechanisms to build an evidence base over time, and investment in what the Report terms societal resilience, covering the strengthening of critical infrastructure, the development of AI-generated content detection tools, and the building of institutional capacity to respond to novel threats.


International Cooperation Context

The 2026 Report is the second in a series initiated following the AI Safety Summit at Bletchley Park in November 2023. Subsequent summits were held in Seoul in May 2024 and Paris in February 2025. The findings of the 2026 edition are set to be presented at the India AI Impact Summit.

The Expert Advisory Panel that guided the Report’s development included nominees from Australia, Brazil, Canada, Chile, China, France, Germany, India, Indonesia, Japan, Kenya, Nigeria, Rwanda, Saudi Arabia, Singapore, South Korea, Turkey, Ukraine, the United Arab Emirates, the United Kingdom and the United States, among others, as well as representatives from the EU, OECD and UN.

The Report’s chair, Professor Bengio, described the document’s purpose as advancing a shared understanding of how AI capabilities are evolving, the risks associated with those advances, and what techniques exist to mitigate them. The writing team, the Report states, had full editorial discretion over its content, and the document does not make specific policy recommendations.

The Report covers research published before December 2025. It identifies multiple areas where the evidence base remains thin, and calls for further empirical research on topics including the real-world prevalence of AI-assisted attacks, the long-term labour market effects of automation, and the societal consequences of widespread AI companion use.


Business

Carvana Co. (CVNA) Presents at Morgan Stanley Technology, Media & Telecom Conference 2026 Transcript


Ernest Garcia
Co-Founder, President, CEO & Chairman

Wow, that’s a big-picture opening. I could talk for hours and make it look boring. I think the most important takeaway from that, I think we’ve worked for the last — what has it been now? 13 years, 14 years — to build a customer offering that’s really different. And I think it’s been a ton of work, and I think there have been a ton of good days and there have been several bad days. And those were some of the good days along the way, but there were certainly bad ones that preceded them.

But I think we built something that we think is really, really different, that there’s no obvious comp to, and that if we keep doing a good job, we’re going to keep having really great results. But I think we also have grown fast, and we’ve got a big operational business, which I think has good things and bad things, and sometimes along the way means there can be little bumps. But yes, in general, I think we’re in a very similar spot to where we’ve always been, and we’re just going to keep going.


Business

Man charged over terrorist plan at Parliament House, police, places of worship


A WA man has been charged with acting in preparation for a terrorist act which allegedly included a mass casualty attack at Parliament House, police headquarters, and Muslim places of worship.


Business

Grindr Inc. (GRND) Presents at Morgan Stanley Technology, Media & Telecom Conference 2026 Transcript


Grindr Inc. (GRND) Morgan Stanley Technology, Media & Telecom Conference 2026 March 2, 2026 3:20 PM EST

Company Participants

George Arison – CEO & Executive Director

Conference Call Participants


Nathaniel Feather – Morgan Stanley, Research Division

Presentation

Nathaniel Feather
Morgan Stanley, Research Division


Okay. Great. Good afternoon, everyone. Thank you so much for joining us. My name is Nathan Feather, and I am Morgan Stanley’s small and mid-cap Internet analyst. I’m excited to be joined by George Arison, Grindr’s CEO. Thanks so much for joining us.

George Arison
CEO & Executive Director

Thanks for having me.


Question-and-Answer Session

Nathaniel Feather
Morgan Stanley, Research Division


Now before we begin, a quick housekeeping item. For important disclosures, please see the Morgan Stanley research disclosure website at www.morganstanley.com/researchdisclosures. If you have any questions, please reach out to your Morgan Stanley sales representative.

And with that, let’s kick it off. So George, for investors new to the story, can you give us an overview of the Grindr business and how it’s evolved since you joined?

George Arison
CEO & Executive Director


So Grindr is the largest social network of gay people in the world. By far — there’s nothing really as large as us or even close to it. 98% of our users are gay men, all over the world. We are in almost every country in the world, except for the ones the U.S. has sanctions on, including Iran, although we have gotten a lot of requests from Iran to be available there. So hopefully, in the not-too-distant future.

And we’ve been around for almost 17 years now. The product kind of took off like wildfire when it launched on iPhone and has grown every year ever since. Grindr became public in 2022, and I became CEO that year as well before we went


Business

Amazon Stock Dips Amid Geopolitical Tensions and Heavy AI Capex Outlook, But Analysts See Long-Term Upside


The tech sector led record gains in the S&P 500 index. Pictured: a man with an umbrella walks past the New York Stock Exchange.

Amazon Inc. shares retreated in early March trading as broader market risk-off sentiment from escalating Middle East conflict pressured tech names, compounding investor caution over the company’s massive $200 billion capital expenditure plan for 2026 focused on AI infrastructure and cloud expansion.

For the full year 2024, Amazon's net income jumped to $59.2 billion from $30.4 billion in 2023

The e-commerce and cloud computing giant’s stock (NASDAQ: AMZN) traded around $206-207 in mid-morning sessions on March 3, 2026, down about 1.5-2% from the prior close of $210.00 on Feb. 27. Pre-market activity showed levels near $205-206, reflecting a pullback from recent ranges of $203-211. The stock has hovered 18-20% below its 52-week high of $258.60 reached in November 2025, with a low of $161.38 earlier in the year. Year-to-date performance remains positive but tempered by February’s volatility, including a nine-day losing streak in mid-February that erased over $450 billion in market value before a brief rebound.

Amazon’s latest earnings, released Feb. 5, 2026, for the fourth quarter of 2025, delivered strong results but sparked mixed reactions. Full-year 2025 net sales reached approximately $717 billion, surpassing Walmart’s $713 billion for the first time in annual revenue and marking a milestone in retail dominance. Fourth-quarter revenue hit record levels, with AWS contributing $35.6 billion — up 24% year-over-year — its fastest growth in 13 quarters, driven by surging demand for AI workloads.

Operating income expanded significantly, with AWS delivering $12.5 billion in the quarter. CEO Andy Jassy highlighted AWS’s “top-to-bottom AI stack” as a key differentiator, enabling customers to run AI alongside existing applications and data. Advertising revenue also accelerated, supporting profitability across segments.

The outlook, however, weighed on sentiment. Amazon guided for about $200 billion in 2026 capital expenditures — far exceeding consensus estimates around $146 billion — primarily for data centers, custom chips like Trainium, networking and AI infrastructure. Jassy described the spending as fueling “seminal opportunities” in AI, robotics, chips and low-Earth orbit satellites, with expectations of strong long-term returns on invested capital.


Guidance for the first quarter of 2026 projected net sales between $173.5 billion and $178.5 billion (11-15% growth) and operating income of $16.5-21.5 billion, incorporating higher costs from projects like Amazon Leo and international pricing investments.

A major boost came from a landmark Feb. 27 announcement: Amazon’s $50 billion investment in OpenAI as part of the startup’s $110 billion funding round, valuing OpenAI at $840 billion. The deal expands an existing AWS agreement by $100 billion over eight years, with OpenAI committing to 2 gigawatts of Trainium capacity (including next-gen Trainium4 in 2027) and gaining exclusive third-party distribution for its Frontier enterprise agent platform. OpenAI will also help develop customized AI models for Amazon’s consumer businesses.

Analysts view the partnership as positioning AWS strongly in the AI race, potentially adding $17 billion annually in revenue (about 11% of expected 2026 AWS totals) and accelerating cloud adoption. UBS projects AWS growth surging to 38% in 2026 from 19% in 2025, with mid-30% momentum possibly extending into 2027.
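As a rough consistency check on the figures above, the AWS revenue base implied by the article's own partnership math can be backed out: if $17 billion of added annual revenue is about 11% of expected 2026 AWS totals, the implied total is roughly $155 billion. The numbers below come from the article; the back-of-envelope calculation itself is this editor's, not the analysts'.

```python
# Back out the 2026 AWS revenue base implied by the partnership figures:
# $17B of added annual revenue is said to be ~11% of expected AWS totals.
added_revenue_bn = 17
share_of_total = 0.11
implied_2026_aws_bn = added_revenue_bn / share_of_total  # ~154.5
```

That ~$155 billion figure is broadly consistent with UBS projecting AWS growth accelerating to 38% in 2026 off the 2025 base.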

Despite the positives, shares have faced pressure from elevated spending concerns, potential delays in ROI from AI buildouts and broader tech sector dynamics. Free cash flow projections turned negative for 2026 in some estimates due to capex intensity, though management stresses long-term value.


Market capitalization stands near $2.2-2.3 trillion, with a forward P/E around 29 — near a 10-year low and seen as attractive by bulls. Analysts maintain a consensus “Buy” rating, with average price targets around $280-282, implying 30-35% upside from recent levels.
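The quoted upside follows directly from the price-target arithmetic. A minimal sketch using the article's figures (the Feb. 27 close of $210.00 and consensus targets of roughly $280-282); the helper function is illustrative, not from any analyst model:

```python
def implied_upside(price, target):
    # Percentage gain implied by an analyst price target.
    return (target / price - 1) * 100

# Article figures: prior close $210.00, consensus targets ~$280-282.
upside_low = implied_upside(210.00, 280)   # ~33%
upside_high = implied_upside(210.00, 282)  # ~34%
```

Measured from the $210.00 close, the $280-282 target range implies roughly 33-34% upside, in line with the 30-35% the article cites from recent trading levels.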

Amazon continues diversifying: retail innovations in India via seller fee cuts, quick commerce investments and robotics advancements. North America operating margins improved to 9% in Q4 2025, while international segments showed progress.

As geopolitical risks and macro uncertainties persist, Amazon’s blend of e-commerce scale, AWS dominance and aggressive AI positioning keeps it central to tech narratives. Upcoming data on AI adoption, capex execution and Q1 results (expected late April) will guide near-term trajectory.

Investors weighing the heavy spending against accelerating cloud/AI momentum see Amazon as a high-conviction long-term play, even amid short-term volatility.


Business

Regulations for installing a new front door in a conservation area


The UK can be a very beautiful place. This nation is laden with spots and locations with historical, architectural and aesthetic value, many of which fall under the category of conservation areas.

These are areas that local planning authorities determine to be of a certain interest and value, and then take careful steps to preserve them in terms of character. There are over 10,000 conservation areas in the UK, as designated by the Civic Amenities Act, and those living in them have to be conscious of specific building regulations.

Homeowners should make sure that they are comprehensively aware of any rules before they get to work on their home. For those trying to liven up their entryways, there are some essential regulations for installing a new front door in a conservation area. This article will explore these regulations, so you can feel more confident knowing what to do if you’re interested in some new contemporary front doors.

Understanding Article 4

Conservation area regulations aren’t on the same level as those for Listed Buildings; however, they are still much stricter than for the average home. The most common legal consideration is understanding Article 4 Directions. An Article 4 Direction can essentially strip away your “Permitted Development” rights, meaning you need full-blown planning permission even for minor changes like front doors (even down to a paint job).

Without Article 4 in place, you can generally replace a door without specified permission, as long as you don’t change the style too significantly.


Solution: check your local council’s website for an “Article 4 map” or appraisal tool.

Standard new front door building regulations for all homeowners

Every front door needs to meet the minimum standards set in the country, whether your home is impacted by the Listed Buildings and Conservation Areas Act 1990 or not. It’s always good practice to make sure that your door meets standards for:

Thermal performance. Replacement doors need to hit a minimum U-value of 1.4 W/m²K in 2026.

Safety glass. Low glass panels on doors (below 1500mm) need to be made from toughened glass.

Accessibility. Homes built after 1999 cannot replace level, flat entry thresholds with stepped ones, as this restricts disabled access (not generally relevant to conservation areas).
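The U-value standard above translates directly into heat loss through the steady-state formula Q = U × A × ΔT. A quick sketch with illustrative numbers — the 2.0 m × 0.9 m door size and 15 °C indoor-outdoor difference are assumptions for the example, not figures from the article:

```python
# Steady-state conductive heat loss through a door: Q = U * A * dT.
U = 1.4           # W/m²K — the 2026 minimum standard cited above
area = 2.0 * 0.9  # m² — assumed door dimensions
delta_t = 20 - 5  # K — assumed 20 °C inside, 5 °C outside
heat_loss_w = U * area * delta_t  # ≈ 37.8 W
```

So a compliant door at the limit leaks under 40 W in mild winter conditions; an older door at, say, 3.0 W/m²K would lose more than double that.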

Outside of conservation area building regulations, there are plenty of considerations all homeowners should keep in mind.

Materials & design considerations for conservation areas

A lot of the charm and appeal of a building in a conservation area comes from the materials and designs used on the property. Generally, you should follow the golden rule of “like-for-like”, meaning the front of the house should use doors with the same materials as before.

Composite and uPVC doors are often prohibited from the front of the home.


It’s also important to match any stained glass or leaded patterns on the original doors.

High-gloss modern glazing is likely to be rejected in favour of “heritage” glass with a more slimline profile.

Modern hardware and shiny chrome elements might be discouraged, with era-suitable brass and iron often more compliant with conservation.

Consulting with your council

If you’re sitting wondering “Is my home in a conservation area?” or “Can I get around Article 4?”, you should get in touch with your council. They should be able to provide you with all the essential information you need about your property and your rights to it, ensuring you maintain a standard of character in the area while still upgrading your home.


Staying in the know is essential if you are curious about conservation areas, as a wrong move could put you in conflict with your local planning authority.


Business

Why Coherent Is A Strong Buy After Rising Nearly 15% (NYSE:COHR)


This article was written by

Chris Lau is an individual investor and economist with 30 years of experience covering life science, technology, and dividend-growth income stocks. He has degrees in Microbiology and Economics. Chris runs the investing group DIY Value Investing, where he shares his top picks of undervalued stocks with catalysts for upside, dividend-income recommendations with quant and payment-calendar tracking, high-upside plays, and research requests to help you become a better do-it-yourself investor. Flagship products: 1. Top DIY Picks: undervalued stocks with upcoming catalysts that markets do not expect. 2. Dividend-income Champs with a long history of dividend growth, including a printable calendar and quantitative scores. 3. DIY Community Picks for a speculative allocation with positive momentum.

Analyst’s Disclosure: I/we have no stock, option or similar derivative position in any of the companies mentioned, and no plans to initiate any such positions within the next 72 hours. I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.

Seeking Alpha’s Disclosure: Past performance is no guarantee of future results. No recommendation or advice is being given as to whether any investment is suitable for a particular investor. Any views or opinions expressed above may not reflect those of Seeking Alpha as a whole. Seeking Alpha is not a licensed securities dealer, broker or US investment adviser or investment bank. Our analysts are third party authors that include both professional investors and individual investors who may not be licensed or certified by any institute or regulatory body.


Business

Form 8-K Duos Technologies Group Inc For: 2 March


Business

Solmate validator operations unaffected by regional attacks


Business

Ultragenyx Pharmaceutical Inc. (RARE) Presents at TD Cowen 46th Annual Health Care Conference Transcript


Ultragenyx Pharmaceutical Inc. (RARE) TD Cowen 46th Annual Health Care Conference March 2, 2026 1:50 PM EST

Company Participants

Eric Crombez – Chief Medical Officer & Executive VP

Conference Call Participants


Yaron Werber – TD Cowen, Research Division

Presentation

Yaron Werber
TD Cowen, Research Division


Okay. Well, good afternoon, everybody, and welcome once again to the 46th Annual TD Cowen Healthcare Conference. I’m Yaron Werber from the biotech team, and it’s a great pleasure to introduce and have with us today, Eric Crombez, who’s Chief Medical Officer and EVP at Ultragenyx.

Eric, good to see you. Thanks for coming.

Eric Crombez
Chief Medical Officer & Executive VP


Thank you.

Question-and-Answer Session


Yaron Werber
TD Cowen, Research Division

So lots going on — maybe we’ll start with Angelman syndrome. That’s going to be the next, I think, one of the big catalysts in the second half, maybe even Q3, the way we’re kind of calculating and trying to back into a more fine-tuned timing. The Aspire study is about 130 patients, 4- to 17-year-olds with a deletion — about 70% of patients fall into that — randomized 1:1 versus sham. The primary endpoint is cognition based on the Bayley-IV. You obviously also have a tandem study, the Aurora study, which we’ll get into in a second. When you’re thinking about powering for a benefit, what’s considered clinically meaningful for cognition?

Eric Crombez
Chief Medical Officer & Executive VP


Yes. So I think, obviously, they’re interconnected, but a little bit different. So I think the best way to think about clinically significant — and for Angelman, with our conversations with the FDA, we’ve shifted to MSD, Meaningful Score Difference. So when we’re setting that threshold — and we specifically needed to do that as part of our MDRI, which is a second primary endpoint for us — and set that MSD, your clinical

