
Tech

This simple ChatGPT trick forces the AI to poke holes in its own logic

Published on
ChatGPT has a talent for sounding sure of itself. Ask it a question, and it delivers a polished, coherent response. But should you always trust it?

The tone promises authoritative answers, and the confidence is enticing, but it can also mask the fact that the answer is only one possible interpretation of the problem.


Tech

Musi hands Apple big win as judge rules apps can be delisted 'with or without cause'


A lawsuit from music streaming app Musi alleged that Apple had removed its app over unsubstantiated copyright claims, but the case has been dismissed with prejudice.

Musi loses its lawsuit over App Store removal

Apps are removed from the App Store for many reasons, some less clear than others. However, a judge just ruled that Apple can remove an app from the App Store, “with or without cause.”
It’s a significant win for Apple that sets a precedent for potential future lawsuits. US District Judge Eumi Lee didn’t just rule in Apple’s favor; she tore Musi’s case apart on multiple levels.

Tech

Finance Bros To Tech Bros: Don’t Mess With My Bloomberg Terminal


An anonymous reader quotes a report from the Wall Street Journal: A battle of insults and threats has broken out between the tech world and Wall Street. What’s got everyone so worked up? The same thing that starts most fights: business software. A series of social-media posts went viral in recent days with claims that AI has created a worthy — and way cheaper — alternative to the Bloomberg terminal, a computer system that is like oxygen to professional investors. Now “Bloomberg is cooked,” some posters argued as they heralded the arrival of a newly released AI tool from startup Perplexity. […]

The finance bros who worship at the altar of Bloomberg have declared war on the tech evangelists who have put all their faith in AI. To suggest that the terminal is replaceable is “laughable,” said Jason Lemire, who jumped into the conversation on LinkedIn. (Ironically or not, his post also included an AI-generated image of churchgoers praying to the Bloomberg terminal). “It seems quite obvious to me that those propagating that post are either just looking for easy engagement and/or have never worked in a serious financial institution,” he wrote. […] Morgan Linton, the co-founder and CTO of AI startup Bold Metrics and an avid Perplexity Computer user, said it’s rare for a single AI prompt to generate anything close to what Bloomberg does. That said, he added that tools like this can lay “a really good foundation for a financial application. And that really has not been possible before.”

Others aren’t so sure. Michael Terry, an institutional investment manager who used the terminal for more than 30 years, said he used a prompt circulating online to try to vibe code a Bloomberg replica on Anthropic’s Claude. “It was laughable at best, horrific at worst,” he said. Dmitry Shevelenko, Perplexity’s chief business officer, acknowledged there are some aspects of the terminal that can’t be replicated with vibe coding, including some of Bloomberg’s proprietary data inputs. The live chat network, which includes 350,000 financial professionals in 184 countries, would also be hard to re-create, as would the terminal’s data security, reliability and robust support system. “I love Bloomberg. And I know most people that use Bloomberg are very, very loyal and extremely happy,” said Lemire. His message to the techies? “There’s nothing that you can vibe code in a weekend or even like over the course of a year that’s going to come anywhere close.”


Tech

Open source Mamba 3 arrives to surpass Transformer architecture with nearly 4% improved language modeling, reduced latency


The generative AI era began for most people with the launch of OpenAI’s ChatGPT in late 2022, but the underlying technology — the “Transformer” neural network architecture that allows AI models to weigh the importance of different words in a sentence (or pixels in an image) differently and train on information in parallel — dates back to Google’s seminal 2017 paper “Attention Is All You Need.”

Yet while Transformers deliver unparalleled model quality and have underpinned most of the major generative AI models used today, they are computationally gluttonous. They are burdened by quadratic compute and linear memory demands that make large-scale inference an expensive, often prohibitive, endeavor. Hence, the desire by some researchers to improve on them by developing a new architecture, Mamba, in 2023, which has gone on to be included in hybrid Mamba-Transformer models like Nvidia’s Nemotron 3 Super.

Now, the same researchers behind the original Mamba architecture, including leaders Albert Gu of Carnegie Mellon and Tri Dao of Princeton, have released the latest version of their architecture, Mamba-3, as a language model under a permissive Apache 2.0 open source license — making it immediately available to developers, including enterprises for commercial purposes. A technical paper has also been published on arXiv.org.

This model signals a paradigm shift from training efficiency to an “inference-first” design. As Gu noted in the official announcement, while Mamba-2 focused on breaking pretraining bottlenecks, Mamba-3 aims to solve the “cold GPU” problem: the reality that during decoding, modern hardware often remains idle, waiting for memory movement rather than performing computation.


Perplexity (no, not the company) and the newfound efficiency of Mamba 3

Mamba, including Mamba 3, is a type of State Space Model (SSM).

These are effectively a high-speed “summary machine” for AI. While many popular models (like the ones behind ChatGPT) have to re-examine every single word they’ve already seen to understand what comes next—which gets slower and more expensive the longer the conversation lasts—an SSM maintains a compact, ever-changing internal state. This state is essentially a digital “mental snapshot” of the entire history of the data.

As new information flows in, the model simply updates this snapshot instead of re-reading everything from the beginning. This allows the AI to process massive amounts of information, like entire libraries of books or long strands of DNA, with incredible speed and much lower memory requirements.
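
In code, that “mental snapshot” is just a fixed-size state vector updated once per token. The following is a minimal, illustrative sketch of a discretized linear SSM recurrence; the dimensions and random parameters are hypothetical, not Mamba’s actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
d_state, d_in = 16, 4                           # hypothetical sizes

A = rng.normal(size=(d_state, d_state)) * 0.1   # state transition matrix
B = rng.normal(size=(d_state, d_in))            # input projection
C = rng.normal(size=(1, d_state))               # output projection

def run_ssm(xs):
    """Process a sequence with O(1) memory: only the state h persists."""
    h = np.zeros(d_state)
    ys = []
    for x in xs:             # one update per token, no re-reading of history
        h = A @ h + B @ x    # fold the new input into the snapshot
        ys.append(float(C @ h))
    return ys

seq = rng.normal(size=(100, d_in))
out = run_ssm(seq)
print(len(out))  # 100 outputs, yet memory use never grew with sequence length
```

The key contrast with attention is in the loop body: the cost of each step is constant, regardless of how long the sequence has already run.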

To appreciate the leap Mamba-3 represents, one must first understand perplexity, the primary metric used in the research to measure model quality.


In the context of language modeling, perplexity is a measure of how “surprised” a model is by new data.

Think of a model as a professional gambler. If a model has high perplexity, it is unsure where to place its bets; it sees many possible next words as equally likely.

A lower perplexity score indicates that the model is more “certain”—it has a better grasp of the underlying patterns of human language. For AI builders, perplexity serves as a high-fidelity proxy for intelligence.
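
Concretely, perplexity is the exponential of the average negative log-likelihood the model assigns to the true next tokens. A toy illustration with made-up probabilities:

```python
import math

# Probability the model assigned to each actual next token (hypothetical values)
confident = [0.9, 0.8, 0.95, 0.85]   # model usually "bets" on the right word
unsure    = [0.2, 0.1, 0.25, 0.15]   # many words looked equally likely

def perplexity(probs):
    # Exponential of the mean negative log-likelihood
    nll = -sum(math.log(p) for p in probs) / len(probs)
    return math.exp(nll)

print(perplexity(confident))  # ~1.15: low surprise, strong grasp of the data
print(perplexity(unsure))     # ~6.04: high surprise, weak predictions
```

A perplexity of N can be read loosely as the model hesitating between N equally plausible next words at each step.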

The breakthrough reported in the Mamba-3 research is that it achieves comparable perplexity to its predecessor, Mamba-2, while using only half the state size. This means a model can be just as smart while being twice as efficient to run.


A new philosophy

Mamba 3 architecture diagram

Mamba 3 architecture diagram. Credit: Tri Dao

The philosophy guiding Mamba-3 is a fundamental shift in how we think about AI “intelligence” versus the speed of the hardware it runs on. While the previous generation, Mamba-2, was designed to be trained at record-breaking speeds, Mamba-3 is an “inference-first” architecture — inference referring to the way AI models are served to end users, through websites like ChatGPT or Google Gemini, or through application programming interfaces (APIs).

Mamba 3’s primary goal is to maximize every second the computer chip (GPU) is active, ensuring that the model is thinking as hard as possible without making the user wait for an answer.

In the world of language models, every point of accuracy is hard-won. At the 1.5-billion-parameter scale, the most advanced “MIMO” variant of Mamba-3 achieved a 57.6% average accuracy across benchmarks, representing a 2.2-percentage-point leap over the industry-standard Transformer.

Mamba 3 accuracy benchmark chart

Mamba 3 benchmark comparison chart. Credit: Aakash Lahoti, Kevin Y. Li, Berlin Chen, Caitlin Wang, Aviv Bick, J. Zico Kolter, Tri Dao, Albert Gu

While a two-point jump might sound modest, it actually represents a nearly 4% relative increase in language modeling capability compared to the Transformer baseline. Even more impressively, as alluded to above, Mamba-3 can match the predictive quality of its predecessor while using only half the internal “state size,” effectively delivering the same level of intelligence with significantly less memory lag.

For years, efficient alternatives to Transformers suffered from a “logic gap”—they often failed at simple reasoning tasks, like keeping track of patterns or solving basic arithmetic, because their internal math was too rigid. Mamba-3 solves this by introducing complex-valued states.

This mathematical upgrade acts like an internal compass, allowing the model to represent “rotational” logic. By using this “rotary” approach, Mamba-3 can near-perfectly solve logic puzzles and state-tracking tasks that its predecessors could only guess at, finally bringing the reasoning power of linear models on par with the most advanced systems.


The final piece of the puzzle is how Mamba-3 interacts with physical hardware. Most AI models today are “memory-bound,” meaning the computer chip spends most of its time idle, waiting for data to move from memory to the processor.

Mamba-3 introduces a Multi-Input, Multi-Output (MIMO) formulation that fundamentally changes this dynamic. By performing up to four times more mathematical operations in parallel during each step, Mamba-3 utilizes that previously “idle” power. This allows the model to do significantly more “thinking” for every word it generates without increasing the actual time a user spends waiting for a response. More on these below.

Three new technological leaps

The appeal of linear models has always been their constant memory requirements and linear compute scaling.

However, as the Mamba 3 authors point out, there is “no free lunch”. By fixing the state size to ensure efficiency, these models are forced to compress all historical context into a single representation—the exact opposite of a Transformer’s ever-growing KV cache. Mamba-3 pulls three specific levers to make that fixed state do more work.


1. Exponential-Trapezoidal Discretization

State Space Models are fundamentally continuous-time systems that must be “discretized” to handle the discrete sequences of digital data.

Previous iterations relied on “Exponential-Euler” discretization—a heuristic that provided only a first-order approximation of the system.

Mamba-3 introduces a generalized trapezoidal rule, providing second-order accurate approximation. This isn’t just a mathematical refinement; it induces an “implicit convolution” within the core recurrence.

By combining this with explicit B and C bias terms, the researchers were able to remove the short causal convolution that has been a staple of recurrent architectures for years.
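
The difference between the two schemes can be written down directly. As a sketch using textbook numerical-analysis notation (not the paper's exact formulation), discretizing the continuous system \( h'(t) = A\,h(t) + B\,x(t) \) with step size \( \Delta \) gives:

```latex
% Exponential-Euler (first-order), as in earlier Mamba versions:
h_t = e^{\Delta A}\, h_{t-1} + \Delta\, B\, x_t

% Classical trapezoidal rule (second-order): the input is averaged over
% both endpoints of the step, so x_{t-1} re-enters the update:
h_t = \left(I - \tfrac{\Delta}{2} A\right)^{-1}
      \left[ \left(I + \tfrac{\Delta}{2} A\right) h_{t-1}
      + \tfrac{\Delta}{2}\, B \left( x_{t-1} + x_t \right) \right]
```

The appearance of \( x_{t-1} \) alongside \( x_t \) in the second update is the sense in which a convolution over recent inputs is induced implicitly: each state update blends the current and previous inputs, which helps explain why the separate short causal convolution becomes redundant.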


2. Complex-Valued SSMs and the “RoPE Trick”

One of the most persistent criticisms of linear models has been their inability to solve simple state-tracking tasks, such as determining the parity of a bit sequence.

This failure stems from restricting the transition matrix to real numbers, which prevents the model from representing “rotational” dynamics. Mamba-3 overcomes this by viewing the underlying SSM as complex-valued.

Using what the team calls the “RoPE trick,” they demonstrate that a complex-valued state update is mathematically equivalent to a data-dependent rotary embedding (RoPE) applied to the input and output projections.

This allows Mamba-3 to solve synthetic reasoning tasks that were impossible for Mamba-2.
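
The parity task makes the rotation idea concrete. Tracking whether a bit string contains an even or odd number of 1s is trivial for a state that can rotate 180 degrees per 1-bit, but impossible for a state that can only be scaled by a positive real number. A toy illustration of the principle, not the model’s actual mechanism:

```python
import numpy as np

def parity_by_rotation(bits):
    """Track parity of a bit sequence with a 2D rotating state."""
    theta = np.pi
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])  # 180-degree rotation
    h = np.array([1.0, 0.0])        # state starts at "even"
    for b in bits:
        if b == 1:
            h = rot @ h             # each 1 flips the state to the other side
    return "even" if h[0] > 0 else "odd"

print(parity_by_rotation([1, 0, 1, 1]))  # odd  (three 1s)
print(parity_by_rotation([1, 1, 0, 0]))  # even (two 1s)
```

A purely real, positive transition (multiplying the state by some decay factor) can shrink or grow the state but never flip it, which is exactly why real-valued linear models fail at parity.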


3. MIMO: Boosting Arithmetic Intensity

The most significant leap in inference efficiency comes from the transition from Single-Input, Single-Output (SISO) to Multi-Input, Multi-Output (MIMO) SSMs.

In a standard SSM, the state update is an outer-product operation that is heavily memory-bound. By switching to a matrix-multiplication-based state update, Mamba-3 increases the “arithmetic intensity” of the model—the ratio of FLOPs to memory traffic.

This allows the model to perform more computation during the memory-bound decoding phase. Essentially, Mamba-3 utilizes the “idle” compute cores of the GPU to increase model power for “free,” maintaining the same decoding speed as its simpler predecessors.
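
Arithmetic intensity can be estimated by counting FLOPs against bytes moved. A back-of-the-envelope comparison between a rank-1 (SISO-style) update and a rank-r (MIMO-style) update, using hypothetical sizes rather than the paper’s actual configuration:

```python
def intensity(flops, bytes_moved):
    """Arithmetic intensity: useful work per byte of memory traffic."""
    return flops / bytes_moved

d_state, d_head = 128, 64
bytes_per = 2                                    # fp16 elements

# Rank-1 outer-product update: one multiply-add per state element written.
siso_flops = 2 * d_state * d_head
state_bytes = d_state * d_head * bytes_per * 2   # read + write the state once

# Rank-r update: r times the FLOPs over the SAME state traffic.
r = 4
mimo_flops = 2 * d_state * d_head * r

print(intensity(siso_flops, state_bytes))  # 0.5 FLOP/byte: memory-bound
print(intensity(mimo_flops, state_bytes))  # 2.0 FLOP/byte: 4x more work per byte
```

Because the state is read and written once either way, the extra computation in the MIMO case rides along with memory traffic the GPU was already paying for.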

What Mamba 3 means for enterprises and AI builders

For enterprises, Mamba-3 represents a strategic shift in the total cost of ownership (TCO) for AI deployments.

  • Cost vs. Performance: At matched parameter counts, Mamba-3 (MIMO) matches the perplexity of Mamba-2 while using half the state size. For enterprise deployment, this effectively doubles the inference throughput for the same hardware footprint.

  • Agentic Workflows: As organizations move toward parallel, agentic workflows (like automated coding or real-time customer service agents), the demand for low-latency generation increases exponentially. Mamba-3 is designed specifically to prevent GPU hardware from sitting “cold” during these tasks.

  • The Hybrid Advantage: The researchers predict that the future of enterprise AI lies in hybrid models. By interleaving Mamba-3 with self-attention, organizations can combine the efficient “memory” of SSMs with the precise “database” storage of Transformers.
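
At the architecture level, a hybrid stack like the one the researchers describe is just an interleaving pattern over layer types. A schematic sketch, where the layer names and the one-in-four ratio are illustrative rather than the paper’s recipe:

```python
def build_layer_plan(n_layers, attn_every=4):
    """Interleave cheap SSM layers with occasional attention layers."""
    plan = []
    for i in range(n_layers):
        # Most layers use the constant-memory SSM "summary"; every few
        # layers, full attention provides precise recall over the context.
        plan.append("attention" if (i + 1) % attn_every == 0 else "mamba3")
    return plan

print(build_layer_plan(8))
# ['mamba3', 'mamba3', 'mamba3', 'attention',
#  'mamba3', 'mamba3', 'mamba3', 'attention']
```

Since memory cost is dominated by the attention layers’ KV cache, lowering their share of the stack directly lowers the cost of long contexts.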

Availability, licensing, and usage

Mamba-3 is not merely a theoretical research paper; it is a fully realized, open-source release available for immediate use with model code published on GitHub.

The project is released under the Apache-2.0 License. This is a permissive, business-friendly license that allows for free usage, modification, and commercial distribution without requiring the disclosure of proprietary source code.

This release is good for developers building long-context applications, real-time reasoning agents, or those seeking to reduce GPU costs in high-volume production environments.

Leading the State Space Models (SSM) revolution

The release was met with enthusiasm on social media, particularly regarding the “student-led” nature of the project. Gu, whose X/Twitter bio describes him as “leading the ssm revolution,” gave full credit to the student leads, including Aakash Lahoti and Kevin Y. Li.

Gu’s thread highlighted the team’s satisfaction with the design:

“We’re quite happy with the final model design! The three core methodological changes are inspired by (imo) some elegant math and methods.”

As agentic workflows push inference demand “through the roof,” the arrival of Mamba-3 suggests that the future of AI may not just be about having the biggest model, but about having the most efficient one.

Mamba-3 has successfully re-aligned the SSM with the realities of modern hardware, proving that even in the age of the Transformer, the principles of classical control theory still have a vital role to play.


Tech

Mark Built Diablo 2 as a First-Person Game in Unreal Engine 5


Diablo 2 First Person Mod Unreal Engine
Mark of I Make Games chose to rebuild Diablo 2 from the ground up in Unreal Engine 5, but with one major difference: the entire game is played in first person. A clean heads-up display sits at the bottom of the screen, displaying your current location, an experience bar that ticks upward as you fight monsters, skill slots, glowing potion icons, and a stamina meter that drains anytime you push yourself too far.



Mark has been adding spells to the mix as well, with Fireball letting you watch the projectile arc through the air and detonate on impact, and Teleport doing exactly what it sounds like, making your character vanish and reappear somewhere else in the blink of an eye.


There’s also sliding, which allows you to glide down slopes or across slippery floors to maintain speed, because you never know when you’ll need to escape quickly. Climbing allows you to scale narrow ledges or sneak into concealed routes, which is ideal for continued exploration. Meanwhile, dismemberment is already at work on the evil guys, so when you smack them hard enough, their pitiful limbs just fly off.


Teleport, of course, allows you to simply walk through walls for a variety of nefarious purposes, and then there’s Whirlwind, the hapless barbarian spinning around in circles with blades out, mowing down all comers. Lightning is the other new ability, which fires bolts back and forth between targets with impressive visual effects to keep you on your toes. Both were slightly tweaked to ensure proper timing. Mark does use a few pre-made character meshes to save time, but for everything else, he browses the Unreal Marketplace like a kid in a candy store.

During testing, you can switch the camera to third person for a brief look, but Mark prefers to maintain the focus on the first-person experience. Visual effects will have to wait till things are a little more established. As it stands, Mark is only adding new regions and powers to his channel one at a time, creating a gradual but constant trickle of advancement, and the followers are already getting antsy; who knows, maybe one day they’ll get to check it out for themselves.

Tech

Meta will shut down VR Horizon Worlds access in June


Horizon Worlds, Meta’s first pass at a metaverse, will be inaccessible via virtual reality headset after June 15, 2026. In February, the company shared plans to separate Horizon Worlds from the Quest VR platform and focus exclusively on the smartphone version of the app; now, in a new post on its community forums, Meta has detailed when the VR version of Horizon Worlds will be deprecated.

By March 31, Meta says individual Horizon Worlds and Events will no longer be listed in the Quest Store, and headset owners will be unable to visit worlds like “Horizon Central, Events Arena, Kaiju and Bobber Bay.” Then, after June 15, the app will be removed from Quest headsets and worlds will be completely unavailable to visit in VR. From that point on, the easiest place to visit Horizon Worlds will be the Meta Horizon app for iOS and Android.

Additionally, Hyperscape Capture, a recently added beta feature that allows Quest headset owners to capture, share and visit each other in detailed 3D scans of real-life locations, is also being removed from Horizon Worlds. Meta says users will still be able to capture and view Hyperscapes, “but sharing, inviting, and co-experiencing Hyperscapes with others will no longer be supported.”

While Meta’s original blog detailing its 2026 VR strategy left open the possibility that a committed Quest owner might still be able to access some part of Meta’s original VR metaverse, that apparently was never the company’s plan. Meta saw enough “positive momentum” focusing on supporting the mobile version of Horizon Worlds in 2025 that it made sense to completely abandon the VR one in 2026. While that seems to run contrary to Meta’s positioning as a “metaverse company,” it does reflect where the company is spending the most money and seeing the most (relative) success: AI and smart glasses.


Tech

AI Job Loss Research Ignores How AI Is Utterly Destroying the Internet


An anonymous reader quotes a report from 404 Media, written by Jason Koebler: Over the last few months, various academics and AI companies have attempted to predict how artificial intelligence is going to impact the labor market. These studies, including a high-profile paper published by Anthropic earlier this month, largely try to take the things AI is good at, or could be good at, and match them to existing job categories and job tasks. But the papers ignore some of the most impactful and most common uses of AI today: AI porn and AI slop.

Anthropic’s paper, called “Labor market impacts of AI: A new measure and early evidence,” essentially attempts to find 1:1 correlations between tasks that people do today at their jobs and things people are using Claude for. The researchers also try to predict if a job’s tasks “are theoretically possible with AI,” which resulted in this chart, which has gone somewhat viral and was included in a newsletter by MSNOW’s Phillip Bump and threaded about by tech journalist Christopher Mims. (Because everything is terrible, the research is now also feeding into a gambling website where you can see the apparent odds of having your job replaced by AI.) In his thread, Mims makes the case that the “theoretical capability” of AI to do different jobs in different sectors is totally made up, and that this chart basically means nothing. Mims makes a good and fair observation: The nature of the many, many studies that attempt to predict which people are going to lose their jobs to AI are all flawed because the inputs must be guessed, to some degree.

But I believe most of these studies are flawed in a deeper way: They do not take into account how people are actually using AI, though Anthropic claims that that is exactly what it is doing. “We introduce a new measure of AI displacement risk, observed exposure, that combines theoretical LLM capability and real-world usage data, weighting automated (rather than augmentative) and work-related uses more heavily,” the researchers write. This is based in part on the “Anthropic Economic Index,” which was introduced in an extremely long paper published in January that tries to catalog all the high-minded uses of AI in specific work-related contexts. These uses include “Complete humanities and social science academic assignments across multiple disciplines,” “Draft and revise professional workplace correspondence and business communications,” and “Build, debug, and customize web applications and websites.” Not included in any of Anthropic’s research are extremely popular uses of AI such as “create AI porn” and “create AI slop and spam.” These uses are destroying discoverability on the internet and causing cascading societal and economic harms. “Anthropic’s research continues a time-honored tradition by AI companies who want to highlight the ‘good’ uses of AI that show up in their marketing materials while ignoring the world-destroying applications that people actually use it for,” argues Koebler. “Meanwhile, as we have repeatedly shown, huge parts of social media websites and Google search results have been overtaken by AI slop. Chatbots themselves have killed traffic to lots of websites that were once able to rely on ad revenue to employ people, so on and so forth…”

“This is all to say that these studies about the economic impacts of AI are ignoring a hugely important piece of context: AI is eating and breaking the internet and social media,” writes Koebler, in closing. “We are moving from a many-to-many publishing environment that created untold millions of jobs and businesses towards a system where AI tools can easily overwhelm human-created websites, businesses, art, writing, videos, and human activity on the internet. What’s happening may be too chaotic, messy, and unpleasant for AI companies to want to reckon with, but to ignore it entirely is malpractice.”


Tech

MSI Prestige 14 AI+ business laptop review



MSI Prestige 14 AI+: Two-minute review

The MSI Prestige 14 AI+ is a sleek business-focused laptop with a premium design that manages an interesting and useful mix of the features and performance you need, but skips a lot of the bloat.

As the name suggests, it’s a 14-inch laptop, and it’s aimed at users on the go who need a thin and light machine that still offers decent performance and battery life. The Prestige 14 measures in at 31.6 x 22.2 x 1.2 – 1.4cm (12.4 x 8.7 x 0.47 – 0.55 inches) and weighs 1.32kg (2.91 lbs) — an excellent size for portability without being too small. Compared to the non-Windows competition, it’s chunkier than a MacBook Air, but is slimmer and lighter than a MacBook Pro.


Tech

Europe sanctions Chinese and Iranian firms for cyberattacks



The Council of the European Union has sanctioned three Chinese and Iranian companies and two individuals for cyberattacks targeting devices and critical infrastructure.

One of the two sanctioned Chinese companies, identified as Integrity Technology Group, provided “technical and material support” between 2022 and 2023 that led to the hacking of more than 65,000 devices in six EU states.

The other Chinese company is Anxun Information Technology, which provided hacking services targeting “critical infrastructure and critical functions of member states and third countries.”

The two individuals added to the Council’s sanctions list are the co-founders of Anxun Information Technology, believed to have played a significant role in cyberattacks against EU member states.


The sanctioned Iranian company is Emennet Pasargad, to which multiple influence campaigns and the compromise of an SMS service in Sweden have been attributed.

Emennet Pasargad has been involved in hijacking advertising billboards to spread misinformation during the 2024 Paris Olympics.

According to Microsoft, using the moniker Holy Souls on a hacker forum, the actor also offered in early January 2023 to sell personal information of 230,000 subscribers of the French magazine Charlie Hebdo.

Holy Souls asked for 20 bitcoins, worth around $340,000 at the time, and published a sample of the stolen details, which included Charlie Hebdo subscriber names and addresses.


Emennet Pasargad is believed to have provided cybersecurity services for the Iranian government and has a long history of influence campaigns. In November 2021, the U.S. Department of Justice offered a $10 million reward for two Iranian nationals who worked as contractors for the company.

“Those listed today under both regimes are subject to an asset freeze, and EU citizens and companies are forbidden from making funds, financial assets, or economic resources available to them. Natural persons also face a travel ban that prohibits them from entering or transiting through EU territories,” notes the European Council.

Integrity Technology Group was connected by the FBI in 2024 to the ‘Raptor Train’ botnet, believed to be operated by the Chinese state-sponsored threat actor ‘Flax Typhoon.’

In January 2025, the U.S. Treasury Department sanctioned the company for its involvement in these cyberattacks, which grew Raptor Train into a massive network of 260,000 infected devices.

In March 2025, the U.S. Justice Department charged employees of Anxun Information Technology (also known as i-Soon) with advertising hacker-for-hire services and carrying out cyberattacks since at least 2011.


In mid-February 2024, i-Soon suffered a data leak that exposed the company’s internal operations as a China-affiliated hacking contractor and its offensive toolkit.

The U.S. authorities also announced rewards of up to $10 million for valid information leading to the location of 10 Anxun Information Technology executives and technical staff members.

The European Union started imposing cyber sanctions in 2019 and, as of today, the restrictions target 19 individuals and seven entities responsible for malicious cyber activities.



Tech

Samsung's Galaxy Z TriFold was a hit with buyers, but it's still shutting down after three months



From the outset, Samsung positioned the TriFold as an experimental, tightly controlled product rather than a mass-market flagship. Early batches in Korea were limited to around 3,000 units per release, each selling out within minutes on Samsung’s online store.

Tech

Europe’s most impactful AI startups


The 2026 AWS Pioneers cohort spans healthcare, climate, and conflict zones, and lands alongside a stark warning that Europe risks losing its best innovators if the regulatory environment doesn’t change.


Amazon Web Services announced today the second annual cohort of its Pioneers Project: twelve European companies using AI and cloud infrastructure to tackle problems that range from the molecular to the geopolitical.

One maps unmapped ocean floor with zero-emission autonomous vessels. Another warns two million civilians in northwest Syria when an airstrike is incoming. A third can diagnose rare leukaemia subtypes in hours rather than the weeks it typically takes.

The announcement is tied to a new AWS-commissioned study, “Unlocking Europe’s AI Potential”, conducted by research firm Strand Partners across 17 European markets and 34,000 respondents.


Its headline figures are bullish: 91% of AI-first startups surveyed say AI has accelerated their innovation and 89% report productivity gains. But the report also surfaces a harder finding: 38% of European startups would consider relocating outside Europe to scale, rising to 51% among the fastest-growing cohort.


When asked what would persuade them to stay, 65% cited a clearer and more proportionate regulatory environment. The research figures are self-reported from an AWS-commissioned survey and should be read with that context in mind.

The twelve companies named span France, Germany, Ireland, the Netherlands, Portugal, Sweden, and the UK, and were selected, AWS says, for placing measurable global impact at the heart of their work rather than for commercial scale alone.

The most immediately striking entry is MLL Munich Leukaemia Laboratory, a German diagnostics organisation that combines genomics at cloud scale with deep haematological expertise to diagnose rare leukaemia subtypes in hours or days.

The company says it has analysed over 1.4 million cases to date, though that figure comes from AWS’s own press materials and has not been independently verified.


XOCEAN, the Irish company, operates a global fleet of autonomous surface vessels roughly the size of a car, powered by battery and solar rather than a crew.

The company has been deploying these in offshore wind surveys for clients including SSE Renewables, Ørsted, BP, and Shell, and says its vessels emit a fraction of the carbon of conventional survey ships.

AWS describes XOCEAN as operating across 23 jurisdictions; the company’s own public materials confirm a global footprint spanning Ireland, the UK, Norway, the US, Canada, and Australia, though the 23-jurisdiction figure comes from the press release alone.

Hala Systems, headquartered in Lisbon, began in Syria. Its Sentry platform, an indication and warning system combining acoustic sensors, volunteer observer networks, AI prediction, and remotely activated sirens, has provided advance warning of airstrikes to civilians in northwest Syria and, more recently, has contributed to war crimes documentation efforts in Ukraine.


The Smithsonian’s National Air and Space Museum has acquired Sentry hardware for its collection; the system is the subject of the world’s first ICC Article 15 war crimes dossier featuring cryptographically secured evidence, according to the company.

myTomorrows, the Dutch healthtech company, runs an AI-powered platform connecting patients and physicians to clinical trials and expanded access programmes for pre-approval treatments.

AWS’s press release states the company has helped over 17,700 patients in 135 countries; the most recent independently verifiable figures, from a November 2025 press release at the time of the company’s €25 million funding round, put the number at approximately 16,900 patients across 133 countries.

The figures will have grown since then, and the direction is consistent, but editors should confirm the current number directly with myTomorrows before publication.


Quandela, the French quantum computing company, is building photonic quantum machines that operate at room temperature and use existing fibre networks, a design choice that distinguishes it from most quantum computing approaches, which require cooling to near absolute zero.

The inclusion of a quantum computing startup in a cohort alongside humanitarian and climate companies is a reflection of AWS’s broader argument that deep infrastructure investment and societal benefit are not in tension.

The remaining seven companies are: Callyope (France), which uses AI to detect early signs of mental health relapse before a crisis; CareMates (Germany), which has cut hospital patient admission time from five hours to one using AI-powered software; ETERNO (Germany), whose AI assistant LENI is designed to help clinicians make better use of brief consultations; Iktos (France), which combines AI with laboratory robotics to accelerate drug molecule design; Mindflow (France), an enterprise automation platform that bundles AI agents, no-code workflows, and over 4,000 integrations; Paebbl (Sweden and the Netherlands), which accelerates natural mineralisation to reduce the carbon footprint of concrete; and Proximie (UK), a surgical coordination platform aimed at the estimated five billion people who currently lack access to safe surgery.

“These innovators are advancing Europe’s position as a global AI leader, mapping the oceans, revolutionising patient care, accelerating drug discovery, and predicting imminent threats to help save lives,” said Sasha Rubel, who AWS describes as its Head of AI and Generative AI Policy for EMEA. 


The research report accompanying the announcement attempts to quantify what Europe stands to lose if its AI startups leave.

It cites an estimate that cloud-enabled AI could generate €1.5 trillion of global GDP by 2030, and notes a readiness gap: 78% of startups say they are prepared for agentic AI, compared with just 19% of businesses overall. Both figures are from the AWS-commissioned Strand Partners study and carry the usual caveats of self-reported, sponsor-funded research.

AWS also used the announcement to highlight existing commitments: $1 billion in cloud credits for startups developing generative AI solutions, and $100 million over five years to support underserved learners through its Education Equity Initiative.

Whether those commitments are enough to address the relocation pressures the same report identifies is a question the Pioneers cohort itself may eventually answer.

