DJI Mic Mini 2 vs DJI Mic Mini: tiny upgrade, massive price cut, but there’s a Mini 2S on the horizon which will add a key feature

We rated the DJI Mic Mini as the best small wireless mic when it was launched in 2024, and it now has a successor in the shape of the Mic Mini 2. Both are 5-star products for content creators wanting an affordable, lightweight, and simple mic for better audio on the go.

If you already own a Mic Mini, there’s very little reason to upgrade to the Mic Mini 2 because performance is practically the same; both mics feature clear 24-bit audio, two-level noise reduction, a transmission range up to 400m, healthy battery life, and a lightweight 11g build.

Amazon brings dark mode to Kindle Colorsoft and Scribe Colorsoft

Amazon has today announced a software update for both the Kindle Colorsoft and Kindle Scribe Colorsoft which will bring dark mode to both e-readers. Even better, users will be able to toggle the settings for specific menus on both devices, so if they want their library dark and their notebook light, they can. Given the option is available on plenty of other Kindle devices, its omission here always felt like something Amazon was just getting around to addressing.

The update also brings Smart Shapes to notebooks, enabling users to add pre-drawn lines, arrows, circles, triangles and rectangles from the toolbar, while a hold-to-snap tool lets you draw a shape freehand, after which it pulls itself into a nice tidy design. Both should help folks who want to add some graphical zing to their note-taking but can’t manage all those fancy journal designs on their own.

The update is rolling out across the ecosystem over the next few days, further empowering would-be journal scribes using these tablets. For tablets like the Kindle Scribe Colorsoft, it’s clear Amazon needs to build out the Scribe half of the equation, which looks like a poor relative compared to its competition. As Cherlynn Low wrote in her review, it’s a fine e-reader, but one that’s sorely lacking in many areas.

The GPS III Rollout Is Almost Complete, But What Is It?

Considering how integral it is to our modern way of life, you could be excused for thinking that the Global Positioning System (GPS) is a product of the smartphone era. But the first satellites actually came online back in 1978, although the system didn’t reach full operational status until April of 1995. While none of the active GPS satellites currently in orbit are quite that old, several of them were launched in the early 2000s — and despite a few tweaks and upgrades, their core technology isn’t far removed from their 1990s-era predecessors.

But in the coming years, that’s finally going to change. Just last week, the tenth GPS III satellite was placed in orbit by a SpaceX Falcon 9 rocket. Once it’s properly configured and operational, it will join its peers to form the first complete “block” of third-generation GPS satellites. Over the next decade, as many as 22 revised GPS III satellites are slated to take their position over the Earth, eventually replacing all of the aging satellites that billions of people currently rely on.

So what new capabilities do these third-generation GPS satellites offer, and why has it taken so long to implement needed upgrades in such a critical system?

GPS Is Good, But Could Be Better

To understand the future of GPS, it’s helpful to look at its past. Developed by the United States military during the Cold War, what we now call GPS was originally known as Navigation System with Timing and Ranging (NAVSTAR). While the intent was always to allow civilian use of NAVSTAR, the equipment necessary to receive the signal and get a position was cumbersome and expensive.

There was little public interest in the system until Korean Air Lines Flight 007 was shot down in 1983 after mistakenly entering the Soviet Union’s airspace. With the lifesaving potential of NAVSTAR clearly evident, pressure started building on the industry to develop smaller and more affordable receivers — GPS as we know it was born.

NAVSTAR Satellite

That the development of such devices was possible in the first place was thanks to the design of NAVSTAR. Each satellite in the constellation broadcasts a timed radio signal which receivers on the ground use to compute their distance from the source. By comparing the signals from multiple satellites, a receiver can plot its position without the need for any local infrastructure. Since the process is entirely one-way, the system could be freely used by any device that can receive and decode the signal.
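
The one-way ranging idea can be sketched in a few lines of code. Below is a deliberately simplified 2D version with made-up anchor coordinates and noise-free ranges; a real receiver solves the same least-squares problem in 3D, with its own clock bias as a fourth unknown, which is why at least four satellites are needed for a fix.

```python
import math

def locate(anchors, ranges, guess=(0.0, 0.0), iters=25):
    """Gauss-Newton least-squares position fix from ranges to known anchors.
    A GPS receiver solves the same problem in 3D with a clock-bias term;
    this 2D toy omits both complications."""
    x, y = guess
    for _ in range(iters):
        a11 = a12 = a22 = b1 = b2 = 0.0
        for (ax, ay), r in zip(anchors, ranges):
            d = math.hypot(x - ax, y - ay) or 1e-12
            ux, uy = (x - ax) / d, (y - ay) / d  # unit vector from anchor
            res = r - d                          # range residual
            a11 += ux * ux; a12 += ux * uy; a22 += uy * uy
            b1 += ux * res; b2 += uy * res
        det = a11 * a22 - a12 * a12
        if abs(det) < 1e-15:
            break  # degenerate anchor geometry
        dx = (a22 * b1 - a12 * b2) / det
        dy = (a11 * b2 - a12 * b1) / det
        x, y = x + dx, y + dy
        if math.hypot(dx, dy) < 1e-10:
            break
    return x, y

# Three "satellites" at known positions, ranges measured to a hidden point.
anchors = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
truth = (30.0, 40.0)
ranges = [math.hypot(truth[0] - ax, truth[1] - ay) for ax, ay in anchors]
print(locate(anchors, ranges, guess=(10.0, 10.0)))
```

With exact ranges the solver recovers the hidden point almost exactly; real receivers additionally weight residuals and reject outliers, which this sketch skips.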

But while this operational simplicity was key to the proliferation of cheap, ubiquitous GPS receivers, there’s certainly room for improvement given more modern technology. When NAVSTAR was designed, knowing where a receiver was located within a radius of a few meters was more than sufficient, but today there’s demand for greater accuracy from both civilian and military users. Given the essentially incalculable value of GPS to the global economy, improving reliability is also paramount. Not only have GPS jamming and spoofing become trivial, but even without the involvement of bad actors, legacy GPS struggles in urban environments.

Plans to deliver improved performance in these areas have been in the works for decades, with the United States Congress first authorizing the work on what would become GPS III all the way back in 2000. But when working on a system so critical that even a few minutes of downtime could put the entire planet into turmoil, such changes don’t come easy.

Can You Hear Me Now?

While modern GPS receivers are more sensitive than those of the past, there’s simply no getting around the fact that signals coming from a satellite more than 20,000 kilometers away will, by their very nature, be weak. Not only is it relatively easy for adverse environmental conditions to block or degrade the signal, but it doesn’t take much to override it with a local transmitter if somebody is looking to cause trouble.

As such, one of the key goals of the GPS III program was to deliver higher transmission power. This will lead to better reception for all GPS users across the board, but the new satellites also offer some special modes that deliver even greater performance.

In addition to the backwards compatible signals transmitted by GPS III satellites, there’s also a new “Safety of Life” signal. This signal is transmitted at a different frequency, 1176 MHz, and at a higher power, so compatible receivers should hear it come in at approximately 3 dB above the “classic” signal. It’s intended primarily for high-performance applications such as aviation, but as compatible receivers get cheaper, it will start to show up in more devices.

These improvements should be enough for civilian use, but the military has higher expectations and operates under more challenging conditions. In such cases, future GPS III satellites will come equipped with a high-gain directional antenna that can project a “spot beam” signal anywhere on Earth. For receivers located within the beam, which is estimated to be a few hundred kilometers in diameter, the received signal from the satellite will be boosted by up to 20 dB. In contested environments, this should make it far more resistant to jamming and spoofing.
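
For a sense of scale, decibel figures convert to linear power ratios as 10^(dB/10), so the numbers quoted above work out to roughly a doubling and a hundredfold boost (simple arithmetic, not anything from the GPS specification):

```python
def db_to_power_ratio(db: float) -> float:
    """Convert a decibel figure to a linear power ratio."""
    return 10 ** (db / 10)

print(db_to_power_ratio(3))   # ~2x: the Safety of Life signal's advantage
print(db_to_power_ratio(20))  # 100x: the military spot-beam boost
```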

Speaking New Languages

The new signals being transmitted by GPS III satellites won’t just be louder than their predecessors, they’ll gain some new features as well.

For one thing, GPS III satellites will transmit a standardized signal known as L1C which offers interoperability with other global navigation systems such as Europe’s Galileo, China’s BeiDou, the Indian Regional Navigation Satellite System (IRNSS), and Japan’s Quasi-Zenith Satellite System. In theory a compatible receiver will be able to process signals from any combination of these systems simultaneously, improving overall performance.

The new satellites will also support the L2C signal. While this signal was technically available on earlier generation satellites, it’s still not considered fully operational, and its adoption is expected to accelerate as more GPS III satellites come online. Compared with the legacy GPS protocol, L2C offers faster signal acquisition, better error correction, and a more capable packet format.

To make GPS III transmissions even more secure, the military is also getting their own signal known as M-code. As you might expect, little is publicly known about M-code currently, but it’s a safe bet that it utilizes encryption and other features to make it more difficult for adversaries to create spoofed transmissions. For what it’s worth, a recent press release from the US Space Force claims that the use of M-code makes the next-generation GPS satellites “three times more accurate and eight times more resistant to jamming than the previous constellation.”

Testing Out New Toys

Although all ten GPS III satellites are now in orbit, that doesn’t mean the constellation is complete. Starting in 2027, a new fleet of revised satellites known as GPS IIIF will start launching. They will take the lessons learned from the initial GPS III deployment to create a smaller, lighter, and more efficient platform that should have a service life of at least 15 years.

Artist impression of a future GPS IIIF satellite.

They’ll also include new in-development equipment that wasn’t quite ready for deployment when the current GPS III satellites were being assembled. This includes optical reflectors that will allow ground stations to more accurately track the position of each satellite, laser data links that will allow high-speed communication between satellites, and an improved atomic clock known as the Digital Rubidium Atomic Frequency Standard (DRAFS).

Of course, the vast majority of the people who use GPS every day will never be aware of all the changes and improvements happening behind the scenes. When they get a new phone with a GPS III-compatible receiver, they may notice that their navigation app locks on a bit faster or that the position shown on the screen is a little closer to where they are actually standing, but only if they are particularly attentive. But that’s entirely by design — the most important aspect of implementing GPS III is making the whole process as invisible as possible.

Amazon Gets Exemption From Trump FCC Router (Extortion) Ban, Doesn’t Say How

from the cybersecurity-shakedown dept

Late last month we noted how the Trump FCC under Brendan Carr announced a new “ban” on all routers made overseas (which means pretty much all of them). At the time, we also noted how this was less of a ban and more of a shakedown, with router manufacturers required to beg the Trump FCC for conditional waivers (fees, favors, whatever) to continue doing business in the States.

Not long after, Netgear, which does a lot of work with the U.S. government, announced it had received an exemption from the Trump FCC, though neither Netgear nor the government transparently indicated what Netgear had to do to get the exemption. Pay a bribe? Host Brendan Carr for a game of golf? Install a surreptitious backdoor for CIA and ICE access? Nobody knows.

Now Amazon is the latest to get an exemption for both its Eero consumer routers and its Leo low Earth orbit (LEO) routers. Amazon showed up on the exemption list, but again there’s absolutely no indication of what the company had to actually do to get it, or the standards the Trump FCC is using to determine what hardware can be trusted. An Amazon announcement is painfully vague:

“We’re pleased to share that the U.S. government has recognized eero as a trusted and secure provider of routers.”

How did this happen? Does anybody trust the Trump administration to make this determination? Are there concerns about backdoors in exchange for being allowed to continue to do business? Nobody knows, though the FCC has indicated the ban has been expanded to include personal hotspots.

This would all likely be less alarming if the Trump administration weren’t aggressively transactional, unethical, and authoritarian. Little to nothing Brendan Carr and Donald Trump do is genuinely for the public interest, and while this ban is being pitched as an act to protect national security, with their other hand they’ve taken countless steps to ensure consumers are less secure than ever.

That’s ranged from the firing of officials responsible for online election security and hack investigations, to the relentless “deregulation” (read: the elimination of corporate oversight) of a U.S. telecom sector that was just the target of one of the worst cybersecurity incidents in U.S. history (in large part because telecom executives failed to change default router admin passwords).

Most press coverage of this new router ban acts as if the Trump FCC is still a trusted actor when it comes to the public interest, but that’s a pretty broad assumption given all the dodgy, unethical, and illegal behavior we’ve seen from the agency and administration more generally.

I don’t think most U.S. journalism is journalism. It’s some weird simulacrum designed to not offend. Why would you not at least include one sentence or paragraph on how nothing about this is transparent? Or that the administration has a bad track record on ethics and transparency?

Similarly, no outlets have been inclined to mention that the Trump administration’s open corruption and mindless dismantling of corporate oversight and consumer protection have most certainly endangered national security and consumer cybersecurity and privacy in ways we’ve not yet begun to calculate. “You can trust us on this,” isn’t something anybody, especially media outlets, should be accepting as an answer.

Get Ready for More Brain-Scanning Consumer Gadgets

The next gadget you put on your head could scan your brain. Neurable, a Boston-based company that embeds its noninvasive brain-scanning technology into hardware to monitor a person’s focus levels, announced on Tuesday that it is transitioning to a licensing platform model. By certifying third parties, Neurable expects its tech to be in a “flood” of consumer gadgets this year and next.

Neurable has until now focused its efforts on a pair of consumer-grade headphones—made in partnership with audio brand Master & Dynamic. It also has a contract with the US Department of Defense to see how its technology can monitor blast overpressure and potentially help diagnose mild traumatic brain injuries in soldiers. With the licensing model, we could see more of Neurable’s tech in everyday head-based wearables.

The headphones use built-in electroencephalography (EEG) sensors to monitor brain waves. That information is sent to a companion app and lets wearers know when they need a “brain break,” nudging them to take a breather before they feel burnt out to maximize productivity. The app also lets users discover their cognitive readiness for the day, their brain age, and other metrics, such as mental recovery, cognitive strain, and anxiety resilience. WIRED staff writer Emily Mullin tested the original headphones in 2024, though she found it difficult to verify the accuracy of Neurable’s algorithms.

Now, HP-owned gaming brand HyperX is releasing a gaming headset with Neurable’s technology, and it’s all about improving human performance in esports gaming. The headphones are purported to help wearers ease into the right state of mind for the best performance. Ramses Alcaide, Neurable cofounder and CEO, tells WIRED that the company has published a white paper showing improved performance among gamers using Neurable’s tech, with reduced response times in first-person shooter games and a small increase in accuracy.

The improvements may sound minor, but milliseconds are precious in the fast-paced world of esports gaming. And Alcaide says it could translate similarly to other fields: It could help a student reduce anxiety before an exam, while athletes could condition their nerves ahead of a race or game. Neurable is hardware-agnostic; Alcaide says it can be embedded in headphones, smart glasses, hats, or helmets. “There’s a whole landscape of technology that touches your head that’s yet to be embedded with our platform,” he says.

He likens it to when Fitbit made the idea of a wrist-worn heart-rate tracker popular. In the beginning, no one knew how fitness wearables would be received, but now no one blinks an eye at one on a wrist. Soon, no one will think twice about brain-scanning tech in headphones—or, at least, that’s the idea. Neurable’s tech is “invisible” in these types of gadgets.

Companies licensing Neurable’s tech can integrate it into existing hardware, Alcaide says, and will control the entire experience from product design to the software experience; these products will be advertised as “Powered by Neurable AI.” The user data still flows to Neurable’s servers for processing, but Neurable sets the data privacy protections. User identifiers are separated from the data, and while partner companies host the user-facing layer, Neurable says it keeps control of the underlying system and data handling. Neurable has previously said its business model is not to sell user data.

“Any time there’s a new transition to technology, there’s always going to be some anxiety,” Alcaide says. “We’ve been very careful when it comes to that transition. We’re protecting the data, being as ethical as possible.”

Neurable is one of many brain-computer interface (BCI) companies in the growing category. Elemind uses EEGs to improve sleep quality, and Sabi wants to turn thoughts into text. Even Apple filed a patent for EEG-sensing AirPods, though they’re not yet available.

At Nvidia, compute already costs more than employees. The rest of corporate America is catching up

At Nvidia, that shift is already visible. “For my team, the cost of compute is far beyond the costs of the employees,” Bryan Catanzaro, vice president of applied deep learning at Nvidia, told Axios.
Towson Apple Store employee union strikes back, alleges unfair treatment after closure

Apple is claiming that the Towson Apple Store employee contract prevents guaranteed employment at other locations, and the union has filed an unfair labor practice charge alleging discrimination over the matter.

IAM Union lobs Unfair Labor Practice charge at Apple after alleged discrimination against unionized Towson workers | Image Credit: IAM

On Monday, the International Association of Machinists and Aerospace Workers (IAM) Union officially filed an Unfair Labor Practice (ULP) charge against Apple. The charge, which has been submitted to the National Labor Relations Board (NLRB), alleges that the company unlawfully discriminated against unionized workers at its Towson, Maryland retail location.
Towson, Maryland was the first unionized Apple retail location in the United States. It was also one of three locations Apple would be closing permanently in June.

‘Human lives are already being lost’: Open letter signed by hundreds of Google employees requests CEO reject ‘unethical and dangerous’ US military AI use

  • Google employees sign open letter to CEO over concerns of military AI use
  • AI developers do not want their technology used for ‘classified purposes’
  • Google is currently negotiating a contract with the Pentagon

Over 600 Google employees have signed a letter calling on CEO Sundar Pichai to reject any uses of its AI technology for military purposes.

The open letter highlights the serious ethical concerns the staff have, stating, “Human lives are already being lost and civil liberties put at risk at home and abroad from misuses of the technology we are playing a key role in building.”

15 best employers in S’pore to grow your career in 2026

On Apr 28, LinkedIn unveiled its 2026 Top Companies list, naming the 15 best places to work in Singapore.

The rankings are based on LinkedIn’s own data, with companies assessed on various elements of career progression, including how well they help employees advance in their careers and build new skills.

Here are this year’s top companies to grow your career in Singapore, according to LinkedIn:

1. DBS Bank

Image Credit: DBS Bank

Claiming the top spot once again is DBS Bank, Southeast Asia’s largest bank. The financial giant is currently hiring for over 200 roles here, including:

You can view their job openings here.

2. Microsoft

Image Credit: Shutterstock.com

Microsoft is a technology company that develops software, hardware and cloud‑based services. Singapore serves as a key regional hub for its Asia‑Pacific operations, supporting customers across consumer, enterprise and public sector markets.

It is also the parent company of Activision Blizzard, GitHub, Skype, LinkedIn and others. LinkedIn and its employees are excluded from Microsoft’s score.

The company is looking for new hires for these positions:

Click here to view their full job list.

3. Goldman Sachs

Image Credit: Paulo Fridman

Goldman Sachs is a financial services firm that provides investment banking, asset management and financial advisory services. It has offices across Asia, including Singapore, serving corporations, governments and institutional investors in the region.

These are some of the jobs the firm is hiring for:

View Goldman Sachs’ full job list here.

4. Roche

Image Credit: SCA Design

Originally founded in Switzerland, Roche is a multinational healthcare company that focuses on research and development of medical solutions for major disease areas such as oncology, immunology, and neuroscience.

Some of its available positions include:

See its full job list here.

5. JPMorgan Chase

Image Credit: Anim Farm via Google

The fifth largest bank in the world, JPMorgan Chase & Company, first opened in Singapore back in 1964 and has established itself as a global financial services firm across 17 markets in the Asia-Pacific region.

The firm is looking for fresh faces for these roles:

Here’s the bank’s full list of available roles.

6. HP

Image Credit: Travel_Adventure/ Shutterstock.com

A heavyweight in the global IT industry, HP is a technology company that manufactures a range of monitors, laptops and desktops. It also produces and offers services around printers and 3D printers.

The tech company is currently looking to fill these roles:

You can browse through HP’s full job listings here.

7. Standard Chartered

Image Credit: Standard Chartered

Another notable bank on the list, Standard Chartered offers banking services across 52 markets worldwide.

The bank’s on the lookout for people to fill these positions:

You can look at Standard Chartered’s full job list here.

8. MSD

Image Credit: MSD

Known as Merck in the United States and Canada, MSD is a pharmaceutical company that specialises in producing prescription medicines, vaccines and animal health products.

MSD is currently hiring for the following roles:

Browse through their full job list here.

9. Genting Berhad

Image Credit: Genting

Genting Berhad is a diversified company with businesses in leisure, hospitality, energy and plantations.

The group’s Singapore subsidiary, Genting Singapore Limited, has a significant presence in the city-state linked to its regional leisure and hospitality activities.

It is currently hiring for these roles in Singapore:

Click here to view their full job list.

10. Alphabet

Image Credit: Shutterstock

Alphabet is the parent company behind tech powerhouses, including Google and YouTube.

It is currently hiring for the positions below:

View Alphabet’s full job list here.

11. Barclays

Image Credit: Shutterstock.com

Barclays is a financial services company providing banking, lending, investment and wealth management services. It serves individuals, businesses and institutional clients through retail and corporate banking operations.

These are some of the roles it is hiring for in Singapore:

You can look at Barclays’ full job list here.

12. Apple

Image Credit: Shutterstock.com

The company behind the all-familiar iPhone, Apple, first opened its facility in Singapore in 1981 and has since grown its presence in the city-state with three outlets in Orchard, Marina Bay Sands and Jewel Changi.

Apple has close to 100 openings listed on LinkedIn as of writing, including:

View all of Apple’s job openings here.

13. Micron Technology

Image Credit: Micron Technology

Micron Technology is a semiconductor company that designs and manufactures memory and storage products. These components are used in computers, mobile devices, data centres and other electronic systems.

The firm is currently hiring for these positions:

Click here to view Micron Technology’s full job list.

14. Rockwell Automation

Image Credit: Shutterstock.com

Rockwell Automation is an industrial technology company that provides hardware, software and services for manufacturing and production operations. Its products help businesses automate processes and manage industrial systems.

In Singapore, it has 38 job openings, including:

View Rockwell Automation’s full job listing here.

15. Citi

Image Credit: Bloomberg

Citi operates as a full-service bank in Singapore. It provides individuals, corporations, governments, investors and institutions with a range of financial products and banking services.

The bank’s on the lookout for people to fill these positions:

You can view their job openings here.

Featured Image Credit: Shutterstock.com/ Micron Technology/ Standard Chartered/ Bloomberg

RAG precision tuning can quietly cut retrieval accuracy by 40%, putting agentic pipelines at risk

Enterprise teams that fine-tune their RAG embedding models for better precision may be unintentionally degrading the retrieval quality those pipelines depend on, according to new research from Redis.

The paper, “Training for Compositional Sensitivity Reduces Dense Retrieval Generalization,” tested what happens when teams train embedding models for compositional sensitivity. That is the ability to catch sentences that look nearly identical but mean something different — “the dog bit the man” versus “the man bit the dog,” or a negation flip that reverses a statement’s meaning entirely. That training consistently broke dense retrieval generalization: how well a model retrieves correctly across broad topics and domains it wasn’t specifically trained on. Performance dropped by 8 to 9 percent on smaller models and by 40 percent on a current mid-size embedding model teams are actively using in production.

The findings have direct implications for enterprise teams building agentic AI pipelines, where retrieval quality determines what context flows into an agent’s reasoning chain. A retrieval error in a single-stage pipeline returns a wrong answer. The same error in an agentic pipeline can trigger a cascade of wrong actions downstream.

Srijith Rajamohan, AI Research Leader at Redis and one of the paper’s authors, said the finding challenges a widespread assumption about how embedding-based retrieval actually works. 

“There’s this general notion that when you use semantic search or similar semantic similarity, we get correct intent. That’s not necessarily true,” Rajamohan told VentureBeat. “A close or high semantic similarity does not actually mean an exact intent.”

The geometry behind the retrieval tradeoff

Embedding models work by compressing an entire sentence into a single point in a high-dimensional space, then finding the closest points to a query at retrieval time. That works well for broad topical matching — documents about similar subjects end up near each other. The problem is that two sentences with nearly identical words but opposite meanings also end up near each other, because the model is working from word content rather than structure.
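
A toy example makes that order-blindness concrete. The integer “word vectors” below are invented for illustration (real encoders are contextual and far less extreme), but mean-pooling tokens into a single vector discards order in exactly the way described:

```python
# Hypothetical word vectors; any assignment shows the same effect.
VECS = {"the": [1, 0], "dog": [9, 2], "bit": [3, 8], "man": [7, 5]}

def embed(sentence: str) -> list[float]:
    """Mean-pool word vectors into one sentence vector, the simplest
    form of the compression dense retrievers perform."""
    words = sentence.lower().split()
    dims = len(next(iter(VECS.values())))
    return [sum(VECS[w][d] for w in words) / len(words) for d in range(dims)]

a = embed("the dog bit the man")
b = embed("the man bit the dog")
print(a == b)  # True: word order is invisible to this representation
```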

That is what the research quantified. When teams fine-tune an embedding model to push structurally different sentences apart — teaching it that a negation flip which reverses a statement’s meaning is not the same as the original — the model uses representational space it was previously using for broad topical recall. The two objectives compete for the same vector.

The research also found the regression is not uniform across failure types. Negation and spatial flip errors improved measurably with structured training. Binding errors — where a model confuses which modifier applies to which word, such as which party a contract obligation falls on — barely moved. For enterprise teams, that means the precision problem is harder to fix in exactly the cases where getting it wrong has the most consequences.

The reason most teams don’t catch it is that fine-tuning metrics measure the task being trained for, not what happens to general retrieval across unrelated topics. A model can show strong improvement on near-miss rejection during training while quietly regressing on the broader retrieval job it was hired to do. The regression only surfaces in production.

Rajamohan said the instinct most teams reach for — moving to a larger embedding model — does not address the underlying architecture.

“You can’t scale your way out of this,” he said. “It’s not a problem you can solve with more dimensions and more parameters.”

Why the standard alternatives all fall short

The natural instinct when retrieval precision fails is to layer on additional approaches. The research tested several of them and found each fails in a different way.

Hybrid search. Combining embedding-based retrieval with keyword search is already standard practice for closing precision gaps. But Rajamohan said keyword search cannot catch the failure mode this research identifies, because the problem is not missing words — it is misread structure.

“If you have a sentence like ‘Rome is closer than Paris’ and another that says ‘Paris is closer than Rome,’ and you do an embedding retrieval followed by a text search, you’re not going to be able to tell the difference,” he said. “The same words exist in both sentences.”
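
That blindness is easy to check: any score built purely on word occurrence, which is what keyword matching reduces to here, sees the two sentences as the same bag of words.

```python
a = "rome is closer than paris".split()
b = "paris is closer than rome".split()
print(sorted(a) == sorted(b))  # True: identical word multisets, opposite meanings
```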

MaxSim reranking. Some teams add a second scoring layer that compares individual query words against individual document words rather than relying on the single compressed vector. This approach, known as MaxSim or late interaction and used in systems like ColBERT, did improve relevance benchmark scores in the research. But it completely failed to reject structural near-misses, assigning them near-identity similarity scores. 

The problem is that relevance and identity are different objectives. MaxSim is optimized for the former and blind to the latter. A team that adds MaxSim and sees benchmark improvement may be solving a different problem than the one they have.

Cross-encoders. These work by feeding the query and candidate document into the model simultaneously, letting it compare every word against every word before making a decision. That full comparison is what makes them accurate — and what makes them too expensive to run at production scale. Rajamohan said his team investigated them. They work in the lab and break under real query volumes.

Contextual memory. Also sometimes referred to as agentic memory, these systems are increasingly cited as the path beyond RAG, but Rajamohan said moving to that type of architecture does not eliminate the structural retrieval problem. Those systems still depend on retrieval at query time, which means the same failure modes apply. The main difference is looser latency requirements, not a precision fix.

The two-stage fix the research validated

The common thread across every failed approach is the same: a single scoring mechanism trying to handle both recall and precision at once. The research validated a different architecture: stop trying to do both jobs with one vector, and assign each job to a dedicated stage.

Stage one: recall. The first stage works exactly as standard dense retrieval does today — the embedding model compresses documents into vectors and retrieves the closest matches to a query. Nothing changes here. The goal is to cast a wide net and bring back a set of strong candidates quickly. Speed and breadth are what matter at this stage, not perfect precision.

Stage two: precision. The second stage is where the fix lives. Rather than scoring candidates with a single similarity number, a small learned Transformer model examines the query and each candidate at the token level — comparing individual words against individual words to detect structural mismatches like negation flips or role reversals. This is the verification step the single-vector approach cannot perform.

The results. Under end-to-end training, the Transformer verifier outperformed every other approach the research tested on structural near-miss rejection. It was the only approach that reliably caught the failure modes the single-vector system missed.
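The division of labor between the two stages can be sketched in a few lines. The verifier below is a stand-in callable — the research uses a small learned Transformer at this step, and the exact-token-order check here is only for illustration:

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def recall_stage(query_vec, corpus, k=10):
    """Stage one: standard dense retrieval. Cast a wide net by
    single-vector similarity — speed and breadth, not precision."""
    ranked = sorted(corpus, key=lambda doc: cosine(query_vec, doc["vec"]), reverse=True)
    return ranked[:k]

def retrieve(query_vec, query_tokens, corpus, verifier, k=10):
    """Stage two: token-level verification filters the candidates.
    `verifier` is any callable returning True to accept a candidate."""
    candidates = recall_stage(query_vec, corpus, k)
    return [doc for doc in candidates if verifier(query_tokens, doc["tokens"])]

corpus = [
    {"tokens": ["rome", "closer", "paris"], "vec": [0.9, 0.1]},
    {"tokens": ["paris", "closer", "rome"], "vec": [0.88, 0.12]},  # structural near-miss
]
query_vec, query_tokens = [0.9, 0.1], ["rome", "closer", "paris"]

# Illustrative verifier: demand exact token order. Both documents
# survive stage one; only the order-matching one survives stage two.
order_check = lambda q, d: q == d

hits = retrieve(query_vec, query_tokens, corpus, order_check)
print([h["tokens"] for h in hits])  # [['rome', 'closer', 'paris']]
```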

The tradeoff. Adding a verification stage costs latency. The latency cost depends on how much verification a team runs. For precision-sensitive workloads like legal or accounting applications, full verification at every query is warranted. For general-purpose search, lighter verification may be sufficient. 

The research grew out of a real production problem. Enterprise customers running semantic caching systems were getting fast but semantically incorrect responses back — the retrieval system was treating similar-sounding queries as identical even when their meaning differed. The two-stage architecture is Redis’s proposed fix, with incorporation into its LangCache product on the roadmap but not yet available to customers.

What this means for enterprise teams

The research does not require enterprise teams to rebuild their retrieval pipelines from scratch. But it does ask them to pressure-test assumptions most teams have never examined — about what their embedding models are actually doing, which metrics are worth trusting and where the real precision gaps live in production.

Recognize the tradeoff before tuning around it. Rajamohan said the first practical step is understanding that the regression exists. He evaluates any LLM-based retrieval system on three criteria: correctness, completeness and usefulness. Correctness failures cascade directly into the other two, which means a retrieval system that scores well on relevance benchmarks but fails on structural near-misses is producing a false sense of production readiness.

RAG is not obsolete — but know what it can’t do. Rajamohan pushed back firmly on claims that RAG has been superseded. “That’s a massive oversimplification,” he said. “RAG is a very simple pipeline that can be productionized by almost anyone with very little lift.” The research does not argue against RAG as an architecture. It argues against assuming a single-stage RAG pipeline with a fine-tuned embedding model is production-ready for precision-sensitive workloads.

The fix is real but not free. For teams that do need higher precision, Rajamohan said the two-stage architecture is not a prohibitive implementation lift, but adding a verification stage costs latency. “It’s a mitigation problem,” he said. “Not something we can actually solve.”


Mistral AI launches Workflows, a Temporal-powered orchestration engine already running millions of daily executions


Mistral AI, the Paris-based artificial intelligence company valued at €11.7 billion ($13.8 billion), today released Workflows in public preview — a production-grade orchestration layer designed to move enterprise AI systems out of proofs of concept and into the business processes that generate revenue.

The product, which launches as part of Mistral’s Studio platform, is the company’s clearest articulation yet of a thesis that is quietly reshaping the enterprise AI market: that the bottleneck for organizations adopting AI is no longer the model itself, but the infrastructure required to run it reliably at scale.

“What we’re seeing today is that organizations are struggling to go beyond isolated proofs of concept,” Elisa Salamanca, who leads go-to-market for Mistral’s enterprise products, told VentureBeat in an exclusive interview ahead of the launch. “The gap is operational. Workflows is the infrastructure to run AI systems reliably across business-critical processes.”

The release arrives at a pivotal moment for both Mistral and the broader AI industry. The dedicated agentic AI market has been valued at approximately $10.9 billion in 2026 and is projected to reach $199 billion by 2034. Yet despite that staggering growth trajectory, industry research points to a stark reality: over 40% of agentic AI projects will be aborted by 2027 due to high costs, unclear value, and complexity. Mistral is betting that Workflows can help its enterprise customers avoid becoming one of those statistics.

Mistral’s new orchestration layer separates execution from control to keep enterprise data private

At its core, Workflows provides a structured system for defining, executing, and monitoring multi-step AI processes — from simple sequential tasks to complex, stateful operations that blend deterministic business rules with the probabilistic outputs of large language models.

Salamanca described Workflows as containing several key components. The first is a development kit that allows engineers to build orchestration logic in just a few lines of Python code. “We have also been able to expose MCP servers,” she explained, referring to the Model Context Protocol standard for connecting AI systems to external tools, “so that they can actually do this with agent authoring.”

The second — and arguably more technically significant — component is an architecture that separates orchestration from execution. “We’re decorrelating the orchestration from the execution,” Salamanca said. “Execution can happen close to the customer’s data — their critical systems — and orchestration can happen on the cloud or wherever they want to run it.” This means the data never has to leave the customer’s perimeter, a design decision with enormous implications for regulated industries where data sovereignty is non-negotiable. “Enterprises do not have to worry about us having access to the data,” she added.

The third pillar is observability. According to Mistral’s blog post announcing the release, every branch, retry, and state change within a workflow is recorded in Studio with native support for OpenTelemetry. Salamanca noted that this is not an afterthought: “You can easily see what decisions have been taken by the workflow, by the agent, and you can deep dive into where problems are happening.”

Workflows is fully customizable across models — engineers can select which model handles which step and can inject arbitrary code, allowing them to blend deterministic pipelines with agentic sections. The system also supports connectors that integrate directly with CRMs, ticketing systems, support platforms, and other enterprise tools, with built-in authentication and secrets management.

Why Mistral chose a code-first approach over low-code drag-and-drop builders

Unlike some competitors offering drag-and-drop workflow builders, Mistral has deliberately targeted developers and engineers rather than business users. “There are a couple of solutions out there that have click-and-drag, drag-and-drop solutions for workflows,” Salamanca acknowledged. “This is not the approach that we’ve been taking. We’ve been really focused towards developers and critical systems that will not scale if you’re doing these drag-and-drop workflows.”

The decision is part of a broader philosophy at Mistral: that enterprise AI systems handling mission-critical operations — cargo releases, compliance reviews, financial transactions — require the precision and version control that only code can provide. Business users are not excluded from the picture, but their role is downstream. Once engineers write a workflow in Python, it can be published to Le Chat, Mistral’s chatbot platform, so anyone in the organization can trigger it. Every step remains tracked and auditable in Studio.

Under the hood, Workflows runs on Temporal’s durable execution engine — a platform whose $5 billion valuation reflects how its durable execution capabilities, originally built for cloud workflow orchestration, have become essential infrastructure for AI agents requiring reliable, long-running, stateful processes. Temporal’s customers include OpenAI, Snap, Netflix, and JPMorgan Chase, and its technology powers orchestration at companies like Stripe and Salesforce.

Mistral extended Temporal’s core engine for AI-specific workloads by adding streaming, payload handling, multi-tenancy, and observability that the base engine does not provide out of the box. “Workflows is built on top of Temporal,” Salamanca confirmed. “We added all the AI requirements to make these AI workflows reliable. It provides out of the box durability, retries, state management. Whenever there’s a failure, it starts again wherever it stopped.”

Originally spun out of Uber’s Cadence project, Temporal transparently handles retries, state persistence, and timeouts, providing durable execution across failures. In late 2025, Temporal joined the newly formed Agentic AI Foundation as a Gold Member and announced an official OpenAI Agents SDK integration. By building on this infrastructure rather than creating a proprietary alternative, Mistral inherits battle-tested reliability while focusing its own engineering efforts on the AI-specific layer that sits above it.
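The replay-and-resume behavior Salamanca describes — “whenever there’s a failure, it starts again wherever it stopped” — can be sketched in plain Python. This is a conceptual illustration of durable execution, not Temporal’s or Mistral’s actual API:

```python
def run_durably(steps, state):
    """Conceptual durable execution: each completed step is checkpointed
    in `state`, so after a crash the run resumes at the first unfinished
    step instead of starting over. A real engine persists `state` externally."""
    for name, fn in steps:
        if name in state["done"]:
            continue  # completed before the failure; skipped on replay
        state["results"][name] = fn(state["results"])
        state["done"].add(name)  # checkpoint after each successful step
    return state["results"]

state = {"done": set(), "results": {}}
log = []

def extract(results):
    log.append("extract")
    return "raw"

def flaky_classify(results):
    log.append("classify")
    if len(log) < 3:  # fail on the first attempt to simulate a crash
        raise RuntimeError("transient failure")
    return results["extract"].upper()

steps = [("extract", extract), ("classify", flaky_classify)]

try:
    run_durably(steps, state)
except RuntimeError:
    pass  # first run dies mid-workflow

results = run_durably(steps, state)  # resume: extract is NOT re-run
print(log)  # ['extract', 'classify', 'classify']
```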

From cargo ships to KYC reviews, customers are already running millions of daily executions

Mistral is not launching Workflows as a concept — the company says customers are already running the product in production, processing millions of executions daily across three primary use cases.

The first is cargo release automation in the logistics sector. Global shipping still runs on paperwork, and a single cargo release can involve customs declarations, dangerous goods classifications, safety inspections, and regulatory checks spanning multiple jurisdictions. Salamanca described the scope of the problem: “Their global shipping today runs on paperwork. They have to involve customs declaration, Dangerous Goods classification, safety inspections, regulatory checks, and Workflows is now powering that with our models and business rules inside.”

Critically, the system keeps humans in the loop at the right moments. According to Mistral’s blog, the human approval step in a workflow is a single line of code — wait_for_input() — that pauses the workflow indefinitely with no compute consumption, notifies the reviewer, and resumes exactly where it left off once approval is given. “Humans are still in the loop, but they’re in the loop at the right time,” Salamanca said. “They just get the validation — I don’t have to go into multiple tools — and the shipment gets released.”
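The suspend-at-the-gate, resume-in-place pattern the blog describes maps naturally onto Python’s generator protocol. The sketch below is a conceptual analogy, not the Workflows SDK — only the `wait_for_input()` name comes from Mistral’s blog, and here a `yield` plays its role:

```python
def cargo_release_workflow():
    """Conceptual pause-and-resume sketch (not the Mistral SDK):
    yielding suspends the workflow with no work in flight; sending a
    value back in resumes it exactly where it stopped."""
    docs = "customs declaration checked"        # automated steps run first
    approval = yield "awaiting human approval"  # stands in for wait_for_input()
    if approval == "approved":
        return f"shipment released ({docs})"
    return "shipment held"

wf = cargo_release_workflow()
status = next(wf)            # runs until the approval gate, then suspends
try:
    wf.send("approved")      # reviewer responds; workflow resumes and finishes
except StopIteration as done:
    print(done.value)        # shipment released (customs declaration checked)
```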

The second production use case is document compliance checking for financial institutions, specifically Know Your Customer reviews. These reviews are manual, repetitive, and traditionally require hours of analyst time per case. Salamanca said Workflows now processes these reviews in minutes and provides outputs in an auditable manner — a requirement for meeting regulatory obligations.

The third example involves customer support in the banking sector. “You’d have millions of users actually asking to have credit cards blocked, or feedbacks on their account situation, on their credit feedbacks,” Salamanca said. With Workflows, incoming support tickets are analyzed, categorized by intent and urgency, and routed automatically. Each routing decision is visible and traceable in Studio, and when the system gets a categorization wrong, the team can correct it at the workflow level without retraining the model.

How Workflows fits into Mistral’s three-layer enterprise AI platform strategy

Workflows does not exist in isolation. It is the middle layer of a three-part enterprise platform that Mistral has been assembling at a rapid clip throughout 2026.

At the bottom sits Forge, the custom model training platform Mistral launched in March at Nvidia’s GTC conference. Forge allows organizations to build, customize, and continuously improve AI models using their own proprietary data. At the top sits Vibe, Mistral’s coding agent platform that provides the user-facing interaction layer — available on web, mobile, or desktop.

Salamanca connected the three explicitly: “We just released Forge. It enables you to create your own models. But the question is, how do you put these models to do valuable work for your enterprise? That’s where Workflows comes in, because this is the orchestration piece — how you blend in deterministic rules and agentic capabilities. And then if you really want to have your end users interact with these AI patterns, it’s where Vibe comes into play.”

Forge is already seeing strong traction, Salamanca said, across two distinct patterns of enterprise demand. “First, they wanted to really build completely dedicated models to solve unique problems — transformers-based architecture for time series in the financial sector, adding new types of modalities to the LLMs,” she explained. “And the second motion was about customers with really specific tasks they want to solve. Reinforcement learning really caught their attention as to how they can use Forge and Forge RL to actually have models do these tasks very well.”

This layered architecture — model customization, workflow orchestration, and end-user interfaces — positions Mistral as something more ambitious than a model provider. It is building a full-stack enterprise AI platform, a strategy that pits it directly against not just other AI labs like OpenAI and Anthropic, but also against the hyperscale cloud providers. The company’s product portfolio now ranges, as Salamanca put it, “from compute to end-user interfaces,” including data centers in Europe, document processing with its OCR model, and audio capabilities through its Voxtral models.

Mistral’s aggressive scaling campaign and the $14 billion valuation powering it

The Workflows launch comes as Mistral executes one of the most aggressive scaling campaigns in the history of the European technology industry. The French AI startup has increased its revenue twentyfold within a year, with co-founder and CEO Arthur Mensch putting the company’s annualized revenue run rate at over $400 million, compared to just $20 million the previous year. The Paris-based company aims to achieve recurring annual revenue of more than $1 billion by year-end.

The company’s fundraising trajectory has been equally dramatic. Mistral announced a €1.7 billion ($1.9 billion) Series C round at an €11.7 billion ($12.8 billion) valuation in September 2025. Bloomberg reported in September 2025 that the company was finalizing a €2 billion investment valuing it at €12 billion ($14 billion). ASML led the round and contributed €1.3 billion, a landmark investment that aligned chip manufacturing expertise with frontier AI development and underscored European industrial capital’s commitment to building a sovereign AI ecosystem. Mistral then secured $830 million in debt in March 2026 to buy 13,800 Nvidia chips for a new data center near Paris.

The financial picture illustrates why Workflows matters strategically. Mistral’s revenue growth is being driven primarily by enterprise adoption, with approximately 60% of revenue coming from Europe, according to CEO Mensch’s public statements. Those enterprise customers are not buying Mistral’s models for casual chatbot applications — they are deploying them in regulated, mission-critical environments where reliability and data sovereignty are table stakes. Workflows gives those customers the production infrastructure they need to actually deploy AI systems that matter.

In May 2025, Mistral released Mistral Medium 3, which was priced at $0.40 per million input tokens and $2 per million output tokens. The company said clients in financial services, energy, and healthcare had been beta testing it for customer service, workflow automation, and analyzing complex datasets. That model now becomes one of many that can be plugged into Workflows, creating a flywheel where better models drive more workflow adoption, which in turn drives more inference revenue.

Where Mistral’s orchestration play fits in an increasingly crowded competitive landscape

Mistral’s entry into workflow orchestration arrives in an increasingly crowded field. AI orchestration platforms are quickly becoming the backbone of enterprise AI systems in 2026, and as businesses deploy multiple AI agents, tools, and LLMs, the need for unified control, oversight, and efficiency has never been greater.

Major cloud providers — Amazon with Bedrock AgentCore, Microsoft with Copilot Studio, Google with Vertex AI’s agent tools, and IBM with WatsonX — all offer some form of workflow or agent orchestration. Open-source frameworks like LangChain, LlamaIndex, and Microsoft AutoGen provide developer-level building blocks. And dedicated orchestration startups are proliferating.

Mistral’s differentiation rests on three pillars. First, vertical integration: because Workflows is native to Studio, the orchestration layer and the components it orchestrates — models, agents, connectors, observability — are built to work together, eliminating the integration tax that enterprises pay when stitching together disparate tools. Second, deployment flexibility: the split control-plane/data-plane architecture means customers in regulated industries can run execution workers in their own environments while still benefiting from managed orchestration. Third, data sovereignty: Mistral’s European roots and infrastructure investments give it a natural advantage with organizations wary of routing sensitive data through U.S.-headquartered cloud providers — a concern that has intensified amid ongoing geopolitical tensions and growing European anxiety about relying on foreign providers for over 80% of digital services and infrastructure.

Still, the challenges are real. OpenAI and Anthropic both have significantly larger model ecosystems and developer communities. The hyperscalers control the cloud infrastructure where most enterprise workloads actually run. And the enterprise sales cycles for production-grade AI deployments remain long and complex, requiring deep technical integration work that even well-funded startups can struggle to staff.

What comes next for Workflows — and why Mistral thinks orchestration is the real AI battleground

Salamanca outlined three areas of near-term development. First, Mistral plans to release a more managed version of Workflows that abstracts deployment logic for developers who don’t need granular control over worker placement. “Whenever you want to have this flexibility, you can, but if you want to be able to have this on a managed infrastructure, even if it’s running in your own VPC, this is something that we’re adding,” she said.

Second, the company intends to make Workflows accessible to business users, not just engineers. “With Vibe code, you can actually author a workflow. This can be executed at scale, and any end user, in the end, can actually do that with Workflows,” Salamanca explained. The third area is enterprise guardrails and safety controls for agentic applications — ensuring agents use the correct tools, run with appropriate permissions, and that administrators can enforce policies at scale. “Making sure that we have all these enterprise controls to be able to scale the authoring and the building of these workflows is something we’re actively working on,” she said.

The Python SDK for Workflows (v3.0) is now publicly available. Developers can try the product in Studio and access documentation and demo templates immediately. Mistral will be hosting its inaugural AI Now Summit in Paris on May 27–28, where the company is expected to provide additional details on its platform roadmap.

For three years, the AI industry has been captivated by a single question: who can build the most powerful model? Mistral’s Workflows launch suggests the company has moved on to a different question entirely — one that may prove far more consequential for the enterprises writing the checks. It’s not about which model is smartest. It’s about which one can actually show up for work.
