Honor 600 Pro vs Samsung Galaxy S26: Which should you get?

Although it’s currently only available to buy in Malaysia, we’re keen to see how the top-end Honor 600 Pro compares to the Samsung Galaxy S26.

Ahead of our review, we’ve compared the specs of the two Android phones and highlighted all the noteworthy differences between them below. Keep reading to learn more about what separates the Honor 600 Pro from the Samsung Galaxy S26.

Not sold on either handset? Our best Android phones and best mid-range phones should have you covered.

Price and Availability

At the time of writing, the Honor 600 Pro is only available to buy in Malaysia, although it will see a wider global launch in the coming weeks.

The Galaxy S26 is part of Samsung’s 2026 flagship S-series. With a starting RRP of £899/$899, the S26 is the most affordable of the line-up. 

Snapdragon 8 Elite vs Exynos 2600

Samsung caused something of a stir when the S26 series first launched, as it was revealed that while US customers get the Snapdragon 8 Elite Gen 5 for Galaxy across the entire line-up, elsewhere the Galaxy S26 and S26 Plus are fitted with the Exynos 2600.

Even so, Samsung promises there shouldn’t be any differences in performance between Exynos 2600 and Snapdragon 8 Elite Gen 5. In fact, we found that the phone benchmarked strongly against the Snapdragon 8 Elite Gen 5-equipped Ultra variant.

Samsung Galaxy S26. Image Credit (Trusted Reviews)

Otherwise, and as expected, the S26 runs brilliantly for everything from scrolling and taking photos to intense gaming sessions. So, although it may not boast a Qualcomm chip, it’s still a solid performer.

The Honor 600 Pro, meanwhile, runs on Qualcomm’s Snapdragon 8 Elite – the chipmaker’s 2025 flagship. While it will likely struggle to reach the heights of the Snapdragon 8 Elite Gen 5 and Exynos 2600, it’s still a great chip that powered many of the best Android phones last year.

Honor 600 Pro has a larger battery

Samsung isn’t known for equipping its phones with mighty batteries, and the Galaxy S26 is no exception. In fact, with a 4300mAh cell, it’s actually on the smaller side. Having said that, we still found the S26 to be a solid all-day phone, especially for those who average around three to four hours of screen time a day.

Honor 600 Pro in hand. Image Credit (Honor)

In comparison, the Honor 600 Pro sports a mammoth 7000mAh cell. We’ll have to wait until we review the Honor 600 Pro to see how its battery life really measures up, but we’d hope that such a large cell will deliver at least a full day of use per charge, or even two.

Honor 600 Pro is IP68, IP69 and IP69K rated

Durability should be a key consideration when you’re buying a new phone, and it’s fair to say that Honor doesn’t want you to take any chances with the 600 Pro. In fact, with IP68, IP69 and IP69K ratings, not only is the handset dust-tight, but it can survive water submersion and even exposure to high-pressure, high-temperature water jets.

Honor 600 Pro. Image Credit (Honor)

That undoubtedly sounds impressive on paper, but we’d argue that perhaps IP69 and IP69K ratings aren’t that necessary. After all, how often will your phone realistically be exposed to water jets?

Samsung takes a more realistic approach, as the Galaxy S26 is equipped with a simple IP68 rating instead. That means it’s dust-tight and can survive water submersion too. 

Honor 600 Pro has a 200MP main lens

Both phones are fitted with three rear lenses – a main, an ultrawide and a telephoto – however, they differ in their exact resolutions and capabilities.

The Galaxy S26’s camera hardware may seem a bit familiar, as Samsung hasn’t made any major changes in the last few series. This is a shame as, although overall the camera hardware is solid, it’s starting to show its age – especially as it needs to compete with the likes of the best camera phones.

Samsung Galaxy S26. Image Credit (Trusted Reviews)

While the main 50MP lens is the strongest of the three, and can cope across most lighting conditions, the ultrawide is easily the weakest of the bunch. It’s fine in bright conditions, but image quality drops quickly in difficult lighting situations, with images looking grainy and rough. Finally, the 3x telephoto works well in specific scenarios, but it can struggle to completely lock focus on a subject.

The Honor 600 Pro is instead fitted with an impressive-sounding 200MP main camera that’s supported by a 50MP 3.5x zoom and a 12MP ultrawide. As we’re yet to review the Honor 600 Pro, we’ll have to wait and see how the hardware really performs. However, Honor promises that the camera set-up should offer true-to-life colour reproduction, impressive stability and better shots taken at night too.

Honor AI vs Galaxy AI

Is it even a smartphone in 2026 if there isn’t a sprinkling of AI features? Both the Honor 600 Pro and Galaxy S26 are equipped with plenty of AI tools, including access to Google Gemini too. 

You’ll likely have heard of Galaxy AI, as the toolkit is arguably one of the most fleshed out available, with genuinely useful features including an array of photo editing capabilities, Live Translate and more.

Samsung Galaxy S26. Image Credit (Trusted Reviews)

In comparison, the Honor 600 Pro has its own dedicated AI button that opens up what’s arguably Honor’s headline feature: AI Image to Video 2.0. This tool allows you to turn up to three images into a short video, with just a few prompts.

Otherwise, like Galaxy AI, there are photo editing features to remove unwanted objects from images, plus the Honor 600 Pro includes AI scam and deepfake detection too. 

Early Verdict

It’s difficult to compare the Honor 600 Pro and Samsung Galaxy S26 fairly, as we don’t know how much the former will cost in the UK. Having said that, if you want a phone with a larger battery, a fairly recent Qualcomm chip and an intriguing camera set-up, then the Honor 600 Pro could be the one for you.

On the other hand, if you want to play around with a more established set of AI features, want a reliable camera set-up and are in Samsung’s ecosystem, then the Galaxy S26 remains a tough one to beat.

We’ll be sure to update this comparison once we review the Honor 600 Pro.


DJI Mic Mini 2 vs DJI Mic Mini: tiny upgrade, massive price cut, but there’s a Mini 2S on the horizon which will add a key feature

We rated the DJI Mic Mini as the best small wireless mic when it was launched in 2024, and it now has a successor in the shape of the Mic Mini 2. Both are 5-star products for content creators wanting an affordable, lightweight, and simple mic for better audio on the go.

If you already own a Mic Mini, there’s very little reason to upgrade to the Mic Mini 2 because performance is practically the same; both mics feature clear 24-bit audio, two-level noise reduction, a transmission range up to 400m, healthy battery life, and a lightweight 11g build.


‘Human lives are already being lost’: Open letter signed by hundreds of Google employees requests CEO reject ‘unethical and dangerous’ US military AI use

  • Google employees sign open letter to CEO over concerns of military AI use
  • AI developers do not want their technology used for ‘classified purposes’
  • Google is currently negotiating a contract with the Pentagon

Over 600 Google employees have signed a letter calling on CEO Sundar Pichai to reject any uses of its AI technology for military purposes.

The open letter highlights the serious ethical concerns the staff have, stating, “Human lives are already being lost and civil liberties put at risk at home and abroad from misuses of the technology we are playing a key role in building.”


15 best employers in S’pore to grow your career in 2026

On Apr 28, LinkedIn unveiled its 2026 Top Companies list, naming the 15 best places to work in Singapore.

The rankings are based on LinkedIn’s own data, with companies assessed on factors such as how well they help employees progress in their careers and build new skills.

Here are this year’s top companies to grow your career in Singapore, according to LinkedIn:

1. DBS Bank

Image Credit: DBS Bank

Claiming the top spot once again is DBS Bank, Southeast Asia’s largest bank. The financial giant is currently hiring for over 200 roles here, including:

You can view their job openings here.

2. Microsoft

Image Credit: Shutterstock.com

Microsoft is a technology company that develops software, hardware and cloud‑based services. Singapore serves as a key regional hub for its Asia‑Pacific operations, supporting customers across consumer, enterprise and public sector markets.

It is also the parent company of Activision Blizzard, GitHub, Skype, LinkedIn and others. LinkedIn and its employees are excluded from Microsoft’s score.

The company is looking for new hires for these positions:

Click here to view their full job list.

3. Goldman Sachs

Image Credit: Paulo Fridman

Goldman Sachs is a financial services firm that provides investment banking, asset management and financial advisory services. It has offices across Asia, including Singapore, serving corporations, governments and institutional investors in the region.

These are some of the jobs the firm is hiring for:

View Goldman Sachs’ full job list here.

4. Roche

Image Credit: SCA Design

Originally founded in Switzerland, Roche is a multinational healthcare company that focuses on research and development of medical solutions for major disease areas such as oncology, immunology, and neuroscience.

Some of its available positions include:

See its full job list here.

5. JPMorgan Chase

Image Credit: Anim Farm via Google

The fifth-largest bank in the world, JPMorgan Chase & Company first opened in Singapore back in 1964 and has established itself as a global financial services firm across 17 markets in the Asia-Pacific region.

The firm is looking for fresh faces for these roles:

Here’s the bank’s full list of available roles.

6. HP

Image Credit: Travel_Adventure/ Shutterstock.com

A heavyweight in the global IT industry, HP is a technology company that manufactures a range of monitors, laptops and desktops. It also produces and offers services around printers and 3D printers.

The tech company is currently looking to fill these roles:

You can browse through HP’s full job listings here.

7. Standard Chartered

Image Credit: Standard Chartered

Another notable bank on the list, Standard Chartered offers banking services across 52 markets worldwide.

The bank’s on the lookout for people to fill these positions:

You can look at Standard Chartered’s full job list here.

8. MSD

Image Credit: MSD

Known as Merck in the United States and Canada, MSD is a pharmaceutical company that specialises in producing prescription medicines, vaccines and animal health products.

MSD is currently hiring for the following roles:

Browse through their full job list here.

9. Genting Berhad

Image Credit: Genting

Genting Berhad is a diversified company with businesses in leisure, hospitality, energy and plantations.

The group’s Singapore subsidiary, Genting Singapore Limited, has a significant presence in the city-state linked to its regional leisure and hospitality activities.

It is currently hiring for these roles in Singapore:

Click here to view their full job list.

10. Alphabet

Image Credit: Shutterstock

Alphabet is the parent company behind tech powerhouses, including Google and YouTube.

It is currently hiring for the positions below:

View Alphabet’s full job list here.

11. Barclays

Image Credit: Shutterstock.com

Barclays is a financial services company providing banking, lending, investment and wealth management services. It serves individuals, businesses and institutional clients through retail and corporate banking operations.

These are some of the roles it is hiring for in Singapore:

You can look at Barclays’ full job list here.

12. Apple

Image Credit: Shutterstock.com

The company behind the all-familiar iPhone, Apple, first opened its facility in Singapore in 1981 and has since grown its presence in the city-state with three outlets in Orchard, Marina Bay Sands and Jewel Changi.

Apple has close to 100 openings listed on LinkedIn as of writing, including:

View all of Apple’s job openings here.

13. Micron Technology

Image Credit: Micron Technology

Micron Technology is a semiconductor company that designs and manufactures memory and storage products. These components are used in computers, mobile devices, data centres and other electronic systems.

The firm is currently hiring for these positions:

Click here to view Micron Technology’s full job list.

14. Rockwell Automation

Image Credit: Shutterstock.com

Rockwell Automation is an industrial technology company that provides hardware, software and services for manufacturing and production operations. Its products help businesses automate processes and manage industrial systems.

In Singapore, it has 38 job openings, including:

View Rockwell Automation’s full job listing here.

15. Citi

Image Credit: Bloomberg

Citi operates as a full-service bank in Singapore. It provides individuals, corporations, governments, investors and institutions with a range of financial products and banking services.

The bank’s on the lookout for people to fill these positions:

You can view their job openings here.

Featured Image Credit: Shutterstock.com/ Micron Technology/ Standard Chartered/ Bloomberg


RAG precision tuning can quietly cut retrieval accuracy by 40%, putting agentic pipelines at risk

Enterprise teams that fine-tune their RAG embedding models for better precision may be unintentionally degrading the retrieval quality those pipelines depend on, according to new research from Redis.

The paper, “Training for Compositional Sensitivity Reduces Dense Retrieval Generalization,” tested what happens when teams train embedding models for compositional sensitivity: the ability to catch sentences that look nearly identical but mean something different, such as “the dog bit the man” versus “the man bit the dog,” or a negation flip that reverses a statement’s meaning entirely. That training consistently broke dense retrieval generalization, that is, how well a model retrieves correctly across broad topics and domains it wasn’t specifically trained on. Performance dropped by 8 to 9 percent on smaller models and by 40 percent on a current mid-size embedding model teams are actively using in production.

The findings have direct implications for enterprise teams building agentic AI pipelines, where retrieval quality determines what context flows into an agent’s reasoning chain. A retrieval error in a single-stage pipeline returns a wrong answer. The same error in an agentic pipeline can trigger a cascade of wrong actions downstream.

Srijith Rajamohan, AI Research Leader at Redis and one of the paper’s authors, said the finding challenges a widespread assumption about how embedding-based retrieval actually works. 

“There’s this general notion that when you use semantic search or similar semantic similarity, we get correct intent. That’s not necessarily true,” Rajamohan told VentureBeat. “A close or high semantic similarity does not actually mean an exact intent.”

The geometry behind the retrieval tradeoff

Embedding models work by compressing an entire sentence into a single point in a high-dimensional space, then finding the closest points to a query at retrieval time. That works well for broad topical matching — documents about similar subjects end up near each other. The problem is that two sentences with nearly identical words but opposite meanings also end up near each other, because the model is working from word content rather than structure.
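Real embedding models are far richer than word counts, but a toy bag-of-words similarity (our illustration, not from the Redis paper) makes the failure concrete: two sentences built from exactly the same words land on exactly the same vector, no matter what order, and hence what meaning, they have.

```python
from collections import Counter
import math

def bow_cosine(a: str, b: str) -> float:
    """Cosine similarity over naive bag-of-words counts."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    norm = lambda c: math.sqrt(sum(v * v for v in c.values()))
    return dot / (norm(ca) * norm(cb))

# Opposite meanings, identical word content: cosine similarity of ~1.0.
print(bow_cosine("the dog bit the man", "the man bit the dog"))
```

A dense encoder is not literally a bag of words, but to the extent its single vector is dominated by word content rather than structure, meaning-reversed pairs like these end up nearly as close.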

That is what the research quantified. When teams fine-tune an embedding model to push structurally different sentences apart — teaching it that a negation flip which reverses a statement’s meaning is not the same as the original — the model uses representational space it was previously using for broad topical recall. The two objectives compete for the same vector.

The research also found the regression is not uniform across failure types. Negation and spatial flip errors improved measurably with structured training. Binding errors — where a model confuses which modifier applies to which word, such as which party a contract obligation falls on — barely moved. For enterprise teams, that means the precision problem is harder to fix in exactly the cases where getting it wrong has the most consequences.

The reason most teams don’t catch it is that fine-tuning metrics measure the task being trained for, not what happens to general retrieval across unrelated topics. A model can show strong improvement on near-miss rejection during training while quietly regressing on the broader retrieval job it was hired to do. The regression only surfaces in production.

Rajamohan said the instinct most teams reach for — moving to a larger embedding model — does not address the underlying architecture.

“You can’t scale your way out of this,” he said. “It’s not a problem you can solve with more dimensions and more parameters.”

Why the standard alternatives all fall short

The natural instinct when retrieval precision fails is to layer on additional approaches. The research tested several of them and found each fails in a different way.

Hybrid search. Combining embedding-based retrieval with keyword search is already standard practice for closing precision gaps. But Rajamohan said keyword search cannot catch the failure mode this research identifies, because the problem is not missing words — it is misread structure.

“If you have a sentence like ‘Rome is closer than Paris’ and another that says ‘Paris is closer than Rome,’ and you do an embedding retrieval followed by a text search, you’re not going to be able to tell the difference,” he said. “The same words exist in both sentences.”

MaxSim reranking. Some teams add a second scoring layer that compares individual query words against individual document words rather than relying on the single compressed vector. This approach, known as MaxSim or late interaction and used in systems like ColBERT, did improve relevance benchmark scores in the research. But it completely failed to reject structural near-misses, assigning them near-identity similarity scores. 

The problem is that relevance and identity are different objectives. MaxSim is optimized for the former and blind to the latter. A team that adds MaxSim and sees benchmark improvement may be solving a different problem than the one they have.

Cross-encoders. These work by feeding the query and candidate document into the model simultaneously, letting it compare every word against every word before making a decision. That full comparison is what makes them accurate — and what makes them too expensive to run at production scale. Rajamohan said his team investigated them. They work in the lab and break under real query volumes.

Contextual memory. Also sometimes referred to as agentic memory, these systems are increasingly cited as the path beyond RAG, but Rajamohan said moving to that type of architecture does not eliminate the structural retrieval problem. Those systems still depend on retrieval at query time, which means the same failure modes apply. The main difference is looser latency requirements, not a precision fix.

The two-stage fix the research validated

The common thread across every failed approach is the same: a single scoring mechanism trying to handle both recall and precision at once. The research validated a different architecture: stop trying to do both jobs with one vector, and assign each job to a dedicated stage.

Stage one: recall. The first stage works exactly as standard dense retrieval does today — the embedding model compresses documents into vectors and retrieves the closest matches to a query. Nothing changes here. The goal is to cast a wide net and bring back a set of strong candidates quickly. Speed and breadth are what matter at this stage, not perfect precision.

Stage two: precision. The second stage is where the fix lives. Rather than scoring candidates with a single similarity number, a small learned Transformer model examines the query and each candidate at the token level — comparing individual words against individual words to detect structural mismatches like negation flips or role reversals. This is the verification step the single-vector approach cannot perform.

The results. Under end-to-end training, the Transformer verifier outperformed every other approach the research tested on structural near-miss rejection. It was the only approach that reliably caught the failure modes the single-vector system missed.

The tradeoff. Adding a verification stage costs latency. The latency cost depends on how much verification a team runs. For precision-sensitive workloads like legal or accounting applications, full verification at every query is warranted. For general-purpose search, lighter verification may be sufficient. 
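The shape of that pipeline can be sketched in a few lines. This is our simplified illustration of the two-stage idea, not Redis’s implementation: `embed` stands in for any dense encoder, and `verify` stands in for the paper’s small learned Transformer verifier.

```python
from typing import Callable, List

def two_stage_retrieve(
    query: str,
    corpus: List[str],
    embed: Callable[[str], List[float]],   # stage one: any dense encoder
    verify: Callable[[str, str], float],   # stage two: token-level verifier (stub)
    k: int = 20,
    threshold: float = 0.5,
) -> List[str]:
    """Wide-net recall by vector similarity, then structural verification."""
    def cos(a: List[float], b: List[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(x * x for x in b) ** 0.5
        return dot / (na * nb) if na and nb else 0.0

    qv = embed(query)
    # Stage one: bring back the k nearest candidates; speed and breadth matter here.
    candidates = sorted(corpus, key=lambda d: cos(qv, embed(d)), reverse=True)[:k]
    # Stage two: reject structural near-misses the single vector cannot separate.
    return [d for d in candidates if verify(query, d) >= threshold]
```

How much of stage two to run per query is the latency dial the researchers describe: full verification for precision-sensitive workloads, lighter verification for general-purpose search.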

The research grew out of a real production problem. Enterprise customers running semantic caching systems were getting fast but semantically incorrect responses back — the retrieval system was treating similar-sounding queries as identical even when their meaning differed. The two-stage architecture is Redis’s proposed fix, with incorporation into its LangCache product on the roadmap but not yet available to customers.

What this means for enterprise teams

The research does not require enterprise teams to rebuild their retrieval pipelines from scratch. But it does ask them to pressure-test assumptions most teams have never examined — about what their embedding models are actually doing, which metrics are worth trusting and where the real precision gaps live in production.

Recognize the tradeoff before tuning around it. Rajamohan said the first practical step is understanding the regression exists. He evaluates any LLM-based retrieval system on three criteria: correctness, completeness and usefulness. Correctness failures cascade directly into the other two, which means a retrieval system that scores well on relevance benchmarks but fails on structural near-misses is producing a false sense of production readiness.

RAG is not obsolete — but know what it can’t do. Rajamohan pushed back firmly on claims that RAG has been superseded. “That’s a massive oversimplification,” he said. “RAG is a very simple pipeline that can be productionized by almost anyone with very little lift.” The research does not argue against RAG as an architecture. It argues against assuming a single-stage RAG pipeline with a fine-tuned embedding model is production-ready for precision-sensitive workloads.

The fix is real but not free. For teams that do need higher precision, Rajamohan said the two-stage architecture is not a prohibitive implementation lift, but adding a verification stage costs latency. “It’s a mitigation problem,” he said. “Not something we can actually solve.”


Mistral AI launches Workflows, a Temporal-powered orchestration engine already running millions of daily executions

Published

on

Mistral AI, the Paris-based artificial intelligence company valued at €11.7 billion ($13.8 billion), today released Workflows in public preview — a production-grade orchestration layer designed to move enterprise AI systems out of proofs of concept and into the business processes that generate revenue.

The product, which launches as part of Mistral’s Studio platform, is the company’s clearest articulation yet of a thesis that is quietly reshaping the enterprise AI market: that the bottleneck for organizations adopting AI is no longer the model itself, but the infrastructure required to run it reliably at scale.

“What we’re seeing today is that organizations are struggling to go beyond isolated proofs of concept,” Elisa Salamanca, who leads go-to-market for Mistral’s enterprise products, told VentureBeat in an exclusive interview ahead of the launch. “The gap is operational. Workflows is the infrastructure to run AI systems reliably across business-critical processes.”

The release arrives at a pivotal moment for both Mistral and the broader AI industry. The dedicated agentic AI market has been valued at approximately $10.9 billion in 2026 and is projected to reach $199 billion by 2034. Yet despite that staggering growth trajectory, industry research points to a stark reality: over 40% of agentic AI projects will be aborted by 2027 due to high costs, unclear value, and complexity. Mistral is betting that Workflows can help its enterprise customers avoid becoming one of those statistics.

Mistral’s new orchestration layer separates execution from control to keep enterprise data private

At its core, Workflows provides a structured system for defining, executing, and monitoring multi-step AI processes — from simple sequential tasks to complex, stateful operations that blend deterministic business rules with the probabilistic outputs of large language models.

Salamanca described Workflows as containing several key components. The first is a development kit that allows engineers to build orchestration logic in just a few lines of Python code. “We have also been able to expose MCP servers,” she explained, referring to the Model Context Protocol standard for connecting AI systems to external tools, “so that they can actually do this with agent authoring.”

The second — and arguably more technically significant — component is an architecture that separates orchestration from execution. “We’re decorrelating the orchestration from the execution,” Salamanca said. “Execution can happen close to the customer’s data — their critical systems — and orchestration can happen on the cloud or wherever they want to run it.” This means the data never has to leave the customer’s perimeter, a design decision with enormous implications for regulated industries where data sovereignty is non-negotiable. “Enterprises do not have to worry about us having access to the data,” she added.

The third pillar is observability. According to Mistral’s blog post announcing the release, every branch, retry, and state change within a workflow is recorded in Studio with native support for OpenTelemetry. Salamanca noted that this is not an afterthought: “You can easily see what decisions have been taken by the workflow, by the agent, and you can deep dive into where problems are happening.”

Workflows is fully customizable across models — engineers can select which model handles which step and can inject arbitrary code, allowing them to blend deterministic pipelines with agentic sections. The system also supports connectors that integrate directly with CRMs, ticketing systems, support platforms, and other enterprise tools, with built-in authentication and secrets management.

Why Mistral chose a code-first approach over low-code drag-and-drop builders

Unlike some competitors offering drag-and-drop workflow builders, Mistral has deliberately targeted developers and engineers rather than business users. “There are a couple of solutions out there that have click-and-drag, drag-and-drop solutions for workflows,” Salamanca acknowledged. “This is not the approach that we’ve been taking. We’ve been really focused towards developers and critical systems that will not scale if you’re doing these drag-and-drop workflows.”

The decision is part of a broader philosophy at Mistral: that enterprise AI systems handling mission-critical operations — cargo releases, compliance reviews, financial transactions — require the precision and version control that only code can provide. Business users are not excluded from the picture, but their role is downstream. Once engineers write a workflow in Python, it can be published to Le Chat, Mistral’s chatbot platform, so anyone in the organization can trigger it. Every step remains tracked and auditable in Studio.

Under the hood, Workflows runs on Temporal’s durable execution engine — a platform whose $5 billion valuation reflects how its durable execution capabilities, originally built for cloud workflow orchestration, have become essential infrastructure for AI agents requiring reliable, long-running, stateful processes. Temporal’s customers include OpenAI, Snap, Netflix, and JPMorgan Chase, and its technology powers orchestration at companies like Stripe and Salesforce.

Mistral extended Temporal’s core engine for AI-specific workloads by adding streaming, payload handling, multi-tenancy, and observability that the base engine does not provide out of the box. “Workflows is built on top of Temporal,” Salamanca confirmed. “We added all the AI requirements to make these AI workflows reliable. It provides out of the box durability, retries, state management. Whenever there’s a failure, it starts again wherever it stopped.” Originally spun out of Uber’s Cadence project, Temporal transparently handles retries, state persistence, and timeouts, providing durable execution across failures. In late 2025, Temporal joined the newly formed Agentic AI Foundation as a Gold Member and announced an official OpenAI Agents SDK integration. By building on this infrastructure rather than creating a proprietary alternative, Mistral inherits battle-tested reliability while focusing its own engineering efforts on the AI-specific layer that sits above it.

From cargo ships to KYC reviews, customers are already running millions of daily executions

Mistral is not launching Workflows as a concept — the company says customers are already running the product in production, processing millions of executions daily across three primary use cases.

The first is cargo release automation in the logistics sector. Global shipping still runs on paperwork, and a single cargo release can involve customs declarations, dangerous goods classifications, safety inspections, and regulatory checks spanning multiple jurisdictions. Salamanca described the scope of the problem: “Their global shipping today runs on paperwork. They have to involve customs declaration, Dangerous Goods classification, safety inspections, regulatory checks, and Workflows is now powering that with our models and business rules inside.”

Critically, the system keeps humans in the loop at the right moments. According to Mistral’s blog, the human approval step in a workflow is a single line of code — wait_for_input() — that pauses the workflow indefinitely with no compute consumption, notifies the reviewer, and resumes exactly where it left off once approval is given. “Humans are still in the loop, but they’re in the loop at the right time,” Salamanca said. “They just get the validation — I don’t have to go into multiple tools — and the shipment gets released.”
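As a hypothetical sketch of that pattern (durable execution with a human approval gate), consider the toy runtime below. None of these names come from the actual Mistral Workflows SDK, whose API beyond the wait_for_input() call is not described here; the stub Context merely imitates the pause-and-resume behavior the article reports.

```python
# Hypothetical illustration only: these names are invented for this sketch
# and are not the Mistral Workflows SDK. The stub Context imitates the
# pause-and-resume behavior the article describes.

class Context:
    """Toy stand-in for a durable workflow runtime."""
    def __init__(self, pending_inputs=None):
        self.pending_inputs = pending_inputs or {}

    def step(self, fn, *args):
        # A real engine would persist each step's result so a crash or
        # restart resumes from the last completed step.
        return fn(*args)

    def wait_for_input(self, name):
        # A real engine would pause here indefinitely with no compute
        # consumption, notify the reviewer, and resume once input arrives.
        return self.pending_inputs[name]

def kyc_review(ctx, case_id):
    summary = ctx.step(lambda c: f"summary of {c}", case_id)  # model step
    approved = ctx.wait_for_input("analyst_approval")         # human gate
    return "released" if approved else "escalated"

print(kyc_review(Context({"analyst_approval": True}), "case-42"))  # released
```

The point of the pattern is that the approval gate is ordinary control flow inside the workflow definition, while durability, notification, and zero-cost waiting are the engine’s job.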

The second production use case is document compliance checking for financial institutions, specifically Know Your Customer reviews. These reviews are manual, repetitive, and traditionally require hours of analyst time per case. Salamanca said Workflows now processes these reviews in minutes and provides outputs in an auditable manner — a requirement for meeting regulatory obligations.

The third example involves customer support in the banking sector. “You’d have millions of users actually asking to have credit cards blocked, or feedbacks on their account situation, on their credit feedbacks,” Salamanca said. With Workflows, incoming support tickets are analyzed, categorized by intent and urgency, and routed automatically. Each routing decision is visible and traceable in Studio, and when the system gets a categorization wrong, the team can correct it at the workflow level without retraining the model.
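That triage step is a classic blend of model output and deterministic rules. As a hedged sketch — the classifier here is a keyword stub standing in for a model call, and Mistral's real pipeline is not public — routing might look like:

```python
# Toy ticket triage: a stub "classifier" (stand-in for a model call)
# yields intent and urgency, and a deterministic table routes the ticket.
# Fixing a bad route means editing the table, not retraining a model.

def classify(ticket: str) -> tuple[str, str]:
    text = ticket.lower()
    if "block" in text and "card" in text:
        return ("card_block", "high")
    if "credit" in text:
        return ("credit_inquiry", "normal")
    return ("general", "low")

ROUTES = {  # deterministic, auditable routing table
    ("card_block", "high"): "fraud-desk",
    ("credit_inquiry", "normal"): "credit-team",
}

def route(ticket: str) -> str:
    intent, urgency = classify(ticket)
    return ROUTES.get((intent, urgency), "general-queue")

queue = route("Please block my card, it was stolen!")
```

Because every routing decision passes through an explicit table, each one is traceable — the property the article attributes to Studio's visibility into workflow decisions.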

How Workflows fits into Mistral’s three-layer enterprise AI platform strategy

Workflows does not exist in isolation. It is the middle layer of a three-part enterprise platform that Mistral has been assembling at a rapid clip throughout 2026.

At the bottom sits Forge, the custom model training platform Mistral launched in March at Nvidia’s GTC conference. Forge allows organizations to build, customize, and continuously improve AI models using their own proprietary data. At the top sits Vibe, Mistral’s coding agent platform that provides the user-facing interaction layer — available on web, mobile, or desktop.

Salamanca connected the three explicitly: “We just released Forge. It enables you to create your own models. But the question is, how do you put these models to do valuable work for your enterprise? That’s where Workflows comes in, because this is the orchestration piece — how you blend in deterministic rules and agentic capabilities. And then if you really want to have your end users interact with these AI patterns, it’s where Vibe comes into play.”

Forge is already seeing strong traction, Salamanca said, across two distinct patterns of enterprise demand. “First, they wanted to really build completely dedicated models to solve unique problems — transformers-based architecture for time series in the financial sector, adding new types of modalities to the LLMs,” she explained. “And the second motion was about customers with really specific tasks they want to solve. Reinforcement learning really caught their attention as to how they can use Forge and Forge RL to actually have models do these tasks very well.”

This layered architecture — model customization, workflow orchestration, and end-user interfaces — positions Mistral as something more ambitious than a model provider. It is building a full-stack enterprise AI platform, a strategy that pits it directly against not just other AI labs like OpenAI and Anthropic, but also against the hyperscale cloud providers. The company’s product portfolio now ranges, as Salamanca put it, “from compute to end-user interfaces,” including data centers in Europe, document processing with its OCR model, and audio capabilities through its Voxtral models.

Mistral’s aggressive scaling campaign and the $14 billion valuation powering it

The Workflows launch comes as Mistral executes one of the most aggressive scaling campaigns in the history of the European technology industry. The French AI startup has increased its revenue twentyfold within a year, with co-founder and CEO Arthur Mensch putting the company’s annualized revenue run rate at over $400 million, compared to just $20 million the previous year. The Paris-based company aims to achieve recurring annual revenue of more than $1 billion by year-end.

The company’s fundraising trajectory has been equally dramatic. Mistral announced a €1.7 billion ($1.9 billion) Series C round at an €11.7 billion ($12.8 billion) valuation in September 2025; Bloomberg had reported earlier that month that the company was finalizing an investment of roughly €2 billion at a €12 billion ($14 billion) valuation. ASML led the round and contributed €1.3 billion, a landmark investment that aligned chip manufacturing expertise with frontier AI development and underscored European industrial capital’s commitment to building a sovereign AI ecosystem. Mistral then secured $830 million in debt in March 2026 to buy 13,800 Nvidia chips for a new data center near Paris.

The financial picture illustrates why Workflows matters strategically. Mistral’s revenue growth is being driven primarily by enterprise adoption, with approximately 60% of revenue coming from Europe, according to CEO Mensch’s public statements. Those enterprise customers are not buying Mistral’s models for casual chatbot applications — they are deploying them in regulated, mission-critical environments where reliability and data sovereignty are table stakes. Workflows gives those customers the production infrastructure they need to actually deploy AI systems that matter.

In May 2025, Mistral released Mistral Medium 3, which was priced at $0.40 per million input tokens and $2 per million output tokens. The company said clients in financial services, energy, and healthcare had been beta testing it for customer service, workflow automation, and analyzing complex datasets. That model now becomes one of many that can be plugged into Workflows, creating a flywheel where better models drive more workflow adoption, which in turn drives more inference revenue.
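At those rates, per-workflow inference cost is simple arithmetic. A quick sketch using the quoted Mistral Medium 3 prices (the 50k/2k token figures are illustrative assumptions, not from the article):

```python
# Inference cost at Mistral Medium 3's quoted rates:
# $0.40 per million input tokens, $2.00 per million output tokens.

def cost_usd(input_tokens: int, output_tokens: int) -> float:
    return input_tokens / 1e6 * 0.40 + output_tokens / 1e6 * 2.00

# e.g. a hypothetical KYC review reading a 50k-token document dossier
# and writing a 2k-token summary:
per_review = cost_usd(50_000, 2_000)   # 0.02 + 0.004 = $0.024
per_million_reviews = per_review * 1_000_000
```

At roughly two cents per review, the unit economics explain the flywheel the article describes: cheap inference makes high-volume workflow automation viable, which drives more inference.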

Where Mistral’s orchestration play fits in an increasingly crowded competitive landscape

Mistral’s entry into workflow orchestration arrives in an increasingly crowded field. AI orchestration platforms are quickly becoming the backbone of enterprise AI systems in 2026, and as businesses deploy multiple AI agents, tools, and LLMs, the need for unified control, oversight, and efficiency has never been greater.

Major cloud providers — Amazon with Bedrock AgentCore, Microsoft with Copilot Studio, Google with Vertex AI’s agent tools, and IBM with WatsonX — all offer some form of workflow or agent orchestration. Open-source frameworks like LangChain, LlamaIndex, and Microsoft AutoGen provide developer-level building blocks. And dedicated orchestration startups are proliferating.

Mistral’s differentiation rests on three pillars. First, vertical integration: because Workflows is native to Studio, the orchestration layer and the components it orchestrates — models, agents, connectors, observability — are built to work together, eliminating the integration tax that enterprises pay when stitching together disparate tools. Second, deployment flexibility: the split control-plane/data-plane architecture means customers in regulated industries can run execution workers in their own environments while still benefiting from managed orchestration. Third, data sovereignty: Mistral’s European roots and infrastructure investments give it a natural advantage with organizations wary of routing sensitive data through U.S.-headquartered cloud providers — a concern that has intensified amid ongoing geopolitical tensions and growing European anxiety about relying on foreign providers for over 80% of digital services and infrastructure.

Still, the challenges are real. OpenAI and Anthropic both have significantly larger model ecosystems and developer communities. The hyperscalers control the cloud infrastructure where most enterprise workloads actually run. And the enterprise sales cycles for production-grade AI deployments remain long and complex, requiring deep technical integration work that even well-funded startups can struggle to staff.

What comes next for Workflows — and why Mistral thinks orchestration is the real AI battleground

Salamanca outlined three areas of near-term development. First, Mistral plans to release a more managed version of Workflows that abstracts deployment logic for developers who don’t need granular control over worker placement. “Whenever you want to have this flexibility, you can, but if you want to be able to have this on a managed infrastructure, even if it’s running in your own VPC, this is something that we’re adding,” she said.

Second, the company intends to make Workflows accessible to business users, not just engineers. “With Vibe code, you can actually author a workflow. This can be executed at scale, and any end user, in the end, can actually do that with Workflows,” Salamanca explained. The third area is enterprise guardrails and safety controls for agentic applications — ensuring agents use the correct tools, run with appropriate permissions, and that administrators can enforce policies at scale. “Making sure that we have all these enterprise controls to be able to scale the authoring and the building of these workflows is something we’re actively working on,” she said.

The Python SDK for Workflows (v3.0) is now publicly available. Developers can try the product in Studio and access documentation and demo templates immediately. Mistral will be hosting its inaugural AI Now Summit in Paris on May 27–28, where the company is expected to provide additional details on its platform roadmap.

For three years, the AI industry has been captivated by a single question: who can build the most powerful model? Mistral’s Workflows launch suggests the company has moved on to a different question entirely — one that may prove far more consequential for the enterprises writing the checks. It’s not about which model is smartest. It’s about which one can actually show up for work.

5 Popular Honda Motorcycles Offering Deep Discounts & Rebates Until June 2026

If you’ve been thinking about a Honda motorcycle, this might be the sign you’ve been looking for. From now until the end of June, Honda’s offering a bunch of really nice “Bonus Bucks” rebates on some of their most popular bikes. The only catch is that you have to buy before June 30, 2026, when the rebate expires.

The fine print is pretty straightforward: Buy one of the new and unregistered models listed below, and Honda will give you anywhere from $700 to $1,000 in the form of Bonus Bucks. Unlike those misleading 11% rebates at Menards, this rebate can be applied right there at the dealership at the time of purchase. (Just so you know, though: Bonus Bucks are non-transferable and can't be used on taxes or destination-related fees.)

Sounds simple enough, right? To help make it easy to decide, we’ve put together a compilation of the biggest Bonus Bucks offers available on Honda motorcycles. Take a look at some of the meatiest discounts being offered below, then head to Honda’s site to see the full list of models included in the Bonus Bucks promo.

$1,000 bonus bucks on CBR500R models

Honda’s CBR500R is a middleweight sportbike that makes for a nice little entry point into supersport riding. Right now, you can get $1,000 Bonus Bucks if you buy a new and unregistered model from 2025 or earlier. Accounting for that $7,399 base MSRP, that translates to about 13.5% off.

The CBR500R uses a 471cc liquid-cooled parallel-twin engine with dual overhead cams and four valves per cylinder. That’ll give you low-end torque with high-revving horsepower. Its six-speed manual transmission comes with an assist-and-slipper clutch for less lever effort and more stabilized rear-wheel behavior, especially under aggressive downshifting. Plus, the bike’s 41mm inverted Showa SFF-BP fork and Pro-Link rear suspension give you 4.7 inches of travel front and rear, which translates to more responsive handling across all sorts of different road conditions.

Buy one any time between now and the end of June, and you'll get that $1,000 rebate right there on the spot.

$1,000 bonus bucks on CB500F models

It might sound similar to the model above, but the CB500F is not quite the same as the CBR500R. One thing that is the same, though? A matching $1,000 rebate. With a base MSRP of $6,899, a thousand bucks off the CB500F works out to roughly a 14.5% discount.

This Honda motorcycle is a great option for riders who prefer a stripped-down, naked-bike aesthetic. Plus, you still get a lot of the same performance fundamentals as its sibling, the CBR500R. The CB500F uses the same 471cc liquid-cooled parallel-twin engine plus six-speed manual transmission and slipper clutch combination, just with a more ergonomic and minimalist build. The bike's compact exhaust system and cast aluminum wheels also help to give it a visual identity all its own. It's one of the most fuel-efficient bikes in its class, as well.

It’s already more affordable than the CBR500R, but with an extra $1,000 off, you can ride home on this bike for under $6,000 MSRP.

$750 bonus bucks on CRF450R models

For off-road enthusiasts, Honda has a Bonus Bucks incentive too: $750 off all CRF450R models from 2025 or earlier. This motocross machine gives you competition-level performance for an MSRP of $9,699, which means the Bonus Bucks offer will slash the price to just under nine thousand before taxes and fees.

The CRF450R features a 450cc liquid-cooled single-cylinder engine with a Unicam SOHC design. It's engineered with a high 13.5:1 compression ratio and an advanced fuel-injection system for all the revving your heart desires, plus a close-ratio five-speed transmission for that precise gear spacing you need out there on the track. The Honda dirt bike also includes rider-adjustable features such as selectable engine modes and Selectable Torque Control, so you can tailor performance to track conditions.

$750 might not be as much as what the CBR500R and CB500F get in Bonus Bucks, but it's still a significant chunk of change saved.

$700 bonus bucks on CB650R E-Clutch models

Honda also has a Bonus Bucks offer available for its CB650R E-Clutch models. Buy a new and unregistered model from 2025 or earlier, and they'll give you $700 off. With a base MSRP of $9,399, you can ride off on a high-performance naked bike for about $8,699 (pre-taxes and fees).

At its core is a 649cc liquid-cooled inline four-cylinder engine along with Honda’s E-Clutch system. This sweet tech lets riders shift gears without having to use the handlebar-mounted clutch. (Of course, the option’s still there if you prefer that manual operation.) The system also mimics quick-shifter functionality for faster, smoother gear changes as you ride. The CB650R combines this drivetrain with a six-speed transmission, a 41mm Showa SFF-BP front fork, and a rear shock with 5.1 inches of travel.

It’s not as steep as $750 or $1,000 off, but it’s nevertheless a generous discount off the MSRP.

$700 bonus bucks on CBR650R E-Clutch models

At first glance, it might look like we're covering the same bike twice. But no, the CBR650R is its own distinct bike; it simply shares the same 649cc inline four-cylinder engine as its sibling, the CB650R. In fact, it's more expensive than the latter: a base MSRP of $9,899, a whole $500 more. Still, the offer of $700 off remains the same.

This is a fully faired sportbike that emphasizes both aerodynamic performance and aggressive styling. Chassis components are similar to those of its naked counterpart, but what truly differentiates the CBR650R is its full fairings, which boost aerodynamics and put you in a more aggressive riding posture overall. It also comes with a twin-spar frame and Y-spoke aluminum wheels.

Yes, the cost of entry is higher than the CB650R's, but the rebate brings the price down from the high nine thousands to the low nine thousands. It might only be about 7% off, but it's much better than nothing.



Apple’s testing 12-month subscriptions with monthly payments

Apple is experimenting with a new kind of App Store subscription that sits somewhere between monthly and yearly plans.

Developers can now set up monthly subscriptions with a 12-month commitment, letting users pay in smaller chunks while still signing up for a full year.

The feature is already live for developers to test in App Store Connect and Xcode. However, it hasn’t reached the App Store just yet. That should change when iOS 26.5 rolls out next month, at which point the option will go live for users running iOS 26.4 or later. However, the US and Singapore are notably excluded at launch.

From a user perspective, this isn't quite as flexible as a typical monthly plan. While you can technically cancel at any time, doing so only stops the subscription from renewing after the full 12-month commitment is completed. In other words, you're still on the hook for the entire term; you're just paying for it monthly instead of up front.

Apple says it’s adding a few safeguards to make that clearer. Users will be able to track how many payments they’ve made (and how many are left) directly in their Apple account. Meanwhile, reminders via email and push notifications will flag upcoming renewals.

The move gives developers another pricing lever, especially for apps that typically rely on annual plans but want a lower barrier to entry. Splitting the cost across 12 months could make higher-priced subscriptions feel more manageable, even if the overall commitment hasn't changed.

It’s not clear why Apple is skipping the US and Singapore for now. The company hasn’t said when those regions will get access. Still, the direction here is pretty obvious. Apple is looking for ways to make longer-term subscriptions easier to sell, without fully giving up the predictability of annual billing.

If widely adopted, this could reshape how app subscriptions are presented, making "monthly" plans a bit less flexible than they first appear.

AI coding agent running Claude wiped a startup's database (and its backups) in 9 seconds

PocketOS, which provides software to car rental businesses, was using the agent against live infrastructure rather than keeping it strictly in a test environment. In a public post, founder Jer Crane described the episode as evidence of “systemic failures” and argued it was more than a single mistaken command.

This cute watch is actually a Game Boy Color in disguise. And yes, it can run games

A modder has turned a Game Boy Color into something you can wear on your wrist, and it’s not just borrowing the look. This is an actual, playable retro console slapped onto your wrist.

YouTuber LeggoMyFroggo managed to squeeze a fully functional Game Boy Color into a wristwatch-sized form factor, creating one of the more bizarre yet impressive retro builds in recent memory.

How’d he cram a Game Boy Color into a tiny watch?

In the YouTube video, modder Chris Hackmann calls the project "Time Frog Color". Rather than taking the simpler route of relying on emulation, the build uses original Game Boy Color hardware, including the Sharp SM83 processor, paired with its video memory and support for physical cartridges.

If that last part sounds insane, it absolutely is. The watch can actually run games from tiny cartridges, which Hackmann demonstrated by playing Pokémon Gold without any issues. An RP2040 chip handles translating the display signal, and it also lets the device function as a regular watch when the console is powered off.

How was the gameplay experience?

Shrinking a late ’90s handheld console into a 38mm wristwatch does sound like a cool side project, but it comes with its fair share of compromises. The display is just 1.12 inches, and controls are handled by tiny tactile buttons tucked under 3D-printed caps, which doesn’t exactly sound like game-friendly controls. Making the experience even less immersive is the lack of audio and limited battery life.

In other words, it works, but it’s not exactly the best way to replay your childhood favorites. The Time Frog Color just shows how far retro hardware modding has come. It was never meant to replace the actual Game Boy Color or make gaming on a watch a real thing. Though watching enthusiasts finding ways to preserve and repurpose original components is always fun.

WIRED’s Smart Home Ecosystem Guide (2026)

To achieve a smart home, you need a voice assistant to run it. A smart home assistant, usually folded into a smart speaker, will let you command your smart home with your voice and run your various routines. It also acts as a center for every gadget you want to add to your home. And you can add almost anything these days, from smart garage control to even voice-commanding your blinds.

But which assistant should you choose? Each of the big players comes with its own pros and cons, but I recommend choosing based on what you already use day-to-day. Your smartphone is the easiest entry point to pick from Apple or Google, or if you want a huge suite of smart speakers to choose from and have a Prime subscription, you may want to consider Amazon.

Take a look around what’s already in your home to see what works with which ecosystem before deciding. The best system for you will be the path of least resistance, whether that’s using your smartphone’s dedicated assistant or sticking with a platform that best integrates with the devices you already have.

Amazon Alexa

WIRED: Huge selection of smart speakers and device compatibility.

TIRED: Paywalls, a meh new assistant, and Ring’s problematic policy.

It all began with Alexa, to some extent. It was the first Amazon Echo speaker back in 2014 that kicked off the smart home in an accessible way, letting anyone voice-command smart bulbs and ask for the weather without needing a custom installer or costing a fortune. Today, Amazon still has the widest range of options. The brand has the most smart speakers by a long shot, with 11 main models of smart speakers and displays currently available, plus several older versions of those same devices also available on Amazon’s website or at other retailers. It’s a huge suite with something for everyone, whether you want a screen, something made for kids, or fantastic sound with Alexa built in.

I do really like Amazon’s speakers and how easy the devices are to use, so this is a great entry point if voice control is of utmost importance. It can bring voice control into any room and for anyone in the house, and Alexa can create different profiles for different members of the family and attach information like calendars to those profiles. Amazon also owns Ring, so those smart home security devices work seamlessly with an Echo speaker, but we don’t recommend using Ring’s cameras because of its partnership with Axon, which enables local law enforcement to request footage directly from Ring users. My colleagues also have concerns about its data collection (and there have been other privacy issues over the years).

You’re also going to hit some paywalls. Amazon has an updated version of Alexa rolling out, Alexa+, which will cost $20 a month unless you have Amazon Prime. (Right now it’s out on Early Access, so it’s free, but non-Prime users can only demo it for 30 days before needing to upgrade to Prime to keep the demo.) The monthly fee is more expensive than Prime membership, so if you want it, it’s better to just join Prime. But neither I nor other WIRED staffers have been impressed by this updated, more expensive Alexa, so I hesitate to say it’s worth any investment. You’ll also need separate subscriptions for Ring devices if you choose to use them.

Alexa Smart Home Starter Pack

Still looking for an Alexa? Here are my favorite devices to start with.

Echo Show 11

This is one of Amazon’s newest smart displays, and it’s a great size to use in kitchens without being too large for console tables. The sound is excellent, too, and there’s a built-in hub.

Echo Studio (2nd Gen) and Echo Dot Max

Amazon’s new flagship speakers have great sound quality and more volume than you probably need. Both have a built-in hub to connect devices to.
