
Why The US Can’t Adopt Ukraine’s Innovative Approach To Unmanned Warfare Systems

from the affordable-precise-mass dept

It is widely accepted that drones have changed the conduct of modern war dramatically. The war in Ukraine, in particular, is driving the rapid evolution of drone technology. Evidence of how far things have come was provided recently by the following claim from Ukraine, reported here on The Next Web (TNW):

In April, Ukrainian President Volodymyr Zelensky announced that his forces had, for the first time in the history of warfare, seized an enemy position using only unmanned systems. No infantry. No human soldiers entering the contested ground. Drones and ground robots identified the target, suppressed defensive fire, and captured the position without a single Ukrainian casualty. The claim has not been independently verified in detail, and Ukraine’s military has declined to provide specifics.

The TNW article goes on to give some details about the company that apparently played a major role in that unmanned assault:

a Ukrainian-British defence technology startup called UFORCE has conducted more than 150,000 combat missions since Russia’s full-scale invasion in 2022, achieved unicorn status with a valuation exceeding one billion dollars, and is now scaling production from a discreet London headquarters designed, the company says, to protect it from Russian sabotage. The age of unmanned warfare is no longer a conference-circuit prediction. It is a line item on a defence contractor’s balance sheet.

Politico interviewed the Ukrainian commander in charge of the Third Assault Brigade’s ground robotic systems unit, the one that carried out the attack. Mykola Zinkevych provided some interesting indications of what robotic systems are already doing today, and what Ukraine’s future plans are for unmanned warfare systems. For example, Zinkevych said:

Delivery of important cargo, evacuation of the wounded, conducting surveillance in open areas, destruction of enemy fortifications, sabotage operations behind enemy lines, laying minefields — all this is now performed by ground robotic systems

In the short term:

Infantrymen can and should be taken out of direct fire. Our goal for 2026 is to replace up to 30 percent of personnel in the most difficult areas of the front with technology

In a post on Facebook (in Ukrainian), Zinkevych gave details of the ambitious longer-term goals (via Google Translate), which will involve the wider deployment of unmanned ground vehicles (UGV):

In March alone, 9,000+ missions were completed by the military. Our goal is for 100% of front-line logistics to be performed by robotic systems.

In the first half of 2026, due to increased demand, we will contract 25,000 UGVs, which will be gradually delivered to the front. This is twice as much as in the entire year 2025.

A new paper from the Carnegie Endowment for International Peace, written by the former defense minister of Ukraine, Andriy Zagorodnyuk, explores what he calls “The New Revolution in Military Affairs”, which is being brought about by “rapid innovation and adaptation, introducing new types of unmanned systems, countermeasures, and operating methods at unprecedented speed.” A key element of this is “affordable precise mass”: the highly effective deployment of cheap, long-range drones on a massive scale. He calls this transformation:

a structural shift in warfare in which new technologies drive the development of novel operational concepts and doctrines, fundamentally altering how military power is generated and employed, and forcing enduring changes in military organizations. These trends include the emergence of affordable precise mass, the fragmentation of the air domain, the growing difficulty of maneuver, the centrality of networked warfare, and the elevation of rapid adaptation as a core military capability. This transformation is still in its early stages, but countries that fail to recognize and adapt to it risk preparing for a form of war that has lost its decisiveness.

One important aspect of this shift touches on an area that will be familiar to Techdirt readers. As noted in the quotation above, Zagorodnyuk underlines the importance of rapid adaptation for this new kind of warfare:

The decisive advantage lies with those who can shorten the loop between combat experience, technical adaptation, and redeployment. As a result, ultra-fast adaptation becomes a paramount requirement for survival—and directly shapes force organization.

In Ukraine, this has led to drone operators being deeply involved in the technology’s evolution:

Units maintain their own repair facilities, component stocks, and small-scale production capabilities. Some operate informal research-and-development cells. Successful adaptations spread laterally through personal networks, messaging platforms, and volunteer communities rather than through centralized bureaucratic channels.

But Zagorodnyuk points out a key reason why the important lessons emerging from the wars in Ukraine and Iran are unlikely to be learned in many Western countries, including the US:

legal, contractual, and technical restrictions often prevent units from modifying or repairing their own equipment. In the United States, for example, defense contractors frequently retain control over maintenance data, software, and diagnostics, limiting what military personnel can do independently. The debate around the “right to repair” reflects this tension. While intended to protect intellectual property and safety standards, such restrictions can slow adaptation cycles and reduce operational flexibility—precisely the opposite of what high-intensity, technology-driven warfare now demands.

In other words, today’s obsession with protecting intellectual monopolies above all else could one day prove a major obstacle to fighting and winning future wars.

Follow me @glynmoody on Mastodon and on Bluesky

Filed Under: adaptability, affordable precise mass, development, drones, ground robots, intellectual monopolies, london, research, right to repair, russia, ugv, ukraine, unicorn, unmanned ground vehicle, warfare, zelensky

Companies: politico, Uforce


Inside The Heathkit Factory | Hackaday

If you are a certain age, you doubtless remember Heathkit. They produced a wide array of electronic kits that were models of completeness and clear instructions. They started with surplus war parts in 1947 and wound up a major player in ham radio and early personal computers. But they also made many other things: TVs, radio-control planes, and test equipment. All of it was made for you to build yourself. [Unseen History] released a video telling the story of Heathkit from start to finish.

The company started out building kit airplanes, but after the war it built a kit oscilloscope using military surplus. The less-than-$40 scope was still pricey in 1947, when a pound of bacon sold for 64 cents. But a “real” oscilloscope at the time would cost at least $400. The rest is history.

The Heathkit manuals were made simple enough that anyone could build a kit. But they also contained enough detail that you could truly understand what you built. Heathkit gear is still prized today.

Heathkit lost the kit business after Zenith bought the company, partly due to inattention and partly because fewer people cared about electronic kits, a decline hastened by the availability of inexpensive electronics that you didn’t have to build. The company limped along with educational materials and home automation. By 2012, it was done. At its peak, the company employed over 1,800 people; by the end, only six remained to lose their jobs.

We’ve covered Heathkit’s history before. Heathkit appears to have rebooted in some form, but we don’t know much about it.


Let’s Help Children, Not Trial Lawyers

from the the-liability-tax dept

The recent “internet addiction” verdicts against Apple, Meta, and YouTube drew applause from those eager to see big tech take a hit. But look behind the headlines and the result is something else entirely. These cases won’t help children. They will fuel a litigation plague that raises costs, chills innovation and hits smaller companies the hardest. 

The legal theory behind these cases tries to work around Section 230 by shifting the focus from user content to product design. Plaintiffs argue that features like infinite scroll or “like” buttons create harm independent of users’ personal content. It is a creative argument. It is also a slippery slope with no clear limiting principle.  

Once product design becomes the hook for liability, any widely used product becomes a target. Newspapers, magazines and even packaged goods use catchy headlines and taglines to capture attention. Platforms do the same with feeds, to deliver value to their users. Labeling these design choices “addictive” shouldn’t be seen as a viable path to sidestepping Section 230.

This shift also has broader economic consequences. 

Trial lawyer lawsuits do not stay in the courtroom; they are priced into everything. Companies pay more for insurance, more for compliance, and more for legal defense. Those costs flow through to consumers in the form of higher prices and fewer options. At a moment when affordability dominates national conversations, this is a factor we cannot ignore.

These cases are shaped by a litigation system that rewards scale and escalation. They are enormously expensive and often backed by third-party funders, which drives plaintiffs’ lawyers to seek the highest possible damages. In last month’s Los Angeles trial, plaintiffs asked for billions but secured just $6 million, about 0.5% of what was requested. Even that figure is diminished when measured against the cost of bringing the case. And when outcomes fall short, the incentive is to pursue more cases or larger awards to justify the investment. 

This burden is uniquely American. U.S. companies face a level of litigation exposure that most global competitors simply do not. That gap acts as an innovation tax on American firms, particularly small and early-stage companies that drive job creation and new ideas. We should be asking how to reduce that burden, not expand it. 

Roughly 80% of CTA’s members are small or early-stage companies. They do not have the budgets or legal teams to absorb years of litigation risk. For them, the threat of open-ended lawsuits is not theoretical. It shapes what they build, how they build it, and whether they can exist at all.

This is how an innovation economy slows without a single vote in Congress. Startups pull back, new features go unbuilt, and investment shifts away from risk. Over time, innovation slows, and momentum shifts from startups to incumbents. 

None of this means concerns about children’s online experiences should be dismissed. They should be taken seriously. But lawsuits are blunt instruments that do little to address the underlying issues. 

There are better and more effective paths. 

Platforms have already invested heavily in tools that give parents real control over how their children use technology. Supervised accounts, screen time limits, content filters, and transparency into usage patterns are improving quickly and becoming easier to use. Industry efforts like NetChoice’s Digital Safety Shield build on that progress by putting parents in charge rather than outsourcing decisions to courts. 

Congress also has a clear role. A national privacy law that protects personal data, including children’s information, would provide real safeguards while giving companies a consistent set of rules. What Congress should avoid is layering on vague obligations that invite more litigation. Congress has delayed action for years; it should not delay further.

And parents remain central. Technology has changed, but the need for engagement has not. Knowing what children are doing online, setting boundaries and staying involved matter more than any verdict.

Social media is a powerful tool with real benefits and real risks. The right response is to manage those tradeoffs in a practical way that protects children without undermining innovation. 

Recent verdicts move us in the opposite direction. They reward litigation, raise costs and make it harder for the next generation of companies to succeed. 

We should focus on solutions that help children, not expand a system that is already very good at benefiting trial lawyers.

Michael Petricone is the Senior VP of Government Affairs at the Consumer Technology Association.

Filed Under: innovation, internet addiction, liability, negligence, product liability, section 230, trial lawyers

Companies: meta


How RecursiveMAS speeds up multi-agent inference by 2.4x and reduces token usage by 75%

One of the key challenges of current multi-agent AI systems is that they communicate by generating and sharing text sequences, which introduces latency, drives up token costs, and makes it difficult to train the entire system as a cohesive unit. 

To overcome this challenge, researchers at the University of Illinois Urbana-Champaign and Stanford University developed RecursiveMAS, a framework that enables agents to collaborate and transmit information through embedding space instead of text. This change yields both efficiency and performance gains.

Experiments show that RecursiveMAS achieves accuracy improvements across complex domains like code generation, medical reasoning, and search, while also increasing inference speed and slashing token usage.

RecursiveMAS is significantly cheaper to train than standard full fine-tuning or LoRA methods, making it a scalable and cost-effective blueprint for custom multi-agent systems.

The challenges of improving multi-agent systems

Multi-agent systems can help tackle complex tasks that single-agent systems struggle to handle. When scaling multi-agent systems for real-world applications, a big challenge is enabling the system to evolve, improve, and adapt to different scenarios over time. 

Prompt-based adaptation improves agent interactions by iteratively refining the shared context provided to the agents. By updating the prompts, the system acts as a director, guiding the agents to generate responses that are more aligned with the overarching goal. The fundamental limitation is that the capabilities of the models underlying each agent remain static. 

A more sophisticated approach is to train the agents by updating the weights of the underlying models. Training an entire system of agents is difficult because updating all the parameters across multiple models is computationally non-trivial.

Even if an engineering team commits to training its models, the standard method of agents communicating via text-based interactions creates major bottlenecks. Because agents rely on sequential text generation, latency accumulates: each model must wait for the previous one to finish generating its text before it can begin its own processing.

Forcing models to spell out their intermediate reasoning token-by-token just so the next model can read it is highly inefficient. It severely inflates token usage, drives up compute costs, and makes iterative learning across the whole system painfully slow to scale. 

How RecursiveMAS works

Instead of trying to improve each agent as an isolated, standalone component, RecursiveMAS is designed to co-evolve and scale the entire multi-agent system as a single integrated whole. 

The framework is inspired by recursive language models (RLMs). In a standard language model, data flows linearly through a stack of distinct layers. In contrast, a recursive language model reuses a set of shared layers that processes the data and feeds it back to itself. By looping the computation, the model can deepen its reasoning without adding parameters.
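The looping idea can be sketched in a few lines of numpy. This is an illustration of the general recursive-layer principle only, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# One shared "layer": a fixed linear map plus a nonlinearity.
# A recursive LM reuses this block instead of stacking new layers.
W = rng.standard_normal((8, 8)) * 0.5

def shared_layer(h):
    return np.tanh(h @ W)

def recursive_forward(x, depth):
    """Apply the same shared layer `depth` times, feeding each
    output back in as the next input (looped computation)."""
    h = x
    for _ in range(depth):
        h = shared_layer(h)
    return h

x = rng.standard_normal(8)
shallow = recursive_forward(x, depth=2)
deep = recursive_forward(x, depth=8)  # more compute, zero extra parameters
```

Both calls use the same 64 weights; only the number of loop iterations changes, which is what lets a recursive model "deepen its reasoning without adding parameters."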

RecursiveMAS architecture (source: arXiv)

RecursiveMAS extends this scaling principle from a single model to a multi-agent architecture that acts as a unified recursive system. In this setup, each agent functions like a layer in a recursive language model. Rather than generating text, the agents iteratively pass their continuous latent representations to the next agent in the sequence, creating a looped hidden stream of information flowing through the system. 

This latent hand-off continues down the line through all the agents. When the final agent finishes its processing, its latent outputs are fed directly back to the very first agent, kicking off a new recursion round. 

This structure allows the entire multi-agent system to interact, reflect, and refine its collective reasoning over multiple rounds entirely in the latent space, with only the very last agent producing a textual output in the final round. It is like the agents are communicating telepathically as a unified whole and the last agent provides the final response as text.
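That latent hand-off can be illustrated with toy numpy transforms standing in for the agents (real agents would be LLMs exchanging hidden states; all shapes here are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
DIM = 16

# Three stand-in "agents": fixed transforms over a shared latent stream.
agents = [rng.standard_normal((DIM, DIM)) * 0.5 for _ in range(3)]

def agent_step(weights, latent):
    return np.tanh(latent @ weights)

def recursive_rounds(latent, rounds):
    for _ in range(rounds):
        for weights in agents:        # latent hand-off down the chain
            latent = agent_step(weights, latent)
        # after the last agent, the latent loops back to the first agent
    return latent

def decode_text(latent):
    # Only the final round produces text; here, a toy argmax "decoder".
    return f"token_{int(np.argmax(latent))}"

final_latent = recursive_rounds(rng.standard_normal(DIM), rounds=3)
answer = decode_text(final_latent)    # the single textual output
```

No text is generated inside the loop; everything before `decode_text` stays in the continuous latent space.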

The architecture of latent collaboration

To make continuous latent space collaboration possible, the authors introduce a specialized architectural component called the RecursiveLink. This is a lightweight, two-layer module designed to transmit and refine a model’s latent states rather than forcing it to decode text. 

A language model’s last-layer hidden states contain the rich, semantic representation of its reasoning process. The RecursiveLink is designed to preserve and transmit this high-dimensional information from one embedding space to another. 

To avoid the cost of updating every parameter across multiple large language models, the framework keeps the models’ parameters frozen. Instead, it optimizes the system by only training the parameters of the RecursiveLink modules.
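A minimal sketch of this frozen-backbone/trainable-link split, with made-up shapes and names:

```python
import numpy as np

rng = np.random.default_rng(2)
HID = 32

# Frozen backbone weights: a stand-in for the pretrained LLM.
backbone_W = rng.standard_normal((HID, HID)) * 0.05
backbone_W.setflags(write=False)      # backbone parameters stay frozen

class RecursiveLink:
    """Lightweight two-layer module (shapes are illustrative) mapping
    last-layer hidden states back into an input embedding space."""
    def __init__(self, dim, rng):
        self.W1 = rng.standard_normal((dim, dim)) * 0.05
        self.W2 = rng.standard_normal((dim, dim)) * 0.05

    def __call__(self, hidden):
        return np.tanh(hidden @ self.W1) @ self.W2

    def parameters(self):
        return [self.W1, self.W2]

link = RecursiveLink(HID, rng)
trainable = link.parameters()         # only these would be optimized

hidden = np.tanh(rng.standard_normal(HID) @ backbone_W)  # backbone output
next_input = link(hidden)             # re-injected as input embeddings
```

The training set is just `trainable`; the backbone is read-only, which is where the memory and cost savings over full fine-tuning come from.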

Recursive learning process (source: arXiv)

To handle both internal reasoning and external communication, the system uses two variations of the module. The inner RecursiveLink operates inside an agent during its reasoning phase. It takes the model’s newly generated embeddings and maps them directly back into its own input embedding space. This allows the agent to continuously generate a stream of latent thoughts without generating discrete text tokens. 

The outer RecursiveLink serves as the bridge between agents. Because agents in a real-world system might use different model architectures and sizes, their internal embedding spaces have entirely different dimensions. The outer RecursiveLink includes an additional layer designed to match the embeddings from one agent’s hidden dimension with the next agent’s embedding space.
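Conceptually, that dimension matching looks like the following (sizes and names are invented for illustration, not taken from the paper's code):

```python
import numpy as np

rng = np.random.default_rng(3)

# Agent A and agent B sit on different backbones with
# different hidden sizes.
DIM_A, DIM_B = 24, 40

# Outer-link sketch: an extra projection maps A's hidden dimension
# into B's input embedding dimension before a small refinement layer.
proj = rng.standard_normal((DIM_A, DIM_B)) * 0.05
refine = rng.standard_normal((DIM_B, DIM_B)) * 0.05

def outer_link(hidden_a):
    return np.tanh(hidden_a @ proj) @ refine   # now lives in B's space

hidden_a = rng.standard_normal(DIM_A)          # agent A's last hidden state
b_input = outer_link(hidden_a)                 # ready for agent B
```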

During training, the inner links are first trained independently to warm up each agent’s ability to think in continuous latent embeddings. Then the system enters outer-loop training, where the diverse, frozen models are chained together in a loop and the system is evaluated on the final textual output of the last agent.

Only the RecursiveLink parameters are updated during training; the original model weights remain unchanged, similar to low-rank adaptation (LoRA). Another advantage of this system comes into effect when you have multiple agents on top of the same backbone model.

If you have a multi-agent system where two agents are built on the exact same foundation model acting in different roles, you do not need to load two copies of the model into your GPU memory, nor do you train them separately. The agents will share the same backbone as the brain and use the RecursiveLink as the connective tissue.
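A toy sketch of that weight sharing, where a single backbone serves two roles and only the small per-role links differ (all names and shapes are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(4)
HID = 16

# One backbone, loaded once, shared by two agents in different roles.
shared_backbone = rng.standard_normal((HID, HID)) * 0.5

def backbone_forward(x):
    return np.tanh(x @ shared_backbone)

# Role-specific links are the only per-agent (and trainable) pieces.
link_planner = rng.standard_normal((HID, HID)) * 0.1
link_critic = rng.standard_normal((HID, HID)) * 0.1

latent = rng.standard_normal(HID)
latent = backbone_forward(latent) @ link_planner  # agent 1's pass
latent = backbone_forward(latent) @ link_critic   # agent 2 reuses the weights
```

Only one copy of `shared_backbone` exists in memory; the two "agents" differ solely in their connective-tissue links.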

RecursiveMAS in action

The researchers evaluated RecursiveMAS across nine benchmarks spanning mathematics, science and medicine, code generation, and search-based question answering. They created a multi-agent system using open-weights models including Qwen, Llama-3, Gemma3, and Mistral. These models were assigned roles to form different agent collaboration patterns such as sequential reasoning and mixture-of-experts collaboration. 

RecursiveMAS improves inference speed by 1.2-2.2X (source: GitHub)

RecursiveMAS was compared to baselines under identical training budgets, including standalone models enhanced with LoRA or full supervised fine-tuning, alternative multi-agent frameworks like Mixture-of-Agents and TextGrad, and recursive baselines like LoopLM. It was also compared to Recursive-TextMAS, which uses the same recursive loop structure as RecursiveMAS but forces the agents to explicitly communicate via text.

RecursiveMAS achieved an average accuracy improvement of 8.3% compared to the strongest baselines across the benchmarks. It excelled particularly on reasoning-heavy tasks, outperforming text-based optimization methods like TextGrad by 18.1% on AIME2025 and 13% on AIME2026. 


RecursiveMAS reduces token consumption by up to 75% (source: GitHub)

Because it avoids generating text at every step, RecursiveMAS achieved 1.2x to 2.4x end-to-end inference speedup. RecursiveMAS is also much more token efficient than the alternative. Compared to the text-based Recursive-TextMAS, it reduces token usage by 34.6% in the first round of the recursion, and by round three, it achieves 75.6% token reduction. RecursiveMAS also proved remarkably cheap to train. Because it only updates the lightweight RecursiveLink modules, which consist of roughly 13 million parameters or about 0.31% of the trainable parameters of the frozen models, it requires the lowest peak GPU memory and cuts training costs by more than half compared to full fine-tuning.

Enterprise adoption

The efficiency gains — lower token consumption, reduced GPU memory requirements, and faster inference — are intended to make complex multi-step agent workflows viable in production environments without the compute overhead that limits enterprise agentic deployments. The researchers have released the code and trained model weights under the Apache 2.0 license.


Today’s NYT Mini Crossword Answers for May 16

Looking for the most recent Mini Crossword answer? Click here for today’s Mini Crossword hints, as well as our daily answers and hints for The New York Times Wordle, Strands, Connections and Connections: Sports Edition puzzles.


Need some help with today’s Mini Crossword? I’ll tell you, 6-Across was a completely new fact to me, and I even had those particular pets when I was a kid. Read on for all the answers. And if you could use some hints and guidance for daily solving, check out our Mini Crossword tips.

If you’re looking for today’s Wordle, Connections, Connections: Sports Edition and Strands answers, you can visit CNET’s NYT puzzle hints page.

Read more: Tips and Tricks for Solving The New York Times Mini Crossword

Let’s get to those Mini Crossword clues and answers.


The completed NYT Mini Crossword puzzle for May 16, 2026.

NYT/Screenshot by CNET

Mini across clues and answers

1A clue: De Armas of “Knives Out”
Answer: ANA

4A clue: Jack ___, five-time “S.N.L.” host
Answer: WHITE

6A clue: Class pets that thump their feet to warn of approaching kindergarteners
Answer: GERBILS

8A clue: Boxer Muhammad or Laila
Answer: ALI

9A clue: Reach 0% battery
Answer: DIE

10A clue: Big name in hair loss prevention
Answer: ROGAINE

12A clue: Jack ___, six-time “S.N.L.” musical guest
Answer: WHITE

13A clue: Attempt
Answer: TRY

Mini down clues and answers

1D clue: “Okay then …”
Answer: ALRIGHT

2D clue: Catch, as a criminal
Answer: NAB

3D clue: Notable feature of lemon and grapefruit juice
Answer: ACIDITY

4D clue: Underneath
Answer: BELOW

5D clue: Actor Kevin of “Dave”
Answer: KLINE

6D clue: Long-nosed fish
Answer: GAR

7D clue: Understand
Answer: SEE

11D clue: Make public, as a grievance
Answer: AIR


Wood Burning Is Reintroducing Lead Pollution Into the Air, Scientists Find

An anonymous reader quotes a report from The Guardian: Wood heating is reintroducing lead into the air of local communities and homes, a systematic investigation by academics has found. Overwhelming evidence of lead’s neurotoxicity meant the metal was banned as an additive in petrol more than 25 years ago. The research by academics from the University of Massachusetts Amherst began by analysing samples of particle pollution from five suburban and rural towns in the north-east US. They looked for tiny particles of potassium that are given off when wood is burned and also particles containing lead. Samples from seven winters revealed associations between potassium and lead. When there were more wood burning particles in a daily sample, there was more lead in the air, with clear straight-line relationships in four of the five towns.
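The core analysis is an ordinary straight-line fit between the two pollutants; a toy version with synthetic numbers (not the study's data) might look like:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic stand-in for the study's daily samples: lead tracking
# wood-smoke potassium plus noise (values invented for illustration).
potassium = rng.uniform(0.1, 2.0, size=120)              # ug/m3 per day
lead = 0.003 * potassium + rng.normal(0.0, 0.0004, 120)  # ug/m3 per day

# A straight-line least-squares fit recovers the association.
slope, intercept = np.polyfit(potassium, lead, 1)
r = np.corrcoef(potassium, lead)[0, 1]   # strength of the relationship
```

A strong, positive slope on days with more wood-smoke particles is the kind of signal the researchers found in four of the five towns.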

The project was extended to 22 other towns across the US. The relationships between lead and potassium varied from place to place, being strongest in the Rocky Mountains. By factoring in the effects of temperature, moderate to strong associations in their analysis strengthened the conclusion that the extra lead came from wood burning. The lead concentrations were less than the US legal limits, but any exposure to the metal is harmful. […] Although less than legal limits, lead particles are routinely measured in UK cities in winter when people are also burning wood. This is normally attributed to waste wood covered with old lead paint, but the UMass Amherst study suggests the metal is coming from the wood itself. This means that any wood burning could increase exposure in neighborhoods and at home. Tricia Henegan, a PhD student at UMass Amherst and the first author on the research, said: “The most logical answer [to the question of how lead ends up in wood] is that it comes from uptake in the soil, probably riding along with the nutrients and water that trees need. Once in the tree, it deposits in the tree’s tissues and remains until that tree is burned.” Other research has found that it can then become part of the smoke.

“The use of wood as an energy source is a relic of the past, one that should not be relived if given a choice. Although wood fuel use can feel nostalgic, it does have negative consequences on air quality, and therefore public health.”


Intercom, now called Fin, launches an AI agent whose only job is managing another AI agent

The company formerly known as Intercom just did something that no major customer service platform has attempted at scale: it built an AI agent whose sole job is to manage another AI agent.

Fin Operator, announced Thursday at a live event in San Francisco, is a new AI-powered system designed specifically for the back-office teams that configure, monitor, and improve Fin, the company’s customer-facing AI agent. Rather than replacing human support agents — which is what Fin itself does on the front lines — Operator targets the growing army of support operations professionals who spend their days updating knowledge bases, debugging conversation failures, and combing through performance dashboards.

“Fin is an agent for your customers,” Brian Donohue, the company’s VP of Product, told VentureBeat in an exclusive interview ahead of the launch. “Operator is an agent for your support ops team. This is an agent for the back office team who manages Fin and then manages their human agents.”

The announcement arrives at a pivotal moment for the company. Just two days ago, CEO Eoghan McCabe formally renamed the 15-year-old company from Intercom to Fin — an aggressive signal that the AI agent is now the business, not merely a feature of it. Fin recently crossed $100 million in annual recurring revenue and is growing at 3.5x. The broader company generates $400 million in ARR, meaning the AI agent now accounts for roughly a quarter of total revenue and virtually all of its growth.

Fin Operator enters early access for Pro-tier users starting today, with general availability planned for summer 2026.

The invisible crisis behind every AI customer service deployment

As companies push their AI agents to handle more conversations — Fin alone now resolves more than two million customer issues each week across 8,000 customers globally, including Anthropic, DoorDash, and Mercury — the operational complexity behind those systems has exploded. Someone has to keep the knowledge base current. Someone has to figure out why the bot entered an infinite loop with a frustrated customer last Tuesday. Someone has to analyze whether the automation rate dropped after a product update.

That “someone” is the support operations team, and according to Donohue, they are drowning.

“Almost every support ops team is already doing data analysis and knowledge management — that’s table stakes today,” Donohue said. “Where teams struggle is the agent builder work. It’s a new skill set, and most don’t have enough time for it. They get their first iteration up and running, and then they get stuck.”

The problem is structural. AI customer agents are not static software. They require constant tuning — a process that looks more like training a new employee than configuring a SaaS tool. Each customer conversation is a potential source of failure, and each failure requires diagnosis, root-cause analysis, a configuration fix, testing, and monitoring. It is tedious, technical, and relentless. Fin Operator aims to collapse that entire loop into a conversational interface.

How one AI system plays data analyst, knowledge manager, and debugger all at once

Donohue described Operator as filling three distinct roles that typically consume the bandwidth of support ops teams: expert data analyst, expert knowledge manager, and expert agent builder.

As a data analyst, Operator can field high-level questions like, “How did my team perform last week?” and generate on-the-fly charts, trend reports, and drill-down analyses across all of the data already stored in Intercom’s platform. The company has loaded Operator with contextual knowledge about customer-specific data attributes to help it interpret workspace-specific metrics accurately.

As a knowledge manager, Operator can ingest a product update — say, a three-page PDF describing a new feature — and autonomously search the company’s entire content library to identify what needs to change. It finds gaps, drafts new articles, suggests edits to existing ones, and presents everything in a diff-style review interface. The underlying search engine is the same semantic search system that Intercom has built and optimized for Fin over more than two years.

“On that knowledge management front, you just have such a time compression of something that would take, certainly hours, sometimes days, into the space of about 10 minutes,” Donohue said.

As an agent builder, Operator introduces what the company calls a “debugger skill.” Support ops teams can paste in a link to a conversation where Fin misbehaved, and Operator will trace every step of Fin’s internal reasoning, identify the root cause — often a piece of guidance that unintentionally creates a loop — propose a rewrite, back-test the change against the original conversation, and then suggest creating a production monitor to catch similar issues going forward.

“This is literally what our professional services team does,” Donohue explained. “You’ve written guidance that is unintentionally causing Fin to repeat itself — this happens a lot. You didn’t realize it, but you never gave it an escape hatch.”
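None of Operator’s internals are public, but the debugging loop Donohue describes can be sketched in outline. Everything below is a hypothetical stand-in: the `Step` and `Diagnosis` types, the repeat-count heuristic for spotting a loop, and the suggested “escape hatch” rewrite are illustrative, not Fin’s actual logic.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Step:
    """One step of an agent's internal reasoning trace."""
    guidance_id: str
    action: str

@dataclass
class Diagnosis:
    root_cause: Optional[str]
    proposed_rewrite: Optional[str] = None
    monitor_suggested: bool = False

def debug_conversation(trace: list) -> Diagnosis:
    """Walk the reasoning trace and flag guidance that creates a loop:
    here, crudely, the same rule firing the same action three times."""
    seen = {}
    for step in trace:
        key = (step.guidance_id, step.action)
        seen[key] = seen.get(key, 0) + 1
        if seen[key] >= 3:
            return Diagnosis(
                root_cause=f"guidance {step.guidance_id!r} repeats {step.action!r}",
                proposed_rewrite=f"Add an escape hatch to {step.guidance_id!r}: "
                                 "after two attempts, hand off to a human.",
                monitor_suggested=True,
            )
    return Diagnosis(root_cause=None)

# Back-test against the original conversation: the repeating rule is flagged.
trace = [Step("g42", "ask_for_order_id")] * 3 + [Step("g7", "close")]
result = debug_conversation(trace)
print(result.root_cause)
```

The final step in Donohue’s description, creating a production monitor, corresponds to the `monitor_suggested` flag here: the diagnosis does not just fix the one conversation, it proposes watching for the pattern going forward.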

The ‘pull request’ safety net that keeps humans in control of AI changes

One of the most consequential design decisions in Fin Operator is what the company calls its “proposal system” — a mechanism that functions like a pull request in software engineering.

Every change that Operator recommends — whether it is an edit to a help article, a rewrite of an AI guidance rule, or the creation of a new QA monitor — appears as a proposal with a full diff view. Users can inspect, edit, and approve each change before it takes effect. Nothing goes live without a human clicking “Apply.”

“Right now, we’re taking zero risk on this — Fin cannot make any changes to the system without human approval,” Donohue emphasized. “Nothing goes live until a human clicks apply.”
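The gate itself is simple to model. As a minimal sketch (the `Proposal` and `Workspace` names are invented; Intercom’s actual implementation is not public), an unapproved proposal can render a diff for review but cannot mutate anything:

```python
from dataclasses import dataclass
import difflib

@dataclass
class Proposal:
    target: str
    before: str
    after: str
    approved: bool = False

    def diff(self) -> str:
        """Render a unified diff for human review, PR-style."""
        return "\n".join(difflib.unified_diff(
            self.before.splitlines(), self.after.splitlines(),
            fromfile=self.target, tofile=self.target, lineterm=""))

class Workspace:
    def __init__(self):
        self.content = {}

    def apply(self, p: Proposal) -> None:
        # Hard gate: nothing goes live without explicit human approval.
        if not p.approved:
            raise PermissionError("proposal not approved by a human")
        self.content[p.target] = p.after

ws = Workspace()
ws.content["refunds-article"] = "Refunds take 10 days."
p = Proposal("refunds-article", ws.content["refunds-article"], "Refunds take 5 days.")
print(p.diff())    # the reviewer inspects the change first
p.approved = True  # the human clicks "Apply"
ws.apply(p)
```

The design choice this illustrates is that approval lives in the workspace’s apply path, not in the agent’s goodwill: even a misbehaving agent cannot route around the gate.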

This is a notable architectural choice. In a market increasingly enamored with fully autonomous AI systems, the company is deliberately keeping a human approval gate in place — at least for now. Donohue acknowledged this will evolve, but said the current moment demands caution: “It’s too big a leap to just let Operator make changes automatically and then tell the team, ‘Hey, let me tell you about what I did.’”

For enterprise buyers evaluating AI tools, this design point matters. It is the difference between an AI system that proposes changes and one that enacts them — a distinction that compliance teams, security officers, and risk managers will scrutinize closely.

Why Fin Operator runs on Anthropic’s Claude instead of the company’s own AI models

In a revealing technical detail, Donohue confirmed that Fin Operator does not use the company’s proprietary Apex models — the same custom AI models that power the customer-facing Fin agent and that the company has promoted as outperforming GPT-5.4 and Claude Sonnet 4.6 in customer service benchmarks.

Instead, Operator runs on Anthropic’s Claude.

“We’re not using our custom models,” Donohue said. “Those are designed to directly answer customer questions, whereas these are closer to what frontier models are best suited for. This is really closer to software engineering.”

The distinction is telling. Fin’s Apex models are optimized for one thing: resolving customer service conversations with minimal hallucination and maximum accuracy. Operator’s tasks — analyzing data, writing code-like configurations, debugging complex reasoning chains — demand a different kind of intelligence. Donohue characterized these capabilities as more akin to software engineering, an area where Anthropic’s Claude models have been deliberately optimized.

The company has not ruled out building custom models for Operator in the future, but Donohue positioned it as a lower priority. What the team has built around Claude, he argued, is the differentiated layer: the proposal system, the debugger skill, the semantic search integration, the data attribution logic, and the charting capabilities that make Operator more than just “Claude inside the app.”

Early beta testers say Fin Operator feels like adding five people to the team

Fin Operator is currently in beta with roughly 200 customers, a number Donohue said has “ramped up pretty fast the last couple of weeks.”

Constantina Samara, VP of Customer Support, Enablement & Trust at Synthesia, said the tool has already changed how her team works: “Previously, improving how Fin handles a conversation often meant reviewing everything yourself — the conversation, the configuration, the content. With Fin Operator, you just ask. It walks you through what happened and makes improving Fin dramatically easier.”

Jordan Thompson, an AI Conversational Analyst at Raylo, reported that he has been using Operator daily and has run head-to-head comparisons between Operator’s analysis and his own manual work. “It’s very accurate,” Thompson said. “It’s just as strong at high-level trend analysis as it is at debugging individual conversations. That’s a real limitation when using an LLM connector on its own — you get conversational depth but nothing on reporting or trends.”

Donohue also shared an internal anecdote from the company’s own knowledge management team. Beth, who leads knowledge operations, told the product team that Operator made her feel like she had “five more people on my team.” Whether internal testimonials carry the same weight as external customer validation is debatable, but Donohue said the knowledge management use case consistently generates the most visceral reactions because the time savings are so stark — collapsing hours or days of content auditing into roughly 10 minutes.

A new pricing model signals how AI is reshaping the economics of enterprise software

Fin Operator will live inside the company’s Pro add-on tier — a relatively new bundle that already includes advanced analytics features like CX scoring, topic detection, real-time issue detection, and quality assurance monitoring across both AI and human agent conversations.

The pricing model introduces something new for the company: usage-based billing. Intercom has historically relied on outcome-based pricing — charging roughly $0.99 per conversation that Fin resolves without human intervention. Operator’s work does not map cleanly to that model because it produces configuration changes, not customer resolutions.

“This has pushed us to a different model, to go more into that usage model for support ops teams,” Donohue said. “We’ll try to be generous with the usage amounts that come into Pro, but for people who are leaning heavily in, we’ll have the ability to buy more usage blocks.”
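The two models are easy to contrast with a little arithmetic. The $0.99-per-resolution figure comes from the article; the included allowance, block size, and block price below are made-up numbers purely to show the shape of usage billing:

```python
def outcome_bill(resolutions: int, rate: float = 0.99) -> float:
    """Outcome-based: pay per conversation the agent resolves on its own."""
    return resolutions * rate

def usage_bill(units_used: int, included: int, block_size: int, block_price: float) -> float:
    """Usage-based: an included allowance, then extra blocks on top.
    All parameters here are hypothetical, not published Intercom pricing."""
    overage = max(0, units_used - included)
    blocks = -(-overage // block_size)  # ceiling division
    return blocks * block_price

# 1,200 resolved conversations at $0.99 each.
print(outcome_bill(1200))
# 1,500 Operator units against a 1,000-unit allowance, sold in blocks of 250.
print(usage_bill(1500, included=1000, block_size=250, block_price=50.0))
```

The mismatch Donohue describes falls out of the signatures alone: a resolution count exists for customer conversations, but there is no equivalent “outcome” unit for a configuration change, so Operator’s work has to be metered rather than priced per result.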

The shift is worth watching. Outcome-based pricing was one of the company’s most distinctive market positions — a bet that customers would pay for results rather than seats. Extending that philosophy to internal operations work proved impractical, which suggests that as AI agents take on more diverse roles within an organization, the pricing models that support them will need to become equally diverse.

How Fin Operator stacks up in a crowded field of AI customer service competitors

Fin Operator lands in an increasingly competitive landscape. Zendesk, Salesforce, Sierra, and a constellation of AI-native startups are all building some version of AI-powered support operations tooling. The broader AI automation market is projected to reach $169 billion in 2026, according to Grand View Research, growing at a 31.4% compound annual rate.

But Donohue argued that Operator’s differentiation lies in two areas. First, breadth: Operator works across the full surface area of the company’s configuration system — data, content, procedures, simulations, guidance, and monitoring — rather than addressing a single narrow use case. Second, the fact that it spans both AI and human operations.

“Most critically, where I think we have the most differentiation is because it’s for your human system and your AI system,” Donohue said. “That’s really one of the unique spaces we have — to have a first-class AI agent and a first-class help desk, and Operator works across both.”

The competitive positioning also benefits from timing. The company’s recent corporate rebrand from Intercom to Fin signals a wholesale commitment to AI that legacy players may struggle to match. As CEO McCabe wrote in announcing the name change, the AI agent “is about to be the largest part of our business.” The help desk product continues as Intercom 2, but the parent company now carries the name of its AI agent — a branding move that some industry observers have interpreted as pre-IPO positioning. The Fin API Platform, launched in early April, adds another dimension: the company opened its proprietary Apex models to third-party developers and even offered to license the technology to direct competitors like Decagon and Sierra.

The real paradigm shift isn’t a new chat interface — it’s an agent that does the thinking for you

Step back from the product specifics and Fin Operator represents something potentially more consequential than a new dashboard or analytics tool. It is one of the first commercial products to explicitly embody the emerging paradigm of AI agents that manage other AI agents — a two-layer abstraction that is beginning to reshape how companies think about operational software.

Donohue was emphatic on this point. The real paradigm shift, he argued, is not the chat interface replacing buttons and menus. It is that the AI is doing the actual knowledge work — figuring out what should change, why, and how.

“The UX change is secondary, even though it’s most visible,” Donohue said. “The change is that we are identifying and doing the work of support operations. It’s doing the work of what the knowledge manager is doing, so that they just have to approve that. That’s the huge shift.”

The analogy to software engineering is apt. Over the past year, AI coding agents have fundamentally altered the daily workflow of developers, shifting their primary responsibility from writing code to reviewing and guiding the AI that writes it. Donohue sees the same transformation arriving for support operations professionals.

“Software engineers — three months have upended their world, where their primary job now is managing agents who are actually writing the code,” he said. “Similarly now, support ops, your job is to manage an agent who’s managing the agent for your customers.”

Whether this vision pans out at enterprise scale remains to be seen. The company is still launching Operator in beta precisely because it wants to keep refining quality through what Donohue described as a painstaking, conversation-by-conversation debugging process. “We’ve spent three months, conversation by conversation, learning, fixing, learning, fixing, to get it where it’s robust,” he said.

But if the early returns hold, Fin Operator may preview what the next generation of enterprise software looks like: not tools that help humans do work faster, but agents that do the work themselves, subject to human judgment and approval. For customer service leaders already running AI agents in production, the question is no longer just “how good is my bot?” It is now, inevitably, “who is managing it?” And increasingly, the answer is another bot.

After testing over a dozen digital notebooks, I’ve realized that the stylus is the real MVP in the e-ink tablet equation

I’m a massive fan of digital notebooks (aka epaper or E Ink tablets) — I’ve used over a dozen in the last few years and, as a habitual list maker and note taker, I find them extremely useful. My favorite e-notebook, purely in terms of writing experience, is the Amazon Kindle Scribe (2024 edition), while the Kobo Elipsa 2E, much as I loved it when it first launched, is now my least favorite, as newer options simply do it better.

I’ve come to realize that a lot of that preference boils down to one surprising element: the stylus. Or rather, the stylus’ little nib and how it feels when you get down to the act of (figuratively) putting pen to paper.

Restoring A 3DO Blaster Card From The Early 90s

Before the modern trifecta of video game giants came to dominate the market around two decades ago, the world was awash in video game consoles. Many of these retro platforms have largely been forgotten outside of enthusiast communities, and an average gamer today might never have heard of brands like ColecoVision or TurboGrafx. Among these unusual, rare, or forgotten systems was the 3DO, which wasn’t strictly a console but rather a specification that manufacturers could use to build consoles of their own. Even more unusually, the standard could also be used to build 3DO-compatible expansion cards for PCs.

In this video, [The Retro Collective] received one of these boards to add to their museum, but like much retro hardware of this era it wasn’t working exactly as it would have out of the box. After installing it in one of their period-correct 386 machines, they found that it would only work properly with weight applied to one of the corners. This led to the discovery of some disconnected pins on the PCB, and repairing those along with a few other issues brought the card back to life.

The video also discusses the platform itself and shows how it would connect to a PC of that era. The PC needed a Sound Blaster card, a CD-ROM drive with a particular proprietary interface, and a few other pieces of hardware, but with everything up and running the player would have a console that theoretically competed with the original PlayStation or Nintendo 64. It also illustrates an alternative path video games might have taken, where expansion cards added console compatibility to an ordinary PC, but unfortunately the 3DO never really caught on.

DOJ investigation into vehicle modding hardware leads to Apple subpoena

Over 100,000 EZ Lynk users could find their data being handed over to the United States government if Apple complies with a request for app download information.

Governments subpoena Apple for information all of the time, but that doesn’t mean it gets handed over automatically. Apple will push back if the scope of the request is too broad or vague.

In the case of the EZ Lynk lawsuit, the US Department of Justice has asked Apple and Google to hand over information about more than 100,000 users. According to a report from Forbes, the DOJ wants information like the name and address of every person who downloaded the EZ Lynk app.

It’s an extraordinary request, roughly 10 times the size of a previous 2019 request concerning a gun scope app. While the companies involved haven’t commented directly, EZ Lynk shared that it expects Apple and Google to refuse the subpoena.

The lawsuit itself centers around EZ Lynk being accused of breaking the Clean Air Act by selling devices that let users bypass emission controls. While EZ Lynk’s devices can be used for all kinds of modifications, it seems that emission bypass was a popular use case.

The DOJ claims it needs the information of all of these users for evidence gathering. They’d like to contact some of the individuals to act as witnesses to the case.

There are obvious Fourth Amendment issues at play here. Beyond the sweeping scope of the data request, the DOJ seemingly wants people to incriminate themselves on the stand.

Apple does respond to subpoenas and provides data within reason. This request is too vague and wide in scope, so if history is anything to go by, Apple will reject it.

However, that doesn’t mean the government can’t narrow its scope and ask for specific individuals’ app download records. If requested properly and legally, Apple will hand over a download record and a user’s name and address.

As we’ve seen in other cases, Apple can’t hand over data that is end-to-end encrypted like Apple Health data. Since an app download is basically just a receipt of purchase, Apple will have an unencrypted record of that interaction.

Aaron Mackey of the Electronic Frontier Foundation questions the reasoning behind the subpoena, asking what the data might be used for beyond the prosecution of this particular case.

To make matters worse, EZ Lynk alleged in 2019 that the US government wanted “a backdoor” that would enable the monitoring of unsuspecting users. The government denies that claim.

Time will tell how Apple responds and if it gets dragged into the case any further.

Irish quantum start-up Equal1 unveils RacQ data centre computer

RacQ will be demonstrated in action at next week’s Dell Technologies World expo in Las Vegas.

Irish quantum computing start-up Equal1 has launched the next iteration of its server technology for deployment, integration and use in data centre infrastructure.

The ‘RacQ’ is described as “the next generation” of the company’s ‘Bell-1’ server and is claimed to be “the world’s first deployable rack-mounted silicon-spin quantum computer designed to live within a standard 19-inch data centre rack”.

According to the Dublin-based company, RacQ is designed to utilise hybrid quantum-classical computing, in which classical and quantum technologies work in tandem as a single system to optimise efficiency and effect, for ‘high-impact’ applications such as investment risk analysis, materials simulation and supply chain optimisation.

“For nearly every organisation, quantum computing remains out of reach, confined to labs,” said Jason Lynch, CEO of Equal1.

“We’re changing that. We are putting quantum inside the rack so customers can roll it in, plug it in and begin running hybrid quantum-classical workloads in days, using the infrastructure they already own.”

RacQ’s configuration is optimised for use at standard data centres, according to Equal1, with power requirements, cooling mechanisms, and weight and footprint dimensions designed for accessibility to centre operators working with “existing server stacks” or specialised high-performance “nodes”.

The system’s architecture, according to the company, is built using standard semiconductor processes and powered by ‘UnityQ’, a “breakthrough quantum system-on-chip that will integrate the complete quantum system onto a single silicon package”.

RacQ will be demonstrated in action at next week’s Dell Technologies World expo in Las Vegas through a research collaboration between Equal1 and Dell to explore how hybrid quantum-classical computing can operate inside existing data centre environments.

Equal1, which was founded in 2017 at University College Dublin, says quantum computing built on standard silicon is the way to overcome the power and cost challenges that AI poses to traditional computers.

The RacQ predecessor, the Bell-1 server, launched in March 2025, was claimed at the time to be the first-ever Irish-made quantum computer, as well as the world’s first silicon-based quantum server designed for data centres and high-performance computing.

In January of this year, the company raised $60m in a funding round led by the Ireland Strategic Investment Fund. In April, Equal1 said it would partner with Californian quantum infrastructure software maker Q-Ctrl for the deployment of rack-mounted quantum computers in enterprise data centres, as well as with French computer company Bull to help “advance the next generation of hybrid quantum-classical technologies with European solutions”.
