Anthropic says DeepSeek, Moonshot, and MiniMax used 24,000 fake accounts to rip off Claude

Anthropic dropped a bombshell on the artificial intelligence industry Monday, publicly accusing three prominent Chinese AI laboratories — DeepSeek, Moonshot AI, and MiniMax — of orchestrating coordinated, industrial-scale campaigns to siphon capabilities from its Claude models using tens of thousands of fraudulent accounts.

The San Francisco-based company said the three labs collectively generated more than 16 million exchanges with Claude through approximately 24,000 fake accounts, all in violation of Anthropic’s terms of service and regional access restrictions. The campaigns, Anthropic said, are the most concrete and detailed public evidence to date of a practice that has haunted Silicon Valley for months: foreign competitors systematically using a technique called distillation to leapfrog years of research and billions of dollars in investment.

“These campaigns are growing in intensity and sophistication,” Anthropic wrote in a technical blog post published Monday. “The window to act is narrow, and the threat extends beyond any single company or region. Addressing it will require rapid, coordinated action among industry players, policymakers, and the global AI community.”

The disclosure marks a dramatic escalation in the simmering tensions between American and Chinese AI developers — and it arrives at a moment when Washington is actively debating whether to tighten or loosen export controls on the advanced chips that power AI training. Anthropic, led by CEO Dario Amodei, has been among the most vocal advocates for restricting chip sales to China, and the company explicitly connected Monday’s revelations to that policy fight.

How AI distillation went from obscure research technique to geopolitical flashpoint

To understand what Anthropic alleges, it helps to understand what distillation actually is — and how it evolved from an academic curiosity into the most contentious issue in the global AI race.

At its core, distillation is a process of extracting knowledge from a larger, more powerful AI model — the “teacher” — to create a smaller, more efficient one — the “student.” The student model learns not from raw data, but from the teacher’s outputs: its answers, reasoning patterns, and behaviors. Done correctly, the student can achieve performance remarkably close to the teacher’s while requiring a fraction of the compute to train.
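In code, that teacher-student transfer is surprisingly compact. The sketch below is a generic, minimal illustration in PyTorch of the classic distillation recipe, not any specific lab's pipeline: the student is trained to match the teacher's softened output distribution using a KL-divergence loss.

    # Minimal sketch of teacher-student distillation (PyTorch).
    # The models, temperature, and training setup are illustrative only.
    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, temperature=2.0):
        """Push the student toward the teacher's softened output distribution."""
        soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
        log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
        # KL divergence; the T^2 factor keeps gradient magnitudes comparable
        return F.kl_div(log_soft_student, soft_teacher,
                        reduction="batchmean") * temperature ** 2

    def train_step(student, teacher, batch, optimizer):
        # The teacher only runs inference; the student learns from its outputs
        with torch.no_grad():
            teacher_logits = teacher(batch)
        loss = distillation_loss(student(batch), teacher_logits)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

In the API-based campaigns Anthropic describes, the teacher is a closed model that exposes only sampled text, not logits, so the extraction looks more like supervised fine-tuning on harvested prompt-response pairs; the underlying idea of learning from a stronger model's outputs is the same.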

As Anthropic itself acknowledged, distillation is “a widely used and legitimate training method.” Frontier AI labs, including Anthropic, routinely distill their own models to create smaller, cheaper versions for customers. But the same technique can be weaponized. A competitor can pose as a legitimate customer, bombard a frontier model with carefully crafted prompts, collect the outputs, and use those outputs to train a rival system — capturing capabilities that took years and hundreds of millions of dollars to develop.

The technique burst into public consciousness in January 2025 when DeepSeek released its R1 reasoning model, which appeared to match or approach the performance of leading American models at dramatically lower cost. Databricks CEO Ali Ghodsi captured the industry’s anxiety at the time, telling CNBC: “This distillation technique is just so extremely powerful and so extremely cheap, and it’s just available to anyone.” He predicted the technique would usher in an era of intense competition for large language models.

That prediction proved prescient. In the weeks following DeepSeek’s release, researchers at UC Berkeley said they recreated OpenAI’s reasoning model for just $450 in 19 hours. Researchers at Stanford and the University of Washington followed with their own version built in 26 minutes for under $50 in compute credits. The startup Hugging Face replicated OpenAI’s Deep Research feature as a 24-hour coding challenge. DeepSeek itself openly released a family of distilled models on Hugging Face — including versions built on top of Qwen and Llama architectures — under the permissive MIT license, with the model card explicitly stating that the DeepSeek-R1 series supports commercial use and allows for any modifications and derivative works, “including, but not limited to, distillation for training other LLMs.”

But what Anthropic described Monday goes far beyond academic replication or open-source experimentation. The company detailed what it characterized as deliberate, covert, and large-scale intellectual property extraction by well-resourced commercial laboratories operating under the jurisdiction of the Chinese government.

Anthropic traces 16 million fraudulent exchanges to researchers at DeepSeek, Moonshot, and MiniMax

Anthropic attributed each campaign “with high confidence” through IP address correlation, request metadata, infrastructure indicators, and corroboration from unnamed industry partners who observed the same actors on their own platforms. Each campaign specifically targeted what Anthropic described as Claude’s most differentiated capabilities: agentic reasoning, tool use, and coding.

DeepSeek, the company that ignited the distillation debate, conducted what Anthropic described as the most technically sophisticated of the three operations, generating over 150,000 exchanges with Claude. Anthropic said DeepSeek’s prompts targeted reasoning capabilities, rubric-based grading tasks designed to make Claude function as a reward model for reinforcement learning, and — in a detail likely to draw particular political attention — the creation of “censorship-safe alternatives to policy sensitive queries.”

Anthropic alleged that DeepSeek “generated synchronized traffic across accounts” with “identical patterns, shared payment methods, and coordinated timing” that suggested load balancing to maximize throughput while evading detection. In one particularly notable technique, Anthropic said DeepSeek’s prompts “asked Claude to imagine and articulate the internal reasoning behind a completed response and write it out step by step — effectively generating chain-of-thought training data at scale.” The company also alleged it observed tasks in which Claude was used to generate alternatives to politically sensitive queries about “dissidents, party leaders, or authoritarianism,” likely to train DeepSeek’s own models to steer conversations away from censored topics. Anthropic said it was able to trace these accounts to specific researchers at the lab.

Moonshot AI, the Beijing-based creator of the Kimi models, ran the second-largest operation by volume at over 3.4 million exchanges. Anthropic said Moonshot targeted agentic reasoning and tool use, coding and data analysis, computer-use agent development, and computer vision. The lab employed “hundreds of fraudulent accounts spanning multiple access pathways,” making the campaign harder to detect as a coordinated operation. Anthropic attributed the campaign through request metadata that “matched the public profiles of senior Moonshot staff.” In a later phase, Anthropic said, Moonshot adopted a more targeted approach, “attempting to extract and reconstruct Claude’s reasoning traces.”

MiniMax, the least publicly known of the three but the most prolific by volume, generated over 13 million exchanges — more than three-quarters of the total. Anthropic said MiniMax’s campaign focused on agentic coding, tool use, and orchestration. The company said it detected MiniMax’s campaign while it was still active, “before MiniMax released the model it was training,” giving Anthropic “unprecedented visibility into the life cycle of distillation attacks, from data generation through to model launch.” In a detail that underscores the urgency and opportunism Anthropic alleges, the company said that when it released a new model during MiniMax’s active campaign, MiniMax “pivoted within 24 hours, redirecting nearly half their traffic to capture capabilities from our latest system.”

How proxy networks and ‘hydra cluster’ architectures helped Chinese labs bypass Anthropic’s China ban

Anthropic does not currently offer commercial access to Claude in China, a policy it maintains for national security reasons. So how did these labs access the models at all?

The answer, Anthropic said, lies in commercial proxy services that resell access to Claude and other frontier AI models at scale. Anthropic described these services as running what it calls “hydra cluster” architectures — sprawling networks of fraudulent accounts that distribute traffic across Anthropic’s API and third-party cloud platforms. “The breadth of these networks means that there are no single points of failure,” Anthropic wrote. “When one account is banned, a new one takes its place.” In one case, Anthropic said, a single proxy network managed more than 20,000 fraudulent accounts simultaneously, mixing distillation traffic with unrelated customer requests to make detection harder.

The description suggests a mature and well-resourced infrastructure ecosystem dedicated to circumventing access controls — one that may serve many more clients than just the three labs Anthropic named.

Why Anthropic framed distillation as a national security crisis, not just an IP dispute

Anthropic did not treat this as a mere terms-of-service violation. The company embedded its technical disclosure within an explicit national security argument, warning that “illicitly distilled models lack necessary safeguards, creating significant national security risks.”

The company argued that models built through illicit distillation are “unlikely to retain” the safety guardrails that American companies build into their systems — protections designed to prevent AI from being used to develop bioweapons, carry out cyberattacks, or enable mass surveillance. “Foreign labs that distill American models can then feed these unprotected capabilities into military, intelligence, and surveillance systems,” Anthropic wrote, “enabling authoritarian governments to deploy frontier AI for offensive cyber operations, disinformation campaigns, and mass surveillance.”

This framing directly connects to the chip export control debate that Amodei has made a centerpiece of his public advocacy. In a detailed essay published in January 2025, Amodei argued that export controls are “the most important determinant of whether we end up in a unipolar or bipolar world” — a world where either only the U.S. and its allies possess the most powerful AI, or one where China achieves parity. He specifically noted at the time that he was “not taking any position on reports of distillation from Western models” and would “just take DeepSeek at their word that they trained it the way they said in the paper.”

Monday’s disclosure is a sharp departure from that earlier restraint. Anthropic now argues that distillation attacks “undermine” export controls “by allowing foreign labs, including those subject to the control of the Chinese Communist Party, to close the competitive advantage that export controls are designed to preserve through other means.” The company went further, asserting that “without visibility into these attacks, the apparently rapid advancements made by these labs are incorrectly taken as evidence that export controls are ineffective.” In other words, Anthropic is arguing that what some observers interpreted as proof that Chinese labs can innovate around chip restrictions was actually, in significant part, the result of stealing American capabilities.

The murky legal landscape around AI distillation may explain Anthropic’s political strategy

Anthropic’s decision to frame this as a national security issue rather than a legal dispute may reflect the difficult reality that intellectual property law offers limited recourse against distillation.

As a March 2025 analysis by the law firm Winston & Strawn noted, “the legal landscape surrounding AI distillation is unclear and evolving.” The firm’s attorneys observed that proving a copyright claim in this context would be challenging, since it remains unclear whether the outputs of AI models qualify as copyrightable creative expression. The U.S. Copyright Office affirmed in January 2025 that copyright protection requires human authorship, and that “mere provision of prompts does not render the outputs copyrightable.”

The legal picture is further complicated by the way frontier labs structure output ownership. OpenAI’s terms of use, for instance, assign ownership of model outputs to the user — meaning that even if a company can prove extraction occurred, it may not hold copyrights over the extracted data. Winston & Strawn noted that this dynamic means “even if OpenAI can present enough evidence to show that DeepSeek extracted data from its models, OpenAI likely does not have copyrights over the data.” The same logic would almost certainly apply to Anthropic’s outputs.

Contract law may offer a more promising avenue. Anthropic’s terms of service prohibit the kind of systematic extraction the company describes, and violation of those terms is a more straightforward legal claim than copyright infringement. But enforcing contractual terms against entities operating through proxy services and fraudulent accounts in a foreign jurisdiction presents its own formidable challenges.

This may explain why Anthropic chose the national security frame over a purely legal one. By positioning distillation attacks as threats to export control regimes and democratic security rather than as intellectual property disputes, Anthropic appeals to policymakers and regulators who have tools — sanctions, entity list designations, enhanced export restrictions — that go far beyond what civil litigation could achieve.

What Anthropic’s distillation crackdown means for every company running a frontier AI model

Anthropic outlined a multipronged defensive response. The company said it has built classifiers and behavioral fingerprinting systems designed to identify distillation attack patterns in API traffic, including detection of chain-of-thought elicitation used to construct reasoning training data. It is sharing technical indicators with other AI labs, cloud providers, and relevant authorities to build what it described as a more holistic picture of the distillation landscape. The company has also strengthened verification for educational accounts, security research programs, and startup organizations — the pathways most commonly exploited for setting up fraudulent accounts — and is developing model-level safeguards designed to reduce the usefulness of outputs for illicit distillation without degrading the experience for legitimate customers.
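Anthropic has not published how its detectors work. Purely as a hypothetical illustration of what behavioral fingerprinting can mean in practice, the sketch below scores accounts on two crude signals the post alludes to, chain-of-thought elicitation phrasing and shared payment metadata; every field name, marker, and weight here is invented.

    # Hypothetical illustration of behavioral fingerprinting for API traffic.
    # All field names, markers, and weights are invented for this sketch.
    from collections import defaultdict

    COT_MARKERS = (
        "step by step",
        "write out the reasoning",
        "articulate the internal reasoning",
    )

    def distillation_risk(requests):
        """Crude risk score for one account's requests (a list of dicts
        with hypothetical keys 'prompt' and 'payment_fingerprint')."""
        if not requests:
            return 0.0
        # Fraction of prompts that try to elicit reasoning traces
        cot_rate = sum(
            any(m in r["prompt"].lower() for m in COT_MARKERS) for r in requests
        ) / len(requests)
        # Highly repetitive, templated prompts suggest automated harvesting
        repetition = 1 - len({r["prompt"] for r in requests}) / len(requests)
        return 0.6 * cot_rate + 0.4 * repetition

    def shared_payment_clusters(accounts):
        """Group accounts that share a payment fingerprint, one of the
        signals cited for linking synchronized traffic across accounts."""
        clusters = defaultdict(list)
        for account_id, metadata in accounts.items():
            clusters[metadata["payment_fingerprint"]].append(account_id)
        return {k: v for k, v in clusters.items() if len(v) > 1}

A real system would combine many more signals, but the structure, per-account scoring plus cross-account correlation, mirrors the two layers of evidence Anthropic describes.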

But the company acknowledged that “no company can solve this alone,” calling for coordinated action across the industry, cloud providers, and policymakers.

The disclosure is likely to reverberate through multiple ongoing policy debates. In Congress, the bipartisan No DeepSeek on Government Devices Act has already been introduced. Federal agencies including NASA have banned DeepSeek from employee devices. And the broader question of chip export controls — which the Trump administration has been weighing amid competing pressures from Nvidia and national security hawks — now has a new and vivid data point.

For the AI industry’s technical decision-makers, the implications are immediate and practical. If Anthropic’s account is accurate, the proxy infrastructure enabling these attacks is vast, sophisticated, and adaptable — and it is not limited to targeting a single company. Every frontier AI lab with an API is a potential target. The era of treating model access as a simple commercial transaction may be coming to an end, replaced by one in which API security is as strategically important as the model weights themselves.

Anthropic has now put names, numbers, and forensic detail behind accusations that the industry had only whispered about for months. Whether that evidence galvanizes the coordinated response the company is calling for — or simply accelerates an arms race between distillers and defenders — may depend on a question no classifier can answer: whether Washington sees this as an act of espionage or just the cost of doing business in an era when intelligence itself has become a commodity.

Jeff Bezos reportedly wants $100 billion to buy and transform old manufacturing firms with AI

Jeff Bezos is reportedly seeking $100 billion for a new fund that would buy up companies in major industrial sectors and, ultimately, modernize and automate them with AI, according to sources cited by The Wall Street Journal.

The effort is related to Bezos’ AI startup, Project Prometheus. Bezos, whose involvement with the company was originally reported in November, is serving as co-founder and co-CEO, alongside former Google executive Vik Bajaj.

Prometheus, which launched with $6.2 billion in funding, is focused on creating high-level AI models to improve manufacturing and engineering in aerospace, automotive, and other sectors. The new manufacturing fund will support that mission by buying up companies that will ultimately use Prometheus’ models.

According to the WSJ, Bezos recently traveled to Singapore and the Middle East in his mission to raise funds for the effort. The plan is to acquire companies in areas like aerospace, chipmaking, and defense.

TechCrunch reached out to Bezos via Amazon for more information.

Sega Genesis Finally Gets Long-Awaited Stock Ticker App 37 Years After Launch

Until now, if you were seated at your Sega Genesis and wanted to check your stock portfolio, you were out of luck. You had to get a smartphone, or a computer, or maybe even a television to look up stock prices and understand your financial position. Thankfully, though, Sega’s neglect of its hero platform has finally been corrected. [Mike Wolak] has given the 16-bit console the real-time stock ticker it so desperately needed. 

The build runs on a MegaWiFi cartridge, which uses an ESP8266 or ESP32 microcontroller to add WiFi communication to the Sega Genesis (or Mega Drive). [Mike] wrote a custom program for the platform that queries the Finnhub HTTPS API and displays live stock prices via the Genesis’s Video Display Processor. It presents the data in a clean console-like interface that would be familiar to users of other 16-bit machines from this era, though seeing so much textual output would have been uncommon.

By default, the stock ticker is set to show prices for major tech stocks, but you can set it up to display any major symbol available in the Finnhub data stream. You can configure up to eight custom stocks and input your holdings, and the software will calculate and display your net worth in real time.
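[Mike]’s implementation runs as C code on the console itself, so the snippet below is only a rough Python illustration of the data flow: one call to Finnhub’s public quote endpoint per symbol, then the net-worth arithmetic. The symbols, holdings, and API key are placeholders.

    # Rough illustration of the ticker's data flow; the real project does
    # this in C on the Genesis. Symbols, holdings, and key are placeholders.
    import requests

    API_KEY = "your-finnhub-api-key"
    HOLDINGS = {"AAPL": 10, "MSFT": 5, "NVDA": 2}  # symbol -> share count

    def current_price(symbol):
        # Finnhub's quote endpoint returns 'c' as the current price
        resp = requests.get(
            "https://finnhub.io/api/v1/quote",
            params={"symbol": symbol, "token": API_KEY},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()["c"]

    def net_worth(holdings):
        # Sum of price * shares across all configured symbols
        return sum(current_price(s) * n for s, n in holdings.items())

    if __name__ == "__main__":
        for symbol, shares in HOLDINGS.items():
            print(f"{symbol}: {shares} x ${current_price(symbol):.2f}")
        print(f"Net worth: ${net_worth(HOLDINGS):,.2f}")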

All the files are available for those eager to monitor their portfolios on a Sega, as the financial gods intended. [Mike] notes it took a little work to get this project over the line, particularly as the ESP32-C3 doesn’t support HTTPS with stock firmware. A few other hacks were needed to keep the Genesis updating the screen during HTTP queries, too.

If you have a concentrated portfolio and a spare Sega Genesis, this could be a fun retro way to keep an eye on your holdings. Alternatively, you might prefer to go the classic paper tape route.

SK Hynix boss says the memory chip shortage is going to last until 2030

According to recent statements by SK Group chairman Chey Tae-won, ongoing issues in the memory and silicon supply chain are unlikely to improve for another four to five years. SK Group owns SK Hynix, the world’s third-largest semiconductor manufacturer and an integrated device manufacturer with in-house foundry capabilities. While SK…

Google finally bringing Gemini app to Mac after Apple partnership

As everyone waits for the new Apple Foundation Models trained by Gemini, Google is pushing ahead on bringing a native Gemini app to the Mac. It’ll be similar to those offered by Anthropic and OpenAI.

[Image: Macs could finally get a Gemini app]

Apple and Google have always been uneasy partners. Google Maps predated Apple Maps on iPhone, and Google is always slow to adopt new Apple APIs in its apps.
Some of that has shifted in Apple’s favor since the Gemini partnership was announced. First, a native YouTube app was finally released on Apple Vision Pro, and now Gemini is getting a native app on Mac.

Six women to follow in the vibrant engineering space

If you aspire to a career in engineering then make sure you are keeping an eye on the professional lives of these six women.

During the month of March, SiliconRepublic.com is paying particular attention to careers and skills in the engineering space, and what better way to continue that coverage than with an exploration of some of the most exciting women in this field.

The following engineers have contributed greatly to their industries, through their work, discoveries, builds, and advocacy for themselves and others. 

Áine Brazil

A managing principal at structural engineering company Thornton Tomasetti, Salthill’s Áine Brazil holds a bachelor’s degree in engineering from the University of Galway and a master’s degree in engineering from the Imperial College of Science and Technology in London. She was the first president of the Structural Engineers Association of New York and is a member of the American Concrete Institute, the American Society of Civil Engineers and the Institute of Engineers in Ireland. 

In her 30-plus-year career she has overseen several crucial projects; for example, she led the structural engineering team for the design of more than three million square feet of high-rise office development in the Times Square area, as well as the expansion of New York Hospital spanning the FDR highway, the 60-storey 731 Lexington Avenue mixed-use project, and the Nationwide Arena in Columbus, Ohio.

She has been included on the list ‘New York’s 100 Most Influential Women in Business’, by Crain’s New York Business, and has authored numerous technical papers and lectured at universities throughout the US including Cornell, Princeton, and Columbia. 

Justine Butler

A chemical engineer with more than 18 years of experience in the pharmaceutical space, Justine Butler is currently the director of engineering at Jacobs Life Sciences for Ireland, the UK and the Nordic region. She has significant experience in leading teams and is responsible for the engineering design of a wide-ranging project portfolio. 

She is the first woman to hold this position at the organisation and is also among its youngest people working in a leadership capacity in her region. Butler was also the first young engineer to serve on Engineers Ireland’s council and executive committee after first chairing its young engineers committee. In 2024, she was honoured with the ‘Women in STEM – Engineering’ award, given by Engineers Ireland. 

Dervilla Mitchell

A former deputy chair of Arup Group, Dervilla Mitchell is a civil engineer with a background in the design of the built environment. She has led a number of major projects, including the Athletes’ Village for the London 2012 Olympics, Terminal 5 at Heathrow airport, Dublin airport’s Terminal 2, and Abu Dhabi airport’s Midfield Terminal. She is the co-chair of the Royal Academy of Engineering and also chairs the UK’s National Engineering Policy Centre’s decarbonisation group. 

Georgina Molloy

A programme manager for energy performance at the Sustainable Energy Authority of Ireland (SEAI), Georgina Molloy is also the chair of the Engineers Ireland Women in Engineering group. As part of her role there, Molloy chairs a committee of 12 engineers with the aim of achieving better gender balance and supporting women who have chosen a career in the engineering space.

She is a chartered structural engineer with more than 20 years of experience working in consulting engineering practices large and small, and has spent a considerable portion of that time in scaffold and temporary works design and construction. Molloy is particularly passionate about working on refurbishment projects and enjoys being part of teams that bring old builds of historical importance back to life.

Norah Patten

Set to be the first Irish person in space, Norah Patten is an aerospace engineer and bioastronautics researcher at the International Institute for Astronautical Sciences (IIAS). She has received multiple recognitions for her contributions to the industry, such as a 2015 ’emerging space leader’ award, an appearance in Limerick’s ‘top 40 under 40’ for 2018, and an IIAS ‘science educator’ accolade, among others.

She is a regular keynote speaker, an author and an advocate for other women in the industry. Later in the year, Patten will join Kellie Gerardi of the US and Dr Shawna Pandya of Canada as crew members aboard Virgin Galactic’s new Delta vehicle for a space mission organised by the US-based IIAS.

Anisa Pjetri

A former senior structural engineer and project manager at AtkinsRéalis, Anisa Pjetri is now an associate director at the company. She earned a BSc in civil engineering in Albania and an MSc in structural engineering in London before relocating to Ireland, where she earned a chartership from Engineers Ireland and took up a position at AtkinsRéalis.

Pjetri has 12 years of international expertise in designing, planning and overseeing the construction of a wide variety of buildings, structures and infrastructure for residential, commercial, medical, industrial and hospitality projects. In 2025, she was a finalist for the Chartered Engineer of the Year award. 

GeekWire Awards: Billion-dollar deals, rare IPO, pharma pact, and mega-round vie for Deal of the Year

The finalists for Deal of the Year at the 2026 GeekWire Awards. Clockwise from top left: Temporal co-founders Samar Abbas and Maxim Fateev; Protect AI co-founders Badar Ahmed, Daryan Dehghanpisheh, and Ian Swanson; Kestra Medical Technologies’ cardiac monitoring device; the ribbon cutting at OpenAI’s new Bellevue office, home to the former Statsig team; an Omeros lab. (GeekWire / Company Photos)

The finalists for Deal of the Year at the 2026 GeekWire Awards include two major acquisitions, a landmark licensing deal, a big funding round, and a rare IPO — collectively representing billions of dollars in transactions.

This award, presented by Wilson Sonsini, recognizes the transactions that made the biggest impact in tech and innovation in Seattle and the Pacific Northwest. The Deal of the Year finalists this year are Kestra Medical Technologies, Omeros, Protect AI, Statsig, and Temporal.

Now in its 18th year, the GeekWire Awards is the premier event recognizing the top leaders, companies and breakthroughs in Pacific Northwest tech, bringing together hundreds of people to celebrate innovation and the entrepreneurial spirit. It takes place May 7 at the Showbox SoDo in Seattle.

Seattle startup Lexion was the Deal of the Year winner last year after being acquired by Docusign for $165 million, a successful exit for the AI-powered contract management company, which got its start in 2018 at the Allen Institute for AI in Seattle.

Continue reading for information on the Deal of the Year finalists, who were chosen by a panel of independent judges from community nominations. You can help pick the winner by casting your ballot; voting runs through April 10.

Statsig was acquired by OpenAI for $1.1 billion in an all-stock deal announced in September, in a surprise exit for the Bellevue, Wash.-based product experimentation platform. The deal also landed Statsig CEO Vijaye Raji, a former Facebook engineering leader, in the newly created role of CTO of Applications at OpenAI. 

Founded in 2021, Statsig powers A/B testing, feature flagging, and real-time decisioning for major companies. It had raised more than $153 million, including a $100 million Series C round at a $1.1 billion valuation just months before the acquisition, with backing from Sequoia and Madrona. Statsig now forms the nucleus of OpenAI’s new Bellevue engineering office.

Kestra Medical Technologies raised $202 million in its IPO in March 2025, pricing shares above the expected range in a strong debut for the Kirkland, Wash.-based maker of wearable cardiac devices. Shares began trading on the Nasdaq at more than 30% above the IPO price. 

Founded in 2014, Kestra makes devices that detect and respond to sudden cardiac arrest. Its IPO marked the end of a long dry stretch with no traditional IPOs for Seattle-area tech companies since 2021.

Omeros Corporation, a Seattle-based clinical-stage biopharmaceutical company, struck a deal worth up to $2.1 billion with pharmaceutical giant Novo Nordisk for zaltenibart, its clinical-stage drug candidate in development for rare blood and kidney disorders. Announced in October, the agreement gives Novo Nordisk exclusive global rights to develop and commercialize the drug. 

Founded in 1994 by orthopedic surgeon Gregory A. Demopulos, who still serves as CEO, Omeros went public in 2009 and recently received FDA approval for its lead drug Yartemlea, the first therapy for a rare post-transplant complication.

Protect AI, a Seattle startup that helps companies secure machine learning systems, agreed to be acquired by cybersecurity giant Palo Alto Networks in a deal announced in April. Terms were not disclosed, but sources familiar with the deal said it was valued north of $500 million. 

Founded in 2022 by former engineering leaders at Amazon and Oracle, Protect AI serves Fortune 500 companies across finance, healthcare, and government. Palo Alto Networks said the deal will bolster its ability to secure the new attack surfaces created by AI.

Temporal raised $300 million in a Series D round at a $5 billion valuation in February, doubling its valuation from just months earlier. The Bellevue, Wash.-based company builds open-source software and a cloud service that helps companies run complex workflows reliably — what it calls “durable execution.” The rise of AI agents has amplified demand for its platform, with customers including OpenAI, ADP, and Block. 

Founded in 2019 by Samar Abbas and Maxim Fateev, who previously built an internal orchestration engine at Uber, Temporal has raised $650 million to date, with backing from Andreessen Horowitz, Sequoia, and Madrona.

Astound Business Solutions is the presenting sponsor of the 2026 GeekWire Awards. Thanks also to gold sponsors Amazon Sustainability, Baird, BECU, JLL, First Tech and Wilson Sonsini, and silver sponsor Prime Team Partners.

The event will feature a VIP reception, sit-down dinner and fun entertainment mixed in. Tickets go fast, and a limited number of half-table and full-table sponsorships are available. Contact events@geekwire.com to reserve a spot for your team today.

Sketchy iPhone Fold launch timing shared by analyst with shaky history

An analyst at Barclays believes that if the iPhone Fold launches in 2026, it will be in December, months after the iPhone 18 Pro. He’s the only one saying this.

[Image: iPhone Fold could launch in December]

Many rumors point to the iPhone Fold launching in late 2026 alongside the iPhone 18 Pro, though no parts have leaked yet. It will be an incredibly expensive device and Apple’s first attempt at a foldable.
A note from Barclays analyst Tim Long, viewed by MacRumors, suggests the iPhone Fold will release in December 2026. He offers no detail as to why it would arrive three months after its announcement beyond citing supply chain sources.
Rumor Score: 🙄 Unlikely

Tyndall’s Peter O’Brien awarded for contributions to chip sector

Peter O’Brien has received the 2025 Semi European Award, which recognises those who have had an impact on global semiconductor innovation.

Tyndall National Institute’s photonics expert Prof Peter O’Brien has been honoured by the global semiconductor industry for his work in the sector.

O’Brien is the head of research for photonics packaging and systems integration at the University College Cork-based deep-tech institute. He has received the 2025 Semi European Award, which recognises leaders whose work has had a significant impact in global semiconductor innovation.

Semi is a global industry association representing companies and research organisations across the semiconductor and electronics development and manufacturing supply chain.

O’Brien has been recognised for his contributions to photonics electronic packaging, his leadership in Europe’s semiconductor pilot lines, and his work in developing specialised training programmes for up-and-coming researchers in the field.

“It is a great honour to receive the Semi European Award for 2025,” said O’Brien. “Through this award, I would like to recognise my many collaborators around the world. Working together, we accelerate research and development, turning early ideas into impactful breakthroughs.”

Prof William Scanlon, the CEO of Tyndall, added: “Prof O’Brien’s leadership and vision have placed Tyndall at the forefront of advanced packaging globally, and his contributions are shaping Europe’s semiconductor future.”

Meanwhile, Eric Beyne, a senior fellow at the Belgium-based nanoelectronics and digital tech research and innovation hub IMEC, received the Special Service Award at the ceremony earlier this month for his contributions to high-density interconnection and packaging technologies, and helping advance next-gen semiconductor integration techniques.

“We are honoured to recognise Peter O’Brien and Eric Beyne for their outstanding contributions to advancing semiconductor innovation and strengthening Europe’s technology ecosystem,” said Semi Europe president Laith Altimime.

“Their leadership and vision have helped drive transformative progress across the industry while inspiring the next generation of engineers and researchers, reflecting the spirit of collaboration and innovation that continues to propel the semiconductor industry toward a more resilient, digital and sustainable future.”

Tyndall has made several major announcements this year. The Cork-based research institute recently announced a €100m expansion project.

It is also co-ordinating I-C3, Ireland’s National Competence Centre in Semiconductors, leading Ireland in a major €50m European initiative called Photonics for Quantum, and supporting a new €2.5bn pilot line to develop the EU’s semiconductor leadership.

AI analytics agents need guardrails, not more model size

Picture a VP of finance at a large retailer. She asks the company’s new AI analytics agent a simple question: “What was our revenue last quarter?” The answer comes back in seconds.

Confident.

Clean.

Wrong.

That exact scenario happens more frequently than many organizations would care to admit. AtScale, which helps organizations deploy governed analytics environments with semantic consistency, has found that increasing model parameterization alone cannot address the AI governance and context issues enterprises face.

When AI systems query inconsistent or ungoverned data, adding more model complexity doesn’t contain the problem; it compounds it. Organizations across industries have moved quickly to develop agentic AI, deploying systems that analyze data, generate insights, and trigger automated workflows. In response, model vendors have raced to keep pace with larger parameter counts, increased computing power, and additional features. The underlying assumption has been that as long as the model gets large enough, the eventual result will be reliable.

However, there are indications that this assumption may not hold up. Recent TDWI research found that nearly half of respondents characterized their AI governance initiatives as either immature or very immature. This may have more to do with data lineage and the business definitions on which these models are based than with the models’ capabilities.

Why bigger models don’t solve governance

The AI industry tends to operate on an unexamined assumption about what drives better performance: as we build more advanced models, they will somehow self-correct their performance errors. In enterprise analytics, that assumption can fall apart quickly.

While scale may improve the breadth of reasoning in a model, it doesn’t automatically enforce which definition of gross margin the business has agreed to use. It doesn’t resolve metric inconsistencies that have lived in separate dashboards for years. And it also doesn’t produce traceable lineage on its own.

Governance problems don’t resolve at scale. Business rules buried in individual tools, inconsistent definitions across teams, and outputs with no audit trail are structural issues, and a larger model doesn’t fix structure. It just produces unreliable answers more fluently.

At AtScale, there’s a consistent theme among our clients: When inconsistent data definitions followed organizations into their AI layer, the problems didn’t stop there. They propagated forward, typically at greater speed and with less transparency than the previous layer had offered.

Performance and responsibility are separate jobs. A model reasons. A governance layer defines what the model reasons over, constrains how it applies business logic, and ensures outputs can be traced back to a source of record. One cannot substitute for the other.

The real risk: Unconstrained agents in enterprise environments

The problem with AI agents is seldom the model itself. It’s what the model is working with, and whether anyone can see what it did.

Without a common context, AI agents may read the same data differently on different systems. In large enterprises, even small differences in definitions can lead to different results. Structural risks typically stem from four main causes:

  • Ambiguous definitions: Agents pull from sources where the same metric can mean different things to different teams.
  • Conflicting metrics: Two agents answer the same question differently, and it isn’t clear which one is right.
  • Opaque reasoning: Outputs arrive without a clear lineage showing how a decision was made.
  • Audit gaps: When outputs can’t be traced back to a governed source of record, there’s no reliable way to catch errors, assign accountability, or course-correct.

These are not signs that AI is not working. They show that the infrastructure around AI hasn’t kept up.

What guardrails actually mean in AI analytics

Guardrails are often viewed as a limitation. However, in many cases, guardrails are the very conditions that permit AI agents to operate with greater confidence.

Guardrails can help align AI-generated outputs with established business logic. They also create a structure in which autonomous agents can operate; this way, as autonomy increases, so does reliability. In analytics, guardrails typically exist in several specific formats:

  • Shared data definitions: A single definition of terms such as revenue, churn, or margin that are shared across all systems.
  • Business logic constraints: Rules governing how calculations are to be performed, regardless of the tools or agents performing those calculations.
  • Lineage visibility: The capability to identify where any output originated from.
  • Access controls: Defined permissions determining what data an agent can query.
  • Standardization of metrics: Consistent definitions applicable across departments and platforms.

The intention isn’t to impede AI’s performance. It’s to offer AI a base upon which it can stand.

The role of the semantic layer as a constraint framework

A semantic layer sits between data and the applications and AI agents that use it, defining business concepts, implementing logical processes, and providing a common framework of terms for all applications and AI agents to draw upon.

A semantic layer does not manipulate or duplicate data; it defines what the data represents. By asking questions of a governed semantic layer rather than the base tables, AI agents can generate output based on business-defined logic rather than on inference. That distinction becomes particularly important when multiple AI agents across multiple systems must produce consistent outputs.
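As a toy sketch of the idea (this is not AtScale’s API; every name below is invented), a semantic layer can be thought of as a governed registry that resolves a business term to a single definition, so every agent that asks for “revenue” computes it the same way and can cite where the number came from.

    # Toy sketch of a semantic layer: business terms resolve to one governed
    # definition with lineage. Not AtScale's API; all names are invented.
    SEMANTIC_LAYER = {
        "revenue": {
            "sql": "SELECT SUM(order_total - refunds) FROM orders "
                   "WHERE status = 'settled'",
            "owner": "finance",
            "version": 3,
        },
        "churn_rate": {
            "sql": "SELECT COUNT(cancelled_at) * 1.0 / COUNT(*) "
                   "FROM subscriptions",
            "owner": "customer-success",
            "version": 7,
        },
    }

    def resolve_metric(name):
        """Agents ask for a metric by business term, never by raw table."""
        try:
            definition = SEMANTIC_LAYER[name]
        except KeyError:
            raise ValueError(f"'{name}' is not a governed metric") from None
        # Every answer carries lineage back to the definition it used
        return {
            "query": definition["sql"],
            "lineage": f"{name} v{definition['version']} "
                       f"(owner: {definition['owner']})",
        }

    print(resolve_metric("revenue")["lineage"])  # revenue v3 (owner: finance)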

From AtScale’s perspective, the semantic layer serves as a context boundary that can help ensure AI agents interpret data according to shared business definitions. The semantic layer is less a guardrail than a common language that ensures all systems operate with a shared understanding.

Governance is an architectural question, not a model question

Enterprise organizations realize that AI governance is less about building the largest model and more about making an environment where the chosen model can work well. A well-designed and governed architecture (with shared definitions for concepts, traceable logic, and a shared context across all systems) will likely deliver better, more reliable results than a larger model running in an uncontrolled data environment.

Scaling models without improving semantic clarity tends to add complexity, not reduce it. As each additional tool, system, or workflow is added to an uncontrolled environment, the opportunities for divergence increase.

In this sense, responsible AI is an infrastructure challenge. Organizations with successful AI deployments treat the meaning of their data as a design decision, made before the model is even chosen.

Economic and operational implications

Governance gaps do not stay abstract for long. They tend to show up in the budget.

Ambiguity in data meaning may increase operational friction: agents that produce inconsistent outputs require human review, reconciliation cycles, and rework that compounds across teams and tools. When lineage is unclear, audits cost more. Retrofitting controls after deployment typically costs more than building the right architecture from the start.

In complex enterprise settings, costs can show up in predictable ways: redundant validation when outputs don’t match across systems, excess compute triggered by unclear queries, and slower analysis as teams pause to figure out which answer is actually reliable. Clear semantic constraints can mean fewer validation cycles, and that operational value is becoming easier to measure.

The path forward: Constrained autonomy

AI agents aren’t a future consideration; they’re already in use. What’s still catching up is the infrastructure around them. Agents without clear context and constraints tend to operate beyond what the organization can actually govern. That gap doesn’t close on its own.

The differentiator in enterprise AI, AtScale contends, won’t be model scale; it will be the clarity of the environment models operate in. As agents become more common in business workflows, how well the semantic layer is defined may matter more than how large the model is.

This shift toward governed context and constrained autonomy is explored in more detail in AtScale’s 2026 State of the Semantic Layer report, which examines how open standards, interoperability, and semantic governance are shaping the next phase of enterprise intelligence.

DoorDash will start paying gig workers for creating content to train AI models

DoorDash has launched a new option for its gig economy workers to earn some extra cash. The delivery service introduced Tasks, which it describes as “short activities Dashers can complete between deliveries or in their own time.” Examples include taking pictures of restaurant dishes or recording video of unscripted conversations in languages other than English. These materials will be used to train artificial intelligence and robotics models.

A representative from DoorDash told Bloomberg News that it will use Tasks content for evaluating its in-house AI models as well as those made by its partner companies in retail, insurance, hospitality and tech. DoorDash is piloting a standalone app for Tasks where Dashers will submit their content. The blog post notes that pay will be displayed upfront, and compensation will vary based on the complexity of the activity.

This idea isn’t new. We’ve seen other startups in AI and robotics offering payment for content filmed by regular people. Considering how many lawsuits are underway against AI companies that have already benefited from unauthorized use of copyrighted materials, at least this approach lets people be directly compensated for training content.
