Cayin N8iii Flagship DAP Announced: Tube Design Returns to Take on Astell&Kern, FiiO, and iBasso

Cayin has officially taken the wraps off the N8iii, its next generation flagship digital audio player (DAP), and this time there is enough real information to move past speculation. The timing matters. Astell&Kern continues to dominate the premium tier with refined hardware and software, FiiO has become far more aggressive at the top end, and iBasso keeps pushing output and modular flexibility. Cayin is no longer competing in a niche it helped create. It is now part of a very crowded field where execution matters more than ambition.

Cayin is positioning the N8iii as a limited release with just 500 units worldwide and a suggested retail price of $3,999, placing it squarely in the upper tier of the DAP market and making it clear this is not intended for a broad audience.

A Flagship That Sticks With Tubes

Cayin is continuing with its hybrid tube and solid state approach. The N8iii introduces a Triple Timbre system with Tube Classic, Tube Modern, and Solid State modes. This is less about novelty and more about giving users different tonal options depending on the headphone and music. Cayin has been consistent here. It is one of the few brands willing to deal with the complexity of tube integration in a portable device, even if that comes with tradeoffs in size, heat, and battery life.

Power Output and Amplifier Design

The N8iii offers up to 900 milliwatts single ended and 1285 milliwatts balanced output, which translates to roughly 0.9 watts and 1.285 watts respectively. That is enough power for a wide range of headphones, including many planar magnetics and most dynamic designs in the portable category. It should not have any issue with efficient or moderately demanding full size headphones.

Where things get less certain is with high impedance dynamic headphones. Models in the 300 to 600 ohm range often require voltage swing as much as current, and Cayin has not provided enough detail yet to determine how the N8iii handles that. It is likely usable, but whether it offers full control and headroom is still an open question.
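
To make the voltage question concrete: the RMS voltage needed to deliver a given power into a load follows V = sqrt(P × R). A back-of-envelope sketch (the impedances below are illustrative, not specific headphone models, and these are not Cayin's published specs):

```python
import math

# Back-of-envelope only, not Cayin's specs: the RMS voltage needed to
# deliver power P (watts) into a load R (ohms) is sqrt(P * R).
def vrms_needed(power_w: float, impedance_ohm: float) -> float:
    return math.sqrt(power_w * impedance_ohm)

# 100 mW is already loud for most dynamic headphones; the impedances
# are typical examples, not specific models.
for r in (32, 300, 600):
    print(f"{r:>3} ohm: {vrms_needed(0.1, r):.1f} Vrms for 100 mW")
# A 600 ohm load needs about 7.7 Vrms, more swing than many portables deliver.
```

This is why a wattage figure alone does not settle the question for 300 to 600 ohm loads.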

It is also worth stating the obvious. This is not designed for electrostatic headphones. That requires a completely different amplification approach, and Cayin is not trying to solve that problem here.

Cayin includes triple amplifier modes and dual output modes, which gives users some flexibility in how the player behaves, but it also adds complexity that will need to be justified in real world use.

DAC & Platform

Cayin is moving forward with a new flagship AKM DAC architecture, although full details have not been confirmed. That will appeal to listeners who prefer the AKM presentation, especially after several years where ESS dominated the category.

The N8iii runs on a Snapdragon 665 platform with 8GB of RAM and 256GB of internal storage. That is not cutting edge by smartphone standards, but it is in line with what most high end DAPs are using and should be sufficient for streaming and local playback without performance issues.

Software & Battery

The player uses a customized Android audio system with DTA, allowing SRC bypass for bit perfect playback across supported apps. This is expected at this level and Cayin is in line with the rest of the market here.

Battery capacity is listed at 13,500mAh with PD2.0 fast charging. That is a large battery, which makes sense given the use of tubes and relatively high output power. Actual runtime will depend on how those features are used, and Cayin has not provided estimates yet.


The Competition

At this price and level, Cayin is up against established competition. Astell&Kern offers more polished industrial design and a mature user experience. FiiO is delivering strong performance with competitive pricing. iBasso continues to push output power and modular flexibility. These are complete products that balance sound quality with usability.

Cayin’s approach remains more specialized. The N8iii focuses on offering a different listening experience rather than trying to be the most practical option.

Cayin N8ii vs N8iii: What’s Actually Changed

Looking at the available data, the jump from the N8ii to the N8iii is not about reinventing the concept. Cayin is refining it, adding flexibility, and pushing output a bit further while trying to clean up some of the practical limitations that came with the earlier design.

The N8ii already established the blueprint. Snapdragon 660 platform, 6GB of RAM, 128GB storage, Android 9, ROHM DACs, and dual Nutube implementation. It was powerful for its time, but it also felt like a device that prioritized experimentation over usability. Battery life hovered around 8 to 11 hours depending on mode, the chassis was thick and heavy at around 442 grams, and while the output was respectable, it was not class leading.

On the output side, the N8ii delivered up to 420mW at 16 ohms from the single ended output in standard mode, and up to 720mW in its higher power setting. Balanced output pushed that further to 760mW standard and up to 1200mW in its higher power mode. That translates to roughly 0.76W to 1.2W balanced depending on how hard you push it. In practical terms, it could handle most headphones reasonably well, but it was not the last word in authority, especially with higher impedance dynamics where voltage swing matters more than raw wattage.

The N8iii moves that forward, but not dramatically. Cayin is now quoting up to 900 milliwatts single ended and 1285 milliwatts balanced, roughly 0.9 and 1.285 watts respectively. As covered earlier, that is enough power for a wide range of headphones, including many planar magnetics and most dynamic designs in the portable category, and efficient or moderately demanding full size headphones should pose no issue.

Where things remain uncertain is with high impedance dynamic headphones. The increase in output is incremental, not transformative, and Cayin has not provided detailed voltage specs yet. That means headphones in the 300 to 600 ohm range may still be usable, but not necessarily driven to their full potential. And just to be clear, neither the N8ii nor the N8iii is designed for electrostatic headphones, so that remains outside the scope entirely.

The more meaningful change is in flexibility. The N8ii gave you tube or solid state. The N8iii expands that into Triple Timbre with Tube Classic, Tube Modern, and Solid State. That suggests Cayin is focusing more on user tuning and adaptability rather than just raw performance gains. It is a shift toward giving listeners more control over presentation depending on the headphone pairing.

Internally, there is also a shift in direction. The N8ii relied on dual ROHM BD34301 DACs, which offered a certain tonal character that some preferred over ESS implementations. The N8iii is moving to a new flagship AKM architecture, which likely signals a different tuning approach. That is not inherently better or worse, but it does indicate Cayin is responding to market preferences and the return of AKM supply.

Platform and usability are also getting a modest update. The N8iii moves to 8GB of RAM and 256GB of storage, along with a Snapdragon 665. That is not cutting edge, but it is an improvement and should make the device feel less constrained with modern streaming apps. The inclusion of a customized Android audio system with SRC bypass brings it in line with what competitors have already been doing, rather than pushing ahead.

Battery is another area where Cayin appears to be compensating for its design choices. The N8ii used a 10,000mAh battery rated at 38Wh and delivered between roughly 8 to 11 hours depending on mode. The N8iii increases that to 13,500mAh and adds PD fast charging. That suggests Cayin is trying to offset the power demands of tubes and higher output rather than fundamentally improving efficiency.
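
The N8ii's published figures imply a nominal pack voltage near 3.8 V (38 Wh from 10,000 mAh). Assuming, purely for illustration, that the N8iii uses similar cells (Cayin has not confirmed this), its capacity works out to roughly:

```python
# Assumption: the N8iii uses the same ~3.8 V nominal pack voltage implied
# by the N8ii's published 10,000 mAh / 38 Wh figures. Not confirmed by Cayin.
nominal_v = 38 / 10.0             # Wh divided by Ah gives 3.8 V nominal
n8iii_wh = 13.5 * nominal_v       # 13,500 mAh = 13.5 Ah
print(f"estimated capacity: ~{n8iii_wh:.0f} Wh")  # ~51 Wh
```

That would be roughly a third more energy on board, which tracks with offsetting tube and output-power demands rather than improving efficiency.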

The rest of the design philosophy remains consistent. Both devices are heavy, complex, and not particularly concerned with being pocket friendly. Both are built around the idea that a portable device can approximate a desktop listening experience if you are willing to accept the tradeoffs.


The Bottom Line

The Cayin N8iii builds on what the company has been doing with its flagship line. It keeps the tube hybrid concept, adds more flexibility in tuning, and delivers enough power for most headphones people are likely to use with a portable device. It is not intended to cover every use case. High impedance dynamics may still require more careful matching, and electrostatic headphones are not part of the equation.

At nearly $4,000 USD and with only 500 units available, this is a focused product for a specific audience. The competition is strong and more well rounded than it used to be. Cayin is relying on differentiation and sound tuning to justify its place at the top. Whether that is enough will depend on how it performs outside of the spec sheet.

What the charts from the previous model make clear is how much detail still has not been confirmed for the N8iii. The N8ii offered a very complete set of physical connections, including both 3.5mm single ended and 4.4mm balanced headphone outputs, along with matching line outputs on both connections. It also included digital outputs over USB and I2S via a mini HDMI connection, plus coaxial S/PDIF. That made it more than just a portable player. It could function as a transport or DAC in a larger system. With the N8iii, Cayin has not yet clarified whether all of those outputs carry over unchanged, or if anything has been added or removed. Given how important that flexibility is to this category, that is not a small omission.

Bluetooth is another area where details matter. The N8ii supported a wide range of codecs including LDAC, UAT, AAC, and SBC, with both transmit and receive capability. That placed it ahead of many competitors at the time, especially with UAT support for higher bandwidth wireless audio. So far, Cayin has not confirmed the codec support for the N8iii. If it remains unchanged, it is still competitive. If it has been updated, that could be a meaningful improvement. If it has been simplified, that would be a step backward. Right now, we simply do not know.

The digital section is where the lack of detail becomes harder to ignore. The N8ii supported PCM up to 32-bit/768kHz and DSD512 over USB and I2S, along with DoP support over coaxial. It could function as a USB DAC across multiple platforms and offered asynchronous USB audio with broad compatibility. Those are not niche features. They are part of what makes a flagship DAP viable as a hub in a desktop or transport based system. Cayin has confirmed a new DAC architecture for the N8iii, but has not yet outlined the full range of supported formats, digital input and output capabilities, or whether its USB DAC functionality has been expanded or refined.

MSRP: $3,999 (launch date not confirmed at en.cayin.cn).

Arcee’s new, open source Trinity-Large-Thinking is the rare, powerful U.S.-made AI model that enterprises can download and customize

The baton of open source AI models has been passed on between several companies over the years since ChatGPT debuted in late 2022, from Meta with its Llama family to Chinese labs like Qwen and z.ai. But lately, Chinese companies have started pivoting back towards proprietary models even as some U.S. labs like Cursor and Nvidia release their own variants of the Chinese models, leaving a question mark about who will originate this branch of technology going forward.

One answer: Arcee, a San Francisco based lab, which this week released Trinity-Large-Thinking, a 399-billion-parameter, text-only reasoning model under the uncompromisingly open Apache 2.0 license, allowing full customizability and commercial usage by anyone from indie developers to large enterprises.

The release represents more than just a new set of weights on AI code sharing community Hugging Face; it is a strategic bet that “American Open Weights” can provide a sovereign alternative to the increasingly closed or restricted frontier models of 2025.

This move arrives precisely as enterprises express growing discomfort with relying on Chinese-based architectures for critical infrastructure, creating a demand for a domestic champion that Arcee intends to fill.

As Clément Delangue, co-founder and CEO of Hugging Face, told VentureBeat in a direct message on X: “The strength of the US has always been its startups so maybe they’re the ones we should count on to lead in open-source AI. Arcee shows that it’s possible!”

Genesis of a 30-person frontier lab

To understand the weight of the Trinity release, one must understand the lab that built it. Based in San Francisco, Arcee AI is a lean team of only 30 people.

While competitors like OpenAI and Google operate with thousands of engineers and multibillion-dollar compute budgets, Arcee has defined itself through what CTO Lucas Atkins calls “engineering through constraint”.

The company first made waves in 2024 after securing a $24 million Series A led by Emergence Capital, bringing its total capital to just under $50 million. In early 2026, the team took a massive risk: they committed $20 million—nearly half their total funding—to a single 33-day training run for Trinity Large.

Utilizing a cluster of 2048 NVIDIA B300 Blackwell GPUs, which provided twice the speed of the previous Hopper generation, Arcee bet the company’s future on the belief that developers needed a frontier model they could truly own.

This bet-the-company move was a masterclass in capital efficiency, proving that a small, focused team could stand up a full pipeline and stabilize training without endless reserves.

Engineering through extreme architectural constraint

Trinity-Large-Thinking is noteworthy for its extreme sparsity. While the model houses 400 billion total parameters, its Mixture-of-Experts architecture means that only about 13 billion, roughly 3 percent of the total, are active for any given token.
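
The mechanics behind that sparsity can be sketched with a toy top-k router. Every size here is hypothetical and far smaller than the real model, and this is not Arcee's code:

```python
import numpy as np

# Toy Mixture-of-Experts layer: route each token to its top-k experts.
# All sizes here are illustrative, not Trinity's actual configuration.
def moe_forward(x, gate_w, experts, k=2):
    logits = x @ gate_w                        # one routing score per expert
    top = np.argsort(logits)[-k:]              # indices of the k best experts
    w = np.exp(logits[top] - logits[top].max())
    w /= w.sum()                               # softmax over chosen experts
    # Only these k expert matrices are touched for this token; the rest of
    # the layer's parameters stay idle, which is where the speed comes from.
    return sum(wi * (x @ experts[i]) for wi, i in zip(w, top))

rng = np.random.default_rng(0)
d, n_experts, k = 16, 8, 2
gate_w = rng.standard_normal((d, n_experts))
experts = rng.standard_normal((n_experts, d, d))
y = moe_forward(rng.standard_normal(d), gate_w, experts, k)
print(f"expert fraction active per token: {k / n_experts:.0%}")  # 25%
```

Scaled up, the same top-k principle is what lets a model carry hundreds of billions of parameters while computing with only a small fraction of them per token.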

This allows the model to possess the deep knowledge of a massive system while maintaining the inference speed and operational efficiency of a much smaller one—performing roughly 2 to 3 times faster than its peers on the same hardware. Training such a sparse model presented significant stability challenges.

To prevent a few experts from becoming “winners” while others remained untrained “dead weight,” Arcee developed SMEBU, or Soft-clamped Momentum Expert Bias Updates.

This mechanism ensures that experts specialize while tokens are routed evenly across a general web corpus. The architecture also incorporates a hybrid approach, alternating local sliding-window and global attention layers in a 3:1 ratio to maintain performance in long-context scenarios.
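
That 3:1 alternation is easy to picture as a repeating layer schedule. The depth shown here is arbitrary; Arcee has not published the exact layout:

```python
# Hypothetical sketch of a 3:1 local-to-global attention schedule; the
# depth (8 layers) is arbitrary, not the real model's layer count.
def layer_schedule(n_layers: int, local_per_global: int = 3) -> list[str]:
    period = local_per_global + 1
    # Every (local_per_global + 1)-th layer is global; the rest are local.
    return ["global" if (i + 1) % period == 0 else "local"
            for i in range(n_layers)]

print(layer_schedule(8))
# ['local', 'local', 'local', 'global', 'local', 'local', 'local', 'global']
```

The sliding-window layers keep attention cost roughly linear in context length, while the periodic global layers preserve long-range mixing.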

The data curriculum and synthetic reasoning

Arcee’s partnership with fellow startup DatologyAI provided a curriculum of over 10 trillion curated tokens. However, the training corpus for the full-scale model was expanded to 20 trillion tokens, split evenly between curated web data and high-quality synthetic data.

Unlike typical imitation-based synthetic data where a smaller model simply learns to mimic a larger one, DatologyAI utilized techniques to synthetically rewrite raw web text—such as Wikipedia articles or blogs—to condense the information.

This process helped the model learn to reason over concepts and information rather than merely memorizing exact token strings.

To ensure regulatory compliance, tremendous effort was invested in excluding copyrighted books and materials with unclear licensing, attracting enterprise customers who are wary of intellectual property risks associated with mainstream LLMs.

This data-first approach allowed the model to scale cleanly while significantly improving performance on complex tasks like mathematics and multi-step agent tool use.

The pivot from yappy chatbots to reasoning agents

The defining feature of this official release is the transition from a standard “instruct” model to a “reasoning” model.

By implementing a “thinking” phase prior to generating a response—similar to the internal loops found in the earlier Trinity-Mini—Arcee has addressed the primary criticism of its January “Preview” release.

Early users of the Preview model had noted that it sometimes struggled with multi-step instructions in complex environments and could be “underwhelming” for agentic tasks.

The “Thinking” update effectively bridges this gap, enabling what Arcee calls “long-horizon agents” that can maintain coherence across multi-turn tool calls without getting “sloppy”.

This reasoning process enables better context coherence and cleaner instruction following under constraint. This has direct implications for Maestro Reasoning, a 32B-parameter derivative of Trinity already being used in audit-focused industries to provide transparent “thought-to-answer” traces.

The goal was to move beyond “yappy” or inefficient chatbots toward reliable, cheap, high-quality agents that stay stable across long-running loops.

Geopolitics and the case for American open weights

The significance of Arcee’s Apache 2.0 commitment is amplified by the retreat of its primary competitors from the open-weight frontier.

Throughout 2025, Chinese research labs like Alibaba’s Qwen and z.ai (aka Zhipu AI) set the pace for high-efficiency MoE architectures.

However, as we enter 2026, those labs have begun to shift toward proprietary enterprise platforms and specialized subscriptions, signaling a move away from pure community growth.

The fragmentation of these once-prolific teams, such as the departure of key technical leads from Alibaba’s Qwen lab, has left a void at the high end of the open-weight market. In the United States, the movement has faced its own crisis.

Meta’s Llama division notably retreated from the frontier landscape following the mixed reception of Llama 4 in April 2025, which faced reports of quality issues and benchmark manipulation.

For developers who relied on the Llama 3 era of dominance, the lack of a current 400B+ open model created an urgent need for an alternative that Arcee has risen to fill.

Benchmarks and how Arcee’s Trinity-Large-Thinking stacks up to other U.S. frontier open source AI model offerings

Trinity-Large-Thinking’s performance on agent-specific evaluations establishes it as a legitimate frontier contender. On PinchBench, a critical metric for evaluating model capability on autonomous agentic tasks, Trinity achieved a score of 91.9, placing it just behind the proprietary market leader, Claude Opus 4.6 (93.3).

Arcee Trinity-Large-Thinking benchmark comparison chart. Credit: Arcee

This competitiveness is mirrored in IFBench, where Trinity’s score of 52.3 sits in a near-dead heat with Opus 4.6’s 53.1, indicating that the reasoning-first “Thinking” update has successfully addressed the instruction-following hurdles that challenged the model’s earlier preview phase.

The model’s broader technical reasoning capabilities also place it at the high end of the current open-source market. It recorded a 96.3 on AIME25, matching the high-tier Kimi-K2.5 and outstripping other major competitors like GLM-5 (93.3) and MiniMax-M2.7 (80.0).

While high-end coding benchmarks like SWE-bench Verified still show a lead for top-tier closed-source models—with Trinity scoring 63.2 against Opus 4.6’s 75.6—the massive delta in cost-per-token positions Trinity as the more viable sovereign infrastructure layer for enterprises looking to deploy these capabilities at production scale.

When it comes to other U.S. open source frontier model offerings, OpenAI’s gpt-oss tops out at 120 billion parameters; Google’s Gemma is also in the mix (Gemma 4 was just released this week), and IBM’s Granite family is worth a mention despite lower benchmarks. Nvidia’s Nemotron family is notable too, but it consists of fine-tuned and post-trained Qwen variants.

| Benchmark    | Arcee Trinity-Large | gpt-oss-120B (High) | IBM Granite 4.0 | Google Gemma 4 |
|--------------|---------------------|---------------------|-----------------|----------------|
| GPQA-D       | 76.3%               | 80.1%               | 74.8%           | 84.3%          |
| Tau2-Airline | 88.0%               | 65.8%*              | 68.3%           | 76.9%          |
| PinchBench   | 91.9%               | 69.0% (IFBench)     | 89.1%           | 93.3%          |
| AIME25       | 96.3%               | 97.9%               | 88.5%           | 89.2%          |
| MMLU-Pro     | 83.4%               | 90.0% (MMLU)        | 81.2%           | 85.2%          |

So how is an enterprise supposed to choose between all these?

Arcee Trinity-Large-Thinking is the premier choice for organizations building autonomous agents; its sparse 400B architecture excels at “thinking” through multi-step logic, complex math, and long-horizon tool use. By activating only a fraction of its parameters, it provides a high-speed reasoning engine for developers who need GPT-4o-level planning capabilities within a cost-effective, open-source framework.

Conversely, gpt-oss-120B serves as the optimal middle ground for enterprises that require high-reasoning performance but prioritize lower operational costs and deployment flexibility.

Because it activates only 5.1B parameters per forward pass, it is uniquely suited for technical workloads like competitive code generation and advanced mathematical modeling that must run on limited hardware, such as a single H100 GPU.

Its configurable reasoning effort—offering “Low,” “Medium,” and “High” modes—makes it the best fit for production environments where latency and accuracy must be balanced dynamically across different tasks.

For broader, high-throughput applications, Google Gemma 4 and IBM Granite 4.0 serve as the primary backbones. Gemma 4 offers the highest “intelligence density” for general knowledge and scientific accuracy, making it the most versatile option for R&D and high-speed chat interfaces.

Meanwhile, IBM Granite 4.0 is engineered for the “all-day” enterprise workload, utilizing a hybrid architecture that eliminates context bottlenecks for massive document processing. For businesses concerned with legal compliance and hardware efficiency, Granite remains the most reliable foundation for large-scale RAG and document analysis.

Ownership as a feature for regulated industries

In this climate, Arcee’s choice of the Apache 2.0 license is a deliberate act of differentiation. Unlike the restrictive community licenses used by some competitors, Apache 2.0 allows enterprises to truly own their intelligence stack without the “black box” biases of a general-purpose chat model.

“Developers and Enterprises need models they can inspect, post-train, host, distill, and own,” Lucas Atkins noted in the launch announcement.

This ownership is critical for the “bitter lesson” of training small models: you usually need to train a massive frontier model first to generate the high-quality synthetic data and logits required to build efficient student models.

Furthermore, Arcee has released Trinity-Large-TrueBase, a raw 10-trillion-token checkpoint. TrueBase offers a rare, “unspoiled” look at foundational intelligence before instruction tuning and reinforcement learning are applied. For researchers in highly regulated industries like finance and defense, TrueBase allows for authentic audits and custom alignments starting from a clean slate.

Community verdict and the future of distillation

The response from the developer community has been largely positive, reflecting the desire for more open-weight, U.S.-made models.

On X, researchers highlighted the disruption, noting that the “insanely cheap” prices for a model of this size would be a boon for the agentic community.

On open AI model inference website OpenRouter, Trinity-Large-Preview established itself as the #1 most used open model in the U.S., serving over 80.6 billion tokens on peak days like March 1, 2026.

The proximity of Trinity-Large-Thinking to Claude Opus 4.6 on PinchBench—at 91.9 versus 93.3—is particularly striking when compared to the cost. At $0.90 per million output tokens, Trinity is approximately 96% cheaper than Opus 4.6, which costs $25 per million output tokens.
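
The percentage is easy to verify from the quoted prices (both figures are the article's, per million output tokens):

```python
# Quoted prices per million output tokens, from the comparison above.
trinity, opus = 0.90, 25.00
savings = 1 - trinity / opus
print(f"Trinity is {savings:.1%} cheaper per output token")  # 96.4% cheaper
```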

Arcee’s strategy is now focused on bringing these pretraining and post-training lessons back down the stack. Much of the work that went into Trinity Large will now flow into the Mini and Nano models, refreshing the company’s compact line with the distillation of frontier-level reasoning.

As global labs pivot toward proprietary lock-in, Arcee has positioned Trinity as a sovereign infrastructure layer that developers can finally control and adapt for long-horizon agentic workflows.

AI is doing the dirty work for insurance companies, and it’s getting worse

Insurance claims adjusters have never had a reputation for generosity. But at least they were human. That’s changing fast, and not in your favor. A report by Futurism details how AI automation is now a major trend in personal insurance, the health, home, and auto coverage most of us rely on. 

Is your doctor’s opinion even part of the process anymore?

It doesn’t seem that your doctor’s opinion carries that much weight now. A Palm Beach Post investigation found that Iris Smith, an 80-year-old suffering from arthritis, may be a victim of AI-fueled preauthorization denials.

In another case, UnitedHealth is currently facing a class-action lawsuit alleging that AI-denied Medicare nursing care contributed to patient deaths. Meanwhile, a National Association of Insurance Commissioners survey found 84% of health insurers are using AI, with 68% deploying it for prior authorization approvals.

Most people give up and don’t even appeal these rejections because the process is too confusing or exhausting, which, if you’re an insurance company, is the outcome you want.

The worst part is that we know AI isn’t always accurate and has a tendency to hallucinate. It’s one thing if it makes a mistake while writing a report, but it’s a completely different ball game when it ends up denying medical aid to someone who truly needs it.

Is there anyone protecting your interests?

Florida Representative Lois Frankel isn’t having any of it. She told the Palm Beach Post she plans to fight any expansion into other states. “We believe Medicare was based on a promise that if your doctor says you need care, if you’re hurt and you need care, Medicare will be there for you, not AI.”

But if the past is any indication, her fight alone won’t be enough. Florida lawmakers tried to pass a bill in 2025 requiring human review for AI-generated denials. It passed the House, died in the Senate, and a Trump executive order discouraging state AI regulations didn’t help.

The silver lining, if you can call it that: nonprofits like Counterforce Health now offer free AI tools that analyze your denial letter and draft a customized appeal, making it easier to fight back. It’s AI versus AI at this point, and the world is growing gloomier by the day.

The AI Doc’s Falsehoods And False Balance

from the hype-without-substance dept

There is a familiar media failure in which opposing viewpoints are presented as equally valid, even when the evidence overwhelmingly supports one side. It’s called Bothsidesism. This false balance phenomenon legitimizes misinformation and undermines public understanding by giving disproportionate weight to baseless claims.

Why bring this up? Because the new AI Doc film is based on it.

The film wants credit for being “balanced” because it assembles a wide range of experts. But putting Prof. Fei-Fei Li, a pioneering computer scientist, next to someone like Eliezer Yudkowsky, an author of a Harry Potter fanfic, is not “balance.”

Once you understand that false equivalence is baked into the film’s storytelling, you understand how misleading and manipulative the documentary is. And it is compounded by a series of falsehoods that go unchallenged and uncorrected.

This review addresses both failures. 

The “AI Doc” Movie

“The AI Doc: Or How I Became an Apocaloptimist,” co-directed by Daniel Roher and Charlie Tyrell, sets out to explore AI, especially its potential for good and bad, with a strong emphasis on the filmmakers’ anxieties and fears. Its basic premise is: “A father-to-be tries to figure out what is happening with all this AI insanity.” As summarized by Andrew Maynard from Future of Being Human:

“The documentary progresses through the eyes of director Daniel Roher as he faces a tsunami of existential AI angst while grappling with the responsibility of becoming a father. Motivated by a fear that artificial intelligence could spell the end of everything that matters, he sets out to interview some of the largest (and loudest) voices in AI to fathom out whether this is the best of times or worst of times for him and his wife (filmmaker Caroline Lindy) to bring a kid into the world.”

The “loudest voices” include many AI doomer figures, such as Eliezer Yudkowsky, Dan Hendrycks, Daniel Kokotajlo, Connor Leahy, Jeffrey Ladish, and two of the most populist voices on emerging tech (first social media and now AI): Tristan Harris and Yuval Noah Harari. The film also features voices on AI ethics, including David Evan Harris, Emily M. Bender, Timnit Gebru, Deborah Raji, and Karen Hao. On the more boosterish side, there are Peter Diamandis and Guillaume Verdon (AKA Beff Jezos). Three leading AI CEOs were also interviewed: OpenAI’s Sam Altman, DeepMind’s Demis Hassabis, and Anthropic’s Amodei siblings, Dario and Daniela. (Meta’s Mark Zuckerberg declined, and xAI’s Elon Musk agreed but never showed up.)

The movie started playing in theaters on March 27, but there are already plenty of reviews (dating back to the Sundance Film Festival). The praise is fairly consistent: It is timely, wide-ranging, visually energetic, and unusually well-connected, with access to major AI figures.

The most common criticism is that it is too deferential to interviewees and too thin on hard interrogation or concrete answers. As several reviewers put it:

  1. “Roher’s willingness to blindly accept any and all of his speakers’ pronouncements leaves The AI Doc feeling toothless.”
  2. “By giving its doomer and accelerationist voices so much time to present AI’s most hyperbolic potential outcomes with little pushback, the documentary’s first half plays more like an overlong advertisement for the technology as opposed to a piece of measured analysis.”
  3. “Roher acts as a fantastic storyteller, but he treats his subjects too gently. The film desperately needs more pushback during the interviews.”

Tristan Harris, co-founder of the Center for Humane Technology, told the AP: “My hope is that this film is kind of like ‘An Inconvenient Truth’ or ‘The Social Dilemma’ for AI.”

That is not reassuring. It is more like a glaring warning sign. Harris’s “Social Dilemma” and “AI Dilemma” movies were full of misinformation and nonsensical hyperbole, and both were designed to be manipulative and dishonest. If anything, his endorsement tells you exactly what kind of movie this is.

After watching the AI Doc, I realized what the doomers had managed to accomplish here: The film absorbs the panic rather than investigates it.

The False Balance of The AI Doc

The AI Doc starts with what one reviewer called a “Doom Parade.” It aims to set the tone.

“The worst AI predictions are presented first,” another reviewer noted. “Eliezer Yudkowsky, co-founder of the Machine Intelligence Research Institute, calmly talks of the ‘abrupt extermination’ of humanity.”

And it is worth remembering who Yudkowsky is and what he has actually advocated. In his notorious TIME op-ed, “Shut it All Down,” he argued that governments should “be willing to destroy a rogue datacenter by airstrike.” In his book “If Anyone Builds It, Everyone Dies,” which many reviewers found unconvincing and “unnecessarily dramatic sci-fi,” he (and his co-author Nate Soares) proposed that governments must bomb labs suspected of developing AI. Based on what exactly? On the authors’ overconfident, binary worldview and speculative scenarios, which they mistake for inevitability.

One review of that book observed, “The plan with If Anyone Builds It seems to be to sane-wash him [Yudkowsky] for the airport books crowd, sanding off his wild opinions.”

That is more or less what the new documentary does, too. The AI Doc sane-washes the loudest doomers for mainstream viewers, sanding off their wild opinions.

In his newsletter, David William Silva addresses the documentary’s “series of doomers,” who “describe AI-driven extinction with the calm confidence of people who have said these things so many times they have stopped noticing they have no evidence for them.”

“Roher’s reaction is full terror,” Silva adds. “I hope it is unequivocally evident that this is not journalism.”

That gets to the heart of it. The film pretends to weigh competing perspectives, but in practice, it grants disproportionate authority to people most invested in flooding the zone with AI panic. And there is a well-oiled machine behind this kind of AI panic. As Silva writes:

“The people behind the AI anxiety machine. […] They know that predicting human extinction by software is an extraordinary claim requiring extraordinary evidence. They know they don’t have it. They know ‘my kids won’t live to see middle age’ is nothing but performance. […] And they do it anyway. Why do you think that is? The calculation is simple. Some people will see through it, and they will be annoyed, write rebuttals, call it what it is. Ok, fine. Just an acceptable loss. The believers, on the other hand, are a market. As long as the ratio stays favorable, the machine is profitable.”

One of the biggest beneficiaries of this film is Harris.[1] He is framed as if he is in the middle between the two main camps (doomers and accelerationists), and his narrative gradually becomes the film’s narrative (similar to the Social Dilemma). His call to action even serves as the ending (with a QR code directing viewers to a designated website).

The problem is that this framing has very little to do with reality. Harris’s Center for Humane Technology got $500,000 from the Future of Life Institute for “AI-related policy work and messaging cohesion within the AI X-risk [existential risk] community.” That is not a neutral player.

There’s a touching scene in the film where Roher mentions his father’s cancer treatment and expresses hope that AI might help. Harris appears visibly emotional. But in other contexts, Harris has argued against looking at AI for help with cancer treatment… in the belief that it would lead to extinction. Here he is on Glenn Beck’s show in 2023:

“My mother died from cancer several years ago. And if you told me that we could have AI that was going to cure her of cancer, but on the other side of that coin was that all the world would go extinct a year later, because of the, the only way to develop that was to bring something, some Demon into the world that would we would not be able to control, as much as I love my mother, and I would want her to be here with me right now, I wouldn’t take that trade.”

That sort of hyperbole seems relevant to Harris’ stance on such things, but was not mentioned in the film at all.

Connor Leahy of Conjecture and ControlAI gets a similar makeover. In the documentary, he appears as another pessimistic expert. Elsewhere, he said he does not expect humanity “to make it out of this century alive; I’m not even sure we’ll get out of this decade!” His “Narrow Path” proposal for policymakers begins with the claim that “AI poses extinction risks to human existence.” Instead of calling for a six-month AI pause, he argued for a 20-year pause, because “two decades provide the minimum time frame to construct our defenses.”

This is exactly why background checks matter. Viewers of the AI Doc deserve to know the full scope of the more extreme positions these interviewees have publicly taken elsewhere. If someone has publicly argued for destroying data centers by airstrikes or stopping AI for 20 years, the audience should know that.

Debunking the Falsehoods

The film goes well beyond pushing panic. It also recycles several misleading or plainly false claims, letting them pass as established fact. Three stood out in particular.

Anthropic’s Blackmail study

One of the most repeated “facts” in reviews of the movie is that Anthropic’s AI model, Claude, decided, unprompted, to blackmail a fictional employee. In the film, Daniel Roher asks, “And nobody taught it to do that?” Jeffrey Ladish, of Palisade Research and Tristan’s Center for Humane Technology, replies: “No, it learned to do that on its own.”

That is a misleading characterization of the actual experiment, and it has already been debunked in “AI Blackmail: Fact-Checking a Misleading Narrative.” Anthropic researchers admitted that they strongly pressured the model and iterated through hundreds of prompts before producing that outcome. It wasn’t a spontaneous emergence of “evil” behavior; the researchers explicitly engineered the setup so that blackmail would be the default. Telling viewers that the model has gone full “HAL 9000” omits the facts about the heavily engineered experimental setup.

Although this is a classic case of big claims and thin evidence, the film offers so little pushback that viewers are left to take Ladish’s statements at face value.

It is also worth remembering that Ladish has fought against open-source AI, pushed for a crackdown on open-source models, and once said, “We can prevent the release of a LLaMA 2! We need government action on this asap.” He later updated his position (and it’s good to revise such views). But does the film mention his earlier public hysteria? No.

Is AI less regulated than sandwich shops? No.

Connor Leahy tells Daniel Roher, “There is currently more regulation on selling a sandwich to the public” than there is on AI development. This talking point has become a favorite slogan in AI doomer circles. It was repeatedly stated by The Future of Life Institute’s Max Tegmark and, more recently, by Senator Bernie Sanders. It’s catchy. It’s also false.

State attorneys general from both parties have explicitly argued that existing laws already apply to AI. Lina Khan, writing on behalf of the Federal Trade Commission, stated that “AI is covered by existing laws. Each agency here today has legal authorities to readily combat AI-driven harm.” The existing AI regulatory stack already includes antitrust & competition regulation, civil rights & anti-discrimination law, consumer protection, data privacy & security, employment & labor law, financial regulation, insurance & accident compensation, property & contract law, among others.

So no, AI is not less regulated than sandwich shops. It’s a misleading soundbite, not a serious description of legal reality.  

Data center water usage

In the film, Karen Hao criticizes data centers, warning that “People are literally at risk, potentially of running out of drinking water.” That sounds alarming, which is presumably the point. But it is highly misleading.

In fact, Karen Hao had to issue corrections to her “Empire of AI” book because a key water-use figure was off by a factor of 4,500. The discrepancy was not 45x or 450x, but rather 4,500x. That is not a rounding error. For detailed rebuttals, see Andy Masley’s “The AI water issue is fake” and “Empire of AI is widely misleading about AI water use.”

There is also a basic proportionality issue here. As demonstrated by The Washington Post, “The water used by data centers caused a stir in Arizona’s drought-prone Maricopa County. But while they used about 905 million gallons there last year, that’s a small fraction of the 29 billion gallons devoted to the country’s golf courses.” To put that plainly: data centers accounted for just 0.1% of the county’s water use.
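The proportionality point is simple arithmetic. A quick back-of-envelope check using the two figures quoted above (note that the golf-course number is a national figure while the data-center number is for one county, so the comparison is deliberately generous to the critics):

```python
# Figures from the Washington Post comparison quoted above
data_center_gal = 905e6   # Maricopa County data centers, last year (gallons)
golf_course_gal = 29e9    # US golf courses, nationally (gallons)

share_of_golf = data_center_gal / golf_course_gal
print(f"Data centers used {share_of_golf:.1%} of what US golf courses did")
# → Data centers used 3.1% of what US golf courses did
```

Even against a single county's data centers versus a national golf total, the ratio makes the scale of the claim clear.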

It is also worth noting that “most of the water used by data centers returns to its source unchanged.” In closed-loop cooling systems, for example, water is recirculated multiple times, which significantly reduces net consumption. 

None of this is hidden information. A basic fact-check by the filmmakers could have brought it to light. But that was not the film’s goal. They chose fear-based framing over actual reporting. They could have pressed interviewees on their track records, failed predictions, and political agendas. Instead, they let them narrate the stakes, unchallenged.

So, I think we can conclude: the AI Doc may want to appear balanced and thoughtful, but too often it is not.

Final Remark

While Western filmmakers are busy platforming advocates for “bombing data centers” and “Stop AI for 20 years,” the Chinese Communist Party is building the actual infrastructure. The CCP is not making doom-and-gloom documentaries; it is racing ahead. This is a real strategic threat, and it is far more concerning than anything featured in this film.

—————————

Dr. Nirit Weiss-Blatt, Ph.D. (@DrTechlash), is a communication researcher and author of “The TECHLASH and Tech Crisis Communication” book and the “AI Panic” newsletter.

Filed Under: ai, ai doomerism, daniel roher, eliezer yudkowsky, the ai doc, tristan harris


Tech

DC In The Data Center For A More Efficient Future


If you own a computer that’s not mobile, it’s almost certain that it will receive its power in some form from a mains wall outlet. Whether it’s 230 V at 50 Hz or 120 V at 60 Hz, where once there might have been a transformer and a rectifier there’s now a switch-mode power supply that delivers low voltage DC to your machine. It’s a system that’s efficient and works well on the desktop, but in the data center even its efficiency is starting to be insufficient. IEEE Spectrum has a look at newer data centers that are moving towards DC power distribution, raising some interesting points which bear a closer look.

A traditional data center has many computers which, in power terms, aren’t much different from your machine at home. They get their mains power at distribution voltage — probably 33 kV AC where this is being written — bring it down to a more normal mains voltage with a transformer just like the one on your street, and then feed a battery-backed uninterruptible power supply (UPS) that converts from AC to DC, and then back again to AC. The AC then snakes around the data center from rack to rack, and inside each computer another rectifier and switch-mode power supply make the low-voltage DC the computer uses.

The increasing demands of data centers full of GPUs for AI processing have raised power consumption to the extent that all these conversion steps now waste a significant amount of power. The new idea is to convert once to DC (at a rather scary 800 volts) and distribute it directly to the cabinet, where a more efficient switch-mode converter produces the voltages each computer needs.
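The cost of all those stacked conversions is just the product of the per-stage efficiencies. A minimal sketch of the comparison — the per-stage numbers below are assumed for illustration only, not figures from the article:

```python
import math

def chain_efficiency(stages):
    """Cumulative efficiency of a power-conversion chain:
    the product of each stage's efficiency."""
    return math.prod(stages)

# Assumed, illustrative per-stage efficiencies.
# Traditional chain: transformer -> UPS rectifier (AC->DC)
#   -> UPS inverter (DC->AC) -> server PSU (rectifier + switch-mode converter)
ac_chain = [0.99, 0.96, 0.96, 0.94]
# 800 V DC chain: transformer -> one facility-level AC->DC conversion
#   -> one switch-mode step-down converter in the cabinet
dc_chain = [0.99, 0.97, 0.97]

print(f"AC distribution: {chain_efficiency(ac_chain):.1%}")  # ~85.8% with these numbers
print(f"DC distribution: {chain_efficiency(dc_chain):.1%}")  # ~93.1% with these numbers
```

Even with modest assumptions, eliminating a conversion stage compounds: a few percent saved per stage becomes a meaningful fraction of a multi-megawatt facility's draw.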

It’s an attractive idea, and not just for the data center. We’ve mused on similar ideas in the past and even celebrated a solution at the local level. But given the potential ecological impact of these data centers, it’s a little hard to get excited about the idea in this context. The fourth of our rules for the responsible use of a new technology comes into play. Fortunately, we think that both an inevitable cooling of the current AI hype and a Moore’s Law-driven move towards locally-run LLMs may go some way towards solving that problem on its own.


header image: Christopher Bowns, CC BY-SA 2.0.


Tech

A Major Publisher Just Canceled This Book Over AI Writing Concerns


Last June, Mia Ballard’s self-published novel Shy Girl took the internet by storm. After winning the hearts of readers and publisher Hachette alike, it was set for a major US debut in the coming months. 

Now, the novel may never become available through any official channel again. Hachette has officially pulled the plug on the novel’s US release following a wave of allegations that generative AI played a role in the manuscript’s creation. 

Originally self-published in February 2025, the horror novel was traditionally released by Hachette’s science fiction and fantasy label Orbit in the UK in November. After The New York Times provided evidence of AI usage in Shy Girl, Hachette canceled the planned spring US release and removed the book from its website completely.

“Hachette remains committed to protecting original creative expression and storytelling,” the publisher said in a statement to the Times. 

Authors are required to disclose to Hachette whether AI was used in the creation of their work. Ballard has denied using AI tools to write the book, claiming an editor was responsible for the portions that appear to be AI-generated.

“My name is ruined for something I didn’t even personally do,” Ballard wrote in an email to the New York Times.

The book cover for Shy Girl by Mia Ballard.

Hachette UK

The cancellation of Shy Girl by Hachette marks the first time a major publisher has publicly pulled an existing title due to suspicions of AI-generated prose.

For the past few months, readers online have raised concerns about the book’s apparent use of AI.

A video from YouTuber frankie’s shelf provides a lengthy analysis of the novel, pointing out linguistic patterns that are characteristic of AI writing. The video also lists words in Shy Girl that are repeated with unusual frequency (“edge” is used 84 times and “sharp” 159 times), often in ways that are abstract and nonsensical.

In January, Max Spero, founder and chief executive of Pangram, ran the text of Shy Girl through his AI detection program. He claimed that the novel was 78% AI-generated.

The rise of AI has caught the publishing industry off guard. Though AI writing has already appeared in many self-published books, traditional publishers like Hachette are more critical of the technology.

Representatives for Hachette didn’t immediately respond to a request for comment.


Tech

‘Wood is wood’: WSU research finds Yankees’ viral ‘torpedo’ bats perform the same as traditional bats


A research team determined that the torpedo bat, left, and traditional bat perform equally well in hitting power with only a slight difference in the location of the bat’s sweet spot (WSU Photo / Voiland College of Engineering and Architecture)

The New York Yankees just cruised through Seattle and won two out of three games against the Mariners. On the other side of Washington state, the Bronx Bombers’ “torpedo bats” were being scientifically scrutinized.

In what Washington State University is calling the first-ever laboratory experiments on the new baseball bat design, researchers found that torpedo bats and traditional bats basically perform the same.

It didn’t look that way last season, when the Yankees hit a franchise-record nine home runs in a game against the Milwaukee Brewers and drew viral attention to the bats that they were swinging.

The torpedo bat design relies on a slightly different shape in which wood is removed from the barrel tip and added to the bat’s sweet spot, so that the diameter tapers down, a little like a bowling pin. But the hype appears overblown.

“Wood is wood,” Lloyd Smith, a professor in WSU’s School of Mechanical and Materials Engineering and director of the university’s Sports Science Laboratory, told WSU Insider. “When it comes to baseball, there’s not a lot you can do with wood. If your goal is to keep the game steady and consistent and not have a lot of change, wood bats are good.”

Smith is part of a research team that includes Alan Nathan from University of Illinois and Daniel Russell from Penn State University. They’ll present their findings at the upcoming International Sports Engineering Association conference, June 1–4 in Pullman, Wash.

According to WSU Insider, the researchers created two maple bats that were duplicates of a standard Major League Baseball bat. Two additional maple bats were made with a torpedo-shaped barrel that gave them the same swing weight as the standard bat.

They measured how much energy the bat returns to the ball by firing baseballs from an air cannon at a stationary bat and using light gates and cameras to measure the speed of the incoming and rebounding ball.
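In outline, that measurement reduces to a simple ratio of rebound speed to incoming speed at each impact location, with the "sweet spot" being wherever that ratio peaks. A minimal sketch with entirely hypothetical readings (the article does not publish the WSU team's raw data):

```python
def rebound_ratio(v_in, v_out):
    """Fraction of the incoming ball speed returned by a stationary bat —
    a simple proxy for hitting power at one impact location."""
    return abs(v_out) / abs(v_in)

# Hypothetical light-gate readings (mph), keyed by impact distance
# from the barrel tip in inches: (incoming speed, rebound speed).
standard = {5.0: (100, 48), 5.5: (100, 50), 6.0: (100, 49)}
torpedo  = {5.5: (100, 48), 6.0: (100, 50), 6.5: (100, 49)}

def sweet_spot(readings):
    """Impact location with the highest rebound ratio."""
    return max(readings, key=lambda d: rebound_ratio(*readings[d]))

shift = sweet_spot(torpedo) - sweet_spot(standard)
print(f"Sweet spot sits {shift:.1f} in farther from the tip on the torpedo bat")
# → Sweet spot sits 0.5 in farther from the tip on the torpedo bat
```

With identical peak ratios and only the location shifted, this toy data reproduces the study's qualitative finding: same power, slightly relocated sweet spot.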

The team found nearly identical performance for the torpedo and standard bats except that the sweet spot for the torpedo bat was a half inch farther from the bat tip than the standard bat.

“It was actually pretty phenomenal how close they were,” said Smith.

While some Yankees players said last year that any little tweak could provide an advantage, the team’s captain wasn’t convinced.

Aaron Judge hit an American League-record 62 homers in 2022, 58 in an MVP season in 2024 and 53 as repeat MVP in 2025. He had three homers using a traditional bat in that much-talked-about rout of the Brewers.

“The past couple of seasons kind of speak for itself,” Judge told ESPN last May. “Why try to change something?”


Tech

Xteink’s X3 E-Reader Snaps Onto Your iPhone and Ready for Any Spare Moment


Xteink X3 E-Reader
Slapping the Xteink X3 onto an iPhone takes only a few seconds, thanks in part to built-in magnets that align exactly with MagSafe and let it snap into place. You get a thin black or white slab that sits flush against the phone’s back without adding noticeable bulk. Anyone who reaches for their phone dozens of times a day will appreciate having a book right at their fingertips, all in the same motion.



At only 58 grams, this device is easy to forget about until you need it, and then, as if by magic, it appears. Its overall size is a modest 100mm long and 60mm wide, so it goes unnoticed in a pocket until reading time beckons. Commuters and individuals waiting in lines can just pull out their phones and start reading a chapter without having to dig through their bags for another device.




The 3.7-inch E Ink screen displays clear text at over 250 pixels per inch. Font size can be changed with a few simple adjustments, so even the smallest pages remain comfortable to read. In adequate lighting the characters simply pop, and unlike phone screens there is no eye strain to contend with. Real buttons on the sides and bottom handle page turns and menus, so one-handed operation feels perfectly natural, whether you’re on a train or reading in bed. A built-in gyroscope detects even a slight shake and flips the page forward, letting you keep a solid grip during those quick reading sessions.

Xteink X3 E-Reader
Navigation is straightforward, with a grid of icons instead of swipes or touches. Choose a book or change the settings with a few presses; it remains dependable even when your fingers are clumsy. The approach minimizes distractions and lets you concentrate on the words themselves. Books can be loaded via the 16GB microSD card included in the box or through a companion app on your phone. Transferring EPUB files is quick over Wi-Fi or by inserting the card into your computer, and storage can be expanded to 512GB, enough to carry thousands of titles without running out of space.

Xteink X3 E-Reader
The battery will last 10 to 14 days on a single charge, even with an hour or two of reading per day, and charging is simple: the dedicated cable’s magnetic pogo pins clip right into place. There is one small flaw: no built-in front light (yet), though a separate clip-on version is available for $9.99 if you plan on reading late into the evening. For connectivity there are Bluetooth and NFC, plus Wi-Fi for the occasional update or transfer. The X3 is available now on the official Xteink website for $79.

Tech

'The Bonfire of the Vanities' series headed to Apple TV


Maybe the third time is the charm. Writer/producer David E. Kelley is adapting Tom Wolfe’s “The Bonfire of the Vanities” novel into a series for Apple TV, with “The Batman” director Matt Reeves.

Apple logo followed by lowercase letters t and v, all glowing with soft pastel gradient colors on a solid black background
Apple TV is dramatizing “The Bonfire of the Vanities” — image credit: Apple

David E. Kelley is still best known for “The Practice” and “Ally McBeal” shows, but he’s also the writer of Apple TV’s “Presumed Innocent” and “Margo’s Got Money Troubles.” Now according to Deadline, he’s dramatizing Tom Wolfe’s famous 1987 novel of greed and Wall Street money.
Not to spoil the story, but as excellent as it is, Wolfe’s novel feels as if it fades out rather than building to a big finish, which has made it difficult to adapt successfully. It was filmed in 1990, with Tom Hanks starring and Brian De Palma directing from a screenplay by Michael Cristofer, but that was a flop.
Continue Reading on AppleInsider | Discuss on our Forums


Tech

The leadership dilemma: Governing the “Agentic AI” workforce


Artificial intelligence is no longer a back office enabler or a set of isolated automation software tools. It is becoming a core component of how organizations operate, compete, and deliver value.

As businesses accelerate their adoption of increasingly autonomous systems, often referred to as agentic AI, a significant leadership dilemma is emerging. The workforce is no longer exclusively human.


Tech

How CIOs can create a strong foundation for an AI-enabled workplace


As with any new tech, there’s a scale of AI adoption among businesses, leaving some ahead of the curve and others much further behind as they continue to resist and delay.

But what’s clear is that adoption is happening with or without a formal strategy: nearly two-thirds (65%) of employees now say they intentionally use AI for work.
