
Tech

These Excellent Computer Speakers Are $100 Off


Looking for a great set of computer speakers that still sound awesome? You can grab a pair of the IK Multimedia iLoud Micro Monitor desktop speakers in either black or white from Amazon for just $250, a $100 discount from their usual price. They’re a perfect pick for aspiring music producers, audiophiles, or anyone who just wants a premium audio experience while scrolling YouTube videos.


One thing that both our computer speaker reviewer and audio expert agree on is that these speakers sound awesome, particularly for their size and power. Because they’re intended to work as studio monitors, they have a nice flat midrange that makes it easier to mix or balance, bold bass response, and detailed high-end for getting those little details dialed in. They sound great for just sitting back and listening to music or watching a movie, too, but anyone working on production audio or video might appreciate them more than your average PC gamer.

While they sport both RCA and 3.5 mm TRS inputs for your desktop connections, they also have Bluetooth in case you’d like to hook up your phone for some jams, or even a turntable with its own phono preamp. The included stands angle them up right at your face, and they’re tuned for near-field listening, so they’re perfectly set up for sitting at your computer and getting audio work done. If you’ve got a more dedicated setup, there are 3/8-inch-16 threaded inserts on the bottom of each, so you can mount them on dedicated mic stands for even better positioning.


While the audio capabilities impressed us universally, there are some usability concerns for everyday users that might give you some pause. The onboard controls are all stashed around the back of the left speaker, so you may need to reach a bit depending on where you mount them. These do include a set of three equalizer switches, in case you need to really dial in the audio levels, which is a nice touch for desktop speakers that don’t usually include any kind of EQ.

If you can manage those concerns, we think you’ll be really impressed by the IK Multimedia iLoud Micro Monitors, and the $100 discount makes the deal even sweeter. Amazon has both colors, black and white, on sale for just $250, or you can check out our guides to computer speakers and standalone bookshelf speakers for other options.


Tech

Fan fiction website AO3 is finally coming out of beta


The famous fan fiction website Archive of Our Own, or AO3, has finally exited open beta, 17 years after it launched back in 2009. AO3 is a nonprofit project created by the Organization for Transformative Works. In an announcement, the team reminisced about its early days, when volunteers had to manually send out invitations to prospective writers. When the website launched in open beta, it had only 347 accounts and hosted 6,598 works. Now, it has 10 million registered users and hosts 17 million fan-created works.

The team highlighted some of the most useful features it has added over the past 17 years, including its tagging system. It also mentioned a feature it calls “Orphaning,” which allows authors to leave their works online even after deleting their accounts. In addition, it added the ability to download fanworks in AZW3, EPUB, MOBI, PDF, or HTML format for offline access.

Even though the website has only just exited open beta, it has been stable for a long time. Users will not see huge changes, but the team also promised that it will not stop improving the fan fiction portal. It says its contributors and volunteers will continue tweaking the website, and it also continues to welcome anybody who has coding knowledge to contribute their time.


Tech

Let Us Text You the Very Best Deals Directly to Your Phone


Even though the Amazon Big Spring Sale has come to a close, that’s no reason to pay full price for the devices and gadgets you want. There are still some exceptional savings to be had with deals popping up left and right. Whether you’re picking up a new laptop or shopping for kitchen essentials, retailers regularly run sales that slash the amount you have to pay. But we know that it can be a real chore finding the best prices out there, and finding the time to trawl the web for the right deal isn’t always in the cards.

Well, luckily for you, it’s our job to search through all the sales. CNET’s shopping experts focus on finding deals that genuinely save you money. We know how to avoid the ones padded by inflated list prices or clever wording. If the discount isn’t real or the product isn’t worth owning, it doesn’t make the cut.

The team and I continually track and handpick the best offers from your favorite retailers, including Amazon and Walmart, as well as others, for our CNET Deals text subscribers. I’ll text the best sales at no cost straight to your phone, so you can keep an eye on the hottest drops and jump on them before everyone else does. It is never a bad time to save money, but with recent holiday expenses behind us, finding affordable items in early 2026 is more welcome than ever.


Why go through the effort of sifting through sales on your own when we can do it for you? Signing up for the CNET Deals text group (just scroll down) takes less than a minute. It is safe and trusted, plus you can opt out anytime. And the best part: It is free. The service costs nothing, and you’ll save money on products you love.

More about our deals text curation

This is the good stuff, not just “discounts” on items that were artificially inflated last week. We vet every deal to ensure the price is accurate and that the product is in stock when we send the text.

We send out a major Deal of the Day most days. During big shopping events, we send two texts a day on standout sales. If we find multiple deals at an ultra-low price, we hook you up in a single text.

My team and I apply the same care that we do across all of CNET, just in a bite‑size format. With daily deals texting, you will receive the same level of deep research and the same confirmation that these discounts are legitimate. And there is no AI pulling the strings. Real people are behind these texts and the research that has gone into finding the best deals.


We are a passionate and dedicated group of bargain hunters. So when we uncover something interesting for an affordable price, usually less than $50 with a significant discount, you will hear about it. If we find a cool thing on sale, we share that discovery. It is as simple as that. Join us today.


Tech

How Apple keeps redefining personal computing at 50


For a 50-year-old company, Apple remains pretty hip and nimble. This week, Devindra and Senior Reporter Igor Bonifacic dive into Apple’s big birthday, the state of the company today and what the next 50 years could bring. It remains one of the few PC companies that’s still firmly committed to the idea of personal computing. Also, we celebrate the successful launch of NASA’s Artemis II mission, which will bring us back to the Moon (but just for a close look).

Subscribe!

Topics

  • Apple at 50: Why it’s still all about personal computing – 1:16

  • Artemis II is safely on its way to the moon, but they’re having problems with Outlook – 37:48

  • SpaceX files for the largest IPO ever: what’s driving its hopes for a $1.75 trillion valuation? – 40:52

  • Another Starlink satellite broke up in orbit, the second in 6 months – 47:21

  • Anthropic accidentally leaked source code for Claude Code – 52:17

  • FCC issues ban on all foreign-made WiFi routers – 57:18

  • Around Engadget – 1:02:09

  • Pop culture picks – 1:08:20

Credits

Hosts: Devindra Hardawar and Igor Bonifacic
Producer: Ben Ellman
Music: Dale North and Terrence O’Brien


Tech

Arcee’s new, open source Trinity-Large-Thinking is the rare, powerful U.S.-made AI model that enterprises can download and customize


The baton of open source AI models has been passed among several companies in the years since ChatGPT debuted in late 2022, from Meta with its Llama family to Chinese labs like Qwen and z.ai. But lately, Chinese companies have started pivoting back toward proprietary models, even as some U.S. labs like Cursor and Nvidia release their own variants of the Chinese models, leaving a question mark over who will carry this branch of the technology forward.

One answer: Arcee, a San Francisco-based lab, which this week released Trinity-Large-Thinking, a 399-billion-parameter, text-only reasoning model under the uncompromisingly open Apache 2.0 license, allowing full customizability and commercial usage by anyone from indie developers to large enterprises.

The release represents more than just a new set of weights on AI code sharing community Hugging Face; it is a strategic bet that “American Open Weights” can provide a sovereign alternative to the increasingly closed or restricted frontier models of 2025.

This move arrives precisely as enterprises express growing discomfort with relying on Chinese-based architectures for critical infrastructure, creating a demand for a domestic champion that Arcee intends to fill.


As Clément Delangue, co-founder and CEO of Hugging Face, told VentureBeat in a direct message on X: “The strength of the US has always been its startups so maybe they’re the ones we should count on to lead in open-source AI. Arcee shows that it’s possible!”

Genesis of a 30-person frontier lab

To understand the weight of the Trinity release, one must understand the lab that built it. Based in San Francisco, Arcee AI is a lean team of only 30 people.

While competitors like OpenAI and Google operate with thousands of engineers and multibillion-dollar compute budgets, Arcee has defined itself through what CTO Lucas Atkins calls “engineering through constraint”.

The company first made waves in 2024 after securing a $24 million Series A led by Emergence Capital, bringing its total capital to just under $50 million. In early 2026, the team took a massive risk: they committed $20 million—nearly half their total funding—to a single 33-day training run for Trinity Large.


Utilizing a cluster of 2048 NVIDIA B300 Blackwell GPUs, which provided twice the speed of the previous Hopper generation, Arcee bet the company’s future on the belief that developers needed a frontier model they could truly own.

This bet-the-company move was a masterclass in capital efficiency, proving that a small, focused team could stand up a full pipeline and stabilize training without endless reserves.

Engineering through extreme architectural constraint

Trinity-Large-Thinking is noteworthy for the extreme sparsity of its architecture. While the model houses 400 billion total parameters, its Mixture-of-Experts design means that only about 3 percent of them, some 13 billion parameters, are active for any given token.

This allows the model to possess the deep knowledge of a massive system while maintaining the inference speed and operational efficiency of a much smaller one—performing roughly 2 to 3 times faster than its peers on the same hardware. Training such a sparse model presented significant stability challenges.
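The mechanics behind that efficiency can be seen in a minimal top-k Mixture-of-Experts router. The sketch below is illustrative only (tiny toy dimensions, not Arcee's actual implementation): each token's hidden state is scored against every expert, but only the k best-scoring experts actually run, so only a small fraction of the total parameters are touched per token.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes for illustration: 8 experts, 2 active per token, hidden size 16.
E, k, d = 8, 2, 16
experts = [rng.standard_normal((d, d)) for _ in range(E)]  # one weight matrix per expert
router = rng.standard_normal((d, E))                       # maps a token to per-expert scores

def moe_forward(x):
    scores = x @ router                   # score every expert for this token
    top = np.argsort(scores)[-k:]         # indices of the k best-scoring experts
    weights = np.exp(scores[top])
    weights /= weights.sum()              # softmax over just the chosen experts
    # Only the chosen experts' parameters participate in the computation.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

x = rng.standard_normal(d)
y = moe_forward(x)
active_fraction = k / E                   # here 2 of 8 experts run per token
print(y.shape, active_fraction)
```

Scaled up, the same routing idea is what lets a very large sparse model answer with the per-token cost of a much smaller dense one.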


To prevent a few experts from becoming “winners” while others remained untrained “dead weight,” Arcee developed SMEBU, or Soft-clamped Momentum Expert Bias Updates.

This mechanism balances routing across experts during training on a general web corpus, so that experts specialize evenly instead of a few dominating. The architecture also takes a hybrid approach, alternating local sliding-window and global attention layers in a 3:1 ratio to maintain performance in long-context scenarios.
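A 3:1 local-to-global alternation amounts to a simple layer schedule. The snippet below is a sketch of that pattern only; the function name and the placement of the global layers are assumptions for illustration, not Arcee's actual configuration.

```python
# Build a layer schedule with one global-attention layer per three
# local sliding-window layers (a 3:1 ratio), as described in the text.
def attention_schedule(n_layers, ratio=3):
    # Every (ratio + 1)th layer is global; the rest use a local window.
    return ["global" if (i + 1) % (ratio + 1) == 0 else "local"
            for i in range(n_layers)]

print(attention_schedule(8))
# → ['local', 'local', 'local', 'global', 'local', 'local', 'local', 'global']
```

The occasional global layers let distant tokens exchange information, while the cheaper local layers keep long-context inference affordable.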

The data curriculum and synthetic reasoning

Arcee’s partnership with fellow startup DatologyAI provided a curriculum of over 10 trillion curated tokens. However, the training corpus for the full-scale model was expanded to 20 trillion tokens, split evenly between curated web data and high-quality synthetic data.

Unlike typical imitation-based synthetic data where a smaller model simply learns to mimic a larger one, DatologyAI utilized techniques to synthetically rewrite raw web text—such as Wikipedia articles or blogs—to condense the information.


This process helped the model learn to reason over concepts and information rather than merely memorizing exact token strings.

To ensure regulatory compliance, tremendous effort was invested in excluding copyrighted books and materials with unclear licensing, attracting enterprise customers who are wary of intellectual property risks associated with mainstream LLMs.

This data-first approach allowed the model to scale cleanly while significantly improving performance on complex tasks like mathematics and multi-step agent tool use.

The pivot from yappy chatbots to reasoning agents

The defining feature of this official release is the transition from a standard “instruct” model to a “reasoning” model.


By implementing a “thinking” phase prior to generating a response—similar to the internal loops found in the earlier Trinity-Mini—Arcee has addressed the primary criticism of its January “Preview” release.

Early users of the Preview model had noted that it sometimes struggled with multi-step instructions in complex environments and could be “underwhelming” for agentic tasks.

The “Thinking” update effectively bridges this gap, enabling what Arcee calls “long-horizon agents” that can maintain coherence across multi-turn tool calls without getting “sloppy”.

This reasoning process enables better context coherence and cleaner instruction following under constraint. This has direct implications for Maestro Reasoning, a 32B-parameter derivative of Trinity already being used in audit-focused industries to provide transparent “thought-to-answer” traces.


The goal was to move beyond “yappy” or inefficient chatbots toward reliable, cheap, high-quality agents that stay stable across long-running loops.

Geopolitics and the case for American open weights

The significance of Arcee’s Apache 2.0 commitment is amplified by the retreat of its primary competitors from the open-weight frontier.

Throughout 2025, Chinese research labs like Alibaba’s Qwen and z.ai (aka Zhipu) set the pace for high-efficiency MoE architectures.

However, as we enter 2026, those labs have begun to shift toward proprietary enterprise platforms and specialized subscriptions, signaling a move away from pure community growth.


The fragmentation of these once-prolific teams, such as the departure of key technical leads from Alibaba’s Qwen lab, has left a void at the high end of the open-weight market. In the United States, the movement has faced its own crisis.

Meta’s Llama division notably retreated from the frontier landscape following the mixed reception of Llama 4 in April 2025, which faced reports of quality issues and benchmark manipulation.

For developers who relied on the Llama 3 era of dominance, the lack of a current 400B+ open model created an urgent need for an alternative that Arcee has risen to fill.

Benchmarks and how Arcee’s Trinity-Large-Thinking stacks up to other U.S. frontier open source AI model offerings

Trinity-Large-Thinking’s performance on agent-specific evaluations establishes it as a legitimate frontier contender. On PinchBench, a critical metric for evaluating model capability on autonomous agentic tasks, Trinity achieved a score of 91.9, placing it just behind the proprietary market leader, Claude Opus 4.6 (93.3).

Arcee Trinity-Large-Thinking benchmark comparison chart. Credit: Arcee

This competitiveness is mirrored in IFBench, where Trinity’s score of 52.3 sits in a near-dead heat with Opus 4.6’s 53.1, indicating that the reasoning-first “Thinking” update has successfully addressed the instruction-following hurdles that challenged the model’s earlier preview phase.

The model’s broader technical reasoning capabilities also place it at the high end of the current open-source market. It recorded a 96.3 on AIME25, matching the high-tier Kimi-K2.5 and outstripping other major competitors like GLM-5 (93.3) and MiniMax-M2.7 (80.0).

While high-end coding benchmarks like SWE-bench Verified still show a lead for top-tier closed-source models—with Trinity scoring 63.2 against Opus 4.6’s 75.6—the massive delta in cost-per-token positions Trinity as the more viable sovereign infrastructure layer for enterprises looking to deploy these capabilities at production scale.


When it comes to other U.S. open source frontier model offerings, OpenAI’s gpt-oss tops out at 120 billion parameters, while Google’s Gemma family (Gemma 4 was just released this week) and IBM’s Granite family are also worth a mention, despite lower benchmarks. Nvidia’s Nemotron family is notable too, but it consists of fine-tuned and post-trained Qwen variants.

Benchmark       Arcee Trinity-Large   gpt-oss-120B (High)   IBM Granite 4.0   Google Gemma 4

GPQA-D          76.3%                 80.1%                 74.8%             84.3%

Tau2-Airline    88.0%                 65.8%*                68.3%             76.9%

PinchBench      91.9%                 69.0% (IFBench)       89.1%             93.3%

AIME25          96.3%                 97.9%                 88.5%             89.2%

MMLU-Pro        83.4%                 90.0% (MMLU)          81.2%             85.2%

So how is an enterprise supposed to choose among all of these?


Arcee Trinity-Large-Thinking is the premier choice for organizations building autonomous agents; its sparse 400B architecture excels at “thinking” through multi-step logic, complex math, and long-horizon tool use. By activating only a fraction of its parameters, it provides a high-speed reasoning engine for developers who need GPT-4o-level planning capabilities within a cost-effective, open-source framework.

Conversely, gpt-oss-120B serves as the optimal middle ground for enterprises that require high-reasoning performance but prioritize lower operational costs and deployment flexibility.

Because it activates only 5.1B parameters per forward pass, it is uniquely suited for technical workloads like competitive code generation and advanced mathematical modeling that must run on limited hardware, such as a single H100 GPU.

Its configurable reasoning effort—offering “Low,” “Medium,” and “High” modes—makes it the best fit for production environments where latency and accuracy must be balanced dynamically across different tasks.


For broader, high-throughput applications, Google Gemma 4 and IBM Granite 4.0 serve as the primary backbones. Gemma 4 offers the highest “intelligence density” for general knowledge and scientific accuracy, making it the most versatile option for R&D and high-speed chat interfaces.

Meanwhile, IBM Granite 4.0 is engineered for the “all-day” enterprise workload, utilizing a hybrid architecture that eliminates context bottlenecks for massive document processing. For businesses concerned with legal compliance and hardware efficiency, Granite remains the most reliable foundation for large-scale RAG and document analysis.

Ownership as a feature for regulated industries

In this climate, Arcee’s choice of the Apache 2.0 license is a deliberate act of differentiation. Unlike the restrictive community licenses used by some competitors, Apache 2.0 allows enterprises to truly own their intelligence stack without the “black box” biases of a general-purpose chat model.

“Developers and Enterprises need models they can inspect, post-train, host, distill, and own,” Lucas Atkins noted in the launch announcement.


This ownership is critical for the “bitter lesson” of training small models: you usually need to train a massive frontier model first to generate the high-quality synthetic data and logits required to build efficient student models.

Furthermore, Arcee has released Trinity-Large-TrueBase, a raw 10-trillion-token checkpoint. TrueBase offers a rare, “unspoiled” look at foundational intelligence before instruction tuning and reinforcement learning are applied. For researchers in highly regulated industries like finance and defense, TrueBase allows for authentic audits and custom alignments starting from a clean slate.

Community verdict and the future of distillation

The response from the developer community has been largely positive, reflecting the desire for more open-weight, U.S.-made models.

On X, researchers highlighted the disruption, noting that the “insanely cheap” prices for a model of this size would be a boon for the agentic community.


On open AI model inference website OpenRouter, Trinity-Large-Preview established itself as the #1 most used open model in the U.S., serving over 80.6 billion tokens on peak days like March 1, 2026.

The proximity of Trinity-Large-Thinking to Claude Opus 4.6 on PinchBench—at 91.9 versus 93.3—is particularly striking when compared to the cost. At $0.90 per million output tokens, Trinity is approximately 96% cheaper than Opus 4.6, which costs $25 per million output tokens.
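That savings figure is simple per-token arithmetic on the prices quoted above:

```python
# Prices as quoted in the article, in dollars per million output tokens.
trinity_cost = 0.90   # Trinity-Large-Thinking
opus_cost = 25.00     # Claude Opus 4.6

savings_pct = (1 - trinity_cost / opus_cost) * 100
print(f"Trinity is {savings_pct:.1f}% cheaper per output token")  # → 96.4%
```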

Arcee’s strategy is now focused on bringing these pretraining and post-training lessons back down the stack. Much of the work that went into Trinity Large will now flow into the Mini and Nano models, refreshing the company’s compact line with the distillation of frontier-level reasoning.

As global labs pivot toward proprietary lock-in, Arcee has positioned Trinity as a sovereign infrastructure layer that developers can finally control and adapt for long-horizon agentic workflows.


Tech

AI is doing the dirty work for insurance companies, and it’s getting worse

Published

on

Insurance claims adjusters have never had a reputation for generosity. But at least they were human. That’s changing fast, and not in your favor. A report by Futurism details how AI automation is now a major trend in personal insurance, the health, home, and auto coverage most of us rely on. 

Is your doctor’s opinion even part of the process anymore?

It doesn’t seem that your doctor’s opinion carries that much weight now. A Palm Beach Post investigation found that Iris Smith, an 80-year-old suffering from arthritis, may be a victim of AI-fueled preauthorization denials.

In another case, UnitedHealth is currently facing a class-action lawsuit alleging that AI-denied Medicare nursing care contributed to patient deaths. Meanwhile, a National Association of Insurance Commissioners survey found 84% of health insurers are using AI, with 68% deploying it for prior authorization approvals.

Most people give up and don’t even appeal these rejections because the process is too confusing or exhausting, which, if you’re an insurance company, is the outcome you want.

The worst part is that we know AI isn’t always accurate and has a tendency to hallucinate. It’s one thing if it makes a mistake while writing a report, but it’s a completely different ball game when it ends up denying medical aid to someone who truly needs it.


Is there anyone protecting your interests?

Florida Representative Lois Frankel isn’t having any of it. She told the Palm Beach Post she plans to fight any expansion into other states. “We believe Medicare was based on a promise that if your doctor says you need care, if you’re hurt and you need care, Medicare will be there for you, not AI.”

But if the past is any indication, her fight alone won’t be enough. Florida lawmakers tried to pass a bill in 2025 requiring human review for AI-generated denials. It passed the House, died in the Senate, and a Trump executive order discouraging state AI regulations didn’t help.

The silver lining, if you can call it that: nonprofits like Counterforce Health now offer free AI tools that analyze your denial letter and draft a customized appeal, making it easier to fight back. It’s AI versus AI at this point, and the world is growing gloomier by the day.


Tech

The AI Doc’s Falsehoods And False Balance


from the hype-without-substance dept

There is a familiar media failure in which opposing viewpoints are presented as equally valid, even when the evidence overwhelmingly supports one side. It’s called Bothsidesism. This false balance phenomenon legitimizes misinformation and undermines public understanding by giving disproportionate weight to baseless claims.

Why bring this up? Because the new AI Doc film is based on it.

The film wants credit for being “balanced” because it assembles a wide range of experts. But putting Prof. Fei-Fei Li, a pioneering computer scientist, next to someone like Eliezer Yudkowsky, an author of a Harry Potter fanfic, is not “balance.”

Once you understand that false equivalence is baked into the film’s storytelling, you understand how misleading and manipulative the documentary is. And it is compounded by a series of falsehoods that go unchallenged and uncorrected.

This review addresses both failures. 


The “AI Doc” Movie

“The AI Doc: Or How I Became an Apocaloptimist,” co-directed by Daniel Roher and Charlie Tyrell, sets out to explore AI, especially its potential for good and bad, with a strong emphasis on the filmmakers’ anxieties and fears. Its basic premise is: “A father-to-be tries to figure out what is happening with all this AI insanity.” As summarized by Andrew Maynard from Future of Being Human:

“The documentary progresses through the eyes of director Daniel Roher as he faces a tsunami of existential AI angst while grappling with the responsibility of becoming a father. Motivated by a fear that artificial intelligence could spell the end of everything that matters, he sets out to interview some of the largest (and loudest) voices in AI to fathom out whether this is the best of times or worst of times for him and his wife (filmmaker Caroline Lindy) to bring a kid into the world.”

The “loudest voices” include many AI doomer figures, such as Eliezer Yudkowsky, Dan Hendrycks, Daniel Kokotajlo, Connor Leahy, Jeffrey Ladish, and two of the most populist voices on emerging tech (first social media and now AI): Tristan Harris and Yuval Noah Harari. The film also features voices on AI ethics, including David Evan Harris, Emily M. Bender, Timnit Gebru, Deborah Raji, and Karen Hao. On the more boosterish side, there are Peter Diamandis and Guillaume Verdon (AKA Beff Jezos). Three leading AI CEOs were also interviewed: OpenAI’s Sam Altman, DeepMind’s Demis Hassabis, and Anthropic’s Amodei siblings, Dario and Daniela. (Meta’s Mark Zuckerberg declined, and xAI’s Elon Musk agreed but never showed up.)

The movie started playing in theaters on March 27, but there are already plenty of reviews (dating back to the Sundance Film Festival). The praise is fairly consistent: It is timely, wide-ranging, visually energetic, and unusually well-connected, with access to major AI figures.


The most common criticism is that it is too deferential to interviewees and too thin on hard interrogation or concrete answers. As several reviewers put it:

  1. “Roher’s willingness to blindly accept any and all of his speakers’ pronouncements leaves The AI Doc feeling toothless.”
  2. “By giving its doomer and accelerationist voices so much time to present AI’s most hyperbolic potential outcomes with little pushback, the documentary’s first half plays more like an overlong advertisement for the technology as opposed to a piece of measured analysis.”
  3. “Roher acts as a fantastic storyteller, but he treats his subjects too gently. The film desperately needs more pushback during the interviews.”

Tristan Harris, co-founder of the Center for Humane Technology, told the AP: “My hope is that this film is kind of like ‘An Inconvenient Truth’ or ‘The Social Dilemma’ for AI.”

That is not reassuring. It is more like a glaring warning sign. Harris’s “Social Dilemma” and “AI Dilemma” movies were full of misinformation and nonsensical hyperbole, and both were designed to be manipulative and dishonest. If anything, his endorsement tells you exactly what kind of movie this is.

After watching the AI Doc, I realized what the doomers had managed to accomplish here: The film absorbs the panic rather than investigates it.

The False Balance of The AI Doc


The AI Doc starts with what one reviewer called a “Doom Parade.” It aims to set the tone.

“The worst AI predictions are presented first,” another reviewer noted. “Eliezer Yudkowsky, co-founder of the Machine Intelligence Research Institute, calmly talks of the ‘abrupt extermination’ of humanity.”

And it is worth remembering who Yudkowsky is and what he has actually advocated. In his notorious TIME op-ed, “Shut it All Down,” he argued that governments should “be willing to destroy a rogue datacenter by airstrike.” In his book “If Anyone Builds It, Everyone Dies,” which many reviewers found unconvincing and “unnecessarily dramatic sci-fi,” he (and his co-author Nate Soares) proposed that governments must bomb labs suspected of developing AI. Based on what exactly? On the authors’ overconfident, binary worldview and speculative scenarios, which they mistake for inevitability.

One review of that book observed, “The plan with If Anyone Builds It seems to be to sane-wash him [Yudkowsky] for the airport books crowd, sanding off his wild opinions.”


That is more or less what the new documentary does, too. The AI Doc sane-washes the loudest doomers for mainstream viewers, sanding off their wild opinions.

In his newsletter, David William Silva addresses the documentary’s “series of doomers,” who “describe AI-driven extinction with the calm confidence of people who have said these things so many times they have stopped noticing they have no evidence for them.”

“Roher’s reaction is full terror,” Silva adds. “I hope it is unequivocally evident that this is not journalism.”

That gets to the heart of it. The film pretends to weigh competing perspectives, but in practice, it grants disproportionate authority to people most invested in flooding the zone with AI panic. And there is a well-oiled machine behind this kind of AI panic. As Silva writes:


“The people behind the AI anxiety machine. […] They know that predicting human extinction by software is an extraordinary claim requiring extraordinary evidence. They know they don’t have it. They know ‘my kids won’t live to see middle age’ is nothing but performance. […] And they do it anyway. Why do you think that is? The calculation is simple. Some people will see through it, and they will be annoyed, write rebuttals, call it what it is. Ok, fine. Just an acceptable loss. The believers, on the other hand, are a market. As long as the ratio stays favorable, the machine is profitable.”

One of the biggest beneficiaries of this film is Harris.[1] He is framed as if he is in the middle between the two main camps (doomers and accelerationists), and his narrative gradually becomes the film’s narrative (similar to the Social Dilemma). His call to action even serves as the ending (with a QR code directing viewers to a designated website).

The problem is that this framing has very little to do with reality. Harris’s Center for Humane Technology got $500,000 from the Future of Life Institute for “AI-related policy work and messaging cohesion within the AI X-risk [existential risk] community.” That is not a neutral player.

There’s a touching scene in the film where Roher mentions his father’s cancer treatment and expresses hope that AI might help. Harris appears visibly emotional. But in other contexts, Harris has argued against looking at AI for help with cancer treatment… in the belief that it would lead to extinction. Here he is on Glenn Beck’s show in 2023:

“My mother died from cancer several years ago. And if you told me that we could have AI that was going to cure her of cancer, but on the other side of that coin was that all the world would go extinct a year later, because of the, the only way to develop that was to bring something, some Demon into the world that would we would not be able to control, as much as I love my mother, and I would want her to be here with me right now, I wouldn’t take that trade.”

That sort of hyperbole seems relevant to Harris’ stance on such things, but was not mentioned in the film at all.

Connor Leahy of Conjecture and ControlAI gets a similar makeover. In the documentary, he appears as another pessimistic expert. Elsewhere, he said he does not expect humanity “to make it out of this century alive; I’m not even sure we’ll get out of this decade!” His “Narrow Path” proposal for policymakers begins with the claim that “AI poses extinction risks to human existence.” Instead of calling for a six-month AI pause, he argued for a 20-year pause, because “two decades provide the minimum time frame to construct our defenses.”

This is exactly why background checks matter. Viewers of the AI Doc deserve to know the full scope of the more extreme positions these interviewees have publicly taken elsewhere. If someone has publicly argued for destroying data centers by airstrikes or stopping AI for 20 years, the audience should know that.

Debunking the Falsehoods

The film goes well beyond merely stoking panic. It also recycles several misleading or plainly false claims, letting them pass as established fact. Three stood out in particular.

Anthropic’s Blackmail study

One of the most repeated “facts” in reviews of the movie is that Anthropic’s AI model, Claude, decided, unprompted, to blackmail a fictional employee. In the film, Daniel Roher asks, “And nobody taught it to do that?” Jeffrey Ladish, of Palisade Research and Tristan Harris’s Center for Humane Technology, replies: “No, it learned to do that on its own.”

That is a misleading characterization of the actual experiment; it has already been debunked in “AI Blackmail: Fact-Checking a Misleading Narrative.” Anthropic’s researchers admitted that they strongly pressured the model and iterated through hundreds of prompts before producing that outcome. It wasn’t a spontaneous emergence of “evil” behavior; the researchers explicitly engineered the scenario so that blackmail would be the default. Telling viewers that the model has gone full “HAL 9000” omits the heavily engineered experimental setup.

Although this is a classic case of big claims and thin evidence, the film offers so little pushback that viewers are left to take Ladish’s statements at face value.

It is also worth remembering that Ladish has fought against open-source AI, pushed for a crackdown on open-source models, and once said, “We can prevent the release of a LLaMA 2! We need government action on this asap.” He later updated his position (and it’s good to revise such views). But does the film mention his earlier public hysteria? No.

Is AI less regulated than sandwich shops? No.

Connor Leahy tells Daniel Roher, “There is currently more regulation on selling a sandwich to the public” than there is on AI development. This talking point has become a favorite slogan in AI doomer circles. It was repeatedly stated by The Future of Life Institute’s Max Tegmark and, more recently, by Senator Bernie Sanders. It’s catchy. It’s also false.

State attorneys general from both parties have explicitly argued that existing laws already apply to AI. Lina Khan, writing on behalf of the Federal Trade Commission, stated that “AI is covered by existing laws. Each agency here today has legal authorities to readily combat AI-driven harm.” The existing AI regulatory stack already includes antitrust & competition regulation, civil rights & anti-discrimination law, consumer protection, data privacy & security, employment & labor law, financial regulation, insurance & accident compensation, property & contract law, among others.

So no, AI is not less regulated than sandwich shops. It’s a misleading soundbite, not a serious description of legal reality.  

Data center water usage

In the film, Karen Hao criticizes data centers, warning that “People are literally at risk, potentially of running out of drinking water.” That sounds alarming, which is presumably the point. But it is highly misleading.

In fact, Karen Hao had to issue corrections to her “Empire of AI” book because a key water-use figure was off by a factor of 4,500. The discrepancy was not 45x or 450x, but rather 4,500x. That is not a rounding error. For detailed rebuttals, see Andy Masley’s “The AI water issue is fake” and “Empire of AI is widely misleading about AI water use.”

There is also a basic proportionality issue here. As demonstrated by The Washington Post, “The water used by data centers caused a stir in Arizona’s drought-prone Maricopa County. But while they used about 905 million gallons there last year, that’s a small fraction of the 29 billion gallons devoted to the country’s golf courses.” To put that plainly: data centers accounted for just 0.1% of the county’s water use.

It is also worth noting that “most of the water used by data centers returns to its source unchanged.” In closed-loop cooling systems, for example, water is recirculated multiple times, which significantly reduces net consumption. 

None of this is hidden information. A basic fact-check by the filmmakers could have brought it to light. But that was not the film’s goal. They chose fear-based framing over actual reporting. They could have pressed interviewees on their track records, failed predictions, and political agendas. Instead, they let them narrate the stakes, unchallenged.

So, I think we can conclude that the AI Doc may want to appear balanced and thoughtful, but, unfortunately, too often it is not.

Final Remark

While Western filmmakers are busy platforming advocates for “bombing data centers” and “Stop AI for 20 years,” the Chinese Communist Party is building the actual infrastructure. The CCP is not making doom-and-gloom documentaries; it is racing ahead. This is a real strategic threat, and it is far more concerning than anything featured in this film.

—————————

Dr. Nirit Weiss-Blatt, Ph.D. (@DrTechlash), is a communication researcher and author of “The TECHLASH and Tech Crisis Communication” book and the “AI Panic” newsletter.

Filed Under: ai, ai doomerism, daniel roher, eliezer yudkowsky, the ai doc, tristan harris

DC In The Data Center For A More Efficient Future

If you own a computer that’s not mobile, it’s almost certain that it will receive its power in some form from a mains wall outlet. Whether it’s 230 V at 50 Hz or 120 V at 60 Hz, where once there might have been a transformer and a rectifier there’s now a switch-mode power supply that delivers low voltage DC to your machine. It’s a system that’s efficient and works well on the desktop, but in the data center even its efficiency is starting to be insufficient. IEEE Spectrum has a look at newer data centers that are moving towards DC power distribution, raising some interesting points which bear a closer look.

A traditional data center has many computers which in power terms aren’t much different from your machine at home. They get their mains power at distribution voltage — probably 33 kV AC where this is being written — bring it down to a more normal mains voltage with a transformer just like the one on your street, and then feed a battery-backed uninterruptible power supply (UPS) that converts from AC to DC, and then back again to AC. The AC then snakes around the data center from rack to rack, and inside each computer there’s another rectifier and switch-mode power supply to make the low-voltage DC the computer uses.

The increasing demands of data centers full of GPUs for AI processing have raised power consumption to the extent that all these conversion steps now cost a significant amount of wasted power. The new idea is to convert once to DC (at a rather scary 800 volts) and distribute it directly to the cabinet, where a more efficient switch-mode converter steps it down to the voltages the computer needs.
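
The intuition is simple arithmetic: efficiencies of conversion stages in series multiply, so every stage you remove helps. A minimal sketch, using assumed round-number per-stage efficiencies rather than figures from any real facility:

```python
# Illustrative comparison of end-to-end power-delivery efficiency.
# All per-stage efficiencies below are assumed, round-number examples,
# not measurements from any particular data center.

def chain_efficiency(stages):
    """Overall efficiency of conversion stages connected in series."""
    eff = 1.0
    for stage in stages:
        eff *= stage
    return eff

# Traditional AC distribution: transformer -> UPS rectifier ->
# UPS inverter -> server PSU (rectifier + switch-mode converter).
traditional = chain_efficiency([0.98, 0.95, 0.95, 0.92])

# 800 V DC distribution: one centralized rectifier, then a single
# switch-mode converter at the rack.
dc_bus = chain_efficiency([0.98, 0.97])

print(f"traditional AC chain: {traditional:.1%}")
print(f"800 V DC chain:       {dc_bus:.1%}")
```

Even under these made-up numbers the gap is double-digit percentage points, and at data-center scale a few percent of wasted power is megawatts of heat.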

It’s an attractive idea not just for the data center. We’ve mused on similar ideas in the past and even celebrated a solution at the local level. But given the potential ecological impact of these data centers, it’s a little hard to get excited about the idea in this context. The fourth of our rules for the responsible use of a new technology comes into play. Fortunately we think that both an inevitable cooling of the current AI hype and a Moore’s Law-driven move towards locally-run LLMs may go some way towards solving that problem on their own.


header image: Christopher Bowns, CC BY-SA 2.0.

A Major Publisher Just Canceled This Book Over AI Writing Concerns


Last June, Mia Ballard’s self-published novel Shy Girl took the internet by storm. After winning the hearts of readers and publisher Hachette alike, it was set for a major US debut in the coming months. 

Now, the novel may never become available through any official channel again. Hachette has officially pulled the plug on the novel’s US release following a wave of allegations that generative AI played a role in the manuscript’s creation. 

Originally self-published in February 2025, the horror novel was traditionally released by Hachette’s science fiction and fantasy label Orbit in the UK in November. After The New York Times provided evidence of AI usage in Shy Girl, Hachette canceled the planned spring US release and removed the book from its website completely.

“Hachette remains committed to protecting original creative expression and storytelling,” the publisher said in a statement to the Times. 

Authors are required to disclose to Hachette whether AI was used in the creation of their work. Ballard has denied using AI tools to write the book, claiming an editor was responsible for the portions that appear to be AI-generated.

“My name is ruined for something I didn’t even personally do,” Ballard wrote in an email to the New York Times.

The book cover for Shy Girl by Mia Ballard.

Hachette UK

The cancellation of Shy Girl by Hachette marks the first time a major publisher has publicly pulled an existing title due to suspicions of AI-generated prose.

For the past few months, readers online have raised concerns about the book’s apparent use of AI.

A video from YouTuber frankie’s shelf provides a lengthy analysis of the novel, pointing out linguistic patterns that are characteristic of AI writing. The video also lists words in Shy Girl that are repeated with unusual frequency (“edge” is used 84 times and “sharp” 159 times), often in ways that are abstract and nonsensical.
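
That kind of word-frequency check is easy to reproduce. A minimal sketch in Python, run on a toy sentence rather than the novel’s text (the function and sample are illustrative, not frankie’s shelf’s actual method):

```python
from collections import Counter
import re

def word_frequencies(text, words_of_interest):
    """Count how often each word of interest appears, case-insensitively."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    return {w: counts[w] for w in words_of_interest}

# Toy sample standing in for a full manuscript.
sample = "The edge was sharp. A sharp edge cuts; the sharp wind howled."
print(word_frequencies(sample, ["edge", "sharp"]))  # → {'edge': 2, 'sharp': 3}
```

On a full novel, comparing such counts against typical frequencies in human-written fiction is what makes the repetition stand out.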

In January, Max Spero, founder and chief executive of Pangram, ran the text of Shy Girl through his AI detection program. He claimed that the novel was 78% AI-generated.
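
A single “percent AI-generated” figure typically summarizes many per-chunk classifier decisions rather than describing any one sentence. This hypothetical sketch shows that aggregation with a toy detector; `percent_flagged`, `toy_score`, and the threshold are invented for illustration and say nothing about how Pangram actually works:

```python
# Hypothetical aggregation: the share of text chunks a classifier flags.

def percent_flagged(chunks, detector_score, threshold=0.5):
    """Return the percentage of chunks whose score meets the threshold."""
    flagged = sum(1 for chunk in chunks if detector_score(chunk) >= threshold)
    return 100.0 * flagged / len(chunks)

def toy_score(chunk):
    # Toy stand-in detector: density of the word "sharp" per word count.
    return chunk.lower().count("sharp") / max(len(chunk.split()), 1)

chunks = [
    "A sharp sharp edge",
    "Plain ordinary prose here",
    "Sharp sharp sharp words",
    "Nothing unusual at all",
]
print(f"{percent_flagged(chunks, toy_score, threshold=0.3):.0f}% flagged")  # → 50% flagged
```

The takeaway is that such percentages are statements about chunk-level classification, which is why detector results are treated as evidence rather than proof.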

The rise of AI has caught the publishing industry off guard. Though AI writing has already appeared in many self-published books, traditional publishers like Hachette are more critical of the technology.

Representatives for Hachette didn’t immediately respond to a request for comment.

‘Wood is wood’: WSU research finds Yankees’ viral ‘torpedo’ bats perform the same as traditional bats


A research team determined that the torpedo bat, left, and traditional bat perform equally well in hitting power with only a slight difference in the location of the bat’s sweet spot (WSU Photo / Voiland College of Engineering and Architecture)

The New York Yankees just cruised through Seattle and won two out of three games against the Mariners. On the other side of Washington state, the Bronx Bombers’ “torpedo bats” were being scientifically scrutinized.

In what Washington State University is calling the first-ever laboratory experiments on the new baseball bat design, researchers found that torpedo bats and traditional bats basically perform the same.

It didn’t look that way last season, when the Yankees hit a franchise-record nine home runs in a game against the Milwaukee Brewers and drew viral attention to the bats that they were swinging.

The torpedo bat design relies on a slightly different shape in which wood is removed from the barrel tip and added to the bat’s sweet spot, so that the diameter tapers down, a little like a bowling pin. But the hype appears overblown.

“Wood is wood,” Lloyd Smith, a professor in WSU’s School of Mechanical and Materials Engineering and director of the university’s Sports Science Laboratory, told WSU Insider. “When it comes to baseball, there’s not a lot you can do with wood. If your goal is to keep the game steady and consistent and not have a lot of change, wood bats are good.”

Smith is part of a research team that includes Alan Nathan from University of Illinois and Daniel Russell from Penn State University. They’ll present their findings at the upcoming International Sports Engineering Association conference, June 1–4 in Pullman, Wash.

According to WSU Insider, the researchers created two maple bats that were duplicates of a standard Major League Baseball bat. Two additional maple bats were made with a torpedo-shaped barrel that gave them the same swing weight as the standard bat.

They measured how much energy the bat returns to the ball by firing baseballs from an air cannon at a stationary bat and using light gates and cameras to measure the speed of the incoming and rebounding ball.

The team found nearly identical performance for the torpedo and standard bats except that the sweet spot for the torpedo bat was a half inch farther from the bat tip than the standard bat.
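
The article doesn’t give WSU’s exact analysis pipeline, but lab tests of this kind commonly reduce the air-cannon data to a collision efficiency and then a predicted batted-ball speed. A hedged sketch: the formula BBS = q·v_pitch + (1+q)·v_bat is the standard batted-ball-speed model, while every speed below is a made-up illustrative number:

```python
def collision_efficiency(v_in, v_rebound):
    """Collision efficiency q = rebound speed / inbound speed for a ball
    fired at a stationary bat (higher q = more energy returned)."""
    return v_rebound / v_in

def batted_ball_speed(q, pitch_speed, bat_speed):
    """Standard batted-ball-speed model: BBS = q*v_pitch + (1+q)*v_bat."""
    return q * pitch_speed + (1 + q) * bat_speed

# Made-up example speeds in mph, not WSU's measured data.
q = collision_efficiency(v_in=140.0, v_rebound=28.0)
print(f"q = {q:.2f}")
print(f"BBS = {batted_ball_speed(q, pitch_speed=90.0, bat_speed=70.0):.1f} mph")
```

Measuring nearly identical q values at each impact point along the two barrel shapes is what would support the “wood is wood” conclusion, with only the location of the peak (the sweet spot) shifting.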

“It was actually pretty phenomenal how close they were,” said Smith.

While some Yankees players said last year that any little tweak could provide an advantage, the team’s captain wasn’t convinced.

Aaron Judge hit an American League-record 62 homers in 2022, 58 in an MVP season in 2024 and 53 as repeat MVP in 2025. He had three homers using a traditional bat in that much-talked-about rout of the Brewers.

“The past couple of seasons kind of speak for itself,” Judge told ESPN last May. “Why try to change something?”

Xteink’s X3 E-Reader Snaps Onto Your iPhone, Ready for Any Spare Moment


Xteink X3 E-Reader
Slapping the Xteink X3 onto an iPhone takes only a few seconds, thanks in part to built-in magnets that align exactly with MagSafe and snap it into place. You get a thin black or white slab that sits flush against the phone’s back without adding any bulk. Anyone who reaches for their phone dozens of times a day will appreciate having a book right at their fingertips, all in the same motion.



At only 58 grams, this device is easy to forget about until you need it, and then, as if by magic, it appears. Its overall size is a modest 100 mm long and 60 mm wide, so it goes unnoticed in a pocket until reading time beckons. Commuters and people waiting in line can just pull out their phones and start reading a chapter without having to dig through a bag for another device.


The 3.7-inch E Ink screen displays clear text at over 250 pixels per inch. You can easily change the font size with a few simple adjustments, so even the smallest pages remain comfortable to read. With adequate lighting the characters simply pop, and unlike a phone screen there is no eye strain to contend with. You also get real buttons on the sides and bottom for turning pages and accessing menus, so one-handed operation feels perfectly normal, whether you’re on a train or lying in bed. The built-in gyroscope detects even a slight shake and flips the page forward, letting you keep a solid grip during those quick reading sessions.

Navigation is straightforward, with a grid of icons instead of swipes or touches. Choose a book or change the settings with a few presses; it remains dependable even when your fingers are clumsy. The approach minimizes distractions and lets you concentrate on the words themselves. You can load books onto the device using either the 16 GB microSD card included in the box or a companion app on your phone. Transferring EPUB files is quick over Wi-Fi or by inserting the card into your computer, and storage can be expanded to 512 GB, letting you carry thousands of titles without running out of space.

The battery will last you 10 to 14 days on a single charge, even if you only read for an hour or two every day, and charging is easy: just bring the special cable with its magnetic pogo pins up to the gadget and it will clip right into place. Okay, there is one little flaw: there is no built-in front light (yet), but you can get a separate clip-on light for only $9.99 if you plan on reading late into the evening. If you need more connectivity, there are Bluetooth and NFC, as well as Wi-Fi for the occasional update or transfer. It’s available now on the official Xteink website for $79.