

Manufact raises $6.3M as MCP becomes the ‘USB-C for AI’ powering ChatGPT and Claude apps


For decades, software companies designed their products for a single type of customer: a human being staring at a screen. Every button, menu, and dashboard existed to translate a person’s intention into a machine’s action. But a small startup based in San Francisco and Zurich believes that era is ending — and that the future belongs to companies that build software not for people, but for the artificial intelligence agents that increasingly act on their behalf.

Manufact, a three-person company that emerged from Y Combinator’s Summer 2025 batch, announced in February that it raised $6.3 million in seed funding led by Peak XV, the venture capital firm formerly known as Sequoia Capital India and Southeast Asia, which now manages more than $10 billion in assets. Liquid 2 Ventures, Ritual Capital, Pioneer Fund, and Y Combinator also participated in the round, alongside angel investors including the co-founder and chief operating officer of Supabase.

The company’s thesis is deceptively simple and potentially enormous: as AI agents take over more of the work that humans perform inside software applications — filing expense reports, managing customer support tickets, writing code, booking travel — every software product on earth will need a new kind of interface designed specifically for those agents. Manufact is building the open-source tools and cloud infrastructure to make that transition possible.

“Software products are already being accessed by and will be accessed mainly by AI agents, or by users through chat interfaces,” Luigi Pederzani, co-founder and co-CEO of Manufact, said in an interview with VentureBeat. “That’s our bet. That’s our thesis. And that’s what we are really rooting our company on.”


How Anthropic’s Model Context Protocol became the universal standard for AI agents

To understand Manufact, you first have to understand the technology it is built on: the Model Context Protocol, or MCP, an open standard introduced by Anthropic in late 2024 that has rapidly become the dominant way for AI agents to communicate with external software tools and data sources.

Before MCP, connecting an AI agent to a company’s software required custom integration work for every single tool — a bespoke connector for Slack, another for Salesforce, another for a database. It was tedious, expensive, and fragile. MCP standardized this process into a single protocol, functioning as what CIO magazine recently called “the USB-C of AI” — a universal connector that lets any AI model plug into any software system through a single, consistent interface.
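To make the "universal connector" idea concrete, here is a minimal sketch of what an MCP tool call looks like on the wire. MCP frames its messages as JSON-RPC 2.0 requests and responses; the `create_ticket` tool and its arguments below are hypothetical, invented purely for illustration.

```python
import json

# Illustrative MCP "tools/call" request in the JSON-RPC 2.0 framing the
# protocol uses. The tool name and arguments are hypothetical examples.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "create_ticket",  # hypothetical tool exposed by some MCP server
        "arguments": {"title": "Refund request", "priority": "high"},
    },
}

# A matching response: MCP tool results carry a list of content blocks,
# which the calling agent feeds back to the language model.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "Ticket created"}]},
}

print(json.dumps(request, indent=2))
```

Because every server and client speaks this same framing, one integration replaces the bespoke per-tool connectors described above.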

The adoption has been explosive. In December 2025, Anthropic donated MCP to the Linux Foundation’s new Agentic AI Foundation, co-founded with Block and OpenAI, with support from Google, Microsoft, Amazon Web Services, and Cloudflare. More than 10,000 active public MCP servers now operate across the ecosystem. ChatGPT, Cursor, Google Gemini, Microsoft Copilot, and Visual Studio Code all support the protocol. Enterprise-grade deployment infrastructure exists from AWS, Cloudflare, Google Cloud, and Microsoft Azure. An estimated 7 million downloads of MCP servers occur every month.

“Great protocols are as good as their adoption,” Pederzani said, drawing a comparison to the mobile revolution. “We saw the same transition with mobile, right? In the beginning, companies were just creating a pretty simple mobile app. Who would have bought a hotel or a flight or used a bank account from a mobile app? But as time passed, the web became mobile first. What we think is that software products will be MCP first, or chat first.”


The stakes are high. The global AI agents market reached $7.84 billion in 2025 and is projected to surge to $52.62 billion by 2030, according to industry analysts. The MCP Dev Summit, the largest conference dedicated to the protocol, takes place April 2–3 in New York City under the Linux Foundation’s banner, with speakers from Docker, Workato, and major cloud providers — and Manufact will be among the companies presenting.

Two Italian founders, a Zurich co-working space, and an open-source library that went viral

Manufact’s origin story reads like a case study in the power of open-source communities to validate a startup idea before a single dollar of venture capital is raised.

Pietro Zullo and Luigi Pederzani, both originally from Italy, met at a co-working space in Zurich — the same space that produced Browser Use, Bloom, and other startups that went through YC in previous batches. Zullo was studying at ETH Zurich; Pederzani was working at Morgen, an ETH spin-off AI startup used by teams at Spotify, GitHub, and Linear, after leading a 12-engineer team at Accenture Switzerland. Both were winding down previous projects in early 2025 when MCP launched.

“We both wrote agents in the past, and it was such a mess to write the tools, the integrations,” Zullo recalled. “When MCP came out, it looked like the perfect fit for what we were trying to do. But only Cursor, Claude Code, a few closed-source applications allowed you to actually use the protocol. I don’t think I’m going to do groceries or browse the internet or check my emails from Cursor — it’s like, not the right code, right? So we wrote an open-source library to basically do what you could do in Cursor with MCP servers, but on your own machine, on your own application, in your own terms.”


They called the library mcp-use, with a slogan that resonated across the developer community: “Connect any MCP to any LLM in six lines of code.” The repository attracted 2,000 to 2,500 GitHub stars within weeks. Today, the SDK has surpassed 5 million downloads and 9,000 GitHub stars. Organizations including NASA, Nvidia, and SAP use the library, and Manufact claims that 20 percent of the US 500 have experimented with it.

“The amount of power that you can put in six lines of code was really staggering,” Zullo said. The pair applied to Y Combinator on the day of the deadline. “We were super spontaneous because we had this open-source vibe and just enjoyed the process. We had so much energy from the community that was lifting us up, and we knew it was going to be fine.”

Inside Manufact’s plan to become the ‘Vercel for MCP’ — from SDK to cloud in 60 seconds

Manufact’s strategy borrows directly from the playbook that turned Vercel into a multi-billion-dollar company by providing hosting and developer tools for front-end web applications. The analogy is deliberate: just as Vercel made it trivially easy to deploy a Next.js app, Manufact wants to make it trivially easy to build, test, and deploy the MCP servers and MCP apps that AI agents need to interact with software.

The company offers three core products. First, the open-source mcp-use SDK, available in both Python and TypeScript, lets developers spin up a fully functional AI agent connected to MCP tools in as few as six lines of code. It supports any large language model, including local models, and has integrations with LangChain and other popular frameworks. Second, a built-in inspector and testing suite allows developers to visually debug their MCP servers in a browser, view raw JSON-RPC traffic, and test tool execution in a sandbox — without connecting to a live AI agent. Third, the Manufact Cloud platform handles deployment, scaling, authentication, access control, and observability, allowing teams to go from a GitHub push to a production MCP server in under 60 seconds.


“As software becomes more agentic, the hard part isn’t the model anymore — it’s everything around it,” Zullo said. “We started Manufact because developers were spending too much time on plumbing instead of building and shipping their products.”

The company has also moved aggressively into MCP apps, a newer extension of the protocol that allows developers to render interactive user interface components — React widgets, data visualizations, input forms — directly inside chat clients like ChatGPT and Claude. Manufact’s SDK lets a developer scaffold an MCP app with a single terminal command, edit React widgets, and deploy to ChatGPT in under a minute. This positions the company at the center of a potentially massive new distribution channel: ChatGPT alone has more than 800 million users.

5 million downloads, zero revenue, and a crowded field of cloud giants

Every open-source company faces the same fundamental tension: the community that makes the project valuable is not the same thing as a paying customer base. Manufact has been candid about this challenge.

Pederzani said the company made a deliberate decision after Y Combinator to focus entirely on the open-source product and community, rather than rushing to monetize. “A lot of open-source projects jump immediately on the monetization part and kind of betray the community,” he said. While NASA, Nvidia, and other prominent organizations use the SDK, Pederzani acknowledged they are not paying customers. Manufact’s target is to reach $2 million to $3 million in annual recurring revenue by the end of 2026, which would position it for a Series A fundraise.


The competitive landscape is crowding fast. AWS, Cloudflare, Vercel, and Docker have all launched MCP hosting features. But Manufact’s founders argue they sit in a complementary position relative to the model providers. “Anthropic and OpenAI are betting that their own chat products — Claude and ChatGPT — will become the primary interfaces through which people access all software,” Pederzani said. “If that bet plays out, we will serve these systems. That’s going to be massive.”

Why software companies without MCP servers risk becoming “dumb databases” for AI agents

Behind Manufact’s optimism lies a darker observation about the software industry that gives their pitch urgency. Pederzani argued that companies that fail to make their products accessible to AI agents risk being reduced to “systems of record” — dumb databases that agents query but that no longer own the user experience or the customer relationship.

“Now we have customers that come to us and say that their customers are choosing to adopt their product over a competitor because they offer an MCP server,” Pederzani said. “At the same time, there is a threat here that could put companies to become just systems of records. And this is really something that a lot of companies are scared of.”

In late February, Manufact co-hosted what it called the largest MCP apps hackathon to date at Y Combinator’s headquarters in San Francisco. The event drew 650 applications and 300 builders. OpenAI, Cloudflare, and Anthropic all sponsored it. Perhaps the most telling detail: eight employees from Anthropic attended — more people than Manufact’s own three-person team. The model providers, it appears, view Manufact as an ally rather than a threat.


Three employees, $6.3 million, and the ambition to capture a share of every AI tool call on Earth

For all its momentum, Manufact faces significant headwinds. The company has just three employees and has not yet demonstrated a scalable revenue model. Its most high-profile users are not paying customers. The $6.3 million seed round provides limited runway in an industry where infrastructure companies often require substantial capital to reach profitability. And the cloud providers that have launched MCP hosting features already own the customer relationships and billing infrastructure that enterprise buyers rely on.

But when asked what success looks like in two years, both founders pointed to a single metric: the percentage of global AI tool calls that flow through their infrastructure. “Our metric is the global tool calls or servers that run on Manufact — how many tool calls are passing through Manufact, made by agents,” Pederzani said. “Like Stripe is doing for the global GDP. We’re going to win if we can get a great number for it.”

The Stripe analogy is ambitious — Stripe processes hundreds of billions of dollars annually and is valued at roughly $90 billion — but it captures the scope of what Manufact’s founders believe is at stake. If MCP becomes the universal standard through which AI agents interact with all software, the company that provides the infrastructure for building and deploying MCP servers could occupy a position of outsized influence.

“In the end, what matters is to make something agents want,” Zullo said, riffing on Y Combinator’s famous dictum to “make something people want.” “What we’re focusing on and what we’re building is to help this transition of building for agents instead of building for humans.”



AI is doing the dirty work for insurance companies, and it’s getting worse


Insurance claims adjusters have never had a reputation for generosity. But at least they were human. That’s changing fast, and not in your favor. A report by Futurism details how AI automation is now a major trend in personal insurance: the health, home, and auto coverage most of us rely on.

Is your doctor’s opinion even part of the process anymore?

It doesn’t seem that your doctor’s opinion carries that much weight now. A Palm Beach Post investigation found that Iris Smith, an 80-year-old suffering from arthritis, may be a victim of AI-fueled preauthorization denials.

In another case, UnitedHealth is currently facing a class-action lawsuit alleging that AI-denied Medicare nursing care contributed to patient deaths. Meanwhile, a National Association of Insurance Commissioners survey found 84% of health insurers are using AI, with 68% deploying it for prior authorization approvals.

Most people give up and don’t even appeal these rejections because the process is too confusing or exhausting, which, if you’re an insurance company, is the outcome you want.

The worst part is that we know AI isn’t always accurate and has a tendency to hallucinate. It’s one thing if it makes a mistake while writing a report, but it’s a completely different ball game when it ends up denying medical aid to someone who truly needs it.


Is there anyone protecting your interests?

Florida Representative Lois Frankel isn’t having any of it. She told the Palm Beach Post she plans to fight any expansion into other states. “We believe Medicare was based on a promise that if your doctor says you need care, if you’re hurt and you need care, Medicare will be there for you, not AI.”

But if the past is any indication, her fight alone won’t be enough. Florida lawmakers tried to pass a bill in 2025 requiring human review of AI-generated denials. It passed the House but died in the Senate, and a Trump executive order discouraging state AI regulation didn’t help.

The silver lining, if you can call it that: nonprofits like Counterforce Health now offer free AI tools that analyze your denial letter and draft a customized appeal, making it easier to fight back. It’s AI versus AI at this point, and the world is growing gloomier by the day.



The AI Doc’s Falsehoods And False Balance


from the hype-without-substance dept

There is a familiar media failure in which opposing viewpoints are presented as equally valid, even when the evidence overwhelmingly supports one side. It’s called Bothsidesism. This false balance phenomenon legitimizes misinformation and undermines public understanding by giving disproportionate weight to baseless claims.

Why bring this up? Because the new AI Doc film is based on it.

The film wants credit for being “balanced” because it assembles a wide range of experts. But putting Prof. Fei-Fei Li, a pioneering computer scientist, next to someone like Eliezer Yudkowsky, an author of a Harry Potter fanfic, is not “balance.”

Once you understand that false equivalence is baked into the film’s storytelling, you understand how misleading and manipulative the documentary is. And it is compounded by a series of falsehoods that go unchallenged and uncorrected.

This review addresses both failures. 


The “AI Doc” Movie

“The AI Doc: Or How I Became an Apocaloptimist,” co-directed by Daniel Roher and Charlie Tyrell, sets out to explore AI, especially its potential for good and bad, with a strong emphasis on the filmmakers’ anxieties and fears. Its basic premise is: “A father-to-be tries to figure out what is happening with all this AI insanity.” As summarized by Andrew Maynard from Future of Being Human:

“The documentary progresses through the eyes of director Daniel Roher as he faces a tsunami of existential AI angst while grappling with the responsibility of becoming a father. Motivated by a fear that artificial intelligence could spell the end of everything that matters, he sets out to interview some of the largest (and loudest) voices in AI to fathom out whether this is the best of times or worst of times for him and his wife (filmmaker Caroline Lindy) to bring a kid into the world.”

The “loudest voices” include many AI doomer figures, such as Eliezer Yudkowsky, Dan Hendrycks, Daniel Kokotajlo, Connor Leahy, Jeffrey Ladish, and two of the most populist voices on emerging tech (first social media and now AI): Tristan Harris and Yuval Noah Harari. The film also features voices on AI ethics, including David Evan Harris, Emily M. Bender, Timnit Gebru, Deborah Raji, and Karen Hao. On the more boosterish side, there are Peter Diamandis and Guillaume Verdon (AKA Beff Jezos). Three leading AI CEOs were also interviewed: OpenAI’s Sam Altman, DeepMind’s Demis Hassabis, and Anthropic’s Amodei siblings, Dario and Daniela. (Meta’s Mark Zuckerberg declined, and xAI’s Elon Musk agreed but never showed up).

The movie started playing in theaters on March 27, but there are already plenty of reviews (dating back to the Sundance Film Festival). The praise is fairly consistent: It is timely, wide-ranging, visually energetic, and unusually well-connected, with access to major AI figures.


The most common criticism is that it is too deferential to interviewees and too thin on hard interrogation or concrete answers. As several reviewers put it:

  1. “Roher’s willingness to blindly accept any and all of his speakers’ pronouncements leaves The AI Doc feeling toothless.”
  2. “By giving its doomer and accelerationist voices so much time to present AI’s most hyperbolic potential outcomes with little pushback, the documentary’s first half plays more like an overlong advertisement for the technology as opposed to a piece of measured analysis.”
  3. “Roher acts as a fantastic storyteller, but he treats his subjects too gently. The film desperately needs more pushback during the interviews.”

Tristan Harris, co-founder of the Center for Humane Technology, told the AP: “My hope is that this film is kind of like ‘An Inconvenient Truth’ or ‘The Social Dilemma’ for AI.”

That is not reassuring. It is more like a glaring warning sign. Harris’s “Social Dilemma” and “AI Dilemma” movies were full of misinformation and nonsensical hyperbole, and both were designed to be manipulative and dishonest. If anything, his endorsement tells you exactly what kind of movie this is.

After watching the AI Doc, I realized what the doomers had managed to accomplish here: The film absorbs the panic rather than investigates it.

The False Balance of The AI Doc


The AI Doc starts with what one reviewer called a “Doom Parade.” It aims to set the tone.

“The worst AI predictions are presented first,” another reviewer noted. “Eliezer Yudkowsky, co-founder of the Machine Intelligence Research Institute, calmly talks of the ‘abrupt extermination’ of humanity.”

And it is worth remembering who Yudkowsky is and what he has actually advocated. In his notorious TIME op-ed, “Shut it All Down,” he argued that governments should “be willing to destroy a rogue datacenter by airstrike.” In his book “If Anyone Builds It, Everyone Dies,” which many reviewers found unconvincing and “unnecessarily dramatic sci-fi,” he (and his co-author Nate Soares) proposed that governments must bomb labs suspected of developing AI. Based on what exactly? On the authors’ overconfident, binary worldview and speculative scenarios, which they mistake for inevitability.

One review of that book observed, “The plan with If Anyone Builds It seems to be to sane-wash him [Yudkowsky] for the airport books crowd, sanding off his wild opinions.”


That is more or less what the new documentary does, too. The AI Doc sane-washes the loudest doomers for mainstream viewers, sanding off their wild opinions.

In his newsletter, David William Silva addresses the documentary’s “series of doomers,” who “describe AI-driven extinction with the calm confidence of people who have said these things so many times they have stopped noticing they have no evidence for them.”

“Roher’s reaction is full terror,” Silva adds. “I hope it is unequivocally evident that this is not journalism.”

That gets to the heart of it. The film pretends to weigh competing perspectives, but in practice, it grants disproportionate authority to people most invested in flooding the zone with AI panic. And there is a well-oiled machine behind this kind of AI panic. As Silva writes:


“The people behind the AI anxiety machine. […] They know that predicting human extinction by software is an extraordinary claim requiring extraordinary evidence. They know they don’t have it. They know ‘my kids won’t live to see middle age’ is nothing but performance. […] And they do it anyway. Why do you think that is? The calculation is simple. Some people will see through it, and they will be annoyed, write rebuttals, call it what it is. Ok, fine. Just an acceptable loss. The believers, on the other hand, are a market. As long as the ratio stays favorable, the machine is profitable.”

One of the biggest beneficiaries of this film is Harris. He is framed as if he is in the middle between the two main camps (doomers and accelerationists), and his narrative gradually becomes the film’s narrative (similar to the Social Dilemma). His call to action even serves as the ending (with a QR code directing viewers to a designated website).

The problem is that this framing has very little to do with reality. Harris’s Center for Humane Technology got $500,000 from the Future of Life Institute for “AI-related policy work and messaging cohesion within the AI X-risk [existential risk] community.” That is not a neutral player.

There’s a touching scene in the film where Roher mentions his father’s cancer treatment and expresses hope that AI might help. Harris appears visibly emotional. But in other contexts, Harris has argued against looking at AI for help with cancer treatment… in the belief that it would lead to extinction. Here he is on Glenn Beck’s show in 2023:

“My mother died from cancer several years ago. And if you told me that we could have AI that was going to cure her of cancer, but on the other side of that coin was that all the world would go extinct a year later, because of the, the only way to develop that was to bring something, some Demon into the world that would we would not be able to control, as much as I love my mother, and I would want her to be here with me right now, I wouldn’t take that trade.”

That sort of hyperbole seems relevant to Harris’s stance on such things, but it was not mentioned in the film at all.


Connor Leahy of Conjecture and ControlAI gets a similar makeover. In the documentary, he appears as another pessimistic expert. Elsewhere, he said he does not expect humanity “to make it out of this century alive; I’m not even sure we’ll get out of this decade!” His “Narrow Path” proposal for policymakers begins with the claim that “AI poses extinction risks to human existence.” Instead of calling for a six-month AI pause, he argued for a 20-year pause, because “two decades provide the minimum time frame to construct our defenses.”

This is exactly why background checks matter. Viewers of the AI Doc deserve to know the full scope of the more extreme positions these interviewees have publicly taken elsewhere. If someone has publicly argued for destroying data centers by airstrikes or stopping AI for 20 years, the audience should know that.

Debunking the Falsehoods

The film goes well beyond pushing panic. It also recycles several misleading or plainly false claims, letting them pass as established facts. Three stood out in particular.


Anthropic’s Blackmail study

One of the most repeated “facts” in reviews of the movie is that Anthropic’s AI model, Claude, decided, unprompted, to blackmail a fictional employee. In the film, Daniel Roher asks, “And nobody taught it to do that?” Jeffrey Ladish, of Palisade Research and Tristan’s Center for Humane Technology, replies: “No, it learned to do that on its own.”

That is a misleading characterization of the actual experiment, and it has already been debunked in “AI Blackmail: Fact-Checking a Misleading Narrative.” Anthropic researchers admitted that they strongly pressured the model and iterated through hundreds of prompts before producing that outcome. It wasn’t a spontaneous emergence of “evil” behavior; the researchers explicitly ensured it would be the default. Telling viewers that the model has gone full “HAL 9000” omits the facts about the heavily engineered experimental setup.

Although this is a classic case of big claims and thin evidence, the film offers so little pushback that viewers are left to take Ladish’s statements at face value.


It is also worth remembering that Ladish has fought against open-source AI, pushed for a crackdown on open-source models, and once said, “We can prevent the release of a LLaMA 2! We need government action on this asap.” He later updated his position (and it’s good to revise such views). But does the film mention his earlier public hysteria? No.

Is AI less regulated than sandwich shops? No.

Connor Leahy tells Daniel Roher, “There is currently more regulation on selling a sandwich to the public” than there is on AI development. This talking point has become a favorite slogan in AI doomer circles. It was repeatedly stated by The Future of Life Institute’s Max Tegmark and, more recently, by Senator Bernie Sanders. It’s catchy. It’s also false.

State attorneys general from both parties have explicitly argued that existing laws already apply to AI. Lina Khan, writing on behalf of the Federal Trade Commission, stated that “AI is covered by existing laws. Each agency here today has legal authorities to readily combat AI-driven harm.” The existing AI regulatory stack already includes antitrust & competition regulation, civil rights & anti-discrimination law, consumer protection, data privacy & security, employment & labor law, financial regulation, insurance & accident compensation, property & contract law, among others.


So no, AI is not less regulated than sandwich shops. It’s a misleading soundbite, not a serious description of legal reality.  

Data center water usage

In the film, Karen Hao criticizes data centers, warning that “People are literally at risk, potentially of running out of drinking water.” That sounds alarming, which is presumably the point. But it is highly misleading.

In fact, Karen Hao had to issue corrections to her “Empire of AI” book because a key water-use figure was off by a factor of 4,500. The discrepancy was not 45x or 450x, but rather 4,500x. That is not a rounding error. For detailed rebuttals, see Andy Masley’s “The AI water issue is fake” and “Empire of AI is widely misleading about AI water use.”


There is also a basic proportionality issue here. As demonstrated by The Washington Post, “The water used by data centers caused a stir in Arizona’s drought-prone Maricopa County. But while they used about 905 million gallons there last year, that’s a small fraction of the 29 billion gallons devoted to the country’s golf courses.” To put that plainly: data centers accounted for just 0.1% of the county’s water use.

It is also worth noting that “most of the water used by data centers returns to its source unchanged.” In closed-loop cooling systems, for example, water is recirculated multiple times, which significantly reduces net consumption. 

None of this is hidden information. A basic fact-check by the filmmakers could have brought it to light. But that was not the film’s goal. They chose fear-based framing over actual reporting. They could have pressed interviewees on their track records, failed predictions, and political agendas. Instead, they let them narrate the stakes, unchallenged.

So, I think we can conclude that the AI Doc may want to appear balanced and thoughtful, but, unfortunately, too often it is not.


Final Remark

While Western filmmakers are busy platforming advocates for “bombing data centers” and “Stop AI for 20 years,” the Chinese Communist Party is building the actual infrastructure. The CCP is not making doom-and-gloom documentaries; it is racing ahead. This is a real strategic threat, and it is far more concerning than anything featured in this film.

—————————

Dr. Nirit Weiss-Blatt, Ph.D. (@DrTechlash), is a communication researcher and author of “The TECHLASH and Tech Crisis Communication” book and the “AI Panic” newsletter.



DC In The Data Center For A More Efficient Future


If you own a computer that’s not mobile, it’s almost certain that it will receive its power in some form from a mains wall outlet. Whether it’s 230 V at 50 Hz or 120 V at 60 Hz, where once there might have been a transformer and a rectifier, there’s now a switch-mode power supply that delivers low voltage DC to your machine. It’s a system that’s efficient and works well on the desktop, but in the data center even that efficiency is starting to fall short. IEEE Spectrum has a look at newer data centers that are moving towards DC power distribution, raising some interesting points which bear a closer look.

A traditional data center has many computers which in power terms aren’t much different from your machine at home. They get their mains power at distribution voltage — probably 33 kV AC where this is being written — bring it down to a more normal mains voltage with a transformer just like the one on your street, and then feed a battery-backed uninterruptible power supply (UPS) that converts from AC to DC, and then back again to AC. The AC then snakes around the data center from rack to rack, and inside each computer there’s another rectifier and switch-mode power supply to make the low voltage DC the computer uses.

The increasing demands of data centers full of GPUs for AI processing have pushed power consumption to the point where all these conversion steps waste a significant amount of power. The new idea is to convert once to DC (at a rather scary 800 volts) and distribute it directly to the cabinet, where a more efficient switch-mode converter steps it down to the voltages the computer needs.
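The efficiency argument is simple multiplication: each conversion stage loses a few percent, and the losses compound. A quick back-of-the-envelope sketch, using illustrative (not measured) per-stage efficiencies, shows why cutting out the double conversion matters at data-center scale:

```python
from math import prod

# Illustrative per-stage efficiencies for the two distribution schemes
# described above. These numbers are assumptions for the sake of the
# arithmetic; real figures vary by equipment and load.
ac_chain = {
    "site transformer": 0.98,
    "UPS rectifier (AC to DC)": 0.96,
    "UPS inverter (DC back to AC)": 0.96,
    "server PSU (AC to DC)": 0.94,
}
dc_chain = {
    "site rectifier (AC to 800 V DC)": 0.975,
    "rack DC-DC converter": 0.97,
}

# Overall efficiency is the product of the stage efficiencies.
ac_eff = prod(ac_chain.values())
dc_eff = prod(dc_chain.values())
print(f"AC distribution: {ac_eff:.1%}, DC distribution: {dc_eff:.1%}")
```

Under these assumed figures the AC chain delivers roughly 85% of input power to the silicon while the DC chain delivers around 95%; across a gigawatt-class AI campus, percentage points like that are megawatts.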


It’s an attractive idea not just for the data center. We’ve mused on similar ideas in the past and even celebrated a solution at the local level. But given the potential ecological impact of these data centers, it’s a little hard to get excited about the idea in this context. The fourth of our rules for the responsible use of a new technology comes into play. Fortunately we think that both an inevitable cooling of the current AI hype and a Moore’s Law driven move towards locally-run LLMs may go some way towards solving that problem on its own.


header image: Christopher Bowns, CC BY-SA 2.0.

Tech

A Major Publisher Just Canceled This Book Over AI Writing Concerns

Last June, Mia Ballard’s self-published novel Shy Girl took the internet by storm. After winning the hearts of readers and publisher Hachette alike, it was set for a major US debut in the coming months. 

Now, the novel may never become available through any official channel again. Hachette has officially pulled the plug on the novel’s US release following a wave of allegations that generative AI played a role in the manuscript’s creation. 

Originally self-published in February 2025, the horror novel was traditionally released by Hachette’s science fiction and fantasy label Orbit in the UK in November. After The New York Times provided evidence of AI usage in Shy Girl, Hachette canceled the planned spring US release and removed the book from its website completely.

“Hachette remains committed to protecting original creative expression and storytelling,” the publisher said in a statement to the Times. 

Authors are required to disclose to Hachette whether AI was used in the creation of their work. Ballard has denied using AI tools to write the book, claiming an editor was responsible for the portions that appear to be AI-generated.

“My name is ruined for something I didn’t even personally do,” Ballard wrote in an email to the New York Times.

The book cover for Shy Girl by Mia Ballard.

Hachette UK

The cancellation of Shy Girl by Hachette marks the first time a major publisher has publicly pulled an existing title due to suspicions of AI-generated prose.

For the past few months, readers online have raised concerns about the book’s apparent use of AI.

A video from YouTuber frankie’s shelf provides a lengthy analysis of the novel, pointing out linguistic patterns that are characteristic of AI writing. The video also lists words in Shy Girl that are repeated with unusual frequency (“edge” is used 84 times and “sharp” 159 times), often in ways that are abstract and nonsensical.
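Frequency checks like that are straightforward to reproduce on any text. A minimal sketch — the sample sentence and word list here are invented, not drawn from the novel:

```python
import re
from collections import Counter

def word_counts(text, words_of_interest):
    """Case-insensitive counts of selected words in a text."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    return {word: counts[word] for word in words_of_interest}

sample = "The sharp edge of the knife felt sharp against the sharp morning air."
print(word_counts(sample, ["sharp", "edge"]))  # {'sharp': 3, 'edge': 1}
```

Raw counts only become meaningful when compared against the word's expected frequency in ordinary prose, which is essentially what the video's analysis does at length.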

In January, Max Spero, founder and chief executive of Pangram, ran the text of Shy Girl through his AI detection program. He claimed that the novel was 78% AI-generated.

The rise of AI has caught the publishing industry off guard. Though AI writing has already appeared in many self-published books, traditional publishers like Hachette remain far more wary of the technology.

Representatives for Hachette didn’t immediately respond to a request for comment.

Tech

‘Wood is wood’: WSU research finds Yankees’ viral ‘torpedo’ bats perform the same as traditional bats

A research team determined that the torpedo bat, left, and traditional bat perform equally well in hitting power with only a slight difference in the location of the bat’s sweet spot (WSU Photo / Voiland College of Engineering and Architecture)

The New York Yankees just cruised through Seattle and won two out of three games against the Mariners. On the other side of Washington state, the Bronx Bombers’ “torpedo bats” were being scientifically scrutinized.

In what Washington State University is calling the first-ever laboratory experiments on the new baseball bat design, researchers found that torpedo bats and traditional bats basically perform the same.

It didn’t look that way last season, when the Yankees hit a franchise-record nine home runs in a game against the Milwaukee Brewers and drew viral attention to the bats that they were swinging.

The torpedo bat design relies on a slightly different shape in which wood is removed from the barrel tip and added to the bat’s sweet spot, so that the diameter tapers down, a little like a bowling pin. But the hype appears overblown.

“Wood is wood,” Lloyd Smith, a professor in WSU’s School of Mechanical and Materials Engineering and director of the university’s Sports Science Laboratory, told WSU Insider. “When it comes to baseball, there’s not a lot you can do with wood. If your goal is to keep the game steady and consistent and not have a lot of change, wood bats are good.”

Smith is part of a research team that includes Alan Nathan from University of Illinois and Daniel Russell from Penn State University. They’ll present their findings at the upcoming International Sports Engineering Association conference, June 1–4 in Pullman, Wash.

According to WSU Insider, the researchers created two maple bats that were duplicates of a standard Major League Baseball bat. Two additional maple bats were made with a torpedo-shaped barrel that gave them the same swing weight as the standard bat.

They measured how much energy the bat returns to the ball by firing baseballs from an air cannon at a stationary bat and using light gates and cameras to measure the speed of the incoming and rebounding ball.
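Tests of this kind typically boil down to a collision efficiency — the ratio of rebound speed to inbound speed with the bat held stationary — which then feeds a standard bat-physics estimate of batted-ball speed. A sketch with made-up numbers, not data from the WSU study:

```python
def collision_efficiency(v_incoming, v_rebound):
    """q: rebound speed divided by incoming speed, measured with a stationary bat."""
    return v_rebound / v_incoming

def batted_ball_speed(q, v_pitch, v_swing):
    """Common bat-physics estimate: BBS = q * v_pitch + (1 + q) * v_swing."""
    return q * v_pitch + (1.0 + q) * v_swing

# Hypothetical cannon measurement: ball fired in at 100 mph, rebounds at 20 mph.
q = collision_efficiency(100.0, 20.0)
print(f"{batted_ball_speed(q, v_pitch=90.0, v_swing=70.0):.1f} mph")
```

Two bats with the same q measured along the barrel will hit the ball equally hard — which is why matching q values at matching swing weights supports the "wood is wood" conclusion.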

The team found nearly identical performance for the torpedo and standard bats, except that the sweet spot on the torpedo bat was a half inch farther from the bat’s tip than on the standard bat.

“It was actually pretty phenomenal how close they were,” said Smith.

While some Yankees players said last year that any little tweak could provide an advantage, the team’s captain wasn’t convinced.

Aaron Judge hit an American League-record 62 homers in 2022, 58 in an MVP season in 2024 and 53 as repeat MVP in 2025. He had three homers using a traditional bat in that much-talked-about rout of the Brewers.

“The past couple of seasons kind of speak for itself,” Judge told ESPN last May. “Why try to change something?”

Tech

Xteink’s X3 E-Reader Snaps Onto Your iPhone and Ready for Any Spare Moment

Xteink X3 E-Reader
Slapping the Xteink X3 onto an iPhone takes only a few seconds, thanks in part to built-in magnets that align with MagSafe and let it snap easily into place. You get a thin black or white slab that sits flush against the phone’s back without adding much bulk. Anyone who reaches for their phone dozens of times a day will appreciate having a book right at their fingertips, all from the same move.



At only 58 grams, this device is easy to forget about until you need it, and then, as if by magic, it appears. Its overall size is a modest 100mm long and 60mm wide, so it goes unnoticed in a pocket until reading time beckons. Commuters and individuals waiting in lines can just pull out their phones and start reading a chapter without having to dig through their bags for another device.


The 3.7-inch E Ink screen displays clear text at over 250 pixels per inch. Font size can be changed with a few simple adjustments, so even the smallest pages are comfortable to read. In adequate lighting the characters simply pop, without the eye strain of a phone screen. There are also real buttons on the sides and bottom for turning pages and navigating menus, so one-handed operation feels perfectly natural, whether you’re on a train or tucked in bed. A built-in gyroscope detects a slight shake and flips the page forward, letting you keep a solid grip while you read.

Xteink X3 E-Reader
Navigation is straightforward, with a grid of icons instead of swipes or touches. Choose a book or change the settings with a few presses; it stays dependable even when your fingers are clumsy. The approach minimizes distractions and lets you concentrate on the words themselves. You can load books onto the device using either the 16GB microSD card included in the box or a companion app on your phone. Transferring EPUB files is quick and easy over Wi-Fi or by inserting the card into your computer, and storage can be expanded up to 512GB, letting you carry thousands of titles without running out of space.

Xteink X3 E-Reader
The battery will last 10 to 14 days on a single charge, even if you read for an hour or two every day, and charging is simple: just attach the special cable with magnetic pogo pins and it clips right into place. Okay, there is one little flaw: there is no built-in front light (yet), but you can get a separate clip-on light for only $9.99 if you plan on reading late into the evening. If you need more connectivity, Bluetooth and NFC are available, as well as Wi-Fi for the occasional update or transfer. It’s available now on the official Xteink website for $79.

Tech

'The Bonfire of the Vanities' series headed to Apple TV

Maybe the third time is the charm. Writer/producer David E. Kelley is adapting Tom Wolfe’s “The Bonfire of the Vanities” novel into a series for Apple TV, with “The Batman” director Matt Reeves.

Apple TV is dramatizing “The Bonfire of the Vanities” — image credit: Apple

David E. Kelley is still best known for “The Practice” and “Ally McBeal,” but he’s also the writer of Apple TV’s “Presumed Innocent” and “Margo’s Got Money Troubles.” Now, according to Deadline, he’s dramatizing Tom Wolfe’s famous 1987 novel of greed and Wall Street money.

Not to spoil the story, but as excellent as it is, Wolfe’s novel feels as if it fades out rather than having a big finish, which has made it difficult to adapt successfully. It was filmed in 1990, with Tom Hanks starring and Brian De Palma directing from a screenplay by Michael Cristofer, but that was a flop.

Tech

The leadership dilemma: Governing the “Agentic AI” workforce

Artificial intelligence is no longer a back-office enabler or a set of isolated automation tools. It is becoming a core component of how organizations operate, compete, and deliver value.

As businesses accelerate their adoption of increasingly autonomous systems, often referred to as agentic AI, a significant leadership dilemma is emerging. The workforce is no longer exclusively human.

Tech

How CIOs can create a strong foundation for an AI-enabled workplace

As with any new technology, AI adoption among businesses falls along a spectrum: some are ahead of the curve, while others remain much further behind as they continue to resist and delay.

But what’s clear is that adoption is happening with or without a formal strategy: nearly two-thirds (65%) of employees now say they intentionally use AI for work.

Tech

OpenAI purchases online tech talk show TBPN

OpenAI said the purchase will be part of its strategy to further the conversation on the changes brought about by artificial intelligence.

OpenAI, in what is being described as an unusual move, is set to purchase the Technology Business Programming Network (TBPN), a daily live tech talk show hosted by Jordi Hays and John Coogan that often features high-profile tech leaders and entrepreneurs.

OpenAI’s chief executive officer of applications Fidji Simo said: “As I’ve been thinking about the future of how we communicate at OpenAI, one thing that’s become clear is that the standard communications playbook just doesn’t apply to us. We’re not a typical company.

“We’re driving a really big technological shift. And with our mission to ensure artificial general intelligence benefits all of humanity comes a responsibility to help create a space for a real, constructive conversation about the changes AI creates, with builders and people using the technology at the centre.”

While the full details of the deal have yet to be disclosed, OpenAI said the TBPN team will maintain editorial independence and make decisions on their guests and programming. According to the Wall Street Journal, TBPN stated that it generated $5m in advertising revenue last year and is on track to exceed $30m in revenue in 2026.

However, an OpenAI spokesperson told Bloomberg that the platform is not aiming to make TBPN a money-making enterprise. 

In a statement, Hays expressed excitement at the venture, while making note of the importance of a strong partnership where both parties work as a team to communicate change and innovation in the AI and tech spaces. 

He said: “While we’ve been critical of the industry at times, after getting to know Sam and the OpenAI team, what stood out most was their openness to feedback and commitment to getting this right. Moving from commentary to real impact in how this technology is distributed and understood globally is incredibly important to us.”

Earlier this week OpenAI closed a larger than expected funding round in which it raised $122bn, exceeding the projected figure of $110bn. Part of that funding is expected to be put towards the scale and growth of the platform’s AI technologies and research, in line with current global demands. 
