
‘AI companies shouldn’t leave American ratepayers to pick up the tab’: Anthropic says it will cover electricity price increases caused by its data centers


  • President Trump has urged Big Tech to absorb rising energy bills, not consumers
  • Anthropic says it will pay upgrade costs and procure net-new power generation where possible
  • CEO Dario Amodei promises to be a “responsible neighbor” and work with communities and governments

Anthropic has boldly offered to cover electricity price increases caused by its own data centers, with CEO Dario Amodei stating that the cost of powering its AI models should fall on the company, not consumers.

This, of course, follows a recent request by President Trump for Big Tech companies to “pay their own way”, and not let US citizens face higher electricity bills as a result of their operations.



Anthropic raises $30bn led by GIC, Coatue at $380bn valuation


The AI giant announced a revenue run rate of $14bn – up from $0 three years ago, and $9bn two months ago.

Anthropic announced a $30bn Series G funding round led by Coatue Management and Singapore’s GIC at a post-money valuation of $380bn. The raise more than doubles its valuation from the last round it announced in September.

The Series G was co-led by D E Shaw Ventures; Dragoneer; Founders Fund; Iconiq; and the Abu Dhabi-based MGX, an AI investment firm with close ties to the United Arab Emirates political class. MGX recently took 15pc ownership of TikTok’s US venture and funded xAI in a $20bn Series E round shortly before it was acquired by SpaceX.

Several other big-name investors took part in the round, including Accel; Bessemer Venture Partners; BlackRock; Fidelity Management and Research Company; Growth Equity at Goldman Sachs Alternatives; Lightspeed Venture Partners; Qatar Investment Authority; and Sequoia Capital, among others.


While Anthropic did not provide details on individual commitments from investors, Bloomberg reported last month that lead investors GIC and Coatue will commit around $1.5bn, and Iconiq around $1bn.

The Series G also includes portions of the previously announced $15bn in commitments from Microsoft and Nvidia, Anthropic said.

The company wants to use the funds to fuel frontier research in AI, develop products and expand its infrastructure. Claude is currently the only frontier AI model available on the three largest cloud platforms, Amazon Web Services, Google Cloud and Microsoft Azure.

The round comes as the AI giant hit a revenue run rate of $14bn – up from $0 three years ago, and around $9bn two months ago.


Anthropic has established itself as the go-to choice for enterprise coding in a market littered with choices ranging from OpenAI to Microsoft’s Copilot.

According to the company, the number of customers spending more than $100,000 annually on Claude has grown sevenfold in the past year, while more than 500 customers spend more than $1m on an annualised basis.

Claude Code’s revenue run rate alone has grown to more than $2.5bn, more than doubling since the beginning of the year. In January, the company launched ‘Cowork’, a product built entirely with Claude Code and designed as a simpler version of it. The launch garnered generally positive reactions from users.

Meanwhile, Claude for Healthcare was launched shortly after rival OpenAI released its iteration of a privacy-preserving tool for medical queries.


“Whether it is entrepreneurs, start-ups, or the world’s largest enterprises, the message from our customers is the same – Claude is increasingly becoming critical to how businesses work,” said Krishna Rao, Anthropic’s chief financial officer.

“This fundraising reflects the incredible demand we are seeing from these customers, and we will use this investment to continue building the enterprise-grade products and models they have come to depend on.”

The company also launched its latest AI model, Opus 4.6, earlier this month. According to the company, the model can manage broad categories of real-world work, generating documents, spreadsheets and presentations.

Artificial Analysis testing places Opus 4.6 at the top of its chart, outperforming OpenAI’s GPT-5.2 while costing less to run.


OpenAI is reportedly in talks to raise as much as $100bn at a $750bn valuation. The company is currently valued at $500bn. Its revenue run rate surpassed $20bn by the end of 2025.

Both OpenAI and Anthropic have said they are interested in filing for initial public offerings (IPOs).



OpenAI deploys Cerebras chips for ‘near-instant’ code generation in first major move beyond Nvidia


OpenAI on Thursday launched GPT-5.3-Codex-Spark, a stripped-down coding model engineered for near-instantaneous response times, marking the company’s first significant inference partnership outside its traditional Nvidia-dominated infrastructure. The model runs on hardware from Cerebras Systems, a Sunnyvale-based chipmaker whose wafer-scale processors specialize in low-latency AI workloads.

The partnership arrives at a pivotal moment for OpenAI. The company finds itself navigating a frayed relationship with longtime chip supplier Nvidia, mounting criticism over its decision to introduce advertisements into ChatGPT, a newly announced Pentagon contract, and internal organizational upheaval that has seen a safety-focused team disbanded and at least one researcher resign in protest.

“GPUs remain foundational across our training and inference pipelines and deliver the most cost effective tokens for broad usage,” an OpenAI spokesperson told VentureBeat. “Cerebras complements that foundation by excelling at workflows that demand extremely low latency, tightening the end-to-end loop so use cases such as real-time coding in Codex feel more responsive as you iterate.”

The careful framing — emphasizing that GPUs “remain foundational” while positioning Cerebras as a “complement” — underscores the delicate balance OpenAI must strike as it diversifies its chip suppliers without alienating Nvidia, the dominant force in AI accelerators.


Speed gains come with capability tradeoffs that OpenAI says developers will accept

Codex-Spark represents OpenAI’s first model purpose-built for real-time coding collaboration. The company claims the model delivers more than 1000 tokens per second when served on ultra-low latency hardware, though it declined to provide specific latency metrics such as time-to-first-token figures.

“We aren’t able to share specific latency numbers, however Codex-Spark is optimized to feel near-instant — delivering more than 1000 tokens per second while remaining highly capable for real-world coding tasks,” the OpenAI spokesperson said.
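For context on what that throughput means in practice, here is a quick back-of-envelope sketch in Python. Only the 1,000 tokens-per-second figure comes from OpenAI; the time-to-first-token value is a placeholder assumption on our part, since the company declined to share one.

```python
# Rough wall-clock estimate for a streamed completion.
# The 1,000 tok/s throughput is OpenAI's claim; the 200 ms
# time-to-first-token is a PLACEHOLDER assumption, since OpenAI
# has not published TTFT figures for Codex-Spark.

def response_seconds(output_tokens: int,
                     tokens_per_second: float = 1000.0,
                     ttft_seconds: float = 0.200) -> float:
    """Seconds to stream a completion of the given length."""
    return ttft_seconds + output_tokens / tokens_per_second

# A 400-token patch streams in ~0.6 s at 1,000 tok/s, versus
# ~4.2 s on a model serving 100 tok/s under the same assumptions.
print(f"{response_seconds(400):.2f} s at 1000 tok/s")
print(f"{response_seconds(400, tokens_per_second=100):.2f} s at 100 tok/s")
```

At that pace, the wait is dominated by connection and queueing overhead rather than generation itself, which is exactly the overhead OpenAI says it has been attacking.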

The speed gains come with acknowledged capability tradeoffs. On SWE-Bench Pro and Terminal-Bench 2.0 — two industry benchmarks that evaluate AI systems’ ability to perform complex software engineering tasks autonomously — Codex-Spark underperforms the full GPT-5.3-Codex model. OpenAI positions this as an acceptable exchange: developers get responses fast enough to maintain creative flow, even if the underlying model cannot tackle the most sophisticated multi-step programming challenges.

The model launches with a 128,000-token context window and supports text only — no image or multimodal inputs. OpenAI has made it available as a research preview to ChatGPT Pro subscribers through the Codex app, command-line interface, and Visual Studio Code extension. A small group of enterprise partners will receive API access to evaluate integration possibilities.


“We are making Codex-Spark available in the API for a small set of design partners to understand how developers want to integrate Codex-Spark into their products,” the spokesperson explained. “We’ll expand access over the coming weeks as we continue tuning our integration under real workloads.”

Cerebras hardware eliminates bottlenecks that plague traditional GPU clusters

The technical architecture behind Codex-Spark tells a story about inference economics that increasingly matters as AI companies scale consumer-facing products. Cerebras’s Wafer Scale Engine 3 — a single chip roughly the size of a dinner plate containing 4 trillion transistors — eliminates much of the communication overhead that occurs when AI workloads spread across clusters of smaller processors.

For training massive models, that distributed approach remains necessary and Nvidia’s GPUs excel at it. But for inference — the process of generating responses to user queries — Cerebras argues its architecture can deliver results with dramatically lower latency. Sean Lie, Cerebras’s CTO and co-founder, framed the partnership as an opportunity to reshape how developers interact with AI systems.

“What excites us most about GPT-5.3-Codex-Spark is partnering with OpenAI and the developer community to discover what fast inference makes possible — new interaction patterns, new use cases, and a fundamentally different model experience,” Lie said in a statement. “This preview is just the beginning.”


OpenAI’s infrastructure team did not limit its optimization work to the Cerebras hardware. The company announced latency improvements across its entire inference stack that benefit all Codex models regardless of underlying hardware, including persistent WebSocket connections and optimizations within the Responses API. The results: 80 percent reduction in overhead per client-server round trip, 30 percent reduction in per-token overhead, and 50 percent reduction in time-to-first-token.
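The round-trip saving from persistent connections is easy to picture: the client pays TCP and TLS setup once, then reuses the same socket for every turn. A minimal sketch with Python's websockets library follows; the endpoint URL and message schema are hypothetical stand-ins, not OpenAI's actual API.

```python
# Illustration of reusing one WebSocket across many turns.
# HYPOTHETICAL endpoint and message format, not OpenAI's real API;
# the point is that connection setup is paid once, not per request.
import asyncio
import json

import websockets  # pip install websockets

async def interactive_session(prompts: list[str]) -> None:
    uri = "wss://example.invalid/v1/codex"  # placeholder endpoint
    async with websockets.connect(uri) as ws:  # handshake happens once
        for prompt in prompts:
            await ws.send(json.dumps({"input": prompt}))
            reply = await ws.recv()  # later turns skip setup entirely
            print(json.loads(reply)["output"])

asyncio.run(interactive_session(["fix the off-by-one", "now add a test"]))
```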

A $100 billion Nvidia megadeal has quietly fallen apart behind the scenes

The Cerebras partnership takes on additional significance given the increasingly complicated relationship between OpenAI and Nvidia. Last fall, when OpenAI announced its Stargate infrastructure initiative, Nvidia publicly committed to investing $100 billion to support OpenAI as it built out AI infrastructure. The announcement appeared to cement a strategic alliance between the world’s most valuable AI company and its dominant chip supplier.

Five months later, that megadeal has effectively stalled, according to multiple reports. Nvidia CEO Jensen Huang has publicly denied tensions, telling reporters in late January that there is “no drama” and that Nvidia remains committed to participating in OpenAI’s current funding round. But the relationship has cooled considerably, with friction stemming from multiple sources.

OpenAI has aggressively pursued partnerships with alternative chip suppliers, including the Cerebras deal and separate agreements with AMD and Broadcom. From Nvidia’s perspective, OpenAI may be using its influence to commoditize the very hardware that made its AI breakthroughs possible. From OpenAI’s perspective, reducing dependence on a single supplier represents prudent business strategy.


“We will continue working with the ecosystem on evaluating the most price-performant chips across all use cases on an ongoing basis,” OpenAI’s spokesperson told VentureBeat. “GPUs remain our priority for cost-sensitive and throughput-first use cases across research and inference.” The statement reads as a careful effort to avoid antagonizing Nvidia while preserving flexibility — and reflects a broader reality that training frontier AI models still requires exactly the kind of massive parallel processing that Nvidia GPUs provide.

Disbanded safety teams and researcher departures raise questions about OpenAI’s priorities

The Codex-Spark launch comes as OpenAI navigates a series of internal challenges that have intensified scrutiny of the company’s direction and values. Earlier this week, reports emerged that OpenAI disbanded its mission alignment team, a group established in September 2024 to promote the company’s stated goal of ensuring artificial general intelligence benefits humanity. The team’s seven members have been reassigned to other roles, with leader Joshua Achiam given a new title as OpenAI’s “chief futurist.”

OpenAI previously disbanded another safety-focused group, the superalignment team, in 2024. That team had concentrated on long-term existential risks from AI. The pattern of dissolving safety-oriented teams has drawn criticism from researchers who argue that OpenAI’s commercial pressures are overwhelming its original non-profit mission.

The company also faces fallout from its decision to introduce advertisements into ChatGPT. Researcher Zoë Hitzig resigned this week over what she described as the “slippery slope” of ad-supported AI, warning in a New York Times essay that ChatGPT’s archive of intimate user conversations creates unprecedented opportunities for manipulation. Anthropic seized on the controversy with a Super Bowl advertising campaign featuring the tagline: “Ads are coming to AI. But not to Claude.”


Separately, the company agreed to provide ChatGPT to the Pentagon through Genai.mil, a new Department of Defense program that requires OpenAI to permit “all lawful uses” without company-imposed restrictions — terms that Anthropic reportedly rejected. And reports emerged that Ryan Beiermeister, OpenAI’s vice president of product policy who had expressed concerns about a planned explicit content feature, was terminated in January following a discrimination allegation she denies.

OpenAI envisions AI coding assistants that juggle quick edits and complex autonomous tasks

Despite the surrounding turbulence, OpenAI’s technical roadmap for Codex suggests ambitious plans. The company envisions a coding assistant that seamlessly blends rapid-fire interactive editing with longer-running autonomous tasks — an AI that handles quick fixes while simultaneously orchestrating multiple agents working on more complex problems in the background.

“Over time, the modes will blend — Codex can keep you in a tight interactive loop while delegating longer-running work to sub-agents in the background, or fanning out tasks to many models in parallel when you want breadth and speed, so you don’t have to choose a single mode up front,” the OpenAI spokesperson told VentureBeat.

This vision would require not just faster inference but sophisticated task decomposition and coordination across models of varying sizes and capabilities. Codex-Spark establishes the low-latency foundation for the interactive portion of that experience; future releases will need to deliver the autonomous reasoning and multi-agent coordination that would make the full vision possible.
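The fan-out half of that vision maps onto ordinary concurrent programming. A toy sketch of the pattern in Python follows; call_model and the model names are hypothetical stand-ins for whatever API the eventual product exposes.

```python
# Toy fan-out: send one task to several models in parallel and collect
# every result. `call_model` is a HYPOTHETICAL placeholder for a real
# API call; the concurrency pattern, not the endpoint, is the point.
import asyncio

async def call_model(model: str, task: str) -> str:
    await asyncio.sleep(0.1)  # stand-in for network and inference time
    return f"[{model}] draft for: {task}"

async def fan_out(task: str, models: list[str]) -> list[str]:
    # A fast model keeps the interactive loop tight while slower, more
    # capable models chew on the same task in the background.
    return await asyncio.gather(*(call_model(m, task) for m in models))

results = asyncio.run(
    fan_out("refactor the auth module", ["spark", "codex", "codex-max"]))
print("\n".join(results))
```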


For now, Codex-Spark operates under separate rate limits from other OpenAI models, reflecting constrained Cerebras infrastructure capacity during the research preview. “Because it runs on specialized low-latency hardware, usage is governed by a separate rate limit that may adjust based on demand during the research preview,” the spokesperson noted. The limits are designed to be “generous,” with OpenAI monitoring usage patterns as it determines how to scale.

The real test is whether faster responses translate into better software

The Codex-Spark announcement arrives amid intense competition for AI-powered developer tools. Anthropic’s Claude Cowork product triggered a selloff in traditional software stocks last week as investors considered whether AI assistants might displace conventional enterprise applications. Microsoft, Google, and Amazon continue investing heavily in AI coding capabilities integrated with their respective cloud platforms.

OpenAI’s Codex app has demonstrated rapid adoption since launching ten days ago, with more than one million downloads and weekly active users growing 60 percent week-over-week. More than 325,000 developers now actively use Codex across free and paid tiers. But the fundamental question facing OpenAI — and the broader AI industry — is whether speed improvements like those promised by Codex-Spark translate into meaningful productivity gains or merely create more pleasant experiences without changing outcomes.

Early evidence from AI coding tools suggests that faster responses encourage more iterative experimentation. Whether that experimentation produces better software remains contested among researchers and practitioners alike. What seems clear is that OpenAI views inference latency as a competitive frontier worth substantial investment, even as that investment takes it beyond its traditional Nvidia partnership into untested territory with alternative chip suppliers.


The Cerebras deal is a calculated bet that specialized hardware can unlock use cases that general-purpose GPUs cannot cost-effectively serve. For a company simultaneously battling competitors, managing strained supplier relationships, and weathering internal dissent over its commercial direction, it is also a reminder that in the AI race, standing still is not an option. OpenAI built its reputation by moving fast and breaking conventions. Now it must prove it can move even faster — without breaking itself.


5 Unique Ways To Upgrade Your Home Tool Box






We may receive a commission on purchases made from links.

Tool boxes are designed to help you store, organize, and protect your tools. They’re made of multiple types of material, from flexible fabric tool bags to hard plastic or metal, and they come in all shapes and sizes, from handheld boxes to large immovable cabinets. A tool box or tool chest can hold tools, as the name suggests, or store craft and other hobby or DIY gadgets and gizmos.

When you’re just starting out with at-home projects, you can probably get away with a tool bag or a pre-made combination kit. But as you progress, your tool collection may begin to outpace what a bag or little box can handle. When that happens, it might be time to invest in something larger. 


The thing is, tool chests are designed to be universally useful. They are intended to appeal to the widest possible user base, which means keeping a relatively simple layout. The end result is a tool storage solution that’s pretty useful for almost anyone, but might not be perfectly useful for your purposes. To truly create the perfect tool box for your needs, you might need to make some modifications. These are five unique upgrades to make your tool box even better.


Magnetic strips

Keeping tools in the drawers of your tool box is a simple way to organize them, but it can still make it tough to find a specific tool when you need it. If there are certain tools that you use more regularly, you can keep them isolated on the outside of your tool chest using a magnetic strip, provided they are made of certain types of metal.

Whether a tool sticks comes down to its metal: ferromagnetic metals like iron, nickel, and cobalt have unpaired electrons whose magnetic moments align in the presence of a magnet, causing attraction. Since most tool steels are iron-based, they will stick securely to magnets (titanium, by contrast, won’t). It’s like an everyday magic trick, courtesy of physics.

To create more storage on the outside of the tool box, you can use a magnetic strip of the kind intended for holding kitchen knives to hold your favorite wrenches, screwdrivers, and other metal tools. And on a large tool box, you can place multiple magnetic strips to hold more tools or heavier ones. You can find magnetic strips of varying designs to fit your aesthetic style. There are thin rectangular strips and broader squarish magnetic boards. They come in plain metal, wood, or resin, all hiding rare earth magnets inside or behind. There are even strips with a bar underneath that can hold hooks for carrying even more tools.


Wheel and spool holders

Adding an array of pins or hooks to your tool box exterior gives you another way to store a wide range of tools and workshop accessories. There are a couple of ways to do this.

The low-tech option is to break out the drill. Provided you don’t mind doing a little damage, you can drill small holes into the sides of your tool box and use those holes to hold peg board pins or hooks. Alternatively, you can find magnetic hooks which adhere to your tool box’s metal surface.


Once your pins, pegs, or hooks are in place, you can use them to hang wrenches or hold spools of wire, circular saw blades, and any other lightweight objects with a hole in the middle. Putting a few holes in the sides of your tool box should be safe; just make sure to remove objects from the drawers before drilling, and don’t create so many holes that you compromise the strength of the box.


3D printed wrench organizer

A tool chest goes a long way toward storing and organizing your tool collection. You can keep different sorts of tools in different drawers, but unless you incorporate some sort of firm organizational system, you’ll be rifling through a tangled mess of assorted debris before long.

Fortunately, 3D printers have opened up a world of possibilities for customizing everything in our lives, and that includes the workshop. If you’re having trouble keeping your tools organized, you can 3D print (or pay someone else to 3D print) tool holders customized for your tool box and tool collection.

These 3D-printed wrench organizers are made by HayslettFabAnd3d on Etsy and custom printed on demand. You can choose any color you like and get the precise number you need for your collection. Each holder securely grips a single wrench and they’re modular so you can string them together in a custom layout to fit your tool box. And when you’re done with wrenches, there are other 3D printed organizers for other types of tools, like this plier organizer.


Triangle corner trays

When you have many tools floating around in your tool box, it can be easy to lose pieces of hardware, drill bits, and other small items in the middle of a job. A 3D printed triangle corner tray can nestle itself in the corner of your tool box drawer and give you a little cubby for holding screws, nuts, fuses, and anything else you want to keep separate and safe while you’re working.

These triangular corner cubbies are made by 4FPrintWorksLab on Etsy. They have a lip around the two outer edges to hang onto the edge of your tool box drawer. They come in many different colors, including glow in the dark, and two different sizes: 8 x 11 inches or 5 x 7 inches.


Each tray is printed on demand and you can even customize them to have a different lip width, depth, and angle, or add your name or company logo. You can mix and match trays on all four corners of each drawer so you never run out of extra storage space for your smallest bits and bobs. The well is deep enough to keep small objects contained while leaving plenty of space in the drawer to store your other tools.


Magnetic funnel holder

For some jobs, it’s nearly impossible to avoid getting greasy hands in the workshop — that’s why it’s called getting your hands dirty. Likewise, it’s difficult not to let that grease and grime spread to the rest of the workshop, getting all over work tables, tool boxes, and tools. A funnel holder and drip collector helps minimize the amount of oil and grease floating around your workspace. When you’re done pouring oil, coolant, or any other common workshop liquid you can put the funnel in the holder instead of putting it on or in your tool box where it will get everything else messy.

A magnetic holder takes advantage of your tool box’s metal construction to adhere to the side. It holds a funnel and allows oil to drip into a plastic bottle. This funnel holder from Generic has a graduated rest that cups your funnel and accommodates funnels of various sizes. The bottom is designed with threads like a bottle cap, so you can attach any standard 20-ounce bottle to collect drippings. There’s also a cap holder, so you can store the bottle cap for safekeeping until you’re ready to screw it back on and dispose of your collected drippings.


Aqara U400 review: UWB home key will be hard to beat on other smart locks


The newly launched Aqara U400 is the first — and so far only — smart lock with Ultra Wideband Home Key support, and after using it for the past month, I don’t think I can go back.

Aqara U400 review: The first UWB smart lock

I firmly believe that smart locks are one of the best smart home devices you can add to your home. Not only do they offer unparalleled convenience of unlocking or locking your home from anywhere, but they also add peace of mind.
If I get in the car to leave my house, I don’t have to fret about whether the door was locked. I can ask Siri if the front door is, in fact, properly secured.


Russia Fully Blocks WhatsApp


An anonymous reader shares a report: U.S. messenger app WhatsApp, owned by Meta Platforms, has been completely blocked in Russia for failing to comply with local law, the Kremlin said on Thursday, suggesting Russians turn to a state-backed “national messenger” instead. “Due to Meta’s unwillingness to comply with Russian law, such a decision was indeed taken and implemented,” Kremlin spokesman Dmitry Peskov told reporters, proposing that Russians switch to MAX, Russia’s state-owned messenger.


Paying twice for silicon never caught on, and Intel has finally pulled the plug on it everywhere



  • Intel shut down On Demand after customers rejected paying for dormant silicon
  • Archiving SDSi signals the end of hardware features sold as add-ons
  • Cloud buyers refused fees for capabilities already fused into processors

Intel has moved to shut down its pay-as-you-go hardware upgrade effort with little public explanation or formal announcement.

The Software Defined Silicon initiative, later called Intel On Demand, has effectively been abandoned after years of limited visibility and sparse maintenance.


Today’s NYT Mini Crossword Answers for Feb. 13


Looking for the most recent Mini Crossword answer? Click here for today’s Mini Crossword hints, as well as our daily answers and hints for The New York Times Wordle, Strands, Connections and Connections: Sports Edition puzzles.


Need some help with today’s Mini Crossword? I found 1-Down tricky — nice one, puzzle creators. Read on for all the answers. And if you could use some hints and guidance for daily solving, check out our Mini Crossword tips.

If you’re looking for today’s Wordle, Connections, Connections: Sports Edition and Strands answers, you can visit CNET’s NYT puzzle hints page.


Read more: Tips and Tricks for Solving The New York Times Mini Crossword

Let’s get to those Mini Crossword clues and answers.

The completed NYT Mini Crossword puzzle for Feb. 13, 2026. NYT/Screenshot by CNET

Mini across clues and answers

1A clue: Like flowers that never wilt
Answer: FAKE

5A clue: Italian city that’s hosting part of the 2026 Winter Olympics
Answer: MILAN

6A clue: “Huh, that’s strange …”
Answer: WEIRD


7A clue: Regions
Answer: AREAS

8A clue: Crossword clue, e.g.
Answer: HINT

Mini down clues and answers

1D clue: Guy in a kitchen
Answer: FIERI

2D clue: Visitor from another world
Answer: ALIEN


3D clue: Gold measurement
Answer: KARAT

4D clue: “Don’t tell me how it ___!”
Answer: ENDS

5D clue: Kissing sound
Answer: MWAH


For $1M, you can pay Bryan Johnson (or BryanAI?) to teach you how to live longer


It’s the middle of February, and the air is dry. There are fine lines emerging on my forehead, maybe because I don’t moisturize enough, but maybe as a harbinger of something greater: Each day I grow closer to my own death. Soon, I will be 30. I will never be younger than I am right now.

Fintech-founder-turned-longevity-guru Bryan Johnson has an offer that has caught my attention. For the low, low price of $1 million per year, I can pay him to show me the ropes of the “exact protocol” he’s followed for the last five years. He calls the program “Immortals.”

Yes, a guy who has received botox injections in his genitals will teach me how to supposedly reverse the process of aging. Why shouldn’t I believe that Bryan Johnson has uncovered the secrets to living longer than any other human? No, he has not yet proven his capacity to outlive all other humans. He was born in 1977, a year in which many current humans were born.

But why would I doubt the judgment of a guy who fortified his constitution with blood from his teenage son? When have the tech elite ever misled us? Should I also question when Elon Musk says that saving for retirement is irrelevant because AGI will create an economic abundance so great that no one will ever know poverty again?


According to Johnson’s post on X, this exclusive service — only three spots are available! — will include “a dedicated concierge team, BryanAI 24/7, extensive testing, millions of biological data points, continuous tracking, best skin and hair protocols, and access to the best therapies on market.”

I can talk to the AI version of a guy who livestreams himself doing shrooms for “science”? Sign me up!

Except I can’t. Because I do not have $1 million. Those like me will have to settle for buying Johnson’s overpriced olive oil in our pursuit of immortality (it’s peppery and smooth!).


My emergent forehead wrinkle intensifies with the knowledge that Johnson will likely have an easy time filling up those three $1 million spots. Among the ultrawealthy, longevity has become an increasingly hot pursuit.


John Hering, who has given Musk billions of dollars in backing, co-founded Biograph, which describes itself as a preventative health and diagnostics clinic. Its most premium membership costs $15,000 a year (next to Johnson’s offering, it almost seems like a good deal … almost). A similar startup, Fountain Life, has raised $108 million to fund its “ultimate longevity program,” which charges a $21,500 annual fee. Sure, Johnson’s program is a lot more expensive, but remember, there are only three spots! And if you’re still not ready to shell out seven figures, well, you can access a vague “supported tier” for $60,000.

There’s nothing wrong with wanting to live a longer, healthier life, but longevity influencers like Johnson take this to an extreme that’s unattainable and (common sense would say) totally unnecessary for the average person.

In his defense, Johnson isn’t trying to proselytize us all into taking 100 pills a day and subsisting largely on boiled vegetables. But he’s also not depriving us of the chance to make him richer in exchange for his “secrets.”


Waymo is asking DoorDash drivers to shut the doors of its self-driving cars


It still feels like a technological marvel: Waymo’s autonomous cars are now transporting passengers across six cities. Alas, this driverless future comes with its own set of problems. These vehicles can be rendered inert if a passenger accidentally leaves the door open.

According to a Reddit post, one DoorDash driver discovered this issue when an odd request appeared in their queue. Instead of making a delivery, the driver was offered $6.25 to drive less than one mile to a Waymo vehicle and close its door. After “verified completion,” they would get an extra $5.

“You actually ‘door’ dashed,” one commenter noted.

Image Credits: Reddit

It seems too ironic to be real. Waymo vehicles represent technological breakthroughs that once seemed unfathomable. The Alphabet-owned company just raised $16 billion to take its driverless cars international!

But Waymo and DoorDash confirmed to TechCrunch that this Reddit post is legitimate. This is, in fact, a real problem.


“Waymo is currently running a pilot program in Atlanta to enhance its AV fleet efficiency. In the rare event a vehicle door is left ajar, preventing the car from departing, nearby Dashers are notified, allowing Waymo to get its vehicles back on the road quickly,” Waymo and DoorDash said in a joint response. (The door-closing partnership, which began earlier this year, is just one facet of Waymo and DoorDash’s broader relationship. In October, the companies launched an autonomous delivery service in Phoenix, where Waymo vehicles deliver food and groceries to DoorDash customers.)

If a Waymo door is left open, it’s worth it to the company to pay someone to close it — the car cannot complete any more rides if it’s left immobile. Not to mention, an unmoving car could block the flow of traffic.


This isn’t the first time Waymo has enlisted help with its door troubles. In Los Angeles, Waymo works with Honk, an app that’s like Uber for towing services. According to reports, Honk users in L.A. have been offered up to $24 to close a Waymo door — more than double what Atlanta DoorDash drivers receive.

The company noted that Waymo’s future vehicles will have automated door closures. But for now, gig workers are Waymo’s best bet.


MiniMax’s new open M2.5 and M2.5 Lightning near state-of-the-art while costing 1/20th of Claude Opus 4.6


Chinese AI startup MiniMax, headquartered in Shanghai, has sent shockwaves through the AI industry today with the release of its new M2.5 language model in two variants, which promise to make high-end artificial intelligence so cheap you might stop worrying about the bill entirely.

It’s also said to be “open source,” though the weights (settings) and code haven’t been posted yet, nor have the exact license type or terms. But that’s almost beside the point given how cheaply MiniMax is serving it through its API and those of partners.

For the last few years, using the world’s most powerful AI was like hiring an expensive consultant—it was brilliant, but you watched the clock (and the token count) constantly. M2.5 changes that math, dropping the cost of the frontier by as much as 95%.

By delivering performance that rivals the top-tier models from Google and Anthropic at a fraction of the cost, particularly in agentic tool use for enterprise tasks, including creating Microsoft Word, Excel and PowerPoint files, MiniMax is betting that the future isn’t just about how smart a model is, but how often you can afford to use it.


Indeed, to this end, MiniMax says it worked “with senior professionals in fields such as finance, law, and social sciences” to ensure the model could perform real work up to their specifications and standards.

This release matters because it signals a shift from AI as a “chatbot” to AI as a “worker”. When intelligence becomes “too cheap to meter,” developers stop building simple Q&A tools and start building “agents”—software that can spend hours autonomously coding, researching, and organizing complex projects without breaking the bank.

In fact, MiniMax has already deployed this model into its own operations. Currently, 30% of all tasks at MiniMax HQ are completed by M2.5, and a staggering 80% of their newly committed code is generated by M2.5!

As the MiniMax team writes in their release blog post, “we believe that M2.5 provides virtually limitless possibilities for the development and operation of agents in the economy.”


Technology: sparse power and the CISPO breakthrough

The secret to M2.5’s efficiency lies in its Mixture of Experts (MoE) architecture. Rather than running all of its 230 billion parameters for every single word it generates, the model only “activates” 10 billion. This allows it to maintain the reasoning depth of a massive model while moving with the agility of a much smaller one.
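For readers unfamiliar with the pattern, here is a generic top-k MoE routing layer in PyTorch. This is the textbook mechanism, not MiniMax's actual implementation, and the dimensions are illustrative only.

```python
# Textbook top-k Mixture-of-Experts routing (illustrative sizes;
# NOT MiniMax's implementation). Each token activates only k experts,
# so a small fraction of total parameters runs per token.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, d_model=512, d_ff=2048, n_experts=16, k=2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(),
                          nn.Linear(d_ff, d_model))
            for _ in range(n_experts))

    def forward(self, x):                        # x: (tokens, d_model)
        scores, idx = self.router(x).topk(self.k, dim=-1)
        weights = F.softmax(scores, dim=-1)      # renormalize over the k
        out = torch.zeros_like(x)
        for slot in range(self.k):               # run selected experts only
            for e in idx[:, slot].unique().tolist():
                mask = idx[:, slot] == e
                out[mask] += weights[mask, slot, None] * self.experts[e](x[mask])
        return out

y = TopKMoE()(torch.randn(8, 512))  # 8 tokens through the sparse layer
```

Scaled to M2.5's proportions, this same mechanism is what lets a 230-billion-parameter model pay roughly the per-token compute bill of a 10-billion-parameter one.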

To train this complex system, MiniMax developed a proprietary Reinforcement Learning (RL) framework called Forge. MiniMax engineer Olive Song stated on the ThursdAI podcast on YouTube that this technique was instrumental to scaling the performance even while using the relatively small number of parameters, and that the model was trained over a period of two months.

Forge is designed to help the model learn from “real-world environments” — essentially letting the AI practice coding and using tools in thousands of simulated workspaces.

“What we realized is that there’s a lot of potential with a small model like this if we train reinforcement learning on it with a large amount of environments and agents,” Song said. “But it’s not a very easy thing to do,” Song added, noting that this was where the team spent “a lot of time.”


To keep the model stable during this intense training, they used a mathematical approach called CISPO (Clipping Importance Sampling Policy Optimization) and shared the formula on their blog.

This formula ensures the model doesn’t over-correct during training, allowing it to develop what MiniMax calls an “Architect Mindset”. Instead of jumping straight into writing code, M2.5 has learned to proactively plan the structure, features, and interface of a project first.
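The M2.5 post is paraphrased here rather than reproduced, but the CISPO objective MiniMax published with its earlier MiniMax-M1 model clips and detaches the importance-sampling weight itself, instead of clipping the policy update the way PPO does, so every token keeps a gradient signal. A minimal sketch of that earlier published form, with illustrative clipping bounds; the M2.5 variant may differ in details:

```python
# CISPO-style loss, following the form MiniMax published with its
# earlier M1 model (the M2.5 variant may differ). The importance
# ratio is clipped and DETACHED (stop-gradient), so no token's update
# is zeroed out; gradients always flow through the new log-probs.
import torch

def cispo_loss(logp_new, logp_old, advantages,
               eps_low=0.9, eps_high=1.1):             # illustrative bounds
    ratio = torch.exp(logp_new - logp_old)             # importance weight
    clipped = ratio.clamp(eps_low, eps_high).detach()  # sg(clip(r))
    return -(clipped * advantages * logp_new).mean()

logp_new = (torch.randn(32) - 1.0).requires_grad_()
loss = cispo_loss(logp_new, torch.randn(32) - 1.0, torch.randn(32))
loss.backward()  # every token contributes a gradient via logp_new
```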

Benchmarks: at or near the state of the art

The results of this architecture are reflected in the latest industry leaderboards. M2.5 hasn’t just improved; it has vaulted into the top tier of coding models, approaching Anthropic’s latest model, Claude Opus 4.6, released just a week ago, and showing that Chinese companies now trail far better resourced (in terms of GPUs) U.S. labs by mere days.

MiniMax M2.5 line plot comparing model performance over time on the SWE benchmark. Credit: MiniMax


Here are some of the new MiniMax M2.5 benchmark highlights:

  • SWE-Bench Verified: 80.2% — Approaching Claude Opus 4.6

  • BrowseComp: 76.3% — Industry-leading search & tool use.

  • Multi-SWE-Bench: 51.3% — SOTA in multi-language coding

  • BFCL (Tool Calling): 76.8% — High-precision agentic workflows.

MiniMax M2.5 benchmark comparison bar charts. Credit: MiniMax

On the ThursdAI podcast, host Alex Volkov pointed out that MiniMax M2.5 operates extremely quickly and uses fewer tokens to complete tasks, putting costs on the order of $0.15 per task compared to $3.00 for Claude Opus 4.6.

Breaking the cost barrier

MiniMax is offering two versions of the model through its API, both focused on high-volume production use:

  • M2.5-Lightning: Optimized for speed, delivering 100 tokens per second. It costs $0.30 per 1M input tokens and $2.40 per 1M output tokens.

  • Standard M2.5: Optimized for cost, running at 50 tokens per second. It costs half as much as the Lightning version ($0.15 per 1M input tokens / $1.20 per 1M output tokens).

In plain language: MiniMax claims you can run four “agents” (AI workers) continuously for an entire year for roughly $10,000.
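That figure is easy to sanity-check. The sketch below assumes each agent streams output nonstop at the standard tier's 50 tokens per second and reads four input tokens for every output token; the 4:1 ratio is our assumption, not MiniMax's.

```python
# Sanity check on "four agents, one year, ~$10,000" for standard M2.5.
# ASSUMPTIONS (ours, not MiniMax's): nonstop generation at 50 tok/s
# and a 4:1 input-to-output token ratio.
SECONDS_PER_YEAR = 365 * 24 * 3600
OUT_TOK = 50 * SECONDS_PER_YEAR            # ~1.58B output tokens per agent
IN_TOK = 4 * OUT_TOK                       # assumed input volume
IN_PRICE, OUT_PRICE = 0.15, 1.20           # $ per 1M tokens (M2.5 rates)

per_agent = (IN_TOK * IN_PRICE + OUT_TOK * OUT_PRICE) / 1e6
print(f"one agent:   ${per_agent:,.0f}/yr")      # ~$2,838
print(f"four agents: ${4 * per_agent:,.0f}/yr")  # ~$11,353
```

Under those assumptions the total lands near $11,000, in the same ballpark as the company's claim.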

For enterprise users, this pricing is roughly 1/10th to 1/20th the cost of competing proprietary models like GPT-5.2 or Claude Opus 4.6.

| Model | Input ($/1M tokens) | Output ($/1M tokens) | Total Cost | Source |
| --- | --- | --- | --- | --- |
| Qwen 3 Turbo | $0.05 | $0.20 | $0.25 | Alibaba Cloud |
| deepseek-chat (V3.2-Exp) | $0.28 | $0.42 | $0.70 | DeepSeek |
| deepseek-reasoner (V3.2-Exp) | $0.28 | $0.42 | $0.70 | DeepSeek |
| Grok 4.1 Fast (reasoning) | $0.20 | $0.50 | $0.70 | xAI |
| Grok 4.1 Fast (non-reasoning) | $0.20 | $0.50 | $0.70 | xAI |
| MiniMax M2.5 | $0.15 | $1.20 | $1.35 | MiniMax |
| MiniMax M2.5-Lightning | $0.30 | $2.40 | $2.70 | MiniMax |
| Gemini 3 Flash Preview | $0.50 | $3.00 | $3.50 | Google |
| Kimi-k2.5 | $0.60 | $3.00 | $3.60 | Moonshot |
| GLM-5 | $1.00 | $3.20 | $4.20 | Z.ai |
| ERNIE 5.0 | $0.85 | $3.40 | $4.25 | Baidu |
| Claude Haiku 4.5 | $1.00 | $5.00 | $6.00 | Anthropic |
| Qwen3-Max (2026-01-23) | $1.20 | $6.00 | $7.20 | Alibaba Cloud |
| Gemini 3 Pro (≤200K) | $2.00 | $12.00 | $14.00 | Google |
| GPT-5.2 | $1.75 | $14.00 | $15.75 | OpenAI |
| Claude Sonnet 4.5 | $3.00 | $15.00 | $18.00 | Anthropic |
| Gemini 3 Pro (>200K) | $4.00 | $18.00 | $22.00 | Google |
| Claude Opus 4.6 | $5.00 | $25.00 | $30.00 | Anthropic |
| GPT-5.2 Pro | $21.00 | $168.00 | $189.00 | OpenAI |

Strategic implications for enterprises and leaders

For technical leaders, M2.5 represents more than just a cheaper API. It changes the operational playbook for enterprises right now.

The pressure to “optimize” prompts to save money is gone. You can now deploy high-context, high-reasoning models for routine tasks that were previously cost-prohibitive.


The 37% speed improvement in end-to-end task completion means the “agentic” pipelines valued by AI orchestrators — where models talk to other models — finally move fast enough for real-time user applications.

In addition, M2.5’s high scores in financial modeling (74.4% on MEWC) suggest it can handle the “tacit knowledge” of specialized industries like law and finance with minimal oversight.

Because M2.5 is positioned as an open-source model, organizations can potentially run intensive, automated code audits at a scale that was previously impossible without massive human intervention, all while maintaining better control over data privacy. But until the licensing terms and weights are actually posted, “open source” remains a label rather than a guarantee.

MiniMax M2.5 is a signal that the frontier of AI is no longer just about who can build the biggest brain, but who can make that brain the most useful—and affordable—worker in the room.
