Donut Lab, a Finnish company, recently released some impressive test data demonstrating how its solid-state battery handles extreme temperatures that conventional lithium-ion cells would struggle with. The results come from a series of measurements conducted by the VTT Technical Research Centre of Finland, a respected, government-backed research organization. The goal was to establish just how much heat the Donut Lab cell, rated at 3.6 volts and 26 amp-hours, could take before failing, so the researchers put it through its paces.
The engineers first established a baseline at a comfortable 20°C, at which point the cell delivered 24.9 amp-hours of capacity during a standard 1C discharge (that is, at a current that empties the cell in roughly one hour). They then ramped the temperature up to 80°C and discharged the cell again at the same rate. To their surprise, the cell mustered 27.5 amp-hours, around 110.4% of the room-temperature figure, and it did even better at 100°C, albeit at a gentler 0.5C rate to keep things stable. There, the cell produced 27.6 amp-hours, or 107.1% of the corresponding room-temperature baseline.
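As a sanity check, the headline percentage follows directly from the quoted amp-hour figures; note that the 100°C run was measured at the slower 0.5C rate, so its percentage is taken against a 0.5C room-temperature baseline that the test summary doesn't quote.

```python
# Capacity retention relative to the 20 °C, 1C baseline, using the
# amp-hour figures quoted above.
baseline_20c_ah = 24.9   # 20 °C, 1C discharge
capacity_80c_ah = 27.5   # 80 °C, 1C discharge

retention_80c = capacity_80c_ah / baseline_20c_ah * 100
print(f"80 °C retention: {retention_80c:.1f}%")  # -> 80 °C retention: 110.4%

# The 100 °C run (27.6 Ah) used a gentler 0.5C rate, so its 107.1% figure
# is relative to a 0.5C room-temperature baseline, not the 1C one above.
```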
The clear upshot of all of this is that as the temperature increased, the internal resistance of the solid electrolyte decreased, letting the ions move about more freely and significantly increasing the battery’s useful energy output. After both high-temperature discharges, the researchers returned the cell to ambient temperature and recharged it normally; encouragingly, no permanent electrical damage was found. The less welcome news: at 100°C, the pouch cell lost its vacuum seal, most likely owing to gas buildup or material weakening, although it continued to function properly.
Conventional lithium-ion batteries use liquid electrolytes that become unstable when temperatures rise above 60-70°C, causing a variety of problems such as vaporisation, swelling, rapid degradation, and, in the worst-case scenario, the dreaded thermal runaway, in which the cell becomes so hot that it is essentially a fire waiting to happen. Solid-state designs replace the liquid with a non-flammable solid, which, for obvious reasons, eliminates the source of those problems. Donut Lab’s version appears to capitalise on this advantage, turning what would be a major issue for other batteries into a small but significant benefit.
This test adds to the evidence Donut Lab is using to support the claim that its battery technology is ready for the real world. The company unveiled the Donut Battery as the first all-solid-state battery pack ready for cars at CES 2026, and it wasn’t just a pipe dream: they claimed some pretty impressive numbers, including 400 watt-hours per kilogram energy density, full charges in 5 minutes, and a mind-boggling 100,000 cycles with almost no fade. The cells are already powering various Verge Motorcycles models that will hit the road this quarter, with the boast that they hold more than 99% of capacity from -30°C to 100°C.
Graphics calculators are one of those strange technological cul-de-sacs. They rely on outdated technology and should not be nearly as expensive as they are, but market effects somehow keep prices well over $100 to this day. Given that fact, you might like to check out an open-source solution instead.
NumOS comes to us from [El-EnderJ]. It’s a scientific and graphic calculator system built to run on the ESP32-S3 with an ILI9341 screen. It’s intended to rival calculators like the Casio fx-991EX ClassWiz and the TI-84 Plus CE in terms of functionality. To that end, it has a full computer algebra system and a custom math engine to do all the heavy lifting a graphic calculator is expected to do, like symbolic differentiation and integration. It also has a Natural V.P.A.M-like display—if you’re unfamiliar with Casio’s terminology, it basically means things like fractions and integrals are rendered as you’d write them on paper rather than in uglier simplified symbology.
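NumOS ships its own computer algebra engine, but the core idea behind one of its jobs, symbolic differentiation, is easy to sketch. Here's a toy differentiator for polynomials stored as exponent-to-coefficient dicts; this is a simplified stand-in for illustration, not NumOS's actual representation.

```python
# Toy symbolic differentiation of a polynomial represented as a dict
# mapping exponent -> coefficient. A real CAS handles full expression
# trees (products, trig, integrals), but the power rule at the heart of
# it looks like this.

def differentiate(poly):
    """Apply d/dx (c * x**e) = c*e * x**(e-1) term by term."""
    return {e - 1: c * e for e, c in poly.items() if e != 0}

# 3x^2 + 5x + 7  ->  6x + 5
print(differentiate({2: 3, 1: 5, 0: 7}))  # -> {1: 6, 0: 5}
```

Constant terms simply vanish, which is why the comprehension skips exponent zero.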
If you’ve ever wanted a graphics calculator that you could really tinker with down to the nuts and bolts, this is probably a great place to start. With that said, don’t expect your local school or university to let you take this thing into an exam hall. They’re pretty strict on that kind of thing these days.
Elon Musk took the stage over the weekend to announce a new partnership between Tesla, SpaceX and xAI to build a $25 billion chip-making factory in Austin, Texas, called Terafab.
Acknowledging Samsung, TSMC and other chipmakers, Musk said the Terafab project needs to get off the ground because existing semiconductor partners aren’t making chips fast enough. If built, Terafab would be the largest semiconductor manufacturing plant in the world.
Bringing more semiconductor facilities to the US isn’t new. The CHIPS Act of 2022 saw a dramatic rise in announcements for further investments in such facilities on American soil. Nvidia began manufacturing chips in its Arizona factory last year, and the motivation wasn’t only due to tariffs.
The CHIPS Act has paid out for several chip-making projects, including a massive award of roughly $8 billion to Intel, though the introduction of additional semiconductor fabs in the US has been slow. Terafab would be a significant addition to onshore chip-making infrastructure in the US, and by far the most expensive. There’s no word yet on whether Terafab would receive funding under the CHIPS Act.
Powering all of your electronic devices are chips that serve as their brains. They vary from the likes of Apple’s M series to Nvidia’s Vera Rubin platform and beyond. The Terafab project aims to ease the current shortage of chips powering devices that will bring AI, robotics and more to life. (The AI boom has also brought about a massive RAM shortage, with no relief expected until 2028, affecting prices on electronics like smartphones and laptops.)
Musk gave details on two of the chips he plans to build, the AI5 and AI6, which would power the likes of existing earthly ventures, such as Tesla’s Optimus robots and self-driving cars. Also detailed was the D3 chip, which he said would be made for orbital satellites in space. This type of ambition isn’t just coming from Musk, either. Nvidia announced similar goals to build orbital AI data centers during its GTC conference last week.
The project aims to have every piece of the manufacturing process take place at the facility to churn out chips by the billions, targeting the 2-nanometer process. Musk believes the project will help propel us into becoming a “galactic civilization.”
It sounds like an ambitious project, though not everyone is buying it. Musk has historically announced wild projects, like the “million-mile” battery that never quite got off the ground. Whether the Terafab facility actually becomes a reality is a waiting game for now.
After 400 years underwater, a Swedish Navy vessel in the Baltic Sea off Stockholm has become visible. Sunk on purpose back in the 17th century, the ship resurfaced after the waters reached their lowest level in the past 100 years. Marine archaeologist Jim Hansson, from Stockholm’s Vrak Museum of Wrecks, explained the conditions that led to its reemergence to AFP, as reported by CBS: “There has been a really long period of high pressure here around our area in the Nordics. So the water from the Baltic has been pushed out to the North Sea and the Atlantic.”
The unidentified ship was sunk around 1640 so it could be used to form the foundation of a bridge connecting to the Kastellholmen island. There are currently five sunken ships in the area. The Swedish Navy is looking into reusing their oak hulls rather than using new wood. Researchers are currently attempting to identify these sunken ships as part of a research program called “The Lost Navy.”
How did the shipwreck survive underwater for 400 years?
It might seem baffling that a wooden ship could survive underwater for 400 years, but the Baltic Sea had the right conditions to keep the Swedish Navy vessel largely intact. According to Hansson, that part of the sea doesn’t have shipworms, meaning the sunken ship wasn’t eaten. Shipworms, which can grow up to two meters long, are sea creatures that use bacteria in their gut to break down wood and consume it. They’re so proficient at it that they can sink a boat.
Rather than rotting the wood away as you might expect, the water actually keeps the boat intact — especially at deep levels — creating a time capsule of sorts. In fact, most boats can remain undisturbed deep underwater indefinitely, but bringing a shipwreck to the surface can cause the wood to break down, since it was only being held together by the water between its cells.
This has been a big issue with preserving the Vasa, another Swedish vessel, which sank back in 1628. Its wood is being ruined by iron and metal pieces that have started to acidify now that it’s out of the water. Scientists discovered that alkaline earth hydroxides can neutralize the acid, stopping the chemical reaction that destroys the wood, but preserving recovered shipwrecks remains a challenge. This means the low water levels in the Baltic Sea could pose a problem for the newly uncovered warship.
Ali Farhadi speaks at the Tech Alliance State of Technology annual luncheon in Seattle, May 2024. (GeekWire File Photo / Todd Bishop)
Microsoft is hiring a group of top AI researchers from the Seattle-based Allen Institute for AI and the University of Washington, including former Ai2 CEO Ali Farhadi, GeekWire has learned.
Farhadi, Hanna Hajishirzi, and Ranjay Krishna are expected to join Mustafa Suleyman’s organization at Microsoft while retaining their faculty positions at the UW’s Allen School of Computer Science and Engineering. Also joining is Sophie Lebrecht, the former Ai2 chief operating officer.
The move follows Farhadi’s departure from Ai2, announced March 12. Farhadi had led the Seattle-based nonprofit research institute for more than two and a half years.
Suleyman, the CEO of Microsoft AI, narrowed his focus last week from overseeing consumer-oriented Copilot products to leading Microsoft’s Superintelligence team.
The hires come as Microsoft works to reduce its dependence on OpenAI for frontier AI models, competing against Amazon, Google, and others. Suleyman’s Superintelligence team, formed in November, is part of a broader push to further develop advanced foundation models.
Microsoft has already hired researchers from Google DeepMind, Meta, OpenAI, and Anthropic, and the addition of the Ai2 and UW group would bring deep expertise in open-source model development and training efficiency — where Ai2 has punched well above its weight.
Backing from NSF and Nvidia
The exits represent a notable collective loss for Ai2, which was founded in 2014 by the late Microsoft co-founder Paul Allen. Hajishirzi is a co-lead of the OLMo open-source language model project and a co-principal investigator on a $152 million, five-year initiative backed by the National Science Foundation and Nvidia to build open AI models for scientific research.
She represented Ai2 in multiple sessions last week at Nvidia’s GTC conference in San Jose, including a panel on the future of open models alongside Nvidia CEO Jensen Huang.
Krishna has led the development of Ai2’s Molmo multimodal models, among other projects. He also presented at the Nvidia conference last week on behalf of the institute.
Farhadi, a computer vision specialist, co-founded Ai2 spinout Xnor.ai, which Apple acquired in 2020 for an estimated $200 million. He led machine learning efforts at Apple before returning to lead Ai2 as CEO in July 2023.
Ai2 interim CEO Peter Clark acknowledged the departures in a statement, saying the institute remains committed to its mission and its partnerships with the NSF and Nvidia, including the OMAI initiative.
“These initiatives are backed by a broad, experienced team with the expertise and continuity needed to carry this work forward,” Clark said. “We’re confident in our ability to build on the strong foundation already in place and to expand the impact of these efforts in the months ahead.”
He added that the institute is “grateful for the leadership and contributions of Ali, Hanna, Ranjay, and others” in advancing Ai2’s work, and wished them well.
In a post about the hires on LinkedIn, Suleyman praised Farhadi for leading Ai2 in releasing more than 100 models in a single year and called Hajishirzi “one of the most cited researchers of natural language processing in the world, full stop.”
Suleyman described Lebrecht as having scaled Ai2’s operations and open-source efforts, noting that she also co-founded the AI company Neon Labs and holds a PhD in cognitive neuroscience from Brown University.
He said they will help pursue Microsoft’s mission of “humanist superintelligence: safer, controllable, more capable AI systems in service of humanity and our toughest problems.”
When news broke earlier this month that Farhadi was leaving, Ai2 board chair Bill Hilf told GeekWire that Farhadi wanted to pursue research at the extreme frontier of AI, where for-profit companies are spending billions on training the most advanced models.
At the time, Hilf said the board had to weigh whether a nonprofit’s philanthropic dollars were best spent trying to keep pace, acknowledging that competing against tech giants at the largest scale of model development had become extraordinarily difficult.
Changes in Ai2’s funding realities
Behind the scenes, the changing nature of Ai2’s funding environment has also been playing a role in the exits, according to people with knowledge of the situation.
Ai2 was originally funded by Allen’s Vulcan Inc. and later by his estate. Its primary backer is now the Fund for Science and Technology, a $3.1 billion foundation created under Allen’s instructions and publicly launched in August, with a focus on applying science and technology to problems in areas aligned with Allen’s passions, including AI, bioscience, and the environment.
FFST, led by CEO Dr. Lynda Stuart, a physician-scientist who previously led the Institute for Protein Design at the UW, favors applied uses of AI over the costly work of frontier models.
In addition, while all Ai2 programs for 2026 are fully funded, these people said, FFST is moving from providing Ai2 with overall annual funding to a proposal-based process, with future support expected to favor real-world applications of AI over building open-source foundation models. The shift helps explain the departures of researchers focused on model development.
A spokesperson for the Fund for Science and Technology said Ai2’s “work and mission remain the same” and that FFST’s broader program strategies are still under development.
Farhadi, Hajishirzi, and Krishna are researchers whose work centers on building and advancing AI models. Microsoft’s Superintelligence team, backed by billions in compute investment, offers the resources and mandate to pursue that work at a much larger scale.
Supercapacitors rely mostly on double-layer capacitance to bridge the divide between chemical batteries and traditional capacitors, but they come with a number of weaknesses. Paramount among these are their relatively low voltage of around 2.7 V, above which their electrolyte begins to decompose, and their relatively high rates of self-discharge. A new design using lignin-derived porous carbon electrodes and a fluorinated diluent, demonstrated by [Shichao Zhang] et al. and published in Carbon Research, seems to address these issues.
Most notable are the relatively high voltage of 4 V, an energy density of 77 Wh/kg, and a self-discharge rate that’s much slower than that of conventional supercapacitors. The demonstrated cells are also superior in terms of recharge cycles, retaining 90% of capacity after 10,000 cycles, which together with the much higher energy density should prove quite useful.
This feat is accomplished by using lignin as the base for the carbon electrodes, creating a highly porous surface, along with a new electrolyte formulation consisting of a lithium salt (LiBF4) dissolved in sulfolane with TTE as a non-solvating diluent. The idea of using lignin-derived carbon for this purpose was previously pitched by [Jia Liu] et al. in 2022 and [Zhihao Ding] in 2025, with this seemingly one of its first major applications.
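The value of that extra voltage headroom is easy to quantify, since the energy stored in a capacitor scales with the square of the voltage: moving from the usual 2.7 V limit to 4 V at a given capacitance yields well over twice the energy.

```python
# Capacitor energy: E = 1/2 * C * V^2, so energy scales with voltage squared.

def capacitor_energy_joules(capacitance_farads, voltage_volts):
    return 0.5 * capacitance_farads * voltage_volts**2

# Ratio of stored energy at 4 V vs. the usual 2.7 V limit, same capacitance.
gain = capacitor_energy_joules(1.0, 4.0) / capacitor_energy_joules(1.0, 2.7)
print(f"Energy gain from higher voltage: {gain:.2f}x")  # -> ...: 2.19x
```

The rest of the gap up to 77 Wh/kg comes from the electrode and electrolyte chemistry itself, not the voltage alone.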
Although the path towards commercialization from a lab-assembled prototype is a rough one, we may be seeing some of these improvements come to supercapacitors near you sooner rather than later.
ChatGPT can now help you find and book a rental car, thanks to a new Turo integration that launched Monday. The Turo app for ChatGPT lets you tell the AI chatbot what you’re looking for — from pickup location and dates to number of seats, EV preference and more — in natural language, and get back real Turo rental listings, advice and links directly to the Turo website to book.
Turo is a peer-to-peer marketplace that lets private owners rent out their personal vehicles to travelers and locals. I like to think of it as the “Airbnb of cars” or drive-it-yourself Uber. Unlike traditional rental agencies that own and maintain large fleets of cars, Turo merely provides the tech, insurance and support to connect vehicle hosts with guest drivers. Turo has proven to be a popular alternative to the airport rental counter thanks to its more varied selection of unique car models (including luxury or high-tech vehicles), competitive pricing, and the convenience of having certain vehicles delivered.
And now, it has a ChatGPT integration. You can access the new Turo app within ChatGPT by first searching for and then adding Turo to the list of available agents in ChatGPT’s Apps menu. Once connected, adding “@Turo” to any chat with the AI bot will trigger the new functionality.
I fired up ChatGPT after setting it up for myself and typed the prompt: “@Turo, I’m going to be landing in Atlanta on Friday and would like to rent an EV for the weekend with enough range to make it to Augusta. What’s available?”
I used natural language to find available cars on the Turo service using ChatGPT.
Screenshot by Antuan Goodwin/CNET
The app replied with listings of vehicles currently available to rent near the airport with enough range to make the approximately 300-mile round trip with as little as one quick top-up. Each listing featured photos, price estimates (including tax and fees), star ratings and the number of times each car had been rented.
Clicking on a listing took me straight to the Turo website (or app on mobile) to complete the booking. I also tried asking for “an EV near my home that seats six people” and “a hybrid that would be useful for moving,” and found the results to be adequate.
In addition to the listings, ChatGPT and Turo provided details (like range) about each car as well as pros and cons, such as Tesla’s plentiful Superchargers between Atlanta and Augusta or the Kia EV6’s very fast charging speed. Overall, the new functionality looks like a fairly convenient and decent starting point for someone who knows nothing about cars to choose a rental.
Turo’s app for ChatGPT is the latest example of AI’s rapid advance into every aspect of the automotive industry, from natural language AI assistants in the dashboard to AI-powered inspection of rental car returns.
(Disclosure: Ziff Davis, CNET’s parent company, filed a lawsuit against OpenAI last year, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
Speaking to Digital Foundry about Project Amethyst, Mark Cerny confirmed that Sony is developing AI-powered frame generation for PlayStation, but noted that no new releases are planned this year. He also declined to reveal whether the feature will be rolled out to the PS5 and PS5 Pro or be exclusive…
OpenAI is rolling out a new feature called ‘Library’ for ChatGPT, which allows you to store your personal files or images on OpenAI’s cloud storage.
OpenAI says ChatGPT Library requires a Plus, Pro, or Business plan. It’s rolling out to customers across the world, except in the European Economic Area, Switzerland, and the United Kingdom.
I refreshed the ChatGPT web app, and the Library automatically showed up in the sidebar.
ChatGPT Library
To my surprise, it’s actually not empty, as ChatGPT has already saved some of the files I uploaded in the last two weeks.
Turns out it’s expected behaviour. By default, ChatGPT saves your uploaded files in a dedicated, secure location, and they can be referenced in future chats.
“ChatGPT automatically saves uploaded and created files, including files uploaded in chats (for example: documents, spreadsheets, presentations, and images) in a dedicated, secure location so they can be easily accessed later,” OpenAI noted in a document.
On the other hand, if you use ChatGPT to generate AI images, they will continue to appear in the Images tab.
The Library section only has files you uploaded, and you can upload files by following these steps:
Open the composer menu (the attachment/add button).
Select Add from library.
Choose the file you want to use.
Files are saved to your account until you delete them manually; deleting a chat that contains a file does not delete that file from your Library.
To delete a file:
Select the file in the “Library” tab
Click Delete, or click the trash icon next to the file.
OpenAI will remove files from its servers within 30 days of deletion.
It’s unclear why it takes nearly a month to purge files, but it is likely due to legal reasons.
Slowly but surely, the ante is starting to be upped in The Pitt season 2. Last week, a woman detained by two male ICE agents was brought in to see Dr. Robbie (Noah Wyle) and Cassie (Fiona Dourif).
Blood covered her arms as her handcuffs cut into her wrists, with the distressed woman clearly too scared to say anything. When Cassie asks her if there’s anybody she’d like to call, one ICE agent immediately responds that there are “no calls allowed.”
Clearly, we’re only just beginning to unravel this. So when does The Pitt season 2 episode 12 arrive on HBO Max?
What time can I watch The Pitt season 2 episode 12 on HBO Max?
For US viewers, The Pitt season 2 episode 12 will drop on Thursday, March 26 at 6pm PT / 9pm ET. As always, it’ll come out on HBO Max, too.
Internationally, you’re looking out for these timings:
US – 6pm PT / 9pm ET
Canada – 6pm PT / 9pm ET
India – Friday, March 27 at 7:30am IST
Singapore – Friday, March 27 at 10am SGT
Australia – Friday, March 27 at 1pm AEDT
New Zealand – Friday, March 27 at 3pm NZDT
You’ll notice that I’ve not included the UK here. That’s because March 26 is the same day that HBO Max UK actually launches.
Currently, we know that The Pitt season 1 will be available, but there’s been no confirmation about season 2.
When do new episodes of The Pitt season 2 come out?
Whitaker just out there doing his best. (Image credit: HBO)
New episodes of The Pitt will make landfall every Thursday in the US and on Fridays everywhere else. Here are the all-important dates you need to know about: