Tech

Trump’s critical mineral reserve is an admission that the future is electric

The Trump administration announced this week that the U.S. government would work to build an $11.7 billion stockpile of critical minerals. That’s the headline; the subtext is more intriguing.

The stockpile initiative, branded as Project Vault, is the administration’s latest attempt to secure supplies of critical minerals for U.S. manufacturers, one that President Donald Trump says will ensure “American businesses and workers are never harmed by any shortage.”

It follows recent investments from the administration into rare earth producers, including equity stakes in miners USA Rare Earth and MP Materials.

Individually, they can be interpreted as an administration taking steps to calm a part of the market that has been roiled by its own trade wars. Collectively, they’re an admission, however tacit or subconscious, that the future relies on electric technologies, including electric vehicles and wind turbines.

In his announcement, Trump alluded to the world’s dependence on China for a slew of critical minerals. Over the last year-plus, China has wielded its dominance to counter tariff threats from the Trump administration, restricting exports of rare earth metals and lithium battery materials to the United States. Eventually, China relented, but the episode made clear who held the trump card.

The spat also revealed just how integral critical minerals are to modern economies. Trump likened the new stockpile to the Strategic Petroleum Reserve maintained by the Department of Energy, which was set up in the wake of the oil embargo in the early 1970s.

“Just as we have long had a strategic petroleum reserve and a stockpile of critical minerals for national defense, we’re now creating this reserve for American industry, so we don’t have any problems,” Trump said.

The oil reserve isn’t going away, but it’s not as important as it once was, diminished by productive U.S. oil wells and the increasing share of the energy market taken by solar, wind, and batteries. (Solar and wind continue to dominate new electric generating capacity, and more than 25% of new cars sold worldwide are now EVs or plug-in hybrids.)

It’s not clear exactly which minerals will go into the reserve; Bloomberg reported that gallium and cobalt will be included. It’s possible that others like copper and nickel might get thrown in as well, though they weren’t mentioned.

The size of the investment is notable. The U.S. Export-Import Bank is providing a $10 billion loan, with private capital rounding out the rest. That’s about half the value of the oil currently in the Strategic Petroleum Reserve going toward a market that’s 1% the size of the global oil market, as Bloomberg columnist David Fickling pointed out.

The mismatch is either typical Trump bluster or an acknowledgement that the market for critical minerals is going to expand significantly in the coming years. 

It may well be both, with the latter the more likely.

Much of the growth in critical minerals comes from clean energy technologies and EVs; without them, the market won’t be as constrained as experts have predicted. Demand for electronics, including data centers, will play a role, but more than half of global growth in rare earth element demand is expected to come from electric vehicles and wind turbines, according to the IEA. For cobalt and lithium, the figures are even more skewed, with EVs representing the vast majority of growth through 2050.

The Trump administration hasn’t been quiet about its disdain for clean energy technologies, preferring to bet on the status quo with fossil fuels. But the rest of the world is continuing to move toward solar, wind, and batteries, driving up demand for critical minerals. The new stockpile shows that markets can be hard to ignore.



What AI Integration Really Looks Like in Today’s Classrooms

In late 2022, when generative AI tools landed in students’ hands, classrooms changed almost overnight. Essays written by algorithms appeared in inboxes. Lesson plans suddenly felt outdated. And across the country, schools asked the same questions: How do we respond — and what comes next?

Some educators saw AI as a threat that enables cheating and undermines traditional teaching. Others viewed it as a transformative tool. But a growing number are charting a different path entirely: teaching students to work with AI critically and creatively while building essential literacy skills.

The challenge isn’t just about introducing new technology. It’s about reimagining what learning looks like when AI is part of the equation. How do teachers create assignments that can’t be easily outsourced to generative AI tools? How do elementary students learn to question AI-generated content? And how do educators integrate these tools without losing sight of creativity, critical thinking and human connection?

Recently, EdSurge spoke with three educators who are tackling these questions head-on: Liz Voci, an instructional technology specialist at an elementary school; Pam Amendola, a high school English teacher who reimagined her Macbeth unit to include AI; and Brandie Wright, who teaches fifth and sixth graders at a microschool, integrating AI into lessons on sustainability.

EdSurge: What led you to integrate AI into your teaching?

Amendola: When OpenAI’s ChatGPT burst onto the scene in November 2022, it upended education and sent teachers scrambling. Students were suddenly using AI to complete assignments. Many students thought, Why should I complete a worksheet when AI can do it for me? Why write a discussion post when AI can do it better and faster?

Our education system was built for an industrial age, but we now live in a technological age where tasks are completed rapidly. Learning at school should be a time of discovery, but education remains stuck in the past. We are in a place I call the in between. In this place, I discovered a need to educate students on AI literacy alongside the themes and structure of the English language.

I reimagined my Macbeth unit to integrate AI with traditional learning methods. I taught Acts I-III using time-tested approaches, building knowledge of both Shakespeare and AI into each act. In Act IV, students recreated their assigned scenes using generative AI to make an original movie. For Act V, they used block-based programming to have robots act out their scenes. My assessment had nothing to do with writing an essay, so it was uncheatable. I encouraged students to work with me to design the lesson so I could determine the best way to help them learn.

Voci: Last fall, I was in a literacy meeting with administrators and teachers where I heard concerns that the new science-of-reading materials were not engaging students’ interest. While the books were highly accessible, students had no interest in reading them. This was my lightbulb moment. If we could use AI tools to develop engaging and accessible reading passages for students, we could also teach foundational AI literacy skills at the same time.

This is where The Perfect Book Project was born. Students work with teachers to develop their own perfect reading book that is both engaging and accessible, learning literary skills alongside how to work with and evaluate AI-generated content. In its pilot, I worked directly with teachers as students conceptualized, drafted, edited and published their books. I spent hundreds of hours creating prompts with content guardrails, accessibility constraints and research-based foundational literacy knowledge to guide students and teachers through the process.

Wright: I’m doing quite a bit of work around the U.N. Sustainable Development Goals, teaching our explorers the impact of our actions not just on ourselves but also on others and the environment. I wanted to see them use AI to deepen their knowledge and serve as a thought partner as they develop solutions to issues like climate change.

I created a lesson called “Investigating Energy Efficiency and Sustainability in Our Spaces.” The explorers went on a sustainability scavenger hunt around campus to find examples of energy-efficient items and sustainable practices. They used AI tools to analyze their findings, interpret and evaluate AI responses for accuracy and potential bias, and reflect on how technology and human decisions work together to create sustainable solutions. The AI in this lesson wasn’t about the tools they used, but more about how AI is viewed in the context of what they are learning.

What shifts in student learning did you observe?

Voci: One eye-opening moment was during my first lesson on hallucinations and bias with a third grade class. After introducing the concepts at a developmentally appropriate level, I had them reread their manuscripts through the lens of an AI hallucination and bias detective. It didn’t take long for the first student to find the first hallucination. There was incorrect scoring in a football game. AI counted a touchdown as one point. One student’s hand flew up; he was so excited to explain to me and the class how the model had incorrectly scored the game.

This discovery lit a fire under the rest of the class to begin looking more closely at every word of their text and not take it at face value. The class went on to find more hallucinations and discover some generalizations that did not represent their intentions.

Wright: I saw the explorers develop their critical thinking as they asked questions about how AI was used, how AI makes its decisions and whether this affects the environment. I truly appreciate that this age group holds onto their creativity and imagination. They don’t want AI to do the creating for them. They still want to draw their own pictures and tell their own stories.

Amendola: It was uncomfortable for my honors students to try something new. They were out of their element and craved the structure of the rubric. I had to let go of traditional grading structures first before I could help them embrace the ambiguity. Their willingness to explore and make mistakes was wonderful. The collaboration helped create a sense of class community that resulted in learning a new skill.

What’s your advice for educators hesitant to explore AI?

Amendola: Don’t be afraid to try new things. Keep in mind that the greatest success first requires a change of mindset. Only then can you open the doors to what generative AI can do for your students.

Voci: Don’t let the fear, weight and speed of AI advancement paralyze you. Find small, intentional steps that are grounded in human-centered values to move forward with your own knowledge, and then find ways to connect your new knowledge to support student learning. In this age of AI, we need to give our fellow educators the same resources, scaffolding and grace.

Wright: Jump in!


Join the movement at https://generationai.org to participate in our ongoing exploration of how we can harness AI’s potential to create more engaging and transformative learning experiences for all students.


X’s latest Community Notes experiment allows AI to write the first draft

X is experimenting with a new way for AI to write Community Notes. The company is testing a new “collaborative notes” feature that allows human writers to request an AI-written Community Note.

It’s not the first time the platform has experimented with AI in Community Notes. The company started a pilot program last year to allow developers to create dedicated AI note writers. But the latest experiment sounds like a more streamlined process.

According to the company, when an existing Community Note contributor requests a note on a post, the request “now also kicks off creation of a Collaborative Note.” Contributors can then rate the note or suggest improvements. “Collaborative Notes can update over time as suggestions and ratings come in,” X says. “When considering an update, the system reviews new input from contributors to make the note as helpful as possible, then decides whether the new version is a meaningful improvement.”

X doesn’t say whether it’s using Grok or another AI tool to actually generate the fact check. If it were using Grok, that would be in line with how a lot of X users currently invoke the AI on threads with replies like “@grok is this true?”

Community Notes has often been criticized for moving too slowly, so adding AI into the mix could help speed up the process of getting notes published. Keith Coleman, who oversees Community Notes at X, wrote in a post that the update also provides “a new way to make models smarter in the process (continuous learning from community feedback).” On the other hand, we don’t have to look very far to find examples of Grok losing touch with reality, or worse.

According to X, only Community Notes contributors with “top writer” status will be able to initiate a collaborative note at first, though the company expects to expand availability “over time.”


You can buy Amazon’s new Fire TV models right now

Amazon has refreshed its Fire TV lineup in the UK, with three new ranges available to buy right now.

The updated Fire TV 2-Series, Fire TV 4-Series, and Fire TV Omni QLED promise slimmer designs, faster performance and smarter picture tech. All of this is aimed at getting you to your shows quicker.

Leading this current crop is the Fire TV Omni QLED, available in 50-, 55- and 65-inch sizes. Amazon says the new panel is 60% brighter than previous models, with double the local dimming zones for punchier highlights and deeper blacks. Dolby Vision and HDR10+ Adaptive are on board. In addition, the TV can automatically adjust colour and brightness based on your room lighting.

The Omni QLED also leans heavily into smart features. OmniSense uses presence detection to wake the TV when you enter the room and power it down when you leave. Meanwhile, Interactive Art reacts to movement, turning the screen into something closer to a living display than a black rectangle on the wall.

Further down the range, the redesigned Fire TV 2-Series and Fire TV 4-Series cover screen sizes from 32 to 55 inches. The 2-Series sticks to HD resolution, while the 4-Series steps up to 4K. Both benefit from ultra-thin bezels and a new quad-core processor that Amazon says makes them 30% faster than before. It’s a modest upgrade on paper, but one that should make everyday navigation feel noticeably snappier.

All three ranges run Fire TV OS, with Amazon continuing to push its content-first approach. It surfaces apps, live TV and recommendations as soon as you turn the screen on.

The new Fire TV models are available now in the UK, with introductory pricing running until 10 February 2026.

With faster internals and a brighter flagship model, Amazon’s latest Fire TVs look like a solid refresh, especially if you’re after a big screen without a premium TV price tag.


DHS Is Hunting Down Trump Critics. The ‘Free Speech’ Warriors Are Mighty Quiet.

from the the-chilling-effects-are-real dept

For years, we’ve been subjected to an endless parade of hyperventilating claims about the Biden administration’s supposed “censorship industrial complex.” We were told, over and over again, that the government was weaponizing its power to silence conservative speech. The evidence for this? Some angry emails from White House staffers that Facebook ignored. That was basically it. The Supreme Court looked at it and said there was no standing because there was no evidence of coercion (and even suggested that some of the plaintiffs’ factual claims were unsupported by reality).

But now we have actual, documented cases of the federal government using its surveillance apparatus to track down and intimidate Americans for nothing more than criticizing government policy. And wouldn’t you know it, the same people who spent years screaming about censorship are suddenly very quiet.

If any of the following stories had happened under the Biden administration, you’d hear screams from the likes of Matt Taibbi, Bari Weiss, and Michael Shellenberger, about the crushing boot of the government trying to silence speech.

But somehow… nothing. Weiss is otherwise occupied—busy stripping CBS News for parts to please King Trump. And the dude bros who invented the “censorship industrial complex” out of their imaginations? Pretty damn quiet about stories like the following.

Taibbi is spending his time trying to play down the Epstein files and claiming Meta blocking ICE apps on direct request from DHS isn’t censorship because he hasn’t seen any evidence that it’s because of the federal government. Dude. Pam Bondi publicly stated she called Meta to have them removed. Shellenberger, who is now somehow a “free speech professor” at Bari Weiss’ collapsing fake university, seems to just be posting non-stop conspiracy theory nonsense from cranks.

Let’s start with the case that should make your blood boil. The Washington Post reports that a 67-year-old retired Philadelphia man — a naturalized U.S. citizen originally from the UK — found himself in the crosshairs of the Department of Homeland Security after he committed the apparently unforgivable sin of… sending a polite email to a government lawyer asking for mercy in a deportation case.

Here’s what he wrote to a prosecutor who was trying to deport an Afghan man who feared the Taliban would take his life if he were sent back. The Philadelphia resident found the prosecutor’s email address and sent the following:

“Mr. Dernbach, don’t play Russian roulette with H’s life. Err on the side of caution. There’s a reason the US government along with many other governments don’t recognise the Taliban. Apply principles of common sense and decency.”

That’s it. That’s the email that triggered a federal response. Within hours — hours — of sending this email, Google notified him that DHS had issued an administrative subpoena demanding his personal information. Days later, federal agents showed up at his door.

Showed. Up. At. His. Door.

A retired guy sends a respectful email asking the government to be careful with someone’s life, and within the same day, the surveillance apparatus is mobilized against him.

The tool being weaponized here is the administrative subpoena (something we’ve been calling out for well over a decade, under administrations of both parties), which is a particularly insidious instrument because it doesn’t require a judge’s approval. Unlike a judicial subpoena, where investigators have to show a judge enough evidence to justify the search, administrative subpoenas are essentially self-signed permission slips. As TechCrunch explains:

Unlike judicial subpoenas, which are authorized by a judge after seeing enough evidence of a crime to authorize a search or seizure of someone’s things, administrative subpoenas are issued by federal agencies, allowing investigators to seek a wealth of information about individuals from tech and phone companies without a judge’s oversight.

While administrative subpoenas cannot be used to obtain the contents of a person’s emails, online searches, or location data, they can demand information specifically about the user, such as what time a user logs in, from where, using which devices, and revealing the email addresses and other identifiable information about who opened an online account. But because administrative subpoenas are not backed by a judge’s authority or a court’s order, it’s largely up to a company whether to give over any data to the requesting government agency.

The Philadelphia retiree’s case would be alarming enough if it were a one-off. It’s not. Bloomberg has reported on at least five cases where DHS used administrative subpoenas to try to unmask anonymous Instagram accounts that were simply documenting ICE raids in their communities. One account, @montcowatch, was targeted simply for sharing resources about immigrant rights in Montgomery County, Pennsylvania. The justification? A claim that ICE agents were being “stalked” — for which there was no actual evidence.

The ACLU, which is now representing several of these targeted individuals, isn’t mincing words:

“It doesn’t take that much to make people look over their shoulder, to think twice before they speak again. That’s why these kinds of subpoenas and other actions—the visits—are so pernicious. You don’t have to lock somebody up to make them reticent to make their voice heard. It really doesn’t take much, because the power of the federal government is so overwhelming.”

This is textbook chilling effects on speech.

Remember, it was just a year and a half ago that the Supreme Court, in Murthy v. Missouri, found no First Amendment violation when the Biden administration sent emails to social media platforms—in part because the platforms felt entirely free to say no. The platforms weren’t coerced; they could ignore the requests, and did.

Now consider the Philadelphia retiree. He sends one polite email. Within hours, DHS has mobilized to unmask him. Days later, federal agents are at his door. Does that sound like someone who’s free to speak his mind without consequence?

Even if you felt that what the Biden admin did was inappropriate, it didn’t involve federal agents showing up at people’s homes.

That is what actual government suppression of speech looks like. Not mean tweets from press secretaries that platforms ignored, but federal agents showing up at your door because you sent a (perfectly nice) email the government didn’t like.

So we have DHS mobilizing within hours to identify a 67-year-old retiree who sent a polite email. We have agents showing up at citizens’ homes to interrogate them about their protected speech. We have the government trying to unmask anonymous accounts that are documenting law enforcement activities — something that is unambiguously protected under the First Amendment.

Recording police, sharing that recording, and doing so anonymously is legal. It’s protected speech. And the government is using administrative subpoenas to try to identify and intimidate the people doing it.

For years, we heard that government officials sending emails to social media companies — emails the companies ignored — constituted an existential threat to the First Amendment. But when the government actually uses its coercive power to track down, identify, and intimidate citizens for their speech?

Crickets.

This is what a real threat to free speech looks like. Not “jawboning” that platforms can easily refuse, but the full weight of federal surveillance being deployed against anyone who dares to criticize the administration. The chilling effect here is the entire point.

As the ACLU noted, this appears to be “part of a broader strategy to intimidate people who document immigration activity or criticize government actions.”

If you spent the last few years warning about government censorship, this is your moment. This is the actual thing you claimed to be worried about. But, of course, all those who pretended to care about free speech really only meant they cared about their own team’s speech. Watching the government actually suppress critics? No big deal. They probably deserved it.

Filed Under: 1st amendment, administrative subpoenas, bari weiss, chilling effects, dhs, donald trump, free speech, matt taibbi, michael shellenberger

Companies: google, meta


Musk Predicts SpaceX Will Launch More AI Compute Per Year Than the Cumulative Total on Earth

Elon Musk told podcast host Dwarkesh Patel and Stripe co-founder John Collison that space will become the most economically compelling location for AI data centers in less than 36 months, a prediction rooted not in some exotic technical breakthrough but in the basic math of electricity supply: chip output is growing exponentially, and electrical output outside China is essentially flat.

Solar panels in orbit generate roughly five times the power they do on the ground because there is no day-night cycle, no cloud cover, and no atmospheric loss. The system economics are even more favorable because space-based operations eliminate the need for batteries entirely, making the effective cost roughly 10 times cheaper than terrestrial solar, Musk said. The terrestrial bottleneck is already real.

Musk said powering 330,000 Nvidia GB300 chips — once you account for networking hardware, storage, peak cooling on the hottest day of the year, and reserve margin for generator servicing — requires roughly a gigawatt at the generation level. Gas turbines are sold out through 2030, and the limiting factor is the casting of turbine vanes and blades, a process handled by just three companies worldwide.

Five years from now, Musk predicted, SpaceX will launch and operate more AI compute annually than the cumulative total on Earth, expecting at least a few hundred gigawatts per year in space. Patel estimated that 100 gigawatts alone would require on the order of 10,000 Starship launches per year, a figure Musk affirmed. SpaceX is gearing up for 10,000 launches a year, Musk said, and possibly 20,000 to 30,000.
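Taken at face value, the numbers in this passage imply some simple unit arithmetic. The sketch below (in Python) just checks that the quoted claims are mutually consistent; the per-chip and per-launch figures are our own inference, not anything Musk or Patel stated directly.

```python
# Back-of-the-envelope check of the figures quoted above. The inputs are
# the podcast's claims; the derived values are our inference only.

chips = 330_000              # Nvidia GB300 chips in the example cluster
generation_w = 1e9           # ~1 GW claimed at the generation level

watts_per_chip = generation_w / chips
# ~3 kW per chip once networking, storage, peak cooling, and reserve
# margin for generator servicing are folded in.

target_gw = 100              # Patel's example orbital deployment
launches_per_year = 10_000   # his estimated Starship cadence

mw_per_launch = target_gw * 1000 / launches_per_year
# ~10 MW of compute capacity delivered per launch.

print(f"{watts_per_chip:,.0f} W per chip; {mw_per_launch:.0f} MW per launch")
```

The ~10 MW-per-launch figure is the implicit assumption behind the 10,000-launch estimate, and it gives a sense of how much orbital power hardware each Starship would need to carry.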


Winter Olympics 2026: Omega’s Quantum Timer Precision

From 6-22 February, the 2026 Winter Olympics in Milan-Cortina d’Ampezzo, Italy, will feature not just the world’s top winter athletes but also some of the most advanced sports technologies in use today. At the first Cortina Olympics in 1956, the Swiss company Omega—based in Biel/Bienne—introduced electronic ski starting gates and launched the first automated timing tech of its kind.

At this year’s Olympics, Swiss Timing, sister company to Omega under the parent Swatch Group, unveils a new generation of motion analysis and computer vision technology. The new technologies on offer include photofinish cameras that capture up to 40,000 images per second.

“We work very closely with athletes,” says Swiss Timing CEO Alain Zobrist, who has overseen Olympic timekeeping since the 2006 winter games in Torino. “They are the primary customers of our technology and services, and they need to understand how our systems work in order to trust them.”

[Image: Live data capture of a figure skater’s performance, with a 3D rendering of the athlete, jump heights and more. Using high-resolution cameras and AI algorithms tuned to skaters’ routines, Milan-Cortina Olympic officials expect new figure skating tech to be a key highlight of the games. Credit: Omega]

Figure Skating Tech Completes the Rotation

Figure skating, the Winter Olympics’ biggest TV draw, is receiving a substantial upgrade at Milano Cortina 2026.

Fourteen 8K resolution cameras positioned around the rink will capture every skater’s movement. “We use proprietary software to interpret the images and visualize athlete movement in a 3D model,” says Zobrist. “AI processes the data so we can track trajectory, position, and movement across all three axes—X, Y, and Z.”

The system measures jump heights, air times, and landing speeds in real time, producing heat maps and graphic overlays that break down each program—all instantaneously. “The time it takes for us to measure the data, until we show a matrix on TV with a graphic, this whole chain needs to take less than 1/10 of a second,” Zobrist says.

A range of different AI models helps the broadcasters and commentators process each skater’s every move on the ice.

“There is an AI that helps our computer vision system do pose estimation,” he says. “So we have a camera that is filming what is happening, and an AI that helps the camera understand what it’s looking at. And then there is a second type of AI, which is more similar to a large language model, that makes sense of the data that we collect.”

Among the features Swiss Timing’s new systems provide is blade angle detection, which gives judges precise technical data to augment their technical and aesthetic decisions. Zobrist says future versions will also determine whether a given rotation is complete: “If the rotation is 355 degrees, there is going to be a deduction.”

This builds on technology Omega unveiled at the 2024 Paris Olympics for diving, where cameras measured distances between a diver’s head and the board to help judges assess points and penalties to be awarded.

[Image: Three-dimensional rendering of a ski jumper preparing for dismount on a tall slope. At the 2026 Winter Olympics, ski jumping will feature both camera-based and sensor-based technologies to make the aerial experience more immediate and real-time. Credit: Omega]

Ski Jumping Tech Finds Make-or-Break Moments

Unlike figure skating’s camera-based approach, ski jumping also relies on physical sensors.

“In ski jumping, we use a small, lightweight sensor attached to each ski, one sensor per ski, not on the athlete’s body,” Zobrist says. The sensors broadcast data on a skier’s speed, acceleration, and positioning in the air. The technology also correlates performance data with wind conditions, revealing environmental factors’ influence on each jump.

High-speed cameras also track each ski jumper. Then, a stroboscopic camera provides body position time-lapses throughout the jump.

“The first 20 to 30 meters after takeoff are crucial as athletes move into a V position and lean forward,” Zobrist says. “And both the timing and precision of this movement strongly influence performance.”

The system reveals biomechanical characteristics in real time, he adds, showing how athletes position their bodies during every moment of the takeoff process. The most common mistake in flight position, over-rotation or under-rotation, can now be detailed and diagnosed with precision on every jump.

Bobsleigh: Pushing the Line on the Photo Finish

This year’s Olympics will also feature a “virtual photo finish,” providing comparison images of different sleds crossing the finish line across runs.

[Image: Red Omega camera with a large lens, under a sleek hood, set against a black background. Omega’s cameras will provide virtual photo finishes at the 2026 Winter Olympics. Credit: Omega]

“We virtually build a photo finish that shows different sleds from different runs on a single visual reference,” says Zobrist.

After each run, composite images show the margins separating performances. However, more tried-and-true technology still generates official results. A Swiss Timing score, he says, still comes courtesy of photoelectric cells, devices that emit light beams across the finish line and stop the clock when broken. The company offers its virtual photo finish, by contrast, as a visualization tool for spectators and commentators.

In bobsleigh, as in every timed Winter Olympic event, the line between triumph and heartbreak is sometimes measured in milliseconds, or in even shorter intervals. Such precision will, Zobrist says, stem from Omega’s Quantum Timer.

“We can measure time to the millionth of a second, so 6 digits after the comma, with a deviation of about 23 nanoseconds over 24 hours,” Zobrist explained. “These devices are constantly calibrated and used across all timed sports.”
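The stated figures imply a fractional stability that is easy to compute. The following sketch puts the quoted 23-nanosecond daily deviation in the context of a roughly one-minute bobsleigh run; this is our own arithmetic on the quoted numbers, not an Omega specification, and the 60-second run length is an illustrative assumption.

```python
# Quick arithmetic on the stated Quantum Timer figures (our derivation):
# microsecond resolution ("the millionth of a second") and a deviation
# of ~23 ns over 24 hours.

deviation_s = 23e-9          # stated deviation over one day, in seconds
day_s = 24 * 3600            # seconds in 24 hours

fractional_stability = deviation_s / day_s
# ~2.7e-13, i.e. roughly a quarter of a part per trillion.

run_s = 60                   # an assumed ~60-second bobsleigh run
drift_during_run_s = fractional_stability * run_s
# ~1.6e-11 s of accumulated drift over a run, far below the quoted
# one-microsecond (1e-6 s) measurement resolution.

print(f"{fractional_stability:.2e} fractional; {drift_during_run_s:.1e} s per run")
```

In other words, at this stability the clock itself contributes orders of magnitude less uncertainty than the finish-line detection hardware ever could.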

Tech

Are Duracell Batteries Better Than Energizer? What Consumer Reports Data Says

We may receive a commission on purchases made from links.

With some product segments, two rival companies dominate the market. Boeing and Airbus sometimes even use the same engines, and chances are good you’re reading this on a mobile phone running software created by either Apple or Google. When it comes to alkaline batteries, the two most recognizable premium brands are Duracell and Energizer. Consumer Reports tested 15 different AA batteries and rated the Duracell Quantum AA highest among alkaline batteries and equal in performance to Energizer Ultimate lithium batteries. Rayovac Fusion Advanced AA batteries also performed well, coming in just ahead of Energizer Advanced lithium and Duracell Copper Top alkaline cells. Energizer EcoAdvanced and Max+ PowerSeal alkaline batteries scored a little lower, in the company of retailer-branded batteries from Amazon, CVS, Walgreens, and Rite Aid.

This round of Consumer Reports testing covered only disposable alkaline and lithium batteries. SlashGear’s ranking of rechargeable batteries also placed Duracell just ahead of Energizer, although EBL and Eneloop batteries topped our list. Energizer and Duracell batteries cost more than most generic competitors, but expensive name-brand batteries usually last longer than cheaper ones.


Most tests (including this one from Consumer Reports) find Duracell and Energizer batteries to be more or less equal in terms of performance, although the Duracell Quantum was a top performer here. Energizer Ultimate lithium batteries more or less matched the Quantum’s performance benchmarks, and Energizer Advanced lithium cells tested slightly better than Duracell Copper Top alkalines. Both brands are highly recommended by SlashGear and Consumer Reports with impressive performance that separates them from the rest of the pack, so buy either with confidence.


How to make batteries last longer

Consumer Reports tested this battery of batteries by measuring runtime in toys and flashlights, but use across a variety of devices might return different results. For example, a TV remote control doesn’t require that much power to function and is used intermittently, so drain on batteries is low. That’s why you may not need to change them for months, or even years. In contrast, Xbox controllers drain batteries very quickly because features like haptic feedback and wireless connectivity draw lots of power.

There are a couple of things you can do to get the most out of your batteries. First, pay attention to the stamped expiration date if you’re buying them in a store. If the batteries aren’t going straight into a device, keep them in their original packaging in a cool, dry place. Contrary to a common myth, storing batteries in the fridge or freezer is a bad idea; condensation can form inside the packaging. Put them in an interior closet or water-resistant toolbox, and remove batteries from devices you don’t plan on using for a while. Be careful not to store batteries loose in a box or drawer with other batteries or metal objects; short-circuits could drain your batteries or even cause a fire. A plastic battery organizer will protect batteries when not in use and can be stored in a closet or cabinet. Stacking batteries upright will prevent terminal-to-terminal contact, and always recycle used batteries according to local guidelines.




Anthropic’s Claude Opus 4.6 brings 1M token context and ‘agent teams’ to take on OpenAI’s Codex

Anthropic on Thursday released Claude Opus 4.6, a major upgrade to its flagship artificial intelligence model that the company says plans more carefully, sustains longer autonomous workflows, and outperforms competitors including OpenAI’s GPT-5.2 on key enterprise benchmarks — a release that arrives at a tumultuous moment for the AI industry and global software markets.

The launch comes just three days after OpenAI released its own Codex desktop application in a direct challenge to Anthropic’s Claude Code momentum, and amid a $285 billion rout in software and services stocks that investors attribute partly to fears that Anthropic’s AI tools could disrupt established enterprise software businesses.

For the first time, Anthropic’s Opus-class models will feature a 1 million token context window, allowing the AI to process and reason across vastly more information than previous versions. The company also introduced “agent teams” in Claude Code — a research preview feature that enables multiple AI agents to work simultaneously on different aspects of a coding project, coordinating autonomously.

“We’re focused on building the most capable, reliable, and safe AI systems,” an Anthropic spokesperson told VentureBeat about the announcements. “Opus 4.6 is even better at planning, helping solve the most complex coding tasks. And the new agent teams feature means users can split work across multiple agents — one on the frontend, one on the API, one on the migration — each owning its piece and coordinating directly with the others.”


Why OpenAI and Anthropic are locked in an all-out war for enterprise developers

The release intensifies an already fierce competition between Anthropic and OpenAI, the two most valuable privately held AI companies in the world. OpenAI on Monday released a new desktop application for its Codex artificial intelligence coding system, a tool the company says transforms software development from a collaborative exercise with a single AI assistant into something more akin to managing a team of autonomous workers.

AI coding assistants have exploded in popularity over the last year, and OpenAI said more than 1 million developers have used Codex in the past month. The new Codex app is part of OpenAI’s ongoing effort to lure users and market share away from rivals like Anthropic and Cursor.

The timing of Anthropic’s release — just 72 hours after OpenAI’s Codex launch — underscores the breakneck pace of competition in AI development tools. OpenAI faces intensifying competition from Anthropic, which posted the largest share increase of any frontier lab since May 2025, according to a recent Andreessen Horowitz survey. Forty-four percent of enterprises now use Anthropic in production, driven by rapid capability gains in software development since late 2024. The desktop launch is a strategic counter to Claude Code’s momentum.


According to Anthropic’s announcement, Opus 4.6 achieves the highest score on Terminal-Bench 2.0, an agentic coding evaluation, and leads all other frontier models on Humanity’s Last Exam, a complex multi-discipline reasoning test. On GDPval-AA — a benchmark measuring performance on economically valuable knowledge work tasks in finance, legal and other domains — Opus 4.6 outperforms OpenAI’s GPT-5.2 by approximately 144 ELO points, which translates to obtaining a higher score approximately 70% of the time.
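The 144-points-to-70% conversion follows from the standard Elo expected-score formula, assuming the benchmark uses the conventional 400-point logistic scale (a reasonable but unstated assumption):

```python
# Standard Elo expected-score formula: with a rating advantage d,
# the stronger side's expected score is 1 / (1 + 10^(-d/400)).

def elo_win_probability(rating_diff):
    """Expected score for the higher-rated side under the standard Elo model."""
    return 1.0 / (1.0 + 10 ** (-rating_diff / 400.0))

print(f"{elo_win_probability(144):.1%}")  # 69.6%, i.e. roughly 70% of the time
```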


Claude Opus 4.6 leads or matches competitors across most benchmark categories, according to Anthropic’s internal testing. The model showed particular strength in agentic tasks, office work and novel problem-solving. (Source: Anthropic)

Inside Claude Code’s $1 billion revenue milestone and growing enterprise footprint

The stakes are substantial. Asked about Claude Code’s financial performance, the Anthropic spokesperson noted that in November, the company announced that Claude Code reached $1 billion in run rate revenue only six months after becoming generally available in May 2025.

The spokesperson highlighted major enterprise deployments: “Claude Code is used by Uber across teams like software engineering, data science, finance, and trust and safety; wall-to-wall deployment across Salesforce’s global engineering org; tens of thousands of devs at Accenture; and companies across industries like Spotify, Rakuten, Snowflake, Novo Nordisk, and Ramp.”


That enterprise traction has translated into skyrocketing valuations. Earlier this month, Anthropic signed a term sheet for a $10 billion funding round at a $350 billion valuation. Bloomberg reported that Anthropic is simultaneously working on a tender offer that would allow employees to sell shares at that valuation, offering liquidity to staffers who have watched the company’s worth multiply since its 2021 founding.

How Opus 4.6 solves the ‘context rot’ problem that has plagued AI models

One of Opus 4.6’s most significant technical improvements addresses what the AI industry calls “context rot”: the degradation of model performance as conversations grow longer. Anthropic says Opus 4.6 scores 76% on MRCR v2, a needle-in-a-haystack benchmark testing a model’s ability to retrieve information hidden in vast amounts of text, compared to just 18.5% for Sonnet 4.5.

“This is a qualitative shift in how much context a model can actually use while maintaining peak performance,” the company said in its announcement.

The model also supports outputs of up to 128,000 tokens — enough to complete substantial coding tasks or documents without breaking them into multiple requests.


For developers, Anthropic is introducing several new API features alongside the model: adaptive thinking, which allows Claude to decide when deeper reasoning would be helpful rather than requiring a binary on-off choice; four effort levels (low, medium, high, max) to control intelligence, speed and cost tradeoffs; and context compaction, a beta feature that automatically summarizes older context to enable longer-running tasks.
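A sketch of how these controls might look in a request body. The field names “effort” and “context_management” below are guesses inferred from the announcement’s wording, not confirmed parameters of Anthropic’s API, so check the official API reference before relying on them:

```python
# Hypothetical request payloads illustrating the new controls described above.
# Field names "effort" and "context_management" are assumptions, not
# confirmed API parameters; only the model ID comes from the article.

def build_request(prompt, effort="high", compact_context=False):
    body = {
        "model": "claude-opus-4-6",     # model ID given in the article
        "max_tokens": 4096,
        "messages": [{"role": "user", "content": prompt}],
        "effort": effort,               # assumed name: low | medium | high | max
    }
    if compact_context:
        # assumed shape for the beta context-compaction feature
        body["context_management"] = {"compaction": "auto"}
    return body

req = build_request("Summarize this diff", effort="medium", compact_context=True)
print(req["effort"])  # medium
```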


Opus 4.6 dramatically outperformed its predecessor on tests measuring how well models retrieve information buried in long documents — a key capability for enterprise coding and research tasks. (Source: Anthropic)

Anthropic’s delicate balancing act: Building powerful AI agents without losing control

Anthropic, which has built its brand around AI safety research, emphasized that Opus 4.6 maintains alignment with its predecessors despite its enhanced capabilities. On the company’s automated behavior audit measuring misaligned behaviors such as deception, sycophancy, and cooperation with misuse, Opus 4.6 “showed a low rate” of problematic responses while also achieving “the lowest rate of over-refusals — where the model fails to answer benign queries — of any recent Claude model.”

When asked how Anthropic thinks about safety guardrails as Claude becomes more agentic, particularly with multiple agents coordinating autonomously, the spokesperson pointed to the company’s published framework: “Agents have tremendous potential for positive impacts in work but it’s important that agents continue to be safe, reliable, and trustworthy. We outlined our framework for developing safe and trustworthy agents last year which shares core principles developers should consider when building agents.”


The company said it has developed six new cybersecurity probes to detect potentially harmful uses of the model’s enhanced capabilities, and is using Opus 4.6 to help find and patch vulnerabilities in open-source software as part of defensive cybersecurity efforts.


Anthropic says its newest model exhibits the lowest rate of problematic behaviors — including deception and sycophancy — of any Claude version tested, even as capabilities have increased. (Source: Anthropic)

Sam Altman vs. Dario Amodei: The Super Bowl ad battle that exposed AI’s deepest divisions

The rivalry between Anthropic and OpenAI has spilled into consumer marketing in dramatic fashion. Both companies will feature prominently during Sunday’s Super Bowl. Anthropic is airing commercials that mock OpenAI’s decision to begin testing advertisements in ChatGPT, with the tagline: “Ads are coming to AI. But not to Claude.”

OpenAI CEO Sam Altman responded by calling the ads “funny” but “clearly dishonest,” posting on X that his company would “obviously never run ads in the way Anthropic depicts them” and that “Anthropic wants to control what people do with AI” while serving “an expensive product to rich people.”


The exchange highlights a fundamental strategic divergence: OpenAI has moved to monetize its massive free user base through advertising, while Anthropic has focused almost exclusively on enterprise sales and premium subscriptions.

The $285 billion stock selloff that revealed Wall Street’s AI anxiety

The launch occurs against a backdrop of historic market volatility in software stocks. A new AI automation tool from Anthropic PBC sparked a $285 billion rout in stocks across the software, financial services and asset management sectors on Tuesday as investors raced to dump shares with even the slightest exposure. A Goldman Sachs basket of US software stocks sank 6%, its biggest one-day decline since April’s tariff-fueled selloff.

The selloff’s trigger was Anthropic’s Friday launch of plug-ins for its Claude Cowork agent, which automate tasks across legal, sales, marketing and data analysis. The move showed the AI industry’s growing push into industries that can unlock the lucrative enterprise revenue needed to fund massive investments in the technology.

Thomson Reuters plunged 15.83% on Tuesday, its biggest single-day drop on record, and Legalzoom.com sank 19.68%. European legal software providers including RELX, owner of LexisNexis, and Wolters Kluwer experienced their worst single-day performances in decades.


Not everyone agrees the selloff is warranted. Nvidia CEO Jensen Huang said on Tuesday that fears AI would replace software and related tools were “illogical” and “time will prove itself.” Mark Murphy, head of U.S. enterprise software research at JPMorgan, said in a Reuters report it “feels like an illogical leap” to say a new plug-in from an LLM would “replace every layer of mission-critical enterprise software.”

What Claude’s new PowerPoint integration means for Microsoft’s AI strategy

Among the more notable product announcements: Anthropic is releasing Claude in PowerPoint in research preview, allowing users to create presentations using the same AI capabilities that power Claude’s document and spreadsheet work. The integration puts Claude directly inside a core Microsoft product — an unusual arrangement given Microsoft’s 27% stake in OpenAI.

The Anthropic spokesperson framed the move pragmatically in an interview with VentureBeat: “Microsoft has an official add-in marketplace for Office products with multiple add-ins available to help people with slide creation and iteration. Any developer can build a plugin for Excel or PowerPoint. We’re participating in that ecosystem to bring Claude into PowerPoint. This is about participating in the ecosystem and giving users the ability to work with the tools that they want, in the programs they want.”


Claude’s new PowerPoint integration, shown here analyzing a market research slide, places Anthropic’s AI directly inside a flagship Microsoft product — despite Microsoft’s major investment in rival OpenAI. (Source: Anthropic)


The data behind enterprise AI adoption: Who’s winning and who’s losing ground

Data from a16z’s recent enterprise AI survey suggests both Anthropic and OpenAI face an increasingly competitive landscape. While OpenAI remains the most widely used AI provider in the enterprise, with approximately 77% of surveyed companies using it in production in January 2026, Anthropic’s adoption is rising rapidly — from near-zero in March 2024 to approximately 40% using it in production by January 2026.

The survey data also shows that 75% of Anthropic’s enterprise customers are using it in production, with 89% either testing or in production — figures that exceed OpenAI’s corresponding 46% in-production and 73% testing-or-in-production rates among its customer base.

Enterprise spending on AI continues to accelerate. Average enterprise LLM spend reached $7 million in 2025, up 180% from $2.5 million in 2024, with projections suggesting $11.6 million in 2026 — a 65% increase year-over-year.
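Those growth figures check out arithmetically; a quick verification of the year-over-year percentages (spend values are from the survey as quoted above, the arithmetic is ours):

```python
# Checking the year-over-year growth figures quoted above.
spend = {2024: 2.5e6, 2025: 7.0e6, 2026: 11.6e6}  # avg enterprise LLM spend, USD

def yoy_growth(prev, curr):
    """Percentage increase from prev to curr."""
    return (curr - prev) / prev * 100

print(f"2024 -> 2025: {yoy_growth(spend[2024], spend[2025]):.0f}%")  # 180%
print(f"2025 -> 2026: {yoy_growth(spend[2025], spend[2026]):.0f}%")  # ~66%
```

The second figure rounds to the article’s “65% increase” when stated to the nearest five points; the exact value is about 65.7%.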


OpenAI remains the dominant AI provider in enterprise settings, but Anthropic’s share has surged from near zero in early 2024 to roughly 40 percent of companies using it in production by January 2026. (Source: Andreessen Horowitz survey, January 2026)


Pricing, availability, and what developers need to know about Claude Opus 4.6

Opus 4.6 is available immediately on claude.ai, the Claude API, and major cloud platforms. Developers can access it via claude-opus-4-6 through the API. Pricing remains unchanged at $5 per million input tokens and $25 per million output tokens, with premium pricing of $10/$37.50 for prompts exceeding 200,000 tokens using the 1 million token context window.
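As a rough illustration of those published rates (we’re assuming the premium output rate also applies once a prompt crosses the 200K threshold, which the announcement implies but does not spell out):

```python
# Cost sketch from the published rates: $5/M input and $25/M output tokens,
# rising to $10/$37.50 once a prompt exceeds 200K tokens (long-context tier).
# Assumption: the premium output rate applies whenever the prompt is long.

def request_cost(input_tokens, output_tokens):
    long_context = input_tokens > 200_000
    in_rate = 10.00 if long_context else 5.00      # $ per million input tokens
    out_rate = 37.50 if long_context else 25.00    # $ per million output tokens
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

print(f"${request_cost(50_000, 4_000):.2f}")    # standard tier: $0.35
print(f"${request_cost(800_000, 8_000):.2f}")   # long-context tier: $8.30
```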

For users who find Opus 4.6 “overthinking” simpler tasks — a characteristic Anthropic acknowledges can add cost and latency — the company recommends adjusting the effort parameter from its default high setting to medium.

The recommendation captures something essential about where the AI industry now stands. These models have grown so capable that their creators must now teach customers how to make them think less. Whether that represents a breakthrough or a warning sign depends entirely on which side of the disruption you’re standing on — and whether you remembered to sell your software stocks before Tuesday.

The Nintendo Switch 2 proved me wrong in several ways, and it’s why I’d suggest buying it now before price hikes

I’ve been critical of the Nintendo Switch 2 in previous stories, given the price of its games and, to a lesser extent, the handheld’s price itself, but after giving it a shot, I now strongly recommend it. You can grab the Nintendo Switch 2 bundle at Best Buy for $629.95 (was $684.95).

The bundle includes Mario Kart World and Donkey Kong Bananza, two exclusive Switch 2 titles that showcase the fun ready to be had with the console this President’s Day. I’ve bought and played both games (not for $80), and I can say that they serve as perfect introductions to the Switch ecosystem, especially if you’ve not played many Nintendo games.

The catch is a potential price increase. Multiple reports suggest the threat comes from a combination of tariffs and the ongoing RAM crisis.

With Valve also hesitant to announce launch pricing for the Steam Machine because of the high cost of memory, Nintendo seems likely to announce a price increase for the Switch 2 soon.

That makes now the best time to make a move if you’ve been considering a Switch 2 – and I can attest to it being worth every cent, especially since its life cycle has only just begun. Expect to see several exclusives launch later down the line, and The Duskbloods is already one that’s caught my eye.

DIY Macropad Rocks A Haptic Feedback Wheel

Macropads can be as simple as a few buttons hooked up to a microcontroller to do the USB HID dance and talk to a PC. However, you can go a lot further, too. [CNCDan] demonstrates this well with his sleek macropad build, which throws haptic feedback into the mix.

The build features six programmable macro buttons, situated on either side of a 128×64 OLED display. This setup allows the OLED screen to show icons that explain the functionality of each button. There’s also a nice large rotary knob, surrounded by 20 addressable WS2811 LEDs for visual feedback. Underneath the knob lives an encoder, as well as a brushless motor of the type typically used in gimbal builds, driven by a TMC6300 motor driver board. Everything is laced up to a Waveshare RP2040 Plus devboard, which runs the show: it controls the motor, reads the knob and switches, and speaks USB to the PC it’s plugged into.
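The write-up doesn’t detail the haptic control scheme, but smart-knob builds of this kind commonly drive the gimbal motor toward “virtual detents”: the firmware reads the encoder angle and commands a restoring torque toward the nearest detent position. A pure-Python sketch of that idea, with all constants invented for illustration:

```python
# Conceptual sketch of virtual-detent haptics, a common smart-knob technique.
# This is NOT [CNCDan]'s firmware; the detent count, stiffness, and structure
# are invented to illustrate the control idea.
import math

DETENTS = 20                      # e.g. one detent per LED around the knob
DETENT_SPACING = 2 * math.pi / DETENTS
STIFFNESS = 0.8                   # torque per radian of error (arbitrary units)

def detent_torque(angle):
    """Torque pulling the knob toward the nearest virtual detent."""
    nearest = round(angle / DETENT_SPACING) * DETENT_SPACING
    return STIFFNESS * (nearest - angle)

# At a detent the torque is zero; just past one, it pulls the knob back.
print(round(detent_torque(0.0), 6))              # 0.0
print(round(detent_torque(DETENT_SPACING), 6))   # 0.0
```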

It’s a compact device that nonetheless should prove to be a good productivity booster on the bench. We’ve featured [CNCDan’s] work before, too, such as this nifty DIY VR headset.
