During its latest Unpacked event, Samsung dished all the details on the Galaxy S25 lineup. The Galaxy S25 and S25 Plus start at $799.99 and $999.99, respectively, while the S25 Ultra runs a cool $1,299.99 in its entry-level configuration. You can preorder the phones ahead of their launch on February 7th, but before you do, you’re probably wondering what’s new.
Technology
Samsung Galaxy S25 vs. S25 Plus vs. S25 Ultra: specs comparison
The phones don’t look or feel much different, save for the slightly curvier Galaxy S25 Ultra. The Snapdragon 8 Elite is perhaps the S25 family’s most notable hardware upgrade: it outpaces the Snapdragon 8 Gen 3 across the board and comes with a new neural processing unit, said to be up to 40 percent faster, to support Samsung’s expanded Galaxy AI experience. The company introduced multimodal and generative AI improvements, after all, and the Galaxy S25 line will be among the first to usher in new Google Gemini features.
Our reviews are still forthcoming, and it’s much too early for us to determine whether any of these phones are actually worth upgrading for. But that doesn’t mean we can’t distill their differences to help you determine which device you’d rather buy. Keep reading for a full breakdown of all of the hardware and software changes, the unique traits of each Galaxy S25 device, and a closer look at their specs — plus their counterparts from last year.
Design
In terms of shape and size, it’s hard to tell the Galaxy S25 and S25 Plus from their last-gen counterparts. But the S25 Ultra looks a bit different from the S24 Ultra, with subtly rounded corners and flat edges that are more visually aligned with the smaller phones. It’s the thinnest and lightest Ultra yet, even if only by a hair. And the Ultra-exclusive S Pen is back, albeit without gestures and the remote shutter feature.
Samsung says the aluminum frame on the Galaxy S25 and S25 Plus features at least one recycled component. Both sandwich their components between slabs of Corning’s Gorilla Glass Victus 2, but the Ultra uses a titanium frame and a display that’s protected by Corning Gorilla Armor 2. It’s a ceramic-infused material said to be stronger than typical tempered glass with antireflective and scratch-resistant properties. (The rear still uses Victus 2.)
Samsung also tweaked the design of the camera modules on all three phones, giving the lenses thicker borders and a bolder aesthetic. The S25 and S25 Plus come in several new color options, too, including an “icy” blue and a new mint green to help them stand apart, as well as navy and silver for a more traditional look. Three more colors will be available exclusively from Samsung.com: black, red, and rose gold.
The Ultra has its own set of titanium colors, including black, gray, and silverish hues of blue and white. If you order the Ultra from Samsung.com, you’ll also be able to choose from rose gold, black, and green.
Storage and RAM
The Samsung Galaxy S25 series is available with largely the same memory and storage options as last year’s phones, except all three models now start with 12GB of RAM. You can get the base Galaxy S25 with 128GB or 256GB of storage, while the Plus starts at 256GB with a 512GB option. The Ultra, meanwhile, offers the same 256GB and 512GB configurations as the Plus, along with a 1TB option.
Processor
All three Galaxy S25 phones use a Snapdragon 8 Elite chipset — no matter where in the world you’re purchasing from. The processor uses an Oryon CPU similar to the ones you’ll find in newer Qualcomm laptops.
The 3nm chip has two “prime” cores and six performance cores, plus a dedicated “Hexagon” neural processing unit that supports multimodal AI capabilities and is said to be 40 percent faster than the Snapdragon 8 Gen 3’s. The added headroom allows more AI functions, including Generative Edit, to run on-device. Many of these features should work faster without the added overhead of server-side processing.
Overall, Samsung claims the Snapdragon 8 Elite offers 37 percent faster CPU performance and 30 percent faster GPU performance for gaming, at least compared to the Snapdragon 8 Gen 3 it’s replacing. That being said, we can’t yet discern how that translates in practice.
Display
The Dynamic AMOLED displays on the Galaxy S25 smartphones are largely unchanged compared to the previous generation. The base Galaxy S25 still has a 6.2-inch Full HD Plus display, while the 6.7-inch display on the Galaxy S25 Plus remains Quad HD Plus.
The S25 Ultra’s display is slightly larger than last year’s at 6.9 inches — a 0.1-inch increase to make up for the slight curve — with the same QHD Plus resolution. All three still support a maximum 120Hz variable refresh rate.
Cameras
The Galaxy S25 and S25 Plus have the same three rear cameras: a 50-megapixel wide sensor, a 12-megapixel ultrawide, and a 10-megapixel telephoto. Meanwhile, the Galaxy S25 Ultra offers four rear cameras in total: a main 200-megapixel wide-angle camera, a new 50-megapixel ultrawide camera with macro mode (up from the S24 Ultra’s 12 megapixels), a 50-megapixel telephoto sensor with 5x optical zoom, and a 10-megapixel sensor for 3x zoom. All three phones still use the same 12-megapixel front camera.
Recording options are largely similar across the board, with all three Galaxy S25 models supporting 8K resolution at up to 30 frames per second on their main wide-angle sensors and 4K at up to 60 frames per second for all cameras. However, the Galaxy S25 Ultra supports 4K at up to 120 frames per second.
Samsung now enables 10-bit HDR recording by default on all S25 phones, and they retain the Log color profile option for advanced color grading. The cameras picked up other software-enabled tricks, too, including an Audio Eraser feature akin to the Audio Magic Eraser first seen on Pixel phones. It lets you isolate specific sounds — including voices, music, and wind — with the option to lower the rest or mute them entirely.
There’s also a new Virtual Aperture feature in Expert RAW, allowing you to adjust your footage’s depth of field after recording. There’s a new suite of filters inspired by iconic film looks, too.
Samsung says its new ProScaler feature on the Galaxy S25 Plus and Ultra delivers 40 percent better upscaling than the Galaxy S24, as measured by signal-to-noise ratio. Since the feature requires QHD Plus resolution, you won’t find it on the base Galaxy S25.
Battery
Like the Galaxy S24 line, the Galaxy S25, S25 Plus, and S25 Ultra use 4,000mAh, 4,900mAh, and 5,000mAh batteries, respectively. That being said, Samsung says they offer the longest battery life of any Galaxy phones to date, largely thanks to hardware and software efficiency improvements.
Fast charging over USB-C returns in all three, of course, but they’re now also “Qi2 Ready.” That means there are no magnets embedded directly in the devices, unlike Apple’s latest handsets, which have them built in. You will, however, get 15W wireless charging speeds when paired with Samsung’s magnetic Qi2 Ready cases, which should effectively let you use magnetic Qi2 chargers with Galaxy S25 devices.
Android 15, One UI 7, and Galaxy AI
The Galaxy S25’s launch is less about the hardware and more of an opportunity for Samsung to introduce One UI 7, its AI-heavy take on Android 15. While there are several visual tweaks, the bigger change is in Galaxy AI’s expanded granularity and cohesiveness.
Both Samsung and Google are introducing new multimodal AI features with the Galaxy S25’s launch. Google Gemini Live will launch first on the Galaxy S25, for example, though it will eventually come to the Galaxy S24 and Google Pixel 9. It’s a full-fledged conversational AI companion that’s now the default assistant when long-pressing the home button. (Bixby is still available in its own app.)
Gemini Live supports natural language commands for generative tasks and on-device functions. You can feed it images and files to facilitate requests, and it can dive into multiple apps to help complete them.
You can also get more personalized daily summaries with Now Brief, which is accessible directly from the lock screen’s new Now Bar (which feels similar to the iPhone’s Dynamic Island). You’ll also notice a redesigned AI Select menu (which you may remember as Smart Select), 20 supported languages for on-device translations, call transcriptions directly within the dialer, and more. Most of these changes should port to older Galaxy flagships, but we’re not yet sure whether all of them will.
By the numbers
No, you’re not experiencing déjà vu — the Galaxy S25 smartphones feel largely familiar on paper, as our comparison chart below illustrates. Outside the processor bump, the hardware differences are pretty minor compared to Samsung’s last-gen phones.
The software changes are the most significant upgrades this year, but many of those features will come to older phones, too, thanks to the now-customary seven years of OS updates you’ll get when purchasing a flagship Galaxy phone. Check out the full specs below to see how exactly these devices compare.
Technology
Nvidia GeForce GPUs obliterate the competition in this popular video software benchmark
- Nvidia has a commanding lead over rivals in latest Adobe After Effects benchmarks
- Even lower-performance Nvidia GPUs outpace Intel and AMD cards
- But to Apple’s credit, the M3 Max pulls significantly ahead in 2D despite its laptop form factor
Nvidia’s GeForce RTX 40-series GPUs have shown significant advantages over comparable Intel and AMD cards when it comes to 3D workflows, new figures claim.
The latest Puget Systems After Effects benchmarks say Nvidia’s flagship GeForce RTX 4090 delivered up to 20 times the performance of Apple’s MacBook Pro M3 Max in 3D tasks, reflecting the card’s design focus on GPU-intensive workloads.
The 4090, equipped with 24GB of GDDR6X memory and 16,384 CUDA cores, nearly doubles the performance of Nvidia’s own mid-range RTX 4060 in the Advanced 3D tests, which utilize Adobe’s Advanced 3D rendering engine and are heavily dependent on GPU acceleration.
Nvidia RTX 4090 outperforms its rivals
Comparatively, the RTX 4060, featuring 8GB of GDDR6 memory and 3,072 CUDA cores, outpaces AMD’s flagship Radeon RX 7900 XTX, which boasts 24GB of GDDR6 memory and 6,144 stream processors.
Despite its superior memory capacity, the Radeon GPU trails the RTX 4060 by 25% in overall 3D performance.
Intel’s Arc GPUs, such as the Arc B580 with 12GB of VRAM and 3,456 cores, also fall short of Nvidia’s mid-range offerings, trailing the RTX 4060 by approximately 22%.
Apple’s M3 Max, equipped with 40 GPU cores, performs roughly 10 times slower than the RTX 4060 in GPU-accelerated 3D tasks. That squares with the flagship figures: the RTX 4090 roughly doubles the 4060, which works out to the claimed 20x lead over the M3 Max.
However, while Nvidia leads in 3D rendering, Apple’s M3 Max performs well in 2D workflows due to its CPU efficiencies. The MacBook Pro excels in projects emphasizing 2D layers and effects, where GPU performance plays a secondary role. Nevertheless, for CPU-dependent tracking tasks, Nvidia and Apple systems perform similarly.
Nvidia owes its dominance in After Effects 3D workflows to its advanced GPU architecture and software integration. The RTX 4090, for instance, benefits from the Ada Lovelace architecture and the CUDA framework, which optimize 3D GPU performance.
Technology
‘Neo-Nazi Madness’: Meta’s Top AI Lawyer on Why He Fired the Company
The one exception to that is the UMG v. Anthropic case, because, at least early on, earlier versions of Anthropic’s models would generate the lyrics to songs in their output. That’s a problem. The current status of that case is they’ve put safeguards in place to try to prevent that from happening, and the parties have sort of agreed that, pending the resolution of the case, those safeguards are sufficient, so they’re no longer seeking a preliminary injunction.
At the end of the day, the harder question for the AI companies is not “is it legal to engage in training?” It’s “what do you do when your AI generates output that is too similar to a particular work?”
Do you expect the majority of these cases to go to trial, or do you see settlements on the horizon?
There may well be some settlements. Where I expect to see settlements is with big players who either have large swaths of content or content that’s particularly valuable. The New York Times might end up with a settlement, and with a licensing deal, perhaps where OpenAI pays money to use New York Times content.
There’s enough money at stake that we’re probably going to get at least some judgments that set the parameters. The class-action plaintiffs, my sense is they have stars in their eyes. There are lots of class actions, and my guess is that the defendants are going to be resisting those and hoping to win on summary judgment. It’s not obvious that they go to trial. The Supreme Court in the Google v. Oracle case nudged fair-use law very strongly in the direction of being resolved on summary judgment, not in front of a jury. I think the AI companies are going to try very hard to get those cases decided on summary judgment.
Why would it be better for them to win on summary judgment versus a jury verdict?
It’s quicker and it’s cheaper than going to trial. And AI companies are worried that they’re not going to be viewed as popular, that a lot of people are going to think, “Oh, you made a copy of the work; that should be illegal,” and not dig into the details of the fair-use doctrine.
There have been lots of deals between AI companies and media outlets, content providers, and other rights holders. Most of the time, these deals appear to be more about search than foundational models, or at least that’s how it’s been described to me. In your opinion, is licensing content to be used in AI search engines—where answers are sourced by retrieval-augmented generation, or RAG—something that’s legally obligatory? Why are they doing it this way?
If you’re using retrieval augmented generation on targeted, specific content, then your fair-use argument gets more challenging. It’s much more likely that AI-generated search is going to generate text taken directly from one particular source in the output, and that’s much less likely to be a fair use. I mean, it could be—but the risky area is that it’s much more likely to be competing with the original source material. If instead of directing people to a New York Times story, I give them my AI prompt that uses RAG to take the text straight out of that New York Times story, that does seem like a substitution that could harm the New York Times. Legal risk is greater for the AI company.
What do you want people to know about the generative AI copyright fights that they might not already know, or they might have been misinformed about?
The thing that I hear most often that’s wrong as a technical matter is this concept that these are just plagiarism machines. All they’re doing is taking my stuff and then grinding it back out in the form of text and responses. I hear a lot of artists say that, and I hear a lot of lay people say that, and it’s just not right as a technical matter. You can decide if generative AI is good or bad. You can decide it’s lawful or unlawful. But it really is a fundamentally new thing we have not experienced before. The fact that it needs to train on a bunch of content to understand how sentences work, how arguments work, and to understand various facts about the world doesn’t mean it’s just kind of copying and pasting things or creating a collage. It really is generating things that nobody could expect or predict, and it’s giving us a lot of new content. I think that’s important and valuable.
Technology
UK probes Apple and Google over ‘mobile ecosystem’ market power
The U.K.’s Competition and Markets Authority (CMA) is launching “strategic market status” (SMS) investigations into the mobile ecosystems of Apple and Google.
The investigations are being carried out under the new Digital Markets, Competition and Consumers (DMCC) Act, which passed last year and came into effect in January. The Act gives the CMA new powers to designate companies as having strategic market status if they are deemed to be overly dominant, and to propose remedies and interventions to improve competition.
The CMA announced its first such SMS investigation last week, launching a probe into Google Search’s market share, which is reportedly around the 90% mark. The regulator said at the time that a second investigation would be coming in January, and we now know that it’s using its fresh powers to establish whether Apple and Google have strategic market status in their respective mobile ecosystems, which cover areas like browsers, app stores, and operating systems.
‘Holding back innovation’
Today’s announcement doesn’t come as a major surprise. Back in August, the CMA said it was closing a pair of investigations into Apple and Google’s respective mobile app ecosystems, which it had launched back in 2021. However, the CMA made it clear that this would be more of a pause, and that it would be looking to use its new powers to address competition concerns around the two biggest players in the mobile services market.
In November, an inquiry group set up by the CMA concluded that Apple’s mobile browser policies and a pact with Google were “holding back innovation” in the U.K. The findings noted that Apple forced third-party mobile browsers to use Apple’s browser engine, WebKit, which restricts what these browsers are able to do in comparison to Apple’s own Safari browser, and thus limits how they can effectively differentiate in what is a competitive market.
As part of its new probe, the CMA has now confirmed that it will look at “the extent of competition between and within” Apple’s and Google’s respective mobile ecosystems, including barriers that may be preventing others from competing. This will include whether either company is using its dominant position in operating systems, app distribution, or browsers to “favour their own apps and services” — many of which are bundled by default and can’t always be uninstalled.
On top of that, the CMA said it would look into whether either company imposes “unfair terms and conditions” on developers that wish to distribute their apps through their app stores.
Alex Haffner, competition partner at U.K. law firm Fladgate, said that today’s announcement was “wholly expected,” adding that the more interesting facet is how this new probe fits into the broader changes underway at the U.K. regulator.
Indeed, news emerged this week that the CMA had appointed ex-Amazon executive Doug Gurr as interim chair, constituting part of a wider shift as the U.K. positions itself as a pro-growth, pro-tech nation by cutting red tape and bureaucracy.
“What is more interesting is how this fits into the current sea change which is engulfing the broader organisation of the CMA and in particular the very clear steer it is getting from central government to ensure that regulation is consistently applied with its pro-growth agenda,” Haffner said in a statement issued to TechCrunch. “We can expect this to feature heavily once the CMA gets its teeth stuck into the specifics of the DMCC regime, and its dealings with the tech companies involved.”
Remedies
Today’s announcement kickstarts a three-week period during which relevant stakeholders are invited to submit comments as part of the investigations, with the outcomes expected to be announced by October 22, 2025. While it’s still early days, potential remedies — in the event that Apple and Google are deemed to have strategic market status — include requiring the companies to give third parties greater access to key functionality to help them better compete. They may also include making it easier to pay for services outside of Apple and Google’s existing app store structures.
In a statement issued to TechCrunch, an Apple spokesperson said the company will “continue to engage constructively with the CMA” as the investigation progresses.
“Apple believes in thriving and dynamic markets where innovation can flourish,” the spokesperson said. “We face competition in every segment and jurisdiction where we operate, and our focus is always the trust of our users. In the U.K. alone, the iOS app economy supports hundreds of thousands of jobs and makes it possible for developers big and small to reach users on a trusted platform.”
Oliver Bethell, senior director for competition at Google, echoed this sentiment, noting that the company “will work constructively with the CMA.”
“Android’s openness has helped to expand choice, reduce prices and democratise access to smartphones and apps. It’s the only example of a successful and viable open source mobile operating system,” Bethell wrote in a blog post today. “We favour a way forward that avoids stifling choice and opportunities for U.K. consumers and businesses alike, and without risk to U.K. growth prospects.”
Technology
Elon Musk slams Project Stargate initiative
- Elon Musk slams Project Stargate initiative
- Claims backers of $500bn initiative “don’t actually have the money”
- Sam Altman and Satya Nadella hit back at Musk’s claims
The global AI market appears to have descended into a playground battle of insults after Elon Musk, Sam Altman, Satya Nadella, and others all clashed over the launch of Project Stargate.
Revealed earlier this week to huge fanfare as part of the new Trump administration’s plans to boost AI across the US, Project Stargate is reportedly set to see as much as $500 billion invested into data centers to support the increasing data needs of Altman’s OpenAI.
However, X owner and newly anointed White House advisor Musk has sought to dampen enthusiasm, claiming in a series of online posts that Stargate’s investors (including Microsoft and SoftBank) “don’t actually have the money”.
Project Stargate “swindler”
The initial pledges by Stargate’s partners were around $100 billion, part of which is being invested into a data center in Abilene, Texas.
However, Musk looked to pour cold water on these claims, posting, “SoftBank has well under $10 billion secured. I have that on good authority.”
A later post, a reply to a post criticizing Altman, saw Musk say, “Sam is a swindler.”
For his part, Altman was quick to fire back, and in his own X post responding to Musk’s allegation that SoftBank was short of capital, stated, “Wrong, as you surely know.”
“[Stargate] is great for the country. i realize what is great for the country isn’t always what’s optimal for your companies, but in your new role, i hope you’ll mostly put [US] first,” he added.
In later posts, Altman told Musk, “I genuinely respect your accomplishments and think you are the most inspiring entrepreneur of our time,” later adding, “I don’t think [Musk is] a nice person or treating us fairly, but you have to respect the guy, and he pushes all of us to be more ambitious.”
Altman was not the only figure to fire back at Musk’s claims. Microsoft CEO Satya Nadella, asked about the spat in a CNBC interview at the World Economic Forum in Davos, declined to comment in detail but did say, “All I know is, I’m good for my $80 billion.”
Technology
OpenAI may preview its agent tool for users on the $200 per month Pro plan
We may see OpenAI’s agent tool, Operator, released sooner rather than later. Changes to ChatGPT’s code base suggest that Operator will be available as an early research preview to users on the $200 Pro subscription plan.
The changes aren’t yet publicly visible, but a user on X who goes by Choi spotted these updates in ChatGPT’s client-side code. TechCrunch separately identified the same references to Operator on OpenAI’s website.
Here are the three interesting tidbits we spotted:
- There are multiple references to the operator.chatgpt.com URL. This URL currently redirects to the main chatgpt.com web page.
- There will be a new popup that tells you to upgrade your plan if you want to try Operator. “Operator is currently only available to Pro users as an early research preview,” it says.
- On the page that lists the Plus and Pro plans, OpenAI will add “Access to research preview of Operator” as one of the benefits of the Pro plan.
Bloomberg previously reported that OpenAI was working on a general-purpose agent that can perform tasks in a web browser for you.
While this sounds a bit abstract, think about all the mundane things you do regularly in your web browser with quite a few clicks — following someone on LinkedIn, adding an expense in Concur, assigning a task to someone in Asana, or changing the status of a prospect on Salesforce. An agent could perform such multi-step tasks based on an instruction set.
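To make that less abstract, here’s a minimal sketch of what a scripted multi-step browser task looks like, using the open-source Playwright library. The planner function, selectors, and URL are hypothetical stand-ins; OpenAI hasn’t shared how Operator actually works under the hood.

```python
# Toy agent loop: a "planner" proposes the next browser action and
# Playwright executes it. Purely illustrative; Operator's internals
# have not been made public.
from playwright.sync_api import sync_playwright

def plan_next_action(goal: str, page_text: str) -> dict:
    """Hypothetical planner. A real agent would call a language model
    here, passing the goal and the current page state."""
    if "Task created" in page_text:  # goal reached in this toy example
        return {"type": "done"}
    return {
        "type": "fill_and_submit",
        "fields": {"#assignee": "alice", "#title": "File expense report"},
        "submit": "#create-task",
    }

def run_agent(goal: str, start_url: str, max_steps: int = 10) -> None:
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(start_url)
        for _ in range(max_steps):  # cap the loop so it always terminates
            action = plan_next_action(goal, page.inner_text("body"))
            if action["type"] == "done":
                break
            for selector, value in action["fields"].items():
                page.fill(selector, value)
            page.click(action["submit"])
        browser.close()

# Hypothetical task-tracker URL, for illustration only.
run_agent("assign a task to alice", "https://tasks.example.com/new")
```

The interesting engineering lives in the planner: how the model perceives the page (DOM text, screenshots, or both) and how it recovers when a click doesn’t do what it expected.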
More recently, The Information reported that OpenAI could launch Operator as early as this week. With today’s changes, it seems like everything is ready for a public launch.
Anthropic has released an AI model that can control your PC using a “Computer Use” API and local tools that control your mouse and keyboard. It is currently available as a beta feature for developers.
It looks like Operator is going to be usable on ChatGPT’s website, meaning that it won’t interact with your local computer. Instead, OpenAI will likely run a web browser on its own servers to perform tasks for you.
Nevertheless, it indicates that OpenAI’s work on agents that can operate computers is progressing. Operator appears to be a specific, sandboxed implementation of the company’s underlying agentic framework. It’s going to be interesting to see if the company has more information to share on the technology that powers Operator.
Technology
Beyerdynamic just released four IEMs pitched at different members of your band – yes, even the bassist
- Beyerdynamic’s new IEMs come in four specifications, one for every band member
- The numbers you need to remember are 70, 71, 72 or 73
- …Oh, and $499, which is the price
Revered hi-fi brand Beyerdynamic (see the Aventho 300 for the firm’s most recent headphone hit, but that’s just for starters) has released a new line of professional in-ear monitors, and the company wants you to know that every member of the band has been specifically catered for here.
The DT 70 IE, DT 71 IE, DT 72 IE, and DT 73 IE (that’s the full quartet) all feature Beyerdynamic’s own TESLA.11 dynamic driver system, boasting a Total Harmonic Distortion (often abbreviated to THD) of just 0.02%, which is very low indeed – anything below 0.1% is typically considered gifted for an in-ear monitor. Beyer calls it “one of the loudest, lowest-distortion systems available”, but you also get five different sizes of silicone eartips and three pairs of Comply memory foam eartips to achieve a decent fit and seal (nobody wants distractions from the Amazon delivery guy outside while trying to lay down a particular riff).
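For a sense of scale, THD is often quoted in decibels relative to the fundamental; a quick conversion using the standard 20·log10 formula (a sketch of the math, not Beyerdynamic’s published figures) shows how far below the usual 0.1% bar that 0.02% claim sits:

```python
import math

def thd_percent_to_db(thd_percent: float) -> float:
    """Convert total harmonic distortion from a percentage to dB
    relative to the fundamental: 20 * log10(distortion ratio)."""
    return 20 * math.log10(thd_percent / 100)

print(round(thd_percent_to_db(0.02)))  # -74 dB: the claimed TESLA.11 figure
print(round(thd_percent_to_db(0.1)))   # -60 dB: the usual "very low" threshold
```

In other words, the claimed distortion sits roughly 14dB below the level already considered excellent for an in-ear monitor.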
So what’s different in each set? The acoustic tuning, friend. For example, if you’re a drummer, Beyer knows you need crisp bass and clear treble with just slightly reduced mids, to get what you need from the mix – so the DT 71 IE is the pair for you…
The new Beyerdynamic IEMs will be available in Q2 2025, priced at $499.99 per pair, which is around £409 or AU$799, give or take (but those last two figures are guesstimates, rather than official prices).
Which of the Beyer bunch is best (for you)?
So, let’s briefly delve into which of Beyerdynamic’s quartet of IEMs might work best for you.
DT 70 IE is billed as the ideal set “for mixing and critical listening” with a “precise, linear tuning that follows the Fletcher-Munson curve”. So, it’s the set aimed squarely at the audiophile and the live mixer, with a cable that the company says “minimizes structure-borne noise”, plus a gold-plated MMCX connector for a stable, long-lasting connection.
DT 71 IE is quite simply “for drummers and bassists” with a tailored sound signature that Beyerdynamic assures us “enhances low frequencies while ensuring detailed reproduction of cymbals, percussion and bass guitar overtones” with slightly reduced mids (because some vocalists can be a lot).
Speaking of vocals, DT 72 IE is “for guitarists and singers” with a “subtly tuned bass” that its makers say won’t overwhelm during performance. Beyerdynamic also notes that the frequency response between 200-500 Hz compensates for the “occlusion effect,” which should nix any muffled mixes during the gig.
Finally, DT 73 IE is the pair for you if you’re an orchestral musician, pianist or keyboard player. Extra care here has been taken with treble overtones (there’s a subtle boost from 5kHz upwards), alongside natural bass and mids. It’s all about hearing intricate harmonic details clearly, but in a non-fatiguing sound profile.
Oh, and you may have spotted acclaimed jazz pianist, gospel artist, and producer Cory Henry in the press shots. That’s because he and Gina Miles (winner of The Voice season 23) will be helping to showcase the new products. How? By performing at select times at Beyerdynamic’s booth at the National Association of Music Merchants (NAMM) show in Anaheim, from Thursday, January 23 through Saturday, January 25. Don’t forget…
Technology
Researchers develop a way to power wearables through human skin
The dream of battery-free devices has taken an unlikely turn, as Carnegie Mellon researchers debuted Power-Over-Skin. The technology allows electrical currents to travel through human skin in a bid to power things like blood sugar monitors, pacemakers, and even consumer wearables like smart glasses and fitness trackers.
Researchers note the tech is still in “early stages.” At the moment, they’ve showcased it supporting low-power electronics, such as an LED earring.
“It’s similar to how a radio uses the air as the medium between the transmitter station and your car stereo,” notes CMU researcher Andy Kong. “We’re just using body tissue as the transmitting medium in this case.”
Technology
The AI lie: how trillion-dollar hype is killing humanity
AI companies like Google, OpenAI, and Anthropic want you to believe we’re on the cusp of Artificial General Intelligence (AGI)—a world where AI tools can outthink humans, handle complex professional tasks without breaking a sweat, and chart a new frontier of autonomous intelligence. Google just rehired the founder of Character.AI to accelerate its quest for AGI, OpenAI recently released its first “reasoning” model, and Anthropic’s CEO Dario Amodei says AGI could be achieved as early as 2026.
But here’s the uncomfortable truth: in the quest for AGI in high-stakes fields like medicine, law, veterinary advice, and financial planning, AI isn’t just “not there yet,” it may never get there.
CEO of Pearl AI Search, a division of JustAnswer.
The Hard Facts on AI’s Shortcomings
This year, Purdue researchers presented a study showing ChatGPT got programming questions wrong 52% of the time. In other equally high-stakes categories, GenAI does not fare much better.
When people’s health, wealth, and well-being hang in the balance, the current high failure rates of GenAI platforms are unacceptable. The hard truth is that this accuracy issue will be extremely challenging to overcome.
A recent Georgetown study suggests it might cost a staggering $1 trillion to improve AI’s quality by just 10%. Even then, it would remain worlds away from the reliability that matters in life-and-death scenarios. The “last mile” of accuracy — in which AI becomes undeniably safer than a human expert — will be far harder, more expensive, and more time-consuming to achieve than the public has been led to believe.
AI’s inaccuracy doesn’t just have theoretical or academic consequences. A 14-year-old boy recently sought guidance from an AI chatbot and, instead of directing him toward help, mental health resources, or even common decency, the AI urged him to take his own life. Tragically, he did. His family is now suing—and they’ll likely win—because the AI’s output wasn’t just a “hallucination” or cute error. It was catastrophic, and it came from a system that was wrong with utter conviction. Like Cliff Clavin, the Cheers character who recklessly wagered his entire Jeopardy! winnings, AI brims with confidence while spouting the completely wrong answer.
The Mechanical Turk 2.0—With a Twist
Today’s AI hype recalls the infamous 18th-century Mechanical Turk: a supposed chess-playing automaton that actually had a human hidden inside. Modern AI models also hide a dirty secret—they rely heavily on human input.
From annotating and cleaning training data to moderating the content of outputs, tens of millions of humans are still enmeshed in almost every step of advancing GenAI, but the big foundational model companies can’t afford to admit this. Doing so would be acknowledging how far we are from true AGI. Instead, these platforms are locked into a “fake it till you make it” strategy, raising billions to buy more GPUs on the flimsy promise that brute force will magically deliver AGI.
It’s a pyramid scheme of hype: persuade the public that AGI is imminent, secure massive funding, build more giant data centers that burn more energy, and hope that, somehow, more compute will bridge the gap that honest science says may never be crossed.
This is painfully reminiscent of the buzz around Alexa, Cortana, Bixby, and Google Assistant just a decade ago. Users were told voice assistants would take over the world within months. Yet today, many of these devices gather dust, mostly relegated to setting kitchen timers or giving the day’s weather. The grand revolution never happened, and it’s a cautionary tale for today’s even grander AGI promises.
Shielding Themselves from Liability
Why wouldn’t major AI platforms just admit the truth about their accuracy? Because doing so would open the floodgates of liability.
Acknowledging fundamental flaws in AI’s reasoning would provide a smoking gun in court, as in the tragic case of the 14-year-old boy. With trillions of dollars at stake, no executive wants to hand a plaintiff’s lawyer the ultimate piece of evidence: “We knew it was dangerously flawed, and we shipped it anyway.”
Instead, companies double down on marketing spin, calling these deadly mistakes “hallucinations,” as though that’s an acceptable trade-off. If a doctor told a child to kill himself, should we call that a “hallucination”? Or should we call it what it is — an unforgivable failure that deserves full legal consequence and permanent revocation of advice-giving privileges?
AI’s adoption plateau
People learned quickly that Alexa and the other voice assistants could not reliably answer their questions, so they just stopped using them for all but the most basic tasks. AI platforms will inevitably hit a similar adoption wall, endangering their current users while scaring away others who might otherwise try or rely on their platforms.
Think about the ups and downs of self-driving cars; despite carmakers’ huge autonomy promises – Tesla has committed to driverless robotaxis by 2027 – Goldman Sachs recently lowered its expectations for the use of even partially autonomous vehicles. Until autonomous cars meet a much higher standard, many humans will withhold complete trust.
Similarly, many users won’t put their full trust in AI even if it one day equals human intelligence; it must be vastly more capable than even the smartest human. Other users will be drawn in by AI’s ability to answer simple questions, then burned when they make high-stakes inquiries. For either group, AI’s shortcomings won’t make it a sought-after tool.
A Necessary Pivot: Incorporate Human Judgment
These flawed AI platforms can’t be used for critical tasks until they either achieve the mythical AGI status or incorporate reliable human judgment.
Given the trillion-dollar cost projections, environmental toll of massive data centers, and mounting human casualties, the choice is clear: put human expertise at the forefront. Let’s stop pretending that AGI is right around the corner. That false narrative is deceiving some people and literally killing others.
Instead, use AI to empower humans and create new jobs where human judgment moderates machine output. Make the experts visible rather than hiding them behind a smokescreen of corporate bravado. Until and unless AI attains near-perfect reliability, human professionals are indispensable. It’s time we stop the hype, face the truth, and build a future where AI serves humanity—instead of endangering it.
Technology
Psychology Can Be Harnessed to Combat Violent Extremism
This prediction is based on several decades of research that my colleagues and I have been undertaking at the University of Oxford to establish what makes people willing to fight and die for their groups. We use a variety of methods, including interviews, surveys, and psychological experiments to collect data from a wide range of groups, such as tribal warriors, armed insurgents, terrorists, conventional soldiers, religious fundamentalists, and violent football fans.
We have found that life-changing and group-defining experiences cause our personal and collective identities to become fused together. We call it “identity fusion.” Fused individuals will stop at nothing to advance the interests of their groups, and this applies not only to acts we would applaud as heroic—such as rescuing children from burning buildings or taking a bullet for one’s comrades—but also acts of suicide terrorism.
Fusion is commonly measured by showing people a small circle (representing you) and a big circle (representing your group) and placing pairs of such circles in a sequence so that they overlap to varying degrees: not at all, then just a little bit, then a bit more, and so on until the little circle is completely enclosed in the big circle. Then people are asked which pair of circles best captures their relationship with the group. People who choose the one in which the little circle is inside the big circle are said to be “fused.” Those are people who love their group so much that they will do almost anything to protect it.
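As a rough illustration of how responses to that pictorial scale get coded (a sketch following the common convention of counting only the fully enclosed option as fused, not the Oxford team’s exact protocol):

```python
# Pictorial fusion scale: options run from no overlap between the
# "self" circle and the "group" circle up to full enclosure.
OVERLAP_OPTIONS = ["none", "slight", "moderate", "substantial", "complete"]

def is_fused(choice: int) -> bool:
    """Count a respondent as fused only if they picked the option where
    the self circle sits entirely inside the group circle."""
    return choice == len(OVERLAP_OPTIONS) - 1

# Made-up sample responses (indices into OVERLAP_OPTIONS).
responses = [0, 2, 4, 4, 1, 3, 4]
rate = sum(is_fused(r) for r in responses) / len(responses)
print(f"Fusion rate: {rate:.0%}")  # 43% in this toy sample
```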
This willingness to protect the group isn’t unique to humans. Some species of birds will feign a broken wing to draw a predator away from their fledglings. One species—the superb fairy wren of Australasia—lures predators away from their young by making darting movements and squeaky sounds to imitate the behavior of a delectable mouse. Humans too will typically go to great lengths to protect their genetic relatives, especially their children who (except for identical twins) share more of their genes than other family members. But—unusually in the animal kingdom—humans often go further still by putting themselves in harm’s way to protect groups of genetically unrelated members of the tribe. In ancient prehistory, such tribes were small enough that everyone knew everybody else. These local groups bonded through shared ordeals such as painful initiations, by hunting dangerous animals together, and by fighting bravely on the battlefield.
Nowadays, however, fusion is scaled up to vastly bigger groups, thanks to the ability of the world’s media—including social media—to fill our heads with images of horrendous suffering in faraway regional conflicts.
When I met with one of the former leaders of the terrorist organization Jemaah Islamiyah in Indonesia, he told me he first became radicalized in the 1980s after reading newspaper reports about the treatment of fellow Muslims by Russian soldiers in Afghanistan. Twenty years later, however, nearly a third of American extremists were radicalized via social media feeds, and by 2016 that proportion had risen to about three quarters. Smartphones and immersive reporting shrink the world to such an extent that the forms of shared suffering once confined to face-to-face groups can now be largely recreated and spread to millions of people across thousands of miles at the click of a button.
Fusion based on shared suffering may be powerful, but is not sufficient by itself to motivate violent extremism. Our research suggests that three other ingredients are also necessary to produce the deadly cocktail: outgroup threat, demonization of the enemy, and the belief that peaceful alternatives are lacking. In regions such as Gaza, where the sufferings of civilians are regularly captured on video and shared around the world, it is only natural that rates of fusion among those watching on in horror will increase. If people believe that peaceful solutions are impossible, violent extremism will spiral.
Technology
Samsung teased an extra-thin S25 model at Unpacked
Samsung Unpacked’s “one more thing” was a bit of a weird one. After the presentation ended, the company rolled a brief pre-packaged video of the Galaxy S25 Edge — not to be confused with the similarly named “Star Wars” theme park.
Though limited, the reveal confirmed earlier rumors that the hardware giant is working on an extra-thin version of its new S25 flagship. The Galaxy S25 Edge is, presumably, another tier for the line, slotting in alongside the S25, S25 Plus, and S25 Ultra.
Key details, including pricing, availability, and actual thickness, were not revealed, though the company did showcase what appeared to be dummy models at Wednesday’s event. Early rumors pointed to a 6.4 mm thickness, a considerable reduction from the base Galaxy S25’s 7.2 mm.
Samsung clearly wanted to avoid taking too much wind out of the Galaxy S25’s sails during the event, so it opted instead for a more cryptic reveal. Even so, the mere appearance of the device at Unpacked may be enough to keep early adopters from preordering the S25 ahead of its February 7 release.
After all, those are precisely the folks who get excited by things like a 0.8 mm profile reduction.