Stockholm startup Neko Health has made a big bet on consumers wanting to learn about their state of health and how to prevent things going wrong. Now, investors are making a big bet on Neko.
The startup has raised a fresh $260 million in funding, a Series B that values Neko at $1.8 billion post-money, TechCrunch has learned exclusively.
Neko will use the capital to break into new markets like the U.S.; continue developing its diagnostics, potentially with acquisitions; and open more clinics in response to demand. With its waitlist now at over 100,000 people – up from 40,000 just a few months ago – Neko has scanned and evaluated 10,000 patients to date in clinics in Stockholm and its newer market of London.
“It’s very clear that there’s incredible demand for a different way of thinking about health care,” Hjalmar Nilsonne, the CEO and co-founder, said in an interview. He spoke to TechCrunch over a video link from New York, where he is working on laying the groundwork for setting up clinics in the U.S. market.
The U.S. is a priority, he said, because it currently accounts for more people on Neko’s waitlist than any other market outside Europe. “Of course, we want to come to the U.S. We think there’s a lot we could contribute to the ecosystem here, made possible by this funding round,” he added.
Lightspeed Venture Partners, a new investor in the company, is leading this Series B, with General Catalyst, O.G. Venture Partners, Rosello, Lakestar and Atomico participating. The round follows a Series A of $65 million in 2023 from Lakestar, Atomico, General Catalyst and Prima Materia, the investment firm co-founded by Spotify’s Daniel Ek, who happens to be the other co-founder of Neko. Prima Materia also seeded Neko with its initial funding but is not an investor in this latest round.
The funding and Neko’s growth are coming at a time when demands are shifting in the world of healthcare.
Around the world, whether healthcare systems are state-backed or privatized, there’s been a rising focus on preventative healthcare to spot signs before they develop into problems, including to offset the costs of handling chronic and complex conditions in populations that are living longer than before.
Alongside that, there has been a massive injection of technology into the worlds of medicine and health: new devices, new insights, and applications powered by, for example, artificial intelligence are changing how doctors are interacting with patients, what they are able to diagnose, and what patients are looking for in a medical environment.
Not all of these advances are evolving seamlessly — very far from it — but they show few signs of going away, and Neko is playing into all of these changes.
The Neko Health experience involves a visit to a clinic — calm, futuristic, minimalist — where, for £300, a customer gets an hour-long exam based around proprietary hardware and software. That exam generates “millions of health data points,” Neko says.
Moles and other marks on your skin are detected and counted as part of a check for skin cancer; waist circumference, blood pressure, blood sugar, cholesterol and triglyceride levels, heart rate, grip strength and other parameters are measured and used to determine whether you are at risk of metabolic syndrome, stroke, heart attack, diabetes and more. The visit includes a consultation with a doctor and recommendations for follow-ups if needed.
Those follow-ups might come shortly after the initial visit — for example, further monitoring of blood pressure or heart activity — or they might take the form of another full appointment the following year. Nilsonne said that currently 80% of its customers have rebooked and paid in advance for appointments in a year’s time.
For a company that has staked its whole ethos on the power of data and advance planning, Neko had a fairly random start in life.
It was co-founded back in 2018 after Ek reached out to Nilsonne over Twitter, in response to one of Nilsonne’s tweets, to chat about the state of the healthcare market. Neither has a background in the field – Nilsonne’s previous startup was in climate tech – but through ongoing conversations, early ideas for Neko began to form.
It took six years to bring together a team and work out Neko’s vertically integrated approach. Even so, Nilsonne said that Neko went into the market hoping for the best but unsure whether the idea would resonate; now, according to the company, demand exceeds capacity.
Looking ahead, along with building more clinics to take in more users, Neko is focused on R&D around its medical hardware and software.
It’s starting from a fairly low-tech baseline, a legacy of how costly it has been, until recently, to build and own medical devices. “The average ECG machine in primary care is 15 years old, meaning the software is 15 years old,” Nilsonne said. “We have a completely different model where we’re vertically integrated, meaning we make these devices, we make the software, and we have the clinic.”
He added that Neko’s aim is to have updates on a yearly cadence, bringing in more parameters to measure, and likely different tiers of service at different price points.
“The body scan today is kind of the iPod moment for Neko,” he said. “The iPod was an iconic product that people loved, and that was exciting. But no one today is using an iPod. It enabled Apple to invest in this incredible paradigm of handheld computational devices. So we very much see this as the beginning of a journey where we’re trying to contribute, you know, incredibly affordable, high quality preventative diagnostics, and every year we’re going to be able to do more and more with less and less.”
The funding round, he said, will “allow us to double down and really increase our investments in making the product better, which is ultimately about solving some of the core problems in health care.”
It will also give Neko a chance to put more space between itself and others pursuing preventative healthcare opportunities, such as Zoi in France and Aware in Germany. The capital could also set it apart from efforts by public health services, such as the Health Check provided by the NHS in the U.K., which covers many of the same areas that Neko does.
Some weeks ago, I heard from one of Neko’s early backers that some of the most insistent waitlisters were investors who wanted to check out the company first-hand for the health of their bodies and of their funds.
It seems that getting Lightspeed off the waitlist quickly yielded a strong result. As part of this funding round, Lightspeed partner Bejul Somaia will join Neko’s board.
The one exception to that is the UMG v. Anthropic case, because at least early on, earlier versions of Anthropic’s Claude would generate the lyrics to songs in its output. That’s a problem. The current status of that case is that they’ve put safeguards in place to try to prevent that from happening, and the parties have agreed that, pending the resolution of the case, those safeguards are sufficient, so they’re no longer seeking a preliminary injunction.
At the end of the day, the harder question for the AI companies is not “is it legal to engage in training?” It’s “what do you do when your AI generates output that is too similar to a particular work?”
Do you expect the majority of these cases to go to trial, or do you see settlements on the horizon?
There may well be some settlements. Where I expect to see settlements is with big players who either have large swaths of content or content that’s particularly valuable. The New York Times might end up with a settlement and a licensing deal, perhaps one where OpenAI pays money to use New York Times content.
There’s enough money at stake that we’re probably going to get at least some judgments that set the parameters. The class-action plaintiffs, my sense is they have stars in their eyes. There are lots of class actions, and my guess is that the defendants are going to be resisting those and hoping to win on summary judgment. It’s not obvious that they go to trial. The Supreme Court in the Google v. Oracle case nudged fair-use law very strongly in the direction of being resolved on summary judgment, not in front of a jury. I think the AI companies are going to try very hard to get those cases decided on summary judgment.
Why would it be better for them to win on summary judgment versus a jury verdict?
It’s quicker and it’s cheaper than going to trial. And AI companies are worried that they’re not going to be viewed as popular, that a lot of people are going to think, “Oh, you made a copy of the work; that should be illegal,” and not dig into the details of the fair-use doctrine.
There have been lots of deals between AI companies and media outlets, content providers, and other rights holders. Most of the time, these deals appear to be more about search than foundational models, or at least that’s how it’s been described to me. In your opinion, is licensing content to be used in AI search engines—where answers are sourced by retrieval augmented generation or RAG—something that’s legally obligatory? Why are they doing it this way?
If you’re using retrieval augmented generation on targeted, specific content, then your fair-use argument gets more challenging. It’s much more likely that AI-generated search is going to generate text taken directly from one particular source in the output, and that’s much less likely to be a fair use. I mean, it could be—but the risky area is that it’s much more likely to be competing with the original source material. If instead of directing people to a New York Times story, I give them my AI prompt that uses RAG to take the text straight out of that New York Times story, that does seem like a substitution that could harm the New York Times. Legal risk is greater for the AI company.
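For readers who want to ground that distinction, here is a minimal sketch of the RAG pattern Lemley is describing: retrieve one specific source, paste it into the prompt, and generate from it. Everything here (the toy keyword retriever, the `generate` callable) is a hypothetical stand-in for illustration, not any vendor’s actual pipeline.

```python
# Minimal sketch of retrieval-augmented generation (RAG).
# The keyword retriever and generate() are hypothetical stand-ins.

def rag_answer(query: str, corpus: list[str], generate) -> str:
    # 1. Retrieve: pick the document most relevant to the query.
    #    (Real systems use vector similarity; word overlap stands in here.)
    def overlap(doc: str) -> int:
        return len(set(query.lower().split()) & set(doc.lower().split()))
    source = max(corpus, key=overlap)

    # 2. Augment: paste the retrieved text into the prompt. This is the
    #    legally sensitive step -- the answer is now grounded in one
    #    specific source and may reproduce it closely.
    prompt = f"Answer using only this source:\n{source}\n\nQuestion: {query}"

    # 3. Generate: the model answers from the pasted-in text.
    return generate(prompt)
```

Step 2 is where the substitution risk he describes arises: the output is built directly from one identifiable article rather than from patterns learned across millions of works.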
What do you want people to know about the generative AI copyright fights that they might not already know, or they might have been misinformed about?
The thing that I hear most often that’s wrong as a technical matter is this concept that these are just plagiarism machines. All they’re doing is taking my stuff and then grinding it back out in the form of text and responses. I hear a lot of artists say that, and I hear a lot of lay people say that, and it’s just not right as a technical matter. You can decide if generative AI is good or bad. You can decide it’s lawful or unlawful. But it really is a fundamentally new thing we have not experienced before. The fact that it needs to train on a bunch of content to understand how sentences work, how arguments work, and to understand various facts about the world doesn’t mean it’s just kind of copying and pasting things or creating a collage. It really is generating things that nobody could expect or predict, and it’s giving us a lot of new content. I think that’s important and valuable.
The U.K.’s Competition and Markets Authority (CMA) is launching “strategic market status” (SMS) investigations into the mobile ecosystems of Apple and Google.
The investigations constitute part of the new Digital Markets, Competition and Consumers (DMCC) Act which passed last year and came into effect in January. The Act includes new powers for the CMA to designate companies as having strategic market status if they are deemed to be overly dominant, and propose remedies and interventions to improve competition.
The CMA announced its first such SMS investigation last week, launching a probe into Google Search’s market share, which is reportedly around the 90% mark. The regulator said at the time that a second investigation would follow in January, and we now know that it’s using its fresh powers to establish whether Apple and Google have strategic market status in their respective mobile ecosystems, covering areas like browsers, app stores, and operating systems.
‘Holding back innovation’
Today’s announcement doesn’t come as a major surprise. Back in August, the CMA said it was closing a pair of investigations into Apple’s and Google’s respective mobile app ecosystems, which it had launched back in 2021. However, the CMA made it clear that this was more of a pause, and that it would look to use its new powers to address competition concerns around the two biggest players in the mobile services market.
In November, an inquiry group set up by the CMA concluded that Apple’s mobile browser policies and a pact with Google were “holding back innovation” in the U.K. The findings noted that Apple forced third-party mobile browsers to use Apple’s browser engine, WebKit, which restricts what these browsers are able to do in comparison to Apple’s own Safari browser, and thus limits how they can effectively differentiate in what is a competitive market.
As part of its new probe, the CMA has now confirmed that it will look at “the extent of competition between and within” Apple’s and Google’s respective mobile ecosystems, including barriers that may be preventing others from competing. This will include whether either company is using its dominant position in operating systems, app distribution, or browsers to “favour their own apps and services” — many of which are bundled by default and can’t always be uninstalled.
On top of that, the CMA said it would look into whether either company imposes “unfair terms and conditions” on developers that wish to distribute their apps through their app stores.
Alex Haffner, competition partner at U.K. law firm Fladgate, said that today’s announcement was “wholly expected,” adding that the more interesting facet is how this new probe fits into the broader changes underway at the U.K. regulator.
Indeed, news emerged this week that the CMA had appointed ex-Amazon executive Doug Gurr as interim chair, constituting part of a wider shift as the U.K. positions itself as a pro-growth, pro-tech nation by cutting red tape and bureaucracy.
“What is more interesting is how this fits into the current sea change which is engulfing the broader organisation of the CMA and in particular the very clear steer it is getting from central government to ensure that regulation is consistently applied with its pro-growth agenda,” Haffner said in a statement issued to TechCrunch. “We can expect this to feature heavily once the CMA gets its teeth stuck into the specifics of the DMCC regime, and its dealings with the tech companies involved.”
Remedies
Today’s announcement kickstarts a three-week period during which relevant stakeholders are invited to submit comments as part of the investigations, with the outcomes expected to be announced by October 22, 2025. While it’s still early days, potential remedies — in the event that Apple and Google are deemed to have strategic market status — include requiring the companies to provide third parties with greater access to key functionality to help them better compete. They may also include making it easier to pay for services outside of Apple’s and Google’s existing app store structures.
In a statement issued to TechCrunch, an Apple spokesperson said that it will “continue to engage constructively with the CMA” as their investigation progresses.
“Apple believes in thriving and dynamic markets where innovation can flourish,” the spokesperson said. “We face competition in every segment and jurisdiction where we operate, and our focus is always the trust of our users. In the U.K. alone, the iOS app economy supports hundreds of thousands of jobs and makes it possible for developers big and small to reach users on a trusted platform.”
Oliver Bethell, senior director for competition at Google, echoed this sentiment, noting that the company “will work constructively with the CMA.”
“Android’s openness has helped to expand choice, reduce prices and democratise access to smartphones and apps. It’s the only example of a successful and viable open source mobile operating system,” Bethell wrote in a blog post today. “We favour a way forward that avoids stifling choice and opportunities for U.K. consumers and businesses alike, and without risk to U.K. growth prospects.”
Musk claims backers of the $500bn initiative “don’t actually have the money”
Sam Altman and Satya Nadella hit back at Musk’s claims
The global AI market appears to have descended into a playground battle of insults after Elon Musk, Sam Altman, Satya Nadella, and others all clashed over the launch of Project Stargate.
Revealed earlier this week to huge fanfare as part of the new Trump administration’s plans to boost AI across the US, Project Stargate is reportedly set to see as much as $500 billion invested into data centers to support the increasing data needs of Altman’s OpenAI.
However, X owner and newly anointed White House advisor Musk has sought to dampen enthusiasm, claiming in a series of online posts that Stargate’s investors (including Microsoft and SoftBank) “don’t actually have the money”.
Project Stargate “swindler”
The initial pledges by Stargate’s partners were around $100 billion, part of which is being invested into a data center in Abilene, Texas.
However, Musk looked to pour cold water on these claims, posting, “SoftBank has well under $10 billion secured. I have that on good authority.”
A later post, a reply to a post criticizing Altman, saw Musk say, “Sam is a swindler.”
For his part, Altman was quick to fire back, and in his own X post responding to Musk’s allegation that SoftBank was short of capital, stated “Wrong, as you surely know.”
“[Stargate] is great for the country. i realize what is great for the country isn’t always what’s optimal for your companies, but in your new role, i hope you’ll mostly put [US] first,” he added.
In later posts, Altman told Musk, “I genuinely respect your accomplishments and think you are the most inspiring entrepreneur of our time,” later adding, “I don’t think [Musk is] a nice person or treating us fairly, but you have to respect the guy, and he pushes all of us to be more ambitious.”
Altman was not the only figure to fire back at Musk’s claims, as Microsoft CEO Satya Nadella later declined to comment in detail, but did say, “All I know is, I’m good for my $80 billion,” when asked in a CNBC interview at the World Economic Forum in Davos.
We may see OpenAI’s agent tool, Operator, released sooner rather than later. Changes to ChatGPT’s code base suggest that Operator will be available as an early research preview to users on the $200 Pro subscription plan.
The changes aren’t yet publicly visible, but a user on X who goes by Choi spotted these updates in ChatGPT’s client-side code. TechCrunch separately identified the same references to Operator on OpenAI’s website.
Here are the three interesting tidbits we spotted:
There are multiple references to the operator.chatgpt.com URL. This URL currently redirects to the main chatgpt.com web page (a redirect you can verify yourself; see the snippet after this list).
There will be a new popup that tells you to upgrade your plan if you want to try Operator. “Operator is currently only available to Pro users as an early research preview,” it says.
On the page that lists the Plus and Pro plans, OpenAI will add “Access to research preview of Operator” as one of the benefits of the Pro plan.
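The first item is easy to check from the outside. Here’s a quick sketch using Python’s requests library (behavior as of this writing, and liable to change at any moment):

```python
import requests

# Ask where operator.chatgpt.com points, without following the redirect.
resp = requests.get("https://operator.chatgpt.com", allow_redirects=False)
print(resp.status_code)              # a 3xx status if the redirect is live
print(resp.headers.get("Location"))  # currently the main chatgpt.com page
```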
Bloomberg previously reported that OpenAI was working on a general-purpose agent that can perform tasks in a web browser for you.
While this sounds a bit abstract, think about all the mundane things you do regularly in your web browser with quite a few clicks — following someone on LinkedIn, adding an expense in Concur, assigning a task to someone in Asana, or changing the status of a prospect on Salesforce. An agent could perform such multi-step tasks based on an instruction set.
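OpenAI hasn’t described how Operator is built, but the general agent loop behind tools like this is well established: observe the page, ask the model for the next action, execute it, repeat. Below is a hedged sketch of that pattern; the `browser` object and `plan_next_step` model call are hypothetical placeholders, not Operator’s actual internals.

```python
# Generic browser-agent loop: a sketch of the pattern, not Operator itself.
# `browser` and `plan_next_step` are hypothetical placeholders.

def run_agent(instruction: str, browser, plan_next_step, max_steps: int = 20):
    history = []
    for _ in range(max_steps):
        # Ask the model for the next action, given the goal, the current
        # page state, and what has been done so far.
        action = plan_next_step(instruction, browser.snapshot(), history)
        if action["type"] == "done":
            return action["result"]
        if action["type"] == "click":
            browser.click(action["selector"])
        elif action["type"] == "type":
            browser.type_text(action["selector"], action["text"])
        history.append(action)
    raise TimeoutError("Agent did not finish within the step budget")
```

Multi-step tasks like “add this expense in Concur” fall out of the loop naturally: each iteration handles one click or keystroke until the model declares the job done.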
More recently, The Information reported that OpenAI could launch Operator as early as this week. With today’s changes, it seems like everything is ready for a public launch.
Anthropic has already released an AI model that can control your PC through a “Computer Use” API paired with local tools that move your mouse and keyboard. It is currently available as a beta feature for developers.
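For comparison, here’s roughly how Anthropic’s beta is invoked via its Python SDK, following the shape documented at the feature’s launch (model name, tool version, and beta flag were current in late 2024 and may have changed since):

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Request a response with the computer-use tool enabled (beta feature).
response = client.beta.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    tools=[{
        "type": "computer_20241022",   # virtual screen/mouse/keyboard tool
        "name": "computer",
        "display_width_px": 1024,
        "display_height_px": 768,
    }],
    messages=[{"role": "user", "content": "Open the settings page."}],
    betas=["computer-use-2024-10-22"],
)
print(response.content)
```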
It looks like Operator is going to be usable on ChatGPT’s website, meaning that it won’t interact with your local computer. Instead, OpenAI will likely run a web browser on its own servers to perform tasks for you.
Nevertheless, it indicates that OpenAI’s ability to interact with computers is progressing. Operator is a specific sandboxed implementation of the company’s underlying agentic framework. It’s going to be interesting to see if the company has more information to share on the technology that powers Operator.
Beyerdynamic’s new IEMs come in four specifications, for every band member
The numbers you need to remember are 70, 71, 72 or 73
…Oh, and $499, which is the price
Revered hi-fi brand Beyerdynamic (see the Aventho 300 for the firm’s most recent headphone hit, but that’s just for starters) has released a new line of professional in-ear monitors, and the company wants you to know that every member of the band has been specifically catered for here.
The DT 70 IE, DT 71 IE, DT 72 IE, and DT 73 IE (that’s the full quartet) all feature Beyerdynamic’s own TESLA.11 dynamic driver system, boasting a Total Harmonic Distortion (often abbreviated to THD) of just 0.02%, which is very low indeed – anything below 0.1% is typically considered excellent for an in-ear monitor. Beyer calls it “one of the loudest, lowest-distortion systems available”, but you also get five different sizes of silicone eartips and three pairs of Comply memory foam eartips to achieve a decent fit and seal (nobody wants distractions from the Amazon delivery guy outside while trying to lay down a particular riff).
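For context, THD measures how much of what you hear is unwanted harmonics added by the driver rather than the original tone. The textbook definition (not Beyerdynamic’s specific measurement procedure) is:

```latex
\mathrm{THD} = \frac{\sqrt{V_2^2 + V_3^2 + V_4^2 + \cdots}}{V_1} \times 100\%
```

where V1 is the amplitude of the fundamental and V2, V3, … are its harmonics. At 0.02%, the harmonic products sit roughly 74 dB below the fundamental, well past the point where most listeners could detect them.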
So what’s different in each set? The acoustic tuning, friend. For example, if you’re a drummer, Beyer knows you need crisp bass and clear treble with just slightly reduced mids, to get what you need from the mix – so the DT 71 IE is the pair for you…
The new Beyerdynamic IEMs will be available in Q2 2025, priced $499.99 per pair, which is around £409 or AU$799, give or take (but those last two figures are guesstimates, rather than official prices).
Which of the Beyer bunch is best (for you)?
So, let’s briefly delve into which of Beyerdynamic’s quartet of IEMs might work best for you.
DT 70 IE is billed as the ideal set “for mixing and critical listening” with a “precise, linear tuning that follows the Fletcher-Munson curve”. So, it’s the set aimed squarely at the audiophile and the live mixer, with a cable that the company says “minimizes structure-borne noise”, plus a gold-plated MMCX connector for a stable, long-lasting connection.
DT 71 IE is quite simply “for drummers and bassists”, with a tailored sound signature that Beyerdynamic assures us “enhances low frequencies while ensuring detailed reproduction of cymbals, percussion and bass guitar overtones” with slightly reduced mids (because some vocalists can be a lot).
Speaking of vocals, DT 72 IE is “for guitarists and singers” with a “subtly tuned bass” that its makers say won’t overwhelm during performance. Beyerdynamic also notes that the frequency response between 200-500 Hz compensates for the “occlusion effect,” which should nix any muffled mixes during the gig.
Finally, DT 73 IE is the pair for you if you’re an orchestral musician, pianist or keyboard player. Extra care here has been taken with treble overtones (there’s a subtle boost from 5kHz upwards), alongside natural bass and mids. It’s all about hearing intricate harmonic details clearly, but in a non-fatiguing sound profile.
Oh, and you may have spotted acclaimed jazz pianist, gospel artist and producer Cory Henry in the press shots. That’s because he and Gina Miles (winner of The Voice Season 23) will be helping to showcase the new products. How? By performing at select times at Beyerdynamic’s booth at the National Association of Music Merchants (or NAMM) show in Anaheim, from Thursday, January 23, through Saturday, January 25. Don’t forget…
The dream of battery-free devices has taken an unlikely turn, as Carnegie Mellon researchers debuted Power-Over-Skin. The technology allows for electrical currents to travel through human skin in a bid to power things like blood sugar monitors, pacemakers, and even consumer wearables like smart glasses and fitness trackers.
Researchers note the tech is still in “early stages.” At the moment, they’ve showcased the tech supporting low-power electronics, such as an LED earring.
“It’s similar to how a radio uses the air as the medium between the transmitter station and your car stereo,” notes CMU researcher Andy Kong. “We’re just using body tissue as the transmitting medium in this case.”
AI companies like Google, OpenAI, and Anthropic want you to believe we’re on the cusp of Artificial General Intelligence (AGI)—a world where AI tools can outthink humans, handle complex professional tasks without breaking a sweat, and chart a new frontier of autonomous intelligence. Google just rehired the founder of Character.AI to accelerate its quest for AGI, OpenAI recently released its first “reasoning” model, and Anthropic’s CEO Dario Amodei says AGI could be achieved as early as 2026.
But here’s the uncomfortable truth: in the quest for AGI in high-stakes fields like medicine, law, veterinary advice, and financial planning, AI isn’t just “not there yet,” it may never get there.
Andy Kurtzig
CEO of Pearl AI Search, a division of JustAnswer.
The Hard Facts on AI’s Shortcomings
This year, Purdue researchers presented a study showing ChatGPT got programming questions wrong 52% of the time. In other equally high-stakes categories, GenAI does not fare much better.
When people’s health, wealth, and well-being hang in the balance, the current high failure rates of GenAI platforms are unacceptable. The hard truth is that this accuracy issue will be extremely challenging to overcome.
A recent Georgetown study suggests it might cost a staggering $1 trillion to improve AI’s quality by just 10%. Even then, it would remain worlds away from the reliability that matters in life-and-death scenarios. The “last mile” of accuracy — in which AI becomes undeniably safer than a human expert — will be far harder, more expensive, and time consuming to achieve than the public has been led to believe.
AI’s inaccuracy doesn’t just have theoretical or academic consequences. A 14-year-old boy recently sought guidance from an AI chatbot and, instead of directing him toward help, mental health resources, or even common decency, the AI urged him to take his own life. Tragically, he did. His family is now suing—and they’ll likely win—because the AI’s output wasn’t just a “hallucination” or a cute error. It was catastrophic, and it came from a system that was wrong with utter conviction. Like the reckless Cliff Clavin of “Cheers” (who wagered his entire Jeopardy! winnings on the show), AI brims with confidence while spouting completely wrong answers.
The Mechanical Turk 2.0—With a Twist
Today’s AI hype recalls the infamous 18th-century Mechanical Turk: a supposed chess-playing automaton that actually had a human hidden inside. Modern AI models also hide a dirty secret—they rely heavily on human input.
From annotating and cleaning training data to moderating the content of outputs, tens of millions of humans are still enmeshed in almost every step of advancing GenAI, but the big foundational model companies can’t afford to admit this. Doing so would be acknowledging how far we are from true AGI. Instead, these platforms are locked into a “fake it till you make it” strategy, raising billions to buy more GPUs on the flimsy promise that brute force will magically deliver AGI.
It’s a pyramid scheme of hype: persuade the public that AGI is imminent, secure massive funding, build more giant data centers that burn more energy, and hope that, somehow, more compute will bridge the gap that honest science says may never be crossed.
This is painfully reminiscent of the buzz around Alexa, Cortana, Bixby, and Google Assistant just a decade ago. Users were told voice assistants would take over the world within months. Yet today, many of these devices gather dust, mostly relegated to setting kitchen timers or giving the day’s weather. The grand revolution never happened, and it’s a cautionary tale for today’s even grander AGI promises.
Shielding Themselves from Liability
Why wouldn’t major AI platforms just admit the truth about their accuracy? Because doing so would open the floodgates of liability.
Acknowledging fundamental flaws in AI’s reasoning would provide a smoking gun in court, as in the tragic case of the 14-year-old boy. With trillions of dollars at stake, no executive wants to hand a plaintiff’s lawyer the ultimate piece of evidence: “We knew it was dangerously flawed, and we shipped it anyway.”
Instead, companies double down on marketing spin, calling these deadly mistakes “hallucinations,” as though that’s an acceptable trade-off. If a doctor told a child to kill himself, should we call that a “hallucination”? Or should we call it what it is — an unforgivable failure that deserves full legal consequence and permanent revocation of advice-giving privileges?
AI’s Adoption Plateau
People learned quickly that Alexa and the other voice assistants could not reliably answer their questions, so they stopped using them for all but the most basic tasks. AI platforms will inevitably hit a similar adoption wall, endangering their current users while scaring away others who might otherwise try or come to rely on their platforms.
Think about the ups and downs of self-driving cars; despite carmakers’ huge autonomy promises – Tesla has committed to driverless robotaxis by 2027 – Goldman Sachs recently lowered its expectations for the use of even partially autonomous vehicles. Until autonomous cars meet a much higher standard, many humans will withhold complete trust.
Similarly, many users won’t put their full trust in AI even if it one day equals human intelligence; it must be vastly more capable than even the smartest human. Other users will be lulled in by AI’s ability to answer simple questions, only to be burned when they make high-stakes inquiries. For either group, AI’s shortcomings won’t make it a sought-after tool.
A Necessary Pivot: Incorporate Human Judgment
These flawed AI platforms can’t be used for critical tasks until they either achieve the mythical AGI status or incorporate reliable human judgment.
Given the trillion-dollar cost projections, environmental toll of massive data centers, and mounting human casualties, the choice is clear: put human expertise at the forefront. Let’s stop pretending that AGI is right around the corner. That false narrative is deceiving some people and literally killing others.
Instead, use AI to empower humans and create new jobs where human judgment moderates machine output. Make the experts visible rather than hiding them behind a smokescreen of corporate bravado. Until and unless AI attains near-perfect reliability, human professionals are indispensable. It’s time we stop the hype, face the truth, and build a future where AI serves humanity—instead of endangering it.
This article was produced as part of TechRadar Pro’s Expert Insights channel, where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadar Pro or Future plc. If you are interested in contributing, find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
This prediction is based on several decades of research that my colleagues and I have been undertaking at the University of Oxford to establish what makes people willing to fight and die for their groups. We use a variety of methods, including interviews, surveys, and psychological experiments to collect data from a wide range of groups, such as tribal warriors, armed insurgents, terrorists, conventional soldiers, religious fundamentalists, and violent football fans.
We have found that life-changing and group-defining experiences cause our personal and collective identities to become fused together. We call it “identity fusion.” Fused individuals will stop at nothing to advance the interests of their groups, and this applies not only to acts we would applaud as heroic—such as rescuing children from burning buildings or taking a bullet for one’s comrades—but also acts of suicide terrorism.
Fusion is commonly measured by showing people a small circle (representing you) and a big circle (representing your group) and placing pairs of such circles in a sequence so that they overlap to varying degrees: not at all, then just a little bit, then a bit more, and so on until the little circle is completely enclosed in the big circle. Then people are asked which pair of circles best captures their relationship with the group. People who choose the one in which the little circle is inside the big circle are said to be “fused.” Those are people who love their group so much that they will do almost anything to protect it.
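To make the instrument concrete, here’s a rough mock-up of what respondents see, drawn with matplotlib. The spacing and number of options are illustrative only, not the validated scale used in the studies.

```python
import matplotlib.pyplot as plt
from matplotlib.patches import Circle

# Mock-up of the pictorial fusion scale: a small circle ("me") sliding
# progressively into a large circle ("my group"). Illustrative only.
offsets = [1.6, 1.25, 1.0, 0.75, 0.0]  # distance between circle centers
fig, axes = plt.subplots(1, len(offsets), figsize=(12, 2.5))
for ax, d in zip(axes, offsets):
    ax.add_patch(Circle((0, 0), 1.0, fill=False))           # the group
    ax.add_patch(Circle((d, 0), 0.4, fill=False, ls="--"))  # the self
    ax.set_xlim(-1.2, 2.2)
    ax.set_ylim(-1.2, 1.2)
    ax.set_aspect("equal")
    ax.axis("off")
axes[-1].set_title("fully fused", fontsize=9)
plt.show()
```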
This isn’t unique to humans. Some species of birds will feign a broken wing to draw a predator away from their fledglings. One species—the superb fairy wren of Australasia—lures predators away from their young by making darting movements and squeaky sounds to imitate the behavior of a delectable mouse. Humans too will typically go to great lengths to protect their genetic relatives, especially their children who (except for identical twins) share more of their genes than other family members. But—unusually in the animal kingdom—humans often go further still by putting themselves in harm’s way to protect groups of genetically unrelated members of the tribe. In ancient prehistory, such tribes were small enough that everyone knew everybody else. These local groups bonded through shared ordeals such as painful initiations, by hunting dangerous animals together, and by fighting bravely on the battlefield.
Nowadays, however, fusion is scaled up to vastly bigger groups, thanks to the ability of the world’s media—including social media—to fill our heads with images of horrendous suffering in faraway regional conflicts.
When I met with one of the former leaders of the terrorist organization Jemaah Islamiyah in Indonesia, he told me he first became radicalized in the 1980s after reading newspaper reports about the treatment of fellow Muslims by Russian soldiers in Afghanistan. Twenty years later, however, nearly a third of American extremists were radicalized via social media feeds, and by 2016 that proportion had risen to about three quarters. Smartphones and immersive reporting shrink the world to such an extent that forms of shared suffering once confined to face-to-face groups can now be largely recreated and spread to millions of people across thousands of miles at the click of a button.
Fusion based on shared suffering may be powerful, but is not sufficient by itself to motivate violent extremism. Our research suggests that three other ingredients are also necessary to produce the deadly cocktail: outgroup threat, demonization of the enemy, and the belief that peaceful alternatives are lacking. In regions such as Gaza, where the sufferings of civilians are regularly captured on video and shared around the world, it is only natural that rates of fusion among those watching on in horror will increase. If people believe that peaceful solutions are impossible, violent extremism will spiral.
Samsung Unpacked’s “one more thing” was a bit of a weird one. After the presentation ended, the company rolled a brief pre-packaged video of the Galaxy Edge — not to be confused with the “Star Wars” theme park of the same name.
Though limited, the reveal was confirmation of earlier rumors that the hardware giant is working on an extra-thin version of its new S25 flagship. The Galaxy S25 Edge is, presumably, another tier for the line, slotting in alongside the S25, S25+, and S25 Ultra.
Key details, including pricing, availability, and actual thickness were not revealed, though the company did showcase what appeared to be dummy models at Wednesday’s event. Early rumors pointed to a 6.4 mm thickness, a considerable reduction from the base Galaxy S25’s 7.2 mm.
Samsung clearly wanted to avoid taking too much wind out of the Galaxy S25’s sails during the event, so it opted instead for a more cryptic reveal. Even so, the mere appearance of the device at Unpacked may be enough to keep early adopters from preordering the S25 ahead of its February 7 release.
After all, those are precisely the folks who get excited by things like a 0.8 mm profile reduction.
GMKTec joins HP with a Ryzen AI Max+ 395 workstation mini PC
The Max+ 395 is currently the world’s most powerful APU and could be a nuisance to Nvidia’s DIGITS GB10
Expect products based on the 395 to roll out later in Q2 2025 after Chinese New Year
GMK, an emerging Chinese brand in the mini PC market, has announced (originally in Chinese) the upcoming launch of a new product powered by the AMD Ryzen AI Max+ 395.
The company claims this will be the world’s first mini PC featuring the Ryzen AI Max+ 395 chip. It also plans to offer versions with non-Plus Ryzen AI Max APUs.
According to ITHome (originally in Chinese), the device is part of GMK’s “ALL IN AI” strategy and is expected to debut in the first or second quarter of 2025.
AMD’s Ryzen AI Max+ 395 chip
The AMD Ryzen AI Max+ 395 processor boasts 16 Zen 5 cores, 32 threads, and a 5.1 GHz peak clock speed. Additionally, it integrates 40 RDNA 3.5 compute units, delivering solid graphics performance via the Radeon 8060S iGPU.
According to benchmarks, the Ryzen AI Max+ 395 outpaces the Intel Lunar Lake Core Ultra 9 288V in CPU tasks by threefold and surpasses Nvidia’s GeForce RTX 4090 in AI performance tests.
With a configurable TDP of 45-120W, the processor balances efficiency and performance, positioning itself as a competitive choice for AI workloads, gaming, and mobile workstations.
This platform adopts LPDDR5X memory, achieving a bandwidth of up to 256GB/s. It also integrates a 50 TOPS “XDNA 2” NPU, providing impressive AI performance tailored towards Windows 11 AI+ PCs.
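That 256GB/s figure is consistent with a simple back-of-envelope check, assuming the 256-bit LPDDR5X-8000 configuration AMD quotes for this platform:

```latex
8000\,\text{MT/s} \times 256\,\text{bit} \div 8\,\tfrac{\text{bits}}{\text{byte}} = 256{,}000\,\text{MB/s} = 256\,\text{GB/s}
```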
The Max+ 395’s specs suggest that the new GMK mini PC will likely surpass the performance of the current Evo X1 model, which features a Ryzen AI 9 HX 370 (“Strix Point”) APU and is priced at $919.