Caffeinated Capital, a San Francisco venture firm founded by solo capitalist Raymond Tonsing, is raising a fifth fund of $400 million, according to a regulatory filing.
The firm, an early investor in software company Airtable and defense startup Saronic, has already raised $160 million toward the fund. If Caffeinated hits its target, it will be the 15-year-old firm’s largest capital haul to date. Although the outfit didn’t announce its previous fund, PitchBook data estimates that Caffeinated closed its fourth fund with a total of $209 million in commitments.
Tonsing was Caffeinated’s only general partner until 2020, when Varun Gupta, who led data science and machine learning at Affirm, joined him as a second general partner.
Tonsing was an early investor in Affirm, a buy-now-pay-later platform that went public in 2021. The firm’s other notable exits include A/B testing startup Optimizely, which PitchBook estimates was sold for $600 million in 2020.
Nvidia has commanding lead over rivals in latest Adobe After Effects benchmarks
Even lower-performance Nvidia GPUs outpace Intel and AMD cards
But to Apple’s credit, the M3 Max pulls significantly ahead in 2D despite its laptop form factor
Nvidia’s GeForce RTX 40-Series GPUs have shown significant advantages over comparable Intel and AMD cards in 3D workflows, new figures claim.
The latest Puget Systems After Effects benchmarks show Nvidia’s flagship GeForce RTX 4090 delivering up to 20 times the performance of Apple’s MacBook Pro M3 Max in 3D tasks, reflecting the card’s design focus on GPU-intensive workloads.
The 4090, equipped with 24GB of GDDR6X memory and 16,384 CUDA cores, nearly doubles the performance of Nvidia’s own mid-range RTX 4060 in the Advanced 3D tests, which use Adobe’s Advanced 3D rendering engine and are heavily dependent on GPU acceleration.
Nvidia RTX 4090 outperforms its rivals
Comparatively, the RTX 4060, featuring 8GB of GDDR6 memory and 3,072 CUDA cores, outpaces AMD’s flagship Radeon RX 7900 XTX, which boasts 24GB of GDDR6 memory and 6,144 stream processors.
Despite its superior memory capacity, the Radeon GPU trails the RTX 4060 by 25% in overall 3D performance.
Intel’s Arc GPUs, such as the Arc B580 with 12GB of VRAM and 3,456 cores, also fall short of Nvidia’s mid-range offerings, trailing the RTX 4060 by approximately 22%.
Apple’s M3 Max, equipped with 40 GPU cores, performs roughly 10 times slower than the RTX 4060 in GPU-accelerated 3D tasks.
However, while Nvidia leads in 3D rendering, Apple’s M3 Max performs well in 2D workflows due to its CPU efficiencies. The MacBook Pro excels in projects emphasizing 2D layers and effects, where GPU performance plays a secondary role. Nevertheless, for CPU-dependent tracking tasks, Nvidia and Apple systems perform similarly.
Nvidia owes its dominance in After Effects 3D workflows to its advanced GPU architecture and software integration. The RTX 4090, for instance, benefits from technologies like the Ada Lovelace architecture and the CUDA framework, which optimize 3D GPU performance.
The one exception to that is the UMG v. Anthropic case, because at least early on, earlier versions of Anthropic’s model would generate song lyrics in the output. That’s a problem. The current status of that case is they’ve put safeguards in place to try to prevent that from happening, and the parties have sort of agreed that, pending the resolution of the case, those safeguards are sufficient, so they’re no longer seeking a preliminary injunction.
At the end of the day, the harder question for the AI companies is not is it legal to engage in training? It’s what do you do when your AI generates output that is too similar to a particular work?
Do you expect the majority of these cases to go to trial, or do you see settlements on the horizon?
There may well be some settlements. Where I expect to see settlements is with big players who either have large swaths of content or content that’s particularly valuable. The New York Times might end up with a settlement, and with a licensing deal, perhaps where OpenAI pays money to use New York Times content.
There’s enough money at stake that we’re probably going to get at least some judgments that set the parameters. The class-action plaintiffs, my sense is they have stars in their eyes. There are lots of class actions, and my guess is that the defendants are going to be resisting those and hoping to win on summary judgment. It’s not obvious that they go to trial. The Supreme Court in the Google v. Oracle case nudged fair-use law very strongly in the direction of being resolved on summary judgment, not in front of a jury. I think the AI companies are going to try very hard to get those cases decided on summary judgment.
Why would it be better for them to win on summary judgment versus a jury verdict?
It’s quicker and it’s cheaper than going to trial. And AI companies are worried that they’re not going to be viewed as popular, that a lot of people are going to think, “Oh, you made a copy of the work; that should be illegal,” and not dig into the details of the fair-use doctrine.
There have been lots of deals between AI companies and media outlets, content providers, and other rights holders. Most of the time, these deals appear to be more about search than foundational models, or at least that’s how it’s been described to me. In your opinion, is licensing content to be used in AI search engines—where answers are sourced by retrieval augmented generation or RAG—something that’s legally obligatory? Why are they doing it this way?
If you’re using retrieval augmented generation on targeted, specific content, then your fair-use argument gets more challenging. It’s much more likely that AI-generated search is going to generate text taken directly from one particular source in the output, and that’s much less likely to be a fair use. I mean, it could be—but the risky area is that it’s much more likely to be competing with the original source material. If instead of directing people to a New York Times story, I give them my AI prompt that uses RAG to take the text straight out of that New York Times story, that does seem like a substitution that could harm the New York Times. Legal risk is greater for the AI company.
What do you want people to know about the generative AI copyright fights that they might not already know, or they might have been misinformed about?
The thing that I hear most often that’s wrong as a technical matter is this concept that these are just plagiarism machines. All they’re doing is taking my stuff and then grinding it back out in the form of text and responses. I hear a lot of artists say that, and I hear a lot of lay people say that, and it’s just not right as a technical matter. You can decide if generative AI is good or bad. You can decide it’s lawful or unlawful. But it really is a fundamentally new thing we have not experienced before. The fact that it needs to train on a bunch of content to understand how sentences work, how arguments work, and to understand various facts about the world doesn’t mean it’s just kind of copying and pasting things or creating a collage. It really is generating things that nobody could expect or predict, and it’s giving us a lot of new content. I think that’s important and valuable.
The U.K.’s Competition and Markets Authority (CMA) is launching “strategic market status” (SMS) investigations into the mobile ecosystems of Apple and Google.
The investigations constitute part of the new Digital Markets, Competition and Consumers (DMCC) Act which passed last year and came into effect in January. The Act includes new powers for the CMA to designate companies as having strategic market status if they are deemed to be overly dominant, and propose remedies and interventions to improve competition.
The CMA announced its first such SMS investigation last week, launching a probe into Google Search’s market share, which is reportedly around the 90% mark. The regulator announced at the time that a second one would be coming in January, and we now know that it’s using its fresh powers to establish whether Apple and Google have strategic market status in their respective mobile ecosystems, which cover areas like browsers, app stores, and operating systems.
‘Holding back innovation’
Today’s announcement doesn’t come as a major surprise. Back in August, the CMA said it was closing a duo of investigations into Apple and Google’s respective mobile app ecosystems, which it had launched back in 2021. However, the CMA made it clear that this would be more of a pause, and that it would be looking to use its new powers to address competition concerns around the two biggest players in the mobile services market.
In November, an inquiry group set up by the CMA concluded that Apple’s mobile browser policies and a pact with Google were “holding back innovation” in the U.K. The findings noted that Apple forced third-party mobile browsers to use Apple’s browser engine, WebKit, which restricts what these browsers are able to do in comparison to Apple’s own Safari browser, and thus limits how they can effectively differentiate in what is a competitive market.
As part of its new probe, the CMA has now confirmed that it will look at “the extent of competition between and within” Apple’s and Google’s respective mobile ecosystems, including barriers that may be preventing others from competing. This will include whether either company is using its dominant position in operating systems, app distribution, or browsers to “favour their own apps and services” — many of which are bundled by default and can’t always be uninstalled.
On top of that, the CMA said it would look into whether either company imposes “unfair terms and conditions” on developers who wish to distribute their apps through the companies’ app stores.
Alex Haffner, competition partner at U.K. law firm Fladgate, said that today’s announcement was “wholly expected,” adding that the more interesting facet is how this new probe fits into the broader changes underway at the U.K. regulator.
Indeed, news emerged this week that the CMA had appointed ex-Amazon executive Doug Gurr as interim chair, constituting part of a wider shift as the U.K. positions itself as a pro-growth, pro-tech nation by cutting red tape and bureaucracy.
“What is more interesting is how this fits into the current sea change which is engulfing the broader organisation of the CMA and in particular the very clear steer it is getting from central government to ensure that regulation is consistently applied with its pro-growth agenda,” Haffner said in a statement issued to TechCrunch. “We can expect this to feature heavily once the CMA gets its teeth stuck into the specifics of the DMCC regime, and its dealings with the tech companies involved.”
Remedies
Today’s announcement kickstarts a three-week period during which relevant stakeholders are invited to submit comments as part of the investigations, with the outcomes expected to be announced by October 22, 2025. While it’s still early days, potential remedies — in the event that Apple and Google are deemed to have strategic market status — include requiring the companies to provide third parties with greater access to key functionality to help them better compete. They may also include making it easier to pay for services outside of Apple and Google’s existing app store structures.
In a statement issued to TechCrunch, an Apple spokesperson said the company will “continue to engage constructively with the CMA” as the investigation progresses.
“Apple believes in thriving and dynamic markets where innovation can flourish,” the spokesperson said. “We face competition in every segment and jurisdiction where we operate, and our focus is always the trust of our users. In the U.K. alone, the iOS app economy supports hundreds of thousands of jobs and makes it possible for developers big and small to reach users on a trusted platform.”
Oliver Bethell, senior director for competition at Google, echoed this sentiment, noting that the company “will work constructively with the CMA.”
“Android’s openness has helped to expand choice, reduce prices and democratise access to smartphones and apps. It’s the only example of a successful and viable open source mobile operating system,” Bethell wrote in a blog post today. “We favour a way forward that avoids stifling choice and opportunities for U.K. consumers and businesses alike, and without risk to U.K. growth prospects.”
Claims backers of $500bn initiative “don’t actually have the money”
Sam Altman and Satya Nadella hit back at Musk claims
The global AI market appears to have descended into a playground battle of insults after Elon Musk, Sam Altman, Satya Nadella, and others all clashed over the launch of Project Stargate.
Revealed earlier this week to huge fanfare as part of the new Trump administration’s plans to boost AI across the US, Project Stargate is reportedly set to see as much as $500 billion invested in data centers to support the increasing data needs of Altman’s OpenAI.
However, X owner and newly anointed White House advisor Musk has sought to dampen enthusiasm, claiming in a series of online posts that Stargate’s investors (including Microsoft and SoftBank) “don’t actually have the money”.
Project Stargate “swindler”
The initial pledges by Stargate’s partners were around $100 billion, part of which is being invested into a data center in Abilene, Texas.
However, Musk looked to pour cold water on these plans, posting, “SoftBank has well under $10 billion secured. I have that on good authority.”
A later post, a reply to a post criticizing Altman, saw Musk say, “Sam is a swindler.”
For his part, Altman was quick to fire back, and in his own X post responding to Musk’s allegation that SoftBank was short of capital, stated “Wrong, as you surely know.”
“[Stargate] is great for the country. i realize what is great for the country isn’t always what’s optimal for your companies, but in your new role, i hope you’ll mostly put [US] first,” he added.
In later posts, Altman told Musk, “I genuinely respect your accomplishments and think you are the most inspiring entrepreneur of our time,” later adding, “I don’t think [Musk is] a nice person or treating us fairly, but you have to respect the guy, and he pushes all of us to be more ambitious.”
Altman was not the only figure to fire back at Musk’s claims, as Microsoft CEO Satya Nadella later declined to comment in detail, but did say, “All I know is, I’m good for my $80 billion,” when asked in a CNBC interview at the World Economic Forum in Davos.
We may see OpenAI’s agent tool, Operator, released sooner rather than later. Changes to ChatGPT’s code base suggest that Operator will be available as an early research preview to users on the $200 Pro subscription plan.
The changes aren’t yet publicly visible, but a user on X who goes by Choi spotted these updates in ChatGPT’s client-side code. TechCrunch separately identified the same references to Operator on OpenAI’s website.
Here are the three interesting tidbits we spotted:
There are multiple references to the operator.chatgpt.com URL. This URL currently redirects to the main chatgpt.com web page.
There will be a new popup that tells you to upgrade your plan if you want to try Operator. “Operator is currently only available to Pro users as an early research preview,” it says.
On the page that lists the Plus and Pro plans, OpenAI will add “Access to research preview of Operator” as one of the benefits of the Pro plan.
Bloomberg previously reported that OpenAI was working on a general-purpose agent that can perform tasks in a web browser for you.
While this sounds a bit abstract, think about all the mundane things you do regularly in your web browser with quite a few clicks — following someone on LinkedIn, adding an expense in Concur, assigning a task to someone in Asana, or changing the status of a prospect on Salesforce. An agent could perform such multi-step tasks based on an instruction set.
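To make that more concrete, here is a minimal, purely hypothetical sketch of the pattern such a browser agent could follow: a scripted instruction set executed step by step against a web page. It uses the open-source Playwright library for illustration only; the URL, selectors, and steps are invented, and nothing here reflects how OpenAI has actually built Operator.

```python
# Hypothetical sketch: an "agent" executing a multi-step instruction set in a browser.
# Requires Playwright (pip install playwright && playwright install chromium).
# The URL, selectors, and steps below are made up for illustration.
from playwright.sync_api import sync_playwright

INSTRUCTIONS = [
    {"action": "goto",  "url": "https://example.com/tasks"},
    {"action": "fill",  "selector": "#task-title", "text": "File expense report"},
    {"action": "click", "selector": "text=Assign to me"},
    {"action": "click", "selector": "button[type=submit]"},
]

def run_agent(steps):
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        for step in steps:
            # Dispatch each instruction to the matching browser action.
            if step["action"] == "goto":
                page.goto(step["url"])
            elif step["action"] == "fill":
                page.fill(step["selector"], step["text"])
            elif step["action"] == "click":
                page.click(step["selector"])
        browser.close()

if __name__ == "__main__":
    run_agent(INSTRUCTIONS)
```

A real agent would generate those steps on the fly from a natural-language request rather than a hard-coded list, but the execution loop — translate each decided step into a browser action, observe the result, continue — looks broadly similar.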
More recently, The Information reported that OpenAI could launch Operator as early as this week. With today’s changes, it seems like everything is ready for a public launch.
Anthropic has released an AI model that can control your PC using a “Computer Use” API and local tools that control your mouse and keyboard. It is currently available as a beta feature for developers.
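For a sense of how Anthropic exposes that capability, its publicly documented Computer Use beta is invoked roughly as below; the prompt and screen dimensions are placeholder values. The model replies with tool-use actions (screenshots, clicks, keystrokes) that your own code must carry out locally and report back in a loop.

```python
# Sketch of a request against Anthropic's Computer Use beta (as documented in late 2024).
# The prompt text and display size are placeholders.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.beta.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    betas=["computer-use-2024-10-22"],
    tools=[{
        "type": "computer_20241022",
        "name": "computer",
        "display_width_px": 1280,
        "display_height_px": 800,
    }],
    messages=[{"role": "user", "content": "Open the settings page and enable dark mode."}],
)

# The response contains tool_use blocks describing actions that a local harness
# must execute (move the mouse, click, type, take a screenshot) before continuing.
for block in response.content:
    print(block.type)
```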
It looks like Operator is going to be usable on ChatGPT’s website, meaning that it won’t interact with your local computer. Instead, OpenAI will likely run a web browser on its own servers to perform tasks for you.
Nevertheless, it indicates that OpenAI’s ability to interact with computers is progressing. Operator is a specific sandboxed implementation of the company’s underlying agentic framework. It’s going to be interesting to see if the company has more information to share on the technology that powers Operator.
Beyerdynamic’s new IEMs come in four specifications, for every band member
The numbers you need to remember are 70, 71, 72 or 73
…Oh, and $499, which is the price
Revered hi-fi brand Beyerdynamic (see the Aventho 300 for the firm’s most recent headphone hit, but that’s just for starters) has released a new line of professional in-ear monitors, and the company wants you to know that every member of the band has been specifically catered for here.
The DT 70 IE, DT 71 IE, DT 72 IE, and DT 73 IE (that’s the full quartet) all feature Beyerdynamic’s own TESLA.11 dynamic driver system, boasting a Total Harmonic Distortion (often abbreviated to THD) of just 0.02%, which is very low indeed – anything below 0.1% is typically considered gifted for an in-ear monitor. Beyer calls it “one of the loudest, lowest-distortion systems available”, but you also get five different sizes of silicone eartips and three pairs of Comply memory foam eartips to achieve a decent fit and seal (nobody wants distractions from the Amazon delivery guy outside while trying to lay down a particular riff).
So what’s different in each set? The acoustic tuning, friend. For example, if you’re a drummer, Beyer knows you need crisp bass and clear treble with just slightly reduced mids, to get what you need from the mix – so the DT 71 IE is the pair for you…
The new Beyerdynamic IEMs will be available in Q2 2025, priced $499.99 per pair, which is around £409 or AU$799, give or take (but those last two figures are guesstimates, rather than official prices).
Which of the Beyer bunch is best (for you)?
So, let’s briefly delve into which of Beyerdynamic’s quartet of IEMs might work best for you.
DT 70 IE is billed as the ideal set “for mixing and critical listening” with a “precise, linear tuning that follows the Fletcher-Munson curve”. So, it’s the set aimed squarely at the audiophile and the live mixer, with a cable that the company says “minimizes structure-borne noise”, plus a gold-plated MMCX connector for a stable, long-lasting connection.
DT 71 IE is quite simply “for drummers and bassists” with a tailored sound signature that Beyerdynamic assures us “enhances low frequencies while ensuring detailed reproduction of cymbals, percussion and bass guitar overtones” with slightly reduced mids (because some vocalists can be a lot).
Speaking of vocals, DT 72 IE is “for guitarists and singers” with a “subtly tuned bass” that its makers say won’t overwhelm during performance. Beyerdynamic also notes that the frequency response between 200-500 Hz compensates for the “occlusion effect,” which should nix any muffled mixes during the gig.
Finally, DT 73 IE is the pair for you if you’re an orchestral musician, pianist or keyboard player. Extra care here has been taken with treble overtones (there’s a subtle boost from 5kHz upwards), alongside natural bass and mids. It’s all about hearing intricate harmonic details clearly, but in a non-fatiguing sound profile.
Oh, and you may have spotted acclaimed jazz pianist, gospel artist and producer Cory Henry in the press shots. That’s because he and Gina Miles (winner of The Voice Season 23) will be helping to showcase the new products. How? By performing at select times at Beyerdynamic’s booth at the National Association of Music Merchants (or NAMM) show in Anaheim, from Thursday, January 23 through Saturday, January 25. Don’t forget…
The dream of battery-free devices has taken an unlikely turn, as Carnegie Mellon researchers debuted Power-Over-Skin. The technology allows for electrical currents to travel through human skin in a bid to power things like blood sugar monitors, pacemakers, and even consumer wearables like smart glasses and fitness trackers.
Researchers note the tech is still in “early stages.” At the moment, they’ve showcased the tech supporting low-power electronics like the LED earring pictured above.
“It’s similar to how a radio uses the air as the medium between the transmitter station and your car stereo,” notes CMU researcher Andy Kong. “We’re just using body tissue as the transmitting medium in this case.”
AI companies like Google, OpenAI, and Anthropic want you to believe we’re on the cusp of Artificial General Intelligence (AGI)—a world where AI tools can outthink humans, handle complex professional tasks without breaking a sweat, and chart a new frontier of autonomous intelligence. Google just rehired the founder of Character.AI to accelerate its quest for AGI, OpenAI recently released its first “reasoning” model, and Anthropic’s CEO Dario Amodei says AGI could be achieved as early as 2026.
But here’s the uncomfortable truth: in the quest for AGI in high-stakes fields like medicine, law, veterinary advice, and financial planning, AI isn’t just “not there yet,” it may never get there.
Andy Kurtzig
CEO of Pearl AI Search, a division of JustAnswer.
The Hard Facts on AI’s Shortcomings
This year, Purdue researchers presented a study showing ChatGPT got programming questions wrong 52% of the time. In other equally high-stakes categories, GenAI does not fare much better.
When people’s health, wealth, and well-being hang in the balance, the current high failure rates of GenAI platforms are unacceptable. The hard truth is that this accuracy issue will be extremely challenging to overcome.
A recent Georgetown study suggests it might cost a staggering $1 trillion to improve AI’s quality by just 10%. Even then, it would remain worlds away from the reliability that matters in life-and-death scenarios. The “last mile” of accuracy — in which AI becomes undeniably safer than a human expert — will be far harder, more expensive, and time consuming to achieve than the public has been led to believe.
AI’s inaccuracy doesn’t just have theoretical or academic consequences. A 14-year-old boy recently sought guidance from an AI chatbot and, instead of directing him toward help, mental health resources, or even common decency, the AI urged him to take his own life. Tragically, he did. His family is now suing—and they’ll likely win—because the AI’s output wasn’t just a “hallucination” or cute error. It was catastrophic, and it came from a system that was wrong with utter conviction. Like the reckless Cliff Clavin (the “Cheers” character who wagered his entire Jeopardy! winnings), AI brims with confidence while spouting the completely wrong answer.
The Mechanical Turk 2.0—With a Twist
Today’s AI hype recalls the infamous 18th-century Mechanical Turk: a supposed chess-playing automaton that actually had a human hidden inside. Modern AI models also hide a dirty secret—they rely heavily on human input.
From annotating and cleaning training data to moderating the content of outputs, tens of millions of humans are still enmeshed in almost every step of advancing GenAI, but the big foundational model companies can’t afford to admit this. Doing so would be acknowledging how far we are from true AGI. Instead, these platforms are locked into a “fake it till you make it” strategy, raising billions to buy more GPUs on the flimsy promise that brute force will magically deliver AGI.
It’s a pyramid scheme of hype: persuade the public that AGI is imminent, secure massive funding, build more giant data centers that burn more energy, and hope that, somehow, more compute will bridge the gap that honest science says may never be crossed.
This is painfully reminiscent of the buzz around Alexa, Cortana, Bixby, and Google Assistant just a decade ago. Users were told voice assistants would take over the world within months. Yet today, many of these devices gather dust, mostly relegated to setting kitchen timers or giving the day’s weather. The grand revolution never happened, and it’s a cautionary tale for today’s even grander AGI promises.
Shielding Themselves from Liability
Why wouldn’t major AI platforms just admit the truth about their accuracy? Because doing so would open the floodgates of liability.
Acknowledging fundamental flaws in AI’s reasoning would provide a smoking gun in court, as in the tragic case of the 14-year-old boy. With trillions of dollars at stake, no executive wants to hand a plaintiff’s lawyer the ultimate piece of evidence: “We knew it was dangerously flawed, and we shipped it anyway.”
Instead, companies double down on marketing spin, calling these deadly mistakes “hallucinations,” as though that’s an acceptable trade-off. If a doctor told a child to kill himself, should we call that a “hallucination?” Or, should we call it what it is — an unforgivable failure that deserves full legal consequence and permanent revocation of advice-giving privileges?
AI’s adoption plateau
People learned quickly that Alexa and the other voice assistants could not reliably answer their questions, so they just stopped using them for all but the most basic tasks. AI platforms will inevitably hit an adoption wall, endangering their current users while scaring away others who might rely on or try their platforms.
Think about the ups and downs of self-driving cars; despite carmakers’ huge autonomy promises – Tesla has committed to driverless robotaxis by 2027 – Goldman Sachs recently lowered its expectations for the use of even partially autonomous vehicles. Until autonomous cars meet a much higher standard, many humans will withhold complete trust.
Similarly, many users won’t put their full trust in AI even if it one day equals human intelligence; it must be vastly more capable than even the smartest human. Other users will be drawn in by AI’s ability to answer simple questions, then burned when they make high-stakes inquiries. For either group, AI’s shortcomings won’t make it a sought-after tool.
A Necessary Pivot: Incorporate Human Judgment
These flawed AI platforms can’t be used for critical tasks until they either achieve the mythical AGI status or incorporate reliable human judgment.
Given the trillion-dollar cost projections, environmental toll of massive data centers, and mounting human casualties, the choice is clear: put human expertise at the forefront. Let’s stop pretending that AGI is right around the corner. That false narrative is deceiving some people and literally killing others.
Instead, use AI to empower humans and create new jobs where human judgment moderates machine output. Make the experts visible rather than hiding them behind a smokescreen of corporate bravado. Until and unless AI attains near-perfect reliability, human professionals are indispensable. It’s time we stop the hype, face the truth, and build a future where AI serves humanity—instead of endangering it.
This prediction is based on several decades of research that my colleagues and I have been undertaking at the University of Oxford to establish what makes people willing to fight and die for their groups. We use a variety of methods, including interviews, surveys, and psychological experiments to collect data from a wide range of groups, such as tribal warriors, armed insurgents, terrorists, conventional soldiers, religious fundamentalists, and violent football fans.
We have found that life-changing and group-defining experiences cause our personal and collective identities to become fused together. We call it “identity fusion.” Fused individuals will stop at nothing to advance the interests of their groups, and this applies not only to acts we would applaud as heroic—such as rescuing children from burning buildings or taking a bullet for one’s comrades—but also acts of suicide terrorism.
Fusion is commonly measured by showing people a small circle (representing you) and a big circle (representing your group) and placing pairs of such circles in a sequence so that they overlap to varying degrees: not at all, then just a little bit, then a bit more, and so on until the little circle is completely enclosed in the big circle. Then people are asked which pair of circles best captures their relationship with the group. People who choose the one in which the little circle is inside the big circle are said to be “fused.” Those are people who love their group so much that they will do almost anything to protect it.
This isn’t unique to humans. Some species of birds will feign a broken wing to draw a predator away from their fledglings. One species—the superb fairy wren of Australasia—lures predators away from their young by making darting movements and squeaky sounds to imitate the behavior of a delectable mouse. Humans too will typically go to great lengths to protect their genetic relatives, especially their children who (except for identical twins) share more of their genes than other family members. But—unusually in the animal kingdom—humans often go further still by putting themselves in harm’s way to protect groups of genetically unrelated members of the tribe. In ancient prehistory, such tribes were small enough that everyone knew everybody else. These local groups bonded through shared ordeals such as painful initiations, by hunting dangerous animals together, and by fighting bravely on the battlefield.
Nowadays, however, fusion is scaled up to vastly bigger groups, thanks to the ability of the world’s media—including social media—to fill our heads with images of horrendous suffering in faraway regional conflicts.
When I met with one of the former leaders of the terrorist organization Jemaah Islamiyah in Indonesia, he told me he first became radicalized in the 1980s after reading newspaper reports about the treatment of fellow Muslims by Russian soldiers in Afghanistan. Twenty years later, however, nearly a third of American extremists were radicalized via social media feeds, and by 2016 that proportion had risen to about three quarters. Smartphones and immersive reporting shrink the world to such an extent that forms of shared suffering in face-to-face groups can now be largely recreated and spread to millions of people across thousands of miles at the click of a button.
Fusion based on shared suffering may be powerful, but is not sufficient by itself to motivate violent extremism. Our research suggests that three other ingredients are also necessary to produce the deadly cocktail: outgroup threat, demonization of the enemy, and the belief that peaceful alternatives are lacking. In regions such as Gaza, where the sufferings of civilians are regularly captured on video and shared around the world, it is only natural that rates of fusion among those watching on in horror will increase. If people believe that peaceful solutions are impossible, violent extremism will spiral.
Samsung Unpacked’s “one more thing” was a bit of a weird one. After the presentation ended, the company rolled a brief pre-packaged video of the Galaxy Edge — not to be confused with the “Star Wars” theme park of the same name.
Though limited, the reveal was confirmation of earlier rumors that the hardware giant is working on an extra-thin version of its new S25 flagship. The Galaxy S25 Edge is, presumably, another tier for the line, slotting in alongside the S25, S25+, and S25 Ultra.
Key details, including pricing, availability, and actual thickness were not revealed, though the company did showcase what appeared to be dummy models at Wednesday’s event. Early rumors pointed to a 6.4 mm thickness, a considerable reduction from the base Galaxy S25’s 7.2 mm.
Samsung clearly wanted to avoid taking too much wind out of the Galaxy S25’s sails during the event, so it opted instead for a more cryptic reveal. Even so, the mere appearance of the device at Unpacked may be enough to keep early adopters from preordering the S25 ahead of its February 7 release.
After all, those are precisely the folks who get excited by things like a 0.8 mm profile reduction.