
AI Companions Are Growing More Popular

For a different perspective on AI companions, see our Q&A with Brad Knox: How Can AI Companions Be Helpful, not Harmful?

AI models intended to provide companionship for humans are on the rise. People are already frequently developing relationships with chatbots, seeking not just a personal assistant but a source of emotional support.

In response, apps dedicated to providing companionship (such as Character.ai or Replika) have recently grown to host millions of users. Some companies are now putting AI into toys and desktop devices as well, bringing digital companions into the physical world. Many of these devices were on display at CES last month, including products designed specifically for children, seniors, and even your pets.

AI companions are designed to simulate human relationships by interacting with users the way a friend would. But human-AI relationships are not well understood, and companies face growing concern about whether the benefits outweigh the risks and potential harms of these relationships, especially for young people. In addition to questions about users’ mental health and emotional well-being, sharing intimate personal information with a chatbot raises data privacy issues.


Nevertheless, more and more users are finding value in sharing their lives with AI. So how can we understand the bonds that form between humans and chatbots?

Jaime Banks is a professor at the Syracuse University School of Information Studies who researches the interactions between people and technology—in particular, robots and AI. Banks spoke with IEEE Spectrum about how people perceive and relate to machines, and the emerging relationships between humans and their machine companions.

Defining AI Companionship

How do you define AI companionship?


Jaime Banks: My definition is evolving as we learn more about these relationships. For now, I define it as a connection between a human and a machine that is dyadic, so there’s an exchange between them. It is also sustained over time; a one-off interaction doesn’t count as a relationship. It’s positively valenced—we like being in it. And it is autotelic, meaning we do it for its own sake. So there’s not some extrinsic motivation, it’s not defined by an ability to help us do our jobs or make us money.

I have recently been challenged by that definition, though, when I was developing an instrument to measure machine companionship. After developing the scale and working to initially validate it, I saw an interesting situation where some people do move toward this autotelic relationship pattern. “I appreciate my AI for what it is and I love it and I don’t want to change it.” It fit all those parts of the definition. But then there seems to be this other relational template that can actually be both appreciating the AI for its own sake, but also engaging it for utilitarian purposes.

That makes sense when we think about how people come to be in relationships with AI companions. They often don’t go into it purposefully seeking companionship. A lot of people go into using, for instance, ChatGPT for some other purpose and end up finding companionship through the course of those conversations. And we have these AI companion apps like Replika and Nomi and Paradot that are designed for social interaction. But that’s not to say that they couldn’t help you with practical topics.

Jaime Banks customizes the software for an embodied AI social humanoid robot. Angela Ryan/Syracuse University

Different models are also programmed to have different “personalities.” How does that contribute to the relationship between humans and AI companions?


Banks: One of our Ph.D. students just finished a project about what happened when OpenAI deprecated GPT-4o, and the problems people encountered in their companionship experiences when the personality of their AI just completely changed. It didn’t have the same depth. It couldn’t remember things in the same way.

That echoes what we saw a couple of years ago with Replika. Because of legal problems, Replika disabled the erotic roleplay module for a period of time, and people described their companions as though they had been lobotomized: they had this relationship, and then one day they didn’t anymore. With my project on the tanking of the Soulmate app, many people in their reflections were like, “I’m never trusting AI companies again. I’m only going to have an AI companion if I can run it from my computer so I know that it will always be there.”

Benefits and Risks of AI Relationships

What are the benefits and risks of these relationships?

Banks: There’s a lot of talk about the risks and a little talk about benefits. But frankly, we are only just on the precipice of having longitudinal data that might allow people to make causal claims. The headlines would have you believe that these are the end of mankind, that they’re going to make you commit suicide or abandon other humans. But many of those claims are based on unfortunate but uncommon situations.


Most scholars gave up technological determinism as a perspective a long time ago. In the communication sciences at least, we don’t generally assume that machines make us do something because we have some degree of agency in our interactions with technologies. Yet much of the fretting around potential risks is deterministic—AI companions make people delusional, make them suicidal, make them reject other relationships. A large number of people get real benefits from AI companions. They narrate experiences that are deeply meaningful to them. I think it’s irresponsible of us to discount those lived experiences.

When we think about concerns linking AI companions to loneliness, we don’t have much data that can support causal claims. Some studies suggest AI companions lead to loneliness, other work suggests they reduce it, and still other work suggests that loneliness is what comes first. Social relatedness is one of our three intrinsic psychological needs, and if we don’t have it we will seek it out, whether it’s from a volleyball for a castaway, my dog, or an AI that allows me to feel connected to something in my world.

Some people, and governments for that matter, may move toward a protective stance. For instance, there are problems around what gets done with the intimate data you hand over to an agent owned and maintained by a company—that’s a very reasonable concern. There is also the potential for children to interact with these companions, and children don’t always navigate the boundaries between fiction and actuality. These are real, valid concerns. However, we need some balance in also thinking about what people are getting from it that’s positive, productive, healthy. Scholars need to make sure we’re being cautious about our claims based on our data. And human interactants need to educate themselves.

Jaime Banks holds a mechanical hand. Angela Ryan/Syracuse University

Why do you think that AI companions are becoming more popular now?


Banks: I feel like we had this perfect storm, if you will, of the maturation of large language models and coming out of COVID, where people had been physically and sometimes socially isolated for quite some time. When those conditions converged, we had on our hands a believable social agent at a time when people were seeking social connection. Outside of that, we are increasingly just not nice to one another. So, it’s not entirely surprising that if I just don’t like the people around me, or I feel disconnected, that I would try to find some other outlet for feeling connected.

More recently there’s been a shift to embodied companions, in desktop devices or other formats beyond chatbots. How does that change the relationship, if it does?

Banks: I’m part of a Facebook group about robotic companions and I watch how people talk, and it almost seems like it crosses this boundary between toy and companion. When you have a companion with a physical body, you are in some ways limited by the abilities of that body, whereas with digital-only AI, you have the ability to explore fantastic things—places that you would never be able to go with another physical entity, fantasy scenarios.

But in robotics, once we get into a space where there are bodies that are sophisticated, they become very expensive and that means that they are not accessible to a lot of people. That’s what I’m observing in many of these online groups. These toylike bodies are still accessible, but they are also quite limiting.


Do you have any favorite examples from popular culture to help explain AI companionship, either how it is now or how it could be?

Banks: I really enjoy a lot of the short fiction in Clarkesworld magazine, because the stories push me to think about what questions we might need to answer now to be prepared for a future hybrid society. Top of mind are the stories “Wanting Things,” “Seven Sexy Cowboy Robots,” and “Today I Am Paul.” Outside of that, I’ll point to the game Cyberpunk 2077, because the character Johnny Silverhand complicates the norms for what counts as a machine and what counts as companionship.


Battery Tester Outperforms Cheaper Options

Batteries are notoriously difficult pieces of technology to deal with reliably. They often need specific temperatures and charge rates, can’t tolerate physical shocks or damage, and can fail catastrophically if all of their finicky needs aren’t met. And, adding insult to injury, for many chemistries the voltage does not correlate to state of charge in any meaningful way. Battery testers go to great lengths to mitigate these challenges, but often miss the mark for those who need high fidelity in their measurements. For that reason, [LiamTronix] built his own.

The main problem with cheaper battery testers, at least for [LiamTronix]’s use cases, is that he has plenty of batteries that are too large to practically test on low-current devices, or that have internal battery management systems (BMS) which can’t connect to these testers. The first circuit he built to help solve these issues is based on a shunt resistor, which lets a small IC monitor a much larger current by looking at the voltage drop across a resistor with a very low resistance value. A Raspberry Pi runs a Python script that monitors the current draw over the course of the test and outputs the result on a handy graph.
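
The write-up doesn’t include the script itself, but the measurement loop it describes is simple enough to sketch. The following is a minimal, hypothetical version, not [LiamTronix]’s actual code: it assumes a 1 mΩ shunt, placeholder read_shunt_voltage() and read_battery_voltage() helpers standing in for whatever ADC the build actually uses (the Pi has no analog inputs of its own), and matplotlib for the output graph.

```python
import time
import matplotlib.pyplot as plt

SHUNT_OHMS = 0.001       # assumed 1 mOhm shunt; use the actual measured value
SAMPLE_PERIOD_S = 1.0    # one reading per second
CUTOFF_VOLTAGE = 3.0     # stop discharging below this (chemistry-dependent)

def read_shunt_voltage():
    """Placeholder: return the voltage drop across the shunt, in volts.
    In the real build this comes from an external ADC chip."""
    raise NotImplementedError

def read_battery_voltage():
    """Placeholder: return the battery terminal voltage, in volts."""
    raise NotImplementedError

def run_test():
    times, currents = [], []
    capacity_mah = 0.0
    start = time.time()
    while read_battery_voltage() > CUTOFF_VOLTAGE:
        current = read_shunt_voltage() / SHUNT_OHMS      # Ohm's law: I = V / R
        elapsed_h = (time.time() - start) / 3600.0       # hours, for the x-axis
        times.append(elapsed_h)
        currents.append(current)
        capacity_mah += current * 1000.0 * SAMPLE_PERIOD_S / 3600.0  # integrate to mAh
        time.sleep(SAMPLE_PERIOD_S)

    plt.plot(times, currents)
    plt.xlabel("Time (hours)")
    plt.ylabel("Discharge current (A)")
    plt.title(f"Measured capacity: {capacity_mah:.0f} mAh")
    plt.savefig("discharge_curve.png")

# Call run_test() once the two read_* helpers are wired to the real ADC.
```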

This circuit worked well enough for smaller batteries, but for his larger batteries like the 72V one he built for his electric tractor, these methods could draw far too much power to be safe. So from there he built a much more robust circuit which uses four MOSFETs as part of four constant current sources to sink and measure the current from the battery. A Pi Zero monitors the voltage and current from the battery, and also turns on some fans pointed at the MOSFETs’ heat sink to keep them from overheating. The system can be configured to work for different batteries and different current draw rates, making it much more capable than anything off the shelf.
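
The supervisory side of that bigger tester can be sketched the same way. The snippet below is a rough, hypothetical control loop rather than the real firmware: the cutoff voltage and sample period are configurable, the read_* helpers and BCM pin 17 are placeholders for the actual sensing and fan-drive hardware, and the temperature thresholds are illustrative guesses rather than values from the build.

```python
import time
import RPi.GPIO as GPIO

FAN_PIN = 17            # assumed BCM pin driving the fan MOSFET/relay
FAN_ON_TEMP_C = 60.0    # turn the heatsink fans on above this temperature
FAN_OFF_TEMP_C = 45.0   # ...and back off below this (hysteresis)

def read_pack_voltage():   # placeholders for the real voltage/current/temp readings
    raise NotImplementedError

def read_load_current():
    raise NotImplementedError

def read_heatsink_temp():
    raise NotImplementedError

def run_discharge(cutoff_voltage, sample_period_s=1.0):
    """Log voltage and current until the pack hits the configured cutoff,
    managing the heatsink fans along the way."""
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(FAN_PIN, GPIO.OUT, initial=GPIO.LOW)
    log = []
    try:
        while True:
            v, i, t = read_pack_voltage(), read_load_current(), read_heatsink_temp()
            log.append((time.time(), v, i))
            if v <= cutoff_voltage:
                break                        # pack is empty for this chemistry
            if t >= FAN_ON_TEMP_C:
                GPIO.output(FAN_PIN, GPIO.HIGH)
            elif t <= FAN_OFF_TEMP_C:
                GPIO.output(FAN_PIN, GPIO.LOW)
            time.sleep(sample_period_s)
    finally:
        GPIO.output(FAN_PIN, GPIO.LOW)       # always leave fans/load in a safe state
        GPIO.cleanup()
    return log

# For example: run_discharge(cutoff_voltage=60.0) for a 72 V pack, per its BMS limits.
```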


AI has a different kind of bias problem, but it’s an often-repeated one

AI bias is usually talked about in terms of algorithms: skewed datasets, flawed outputs, and stereotypes baked into models. But new research suggests there’s another, more subtle problem: who gets to use AI in the first place. According to a recent report by Lean In, women are less likely than men to use AI tools at work, and even when they do, they’re less likely to get recognition or support for it.

The numbers paint a clear picture. Men are more likely to use AI regularly (33% vs 27%), more likely to have ever used it at work, and significantly more likely to be encouraged by managers to adopt it. And it’s not just about access, but also about perception. Women are more likely to worry about the risks of AI, question its accuracy, and even fear being judged for using it, including concerns that it might be seen as “cheating.”

Why this matters more than it seems

Chances are, this gap could compound fast. AI is quickly becoming a core workplace skill, and early adoption often translates into better opportunities. If one group is consistently using it less, or getting less credit for it, that gap can grow into real career disadvantages over time. And this isn’t happening in isolation. Broader research already shows women are underrepresented in tech and AI roles, meaning they’re not just using these tools less, but they’re also less involved in building them.

What makes this interesting is how familiar it feels. This isn’t a new kind of bias; it’s an old one, just showing up in a new space. The same patterns seen in workplaces for decades, with less recognition, less encouragement, and more scrutiny, are now playing out in how AI is adopted and used.

Same bias, new tech?

As AI becomes a core workplace skill, even small gaps like this can snowball into missed opportunities, slower career growth, and less representation in shaping the tech itself. Because if the people using AI aren’t equally represented… the future it builds won’t be either.


April Fools’ Day 2026: The Good, the Bad and the Bizarre of This Year’s Corporate Jokes

If you’re online at all in 2026, you know it can feel like April Fools’ Day every day. You’ve almost certainly come across videos and content, often created with AI, and had to stop and ask yourself if what you’re looking at is true or made up. 

Some are obvious. You mean, there aren’t really beds made of kittens, cotton candy and rubies? And I wasn’t really offered a job guarding a spooky funeral home where I might hear tapping coming from the morgue freezer at 3 a.m.? (Both of these are TikTok videos, and the AI is scarily good — and also just scary.)

As brands roll out their April Fools’ Day jokes for this year, I keep thinking that in an AI-heavy world, the jokes seem less surprising, the faked-up art less novel. Here are some highlights from this year’s list of April 1 corporate and tech jokes.


Fortnite: Big heads and llama riding

Here’s an April Fools’ prank that’s more than a joke: it’s real, but only temporary. Fortnite players can try out a 24-hour-only April Fools’ Day game update that throws some truly wacky changes into the popular game. Players get enormous heads, can ride on other players’ shoulders, can use finger guns that go “pew, pew,” and make a splat sound when landing after a fall. Perhaps best of all, rideable llamas have appeared.

Warhammer: The Musical

Hey, if Broadway can make a musical about Alexander Hamilton, or a bunch of cats, surely they can make one about the Warhammer universe? That’s the joke behind this trailer for The Emperor Protects: A Warhammer 40,000 Musical, the April 1 joke from Games Workshop, creator of the popular game world. The 2.5-minute trailer, with impressive costumes and music, really sells it.

Traeger: AI-powered grilling glasses

Screenshot from the Traeger grill site shows their April 1 prank, AI grill glasses.

Screenshot by Gael Fashingbauer Cooper/CNET

This April 1 joke seems like it could maybe be a practical, real thing. Traeger makes wood-pellet grills, and this year’s joke is their claim to offer AI-powered grilling eyeglasses. “With smart guidance, thermal imaging, night‑vision, and hands‑free photo and video capture, MEAT‑AI lets you command every cook like never before,” the site touts. Hmm, I wouldn’t actually mind a pair of glasses that could look down at my grill and tell me whether my steak is done or how much more time it needs to cook. Get on that, Traeger.

T-Mobile cologne

Can you smell me now? Wait, wrong cellphone company.

T-Mobile

Want to smell like your cellphone? What does that even mean? Wireless tech giant T-Mobile’s prank is Metro by T-Mobile CALLoGNE, combining call, as in phone call, with cologne. The company touts its April 1 joke as “the world’s first luxury fragrance inspired by the unmistakable scent of a brand-new phone.” Metro is T-Mobile’s prepaid brand, formerly known as MetroPCS. 


Timekettle British translation

They say the US and UK are two nations separated by a common language. You may already know some British phrases, including “boot” for what Americans call a car trunk, and “bonnet” for what we call the hood of a car. Timekettle makes AI-powered translation products, and its April 1 prank is a British-to-American language translation update for its translation devices. Cheerio, old chap.

Timekettle offers translation services, but the British English to American English version is a special April 1 joke.

Timekettle

Whisker cat hair clothing

From couture to cat hair, Whisker’s April 1 prank involves cat-hair clothing.

Whisker

If you own a cat, cat hair is already on everything in your closet. So Cataire (like couture, I guess), a line of designer clothing made out of real cat hair, doesn’t seem that far off. Whisker, the company behind the Litter-Robot litter box, is taking this April 1 prank to the meowy max. They’ve actually used real cat hair from adoptable cats at a Michigan animal shelter to adorn three sweaters that will later be sold on eBay. Each eBay listing doubles as an adoption profile for a real shelter cat.

Yahoo’s Scrōll Stoppr

Doomscrolling isn’t even a possibility with Yahoo’s thumb guard, ScrōllStoppr.

Yahoo

Those who spend too much time on their phones might appreciate the idea behind Yahoo’s prank, Scrōll Stoppr. It’s described as “a delightfully absurd finger accessory that physically blocks your thumb from touching your phone screen.” I hate to break it to Yahoo, but I discovered this myself years ago when I cut my thumb slicing onions for Thanksgiving and had to wrap it in a Band-Aid. Yahoo says you can actually buy this — it will be available for $5 on Yahoo TikTok Shop on April 1 and will be delivered in a box that sounds off with the Yahoo signature yodel. If it sells out, just put on a Band-Aid for the same results. BYO yodel.

Omaha Steaks pocket steak

Stake out a spot in your shirt for this pocket steak.

Omaha Steaks

Need a spot of protein on the go? Omaha Steaks is best known for sending giant crates of beef as gifts, but the company’s April 1 product is “the world’s first pocket-sized steak.” It gets beefier: The company jokes that the steak is cooked by motion-activated technology. A rare deal indeed, if well done.


Baskin-Robbins ice cream soup

Slurp up Baskin-Robbins’ April Fools’ Day joke, ice cream soup.

Baskin-Robbins

Baskin-Robbins has always had creative ice cream flavors, but for April 1, the company is hyping… ice cream soup. Not real, of course, but they’re promoting the faux frozen dessert in hopes that people will be inspired to take advantage of a buy-one-get-one 50% off deal on pre-packed quarts April 1-2 for Baskin-Robbins Rewards Members. Slurp ’em if you got ’em.

Baby Bottle Pop, supplement style

Suck on this, say the makers of Baby Bottle Pop.

Baby Bottle Pop

Grown-ups don’t get any of the fun kid candy, but instead are stuck taking vitamins and supplements. Baby Bottle Pop Candy, which is exactly what it sounds like, candy in a baby-bottle container, is pretending for April 1 that it now comes in adult flavors. Is protein a flavor? Is fiber? Salmon is, but candy salmon is too much, even for this Seattleite. Thankfully, it’s just for April Fools’ Day.


Watch Artemis II Live: When Is NASA’s Historic Moon Launch?

NASA’s Artemis II Space Launch System (SLS) rocket and Orion spacecraft and the launch gantry at the Kennedy Space Center in Florida on March 31, 2026.

NASA/Keegan Barber

Fifty-four years after the last Apollo mission to the moon, NASA’s Artemis II mission is set to send astronauts back. The Space Launch System rocket carrying the Orion spacecraft is scheduled to take off from the Kennedy Space Center in Florida on Wednesday afternoon. The four-person crew, made up of American and Canadian astronauts, will be 250,000 miles from Earth at the farthest point of its journey to orbit the moon. This is everything you need to know about NASA’s mission, its dreams for a future lunar base and this new age of space exploration.

How to watch Artemis II moon launch


Takeoff is scheduled for Wednesday at 6:24 p.m. ET / 3:24 p.m. PT from NASA’s Kennedy Space Center in Cape Canaveral, Florida. Delays are common during launches, especially due to weather, so we’ll keep this story updated if the takeoff time changes.

You can watch the livestream on NASA’s YouTube, official website and social media accounts. If you’re looking for coverage in Spanish, check out NASA’s Spanish YouTube channel.

Here are all the ways you can keep up with the Artemis II mission.

NASA

What to expect from this mission to the moon

The Artemis II mission is designed to orbit the moon on a 10-day trip. The astronauts will not be touching down on the moon’s surface this trip, but they will be testing the spacecraft’s life support systems for the first time, according to NASA. This mission also sets the stage for future Artemis missions, including Artemis IV, scheduled for 2028, which should put humans back on the moon.

We’ll keep this story up to date with all the latest Artemis II news, so check back here today and throughout the week for updates.


OpenAI closes larger-than-expected funding round of $122bn

The giant funding round gives OpenAI a post-money valuation of $852bn.

Artificial intelligence company OpenAI has announced the closure of a recent funding round at $122bn, exceeding the projected figure of $110bn. 

The round was backed by strategic partners Amazon, Nvidia and SoftBank, with continued participation from OpenAI’s long-term partner Microsoft. SoftBank co-led the round alongside a16z, DE Shaw Ventures, MGX, TPG and accounts advised by T Rowe Price Associates. There was also participation from several global institutions.

For the first time, OpenAI extended participation to investors through banking channels, raising more than $3bn from individual investors. The funding round gives OpenAI a post-money valuation of $852bn, the company said. 


In a post about the announcement, OpenAI said, “This is commercial scale and it is mission scale. The fastest way to widen the benefits of AI is to put useful intelligence in people’s hands early and let that access compound globally. 

“AI is driving productivity gains, accelerating scientific discovery and expanding what people and organisations can build. This funding gives us the resources to continue to lead at the scale this moment demands.”

The announcement comes at a time when OpenAI is calling a halt to specific features and products, as it aims to better manage costs and reprioritise resources. For example, plans for an erotic ChatGPT were reportedly put on hold indefinitely, as OpenAI elected to carry out additional research and to address concerns from staff and investors. 

Additionally, in late March, the platform revealed plans to shut down controversial AI video generator Sora just a few months after announcing a multi-year licensing deal with Disney. OpenAI explained that by ending the feature, the organisation can redirect its focus onto other projects.


OpenAI is facing significant challenges from rivals in the AI space, and news recently surfaced indicating the company plans to combine its AI chatbot, coding tool and web browser into a desktop ‘superapp’.

Sources noted that the move is intended to counter harsh competition from the AI giant’s rivals, such as Anthropic. 


Robinhood sues WA state to block enforcement of gambling laws against prediction markets

Robinhood isn’t waiting to get sued in Washington state. 

The financial services company filed a preemptive federal suit against Washington’s attorney general and gambling commission, arguing the state can’t use its gambling laws to shut down prediction market trading that it contends is authorized under federal commodities law.

The suit comes a few days after Washington Attorney General Nick Brown sued prediction market platform Kalshi in state court. The state takes the position that event contracts — which let users wager on the outcome of real-world activities ranging from NFL games to elections to the number of measles cases in a given year — amount to illegal gambling.

In its lawsuit, filed March 30 in U.S. District Court in Tacoma, Wash., Robinhood argues that federal law preempts Washington’s gambling statutes as applied to event contracts traded on exchanges regulated by the Commodity Futures Trading Commission. 

Robinhood Markets, based in Menlo Park, Calif., is known for popularizing commission-free stock trading. The suit was filed by its Chicago-based subsidiary, Robinhood Derivatives.


The company, which is registered with the CFTC as a futures commission merchant, offers event contracts through the Kalshi and ForecastEx exchanges and says it plans to launch trading on a third exchange, Rothera, later this year, according to the complaint.

Pre-emptive move: The company points to the Kalshi suit and a December warning from the state Gambling Commission declaring prediction markets “unauthorized” as evidence that enforcement against the company is imminent.

The complaint was filed on behalf of Robinhood by the law firms Davis Wright Tremaine in Seattle and Cravath, Swaine & Moore in New York.

Robinhood’s suit cites Brown’s statement, at a press conference last week, that Kalshi is “just a bookie with a fancy name, and a huge amount of venture capital behind them.”


The suit says the company had “no choice but to file this lawsuit to protect its customers and its business.”

“[W]e believe in the power of prediction markets and the important role they play at the intersection of trading, news, economics, politics, culture, and sports,” a Robinhood spokesperson said via email, noting that the markets are federally regulated. “This step, consistent with our past actions in other jurisdictions, aims to preserve access for customers in Washington.”

GeekWire has reached out to the Washington AG’s office for comment.

Broader landscape: The case is part of a national wave of litigation over prediction markets. Kalshi is fighting more than 20 civil lawsuits, and Arizona’s AG filed criminal charges last month. 


Courts are split on the issue. Federal judges in New Jersey and Tennessee, for example, have ruled that states cannot enforce their gambling laws against federally regulated prediction markets, while state courts in Massachusetts and Ohio have ruled that they can.

Washington state has staked out a broader position than other states in this fight, arguing that all event contracts — not just sports bets — are illegal under state law. Other states have focused their enforcement on sports-related contracts specifically.

A bipartisan bill introduced last week by Sens. Adam Schiff (D-Calif.) and John Curtis (R-Utah) would ban sports betting on prediction market platforms.

Read the full complaint below.

Robinhood v. WA state by GeekWire


Are Mini Leaf Blowers ACTUALLY Worth It?

Leaf blowers are pretty versatile tools for keeping your yard tidy, blowing snow off your car, or anything else that needs a healthy measure of forced air. Unfortunately for people with space constraints, your average blower is pretty big and unwieldy. Additionally, for people with limited mobility or noise restrictions, a smaller, more compact leaf blower might be the ticket.


Enter the mini leaf blower. We all know that a full-size leaf blower packs some power, but how does a tiny handheld version perform? Does the reduction in size make it less useful? In this video, we put a couple of mini leaf blowers purchased online through the ringer and see what each one is capable of.

Will the name brand come out on top, or will a lesser-known brand take the crown as the winner? More importantly, are mini leaf blowers even worth it compared to the full-size versions?


Meta and YouTube found liable in landmark social media addiction trial

Mark Lanier, the folksy Texas litigator who doubles as a part-time pastor, held a jar of M&Ms in front of the Los Angeles jury and told them that each one represented a billion dollars of Meta’s market capitalisation. There were, by that maths, roughly 1,400 sweets in the jar. The jury awarded his client six of them. The question now stalking Silicon Valley is what happens when the other jars start to empty.

On Wednesday 25 March, a California jury found Meta and Google liable on all counts in the first bellwether trial to test whether social media platforms can be treated as defective products, engineered, like a faulty car seat or a contaminated drug, to cause harm. The plaintiff, a 20-year-old woman identified only as K.G.M. and referred to in court as Kaley, told the jury she had begun using YouTube at six years old and Instagram at nine, and that the platforms had amplified personal struggles into body dysmorphia, depression, and suicidal thoughts. After nine days of deliberation, 43 hours in total,  the jurors agreed.

The damages were modest by big-tech standards: $3 million in compensatory damages and $3 million in punitive damages, split 70-30 between Meta and Google. Meta’s share amounts to $4.2 million against a company whose market capitalisation, at the time of the verdict, stood at approximately $1.4 trillion. But the financial significance of the ruling lies not in what was awarded but in what it unlocked. More than 10,000 individual cases and nearly 800 school-district claims are pending in federal multidistrict litigation, with eight further bellwether trials scheduled for the months ahead. The verdict establishes, for the first time, that a jury will accept the legal theory that social media apps should be treated as products whose design is inherently defective.

The ruling landed one day after a separate jury in Santa Fe, New Mexico, ordered Meta to pay $375 million in civil penalties ($5,000 per violation) after finding the company had violated state consumer-protection laws by enabling child sexual exploitation on Facebook and Instagram. New Mexico became the first state to prevail at trial against a social media company over child-safety concerns. Evidence presented during that six-week trial included internal Meta documents and testimony from former employees establishing that the platform’s design features had enabled predators to target minors. A bench trial on the state’s remaining claims against Meta is scheduled to begin on 4 May.


The back-to-back verdicts sent Meta’s stock into its steepest decline in more than two years. Shares fell 6.8 per cent the day after the Los Angeles verdict, continued sliding to an 8 per cent drop the following day, and finished the week down 11 per cent. By month’s end, Meta was down 19 per cent, having shed roughly $310 billion in market value. Analysts at JPMorgan and Goldman Sachs began revising their price targets, citing what they described as unquantifiable tail risk from the cascade of litigation now using the verdict as a template.

Inside Meta, the verdict is viewed as a disappointment rather than a crisis — at least publicly. The company had entered the trial confident in its position, arguing that Kaley’s struggles with family and school predated her use of Instagram and that reducing something as complex as teen mental health to a single cause risked leaving broader issues unaddressed. A spokesperson told the BBC that many teenagers rely on digital communities to find belonging. Meta said it would appeal, and gave no indication it would settle future cases or alter its product design.


Google took a different tack, arguing that YouTube had been mischaracterised in the trial. YouTube is “a responsibly built streaming platform, not a social media site,” the company said — a distinction the jury evidently did not find persuasive. Both companies will have the opportunity to refine their legal arguments as the bellwether programme continues, but the evidentiary record from Kaley’s trial, including internal documents in which Meta executives discussed efforts to attract and retain young users, can now be recalled in subsequent proceedings.

TikTok and Snapchat’s parent company Snap Inc had been co-defendants in the case but settled before the trial began. The settlement amounts remain undisclosed, and neither company admitted liability, but the decision to resolve their exposure before a jury could weigh in suggests their legal teams reached a different calculus than Meta’s. Both companies remain defendants in several upcoming bellwether trials.

The broader implications extend well beyond courtroom damages. Eric Goldman, an associate dean and professor of law at Santa Clara University, told the BBC he viewed the social media addiction cases as a potentially existential threat to the industry’s current business model. The social media industry, Goldman wrote after the verdict, “faces existential legal liability and inevitably will need to reconfigure their core offerings if they can’t get broad-based relief on appeal.” Former Twitter executive Bruce Daisley framed the structural problem more bluntly: two decades of growth had produced businesses “geared for trying to force people to spend more and more time” on their platforms, and any regulation or litigation that threatened that engagement model became a problem to be neutralised through lobbying and public relations.

The legal reckoning arrives at a moment when the technology industry’s relationship with regulators is already under severe strain. Australia’s social-media age ban, which took effect in December 2025, has prompted enforcement actions against five platforms for non-compliance. The European Union’s Digital Services Act and AI Act are imposing new obligations that many companies have struggled to meet. The NIS2 Directive has expanded cybersecurity regulatory scope across eighteen sectors. And the US Congress, where Meta chief executive Mark Zuckerberg was meeting Senate Majority Leader John Thune on the day the verdict landed, continues to weigh federal age-verification and platform-liability legislation.


What distinguishes the litigation from the regulatory push is that juries, unlike legislators, do not negotiate. They decide. And in Los Angeles last week, twelve citizens decided that the products Meta and Google built were defective, that the companies knew they were defective, and that a young woman was harmed as a result. The $6 million penalty is a rounding error for companies worth more than the GDP of most nations. The legal precedent is not.

As Kaley’s attorney Jayne Conroy told the BBC after the verdict: there is, right now, a lot of maths going on in boardrooms at Meta, Google, Snap, and TikTok.


Spotify Finally Tries Hi-Fi: Lossless Listening Lounge in London Built Around Horn Speakers and Bryston Power

Spotify has spent the better part of two decades convincing the world that convenience beats fidelity. Now it wants you to sit down, shut up, take your shoes off, and listen. Really listen. Inside its London headquarters, the company has opened a 30-seat Listening Lounge designed to showcase its long-delayed lossless tier, which finally arrived in 2025 after competitors like TIDAL, Qobuz, and Apple Music had already moved on to higher ground. Timing has never been Spotify’s strong suit when it comes to sound quality, but at least it showed up.

The Listening Lounge is invite-only, which feels about right. Spotify Premium users and “top fans” get the nod, assuming they want to trade playlists and background noise for something resembling focus. The room is built around album-centric sessions and curated listening events, which Spotify now calls “intentional listening.” Audiophiles have been calling it Tuesday night since 1978, but sure, let’s rebrand it and roll it out to the press who will eat it up like horseradish on gefilte fish at the Passover seder. On second thought — stick with the biltong and some mustard.

To Spotify’s credit, it didn’t cheap out on the system. This isn’t a soundbar and some mood lighting. The setup leans hard into old-school hi-fi: custom horn-loaded speakers from Friendly Pressure, Bryston 3B Cubed power amplifiers, a PrimaLuna DAC paired with an Evo 400 tube preamp, and a Bluesound Node Icon handling streaming duties.


The speakers are big, unapologetic, and built around Alnico drivers and compression horns that don’t care about your furniture layout or your neighbors. This is two-channel stereo with no Atmos tricks, no DSP safety net, and no interest in pretending otherwise. Left, right, and whatever your ears can handle.


The room itself plays along. Designed with a Japanese vinyl bar aesthetic, the system sits elevated like some kind of altar, because apparently we’re doing ritual now. Acoustic treatment is handled seriously, reflections are controlled, distractions minimized. And yes, you take your shoes off. Nothing says “we’re serious about lossless audio” quite like white socks from Marks & Spencer on a polished floor while a tube preamp warms the room.

The timing of all this is hard to ignore. Spotify’s lossless rollout wasn’t early. It wasn’t even competitive. It was late. While others were pushing 24-bit streams and building credibility with listeners who actually care about sound, Spotify leaned into scale, algorithms, and playlists designed for people who don’t want to think too hard about what they’re hearing. Now that fidelity has become “important,” Spotify is doing what large companies do best. Build an experience, control the narrative, invite the right people, and hope nobody remembers how they had to be dragged kicking and screaming into the room.


The system, however, does its job. Reports point to serious dynamics, scale that fills the room, and a level of clarity that makes lossless audio feel like more than a marketing checkbox. Horn speakers bring speed and impact, along with a presentation that can get a little sharp if the recording demands it. That’s the trade-off. This setup doesn’t smooth things over or make bad recordings sound polite. It tells the truth, whether you like it or not. There’s a lesson there for the high-end audio community.

This whole exercise isn’t really about a room in London. It’s about positioning. Spotify wants to be seen as a company that understands high-end audio, not just one that delivers background music between podcasts and ads. It wants a seat at the same table as services that built their reputations on fidelity, not convenience. That’s a tough pivot when your entire business model was built on making music easier, faster, smaller, and rather crappy sounding.


The Bottom Line

The Listening Lounge is impressive. The system is real. The intent is finally pointed in the right direction. But there’s an unavoidable edge of irony here. Audiophiles have been building rooms like this for decades without the need for an invite list or a press release. Spotify didn’t invent serious listening. It just discovered that it matters.

And that’s where this either becomes something meaningful or just another well-lit detour. Spotify isn’t a niche player trying to earn credibility. It’s the largest music streaming platform on the planet, with hundreds of millions of users, more than all of its direct competitors combined. If lossless audio actually matters to the company, this can’t stop at a single curated room in London with a guest list and a carefully controlled narrative. That’s not a movement. That’s a demo.

Because the real test isn’t what happens inside that room. It’s what happens outside of it. Does Spotify push lossless as a core feature across the platform, front and center, where its massive user base can actually engage with it? Does it educate listeners on why better sound quality matters? Does it integrate that experience into everyday listening in a way that doesn’t require an invitation and a plane ticket?


Right now, it feels like Spotify is trying to prove something—to the press, to the industry, maybe even to itself. But if this is going to land, it needs to scale beyond a showcase and become part of the product story in a real, unavoidable way. Otherwise, this Listening Lounge risks being remembered for what it looks like today: a very expensive reminder that Spotify showed up late and is still figuring out how serious it wants to be.


Aspyr: Hey, Those Crappy Tomb Raider Remastered Outfits Were Made By Our Artists, Not AI!

from the McPromptism dept

I’m going to trust that most of our audience will have some idea of what McCarthyism was in the 1950s. To summarize very briefly, it was an anti-communist campaign that spread into becoming equally anti-leftist throughout the country, with a specific focus on driving the supposed communist influences out of major media in America, such as radio and Hollywood. This led to a public hyper-vigilant in looking for supposed communists everywhere, as well as plenty of cases of false accusations of communist activity purposefully foisted upon people for personal reasons. This rabid, frothy-mouthed era of suspicion became a major stain on America in the 1950s.

I’m watching a version of this begin to take form around artificial intelligence. I know, I know: there are very real dangers and negative outcomes that could come to be from AI. That was true of communism and our Cold War enemy in the Soviet Union as well. My point is not that AI is great all the time and any pushback against it is invalid. Instead, my point is that we’re starting to see what I’ll call McPromptism, where some percentage of the public looks for AI everywhere it can and, if use is suspected, immediately decries it as terrible and demands that people not engage with the supposed user.

And just like McCarthyism, McPromptism gets its accusations wrong sometimes. You can see a version of that in the story of Aspyr’s remastering of old Tomb Raider games and the horrible outfits that were produced for the protagonist, Lara Croft.

Earlier this week we reported on fan reaction to the latest update to the Tomb Raider I-III Remastered collection, in which the game received a new Challenge Mode, while Lara received a suite of new outfits to wear as rewards. And oh wow, they were bad. Comically bad. So bad, in fact, that one of the remaster’s original artists posted on X to distance himself and his colleagues from the dross. Alongside all of this was the suspicion that genAI might have been involved in the fits’ creation, given just how dreadful they looked. Publisher Aspyr has now finally responded to the claims to insist no AI was used at all, instead stating they were created by “our team of artists.” Which raises more questions.

If you want to see a somewhat humorous look at the outfit textures that are the subject of public complaint, here you go.


On the one hand, for someone like me who is not into the anti-AI dogma out there, it is objectively funny for some people to point at bad video game textures and claim they’re so bad because they’re obviously created using generative AI… only to have the company that made them say, “Nuh uh! It was our human employees who made them!” It’s almost Monty-Python-esque, in a way.

But this reflexive assumption among some in the gaming public that “this thing in gaming is bad, so it must have been made using AI!” is just one more kind of silly that is out there right now. Aspyr doesn’t exactly have a perfect reputation when it comes to remastering games, after all, and it built that reputation long before genAI came along.

It seems clear that this was a case of images being released to promote the remastered game that Aspyr didn’t live up to in the actual game itself. No AI, just human beings not hitting the mark. It happens all the time. Hell, there is even a chance that AI could have done a better job. Not a certainty by any stretch, but a possibility.


But the real takeaway from this otherwise minor episode, for me, was the McPromptism misfire. If you’re going to rage against the literal machine in the video gaming industry, which I think is the wrong stance to take anyway, at least let it be righteous rage.

Filed Under: ai, mcpromptism, tomb raider, video games

Companies: aspyr
