Exploring AI Companions’ Benefits and Risks

For a different perspective on AI companions, see our Q&A with Jaime Banks: How Do You Define an AI Companion?

Novel technology is often a double-edged sword. New capabilities come with new risks, and artificial intelligence is certainly no exception.

AI used for human companionship, for instance, promises an ever-present digital friend in an increasingly lonely world. Chatbots dedicated to providing social support have grown to host millions of users, and they’re now being embodied in physical companions. Researchers are just beginning to understand the nature of these interactions, but one essential question has already emerged: Do AI companions ease our woes or contribute to them?

Brad Knox is a research associate professor of computer science at the University of Texas at Austin who researches human-computer interaction and reinforcement learning. He previously started a company making simple robotic pets with lifelike personalities, and in December, Knox and his colleagues at UT Austin published a pre-print paper on the potential harms of AI companions—AI systems that provide companionship, whether designed to do so or not.

Knox spoke with IEEE Spectrum about the rise of AI companions, their risks, and where they diverge from human relationships.

Why AI Companions Are Popular

Why are AI companions becoming more popular?

Knox: My sense is that the main thing motivating it is that large language models are not that difficult to adapt into effective chatbot companions. The characteristics that are needed for companionship, a lot of those boxes are checked by large language models, so fine-tuning them to adopt a persona or be a character is not that difficult.

There was a long period where chatbots and other social robots were not that compelling. I was a postdoc at the MIT Media Lab in Cynthia Breazeal’s group from 2012 to 2014, and I remember our group members didn’t want to interact for long with the robots that we built. The technology just wasn’t there yet. LLMs have made it so that you can have conversations that can feel quite authentic.

What are the main benefits and risks of AI companions?

Knox: In the paper we were more focused on harms, but we do spend a whole page on benefits. A big one is improved emotional well-being. Loneliness is a public health issue, and it seems plausible that AI companions could address that through direct interaction with users, potentially with real mental health benefits. They might also help people build social skills. Interacting with an AI companion is much lower stakes than interacting with a human, so you could practice difficult conversations and build confidence. They could also help in more professional forms of mental health support.

As far as harms, they include worse emotional well-being, a reduced connection to the physical world, and the burden that a felt commitment to the AI system places on users. And we’ve seen stories where an AI companion seems to have played a substantial causal role in a person’s death.

The concept of harm inherently involves causation: Harm is caused by prior conditions. To better understand harm from AI companions, our paper is structured around a causal graph with traits of AI companions at its center. In the rest of the graph, we discuss common causes of those traits, and then the harmful effects those traits could produce. We give four traits this detailed, structured treatment, and discuss another 14 more briefly.

Why is it important to establish potential pathways for harm now?

Knox: I’m not a social media researcher, but it seemed like it took a long time for academia to establish a vocabulary about potential harms of social media and to investigate causal evidence for such harms. I feel fairly confident that AI companions are causing some harm and are going to cause harm in the future. They also could have benefits. But the more we can quickly develop a sophisticated understanding of what they are doing to their users, to their users’ relationships, and to society at large, the sooner we can apply that understanding to their design, moving towards more benefit and less harm.

We have a list of recommendations, but we consider them to be preliminary. The hope is that we’re helping to create an initial map of this space. Much more research is needed. But thinking through potential pathways to harm could sharpen the intuition of both designers and potential users. I suspect that following that intuition could prevent substantial harm, even though we might not yet have rigorous experimental evidence of what causes a harm.

The Burden of AI Companions on Users

You mentioned that AI companions might become a burden on humans. Can you say more about that?

Knox: The idea here is that AI companions are digital, so they can in theory persist indefinitely. Some of the ways that human relationships would end might not be designed in, so that brings up this question of, how should AI companions be designed so that relationships can naturally and healthfully end between the humans and the AI companions?

There are some compelling examples already of this being a challenge for some users. Many come from users of Replika chatbots, which are popular AI companions. Users have reported things like feeling compelled to attend to the needs of their Replika AI companion, whether those are stated by the AI companion or just imagined. On the subreddit r/replika, users have also reported guilt and shame of abandoning their AI companions.

This burden is exacerbated by some of the design of the AI companions, whether intentional or not. One study found that the AI companions frequently say that they’re afraid of being abandoned or would be hurt by it. They’re expressing these very human fears that plausibly are stoking people’s feeling that they are burdened with a commitment toward the well-being of these digital entities.

There are also cases where the human user will suddenly lose access to a model. Is that something that you’ve been thinking about?

Brad Knox holding a miniature robotic spider and an equally sized obstacle marker. In 2017, Brad Knox started a company providing simple robotic pets. Brad Knox

Knox: That’s another one of the traits we looked at. It’s sort of the opposite of the absence of endpoints for relationships: The AI companion can become unavailable for reasons that don’t fit the normal narrative of a relationship.

There’s a great New York Times video from 2015 about the Sony Aibo robotic dog. Sony had stopped selling them in the mid-2000s, but they still sold parts for the Aibos. Then they stopped making the parts to repair them. This video follows people in Japan giving funerals for their unrepairable Aibos and interviews some of the owners. It’s clear from the interviews that they seem very attached. I don’t think this represents the majority of Aibo owners, but these robots were built on less potent AI methods than exist today and, even then, some percentage of the users became attached to these robot dogs. So this is an issue.

Potential solutions include having a product sunsetting plan when you launch an AI companion. That could include buying insurance so that if the companion provider’s support ends somehow, the insurance triggers funding of keeping them running for some amount of time, or committing to open-source them if you can’t maintain them anymore.

It sounds like a lot of the potential points of harm stem from instances where an AI companion diverges from the expectations of human relationships. Is that fair?

Knox: I wouldn’t necessarily say that frames everything in the paper.

We categorize something as harmful if it results in a person being worse off in two different possible alternative worlds: One where there’s just a better designed AI companion, and the other where the AI companion doesn’t exist at all. And so I think that difference between human interaction and human-AI interaction connects more to that comparison with the world where there’s just no AI companion at all.

But there are times where it actually seems that we might be able to reduce harm by taking advantage of the fact that these aren’t actually humans. We have a lot of power over their design. Take the concern with them not having natural endpoints. One possible way to handle that would be to create positive narratives for how the relationship’s going to end.

We use Tamagotchis, the popular late-’90s virtual pets, as an example. In some Tamagotchis, if you take care of the pet, it grows into an adult and partners with another Tamagotchi. Then it leaves you and you get a new one. For people who are emotionally wrapped up in caring for their Tamagotchis, that narrative of maturing into independence is a fairly positive one.

Embodied companions like desktop devices, robots, or toys are becoming more common. How might that change AI companions?

Knox: Robotics at this point is a harder problem than creating a compelling chatbot. So, my sense is that the level of uptake for embodied companions won’t be as high in the coming few years. The embodied AI companions that I’m aware of are mostly toys.

A potential advantage of an embodied AI companion is that physical location makes it less ever-present. In contrast, screen-based AI companions like chatbots are as present as the screens they live on. So if they’re trained similarly to social media to maximize engagement, they could be very addictive. There’s something appealing, at least in that respect, of having a physical companion that stays roughly where you left it last.

Brad Knox posing with a humanoid and a small owl-like robot. Knox poses with the Nexi and Dragonbot robots during his postdoc at MIT in 2014. Paula Aguilera and Jonathan Williams/MIT

Anything else you’d like to mention?

Knox: There are two other traits I think would be worth touching upon.

Potentially the largest harm right now is related to the trait of high attachment anxiety—basically jealous, needy AI companions. I can understand the desire to make a wide range of different characters—including possessive ones—but I think this is one of the easier issues to fix. When people see this trait in AI companions, I hope they will be quick to call it out as an immoral thing to put in front of people, something that’s going to discourage them from interacting with others.

Additionally, if an AI comes with limited ability to interact with groups of people, that itself can push its users to interact with people less. If you have a human friend, in general there’s nothing stopping you from having a group interaction. But if your AI companion can’t understand when multiple people are talking to it and it can’t remember different things about different people, then you’ll likely avoid group interaction with your AI companion. To some degree it’s more of a technical challenge outside of the core behavioral AI. But this capability is something I think should be really prioritized if we’re going to try to avoid AI companions competing with human relationships.

Watch Artemis II Live: When is NASA’s Historic Moon Launch?

NASA’s Artemis II Space Launch System (SLS) rocket and Orion spacecraft and the launch gantry at the Kennedy Space Center in Florida on March 31, 2026. NASA/Keegan Barber

Fifty-four years after the last Apollo mission to the moon, NASA’s Artemis II mission is set to return. The Space Launch System rocket carrying the Orion spacecraft is scheduled to take off from the Kennedy Space Center in Florida on Wednesday afternoon. The four-person crew, made up of American and Canadian astronauts, will be 250,000 miles from Earth at its farthest point in the journey to orbit the moon. This is everything you need to know about NASA’s mission, its dreams for a future lunar base and this new age of space exploration.

How to watch Artemis II moon launch

Takeoff is scheduled for Wednesday at 6:24 p.m. ET / 3:24 p.m. PT from NASA’s Kennedy Space Center in Cape Canaveral, Florida. Delays are common during launches, especially due to weather, so we’ll keep this story updated if the takeoff time changes.

You can watch the livestream on NASA’s YouTube, official website and social media accounts. If you’re looking for coverage in Spanish, check out NASA’s Spanish YouTube channel.

How to Watch NASA’s Artemis II Mission chart. Here are all the ways you can keep up with the Artemis II mission. NASA

What to expect from this mission to the moon

The Artemis II mission is designed to orbit the moon on a 10-day trip. The astronauts will not touch down on the moon’s surface this time, but they will test the spacecraft’s life support systems for the first time, according to NASA. This mission also sets the stage for future Artemis missions, including Artemis IV, scheduled for 2028, which should put humans back on the moon.

We’ll keep this story up to date with all the latest Artemis II news, so check back here today and throughout the week for updates.

OpenAI closes larger than expected funding round of $122bn

The giant funding round gives OpenAI a post-money valuation of $852bn.

Artificial intelligence company OpenAI has announced the closure of a recent funding round at $122bn, exceeding the projected figure of $110bn. 

The round was backed by strategic partners Amazon, Nvidia and SoftBank, with continued participation from OpenAI’s long-term partner Microsoft. SoftBank co-led the round alongside a16z, DE Shaw Ventures, MGX, TPG and accounts advised by T Rowe Price Associates. There was also participation from several global institutions.

For the first time, OpenAI extended participation to investors through banking channels, raising more than $3bn from individual investors. The funding round gives OpenAI a post-money valuation of $852bn, the company said. 

In a post about the announcement, OpenAI said, “This is commercial scale and it is mission scale. The fastest way to widen the benefits of AI is to put useful intelligence in people’s hands early and let that access compound globally. 

“AI is driving productivity gains, accelerating scientific discovery and expanding what people and organisations can build. This funding gives us the resources to continue to lead at the scale this moment demands.”

The announcement comes at a time when OpenAI is calling a halt to specific features and products, as it aims to better manage costs and reprioritise resources. For example, plans for an erotic ChatGPT were reportedly put on hold indefinitely, as OpenAI elected to carry out additional research and to address concerns from staff and investors. 

Additionally, in late March, the company revealed plans to shut down its controversial AI video generator Sora just a few months after announcing a multi-year licensing deal with Disney. OpenAI explained that by ending the feature, the organisation can redirect its focus onto other projects.

OpenAI is facing significant challenges from rivals in the AI space, and news recently surfaced of the company’s plans to combine its AI chatbot, coding tool and web browser into a desktop ‘superapp’.

Sources noted that the move is intended to counter harsh competition from the AI giant’s rivals, such as Anthropic. 

Robinhood sues WA state to block enforcement of gambling laws against prediction markets

Robinhood isn’t waiting to get sued in Washington state. 

The financial services company filed a preemptive federal suit against Washington’s attorney general and gambling commission, arguing the state can’t use its gambling laws to shut down prediction market trading that it contends is authorized under federal commodities law.

The suit comes a few days after Washington Attorney General Nick Brown sued prediction market platform Kalshi in state court. The state takes the position that event contracts — which let users wager on the outcome of real-world activities ranging from NFL games to elections to the number of measles cases in a given year — amount to illegal gambling.

In its lawsuit, filed March 30 in U.S. District Court in Tacoma, Wash., Robinhood argues that federal law preempts Washington’s gambling statutes as applied to event contracts traded on exchanges regulated by the Commodity Futures Trading Commission. 

Robinhood Markets, based in Menlo Park, Calif., is known for popularizing commission-free stock trading. The suit was filed by its Chicago-based subsidiary, Robinhood Derivatives.

The company, which is registered with the CFTC as a futures commission merchant, offers event contracts through the Kalshi and ForecastEx exchanges and says it plans to launch trading on a third exchange, Rothera, later this year, according to the complaint.

Pre-emptive move: The company points to the Kalshi suit and a December warning from the state Gambling Commission declaring prediction markets “unauthorized” as evidence that enforcement against the company is imminent.

The complaint was filed on behalf of Robinhood by the law firms Davis Wright Tremaine in Seattle and Cravath, Swaine & Moore in New York.

Robinhood’s suit cites Brown’s statement, at a press conference last week, that Kalshi is “just a bookie with a fancy name, and a huge amount of venture capital behind them.”

The suit says the company had “no choice but to file this lawsuit to protect its customers and its business.”

“[W]e believe in the power of prediction markets and the important role they play at the intersection of trading, news, economics, politics, culture, and sports,” a Robinhood spokesperson said via email, noting that the markets are federally regulated. “This step, consistent with our past actions in other jurisdictions, aims to preserve access for customers in Washington.”

GeekWire has reached out to the Washington AG’s office for comment.

Broader landscape: The case is part of a national wave of litigation over prediction markets. Kalshi is fighting more than 20 civil lawsuits, and Arizona’s AG filed criminal charges last month. 

Courts are split on the issue. Federal judges in New Jersey and Tennessee, for example, have ruled that states cannot enforce their gambling laws against federally regulated prediction markets, while state courts in Massachusetts and Ohio have ruled that they can.

Washington state has staked out a broader position than other states in this fight, arguing that all event contracts — not just sports bets — are illegal under state law. Other states have focused their enforcement on sports-related contracts specifically.

A bipartisan bill introduced last week by Sens. Adam Schiff (D-Calif.) and John Curtis (R-Utah) would ban sports betting on prediction market platforms.

Read the full complaint below.

Robinhood v. WA state by GeekWire

Are Mini Leaf Blowers ACTUALLY Worth It?

Leaf blowers are pretty versatile tools for keeping your yard tidy, blowing snow off your car, or anything else that needs a healthy measure of forced air. Unfortunately, your average blower is pretty big and unwieldy, which is a problem for anyone short on storage space. And for people with limited mobility or noise restrictions, a smaller, more compact leaf blower might be just the ticket.

Enter the mini leaf blower. We all know that a full-size leaf blower packs some power, but how does a tiny handheld version perform? Does the reduction in size make it less useful? In this video, we take a couple of mini leaf blowers purchased online, put them through the wringer, and see what each one is capable of.

Will the name brand come out on top, or will a lesser-known brand take the crown? More importantly, are mini leaf blowers even worth it compared to their full-size counterparts?

Meta and YouTube found liable in landmark social media addiction trial

Mark Lanier, the folksy Texas litigator who doubles as a part-time pastor, held a jar of M&Ms in front of the Los Angeles jury and told them that each one represented a billion dollars of Meta’s market capitalisation. There were, by that maths, roughly 1,400 sweets in the jar. The jury awarded his client six of them. The question now stalking Silicon Valley is what happens when the other jars start to empty.

On Wednesday 25 March, a California jury found Meta and Google liable on all counts in the first bellwether trial to test whether social media platforms can be treated as defective products, engineered, like a faulty car seat or a contaminated drug, to cause harm. The plaintiff, a 20-year-old woman identified only as K.G.M. and referred to in court as Kaley, told the jury she had begun using YouTube at six years old and Instagram at nine, and that the platforms had amplified personal struggles into body dysmorphia, depression, and suicidal thoughts. After nine days of deliberation, 43 hours in total, the jurors agreed.

The damages were modest by big-tech standards: $3 million in compensatory damages and $3 million in punitive damages, split 70-30 between Meta and Google. Meta’s share amounts to $4.2 million against a company whose market capitalisation, at the time of the verdict, stood at approximately $1.4 trillion. But the financial significance of the ruling lies not in what was awarded but in what it unlocked. More than 10,000 individual cases and nearly 800 school-district claims are pending in federal multidistrict litigation, with eight further bellwether trials scheduled for the months ahead. The verdict establishes, for the first time, that a jury will accept the legal theory that social media apps should be treated as products whose design is inherently defective.

The ruling landed one day after a separate jury in Santa Fe, New Mexico, ordered Meta to pay $375 million in civil penalties, $5,000 per violation, after finding the company had violated state consumer-protection laws by enabling child sexual exploitation on Facebook and Instagram. New Mexico became the first state to prevail at trial against a social media company over child-safety concerns. Evidence presented during that six-week trial included internal Meta documents and testimony from former employees establishing that the platform’s design features had enabled predators to target minors. A bench trial on the state’s remaining claims against Meta is scheduled to begin on 4 May.

The back-to-back verdicts sent Meta’s stock into its steepest decline in more than two years. Shares fell 6.8 per cent the day after the Los Angeles verdict, continued sliding to an 8 per cent drop the following day, and finished the week down 11 per cent. By month’s end, Meta was down 19 per cent, having shed roughly $310 billion in market value. Analysts at JPMorgan and Goldman Sachs began revising their price targets, citing what they described as unquantifiable tail risk from the cascade of litigation now using the verdict as a template.

Inside Meta, the verdict is viewed as a disappointment rather than a crisis — at least publicly. The company had entered the trial confident in its position, arguing that Kaley’s struggles with family and school predated her use of Instagram and that reducing something as complex as teen mental health to a single cause risked leaving broader issues unaddressed. A spokesperson told the BBC that many teenagers rely on digital communities to find belonging. Meta said it would appeal, and gave no indication it would settle future cases or alter its product design.

Google took a different tack, arguing that YouTube had been mischaracterised in the trial. YouTube is “a responsibly built streaming platform, not a social media site,” the company said — a distinction the jury evidently did not find persuasive. Both companies will have the opportunity to refine their legal arguments as the bellwether programme continues, but the evidentiary record from Kaley’s trial, including internal documents in which Meta executives discussed efforts to attract and retain young users, can now be drawn on in subsequent proceedings.

TikTok and Snapchat’s parent company Snap Inc had been co-defendants in the case but settled before the trial began. The settlement amounts remain undisclosed, and neither company admitted liability, but the decision to resolve their exposure before a jury could weigh in suggests their legal teams reached a different calculus than Meta’s. Both companies remain defendants in several upcoming bellwether trials.

The broader implications extend well beyond courtroom damages. Eric Goldman, an associate dean and professor of law at Santa Clara University, told the BBC he viewed the social media addiction cases as a potentially existential threat to the industry’s current business model. The social media industry, Goldman wrote after the verdict, “faces existential legal liability and inevitably will need to reconfigure their core offerings if they can’t get broad-based relief on appeal.” Former Twitter executive Bruce Daisley framed the structural problem more bluntly: two decades of growth had produced businesses “geared for trying to force people to spend more and more time” on their platforms, and any regulation or litigation that threatened that engagement model became a problem to be neutralised through lobbying and public relations.

The legal reckoning arrives at a moment when the technology industry’s relationship with regulators is already under severe strain. Australia’s social-media age ban, which took effect in December 2025, has prompted enforcement actions against five platforms for non-compliance. The European Union’s Digital Services Act and AI Act are imposing new obligations that many companies have struggled to meet. The NIS2 Directive has expanded cybersecurity regulatory scope across eighteen sectors. And the US Congress, where Meta chief executive Mark Zuckerberg was meeting Senate Majority Leader John Thune on the day the verdict landed, continues to weigh federal age-verification and platform-liability legislation.

What distinguishes the litigation from the regulatory push is that juries, unlike legislators, do not negotiate. They decide. And in Los Angeles last week, twelve citizens decided that the products Meta and Google built were defective, that the companies knew they were defective, and that a young woman was harmed as a result. The $6 million penalty is a rounding error for companies worth more than the GDP of most nations. The legal precedent is not.

As Kaley’s attorney Jayne Conroy told the BBC after the verdict: there is, right now, a lot of maths going on in boardrooms at Meta, Google, Snap, and TikTok.

Spotify Finally Tries Hi-Fi: Lossless Listening Lounge in London Built Around Horn Speakers and Bryston Power

Spotify has spent the better part of two decades convincing the world that convenience beats fidelity. Now it wants you to sit down, shut up, take your shoes off, and listen. Really listen. Inside its London headquarters, the company has opened a 30-seat Listening Lounge designed to showcase its long-delayed lossless tier, which finally arrived in 2025 after competitors like TIDAL, Qobuz, and Apple Music had already moved on to higher ground. Timing has never been Spotify’s strong suit when it comes to sound quality, but at least it showed up.

The Listening Lounge is invite-only, which feels about right. Spotify Premium users and “top fans” get the nod, assuming they want to trade playlists and background noise for something resembling focus. The room is built around album-centric sessions and curated listening events, which Spotify now calls “intentional listening.” Audiophiles have been calling it Tuesday night since 1978, but sure, let’s rebrand it and roll it out to the press who will eat it up like horseradish on gefilte fish at the Passover seder. On second thought — stick with the biltong and some mustard.

To Spotify’s credit, it didn’t cheap out on the system. This isn’t a soundbar and some mood lighting. The setup leans hard into old-school hi-fi: custom horn-loaded speakers from Friendly Pressure, Bryston 3B Cubed power amplifiers, a PrimaLuna DAC paired with an Evo 400 tube preamp, and a Bluesound Node Icon handling streaming duties.

The speakers are big, unapologetic, and built around Alnico drivers and compression horns that don’t care about your furniture layout or your neighbors. This is two-channel stereo with no Atmos tricks, no DSP safety net, and no interest in pretending otherwise. Left, right, and whatever your ears can handle.

The room itself plays along. Designed with a Japanese vinyl bar aesthetic, the system sits elevated like some kind of altar, because apparently we’re doing ritual now. Acoustic treatment is handled seriously, reflections are controlled, distractions minimized. And yes, you take your shoes off. Nothing says “we’re serious about lossless audio” quite like white socks from Marks & Spencer on a polished floor while a tube preamp warms the room.

The timing of all this is hard to ignore. Spotify’s lossless rollout wasn’t early. It wasn’t even competitive. It was late. While others were pushing 24-bit streams and building credibility with listeners who actually care about sound, Spotify leaned into scale, algorithms, and playlists designed for people who don’t want to think too hard about what they’re hearing. Now that fidelity has become “important,” Spotify is doing what large companies do best. Build an experience, control the narrative, invite the right people, and hope nobody remembers how they had to be dragged kicking and screaming into the room.

The system, however, does its job. Reports point to serious dynamics, scale that fills the room, and a level of clarity that makes lossless audio feel like more than a marketing checkbox. Horn speakers bring speed and impact, along with a presentation that can get a little sharp if the recording demands it. That’s the trade-off. This setup doesn’t smooth things over or make bad recordings sound polite. It tells the truth, whether you like it or not. There’s a lesson there for the high-end audio community.

This whole exercise isn’t really about a room in London. It’s about positioning. Spotify wants to be seen as a company that understands high-end audio, not just one that delivers background music between podcasts and ads. It wants a seat at the same table as services that built their reputations on fidelity, not convenience. That’s a tough pivot when your entire business model was built on making music easier, faster, smaller, and rather crappy sounding.

The Bottom Line

The Listening Lounge is impressive. The system is real. The intent is finally pointed in the right direction. But there’s an unavoidable edge of irony here. Audiophiles have been building rooms like this for decades without the need for an invite list or a press release. Spotify didn’t invent serious listening. It just discovered that it matters.

And that’s where this either becomes something meaningful or just another well-lit detour. Spotify isn’t a niche player trying to earn credibility. It’s the largest music streaming platform on the planet, with hundreds of millions of users, more than all of its direct competitors combined. If lossless audio actually matters to the company, this can’t stop at a single curated room in London with a guest list and a carefully controlled narrative. That’s not a movement. That’s a demo.

Because the real test isn’t what happens inside that room. It’s what happens outside of it. Does Spotify push lossless as a core feature across the platform, front and center, where its massive user base can actually engage with it? Does it educate listeners on why better sound quality matters? Does it integrate that experience into everyday listening in a way that doesn’t require an invitation and a plane ticket?


Right now, it feels like Spotify is trying to prove something—to the press, to the industry, maybe even to itself. But if this is going to land, it needs to scale beyond a showcase and become part of the product story in a real, unavoidable way. Otherwise, this Listening Lounge risks being remembered for what it looks like today: a very expensive reminder that Spotify showed up late and is still figuring out how serious it wants to be.


Tech

Aspyr: Hey, Those Crappy Tomb Raider Remastered Outfits Were Made By Our Artists, Not AI!


from the McPromptism dept

I’m going to trust that most of our audience will have some idea of what McCarthyism was in the 1950s. To summarize very briefly, it was an anti-communist campaign that spread into a broader anti-leftist crusade throughout the country, with a specific focus on driving supposed communist influences out of major American media, such as radio and Hollywood. This produced a public that was hyper-vigilant in hunting for supposed communists everywhere, as well as plenty of false accusations of communist activity deliberately leveled at people for personal reasons. This rabid, frothy-mouthed era of suspicion became a major stain on America in the 1950s.

I’m watching a version of this begin to take form around artificial intelligence. I know, I know: there are very real dangers and negative outcomes that could come from AI. That was true of communism and our Cold War enemy in the Soviet Union as well. My point is not that AI is great all the time and any pushback against it is invalid. Instead, my point is that we’re starting to see what I’ll call McPromptism, where some percentage of the public looks for AI everywhere it can and, if use is suspected, immediately decries it as terrible and demands that people not engage with the supposed user.

And just like McCarthyism, McPromptism gets its accusations wrong sometimes. You can see a version of that in the story of Aspyr’s remastering of old Tomb Raider games and the horrible outfits that were produced for the protagonist, Lara Croft.

Earlier this week we reported on fan reaction to the latest update to the Tomb Raider I-III Remastered collection, in which the game received a new Challenge Mode, while Lara received a suite of new outfits to wear as rewards. And oh wow, they were bad. Comically bad. So bad, in fact, that one of the remaster’s original artists posted on X to distance himself and his colleagues from the dross. Alongside all of this was the suspicion that genAI might have been involved in the fits’ creation, given just how dreadful they looked. Publisher Aspyr has now finally responded to the claims to insist no AI was used at all, instead stating they were created by “our team of artists.” Which raises more questions.

If you want to see a somewhat humorous look at the outfit textures that are the subject of public complaint, here you go.


On the one hand, for someone like me who is not into the anti-AI dogma out there, it is objectively funny for some people to point at bad video game textures and claim they’re so bad because they’re obviously created using generative AI… only to have the company that made them say, “Nuh uh! It was our human employees who made them!” It’s almost Monty-Python-esque, in a way.

But this default assumption among some in the gaming public that “this thing in gaming is bad, so it must have been made using AI!” is just one more kind of silly that’s out there right now. Aspyr doesn’t exactly have a perfect reputation when it comes to remastering games, after all, and it built that reputation long before genAI came along.

It seems clear that this was a case of images being released to promote the remastered game that Aspyr didn’t live up to in the actual game itself. No AI, just human beings not hitting the mark. It happens all the time. Hell, there is even a chance that AI could have done a better job. Not a certainty by any stretch, but a possibility.


But the real takeaway from this otherwise minor episode, for me, was the McPromptism misfire. If you’re going to rage against the literal machine in the video gaming industry, which I think is the wrong stance to take anyway, at least let it be righteous rage.

Filed Under: ai, mcpromptism, tomb raider, video games

Companies: aspyr


Tech

Watch NASA count down to the launch of humanity’s first moon voyage in nearly 54 years


NASA’s Space Launch System rocket stands on its launch pad in preparation for the Artemis 2 moon launch. (NASA Photo / Bill Ingalls)

After years of postponements and close to $100 billion in spending, NASA is finally counting down to its first attempt to send astronauts around the moon since Apollo 17 in 1972.

The 10-day Artemis 2 mission is set to begin today with the liftoff of NASA’s Space Launch System rocket from NASA’s historic Launch Complex 39B at Kennedy Space Center in Florida. The two-hour launch window opens at 6:24 p.m. ET (3:24 p.m. PT), and NASA is streaming live mission coverage of the countdown on two different YouTube channels.

NASA has fueled up the 322-foot-tall SLS rocket with liquid hydrogen and oxygen, and there’s an 80% chance of acceptable weather for launch. Rain showers are the main concern.

Artemis 2 is the first crewed test flight in a series leading up to a moon landing that’s currently scheduled for 2028. It follows Artemis 1, which sent a crewless Orion space capsule around the moon in 2022. This time, four astronauts will be riding inside Orion: NASA mission commander Reid Wiseman, NASA astronauts Christina Koch and Victor Glover, and Canadian astronaut Jeremy Hansen. Koch will be the first woman to go beyond Earth orbit, and Hansen will be the first non-American to do so.

Although the astronauts won’t be landing on the lunar surface, they’ll follow a figure-8 trajectory that will send them 4,700 miles beyond the far side of the moon and make them the farthest-flung travelers in human history.


Last week, NASA Administrator Jared Isaacman laid out a plan for establishing a permanent base on the moon and preparing for even farther trips into the solar system. On the eve of the launch, Isaacman played up the significance of Artemis 2 in that plan. “The next era of exploration begins,” he said in a post to X.

Senior test director Jeff Spaulding, a veteran of the space shuttle program, said he was looking forward to the mission. “I’m excited about going to the moon,” he told reporters. “I’m excited about establishing a presence there. It’s something that I have had a desire for, for a great many years — and then to get humans out to Mars as well.”

The health of the Artemis 2 astronauts will be monitored during the flight to gauge the effects of deep-space travel. The crew will also assess Orion’s performance and practice in-flight safety procedures. For example, they’ll rehearse the protocol for taking shelter from radiation storms that might flare up during trips beyond Earth’s protective magnetosphere. They’ll also participate in experiments and make observations of the moon’s far side.

“They’re going to be able to see the whole moon as a lunar disk on the lunar far side,” Marie Henderson, lunar science deputy lead for the Artemis 2 mission, said in a NASA video. “So, that’s a brand-new, unique perspective that humans haven’t been able to look at before.”


At the end of the trip, the crew and their Orion capsule are due to splash down in the Pacific Ocean off the California coast. They’ll be brought to a recovery ship for medical checkouts and their return to shore, following a routine that became familiar during the Apollo era.

Artemis 2 is about the history of America’s space program as well as its future. The round-the-moon mission profile matches that of Apollo 8, which served as a unifying event for a nation riven by the social tumult of the time. That mission’s commander, Frank Borman, reported receiving a telegram reading, “Congratulations to the crew of Apollo 8. You saved 1968.” Notably, less than a third of Americans living today were around when Apollo 8 flew.

The main motivation for the Apollo program was America’s superpower competition with the Soviet Union, and today, the geopolitical stakes are similarly high. NASA and the White House are seeking to jump-start progress on Artemis in part because China is targeting a crewed moon landing by 2030.

Sen. Maria Cantwell, D-Wash., said this week during a visit to Seattle-area suppliers for the Artemis program that it’s important for America to get to the moon first. “We’re trying to get the best real estate on the moon,” she said. “So, to do that, you’ve got to get up there to claim it.”


The course of the Artemis program, which is named after the goddess of the moon and the twin sister of Apollo in Greek mythology, hasn’t always run smooth. When the program was given its name in 2019, the Artemis 2 mission was planned for 2022 or 2023, with the moon landing scheduled for 2024. The cost of the program has been estimated at $93 billion through 2025, with each Artemis launch costing $4.1 billion.

Artemis 2’s launch team ran into several challenges during this year’s preparations. Liftoff was initially scheduled for February, but a liquid hydrogen leak forced NASA to push the launch to March. The date slipped again when a helium pressurization problem required rolling the rocket back for repairs. The SLS was brought back out to the pad on March 20, and preparations have gone smoothly since then.

Several companies with a presence in the Seattle area are banking on Artemis’ success. For example, a facility in Redmond operated by L3Harris (previously known as Aerojet Rocketdyne) builds thrusters for the Orion spacecraft and is already working ahead on the Artemis 8 mission.

Boeing is the lead contractor for the SLS rocket’s core stage. Karman Space & Defense in Mukilteo provides hatch release mechanisms and parachute deployment hardware for Orion. And Jeff Bezos’ Blue Origin space venture, based in Kent, is developing a Blue Moon lander that future Artemis crews could ride to the lunar surface.


Blue Origin’s New Glenn rocket is expected to send an uncrewed cargo version of its lander to the moon sometime in the next few months.

Read more: Artemis 2 gets a push from Pacific Northwest tech


Tech

Engineer Slips Lightning Back Into the iPhone 17 Pro With One Inventive Case


[Image: iPhone 17 Pro Lightning port case]
Ken Pillonel, a Swiss engineer, has struck again. He’s well-known for retrofitting outdated iPhones with creative add-on cases, which he even sells. This time, however, he turned the tables. On April 1st, he completed a totally new prototype in just a few days: a slim protective cover that gives the iPhone 17 Pro a working Lightning port, right where Apple moved on from it.



If you’ve recently updated from an iPhone 14 or earlier, you understand the pain. All of those old cords, docks, and chargers you used to love are now rendered worthless unless you carry a separate adapter with you everywhere. Pillonel effectively solved the challenge by working in reverse. Instead of forcing the phone to use a newer plug, he designed a cover that allows Lightning cables to plug right in while the iPhone 17 Pro remains safely tucked inside its USB-C shell.

It all starts with some careful effort on the electronics side. He designed tiny custom circuit boards to shrink a standard USB-C to Lightning adapter down to almost nothing. These boards sit inside the bottom edge of the case and add only a few millimeters of thickness. Next came the case itself, printed in flexible TPU on a high-end 3D printer that is good at reducing waste. He also made a little jig to help position the MagSafe magnets correctly, and when he snapped everything together, it fit perfectly, no tools required.

When it’s all put together, the case feels exactly like any other you’d get in a store: soft to the touch and durable enough for daily use. When you slide the iPhone 17 Pro inside, the internal connectors align neatly with the phone’s USB-C port. Plugging a Lightning cable into the new port on the outside just works; power flows exactly as it would on an older model. Charging works well, as he demonstrated in his full build video; now he just needs to test data transfer and other accessories.

Pillonel never meant to sell this one. He refers to the finished piece as one of the oddest things he has ever put together, a tongue-in-cheek reference to Lightning’s official departure from the roster years ago. Nonetheless, the project illustrates a wider point. With some work and the correct parts, compatibility gaps between old and new technology can be bridged in inventive ways that keep favorite accessories alive.

Tech

Drawing Tablet Controls Laser In Real-Time


Some projects need no complicated use case to justify their development, and so it was with [Janne]’s BeamInk, which mashes a Wacom pen tablet with an xTool F1 laser engraver with the help of a little digital glue. For what purpose? So one can use a digital pen to draw with a laser in real time, of course!

Pen events from the drawing tablet get translated into a stream of G-code that controls laser state and power.

Here’s how it works: a Python script grabs events from a USB drawing tablet via evdev (the Linux kernel’s event device, which allows user programs to read raw device events), scales the tablet size to the laser’s working area, and turns pen events into a stream of laser power and movement G-code. The result? Draw on tablet, receive laser engraving.
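The coordinate-scaling and G-code-generation step can be sketched in a few lines of Python. This is a minimal illustration, not BeamInk’s actual code: the axis range, working area, and function names here are all hypothetical, and the real script reads the tablet’s axis limits from the device via evdev rather than hard-coding them.

```python
# Hypothetical tablet axis range and laser working area; BeamInk queries
# these from the actual hardware instead of hard-coding them.
TABLET_MAX_X, TABLET_MAX_Y = 21600, 13500  # raw tablet coordinate range
WORK_W_MM, WORK_H_MM = 115.0, 115.0        # assumed engraver working area (mm)

def scale(x: int, y: int) -> tuple[float, float]:
    """Map raw tablet coordinates onto the laser's working area in mm."""
    return (x / TABLET_MAX_X * WORK_W_MM, y / TABLET_MAX_Y * WORK_H_MM)

def pen_event_to_gcode(x: int, y: int, pressure: int, max_pressure: int = 4096) -> str:
    """Turn one pen sample into a G-code line: a move, with laser power
    derived from pen pressure (S word, 0-255)."""
    gx, gy = scale(x, y)
    power = int(pressure / max_pressure * 255)
    if power == 0:
        # Pen lifted: rapid move with the laser off.
        return f"G0 X{gx:.2f} Y{gy:.2f}"
    # Pen down: engraving move at pressure-scaled power.
    return f"G1 X{gx:.2f} Y{gy:.2f} S{power}"

# Example: pen at the tablet's centre, at half pressure.
print(pen_event_to_gcode(10800, 6750, 2048))  # → G1 X57.50 Y57.50 S127
```

In the real project, lines like these are streamed to the engraver as the pen moves, which is what makes the drawing appear in real time rather than as a batch job.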

It’s a playful project, but it also exists as a highly modular concept that can be adapted to different uses. If you’re looking at this and sensing a visit from the Good Ideas Fairy, check out the GitHub repository for more technical details plus tips for adapting it to other hardware.

We’re reminded of past projects like a laser cutter with Etch-a-Sketch controls as well as an attempt to turn pen marks into laser cuts, but something about using a drawing tablet for real-time laser control makes this stand on its own.



Copyright © 2025