After testing both, is the choice easy?

Looking for a new pair of Sony earbuds but aren’t sure whether to splurge on the latest model or save on the older WF-1000XM5? We’re here to help.

We’ve reviewed both the Sony WF-1000XM6 and the WF-1000XM5 to help you decide which earbuds are a better fit for you.

If you’re not convinced by either Sony pair, then visit our best headphones and best wireless earbuds guides instead.

Price and Availability

The Sony WF-1000XM6 earbuds are the newer of the two and, unsurprisingly, carry a higher price tag at £249/$249.

Although the WF-1000XM5s are the older pair, they’re still readily available to buy. Not only that, but although the earbuds’ official RRP is £199/$199, it’s not hard to find a hefty price cut for them. For example, at the time of writing, the XM5 buds were just £169 on Sony’s official site.

Design

  • Sony WF-1000XM6s are chunkier, though slimmer in profile
  • Both are comfortable to wear, although the XM6 buds can be more fiddly to fit
  • Both are IPX4 rated

Although both the WF-1000XM6 and the XM5 are relatively slim and definitely pocketable, there are a few notable differences between the two. 

Firstly, due to the additional microphone, the XM6 model is slightly chunkier than its predecessor, which can consequently make the earbuds fiddly to wear and fit correctly. While we never noted an issue with comfort, we did struggle to get a perfectly airtight seal for ANC: using the Sony Sound Connect app, we found the earbuds struggled to pass Sony’s strict test for a suitable seal. It’s frustrating, but fortunately doesn’t seem to impede the ANC too much – more on that later.

Otherwise, both earbuds are fitted with responsive touch controls that cover playback, switching between ANC modes, volume control and more, all of which can be customised via the companion app.

In addition, both earbuds are fitted with the same stiffer ear-tips, which aim to plug your ears more effectively than silicone alternatives, and both have an IPX4 rating, meaning they can withstand sweat and raindrops.

Winner: Sony WF-1000XM5

Features

  • Both earbuds are packed with features, including Speak to Chat, Adaptive Sound Control and voice assistants
  • Both also support 360 Reality Audio and can be connected to two devices at once

With both, you’ll benefit from the likes of Quick Attention Mode, Speak to Chat and Adaptive Sound Control. There’s also head gesture control, your choice of voice assistant and a clever Find Your Equalizer feature that allows you to adjust the sound more intuitively than playing around with bands and frequencies.

Both Sony earbuds are controlled via the Sound Connect smartphone app, which allows you to customise touch controls, noise-cancellation modes and the Bluetooth connection too. While we wish the app were a bit more streamlined, overall it’s a solid companion piece to the buds.

Sony Sound Connect app. Image Credit (Trusted Reviews)

One especially interesting feature in the app is the Discover section, which shows your listening history across all music services, logs how long you use the headphones and includes badges to help gamify the experience too. How useful this is will really depend on your personal preference, but it shows just how feature-packed the buds are.

Winner: Tie

Sound Quality

  • WF-1000XM6 has a larger 8.4mm driver
  • Both offer a clear, balanced approach across the frequency range, though the XM6 has improved highs
  • Overall, the XM6 is more vibrant and energetic compared to the XM5

Although there are differences between the two, it’s worth noting that both the XM6 and XM5 are brilliant sounding earbuds. However, thanks to the larger 8.4mm driver at play here, the XM6 offers a wider soundstage compared to the XM5. In fact, we found that not only were highs improved, with more clarity and detail, but bass felt weightier too. This is especially noteworthy, as we concluded that bass lovers might be a bit disappointed by the XM5’s more balanced approach.

In addition, we noted that at its default volume, the XM6 picks up more vibrancy, dynamism and energy than the XM5. 

Sony WF-1000XM6. Image Credit (Trusted Reviews)

None of this, however, is to say the XM5s don’t sound good – quite the opposite – it’s just that the XM6 has refined the overall quality.

Winner: Sony WF-1000XM6

Noise Cancellation

  • WF-1000XM6 has one additional microphone for noise cancelling
  • Although the XM5s are easier to wear, the XM6s offer overall stronger noise-cancellation
  • Call quality is also stronger with the XM6s

Sony claims the WF-1000XM6 offers the “best true wireless for noise-cancellation”, and we’re confident in saying that they are, in fact, among the quietest pairs of earbuds we’ve reviewed. While getting the right fit can be fiddly, as mentioned earlier, over the weeks we’ve found the earbuds manage to curb outside noises like traffic, voices and even planes brilliantly.

Overall, the XM6s are a solid improvement over the XM5s for noise cancellation, though we should note that the older pair is easier to wear.

Sony WF-1000XM5. Image Credit (Trusted Reviews)

Call quality also sees an improvement, as we found the XM5 had a tendency to let in noise when we spoke. Fortunately, the XM6 keeps background noise out entirely during phone calls.

Winner: Sony WF-1000XM6

Battery Life

  • No improvements with the XM6
  • Both offer eight hours per charge with an additional 16 hours in the case

Sony hasn’t made any improvements to the battery life of the XM6 buds, promising the same 24 hours in total (eight hours per charge, plus 16 from the case) as the XM5. Having said that, we actually found the XM6 seemed likely to exceed Sony’s claims, with an hour of listening still showing a 100% charge.

The XM5 actually benefits from slightly faster charging, with a three-minute charge resulting in an extra hour of playback, whereas the XM6 needs five minutes. The difference is negligible, but if you find yourself in a pinch you’ll definitely be thankful.
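
Those quick-charge figures are easy to put on a common footing. A quick back-of-the-envelope sketch, using only the numbers quoted above (the per-minute framing is our own, not Sony's):

```python
# Compare the quoted quick-charge figures: both pairs regain 60 minutes
# of playback, but the XM5 needs a 3-minute charge and the XM6 needs 5.

PLAYBACK_MINUTES = 60

xm5_rate = PLAYBACK_MINUTES / 3  # playback minutes gained per minute of charging
xm6_rate = PLAYBACK_MINUTES / 5

print(f"XM5: {xm5_rate:.0f} minutes of playback per minute of charge")
print(f"XM6: {xm6_rate:.0f} minutes of playback per minute of charge")
```

In other words, the XM5 tops up roughly 20 minutes of playback for every minute in the case against 12 for the XM6 – negligible in day-to-day use, as noted above.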

Winner: Tie

Verdict

Although they’re slightly chunkier and can be quite fiddly to wear initially, the Sony WF-1000XM6 buds are a brilliant upgrade from the WF-1000XM5 pair. Not only is the ANC among the best we’ve ever tested, but the sound is more vibrant and dynamic than its predecessor’s.

Having said that, the XM6 buds do come with a hefty price tag. So, if you’re on a tighter budget, the XM5 is a brilliant compromise. 

Sam Altman responds to ‘incendiary’ New Yorker article after attack on his home

OpenAI CEO Sam Altman published a blog post on Friday evening responding to both an apparent attack on his home and an in-depth New Yorker profile raising questions about his trustworthiness.

Early Friday morning, someone allegedly threw a Molotov cocktail at Altman’s home. No one was hurt in the incident, and a suspect was later arrested at OpenAI headquarters, where he was threatening to burn down the building, according to the San Francisco Police Department.

While the police have not identified the suspect publicly, Altman noted that the incident came a few days after “an incendiary article” was published about him. He said someone had suggested that the article’s publication “at a time of great anxiety about AI” could make things “more dangerous” for him.

“I brushed it aside,” Altman said. “Now I am awake in the middle of the night and pissed, and thinking that I have underestimated the power of words and narratives.”

The article in question was a lengthy investigative piece written by Ronan Farrow (who won a Pulitzer for reporting that revealed many of the sexual abuse allegations around Harvey Weinstein) and Andrew Marantz (who’s written extensively about technology and politics).

Farrow and Marantz said that during interviews with more than 100 people who have knowledge of Altman’s business conduct, most described Altman as someone with “a relentless will to power that, even among industrialists who put their names on spaceships, sets him apart.” 

Echoing other journalists who have profiled Altman, Farrow and Marantz suggested that many sources raised questions about his trustworthiness, with one anonymous board member saying he combines “a strong desire to please people, to be liked in any given interaction” with “a sociopathic lack of concern for the consequences that may come from deceiving someone.”

In his response, Altman said that looking back, he can identify “a lot of things I’m proud of and a bunch of mistakes.”

Among the mistakes, he said, is a tendency towards “being conflict-averse,” which he said has “caused great pain for me and OpenAI.”

“I am not proud of handling myself badly in a conflict with our previous board that led to a huge mess for the company,” Altman said, presumably referring to his removal and rapid reinstatement as OpenAI CEO back in 2023. “I have made many other mistakes throughout the insane trajectory of OpenAI; I am a flawed person in the center of an exceptionally complex situation, trying to get a little better each year, always working for the mission.”

He added, “I am sorry to people I’ve hurt and wish I had learned more faster.”

Altman also acknowledged that there seems to be “so much Shakespearean drama between the companies in our field,” which he attributed to a “‘ring of power’ dynamic” that “makes people do crazy things.”

Of course, the correct way to deal with the ring of power is to destroy it, so Altman added, “I don’t mean that [artificial general intelligence] is the ring itself, but instead the totalizing philosophy of ‘being the one to control AGI.’” His proposed solution is “to orient towards sharing the technology with people broadly, and for no one to have the ring.”

Altman concluded by saying that he welcomes “good-faith criticism and debate,” while reiterating his belief that “technological progress can make the future unbelievably good, for your family and mine.”

“While we have that debate, we should de-escalate the rhetoric and tactics and try to have fewer explosions in fewer homes, figuratively and literally,” he said.

Two-Week Social Media ‘Detox’ Erases a Decade of Age-Related Decline, Study Finds

Critics say social media is engineered to be as addictive as tobacco or gambling, writes the Washington Post — while adding that “the science has been moving in parallel with the court’s recognition.”

A growing body of research links heavy social media use not only to declines in mental health but to measurable cognitive effects — on attention, memory and focus — that in some studies resemble accelerated aging. Science also suggests we have more control than we realize when it comes to reversing this damage, and the solution is surprisingly simple: Take a break… “Digital detoxes” can sound like a fad. But in one of the largest studies to date, published in PNAS Nexus and involving more than 467 participants with an average age of 32, even a short time away produced striking results — effectively erasing a decade of age-related cognitive decline.

For 14 days, participants used a commercially available app, Freedom, to block internet access on their phones. They were still allowed calls and text messages, essentially turning a smartphone into a dumb phone. Their time online decreased from 314 minutes to 161 minutes, and by the end of the period the participants showed improvements in sustained attention, mental health and self-reported well-being. The improvement in sustained attention was about the same magnitude as 10 years of age-related decline, the researchers noted, and the effect of the intervention on depression symptoms was larger than that of antidepressants and similar to that of cognitive behavioral therapy.
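
The headline screen-time numbers translate into a sizeable relative drop. As a rough sketch using only the figures reported above:

```python
# Rough arithmetic on the screen-time figures reported from the
# PNAS Nexus study: average time online fell from 314 to 161 minutes.

baseline_minutes = 314
detox_minutes = 161

reduction = baseline_minutes - detox_minutes        # absolute drop
reduction_pct = 100 * reduction / baseline_minutes  # relative drop

print(f"Time online fell by {reduction} minutes, about {reduction_pct:.0f}%")
```

That is, blocking mobile internet roughly halved participants’ time online, even though calls and texts remained available.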

But two things were even more mind-blowing. Even people who cheated and broke the rules after a few days seemed to get positive effects from the break, and in follow-up reports after the two weeks, many people said the positive effects lingered. “So you don’t have to necessarily restrict yourself forever. Even taking a partial digital detox, even for a few days, seems to work,” study co-author Kostadin Kushlev said.
The article also notes a November study at Harvard, published in JAMA Network Open with nearly 400 participants, which found that even a short break can make a measurable difference: after just one week of reduced smartphone use, participants reported drops in anxiety (16.1 percent), depression (24.8 percent) and insomnia (14.5 percent).

“Other experiments point in the same direction — whether decreasing social media use by an hour a day for one week or stepping away from just Facebook and Instagram.”

I’ve tested every iPhone since the iPhone 12, and Ceramic Shield 2 is the first iPhone glass I fully trust

Marketing is one thing, but reality is quite another. Like many of us, I won’t forget the claimed “durable” microtwill of FineWoven, the shaky initial launch of Apple Maps, or the infamous butterfly keyboard that was supposedly four times more stable. Remember the promise of AirPower? Of course you don’t.

It’s worth celebrating when the real-world experience does actually live up to the hype, then. And that’s the case with Apple’s Ceramic Shield 2, the tech giant’s latest and unquestionably greatest iPhone glass.

IBM settles its DEI lawsuit with the DOJ for $17 million

IBM has agreed to settle the US Department of Justice’s accusations that the company violated civil rights laws with its DEI practices. According to a press release from the DOJ, IBM will pay more than $17 million to resolve allegations of taking “race, color, national origin, or sex” into account when making employment decisions. This settlement is the latest development in a longstanding effort from the Trump administration to end DEI programs, which was kick-started by an executive order in early 2025.

IBM denied any wrongdoing and said the settlement wasn’t an admission of liability, while the US government said this conclusion wasn’t a concession that its claims weren’t well founded, according to the settlement agreement. According to the DOJ, IBM had violated the Civil Rights Act of 1964 with practices that included altering “interview criteria based on race or sex,” developing “race and sex demographic goals for business units,” using “a diversity modifier that tied bonus compensation to achieving demographic targets” and more.

An IBM spokesperson told Engadget in an email that the company “is pleased to have resolved this matter,” adding that “our workforce strategy is driven by a single principle: having the right people with the right skills that our clients depend on.”

According to Todd Blanche, the agency’s acting attorney general, this action is one of the first resolutions to come out of the Civil Rights Fraud Initiative, which was launched in May 2025. IBM isn’t the only company to alter its policies, with both T-Mobile and Meta agreeing to put an end to their DEI initiatives last year.

What’s Your Favorite Kind Of Hack?

Talking with [Tom Nardi] on the podcast this week, he mentioned his favorite kind of hack: the community-developed open-source firmware that can be flashed into a commercial product that has crappy firmware, thus saving it. The example, just for the record, is the CrossPoint open e-book reader firmware that turns a mediocre cheap e-book reader into something that you can do anything you want with. Very nice!

And that got me thinking about “kinds of hacks” in general. Do we have a classification scheme for the hacks that we see here on Hackaday? For instance, the obvious precursor to many of Tom’s favorite hacks is the breaking-into-the-locked-firmware hack, where a device that didn’t want you loading your own firmware on it is convinced to let you do so. Junk-hacking is probably also a category of its own, where instead of finding your prey on AliExpress, you find it on eBay, or in the alleyway. And the save-it-from-the-landfill repair and renovation hacks are close relatives.

The doing-too-much-with-too-little hacks are maybe my personal favorite. I just love to see when someone manages to get DOOM running in Linux on a computer made with only 8-pin microcontrollers. Because of the nature of the game, these often also include a handful of abusing-a-component-to-do-something-it’s-not-meant-to-do hacks. Heck, we even had a challenge for exactly those kinds of hacks.

Then there are fine-art-hacks, where the aesthetic outcome is as important as the technical, or games-hacks where fun is the end result.

What other broad categories of hacks are we missing? And which are your favorite?

AI And Cybersecurity: A Glass Half-Empty/Half-Full Proposition, Where The Glass Is Holding Nitroglycerin

from the yikes dept

First, some of the good news: certain AI models—currently Anthropic’s Mythos, but surely others are well on their way if they haven’t already arrived—turn out to be really good at finding cybersecurity vulnerabilities. As Anthropic itself reported:

During our testing, we found that Mythos Preview is capable of identifying and then exploiting zero-day vulnerabilities in every major operating system and every major web browser when directed by a user to do so. The vulnerabilities it finds are often subtle or difficult to detect. Many of them are ten or twenty years old, with the oldest we have found so far being a now-patched 27-year-old bug in OpenBSD—an operating system known primarily for its security.

That’s quite the tool, if it can help find vulnerabilities so that they can be patched.

But it’s also quite the tool to help find vulnerabilities so that they can be exploited. Like so many tools, including technological tools, whether they are good or bad depends entirely on how they are used. A hammer is a really helpful tool for building things, but it also smashes windows. And with this news, AI now has the capability for some really destructive uses.

To try to prevent them, Anthropic is working with some of the largest tech companies in the world to let them use a preview of its model on their own software to help QA them and proactively patch vulnerabilities. As Casey Newton reports:

Anthropic announced Mythos alongside Project Glasswing, an initiative with more than 40 of the world’s biggest tech companies that will see Anthropic grant early access to the model to find and patch vulnerabilities across many of the world’s most important systems. Launch partners in the coalition include Apple, Google, Microsoft, Cisco and Broadcom.

They’ll be tasked with scanning and patching their own systems along with the critical open-source systems that modern digital infrastructure depends on. Anthropic is giving participants $100 million in usage credits for Mythos, and donating another $4 million to open-source security efforts.

This sounds like a great program. It also should be noted that the Mythos model is not consumer-grade AI; it takes expensive, dedicated infrastructure to run, which means that, at least for the moment, there’s not an imminent danger of it being misused. But trouble is nevertheless brewing, and someday it will be here, which raises certain questions, like:

(A) What about other AI models, which will inevitably be similarly powerful? What if they are produced by less ethical companies, with no compunction about rogue actors using their systems in destructive ways that Project Glasswing won’t have intercepted?

(B) And what about every single legacy technology system in use, which Project Glasswing is unlikely to be able to retroactively fix? Large, well-resourced companies may be able to weather the oncoming storm, but what about your local dentist’s office? Or a hospital? Municipal IT systems? Networked technology is everywhere, and these smaller businesses and institutions are likely to have older, unpatched technology as well as fewer resources to update and secure it, or to deal with the consequences of a hack, which can be devastating for the business or the people they serve.

On the other hand, there does seem to be one other bit of good news with this revelation: governments, including that of the United States, have often engaged in the dubious practice of hoarding zero-days – collecting information about vulnerabilities that they then kept secret so that they could exploit them themselves by using them on an adversary. For those unfamiliar, “zero-day” refers to a vulnerability that has yet to be disclosed, which is why it’s on “day zero,” or before the first day of it being a known vulnerability that could now be fixed.

Mythos’s capabilities would seem to obviate this strategy, because suddenly the stash of unknown vulnerabilities isn’t really going to be such a secret, since anyone using the model will be able to find them. Mythos’s existence changes the balance of interests: the stronger national security play by the government would be to disclose any discovered vulnerability to the vendor as soon as possible so that it can be patched and our nation’s systems better secured. Arguably that was always the better national security play, but now there’s definitely no upside to trying to keep them secret, because it must now be presumed that adversaries will be able to find and exploit them. They’ll have the tools.

With these AI models we’re going to need to presume that everyone is going to have the tools to know about every vulnerability. Up to now there has been at least the illusion of some security, because vulnerabilities couldn’t be exploited if no one knew about them, and finding vulnerabilities is hard. But now that it will be easy, the risk to the nation’s cybersecurity is greater than we have ever before contended with.

It is also not really a great harbinger that we know about Mythos because… a copy of the software got leaked. It’s just the software that was leaked and not the models it uses to tune its “reasoning,” which means that anyone trying to now build their own Mythos is still missing an important piece if they want to mimic its full capabilities, but they would have a lot. Which is probably why Anthropic has been sending DMCA takedown notices to have the leaked software removed from the Internet.

But doing so raises a related issue: the role of copyright law when it comes to “vibe coding,” or having an AI system write the software rather than a programmer, just by instructing it on what to do. It’s especially important in light of the cybersecurity concerns always raised by software (including vibe-coded software, as we’re having to trust that what’s produced does not have vulnerabilities). Copyright requires a human author, which raises the question: can software written by an AI be copyrightable? The answer would appear to be no, unless there was a great deal of creative effort on the part of a human being to instruct the AI or modify the output. But as Ed Lee chronicled, per Anthropic itself, even its own software (“pretty much 100%”) is being written by AI. And if that’s the case, then Anthropic has no business sending takedown notices for its software, because DMCA takedown notices are only for demanding the removal of copyrighted works, which, it would appear, Anthropic’s own code does not qualify for.

But maybe it’s better if software stops being subject to copyright. “Vibe coding” is becoming increasingly efficient, to the point that there is likely no need for copyright to incentivize its authorship. Instead, what public policy really needs to emphasize is that whatever software is produced is secure software. But in many ways copyright obstructs that goal, like through its lengthy terms, which mean that while a copyright holder might not still be maintaining its older software, no one else can maintain and patch it either without potentially infringing the software’s copyright. Or through its privileged secrecy (unusually for copyright, when it comes to software you don’t actually have to disclose all the actual code to register a copyright in it!) and other powers to lock out security research efforts, like through Section 1201 of the DMCA, when such efforts aren’t specifically supported by the developer – assuming the developer supports any security testing at all, as right now there aren’t necessarily incentives to make them care about it. Instead, public policy has given them the ability, like with copyright, to escape oversight of the security of their software products, even as those products end up embedded in more and more of our lives.

It’s time to change that focus and get copyright out of the way of making software security our top policy priority.

And fast.

Filed Under: ai, claude, claude code, copyright, cybersecurity, mythos, project glasswing, vulnerabilities, zero days

Companies: anthropic

‘Crimson Desert’ Is a Cat Dad Simulator

Step into the shoes of the strongest, goodest boy in a game that is beautiful, baffling, and impossible to put down.

‘Moon joy!’ Artemis 2’s crew sets a distance record, documents lunar far side and heads back toward Earth

NASA’s Artemis 2 crew captured an iconic “Earthset” picture, showing Earth dipping beneath the lunar horizon. (NASA Photo)

Four astronauts today became the first humans to make a trip around the moon since the Apollo era — and added new pages to history books for the Artemis era.

The Artemis 2 crew reached a maximum distance of 252,756 miles from Earth, surpassing the distance record for human travel that was set during the Apollo 13 mission in 1970 by more than 4,000 miles.

NASA astronaut Christina Koch marked the occasion in a radio transmission from NASA’s Orion space capsule, named Integrity. “We most importantly choose this moment to challenge this generation and the next to make sure this record is not long-lived,” she said.

Koch made history as the first woman to travel beyond Earth orbit. One of her crewmates, NASA pilot Victor Glover, is the first Black astronaut to take a moon trip, and Canadian astronaut Jeremy Hansen is the first non-U.S. astronaut to do so.

The main purpose of the 10-day Artemis 2 mission is to serve as an initial crewed test flight for the Orion spacecraft, which traced a similar round-the-moon course during the uncrewed Artemis 1 mission in 2022. A successful Artemis 2 mission will prepare the way for a lunar lander test flight in Earth orbit as early as next year, potentially followed in 2028 by the first crewed moon landing since Apollo.

Seattle-area tech workers have played a role in getting Orion off the ground — and bringing it back home. L3Harris’ Aerojet Rocketdyne facility in Redmond worked on the spacecraft’s main engine and some of its thrusters, while Karman Space Systems’ Mukilteo facility provided mechanisms for Orion’s parachute deployment system and emergency hatch release system.

Artemis 2’s flight plan took advantage of orbital mechanics and a precisely timed firing of Orion’s main engine to send the astronauts on a free-return trip around the moon and back. The moon’s gravitational pull caused Orion to make a crucial U-turn around the far side, at a minimum distance of 4,067 miles from the lunar surface, and then slingshot back toward Earth.

A scientific swing around the moon

Scientists enlisted the astronauts to make up-close geological observations of the lunar surface during the flyby. Because the Artemis astronauts had a wider perspective on the moon than Apollo astronauts did five decades ago, they could see parts of the far side that had gone unseen directly by human eyes (although they’ve been well-documented by robotic probes).

NASA’s mission commander, Reid Wiseman, found it difficult to break away from moongazing to discuss his observations over a radio link with Kelsey Young, Artemis 2’s lunar science lead. “You’re pulling me away from the moon right now, so let’s go,” he told Young.

Back at Mission Control in Houston, Young took it all in stride. “I have to say that ‘moon joy’ is the new term that’s already become our team’s new motto,” she told Wiseman.

The astronauts focused on features of scientific interest — including Orientale Basin and Hertzsprung Basin, two multi-ring impact craters that document different geological eras on the far side. They noted subtle shades of green and brown on the mostly gray moonscape. They also took a close look at the south polar region, which is the target for the Artemis program’s first crewed landing.

“The view of the south pole is quite amazing,” Glover said.

Koch marveled over the bright young craters that stood out on the lunar surface. “What it really looks like is a lampshade with tiny pinpricks, and the light is shining through,” she said. “They’re so bright compared to the rest of the moon.”

Emotional moments

Hansen told Mission Control that the astronauts were proposing new names for two craters they spotted on the surface below. “Integrity” was chosen as the name for one of the craters, in honor of the crew’s spacecraft. The other crater was dubbed “Carroll,” in honor of Wiseman’s wife, who died in 2020. After Hansen spelled out Carroll’s name, the astronauts came together to give Wiseman a comforting hug.

That wasn’t the flyby’s only emotional moment. Koch said she felt an “overwhelming sense of being moved by looking at the moon” and comparing it with Earth. Her description of the feeling was similar to astronauts’ accounts of a phenomenon known as the Overview Effect.

“Everything we need, the Earth provides,” she said, “and that is in itself something of a miracle, and one that you can’t truly know until you’ve had the perspective of the other.”

Just before Orion was due to pass behind the moon for a temporary blackout, Glover took the opportunity to refer to the Christian commandment to love your neighbor as yourself. “As we prepare to go out of radio communication, we’re still able to feel your love from Earth. And to all of you down there on Earth, and around Earth, we love you from the moon,” he said. “We will see you on the other side.”

About 40 minutes later, Orion emerged from the other side of the moon, and communication was restored. “It is so great to hear from Earth again,” Koch told Mission Control.

“We will explore, we will build ships, we will visit again, we will construct science outposts, we will drive rovers, we will do radio astronomy, we will found companies, we will bolster industry, we will inspire,” Koch said. “But ultimately, we will always choose Earth.”

Earthset, Earthrise and an eclipse

The behind-the-moon turnaround provided the crew with opportunities to capture images of Earthset and Earthrise — and marked the beginning of Orion’s homeward journey. Back at Mission Control, the support team turned their double-sided mission patches around to change the focus of the patch’s design from the moon to Earth.

But the workday wasn’t yet finished: For the grand finale, the astronauts donned protective glasses and watched as the sun passed behind the moon to create an unearthly kind of solar eclipse. As the sun sank beneath the lunar horizon, they captured pictures of the solar corona.

Glover reported that the corona created a bright halo “almost around the entire moon,” with the lunar surface illuminated ever so faintly by Earth’s reflected light. “It is quite an impressive sight,” he said. “Earthshine is very distinct, and it creates quite an impressive visual illusion. Wow, it’s amazing.”

The sun’s re-emergence from behind the moon marked the end of today’s seven-hour lunar observation session. “I can’t say enough how much science we’ve already learned, and how much inspiration you’ve provided to our entire team, the lunar science community and the entire world with what you were able to bring today,” Young told the crew. “You really brought the moon closer today, and we can’t thank you enough.”

High-resolution images and reports about the observations are due to be downlinked and distributed in the days ahead. Planetary scientists will be poring over the data long after Orion and its crew make their scheduled splashdown in the Pacific Ocean on Friday.

After the flyby, President Donald Trump congratulated the crew over an audio link and called them “modern-day pioneers.”

“Today you’ve made history and made all America really proud,” he said. “No astronaut has been to the moon since the days of the Apollo program. … At long last, America is back.”

Tech

Amazon sued by YouTubers for allegedly scraping their content to train AI video tool

A trio of YouTube producers filed a class action lawsuit against Amazon alleging the tech giant illegally used content from the video platform to train and improve its Nova Reel generative AI model.

The suit, filed Friday in U.S. District Court for the Western District of Washington in Seattle, describes how Amazon allegedly used datasets earmarked only for academic use, circumvented YouTube’s copyright protection measures, and scraped video content. KING5 first reported on the suit.

“In a world where Defendant and others can circumvent technological protections to exploit copyrighted works without authorization with impunity, creators will be less likely to make their creations available on YouTube and other similar platforms, for fear of losing all control of them,” the plaintiffs state in their suit. “The world will be poorer for it.”

Plaintiffs are seeking damages, restitution and injunctive relief, claiming Amazon violated the Digital Millennium Copyright Act.

An Amazon spokesperson declined to comment on the matter, citing ongoing litigation.

Amazon released its Nova foundation models in 2024 via AWS Bedrock. The Nova Reel model can take text prompts and images and turn them into short videos, with features including watermarking.

According to the suit, Amazon deployed automated download tools paired with virtual machines that cycled through IP addresses to avoid being blocked, enabling the unauthorized extraction of data from millions of videos.

The named plaintiffs include:

  • Ted Entertainment, Inc. (TEI), a California-based media company owned by Ethan and Hila Klein with more than 5,800 videos on YouTube with a combined total of more than 4 billion views. TEI channels include h3h3 Productions and H3 Podcast Highlights.
  • Matt Fisher, a California-based YouTuber who runs the MrShortGame Golf channel that provides instructional videos and has more than 500,000 subscribers.
  • Golfholics, a golf-focused YouTube channel with more than 130,000 subscribers and millions of views.

The suit argues the plaintiffs have no way to recover intellectual property already used to train Amazon’s models. “Once AI ingests content, that content is stored in its neural network and not capable of deletion or retraction,” it states.

Dozens of similar cases are working their way through courts nationwide. Among them: the New York Times’ lawsuit against OpenAI and Microsoft, a class action by authors against Microsoft, and a suit from musicians with YouTube content against Google.

Separate lawsuits against Anthropic and music-generation startup Suno over the alleged unauthorized use of books and music in AI training have since settled. A case brought by authors against Meta was dismissed.

Tech

How Quiet Failures Are Redefining AI Reliability

In late-stage testing of a distributed AI platform, engineers sometimes encounter a perplexing situation: every monitoring dashboard reads “healthy,” yet users report that the system’s decisions are slowly becoming wrong.

Engineers are trained to recognize failure in familiar ways: a service crashes, a sensor stops responding, a constraint violation triggers a shutdown. Something breaks, and the system tells you. But a growing class of software failures looks very different. The system keeps running, logs appear normal, and monitoring dashboards stay green. Yet the system’s behavior quietly drifts away from what it was designed to do.

This pattern is becoming more common as autonomy spreads across software systems. Quiet failure is emerging as one of the defining engineering challenges of autonomous systems because correctness now depends on coordination, timing, and feedback across entire systems.

When Systems Fail Without Breaking

Consider a hypothetical enterprise AI assistant designed to summarize regulatory updates for financial analysts. The system retrieves documents from internal repositories, synthesizes them using a language model, and distributes summaries across internal channels.

Technically, everything works. The system retrieves valid documents, generates coherent summaries, and delivers them without issue.

But over time, something slips. Maybe an updated document repository isn’t added to the retrieval pipeline. The assistant keeps producing summaries that are coherent and internally consistent, but they’re increasingly based on obsolete information. Nothing crashes, no alerts fire, every component behaves as designed. The problem is that the overall result is wrong.
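
To make the failure mode concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the document IDs, the `last_updated` field, and the 30-day threshold are illustrative assumptions, not part of any real system. The point is simply that a freshness check at retrieval time would surface the stale-data drift that component-level monitoring misses.

```python
from datetime import datetime, timedelta, timezone

# Illustrative threshold: documents older than this are treated as stale.
MAX_AGE = timedelta(days=30)

def check_freshness(documents, now=None):
    """Partition retrieved documents into (fresh, stale) by their age."""
    now = now or datetime.now(timezone.utc)
    fresh, stale = [], []
    for doc in documents:
        age = now - doc["last_updated"]
        (fresh if age <= MAX_AGE else stale).append(doc)
    return fresh, stale

docs = [
    {"id": "reg-101", "last_updated": datetime(2025, 6, 1, tzinfo=timezone.utc)},
    {"id": "reg-102", "last_updated": datetime(2023, 1, 15, tzinfo=timezone.utc)},
]
fresh, stale = check_freshness(docs, now=datetime(2025, 6, 20, tzinfo=timezone.utc))
# reg-101 is within the age limit; reg-102 would be flagged
# instead of being silently summarized.
```

A check like this is trivial in isolation; the quiet failure arises precisely because no component in the described pipeline is responsible for asking the question.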

From the outside, the system looks operational. From the perspective of the organization relying on it, the system is quietly failing.

The Limits of Traditional Observability

One reason quiet failures are difficult to detect is that traditional systems measure the wrong signals. Operational dashboards track uptime, latency, and error rates, the core elements of modern observability. These metrics are well-suited for transactional applications where requests are processed independently, and correctness can often be verified immediately.

Autonomous systems behave differently. Many AI-driven systems operate through continuous reasoning loops, where each decision influences subsequent actions. Correctness emerges not from a single computation but from sequences of interactions across components and over time. A retrieval system may return technically valid but contextually inappropriate information. A planning agent may generate steps that are locally reasonable but globally unsafe. A distributed decision system may execute correct actions in the wrong order.

None of these conditions necessarily produces errors. From the perspective of conventional observability, the system appears healthy. From the perspective of its intended purpose, it may already be failing.

Why Autonomy Changes Failure

The deeper issue is architectural. Traditional software systems were built around discrete operations: a request arrives, the system processes it, and the result is returned. Control is episodic, initiated from outside by a user, a scheduler, or some other trigger.

Autonomous systems change that structure. Instead of responding to individual requests, they observe, reason, and act continuously. AI agents maintain context across interactions. Infrastructure systems adjust resources in real time. Automated workflows trigger additional actions without human input.

In these systems, correctness depends less on whether any single component works, and more on coordination across time.

Distributed-systems engineers have long wrestled with issues of coordination. But this is coordination of a new kind. It’s no longer about things like keeping data consistent across services. It’s about ensuring that a stream of decisions—made by models, reasoning engines, planning algorithms, and tools, all operating with partial context—adds up to the right outcome.

A modern AI system may evaluate thousands of signals, generate candidate actions, and execute them across a distributed infrastructure. Each action changes the environment in which the next decision is made. Under these conditions, small mistakes can compound. A step that is locally reasonable can still push the system further off course.
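
The compounding effect can be shown with a toy calculation, using invented numbers: suppose every step applies a small bias that individually passes a per-step tolerance check. No local check ever fails, yet the cumulative deviation grows linearly.

```python
# Toy model of compounding drift: every step passes its local check,
# but nothing measures the end-to-end deviation.
PER_STEP_TOLERANCE = 0.02  # illustrative threshold for a single action

def run(steps, bias=0.01):
    state = 0.0  # deviation from the intended trajectory
    for _ in range(steps):
        assert bias <= PER_STEP_TOLERANCE  # each step looks locally reasonable
        state += bias
    return state

# After 100 steps the system is about 1.0 units off course, fifty times
# the per-step tolerance, without a single local check failing.
```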

Engineers are beginning to confront what might be called behavioral reliability: whether an autonomous system’s actions remain aligned with its intended purpose over time.

The Missing Layer: Behavioral Control

When organizations encounter quiet failures, the initial instinct is to improve monitoring: deeper logs, better tracing, more analytics. Observability is essential, but it only shows that the behavior has already diverged—it doesn’t correct it.

Quiet failures require something different: the ability to shape system behavior while it is still unfolding. In other words, autonomous systems increasingly need control architectures, not just monitoring.

Engineers in industrial domains have long relied on supervisory control systems. These are software layers that continuously evaluate a system’s status and intervene when behavior drifts outside safe bounds. Aircraft flight-control systems, power-grid operations, and large manufacturing plants all rely on such supervisory loops. Software systems historically avoided them because most applications didn’t need them. Autonomous systems increasingly do.

Behavioral monitoring in AI systems focuses on whether actions remain aligned with intended purpose, not just whether components are functioning. Instead of relying only on metrics such as latency or error rates, engineers look for signs of behavior drift: shifts in outputs, inconsistent handling of similar inputs, or changes in how multi-step tasks are carried out. An AI assistant that begins citing outdated sources, or an automated system that takes corrective actions more often than expected, may signal that the system is no longer using the right information to make decisions. In practice, this means tracking outcomes and patterns of behavior over time.
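
As a sketch of what such tracking might look like, one simple approach is to compare the recent rate of a behavioral signal, such as corrective actions, against a baseline over a sliding window. The signal, baseline, and thresholds here are invented for illustration.

```python
from collections import deque

class DriftMonitor:
    """Minimal behavioral-drift check: compare the recent rate of a
    behavior signal (e.g., corrective actions) against a baseline
    and flag when it leaves a tolerance band."""

    def __init__(self, baseline_rate, tolerance=0.15, window=100):
        self.baseline = baseline_rate
        self.tolerance = tolerance
        self.events = deque(maxlen=window)

    def record(self, corrective: bool) -> bool:
        """Record one decision; return True if behavior has drifted."""
        self.events.append(corrective)
        if len(self.events) < self.events.maxlen:
            return False  # not enough history to judge yet
        rate = sum(self.events) / len(self.events)
        return abs(rate - self.baseline) > self.tolerance
```

A real deployment would track several signals at once and tune the window and tolerance empirically; the point is that the alert fires on behavior over time, not on component health.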

Supervisory control builds on these signals by intervening while the system is running. A supervisory layer checks whether ongoing actions remain within acceptable bounds and can respond by delaying or blocking actions, limiting the system to safer operating modes, or routing decisions for review. In more advanced setups, it can adjust behavior in real time—for example, by restricting data access, tightening constraints on outputs, or requiring extra confirmation for high-impact actions.
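
A minimal version of such a supervisory gate might look like the following; the impact score and thresholds are hypothetical stand-ins for whatever risk measure a real system would use.

```python
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"    # within bounds: proceed
    REVIEW = "review"  # high-impact: route to a human
    BLOCK = "block"    # outside safe bounds: stop it

class Supervisor:
    """Sketch of a supervisory layer that gates proposed actions
    by an estimated impact score before they execute."""

    def __init__(self, max_impact=100, review_impact=50):
        self.max_impact = max_impact        # hard safety bound
        self.review_impact = review_impact  # human-review threshold

    def evaluate(self, action) -> Verdict:
        impact = action["impact"]
        if impact > self.max_impact:
            return Verdict.BLOCK
        if impact > self.review_impact:
            return Verdict.REVIEW
        return Verdict.ALLOW
```

The same pattern extends to the richer interventions described above: a BLOCK verdict can drop the system into a safer operating mode, and a REVIEW verdict can route the decision to a human queue.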

Together, these approaches turn reliability into an active process. Systems don’t just run, they are continuously checked and steered. Quiet failures may still occur, but they can be detected earlier and corrected while the system is operating.

A Shift in Engineering Thinking

Preventing quiet failures requires a shift in how engineers think about reliability: from ensuring components work correctly to ensuring system behavior stays aligned over time. Rather than assuming that correct behavior will emerge automatically from component design, engineers must increasingly treat behavior as something that needs active supervision.

As AI systems become more autonomous, this shift will likely spread across many domains of computing, including cloud infrastructure, robotics, and large-scale decision systems. The hardest engineering challenge may no longer be building systems that work, but ensuring that they continue to do the right thing over time.

Copyright © 2025