
3 Best Robot Lawn Mowers (2026), Tested and Reviewed

Mowers I Am Currently Testing

We are just into a new cutting season here, so I haven’t tested these new robot mowers enough to make a full recommendation, but here are my impressions so far.


Photograph: Simon Hill

Mammotion Luba 3 AWD for $2,399: If this robot mower continues to perform as well as it has in its first week, it will earn a spot above. It is pricey, but the Mammotion Luba 3 AWD can handle relatively rough terrain and steep slopes, and it combines three technologies (GPS, LiDAR, and AI vision) to ensure it can cut larger lawns even where there might be tree cover or other awkward spots. It boasts quiet operation, efficient pathfinding, and leaves a lovely finish. The obstacle avoidance is solid, and it does a decent job around the edges. I also appreciate the manual mowing option, enabling you to cut any problem areas with remote app control.

Husqvarna Aspire R6V for £999: I was excited to test this new robot mower from Husqvarna because it is more affordable than many in its range, including our top pick, and it doesn’t require a separate aerial for the satellite connection. It uses a combination of GPS and AI vision with a camera on the front. It was easy to set up and map the lawn in the app, but you will need a good Wi-Fi signal across your yard for it to work effectively. So far, I’ve been a little disappointed in the sensitive obstacle avoidance, as it has been leaving large uncut strips around the edges of my lawn. But I’d like to tinker and test for a bit longer before I deliver a final verdict. This model also seems to be available only in the UK right now. I’m waiting to hear back about a US equivalent.


In my queue, after these two mowers, I have the Mova LiDAX Ultra 1000 and the Anthbot M9.

Other Robot Lawn Mowers We Like

Eufy Robot Lawn Mower E15 for $2,300: This is another wire-free mower, but instead of relying on satellite navigation, it uses a camera system to automatically map lawns and avoid obstacles. It can cover up to 0.2 acres (8,700 square feet), cut from 1 to 3 inches, and handle up to 18-degree slopes. It is also fairly quiet and has GPS tracking, but you must have Wi-Fi coverage in your backyard, or you’ll need a 4G data subscription. I found the setup lengthy due to a firmware download, but the mapping and the first cut were decent. The E15 can only run during the day, and it doesn’t cope very well with inclines. I also found that it frequently failed to cut the edges of the lawn and that it doesn’t perform well if the grass is damp. I wouldn’t recommend it at full price, but it seems to get frequent deep discounts.

Avoid These Mowers

EcoFlow Blade


Photograph: Simon Hill

EcoFlow Blade for £1,849: While it was easy to set up and cut my lawn nicely without the need for any boundary wire, the EcoFlow Blade (6/10, WIRED Review) sometimes struggled with GPS navigation and ended up stuck in a flower bed. It also left an untouched strip around the edge of my lawn. The obstacle avoidance was solid, and it can be automated in the app, though it occasionally failed to start a scheduled cut for me. EcoFlow seems to have discontinued this model, though it is still on sale in Europe. Probably best to avoid.

Yardcare E400


Photograph: Simon Hill

Yardcare E400 for $370: Curious about the budget end of the robot mower market, I agreed to try the Yardcare E400, but this mower was an unmitigated disaster from start to finish. It’s a boundary wire model, so you must run wire around the area you want mowed. Yardcare suggests it can cover up to 4,300 square feet and cut grass between 0.8 and 2.4 inches. The problem is that it gets stuck frequently and struggles to even get on and off its charging station reliably. After trying multiple fixes to no avail and going through customer support, I had to conclude that this model has a serious design flaw.

How Do Robot Lawn Mowers Work?


Perhaps counterintuitively, the setup instructions for your robot lawn mower will likely tell you to start by cutting the grass. Robot mowers mostly can’t deal with long grass. Unlike traditional mowers, these robots don’t collect grass cuttings; they mulch instead, and they are designed to cut frequently, keeping your lawn short and simply leaving the cuttings on the ground, which can also improve lawn health. Most robot mowers are designed to run two or three times a week during the growing season (from late spring to early fall).

They have rechargeable batteries onboard and can last from half an hour to several hours on a full charge. They return to the charging base and recharge automatically when their power runs low. Most mowers have simple controls, a small display, and an emergency stop button. You can generally start and stop mowing, set schedules, and create or edit mapped areas using the onboard controls or the companion mobile app, very much like a robot vacuum.

What Features Should I Look for in a Robot Mower?

There are many robot mower features to consider, and the best choice for you depends on what your yard is like.


Lawn Size and Shape

Robot lawn mowers are generally rated to cover a specific square footage, with wider coverage requiring models with larger batteries. Alongside yard size, you should consider the shape and topography of your lawn, as most robot mowers will struggle with steep inclines. While you can often map out separate areas so your robot mower can mow front and back lawns, for example, it will generally need you to lift and carry it between those areas. If you have an uneven garden or steep slopes, you should look for a four-wheel-drive (4WD) or all-wheel-drive (AWD) mower and check the manufacturer’s rating for inclines.

Navigation Type

There are a few types of navigation that robot mowers employ. We’ve tested five different approaches, though some mowers combine multiple technologies for better performance:

  • Satellite: Often employing something called Real-Time Kinematic (RTK) Global Positioning System (GPS), these mowers need a satellite signal to navigate and will have a receiver that must be placed in the open with a clear line of sight to the sky. Satellite navigation mowers are not suitable for areas with tall trees or buildings.
  • Light Detection and Ranging (LiDAR): This technology sends out rapid laser pulses to 3D map the terrain (it is also used by self-driving cars). It enables mowers to cut grass under thick tree canopies or near tall buildings where GPS signals usually fail.
  • Cameras: Cameras and onboard AI are used for obstacle detection and avoidance. AI vision can automatically map areas and cut the grass while avoiding obstacles it encounters, much like how most robot vacuums navigate a home to clean the floors.
  • Wire boundary: These mowers require you to install a perimeter wire as a boundary around your lawn that marks out the border the mower should not cross. It’s a messy job that can be tricky.
  • Remote control: You mow your lawn from the comfort of your home using a remote controller or an app on your phone. Some only work via remote control, while others can also cut automatically.

Power and Charging

Robot mowers generally come with large charging docks, and you’ll need to earmark a suitable spot for yours. They usually have extensive weatherproof cabling, but you will have to find a route to an outdoor socket.

Wi-Fi and Bluetooth

To connect to your mower and schedule a mow, update the firmware, or remote control it where supported, you need a decent Wi-Fi signal or a Bluetooth connection. It’s best to set up your mower’s charging station within range of your Wi-Fi network. Some mowers also need a strong Wi-Fi signal to operate effectively, so you might consider adding an outdoor mesh router. If you want to connect your phone via Bluetooth, you will have to get quite close.

How Noisy Are Robot Mowers?


Most robot mowers are far quieter than their traditional counterparts, and you can expect them to operate at around 55 decibels, though they may go as high as 75 decibels. We only tested battery-powered mowers, but expect gas mowers to be louder. While the operation is often quiet, I did find that several mowers made annoying beeping sounds when backing up or had a loud recorded voice during setup or upon receiving a command.
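Decibels are logarithmic, so the gap between the typical and maximum figures above is bigger than it looks. A quick sketch of the conversion (using the article's 55 dB and 75 dB figures; the formula is the standard power-ratio definition of the decibel):

```python
def power_ratio(db_quiet: float, db_loud: float) -> float:
    """Convert a decibel difference into a sound-power ratio: 10^(dB/10)."""
    return 10 ** ((db_loud - db_quiet) / 10)

# A 75 dB mower emits 100x the sound power of a 55 dB one,
# though perceived loudness roughly doubles per 10 dB increase.
print(power_ratio(55, 75))  # → 100.0
```

In practice that means the loudest robot mowers sound roughly four times as loud to the ear as the quietest, even though all of them remain well below a typical gas mower.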

Do Robot Mowers Work in Any Weather?

Robot mowers and their charging stations usually have an IP rating and can cope with rain, but you should pack up and bring your mower indoors during the winter months. Many robot mowers have some kind of rain sensor and will pause mowing when it gets too wet. Some mowers may need to be paused manually. The wheels can churn up your lawn and get caked in mud if mowers continue to labor in the rain, especially with larger and heavier models.

How Well Do Robot Mowers Cut?


Mowers of different sizes will have varying cutting widths, denoting the width of the strip they can cut on each pass. Most also have floating cutting decks that enable you to choose the length of grass you want (typically 1 to 3 inches). Many robot mowers seem to struggle with cutting around the edges of a lawn, especially if there’s a wall or fence that prevents them from getting close enough.

It’s common to find an uncut verge around the edge of your lawn, so you might need to occasionally get the string trimmer out. Every robot mower I’ve tested has also struggled to cut the area around the charging station, so I recommend placing the unit on a deck or paving if possible.

Can I Install a Robot Mower Myself?

Yes, most robot mowers can be installed by anyone, but you might want to set aside an afternoon to work out any snags. Finding the best spot for the receiver for a satellite mower can be tricky. The mapping process can also take a while; usually, it prompts you to remote-control your mower around the border you want to set. After the first mow, you should review its performance and make tweaks to ensure it’s covering the entire area you want to cut.


How I Test Robot Lawn Mowers

I test each robot lawn mower for at least a month, on at least two different lawn areas, assessing the ease of setup, the mapping process, automatic scheduling (where available), navigation, obstacle avoidance, and the quality of the final cut, looking for length, uniformity, and any missed patches. Where applicable, I try extra features, tweak settings in the app, and check how the mower handles different weather conditions. I also keep an eye on battery performance and charging time to ensure it aligns with the manufacturer’s claims.


Sam Altman apologises after OpenAI chose not to report ChatGPT user who carried out Tumbler Ridge school shooting


TL;DR

Sam Altman apologised to the community of Tumbler Ridge, British Columbia, for OpenAI’s failure to alert police after its own systems flagged a ChatGPT user who went on to kill eight people and injure 27 in Canada’s deadliest school shooting since 1989. Approximately a dozen OpenAI employees had reviewed the flagged account in June 2025 and some recommended reporting to law enforcement, but leadership overruled them, applying a “higher threshold” that the conversations did not meet. OpenAI has since lowered its reporting threshold and established contact with the RCMP, but all changes are voluntary, and Canada has no law requiring AI companies to report identified threats.

Sam Altman published an open letter to the community of Tumbler Ridge, British Columbia, on Thursday, apologising for OpenAI’s failure to alert law enforcement after its own systems flagged a user who went on to carry out the deadliest school shooting in Canada in nearly four decades. “I am deeply sorry that we did not alert law enforcement to the account that was banned in June,” Altman wrote. “While I know words can never be enough, I believe an apology is necessary to recognize the harm and irreversible loss your community has suffered.” The letter, dated April 23 and released publicly a day later, arrived 72 days after Jesse Van Rootselaar, 18, killed eight people and injured 27 others in a shooting that began at a family home and ended at Tumbler Ridge Secondary School on February 10. OpenAI’s automated abuse detection had flagged Van Rootselaar’s ChatGPT account eight months earlier, in June 2025. Approximately a dozen employees reviewed the flagged conversations, which described scenarios involving gun violence, and some recommended contacting Canadian police. Company leadership decided against it. The account was banned. No one was told. Van Rootselaar created a second account and was not detected until after the RCMP released a name.


The decision

The Wall Street Journal first reported the internal debate at OpenAI. The employees who reviewed Van Rootselaar’s flagged account saw what they described as signs of “an imminent risk of serious harm to others.” They escalated their recommendation to report the conversations to law enforcement. Leadership applied what an OpenAI spokesperson later called a “higher threshold” for credible and imminent threat reporting and concluded the activity did not meet it. The account was terminated. The conversations were preserved internally. The police were not contacted. Eight months later, Van Rootselaar killed her mother, Jennifer Strang, 39, and her 11-year-old half-brother, Emmett Jacobs, at the family home, then drove to the secondary school and opened fire with a modified rifle, killing education assistant Shannda Aviugana-Durand, 39, and five students aged 12 and 13: Zoey Benoit, Ticaria Lampert, Kylie Smith, Abel Mwansa, and Ezekiel Schofield. Twenty-seven people were injured. Maya Gebala, 12, was shot three times in the head and neck while shielding classmates and sustained what doctors described as a “catastrophic, traumatic brain injury” with permanent cognitive and physical disability. Van Rootselaar died by suicide at the school.

The civil lawsuit filed in BC Supreme Court in March by Cia Edmonds on behalf of her daughter Maya alleges that ChatGPT provided “information, guidance, and assistance to plan a mass casualty event, including the types of weapons to be used, and describing precedents from other mass casualty events or historical acts of violence.” The specific content of the conversations has not been made public. BC Premier David Eby said he deliberately did not ask what was in the chat logs to avoid compromising the RCMP investigation. What is known is that OpenAI’s own system identified the conversations as potentially dangerous, that OpenAI’s own employees recommended action, and that OpenAI’s leadership chose not to act. The apology is not for a failure of detection. The detection worked. The apology is for what happened after detection worked.

The letter


Altman’s letter was addressed to the Tumbler Ridge community and released after BC Premier Eby disclosed that Altman had agreed to apologise during earlier discussions about OpenAI’s handling of the case. “I have been thinking of you often over the past few months,” Altman wrote. “I cannot imagine anything worse in the world than losing a child.” He added: “I reaffirm the commitment I made to the mayor and premier to find ways to prevent tragedies like this in the future. Going forward, our focus will continue to be working with all levels of government to help ensure something like this never happens again.” The letter contained no specific policy commitments, no description of what OpenAI would change, and no acknowledgement that employees had recommended reporting the account and been overruled. Eby called the apology “necessary” but “grossly insufficient for the devastation done to the families of Tumbler Ridge.” Tumbler Ridge Mayor Darryl Krakowka acknowledged receipt and asked for “care and consideration” while the community navigates the grieving process.

The policy commitments came separately, in a letter from OpenAI vice-president of global policy Ann O’Leary to Canadian federal ministers. O’Leary wrote that OpenAI had lowered its reporting threshold so that a user no longer needs to discuss “the target, means, and timing” of planned violence for a conversation to be flagged for law enforcement referral. The company has enlisted mental health and behavioural experts to help assess flagged cases and established a direct point of contact with the RCMP. O’Leary stated that under the updated policies, Van Rootselaar’s interactions “would have been referred to police” if discovered today. The changes are voluntary. They are not legally binding. They can be reversed at any time. Canada has no law requiring AI companies to report threats identified through their platforms, and the federal government has not yet introduced one.

The pattern

Tumbler Ridge is not an isolated case. Florida has opened the first criminal investigation into an AI company after ChatGPT allegedly advised the gunman in a mass shooting at Florida State University, including guidance on how to make a firearm operational moments before the attack that killed two people and injured five. NPR reported on April 23 that “OpenAI is under scrutiny after two mass shooters used ChatGPT to plan attacks.” Seven families have separately sued OpenAI over ChatGPT acting as what their attorneys describe as a “suicide coach,” with documented deaths in Texas, Georgia, Florida, and Oregon. In another case, OpenAI is being sued for allegedly ignoring three warnings about a dangerous user, including its own internal mass-casualty flag. The number of reported AI safety incidents rose from 149 in 2023 to 233 in 2024, a 56% increase, and the 2025 and 2026 figures will be significantly higher.

The pattern that connects these cases is not that AI systems are spontaneously generating violence. It is that AI companies are identifying dangerous behaviour on their platforms and making internal decisions about whether to act on it, decisions that carry life-and-death consequences but are governed by no external standard, no legal obligation, and no regulatory oversight. The deeper risks of emotional dependency on AI chatbots, including the phenomenon researchers have termed “AI psychosis,” raise questions about what happens when systems optimised to sustain engagement become confidantes for users in crisis. OpenAI’s “higher threshold” for reporting was a business judgement, not a legal standard. The employees who recommended contacting police applied their own moral reasoning. The executives who overruled them applied a different calculus, one that presumably weighed the reputational and legal risks of reporting against the reputational and legal risks of not reporting, and got it catastrophically wrong.


The safety question

OpenAI announced an external safety fellowship hours after a New Yorker investigation reported it had dissolved its internal safety team, a sequence that captures the company’s approach to safety governance with uncomfortable precision. The superalignment team, led by Ilya Sutskever before his departure, was disbanded. The AGI-readiness team was dissolved. Safety was dropped from OpenAI’s IRS filings when the company converted from a nonprofit to a for-profit structure. OpenAI’s own robotics chief resigned over safety governance concerns, specifically objecting that “surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got.” The external fellowship, the voluntary policy changes, and Altman’s letter all share a common characteristic: they are gestures that OpenAI controls. They can be announced, modified, or withdrawn without external approval. They create the appearance of accountability without the mechanism of it.

OpenAI’s recent release of open-source safety policies for teen users covers graphic violence, dangerous activities, and other harm categories. OpenAI itself described these as a “meaningful safety floor,” not a comprehensive solution. The gap between floor and ceiling is where Tumbler Ridge happened. The system flagged a teenager describing gun violence scenarios. The policy said that was not enough to report. The teenager went on to kill eight people. A lower threshold would have triggered a report to the RCMP. Whether the RCMP would have acted on it, whether Canadian law would have permitted intervention based on ChatGPT conversations, whether any of that would have prevented the shooting are questions that cannot be answered because the report was never made. OpenAI’s updated policy now says it would make the report. But the updated policy is still voluntary, still internal, and still subject to the same leadership override that prevented the original report from being filed.

The gap

Canada’s AI minister, Evan Solomon, said OpenAI’s commitments “do not go far enough.” Federal ministers from the innovation, justice, public safety, and culture portfolios met with OpenAI representatives after the government summoned the company’s executives in late February. A joint task force between Innovation, Science and Economic Development Canada and Public Safety Canada is reviewing AI safety reporting protocols, with preliminary recommendations expected by summer 2026. Bill C-27, which contains the Artificial Intelligence and Data Act, was Canada’s proposed AI regulation framework but is now widely regarded as inadequate. Bill C-63, the Online Harms Act, was designed for social media platforms, not generative AI systems that conduct one-on-one conversations with users. The federal government has tabled new “lawful access” legislation to give police powers to pursue online data from foreign companies, but it does not specifically require AI companies to report threatening behaviour. Canada currently has no legal framework for assigning responsibility when an AI company possesses information that could prevent violence and chooses not to share it.

This is the gap that Altman’s letter cannot close. An apology addresses a past failure. A voluntary policy change addresses a future risk. Neither addresses the structural problem, which is that a company valued at $852 billion, racing to build artificial general intelligence, serving hundreds of millions of users, employing systems that can identify dangerous behaviour in real time, operates under no legal obligation to tell anyone what it finds. OpenAI’s employees saw a threat. OpenAI’s leadership decided the threat did not meet the company’s internal standard. Eight people are dead. The standard has been lowered. The next decision will be made by the same company, under the same voluntary framework, with the same absence of legal consequence for getting it wrong. Altman wrote that he shares the letter “with the understanding that everyone grieves in their own way and in their own time.” Tumbler Ridge is grieving. The question is not whether Sam Altman is sorry. The question is whether being sorry is a policy.


NAD C 589 CD Player Revives MQA with QRONO d2a Processing


NAD Electronics never bailed on the compact disc. While others chased streaming like it was the last lifeboat off the Titanic, NAD kept building CD players quietly, stubbornly, and with a clear sense of who was still buying silver discs.

Now it’s back with the C 589, a $1,399 addition to its Classic Series that moves the conversation forward from the earlier C 538. NAD is positioning it as a serious step up in digital playback, built around MQA Labs’ QRONO d2a processing and an ESS ES9039PRO DAC to improve timing accuracy, spatial detail, and overall musical coherence.

But let’s not pretend this is 2006. The CD player category didn’t die — it got crowded. Brands like FiiO, Shanling, Esoteric, Onkyo, Marantz, Audiolab, and Quad are all leaning back into physical media with players that range from affordable to borderline obsessive.

The difference? NAD never left. And the C 589 feels like a company doubling down on that decision; this time with more firepower under the hood and a market that suddenly cares again.


The resurgence of physical formats reflects a desire to reconnect with music in a more intentional way, said Morten Nielsen, NAD Product Manager. “For many listeners, compact disc remains an incredibly rewarding format. With the C 589, we wanted to create a player that honours that experience while applying modern digital technologies to extract the best possible performance from every disc.”

What is QRONO d2a?

At the heart of the C 589 is QRONO d2a, a digital audio processing technology developed by MQA Labs that focuses on improving timing precision and reconstruction accuracy during the digital-to-analogue conversion process. The goal is to reduce temporal smearing so transients land where they should, spatial cues make more sense, and the music flows without that slightly mechanical edge that can creep into lesser CD playback.

QRONO d2a is said to work alongside the onboard ESS ES9039PRO DAC, not instead of it. Think of it as an additional layer that refines how the DAC does its job, rather than replacing the core conversion architecture. The combination is designed to extract more low-level detail and dynamic nuance from compact discs without rewriting the tonal balance or over-processing the signal.

It’s also worth noting that while MQA as a streaming format lost support from TIDAL two years ago, the underlying technology was acquired by Lenbrook, which also owns NAD.


Disc Loading

Disc handling is handled by a dedicated loader and transport mechanism designed for consistent, low-noise operation. The transport and laser assembly are engineered to maintain accurate tracking, helping the player read discs cleanly, including older or well-worn CDs that can trip up lesser mechanisms. The goal is straightforward: fewer read errors, more consistent playback, and less reliance on error correction to fill in the gaps.


Connection Flexibility

For system integration, the C 589 keeps things straightforward. It offers both balanced XLR and single-ended RCA analog outputs for direct connection to an integrated amplifier or preamp, along with AES/EBU, coaxial, and optical digital outputs if you’d rather use it as a dedicated transport feeding an external DAC.

What it doesn’t try to be is a digital hub. There are no digital inputs here, so you won’t be routing streamers, TVs, or other sources through it. 


Usability

The C 589 keeps usability straightforward. A large front-panel display with CD Text support shows track and artist information when available, making it easy to read from a distance. An included remote provides basic playback and navigation controls, with no app or additional setup required.

Comparison

Spec | C 589 (2026) | C 538 (2018)
--- | --- | ---
Product Type | CD Player | CD Player
Price | $1,399 | $399
Disc Playback Compatibility | CD-DA (Red Book), CD-R, CD-RW, MQA-CD, MP3-CD, WMA-CD; SACD not supported | CD-DA (Red Book), CD-R, CD-RW, MP3-CD, WMA-CD; MQA-CD and SACD not supported
CD Text Compatibility | Yes | —
Digital-to-Analog Converter | ESS ES9039PRO DAC | Wolfson High Spec 24/192 DAC
Filtering Technology | QRONO d2a | —
Frequency Response | 20 Hz–20 kHz (±0.3 dB) | ±0.5 dB (ref. 0 dB, 20 Hz–20 kHz)
Total Harmonic Distortion | ≤0.005% (1 kHz, LPF 20 kHz) | ≤0.01% (ref. 1 kHz, audio LPF)
Channel Balance | 0.3 dB (0 dB, 1 kHz) | ±0.5 dB (ref. 0 dB, 1 kHz)
Channel Separation | ≥110 dB | ≥90 dB (ref. 1 kHz)
Signal-to-Noise Ratio (A-weighted) | ≥115 dB (1 kHz, A-weighted, LPF 20 kHz, play) | ≥110 dB (ref. 1 kHz, A-weighted)
Dynamic Digital Headroom (DDH) | Yes | —
Outputs | XLR balanced, RCA unbalanced, digital optical, digital coaxial | RCA unbalanced, digital optical, digital coaxial
Analog Output Level | Line out: 2.2 ±0.1 V; balanced: 4.4 ±0.2 V | Analog: 2.2 ±0.1 V
Digital Output Level | Balanced: 2–3 Vp-p, 110 ohms; coaxial: 0.5–0.8 Vp-p, 75 ohms | Not indicated
De-Emphasis | −3.73 to −5.33 dB (0 dB, 1 kHz, 5 kHz); −8.04 to −10.04 dB (0 dB, 1 kHz, 16 kHz) | −4.6 ±0.8 dB (ref. 0 dB, 1 kHz, 5 kHz); −9.0 ±1.0 dB (ref. 0 dB, 1 kHz, 16 kHz)
Linearity | −3 ±0.1 dB (at −3 dB); −6 ±0.2 dB (at −6 dB); −10 ±0.25 dB (at −10 dB); −20 ±0.25 dB (at −20 dB); −60 ±0.5 dB (at −60 dB) | Not indicated
Power Requirements | 120 V, 230 V | 120 V, 240 V
Standby Power | <0.5 W | <0.5 W
Idle Power | ≤20 W | <7.5 W
LCD Display | Yes (6.1 inches) | Yes
Control | IR remote, 12 V trigger | IR remote
Gross Dimensions (W × H × D) | 435 × 83 × 294 mm (17 1/4 × 3 3/8 × 11 5/8 in) | 435 × 70 × 249 mm (17 3/16 × 2 13/16 × 9 13/16 in)
Weight (net) | 5.1 kg (11.2 lb) | 3.0 kg (6.6 lb)
Finish | Black | Black
Included Accessories | 120 V and 230 V AC power cords, stereo RCA-to-RCA cable, IR remote control, 2× AAA batteries, safety/warranty guide, quick setup guide | 120 V AC power cord, stereo RCA-to-RCA cable, IR remote control, 2× AAA batteries, safety/warranty guide, quick setup guide

The Bottom Line 

The C 589 feels like NAD Electronics getting serious about CD again—but doing it on its own terms. What stands out is the combination of QRONO d2a processing with a current-generation ESS ES9039PRO DAC, along with balanced XLR outputs that make it easy to integrate into higher-end systems. It’s not trying to win on features. It’s trying to sound better, especially with standard Red Book CDs.

What’s missing is just as clear. There’s no SACD support, no USB ripping, no headphone output, and no ability to act as a digital hub. At $1,399, those omissions will matter to some buyers—especially when other brands are stacking features at similar or even lower price points. NAD made a deliberate call here to keep the C 589 focused on playback quality rather than versatility.

So who is it for? Someone with an existing system who still owns a serious CD collection and wants a cleaner, more refined playback path without jumping into five-figure territory. If you’re looking for an all-in-one digital solution, this isn’t it. If you just want to press play and get the most out of your discs, it makes a stronger case.


As for what’s next, don’t be surprised if this isn’t the end of the story. With CD showing signs of life again and competition heating up, a higher-end model in NAD’s Masters Series would make sense, especially if they decide to push this QRONO approach even further.

Price & Availability

The NAD C 589 CD Player is priced at $1,399 USD ($1,999 CAD). It is also listed at £1,199 in the UK and €1,599 in Europe. The unit is listed as “coming soon” in 2026 through authorized NAD retailers.

For more information: nadelectronics.com/products/c-589-cd-player


Masimo's Apple Watch ban complaint dismissed by U.S. District Court


Masimo’s long-time lawsuit over Apple Watch patent infringement has encountered another setback, as a U.S. District Court filing reveals the complaint against the USITC will be dismissed with prejudice.

Silver Apple Watch lying face down on a dark surface, showing circular heart-rate sensor array, side button, and red-ringed digital crown without the watchband attached
Sensor on Apple Watch Series 9

The United States International Trade Commission’s (USITC) decision to deny reinstatement of a ban on the Apple Watch has turned into a hurdle for Masimo in its long-running blood oxygen patent lawsuit against Apple. An April 24 filing from the U.S. District Court for the District of Columbia shows that Masimo’s complaint against the USITC over the ban will be dismissed with prejudice.
The filing, which details a complaint Masimo brought against the ITC as well as U.S. Customs and Border Protection, recounts a brief history of the court’s dealings with the ITC. While it doesn’t mention the Apple Watch directly, it centers on the patent infringement suit with Apple and the implementation and dismissal of a ban.
Continue Reading on AppleInsider | Discuss on our Forums

Tech

iVanky FusionDock Max 2 review: Hugely better value the second time around

The iVanky FusionDock Max 2 adds a massive 23 ports, can support up to four displays if your Mac can, and is a formidable Thunderbolt 5 dock that improves on the original in utility as well as value for money.

Black iVANKY docking station with multiple USB ports and SD card slots on a white desk, next to a small blue spherical smart speaker against a light textured wall
iVanky FusionDock Max 2

In March 2024, AppleInsider reviewed the iVanky FusionDock Max 1. We enjoyed its massive number of ports, but its nosebleed pricing at the time and the very limited selection of ideal host computers made it less than the best option.
A lot has happened since then, not the least of which is Thunderbolt 5 on Mac.
Continue Reading on AppleInsider | Discuss on our Forums

Tech

Maine’s governor vetoes data center moratorium

Maine Governor Janet Mills has vetoed a bill that would have temporarily brought permits for new data centers to a halt.

If it had become law, L.D. 307 would have imposed the country’s first statewide moratorium on new data centers — lasting, in this case, until November 1, 2027. The bill also called for the creation of a 13-person council to study and make recommendations on data center construction.

With public opposition to data centers rising, other states including New York have considered similar moratoriums.

In a letter to the state legislature, Mills — a Democrat currently running for the U.S. Senate — said that pausing new data centers would be “appropriate given the impacts of massive data centers in other states on the environment and on electricity rates” and that she “would have signed this bill” if it had included an exemption for a data center project in the Town of Jay.

That project, Mills said, “enjoys strong local support from its host community and region.”

Melanie Sachs, a Democratic state representative who sponsored the bill, said Mills’ veto “poses significant potential consequences for all ratepayers, our electric grid, our environment, and our shared energy future.”

Tech

Speed vs. Depth: How Does Using AI for Work Affect Our Confidence?

Be careful delegating your work to that chatbot: A new peer-reviewed study published this month by the American Psychological Association found that people who heavily rely on AI tools for work tasks reported feeling less confident in their abilities and had less ownership over their work.

There has been growing research on how our brains function when we use AI tools. A landmark study from MIT in 2025 found that our brains don’t retain as much information or employ necessary critical thinking skills when writing tasks are outsourced to AI chatbots. 

This new study aimed to understand how our human behavior, specifically executive functions — like strategic planning and decision making — can change when AI is part of the process. 

Sarah Baldeo, the study’s author and a Ph.D. candidate in AI and neuroscience at Middlesex University in England, noted in the paper that these findings do not show that AI is causing harm or cognitive decline. Rather, they “highlight variability in how users distribute effort between themselves and AI systems under conditions of convenience and competence.” In other words, people who use AI are making conscious trade-offs, and their confidence fluctuates as a result.

The study encouraged nearly 2,000 adults to use AI for a variety of workplace tasks, like prioritizing projects based on deadlines, explaining a strategy and developing plans with incomplete information. It then asked them to self-report their levels of confidence, ownership and AI reliance, including whether they significantly altered the AI-generated outputs. 

Overall, confidence varied with AI use. Greater reliance on AI was associated with lower confidence in participants’ ability to reason independently. Participants also reported making relatively few modifications, meaning they often did not tweak or put their own stamp on what the AI spat out. But those who modified the AI’s work reported feeling more confident and more like the author. Men reported higher reliance on AI than women.

The trade-off between speed and depth was one of the main themes participants reported.

“I got an answer faster, but I don’t think I thought as deeply as I normally would,” one of the participants said.

This reflects one of the biggest caveats of using AI tools. Chatbots, for example, can produce text quickly, but the output doesn’t always have the level of subject matter expertise you need. AI tools can also hallucinate, or make up facts, so AI-generated output needs to be verified before it’s used.

The office is one of the main places where people use AI tools. We’re moving beyond just chatbots, with agents that can autonomously handle tasks that would’ve otherwise required a human. 

But these tools aren’t necessarily making our work lives better; one study found they made workdays longer and more unpleasant. As AI becomes increasingly embedded in our work lives, it’s important to understand how it’s shaping our mental attitudes. Qualities like confidence and ownership are important factors in determining the quality of that work life.

Tech

Samsung Messages is Shutting Down: Here’s How to Rescue All Your Messages Before It’s Gone

The era of the Samsung Messages app is officially coming to an end. After years of preinstalling Google’s alternative on its newest Galaxy devices, Samsung is finally moving to deactivate its legacy texting platform for good this July. For those who have avoided the switch until now, the transition is no longer optional: failing to migrate means risking a major disruption in how you send and receive daily chats.

On a page with information about the switch, Samsung points to instructions on how to swap over to Google’s Messages app, including for phones that are still on Android 12 and Android 13. Samsung has historically preinstalled its own Messages app on Galaxy phones, but began transitioning toward Google Messages as early as 2021.

To encourage people to switch to Google Messages, Samsung’s instructions list new features offered by Google Messages, like RCS-enabled texting for features like typing indicators, easier group chats and sending higher-quality images. Google’s Messages app also has AI-powered spam detection and spam filters, multi-device access to messages and some built-in Gemini AI features. It’s also the app that most Android phones use as their default texting app, including Samsung’s more recent Galaxy S26. There are other SMS texting app alternatives in the Google Play Store if you don’t want to use the one made by Google.

Samsung has not said when exactly in July messaging will no longer work in the app. A Samsung representative didn’t immediately respond to a request for comment. Once the app is deactivated, only messaging to emergency services will work on Samsung Messages. 

While Samsung did stop including it as the default texting app in 2021, it wasn’t until 2024 that Samsung stopped preinstalling the texting app alongside Google Messages. The Galaxy S26 can’t download the Samsung Messages app, and other phones won’t be able to download it after the app’s July sunset.

Samsung said users of Android 11 or lower aren’t affected by the end of service, but would also likely benefit from switching to a supported texting app like Google Messages. To switch to Google Messages, the company asks users to download the app if it’s not already installed and to set it as the default SMS app when prompted after launching it. 

The post also notes that anyone using an older Galaxy Watch running Samsung’s Tizen operating system will no longer have access to their full conversation history, since those watches cannot use Google Messages. Samsung said those watches will still be able to read and send text messages, and that the company’s newer watches (Galaxy Watch 4 and later), which run Wear OS, will still have access to full conversations.

Tech

Discord Sleuths Gained Unauthorized Access to Anthropic’s Mythos

As researchers and practitioners debate the impact that new AI models will have on cybersecurity, Mozilla said on Tuesday it used early access to Anthropic’s Mythos Preview to find and fix 271 vulnerabilities in its new Firefox 150 browser release. Meanwhile, researchers identified a group of moderately successful North Korean hackers using AI for everything from vibe coding malware to creating fake company websites—stealing up to $12 million in three months.

Researchers have finally cracked disruptive malware known as Fast16 that predates Stuxnet and may have been used to target Iran’s nuclear program. It was created in 2005 and was likely deployed by the US or an ally.

Meta is being sued by the Consumer Federation of America, a nonprofit, over scam ads on Facebook and Instagram and allegedly misleading consumers about the company’s efforts to combat them. A United States surveillance program that lets the FBI view Americans’ communications without a warrant is up for renewal, but lawmakers are deadlocked on next steps. A new bill aims to address mounting lawmaker concerns, but lacks substance.

And if you’re looking for a deep dive, WIRED investigated the yearslong feud behind the prominent privacy and security conscious mobile operating system GrapheneOS. Plus we looked at the strange tale of how China spied on US figure skater Alysa Liu and her dad.

And there’s more. Each week, we round up the security and privacy news we didn’t cover in depth ourselves. Click the headlines to read the full stories. And stay safe out there.

Anthropic’s Mythos Preview AI model has been touted as a dangerously capable tool for finding security vulnerabilities in software and networks, so powerful that its creator has carefully restricted its release. But one group of amateur sleuths on Discord found their own, relatively simple ways—no AI hacking required—to gain unauthorized access to a coveted digital prize: Mythos itself.

Despite Anthropic’s efforts to control who can use Mythos Preview, a group of Discord users gained access to the tool through some relatively straightforward detective work: They examined data from a recent breach of Mercor, an AI training startup that works with developers, and “made an educated guess about the model’s online location based on knowledge about the format Anthropic has used for other models”—a phrase that many observers have speculated refers to a web URL—according to Bloomberg, which broke the story.

One person in the group also reportedly took advantage of permissions they already possessed to access other Anthropic models, thanks to their work for an Anthropic contracting firm. As a result of their probing, however, they allegedly gained access not only to Mythos but to other unreleased Anthropic AI models, too. Thankfully, according to Bloomberg, the group that accessed Mythos has only used it so far to build simple websites—a decision designed to prevent its detection by Anthropic—rather than hack the planet.

Security researchers have long warned that the telecom protocols known as Signaling System 7, or SS7, which govern how phone networks connect to one another and route calls and texts, are vulnerable to abuse that would allow surreptitious surveillance. This week researchers at the digital rights organization Citizen Lab revealed that at least two for-profit surveillance vendors have actually used those vulnerabilities—or similar ones in the next generation of telecom protocols—to spy on real victims. Citizen Lab found that two surveillance firms had essentially acted as rogue phone carriers, exploiting access to three small telecom firms—Israeli carrier 019Mobile, British cell provider Tango Mobile, and Airtel Jersey, based on the island of Jersey in the English Channel—to track the location of targets’ phones. Citizen Lab’s researchers say that “high-profile” people were tracked by the two surveillance firms, though it declined to name either the firms or their targets. Researchers warn, too, that the two companies they discovered abusing the protocols are likely not alone, and that the vulnerability of global telecom protocols remains a very real vector for phone spying worldwide.

In a sign of a growing—if belated—crackdown by US law enforcement on the sprawling criminal industry of human-trafficking-fueled scam compounds across Southeast Asia, the Department of Justice this week announced charges against two Chinese men for allegedly helping to manage a scam compound in Myanmar and seeking to open a second compound in Cambodia. Jiang Wen Jie and Huang Xingshan were both arrested in Thailand earlier this year on immigration charges, according to prosecutors, and now face charges for allegedly running a vast scamming operation that lured human trafficking victims to their compound with fake job offers and then forced them to scam victims, including Americans, for millions of dollars with fraudulent cryptocurrency investments. The DOJ says it also “restrained” $700 million in funds belonging to the operation—essentially freezing the funds in preparation for seizure—and also seized a channel on the messaging app Telegram prosecutors say was used to bait and enslave trafficking victims. The Justice Department’s statement claims that Huang personally took part in the physical punishment of workers in one compound, and that Jiang at one point oversaw the theft of $3 million from a single US scam victim.

Three scientific research institutions have been found selling British citizens’ health information on Alibaba, the British government and the nonprofit UK Biobank revealed this week. Over the last two decades, more than 500,000 people have shared their health data—including medical images, genetic information, and health care records—with UK Biobank, which allows scientists around the world to access the information to conduct medical research. However, the charity said the data leak involved a “breach of the contract” signed by three organizations, with one of the datasets for sale believed to have included data on all half-million research subjects. It did not detail the full types of data that were listed for sale but said it has suspended the Biobank accounts of those allegedly selling the information. The ads for the data have also been removed.

Earlier this month, 404 Media reported that the FBI was able to get copies of Signal messages from a defendant’s iPhone because the content of the messages, which is encrypted within Signal, was saved in an iOS push notification database. In this instance, the copies of the messages were still accessible even though Signal had been removed from the phone—though the issue affected all apps that send push notifications.

This week, in response to the issue, Apple released an iOS and iPadOS security update to fix the flaw. “Notifications marked for deletion could be unexpectedly retained on the device,” Apple’s security update for iOS 26.4.2 says. “A logging issue was addressed with improved data redaction.”

While the issue has been fixed, it is still worth changing what appears in notifications on your device. For Signal you can open the app, go to Settings, Notifications, and toggle notifications to show Name Only or No Name or Content. It is another reminder that while apps such as Signal are end-to-end encrypted, this applies to the content as it moves between devices: If someone can physically access and unlock your phone, there is the potential they can access everything on your device.

Tech

2026 Green Powered Challenge: Ventilate Your Way To Power!

Have you ever looked out across the rooftops of a city and idly gazed at the infrastructure that remains unseen from the street? It seems [varunsontakke80] has, because here’s their project, harvesting energy from the rotation of a rooftop ventilator.

The build is relatively straightforward: a pair of disks with magnets attached is mounted on the ventilator shaft inside its dome, and a stationary third disk between them carries a set of coils in which the moving magnets induce a current. A rectifier and charge circuit complete the picture.

This appears to be part of a college project, but despite searching, we can’t find any measure of how much power this thing generates. We’d be concerned that it might reduce the efficiency of the ventilator somewhat. There will be an inevitable tradeoff as power is harvested. Still, it’s a neat use of a ubiquitous piece of hardware, and we like it for that.
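In the absence of published numbers, the classic sinusoidal-alternator EMF relation (E_rms ≈ 4.44 · f · N · Φ) at least lets you sketch the scale. Every figure in the snippet below is a hypothetical assumption, since the project doesn’t publish magnet, coil, or speed details:

```python
# Back-of-envelope estimate of what a small axial-flux pickup on a
# rooftop ventilator might produce. Every input below is an assumed,
# hypothetical value for illustration, not a measurement from the project.

rpm = 120             # assumed shaft speed in a light breeze
pole_pairs = 6        # assumed magnet pairs on each rotor disk
turns_per_coil = 200  # assumed turns in each stator coil
coils = 6             # assumed coils wired in series
flux_per_pole = 5e-5  # assumed peak flux per pole, in webers

# Each pole pair produces one AC cycle per revolution.
f_elec = pole_pairs * rpm / 60.0

# Classic sinusoidal-alternator EMF relation: E_rms = 4.44 * f * N * phi,
# summed over the series-wired coils.
e_rms_total = 4.44 * f_elec * turns_per_coil * flux_per_pole * coils

# A matched load sees half the open-circuit EMF.
load_ohms = 100.0     # assumed load resistance
p_load = (e_rms_total / 2.0) ** 2 / load_ohms

print(f"electrical frequency: {f_elec:.1f} Hz")
print(f"open-circuit EMF:     {e_rms_total:.2f} V rms")
print(f"matched-load power:   {p_load * 1e3:.1f} mW")
```

With guesses like these the output lands in the tens of milliwatts, which is why the drag tradeoff probably matters more than the harvest itself.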

This hack is part of our 2026 Green Powered Challenge. You’ve got time to get your own entry in, so get a move on!


Tech

From the ‘scurfy’ mouse to the Nobel Prize: How a Seattle biotech pioneer’s long game paid off

Published

on

The biotech industry is increasingly shaped by computer-designed drugs and investor pressure to move fast and show commercial traction. Nobel laureate Fred Ramsdell took a different path — one built on cell-based therapies, philanthropic funding and patient investing.

That path began at Darwin Molecular, a biotech startup in Bothell, Wash., that launched in 1992 with backing from Bill Gates and Paul Allen. The Microsoft co-founders weren’t chasing quick returns, Ramsdell said, and that freedom attracted dedicated researchers.

“People bought into that because you’re trying to do something that would make a difference,” he said. “It wasn’t a one-drug company. It wasn’t hyper-focused on something very specific. It was trying to figure out how we can affect change in patients.”

That mission-driven culture proved fertile ground. Ramsdell’s work at Darwin ultimately led to a Nobel Prize in Physiology or Medicine, awarded in October and shared with former Darwin colleague Mary Brunkow and Shimon Sakaguchi of Osaka University in Japan. The trio was recognized for foundational work on regulatory T cells, or Tregs — dubbed the “immune system’s security guards.”

The discovery of Tregs changed therapeutics by showing that the immune system has a built-in braking mechanism that can be enhanced to treat autoimmune disease, transplant rejection and graft-versus-host disease, or blocked to improve cancer immunotherapy.

Ramsdell recounted his journey at Life Science Washington’s annual conference in Seattle on Tuesday, tracing the unlikely origins of the discovery back to the Cold War.

The Darwin team studied a line of mice descended from post-Manhattan Project research into the effects of radiation on living organisms. In 1949, the program produced a mouse from a naturally occurring, non-radiation-induced mutation, later named “scurfy.”

A fraction of the male mice were riddled with illness and lived for only a few weeks. “They had every autoimmune disease in one animal,” Ramsdell said — diabetes, Crohn’s disease, psoriasis, myocarditis and more.

That suffering pointed to something important. The scurfy mice carried a mutation the Darwin scientists identified and named Foxp3 — a gene essential to keeping the immune system from attacking the body’s own healthy cells. The mouse gene has a human counterpart, FOXP3.

“We recognized the potential of these cells,” Ramsdell said. Introducing healthy Tregs into people with autoimmune disease could treat the condition — but the scientific tools to make that a reality didn’t yet exist.

Darwin was acquired in 1996 by London-based Chiroscience Group, which merged with the British company Celltech. When the company shut down its Washington R&D operations in 2004, Ramsdell and Brunkow moved on.

Ramsdell eventually landed at the Parker Institute for Cancer Immunotherapy, which he helped launch in 2016. The nonprofit research institute presented another unique opportunity. Founded with a $250 million grant from tech entrepreneur Sean Parker, it operates as a collaborative network across seven major U.S. cancer centers, applying immunotherapy to cancer in ways that siloed institutions couldn’t.

The secret ingredient, Ramsdell said, was trust — built deliberately through Parker Institute retreats that included scientists and their families.

“The ability to build trust and collaboration, true collaboration, and combine [research] that wouldn’t otherwise be combined, was incredibly appealing to me,” he said.

Today, Ramsdell serves as a scientific advisor for the Parker Institute and for Sonoma Biotherapeutics, a Seattle- and South San Francisco-based startup he co-founded that is focused on Treg cells. The company has a partnership with Regeneron to co-develop cell therapies for Crohn’s disease, ulcerative colitis and other conditions — a direct line from the scurfy mice of the 1940s to the clinic.

Even in advisory roles, Ramsdell keeps returning to big-picture biological questions. He’s currently intrigued by people who carry genetic predispositions for diseases that never materialize — and what that might reveal about the hidden coding in their DNA that holds illness at bay.

Looking at this phenomenon across populations, scientists can explore these genetic factors, he said, “and that will open up a lot of your doors.”

Copyright © 2025