The Supreme Court is scared it’s going to break the internet

The Supreme Court tossed out a billion-dollar verdict against an internet service provider (ISP) on Wednesday, in a closely watched case that could have severely damaged many Americans’ access to the internet if it had gone the other way.
Wednesday’s decision in Cox Communications v. Sony Music Entertainment is part of a broader pattern. It is one of a handful of recent Supreme Court cases that threatened to break the internet — or, at least, to fundamentally harm its ability to function as it has for decades. In each case, the justices took a cautious and libertarian approach. And they’ve often done so by lopsided margins. All nine justices joined the result in Cox, although Justices Sonia Sotomayor and Ketanji Brown Jackson criticized some of the nuances of Justice Clarence Thomas’s majority opinion.
Some members of the Court have said explicitly that this wary approach stems from a fear that they do not understand the internet well enough to oversee it. As Justice Elena Kagan said in a 2022 oral argument, “we really don’t know about these things. You know, these are not like the nine greatest experts on the internet.”
Thomas’s opinion in Cox does a fine job of articulating why this case could have upended millions of Americans’ ability to get online. The plaintiffs were major music companies who, in Thomas’s words, have “struggled to protect their copyrights in the age of online music sharing.” It is very easy to pirate copyrighted music online. And the music industry has fought online piracy with mixed success since the Napster Wars of the late 1990s.
Before bringing the Cox lawsuit, the music company plaintiffs used software that allowed them to “detect when copyrighted works are illegally uploaded or downloaded and trace the infringing activity to a particular IP address,” an identification number assigned to online devices. The software informed ISPs when a user at a particular IP address was potentially violating copyright law. After the music companies decided that Cox Communications, the primary defendant in Cox, was not doing enough to cut off these users’ internet access, they sued.
Two practical problems arose from this lawsuit. One is that, as Thomas writes, “many users can share a particular IP address” — such as in a household, coffee shop, hospital, or college dorm. Thus, if Cox cut off a customer’s internet access whenever someone using that customer’s IP address downloaded something illegally, it would also wind up shutting off internet access for dozens or even thousands of innocent people.
Imagine, for example, a high-rise college dormitory where just one student illegally downloads the latest Taylor Swift album. That student might share an IP address with everyone else in that building.
The other reason the Cox case could have fundamentally changed how people get online is that the monetary penalties for violating federal copyright law are often astronomical. Again, the plaintiffs in Cox won a billion-dollar verdict in the trial court. If these plaintiffs had prevailed in front of the Supreme Court, ISPs would likely have been forced into draconian crackdowns on any customer that allowed any internet users to pirate music online — because the costs of failing to do so would be catastrophic.
But that won’t happen. After Cox, college students, hospital patients, and hotel guests across the country can rest assured that they will not lose internet access just because someone down the hall illegally downloads “The Fate of Ophelia.” Thomas’s decision does not simply reject the music industry’s suit against Cox; it nukes it from orbit.
Cox, moreover, is the most recent of at least three decisions where the Court showed similarly broad skepticism of lawsuits or statutes seeking to regulate the internet.
The Supreme Court is an internet-based company’s best friend
The most striking thing about Thomas’s majority opinion in Cox is its breadth. Cox does not simply reject this one lawsuit; it cuts off a wide swath of copyright suits against internet service providers.
Thomas argues that, in order to prevail in Cox, the music industry plaintiffs would have needed to show that Cox “intended” for its customers to use its service for copyright infringement. To overcome this hurdle, the plaintiffs would have needed to show either that internet service providers “promoted and marketed their [service] as a tool to infringe copyrights” or that the only viable use of the internet is to illegally download copyrighted music.
Thomas also adds that the mere fact that Cox may have known that some of its users were pirating copyrighted material is not enough to hold it liable for that activity.
As a legal matter, this very broad holding is dubious. As Sotomayor argues in a separate opinion, Congress enacted a law in 1998 which creates a safe harbor for some ISPs that are sued for copyright infringement by their customers. Under that 1998 law, the lawsuit fails if the ISP “adopted and reasonably implemented” a system to terminate repeat offenders of federal copyright law.
The fact that this safe harbor exists suggests that Congress believed that ISPs which do not comply with its terms may be sued. But Thomas’s opinion cuts off many lawsuits against defendants who do not comply with the safe harbor provision.
Still, while lawyers can quibble about whether Thomas or Sotomayor has the better reading of federal law, Thomas’s opinion was joined by a total of seven justices. And it is consistent with the Court’s previous decisions seeking to protect the internet from lawsuits and statutes that could undermine its ability to function.
In Twitter v. Taamneh (2023), a unanimous Supreme Court rejected a lawsuit seeking to hold social media companies liable for overseas terrorist activity. Twitter arose out of a federal law permitting suits against anyone “who aids and abets, by knowingly providing substantial assistance” to certain acts of “international terrorism.” The plaintiffs in Twitter claimed that social media companies were liable for an ISIS attack that killed 39 people in Istanbul, because ISIS used those companies’ platforms to post recruitment videos and other content.
Thomas also wrote the majority opinion in Twitter, and his opinion in that case mirrors the Cox decision’s view that internet companies generally should not be held responsible for bad actors who use their products. “Ordinary merchants,” Thomas wrote in Twitter, typically should not “become liable for any misuse of their goods and services, no matter how attenuated their relationship with the wrongdoer.”
Indeed, several key justices are so protective of the internet — or, at least, so cautious about interfering with it — that they’ve taken a libertarian approach to internet companies even when their own political party wants to control online discourse.
In Moody v. Netchoice (2024) the Court considered two state laws, one from Texas and one from Florida, that sought to force social media companies to publish conservative and Republican voices that those companies had allegedly banned or otherwise suppressed. As Texas’s Republican Gov. Greg Abbott said of his state’s law, it was enacted to stop a supposedly “dangerous movement by social media companies to silence conservative viewpoints and ideas.”
Both laws were blatantly unconstitutional. The First Amendment does not permit the government to force Twitter or Facebook to unban someone for the same reason the government cannot force a newspaper to publish op-eds disagreeing with its regular columnists. As the Court held in Miami Herald Publishing Co. v. Tornillo (1974), media outlets have an absolute right to determine “the choice of material” that they publish.
After Moody reached the Supreme Court, however, the justices uncovered a procedural flaw in the plaintiffs’ case that should have required them to send the case back down to the lower courts without weighing in on whether the two state laws are constitutional. Yet, while the Court did send the case back down, it did so with a very pointed warning that the US Court of Appeals for the Fifth Circuit, which had backed Texas’s law, “was wrong.”
Six justices, including three Republicans, joined a majority opinion leaving no doubt that the Texas and Florida laws violate the First Amendment. They protected the sanctity of the internet, even when it was procedurally improper for them to do so.
This Supreme Court isn’t normally so protective of institutions
One reason why the Court’s hands-off-the-internet approach in Cox, Twitter, and Moody is so remarkable is that the Supreme Court’s current majority rarely shows such restraint in other cases, at least when those cases have high partisan or ideological stakes.
In two recent decisions — Mahmoud v. Taylor (2025) and Mirabelli v. Bonta (2026) — for example, the Court’s Republican majority imposed onerous new burdens on public schools, which appear to be designed to prevent those schools from teaching a pro-LGBTQ viewpoint to students whose parents find gay or trans people objectionable. I’ve previously explained why public schools will struggle to comply with Mahmoud and Mirabelli, and why many might find compliance impossible. Neither opinion showed even a hint of the caution that the Court displayed in Cox and similar cases.
Similarly, in Medina v. Planned Parenthood (2025), the Court handed down a decision that is likely to render much of federal Medicaid law unenforceable. If taken seriously, Medina overrules decades of Supreme Court decisions shaping the rights of about 76 million Medicaid patients, including a decision the Court handed down as recently as 2023 — though it remains to be seen if the Court’s Republican majority will apply Medina’s new rule in a case that doesn’t involve an abortion provider.
The Court’s Republican majority, in other words, is rarely cautious. And it is often willing to throw important American institutions such as the public school system or the US health care system into turmoil, especially in highly ideological cases.
But this Court does appear to hold the internet in the same high regard that it holds religious conservatives and opponents of abortion. And that means that the internet is one institution that these justices will protect.
Washington state needs a ‘coherent’ story to compete in AI, leaders agree

Washington state may have everything it needs to become a global AI hub. The problem is, it hasn’t figured out how to say so, and its political and tech leaders agree it’s time they got to work on it.
On Wednesday, the Washington Technology Industry Association (WTIA) convened a roundtable of civic and industry leaders from throughout the Seattle region to ask a pointed question: What will it actually take for Washington state to stop playing catch-up with Silicon Valley and start leading?
At the center of the debate was the nonprofit’s latest white paper, “Seattle’s AI Advantage: The Path to Global Leadership.”
In it, the author and futurist Alex Lightman argues the Emerald City holds six distinct advantages over rival tech hubs: abundance of clean energy, a backyard full of hyperscalers like Microsoft and Amazon, an acceptance of using AI to continuously improve AI and software, access to quantum computing, the ability to run large-scale simulations cheaply, and a growing foothold in space technology.
These assets, he contends, are what position Seattle to become a top-five U.S. city economically, comparable to a G7 economy with a $1 trillion GDP.
Yet while WTIA’s white paper largely shows that the city has incredible potential, the lobbying group emphasizes that it is a roadmap. The real challenge is to figure out what happens next. Once the talking is done, who’s going to organize the effort to transform the state?

“I think one of the most important things we can do is start telling this story,” said Randa Minkarah, WTIA chief operations executive, referring to Washington’s need to establish itself as a leading, responsible AI and advanced technology region. “How do we get that out there that changes people’s point of view?”
Once that narrative takes hold, it can create momentum—”a storytelling flywheel” that spreads best practices and lessons across communities and organizations, Minkarah added.
Washington’s struggle to tell a coherent AI story isn’t caused by a single issue, but rather by a host of issues. Rachel Smith, president of the Washington Roundtable, pointed to a three-way misalignment between federal priorities and dollars, state priorities and dollars, and what is actually happening on the ground in communities.
“When those things are all misaligned, it feels like we spend a whole lot of money and we don’t get a whole lot out of it,” she said.
Smith called for a broader strategy focused on economic competitiveness and tax reform. This is a topic of debate after state lawmakers approved a new income tax on high earners this month. One investor in the audience underscored the issue, noting that some of the people writing checks in Washington’s tech ecosystem have moved their residences out of state.

There’s also the failure to make AI’s benefits accessible to everyday Washingtonians, as Indigenous communities and local residents feel excluded. And compounding the issue is the lack of strategic alignment, as Washington has pared back its economic development strategy. That’s not what community leaders want—they want Olympia to take the lead.
“That is a place where the state having a direction on the AI industry, where we want to go, would be super helpful,” remarked Jesse Canedo, chief economic development officer for the City of Bellevue. Beau Perschbacher, Governor Bob Ferguson’s senior policy advisor for economic development, didn’t disagree.
So what actually needs to happen?
Panelists didn’t hold back when asked what Washington’s leaders must do in the next 24 months: Joe Nguyen, a former Washington State senator and CEO of the Seattle Metropolitan Chamber of Commerce, wants more risk-takers—businesses willing to be first movers in adopting AI within their industries and then evangelize what’s possible.
Canedo, for his part, hopes operators can execute on the white paper’s vision.
“Seattle as a region does a lot of great visioning,” he said. “It needs a lot of operationalizing of the big, bold ideas…Housing, people, and energy are the three big things that we can operationalize very quickly out of this vision.”
Not everyone agreed on the path forward.
Alvin Graylin, a fellow at Stanford’s Institute for Human-Centered Artificial Intelligence, argued that Washington should position itself as a global hub for open-source AI rather than following Silicon Valley’s closed-model, big-spending approach.
He pointed to Chinese labs producing near-equivalent models at a fraction of the cost, and said Washington could tap into millions of open-source developers worldwide rather than competing for a few thousand elite researchers at big labs.

Lightman, the white paper’s author, was skeptical. He noted that Microsoft made Netscape’s browser irrelevant by giving its own browser away, then made trillions selling everything around it. Open source has a ceiling, he argued, and it wouldn’t get Seattle to a trillion-dollar economy.
Separately, Perschbacher wants more federal funding to come to the state, and to improve community outreach to bring more people along as partners.
Can these leaders take all of their ideas and turn them into action? At the very least, the WTIA secured two pledges: The Washington Roundtable and the Seattle Metro Chamber both said they would work with the Governor’s office to shape a statewide economic development strategy, and Perschbacher committed to leading a federal funding working group.
Others joining the conversation included Alicia Teel, deputy director of Seattle’s Office of Economic Development. In addition to Minkarah, representing WTIA were Vice President of Innovation and Entrepreneurship Nick Ellingson, Chair of the Advanced Technologies Cluster Arry Yu, and Director of Industry and Community Relations Terrance Stevenson.
Next-gen AI breakthrough promises chatbots that can read the room better
Have you ever asked a chatbot something and felt like it completely missed your point? You say something with a bit of nuance, and the AI misses the subtlety entirely. That is exactly the problem researchers are trying to solve.
Even though the emotional connection with AI can feel deeper than human conversation for many users, most AI systems today still treat a sentence as a single block of sentiment. If you mix praise and criticism, the nuance often gets lost.
The research, by Zhifeng Yuan and Jin Yuan, introduces a model that can break down a sentence and understand how you feel about each part, instead of generalizing everything into one response.
How this system helps AI read your intent better
Think about a sentence like, “The food was great, but the service was terrible.” A typical AI chatbot might struggle because the sentence has both positive and negative emotions.

The proposed model looks at each part of the sentence separately and connects each emotion to the right subject. It relies on an ‘emotional keywords attention network’ to do that.
In simple terms, it teaches AI to focus on words that carry strong emotions, such as “great” or “terrible.” These words guide the system toward understanding what matters most in the sentence.
The model then links those emotional cues to a specific aspect. It learns that “great” applies to food, while “terrible” applies to service. This process, known as aspect-level sentiment analysis, makes responses far more precise.
It also uses attention mechanisms to understand context, so it does not rely on keywords alone. It can figure out how different parts of a sentence connect. Researchers say this method performs better than existing models on standard benchmarks.
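The pairing described above can be sketched in code. The following toy uses a simple distance-based weighting to stand in for attention: the lexicon, the weighting scheme, and the function names are illustrative assumptions, not the network the researchers actually built.

```python
# Toy sketch of aspect-level sentiment analysis: pair each aspect term
# with the highest-weighted opinion word near it. An illustrative
# simplification, NOT the paper's emotional-keywords attention network.
SENTIMENT_LEXICON = {"great": 1, "excellent": 1, "terrible": -1, "awful": -1}

def aspect_sentiments(tokens, aspects):
    """Assign each aspect the polarity of its most strongly weighted opinion word."""
    results = {}
    for aspect in aspects:
        a_idx = tokens.index(aspect)
        best_polarity, best_weight = 0, 0.0
        for i, tok in enumerate(tokens):
            if tok in SENTIMENT_LEXICON:
                # Attention-like weight: opinion words closer to the
                # aspect term exert more influence on its polarity.
                weight = 1.0 / (1 + abs(i - a_idx))
                if weight > best_weight:
                    best_polarity, best_weight = SENTIMENT_LEXICON[tok], weight
        results[aspect] = best_polarity
    return results

tokens = "the food was great but the service was terrible".split()
print(aspect_sentiments(tokens, ["food", "service"]))
# {'food': 1, 'service': -1}
```

On the article’s example sentence, “great” sits closer to “food” and “terrible” closer to “service,” so each aspect is assigned the right polarity instead of the whole sentence collapsing into one averaged-out sentiment. A real model learns these associations from context rather than from word distance alone.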
This approach can make AI chatbots feel more human

If adopted widely, this could change how AI responds in real-world situations. Chatbots could handle nuanced feedback more effectively instead of defaulting to generic replies. Customer support systems could pinpoint exactly what went wrong and respond with greater accuracy.
While concerns grow around AI chatbots mirroring human personality traits a little too well, one thing is clear. AI is here to stay, and if it is going to be part of everyday conversations, it needs to get better at reading the room.
HP wants your next work PC to be an AI assistant
With the rapid rise of autonomous agents like OpenClaw and Anthropic’s Claude Work, along with the wide range of opinions about their impact on the future of work, it is not surprising to see renewed interest in workplace PCs. Add to that Intel’s recent release of commercial vPro versions of…
Amazon's Big Spring Sale delivers Apple deals from $14.99
Day 2 of Amazon’s Big Spring Sale offers deals on new M5 Pro and M5 Max MacBook Pros, along with blowout savings on Apple Watches, iPhones, and more.

Save up to 40 percent on Apple gear during the Big Spring Sale – Image credit: Amazon
Day 2 of Amazon’s weeklong sale is well underway, and we’ve rounded up the best deals on Apple hardware, including 2026 releases, along with accessories like MagSafe chargers and cables.
Marantz M1 Streaming Amplifier Review: Can This Compact All-in-One Replace Your Entire Hi-Fi System?
Oh Marantz…what exactly are you playing at here?
Just when the sub-$1,000 streaming amplifier category had turned into a predictable arms race of inputs, outputs, and firmware promises, along came the Marantz Model M1 with that unmistakable Marantz swagger, now backed by HEOS multi-room integration and Dirac Live room correction to give it some real-world muscle. Sure, the WiiM Amp Ultra and Eversolo Play might dazzle you with more HDMI ports, coaxial inputs, and firmware update promises than a Tesla—but do they offer this much soul? Doubtful.
Here’s the part nobody in the industry really wants to say out loud. The future isn’t being decided in six-figure listening rooms with Italian racks and cables that cost more than your first car. It’s being decided in apartments, offices, and living rooms where people want one box, real performance, and no drama.
The question is whether the industry actually leans into that shift or keeps pretending the old model still scales. Brands like Fosi, WiiM, Bluesound, NAD, Denon, Marantz, Yamaha, and Cambridge Audio clearly see where the market is going. Others? Still chasing a shrinking pool of traditional audiophiles with very deep pockets and very finite patience.
Marantz, to its credit, is covering both ends of the spectrum. The Model M1 reflects where the market is heading, while the Model 10 represents its high-end ambitions; the latter is one of the better implementations of Class D amplification we’ve seen, even if its price puts it out of reach for most buyers. Between those two sits a full range of AVRs and stereo receivers that bridge the gap and make a lot more sense for how people actually build systems today.

Marantz Model M1 Features and Connectivity: Fewer Ports, More Purpose
The Marantz Model M1 is designed as a compact, all-in-one streaming amplifier that simplifies system building without stripping away capability. Rated at 100 watts per channel into 8 ohms with very low distortion, it has enough power to drive a wide range of bookshelf and smaller floorstanding speakers—within reason, of course.
The inclusion of a dedicated subwoofer output with adjustable crossover and ±15dB level trim adds real flexibility for 2.1 setups, allowing for proper integration rather than guesswork.
Unlike traditional integrated amplifiers that juggle analog and digital signal paths, the M1 operates as a digital-first platform. It supports high-resolution PCM up to 24-bit/192 kHz and DSD playback, handling content from streaming services, network storage, or direct USB input with consistency. This approach keeps the signal path clean and controlled, which aligns with Marantz’s goal of delivering a more refined and stable sonic presentation rather than chasing raw specification extremes.

Connectivity is focused but practical. Wireless options include Bluetooth, AirPlay 2, Qobuz Connect, TIDAL Connect, and Spotify Connect, while HEOS provides the backbone for multi-room audio with support for up to 32 zones. HEOS also enables integration with home control systems such as Control4, URC, and Crestron, making the M1 viable in both simple and more complex installations.
It also works as a Roon player, although that requires an active Roon subscription and a Roon Core running on your network. The Core acts as the media server and can be hosted on a computer, NAS drive, or other compatible hardware.
For TV integration, HDMI eARC allows the M1 to function as a legitimate soundbar alternative with proper stereo imaging and significantly better amplification. Volume and power control can be handled directly through the TV remote, and the unit can be tucked out of sight without losing usability thanks to full app control and IR learning capability for third-party remotes.
One limitation worth noting is the lack of a built-in phono stage. Vinyl playback requires either a turntable with a built-in preamp or an external phono stage connected to the analog input. It’s a deliberate omission that reinforces the M1’s digital-first identity, but one that analog-focused users will need to plan around.
Onboard Dolby Digital+ decoding supports the audio codecs commonly used by broadcast and streaming TV services, making the Model M1 a viable upgrade over a typical soundbar. Additional options include Dialogue Enhancer for clearer vocals and a Virtual mode that uses Dolby processing to create a more immersive sound field from stereo content.
The Model M1 can also be paired with additional units for multi-room or expanded system setups, and its compact chassis allows two units to fit side-by-side in a standard 19-inch equipment rack if needed.
Cooling is handled through passive thermal management, so there are no fans to introduce noise or potential failure points. Combined with threaded mounting points on the bottom panel, this allows the amplifier to be installed cleanly on a wall bracket or inside cabinetry without concerns about heat buildup.
The Model M1 measures 8-9/16 inches wide, 3-3/8 inches high, and 9-15/16 inches deep, weighs 4.84 pounds, and includes a 5-year warranty.


Building a System Around the Marantz Model M1
This is where things get practical. The goal here isn’t to be cheap; it’s to be smart. There’s a difference. Chasing the lowest price usually ends with compromises you can hear five minutes into your first album. The better play is finding speakers that won’t wreck your bank account (let’s be honest, gas and electric bills are already doing a fine job of that) but that still deliver real synergy with the M1 without forcing you into endless EQ tweaks.
That matters more than ever with a product like this. The Model M1 has the control and resolution to expose mismatches, but it’s also forgiving enough to reward a well-balanced pairing. You may not even need a subwoofer depending on your room size and speaker choice, which simplifies things even further. And now that Dirac Live room correction is part of the equation, you’ve got a tool that can actually address room issues that used to derail setups like this. Not a miracle cure, but a serious advantage if you use it properly.
I rotated through the DALI Kupid, Q Acoustics 3020c, Acoustic Energy AE100 MK2, and stepped up to the Wharfedale Diamond 12.3 and Q Acoustics 5040 floorstanders to see how far the M1 could stretch without things getting stupid.
The goal wasn’t to build some aspirational system that lives on a dealer floor. I kept the ceiling under $3,000 for a straightforward two-channel setup, and around $5,000 if you add a turntable or a compact subwoofer. Real-world money. Real-world rooms. The kind of systems people actually use in a den, living room, or bedroom without needing a second mortgage or a dedicated listening shrine.
For some people, the first question is obvious: can this small box actually drive medium to higher-sensitivity floorstanding speakers, or is that pushing it? The answer is yes—with some limits. It comes down to how loud you listen and how much space you’re trying to fill.
In my setup, both the Wharfedale Diamond 12.3 and Q Acoustics 5040 proved to be very workable pairings, but placement matters. These aren’t speakers you shove against a wall and forget about. They need roughly 2 to 3 feet of space behind them and at least 2 feet from the side walls to open up properly.
Give them that breathing room and they reward you with excellent imaging and a presentation that pulls away from the cabinets. The soundstage stretches wide, with a convincing sense of height, and both models do a very good job of disappearing when everything is dialed in correctly.
Darkness on the Edge of Town?
From a tonal perspective, the M1 leans slightly to the dark side of the Force, but not at the expense of clarity, speed, or overall presence. It’s not veiled or slow—it just carries more weight and density through the midrange and bass. Compared to something like the WiiM Ultra, the difference is obvious. The M1 delivers more texture and physicality, while the WiiM chases a bit more sparkle and top-end detail. The Marantz never comes across as thin or clinical.
If you’re familiar with Audiolab’s integrated and streaming amps, this goes in the opposite direction. Audiolab tends to run cool, clean, and very controlled, sometimes to the point of feeling a little detached. The M1 adds body, more impact down low, and a sense of drive that makes music feel less polite. You do give up some resolution and edge definition in the bass compared to Audiolab, but the trade-off is a more engaging and substantial presentation.
That character really shows itself with electronic music. Deadmau5, Boards of Canada, Aphex Twin, Kraftwerk, Tangerine Dream; the M1 hits harder and fills in the space between notes in a way that feels more physical. It’s less about precision and more about momentum. Think thick Crayola markers versus ultra-fine ink pens. The Audiolab and WiiM draw cleaner lines, but the Marantz isn’t afraid to color outside them, and for this kind of music, that’s exactly the right move.
Switching over to vocals, the M1 keeps that same tonal balance intact. Male vocals come through with solid texture and weight, sitting slightly forward without sounding pushed. There’s a fullness here that works well with most recordings, but the speaker pairing makes a noticeable difference. I preferred vocals through the Q Acoustics 5040 over the Wharfedale Diamond 12.3; the 5040 offers better resolution and cleaner lower midrange detail, which gives voices more definition without thinning them out.
Sam Cooke, Elvis, Nick Cave, Jason Isbell, and John Prine all came across smooth and grounded. For some listeners, that might tip a bit too far into “safe,” depending on the speaker. Nick Cave in particular benefited from the added weight, but I missed a bit of the edge and growl that defines his delivery. The M1 doesn’t strip away character, but it does round things off slightly.

Bookshelf Speakers and the Marantz M1: Where Synergy Wins
The bookshelf choices here weren’t random. The DALI Kupid, Q Acoustics 3020c, and Acoustic Energy AE100 MK2 were picked with a specific goal in mind: maximize performance without turning the room into an equipment shrine. These are the kinds of speakers that can live on proper stands or sit cleanly on a credenza under a TV and still deliver a convincing, full-range experience.
To make that work, they had to check a few non-negotiable boxes: real presence, enough impact to carry both music and movie soundtracks, strong imaging, and a soundstage that doesn’t collapse the second you move off-axis. This isn’t about chasing perfection, which isn’t realistic at this price point; it’s about building a system that actually works in a real room, with real constraints, and still sounds like you didn’t cut corners.
For a deeper look at all three, you can check out my shoot-out results, but the short version is that each brings something worthwhile to the table with the M1. The Q Acoustics 3020c is the most complete of the group, offering more output, a wider soundstage, and better overall resolution. The Acoustic Energy AE100 MK2 trades some of that refinement for greater low-end presence and a punchier upper bass and lower midrange, which gives it more weight with rock and electronic tracks.
The DALI Kupid is the most lively of the three, with a more energetic top end that adds air and sparkle without tipping into harshness. That’s not an accident; DALI has a long track record of getting tweeter design right, and it shows here. It’s open and engaging, but never brittle. That said, its U.S. pricing feels a bit ambitious given its size and low-end extension, especially when compared to how it’s positioned in other markets.
So what would I actually buy? Having lived with both pairs of floorstanders, along with the Q Acoustics 3020c and Acoustic Energy AE100 MK2, it’s a lot easier to sort through what works and what doesn’t. On the floorstanding side, I’d lean toward the Q Acoustics 5040—but with a clear condition. Keep them in a reasonably sized room. My den in New Jersey (16 x 13 x 9), the home office I’m converting (21 x 13 x 9), and my Florida setup (15 x 12 x 9) are all good examples of spaces where speakers like the 5040 or Wharfedale Diamond 12.3 make sense. They fill the room without overloading it with bass or turning placement into a constant battle.
On the bookshelf side, I tend to favor the DALI and Q Acoustics pairings for their balance of clarity, imaging, and overall ease of placement. They’re the safer choices if you want something that just works across music, TV, and movies. But if you’re after more low-end weight and a stronger push through the upper bass and lower mids, the Acoustic Energy AE100 MK2 is the sleeper here. It doesn’t get talked about enough. The pacing is excellent, it has real punch for its size, and it looks far more expensive than it has any right to.
But what about HEOS control? That’s going to matter more than anything for a lot of people. In my case, it’s pretty straightforward. I use TIDAL and Qobuz almost exclusively, so having access to TIDAL Connect and Qobuz integration is what I actually care about. Roon isn’t part of the equation anymore. I sold my Nucleus and haven’t looked back. With a 2TB drive on the network holding more than 1,900 CDs ripped to FLAC, I already have everything I need locally without adding another layer of software into the chain.
Before wrapping things up, I also tested the M1 with HDMI eARC across all three of my TVs in New Jersey using a QED cable. No drama. It locked in immediately with no handshake issues, and control worked exactly as expected. Movies and TV were an immediate upgrade. “Landman,” “The Madison” on Paramount+, and even NHL games all benefited from the added scale, clarity, and tonal weight. It’s not even a fair fight compared to internal TV speakers or most of the soundbars I’ve used. I’ll take a proper stereo soundstage and believable dynamics over fake surround tricks every time.

The Bottom Line
The Marantz Model M1 doesn’t try to outgun the competition on features—and that’s the point. It delivers a cohesive, full-bodied sound with real texture, strong midrange presence, and enough power to drive the kinds of speakers people actually use in real rooms. HEOS keeps everything connected, HDMI eARC works without the usual nonsense, and Dirac Live gives you a legitimate tool to deal with room issues instead of pretending they don’t exist.
What you don’t get is just as important. There’s no phono stage, analog inputs are limited, and the M1 isn’t chasing razor-sharp treble detail or lab-grade precision. This isn’t for someone building a shrine to separates. It’s for someone who wants a clean, compact system that sounds right and doesn’t require a manual and a weekend to figure out. At $1,000, it earns its keep—and then some.
Editors’ Choice in the Network Amplifier category for anyone who can swing the price and is pairing it with speakers along the lines of those covered here.
Pros:
- Full-bodied, engaging sound with strong midrange and bass weight
- Works well with both bookshelf and smaller floorstanding speakers
- HEOS integration with built-in TIDAL Connect, Qobuz, and Roon support
- HDMI eARC performs reliably in real-world use
- Dirac Live adds meaningful room correction capability
- Compact design with flexible placement options
- Excellent system-building platform for 2.0 or 2.1 setups
Cons:
- No built-in phono stage
- Limited analog connectivity
- Slightly rounded treble may not appeal to detail-focused listeners
- App-dependent control with no included remote
Investigating 3D-printed metals for aeronautical engineering
UL’s Dr Kyriakos Kourousis discusses his current research in metal additive manufacturing and the work of the Metal Plasticity and Additive Manufacturing Group at UL.
Dr Kyriakos Kourousis is an associate professor in aeronautical engineering at University of Limerick (UL), as well as director of postgraduate research and education for the university’s Faculty of Science & Engineering. He also leads UL’s Metal Plasticity and Additive Manufacturing Group.
Kourousis joined UL’s School of Engineering 12 years ago, and before his career in academia, he spent more than a decade as an aeronautical engineer in the Hellenic Air Force working on aircraft maintenance, airworthiness and structural integrity – experience that he says now shapes his research and teaching.
At UL, he teaches topics around aircraft systems, the airworthiness of aircraft and the practical engineering behind them.
In terms of his current research, Kourousis says his work focuses on two things: how metals behave when they are loaded in a repeated way, leading to permanent deformation – “what engineers call metal plasticity” – and how to make and trust 3D‑printed metal parts (metal additive manufacturing), “especially for those loading conditions that cause plasticity”.
“In simple terms, we test metals, study their microstructure, build computer models that predict how they’ll perform over time, and use those models to predict how permanent deformation builds up during their operation,” he tells SiliconRepublic.com.
“Localised permanent deformation (plasticity) is the origin of fatigue in metals. My work is both on traditional metals and 3D‑printed ones.”
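The “computer models” Kourousis describes are constitutive equations fitted to test data. As an illustration of the genre only, and not necessarily the specific formulation the UL group uses, the classic Ramberg–Osgood relation splits total strain into an elastic part and a plastic part:

```latex
% Total strain = elastic part + plastic part (Ramberg–Osgood relation)
% sigma: stress; E: Young's modulus; K, n: constants fitted to test data
\varepsilon \;=\; \frac{\sigma}{E} \;+\; \left(\frac{\sigma}{K}\right)^{1/n}
```

Fatigue-oriented plasticity models build on relations like this by adding hardening rules that track how the stress–strain loop evolves cycle by cycle, which is exactly the “permanent deformation builds up during operation” behaviour described above.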
Here, Kourousis tells us about his work and provides a look into the world of 3D-printed materials and aeronautical engineering.
Why is your research important?
As 3D‑printed metal parts move from prototypes to real aircraft and machinery, we need to predict their behaviour with confidence. Experimental data and models help engineers design parts that won’t crack or fail early, and help industry and regulators build the evidence needed for certification. In short, better predictions mean safer, lighter, more efficient products.
Also, from a sustainability point of view, the use and reuse of powder in metal additive manufacturing offers an important advantage over other (traditional) manufacturing processes. However, with each reuse cycle, the recycled powder changes its synthesis and overall ‘quality’, which can have an effect on the produced parts, especially in terms of their plasticity behaviour.
What has been the most surprising/interesting realisation or discovery you’ve uncovered as part of this research?
One key finding is how directional 3D‑printed metals can be and what causes this directionality. For example, we showed that changing the build orientation and the post-3D printing processing of steel parts via heat treatments can noticeably change how they stretch and yield. We saw similar effects in 3D-printed titanium, in particular Ti‑6Al‑4V, which is widely used in the aerospace and biomedical industries.
We’ve also found that even lower‑cost metal 3D printing routes (like material‑extrusion/fused filament fabrication) show clear links between print settings and mechanical performance, useful for small/medium companies exploring affordable metal additive manufacturing.
What are some common misconceptions of your research area?
3D‑printed metals aren’t ‘just like’ traditional (wrought) metals. The layer‑by‑layer process creates a directional ‘grain’, so properties change with build direction, clearly shown in our work on steel and titanium. Process signatures matter. Printing can leave tiny pores (lack‑of‑fusion or keyhole) and locked‑in residual stresses; tuning scan strategy and energy helps, but these features still drive plasticity and fatigue if not managed.
An interesting debate I have with colleagues working in material science is that 3D-printed material may appear as having uniform features in the microscale, but the higher scale defects caused by the melting-solidification and re-melting can lead to a quite non-homogeneous part with differing mechanical properties at different loading directions (mechanical anisotropy).
Post‑processing can close the loop. Ageing/stress‑relief and especially hot isostatic pressing (HIP) homogenise the microstructure and seal pores, boosting ductility and fatigue, though outcomes depend on the as‑built quality and the budget available. A key target for the manufacturing industry is to make 3D printing not only accurate and consistent but also affordable, and we see that there is more work that has to be done there.
What has been the most significant development in your field since you started your academic career?
The big shift is the coming‑together of accessible metal 3D‑printing equipment with advanced, physics‑based modelling.
At UL, a milestone was obtaining a GE Concept Laser Mlab Cusing R metal 3D printer through a GE Additive award. Unlike other institutions in Ireland, our 3D printer is hosted within an industrial environment, through a collaborative agreement with our partner, Croom Medical. Our students and researchers can test ideas under realistic conditions, while both UL and Croom Medical leverage the advantages of this strategic partnership.
Can you tell me a bit about the Metal Plasticity and Additive Manufacturing Group at UL?
Our research group leads the metal additive manufacturing research activity in UL.
Our work is built around two main strands: metal plasticity modelling, where we turn lab data into reliable models of how metals actually deform; and metal additive manufacturing, where we study and improve metals such as titanium and steel, translating the results into practical build and heat‑treatment guidelines. Current projects and student work span physics‑informed yield prediction for steel 316L, laser powder bed fusion (the most widely used additive manufacturing method for metals) process optimisation, and corrosion-cyclic plasticity topics for aerospace‑grade alloys.
An interesting recent work involved showing that, by carefully retuning laser power, scan speed and hatch spacing, we can shift from the usual thin‑layer settings to much thicker layers in laser powder bed fusion of aerospace‑grade titanium, while keeping the process stable and parts dense. Led by one of our doctoral researchers who also works with Croom Medical, the study showed that those thicker‑layer builds delivered strength and ductility on a par with conventional settings, indicating that productivity can rise without an automatic hit to material performance.
Most importantly, after standard vacuum heat treatment and hot‑isostatic pressing, the parts satisfied the relevant industry standards, pointing to a practical path to higher throughput that still fits certification expectations.
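The productivity claim comes down to simple process arithmetic. A rough sketch with made-up parameter values (nothing here is from the study itself): the volumetric build rate in laser powder bed fusion is scan speed × hatch spacing × layer thickness, while the commonly used energy-density metric E = P/(v·h·t) shows why laser power and scan speed must be retuned when the layer gets thicker.

```python
# Illustrative LPBF process arithmetic. All parameter values below are
# assumptions for demonstration, not figures from the UL/Croom Medical work.

def volumetric_energy_density(power_w, speed_mm_s, hatch_mm, layer_mm):
    """E = P / (v * h * t), in J/mm^3 -- a common LPBF process metric."""
    return power_w / (speed_mm_s * hatch_mm * layer_mm)

def build_rate(speed_mm_s, hatch_mm, layer_mm):
    """Volumetric build rate v * h * t, in mm^3/s."""
    return speed_mm_s * hatch_mm * layer_mm

# Thin-layer baseline vs a thicker-layer variant with retuned power and speed
thin = volumetric_energy_density(200, 1000, 0.10, 0.03)   # ~66.7 J/mm^3
thick = volumetric_energy_density(300, 750, 0.10, 0.06)   # ~66.7 J/mm^3
print(thin, thick)
print(build_rate(1000, 0.10, 0.03), build_rate(750, 0.10, 0.06))
```

Doubling the layer thickness at fixed settings would halve the energy density and risk lack-of-fusion porosity; retuning power and speed keeps the energy density in the stable window while the build rate still rises (here from 3.0 to 4.5 mm³/s).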
UK warns 2G shutdown could leave older devices offline by 2033
In newly issued guidance, UK officials outlined the timeline for shutting down legacy mobile infrastructure. Operators have already switched off 3G services, and 2G is set to follow between 2029 and 2033. Users are being urged to prepare ahead of time, as not all devices will make the transition intact.
You Can Skip a Lot of Amazon’s Spring Sale, but Don’t Skip This Travel Upgrade
The WIRED Reviews Team has been covering Amazon’s Big Spring Sale since it began on Wednesday, and the overall deals have been … not great, honestly. So far, we’ve found decent markdowns on vacuums, smart bird feeders, and even an air fryer we love, but I just saw that Cadence Capsules, those colorful magnetic containers you may have seen on your social media pages, are 20 percent off. (For reference, the last time I saw them on sale, they were a measly 9 percent off.)
If you’re not familiar, they allow you to decant your full-sized personal care products you use at home—from shampoo and sunscreen to serums and pills—into a labeled, modular system of hexagonal containers that are leak-proof, dishwasher safe, and stick together magnetically in your bag or on a countertop. No more jumbled, travel-sized toiletries and leaky, mismatched bottles and tubes.
Cadence Capsules have garnered some grumbling online for being overly heavy or leaking, but I’ve been using them regularly for about a year—I discuss decanting your daily-use products in my guide to How to Pack Your Beauty Routine for Travel—and haven’t experienced any leaks. They do add weight if you’re trying to travel super-light, and because they’re magnetic, they will also stick to other metal items in your toiletry bag, like bobby pins or other hair accessories. This can be annoying, especially if you’re already feeling chaotic or in a hurry.
Otherwise, Capsules are modular, convenient, and make you feel supremely organized—magnetic, interchangeable inserts for the lids come with permanent labels like “shampoo,” “conditioner,” “cleanser,” and “moisturizer.” Maybe you love this; maybe you don’t. But at least if you buy on Amazon, you can choose which label genre you get (Haircare, Bodycare, Skincare, Daily Routine). If this just isn’t your jam, the Cadence website offers a set of seven that allows you to customize the color and lid label of each Capsule, but that set is not currently on sale.
Saturn’s Rings and Storms Stand Out in Combined Webb and Hubble Telescope Views

Astronomers have just released what may be the sharpest views of Saturn ever captured, courtesy of the Hubble and James Webb space telescopes working in tandem. One image was taken in visible light and is breathtaking on its own, while the other, captured in infrared, pulls back the curtain on an entirely different layer of detail across the planet’s clouds, rings, and poles.
Hubble captured its image on August 22nd during a routine weather monitoring sweep of the outer planets. Bands of clouds wrap around the globe with subtle shifts in tone where sunlight catches the upper atmosphere, and the rings cast long shadows across the planet’s face at that particular angle. Three of Saturn’s smaller moons, Janus, Mimas, and Epimetheus, sit quietly at the edges of the frame, adding a sense of scale to an already striking image.
The James Webb Space Telescope returned to the same spot a few months later on November 29th, this time with its near-infrared camera. The rings respond brilliantly to infrared light, the water ice within them practically glowing in the exposure. The narrow outer F ring shows up with crisp definition alongside the broader B ring, which carries subtle spoke-like structures that are easy to miss at first glance. The wider field of view also reveals six of Saturn’s larger moons, including Titan off to one side and Dione and Enceladus sitting remarkably close together.

The two images were taken 14 weeks apart, during a period when Saturn was slowly approaching its 2025 equinox. The northern hemisphere is easing out of summer while the south is just beginning its transition into spring, and that gradual seasonal shift gives astronomers a rare window to track how the planet’s clouds, rings, and atmospheric features evolve over the coming decade.

Hubble’s visible light image captures Saturn’s surface and the cloud formations that scientists have been studying for decades, but Webb’s infrared view goes considerably deeper, revealing cloud structures and atmospheric compounds at multiple levels, from the dense lower layers all the way up to the thin air at the top. Together the two images give researchers something far more powerful than either could provide alone, allowing them to study the atmosphere in layers rather than as a single flat snapshot.

The Webb image reveals a wavy jet stream cutting across the northern mid-latitudes, bent by atmospheric waves churning beneath it. Further south, a handful of small storms dot the lower hemisphere, one of which appears to be the final remnant of the enormous storm system that raged for years after it first appeared in 2010. Over in the Hubble image, the famous north pole hexagon is faintly visible: the six-sided wind pattern that has persisted since the 1980s and shows no signs of fading yet, though it will eventually disappear as Saturn’s north pole descends into a 15-year winter by the 2040s.

The poles in the infrared image take on a grey-green tint that scientists believe could be caused by high-altitude aerosols or charged particles connected to auroral activity around Saturn’s magnetic field, details that are simply invisible in visible light. The rings tell their own story across both images as well. Visible light shows their structure and the shadows they cast across the planet’s surface, while infrared highlights just how reflective the ice particles within them are, making the entire ring system pop against the darkness of space. Subtle differences between the two images also reflect the different viewing angles and wavelengths each telescope works with, adding another layer of information for researchers to work through.
Intercom’s new post-trained Fin Apex 1.0 beats GPT-5.4 and Claude Sonnet 4.6 at customer service resolutions
Intercom is taking an unusual gamble for a legacy software company: building its own AI model.
The massive, 15-year-old customer service platform, based in Dublin, Ireland, announced Fin Apex 1.0 on Thursday: a small, purpose-built AI model that the company claims outperforms leading frontier models from OpenAI and Anthropic on the metrics that matter most for customer support.
The model powers Intercom’s existing Fin AI agent, which already handles over one million customer conversations weekly.
According to benchmarks shared with VentureBeat, Fin Apex 1.0 achieves a 73.1% resolution rate—the percentage of customer issues fully resolved without human intervention—compared to 71.1% for both GPT-5.4 and Claude Opus 4.5, and 69.6% for Claude Sonnet 4.6. That roughly 2 percentage point margin may sound modest, but it’s wider than the typical gap between successive generations of frontier models.
“If you’re running large service operations at scale and you’ve got 10 million customers or a billion dollars in revenue, a delta of 2% or 3% is a really large amount of customers and interactions and revenue,” Intercom CEO Eoghan McCabe told VentureBeat in a video call interview earlier this week.
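McCabe’s point scales linearly, and it is easy to sanity-check with the article’s own figures: the roughly one million conversations Fin already handles weekly, and the 73.1% versus 71.1% benchmark resolution rates.

```python
# Back-of-the-envelope arithmetic on the published benchmark gap.
# The conversation volume is the article's rounded figure, not Intercom data.

weekly_conversations = 1_000_000

fin_apex_rate = 0.731   # Fin Apex 1.0 resolution rate
frontier_rate = 0.711   # GPT-5.4 / Claude Opus 4.5 resolution rate

# Extra issues resolved per week with no human involved
extra_resolved = round(weekly_conversations * (fin_apex_rate - frontier_rate))
print(extra_resolved)  # 20000
```

Over a year that two-point gap works out to roughly a million additional automated resolutions, which is the scale McCabe is gesturing at.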
The model also shows significant improvements in speed and accuracy. Fin Apex delivers responses in 3.7 seconds—0.6 seconds faster than the next-fastest competitor—and demonstrates a 65% reduction in hallucinations compared to Claude Sonnet 4.6.
Perhaps most striking for enterprise buyers: it runs at roughly one-fifth the cost of using frontier models directly, and it is folded into Intercom’s existing “per-outcome” pricing structure for current customer plans.
What’s the base model? Does it even matter?
But there’s a catch. When asked to specify which base model Apex was built on—and its parameter size—Intercom declined.
“We’re not sharing the base model we used for Apex 1.0—for competitive reasons and also because we plan to switch base models over time,” a company spokesperson told VentureBeat. The company would only confirm that the model is “in the size of hundreds of millions of parameters.”
That’s a notably small model. For comparison, Meta’s Llama 3.1 ranges from 8 billion to 405 billion parameters; even efficient open-weights models like Mistral 7B dwarf the sub-billion scale Intercom describes.
Whether Apex’s performance claims hold up against that context—or whether the benchmarks reflect optimizations possible only in narrow, domain-specific applications—remains an open question.
Intercom says it learned from the backlash AI coding startup Cursor faced when critics accused the coding assistant of burying the fact that its Composer 2 model was built on fine-tuned open-weights models rather than proprietary technology. But the lesson Intercom drew may not satisfy skeptics: the company is transparent that it used an open-weights base, just not which one.
“We are very transparent that we have” used an open-weights model, the spokesperson said. Yet declining to name the model while claiming transparency is a contradiction that will likely draw scrutiny—particularly as more companies tout “proprietary” AI that amounts to post-trained open-source foundations.
Post-training as the new frontier
Intercom’s argument is that the base model simply doesn’t matter much anymore.
“Pre-training is kind of a commodity now,” McCabe said. “The frontier, if you will, is actually in post-training. Post-training is the hard part. You need proprietary data. You need proprietary sources of truth.”
The company post-trained its chosen foundation using years of proprietary customer service data accumulated through Fin, which now resolves 2 million customer queries per week. That process involved more than just feeding transcripts into a model. Intercom built reinforcement learning systems grounded in real resolution outcomes, teaching the model what successful customer service actually looks like—the appropriate tone, judgment calls, conversational structure, and critically, how to recognize when an issue is truly resolved versus when a customer is still frustrated.
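The article doesn’t disclose how those reinforcement signals are constructed, but the general shape of outcome-grounded reward design can be sketched. Everything below is a hypothetical illustration: the field names, weights, and per-turn penalty are my assumptions, not Intercom’s pipeline.

```python
# Hypothetical reward function for RL-style post-training on support
# conversations. All names and weights are illustrative assumptions only.

def reward(conversation: dict) -> float:
    """Score a finished conversation from its logged outcome.

    +1 for a confirmed resolution, -1 for escalation to a human,
    minus a small per-turn penalty so concise resolutions score higher.
    """
    base = 1.0 if conversation["resolved"] else -1.0
    return base - 0.05 * conversation["turns"]

# A resolved four-turn conversation outscores an escalated ten-turn one
print(round(reward({"resolved": True, "turns": 4}), 2))    # 0.8
print(round(reward({"resolved": False, "turns": 10}), 2))  # -1.5
```

Trajectories scored this way can then feed a standard policy-gradient or preference-optimization loop; the hard part McCabe describes is having enough genuinely labeled outcomes, including “customer still frustrated,” to score against.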
“The generic models are trained on generic data on the internet. The specific models are trained on hyper-specific domain data,” McCabe explained. “It stands to reason therefore that the intelligence of the generic models is generic, and the intelligence of the specific models is domain-specific and therefore operates in a far superior way for that use case.”
If McCabe is right that the magic is entirely in post-training, the reluctance to name the base becomes harder to justify. If the foundation is truly interchangeable, what competitive advantage does secrecy protect?
A $100 million bet paying off
The announcement comes as Intercom’s AI-first pivot appears to be working. Fin is approaching $100 million in annual recurring revenue and growing at 3.5x, making it the fastest-growing segment of the company’s $400 million ARR business. Fin is projected to represent half of Intercom’s total revenue early next year.
That trajectory represents a remarkable turnaround. When Fin launched, its resolution rate was just 23%. Today it averages 67% across customers, with some large enterprise deployments seeing rates as high as 75%.
To make this happen, Intercom grew its AI team from roughly 6 researchers to 60 over the past three years—a significant investment for a company that McCabe admits was “in a really bad place” before its AI pivot. The average growth rate for public software companies sits around 11%; Intercom expects to hit 37% growth this year.
“We’re by far the first in the category to train our own model,” McCabe said. “There’s no one else that’s going to have this for a year or more.”
The speciation and specialization of AI
McCabe’s thesis aligns with a broader trend that Andrej Karpathy, former AI leader at Tesla and OpenAI, recently described as the “speciation” of AI models—a proliferation of specialized systems optimized for narrow tasks rather than general intelligence.
Customer service, McCabe argues, is uniquely suited for this approach. It’s one of only two or three enterprise AI use cases that have found genuine economic traction so far, alongside coding assistants and potentially legal AI. That’s attracted over a billion dollars in venture funding to competitors like Decagon and Sierra—and made the space, in McCabe’s words, “ruthlessly competitive.”
The question is whether domain-specific models represent a durable advantage or a temporary arbitrage that frontier labs will eventually close. McCabe believes the labs face structural limitations.
“Maybe the future is that Anthropic has a big offering of many different specialized models. Maybe that’s what it looks like,” he said. “But the reality is that I don’t think the generic models are going to be able to keep up with the domain-specific models right now.”
Beyond efficiency to experience
Early enterprise AI adoption focused heavily on cost reduction—replacing expensive human agents with cheaper automated ones. But McCabe sees the conversation shifting toward experience quality.
“Originally it was like, ‘Holy shit, we can actually do this for so much cheaper.’ And now they’re thinking, ‘Wait, no, we can give customers a far better experience,’” he said.
The vision extends beyond simple query resolution. McCabe imagines AI agents that function as consultants—a shoe retailer’s bot that doesn’t just answer shipping questions but offers styling advice and shows customers how different options might look on them.
“Customer service has always been pretty shit,” McCabe said bluntly. “Even the very best brands, you’re left waiting on a call, you’re bounced around different departments. There’s an opportunity now to provide truly perfect customer experience.”
Pricing and availability
For existing Fin customers, the upgrade to Apex comes at no additional cost. Intercom confirmed that customer pricing remains unchanged—users continue to pay per outcome as before, at $0.99 per resolved interaction, and automatically benefit from the new model.
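Per-outcome pricing is simple enough to state in a few lines. A minimal sketch: the $0.99 rate is from the article, while the monthly volume below is an assumed example.

```python
# Per-outcome billing as described in the article: customers pay only for
# resolved interactions. The example volume is an assumption.

PRICE_PER_RESOLUTION = 0.99  # USD per resolved interaction

def monthly_bill(resolved_interactions: int) -> float:
    """Unresolved or escalated conversations cost nothing under this model."""
    return resolved_interactions * PRICE_PER_RESOLUTION

# e.g. a team whose Fin deployment resolves 5,000 issues in a month
print(f"${monthly_bill(5_000):,.2f}")  # $4,950.00
```

The design choice mirrors the benchmark framing: because revenue tracks resolutions rather than seats or messages, every point of resolution rate the model gains is billable outcome volume.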
Apex is not available as a standalone model or through an external API. It is accessible only through Fin, meaning businesses cannot license the model independently or integrate it into their own products. That constraint may limit Intercom’s ability to monetize the model beyond its existing customer base—but it also keeps the technology proprietary in a practical sense, regardless of what the underlying base model turns out to be.
What’s next
Intercom plans to expand Fin beyond customer service into sales and marketing—positioning it as a direct competitor to Salesforce’s Agentforce vision, which aims to provide AI agents across the customer lifecycle.
For the broader SaaS industry, Intercom’s move raises uncomfortable questions. If a 15-year-old customer service company can build a model that outperforms OpenAI and Anthropic in its domain, what does that mean for vendors still relying on generic API calls? And if “post-training is the new frontier,” as McCabe insists, will companies claiming breakthroughs face pressure to show their work—or continue hiding behind competitive secrecy while touting transparency?
McCabe’s answer to the first question, laid out in a recent LinkedIn post, is stark: “If you can’t become an agent company, your CRUD app business has a diminishing future.”
The answer to the second remains to be seen.