The WIRED Reviews Team has been covering Amazon’s Big Spring Sale since it began on Wednesday, and the overall deals have been … not great, honestly. So far, we’ve found decent markdowns on vacuums, smart bird feeders, and even an air fryer we love, but I just saw that Cadence Capsules, those colorful magnetic containers you may have seen on your social media feeds, are 20 percent off. (For reference, the last time I saw them on sale, they were a measly 9 percent off.)
If you’re not familiar, they let you decant the full-sized personal care products you use at home—from shampoo and sunscreen to serums and pills—into a labeled, modular system of hexagonal containers that are leak-proof, dishwasher safe, and stick together magnetically in your bag or on a countertop. No more jumbled, travel-sized toiletries or leaky, mismatched bottles and tubes.
Cadence Capsules have garnered some grumbling online for being overly heavy or leaking, but I’ve been using them regularly for about a year—I discuss decanting your daily-use products in my guide to How to Pack Your Beauty Routine for Travel—and haven’t experienced any leaks. They do add weight if you’re trying to travel super-light, and because they’re magnetic, they will also stick to other metal items in your toiletry bag, like bobby pins or other hair accessories. This can be annoying, especially if you’re already feeling chaotic or in a hurry.
Otherwise, Capsules are modular, convenient, and make you feel supremely organized—magnetic, interchangeable inserts for the lids come with permanent labels like “shampoo,” “conditioner,” “cleanser,” and “moisturizer.” Maybe you love this; maybe you don’t. But at least if you buy on Amazon, you can choose which label genre you get (Haircare, Bodycare, Skincare, Daily Routine). If this just isn’t your jam, the Cadence website offers a set of seven that allows you to customize the color and lid label of each Capsule, but that set is not currently on sale.
With the rapid rise of autonomous agents like OpenClaw and Anthropic’s Claude Work, along with the wide range of opinions about their impact on the future of work, it is not surprising to see renewed interest in workplace PCs. Add to that Intel’s recent release of commercial vPro versions of…
Just when the sub-$1,000 streaming amplifier category had turned into a predictable arms race of inputs, outputs, and firmware promises, along came the Marantz Model M1 with that unmistakable Marantz swagger, now backed by HEOS multi-room integration and Dirac Live room correction to give it some real-world muscle. Sure, the WiiM Amp Ultra and Eversolo Play might dazzle you with more HDMI ports, coaxial inputs, and firmware update promises than a Tesla—but do they offer this much soul? Doubtful.
Here’s the part nobody in the industry really wants to say out loud. The future isn’t being decided in six-figure listening rooms with Italian racks and cables that cost more than your first car. It’s being decided in apartments, offices, and living rooms where people want one box, real performance, and no drama.
The question is whether the industry actually leans into that shift or keeps pretending the old model still scales. Brands like Fosi, WiiM, Bluesound, NAD, Denon, Marantz, Yamaha, and Cambridge Audio clearly see where the market is going. Others? Still chasing a shrinking pool of traditional audiophiles with very deep pockets and very finite patience.
Marantz, to its credit, is covering both ends of the spectrum. The Model M1 reflects where the market is heading, while the Model 10 represents its high-end ambitions; the latter is one of the better implementations of Class D amplification we’ve seen, even if the price puts it out of reach for most buyers. Between those two sits a full range of AVRs and stereo receivers that bridge the gap and make a lot more sense for how people actually build systems today.
Marantz Model M1
Marantz Model M1 Features and Connectivity: Fewer Ports, More Purpose
The Marantz Model M1 is designed as a compact, all-in-one streaming amplifier that simplifies system building without stripping away capability. Rated at 100 watts per channel into 8 ohms with very low distortion, it has enough power to drive a wide range of bookshelf and smaller floorstanding speakers—within reason, of course.
The inclusion of a dedicated subwoofer output with adjustable crossover and ±15dB level trim adds real flexibility for 2.1 setups, allowing for proper integration rather than guesswork.
Unlike traditional integrated amplifiers that juggle analog and digital signal paths, the M1 operates as a digital-first platform. It supports high resolution PCM up to 24-bit/192 kHz and DSD playback, handling content from streaming services, network storage, or direct USB input with consistency. This approach keeps the signal path clean and controlled, which aligns with Marantz’s goal of delivering a more refined and stable sonic presentation rather than chasing raw specification extremes.
Marantz Model M1
Connectivity is focused but practical. Wireless options include Bluetooth, AirPlay 2, Qobuz Connect, TIDAL Connect, and Spotify Connect, while HEOS provides the backbone for multi-room audio with support for up to 32 zones. HEOS also enables integration with home control systems such as Control4, URC, and Crestron, making the M1 viable in both simple and more complex installations.
It also works as a Roon player, although that requires an active Roon subscription and a Roon Core running on your network. The Core acts as the media server and can be hosted on a computer, NAS drive, or other compatible hardware.
For TV integration, HDMI eARC allows the M1 to function as a legitimate soundbar alternative with proper stereo imaging and significantly better amplification. Volume and power control can be handled directly through the TV remote, and the unit can be tucked out of sight without losing usability thanks to full app control and IR learning capability for third-party remotes.
One limitation worth noting is the lack of a built-in phono stage. Vinyl playback requires either a turntable with a built-in preamp or an external phono stage connected to the analog input. It’s a deliberate omission that reinforces the M1’s digital-first identity, but one that analog-focused users will need to plan around.
Onboard Dolby Digital+ decoding supports the audio codecs commonly used by broadcast and streaming TV services, making the Model M1 a viable upgrade over a typical soundbar. Additional options include Dialogue Enhancer for clearer vocals and a Virtual mode that uses Dolby processing to create a more immersive sound field from stereo content.
The Model M1 can also be paired with additional units for multi-room or expanded system setups, and its compact chassis allows two units to fit side-by-side in a standard 19-inch equipment rack if needed.
Cooling is handled through passive thermal management, so there are no fans to introduce noise or potential failure points. Combined with threaded mounting points on the bottom panel, this allows the amplifier to be installed cleanly on a wall bracket or inside cabinetry without concerns about heat buildup.
The Model M1 measures 8-9/16 inches wide, 3-3/8 inches high, and 9-15/16 inches deep, weighs 4.84 pounds, and includes a 5-year warranty.
Building a System Around the Marantz Model M1
This is where things get practical. The goal here isn’t to be cheap; it’s to be smart. There’s a difference. Chasing the lowest price usually ends with compromises you can hear five minutes into your first album. The better play is finding speakers that won’t wreck your bank account (let’s be honest, gas and electric bills are already doing a fine job of that) but still deliver real synergy with the M1 without forcing you into endless EQ tweaks.
That matters more than ever with a product like this. The Model M1 has the control and resolution to expose mismatches, but it’s also forgiving enough to reward a well-balanced pairing. You may not even need a subwoofer depending on your room size and speaker choice, which simplifies things even further. And now that Dirac Live room correction is part of the equation, you’ve got a tool that can actually address room issues that used to derail setups like this. Not a miracle cure, but a serious advantage if you use it properly.
I rotated through the DALI Kupid, Q Acoustics 3020c, Acoustic Energy AE100 MK2, and stepped up to the Wharfedale Diamond 12.3 and Q Acoustics 5040 floorstanders to see how far the M1 could stretch without things getting stupid.
The goal wasn’t to build some aspirational system that lives on a dealer floor. I kept the ceiling under $3,000 for a straightforward two-channel setup, and around $5,000 if you add a turntable or a compact subwoofer. Real-world money. Real-world rooms. The kind of systems people actually use in a den, living room, or bedroom without needing a second mortgage or a dedicated listening shrine.
For some people, the first question is obvious: can this small box actually drive medium to higher-sensitivity floorstanding speakers, or is that pushing it? The answer is yes—with some limits. It comes down to how loud you listen and how much space you’re trying to fill.
In my setup, both the Wharfedale Diamond 12.3 and Q Acoustics 5040 proved to be very workable pairings, but placement matters. These aren’t speakers you shove against a wall and forget about. They need roughly 2 to 3 feet of space behind them and at least 2 feet from the side walls to open up properly.
Give them that breathing room and they reward you with excellent imaging and a presentation that pulls away from the cabinets. The soundstage stretches wide, with a convincing sense of height, and both models do a very good job of disappearing when everything is dialed in correctly.
Darkness on the Edge of Town?
From a tonal perspective, the M1 leans slightly to the dark side of the Force, but not at the expense of clarity, speed, or overall presence. It’s not veiled or slow—it just carries more weight and density through the midrange and bass. Compared to something like the WiiM Ultra, the difference is obvious. The M1 delivers more texture and physicality, while the WiiM chases a bit more sparkle and top-end detail. The Marantz never comes across as thin or clinical.
If you’re familiar with Audiolab’s integrated and streaming amps, this goes in the opposite direction. Audiolab tends to run cool, clean, and very controlled, sometimes to the point of feeling a little detached. The M1 adds body, more impact down low, and a sense of drive that makes music feel less polite. You do give up some resolution and edge definition in the bass compared to Audiolab, but the trade-off is a more engaging and substantial presentation.
That character really shows itself with electronic music. Deadmau5, Boards of Canada, Aphex Twin, Kraftwerk, Tangerine Dream: the M1 hits harder and fills in the space between notes in a way that feels more physical. It’s less about precision and more about momentum. Think thick Crayola markers versus ultra-fine ink pens. The Audiolab and WiiM draw cleaner lines, but the Marantz isn’t afraid to color outside them, and for this kind of music, that’s exactly the right move.
Switching over to vocals, the M1 keeps that same tonal balance intact. Male vocals come through with solid texture and weight, sitting slightly forward without sounding pushed. There’s a fullness here that works well with most recordings, but the speaker pairing makes a noticeable difference. I preferred vocals through the Q Acoustics 5040 over the Wharfedale Diamond 12.3; the 5040 offers better resolution and cleaner lower midrange detail, which gives voices more definition without thinning them out.
Sam Cooke, Elvis, Nick Cave, Jason Isbell, and John Prine all came across smooth and grounded. For some listeners, that might tip a bit too far into “safe,” depending on the speaker. Nick Cave in particular benefited from the added weight, but I missed a bit of the edge and growl that defines his delivery. The M1 doesn’t strip away character, but it does round things off slightly.
Left to right: Acoustic Energy AE100 MK2, DALI Kupid, Q Acoustics 3020c bookshelf speakers
Bookshelf Speakers and the Marantz M1: Where Synergy Wins
The bookshelf choices here weren’t random. The DALI Kupid, Q Acoustics 3020c, and Acoustic Energy AE100 MK2 were picked with a specific goal in mind: maximize performance without turning the room into an equipment shrine. These are the kinds of speakers that can live on proper stands or sit cleanly on a credenza under a TV and still deliver a convincing, full-range experience.
To make that work, they had to check a few non-negotiable boxes: real presence, enough impact to carry both music and movie soundtracks, strong imaging, and a soundstage that doesn’t collapse the second you move off-axis. This isn’t about chasing perfection, which isn’t realistic at this price point; it’s about building a system that actually works in a real room, with real constraints, and still sounds like you didn’t cut corners.
For a deeper look at all three, you can check out my shoot-out results, but the short version is that each brings something worthwhile to the table with the M1. The Q Acoustics 3020c is the most complete of the group, offering more output, a wider soundstage, and better overall resolution. The Acoustic Energy AE100 MK2 trades some of that refinement for greater low-end presence and a punchier upper bass and lower midrange, which gives it more weight with rock and electronic tracks.
The DALI Kupid is the most lively of the three, with a more energetic top end that adds air and sparkle without tipping into harshness. That’s not an accident; DALI has a long track record of getting tweeter design right, and it shows here. It’s open and engaging, but never brittle. That said, its U.S. pricing feels a bit ambitious given its size and low-end extension, especially when compared to how it’s positioned in other markets.
So what would I actually buy? Having lived with both pairs of floorstanders, along with the Q Acoustics 3020c and Acoustic Energy AE100 MK2, it’s a lot easier to sort through what works and what doesn’t. On the floorstanding side, I’d lean toward the Q Acoustics 5040—but with a clear condition. Keep them in a reasonably sized room. My den in New Jersey (16 x 13 x 9 feet), the home office I’m converting (21 x 13 x 9 feet), and my Florida setup (15 x 12 x 9 feet) are all good examples of spaces where speakers like the 5040 or Wharfedale Diamond 12.3 make sense. They fill the room without overloading it with bass or turning placement into a constant battle.
On the bookshelf side, I tend to favor the DALI and Q Acoustics pairings for their balance of clarity, imaging, and overall ease of placement. They’re the safer choices if you want something that just works across music, TV, and movies. But if you’re after more low-end weight and a stronger push through the upper bass and lower mids, the Acoustic Energy AE100 MK2 is the sleeper here. It doesn’t get talked about enough. The pacing is excellent, it has real punch for its size, and it looks far more expensive than it has any right to.
But what about HEOS control? That’s going to matter more than anything for a lot of people. In my case, it’s pretty straightforward. I use TIDAL and Qobuz almost exclusively, so having access to TIDAL Connect and Qobuz integration is what I actually care about. Roon isn’t part of the equation anymore. I sold my Nucleus and haven’t looked back. With a 2TB drive on the network holding more than 1,900 CDs ripped to FLAC, I already have everything I need locally without adding another layer of software into the chain.
Before wrapping things up, I also tested the M1 with HDMI eARC across all three of my TVs in New Jersey using a QED cable. No drama. It locked in immediately with no handshake issues, and control worked exactly as expected. Movies and TV were an immediate upgrade. “Landman,” “The Madison” on Paramount+, and even NHL games all benefited from the added scale, clarity, and tonal weight. It’s not even a fair fight compared to internal TV speakers or most of the soundbars I’ve used. I’ll take a proper stereo soundstage and believable dynamics over fake surround tricks every time.
The Bottom Line
The Marantz Model M1 doesn’t try to outgun the competition on features—and that’s the point. It delivers a cohesive, full-bodied sound with real texture, strong midrange presence, and enough power to drive the kinds of speakers people actually use in real rooms. HEOS keeps everything connected, HDMI eARC works without the usual nonsense, and Dirac Live gives you a legitimate tool to deal with room issues instead of pretending they don’t exist.
What you don’t get is just as important. No phono stage, limited analog inputs, and it’s not chasing razor-sharp treble detail or lab-grade precision. This isn’t for someone building a shrine to separates. It’s for someone who wants a clean, compact system that sounds right and doesn’t require a manual and a weekend to figure out. At $1,000, it earns its keep—and then some.
Editors’ Choice in the Network Amplifier category for those who can swing the price and plan to pair it with speakers along the lines of those tested here.
Pros:
Full-bodied, engaging sound with strong midrange and bass weight
Works well with both bookshelf and smaller floorstanding speakers
HEOS integration with built-in TIDAL Connect, Qobuz, and Roon support
HDMI eARC performs reliably in real-world use
Dirac Live adds meaningful room correction capability
Compact design with flexible placement options
Excellent system-building platform for 2.0 or 2.1 setups
Cons:
No built-in phono stage
Limited analog connectivity
Slightly rounded treble may not appeal to detail-focused listeners
UL’s Dr Kyriakos Kourousis discusses his current research in metal additive manufacturing and the work of the Metal Plasticity and Additive Manufacturing Group at UL.
Dr Kyriakos Kourousis is an associate professor in aeronautical engineering at University of Limerick (UL), as well as director of postgraduate research and education for the university’s Faculty of Science & Engineering. He also leads UL’s Metal Plasticity and Additive Manufacturing Group.
Kourousis joined UL’s School of Engineering 12 years ago, and before his career in academia, he spent more than a decade as an aeronautical engineer in the Hellenic Air Force working on aircraft maintenance, airworthiness and structural integrity – experience that he says now shapes his research and teaching.
At UL, he teaches topics around aircraft systems, the airworthiness of aircraft and the practical engineering behind them.
In terms of his current research, Kourousis says his work focuses on two things: how metals behave when they are loaded in a repeated way, leading to permanent deformation – “what engineers call metal plasticity” – and how to make and trust 3D‑printed metal parts (metal additive manufacturing), “especially for those loading conditions that cause plasticity”.
“In simple terms, we test metals, study their microstructure, build computer models that predict how they’ll perform over time, and use those models to predict how permanent deformation builds up during their operation,” he tells SiliconRepublic.com.
“Localised permanent deformation (plasticity) is the origin of fatigue in metals. My work is both on traditional metals and 3D‑printed ones.”
Here, Kourousis tells us about his work and provides a look into the world of 3D-printed materials and aeronautical engineering.
Why is your research important?
As 3D‑printed metal parts move from prototypes to real aircraft and machinery, we need to predict their behaviour with confidence. Experimental data and models help engineers design parts that won’t crack or fail early, and help industry and regulators build the evidence needed for certification. In short, better predictions mean safer, lighter, more efficient products.
Also, from a sustainability point of view, the use and reuse of powder in metal additive manufacturing offers an important advantage over other (traditional) manufacturing processes. However, with each reuse cycle, the recycled powder changes its synthesis and overall ‘quality’, which can have an effect on the produced parts, especially in terms of their plasticity behaviour.
What has been the most surprising/interesting realisation or discovery you’ve uncovered as part of this research?
One key finding is how directional 3D‑printed metals can be and what causes this directionality. For example, we showed that changing the build orientation and the post-3D printing processing of steel parts via heat treatments can noticeably change how the material stretches and yields. We saw similar effects in 3D-printed titanium, in particular Ti‑6Al‑4V, which is widely used in the aerospace and biomedical industries.
We’ve also found that even lower‑cost metal 3D printing routes (like material‑extrusion/fused filament fabrication) show clear links between print settings and mechanical performance, useful for small/medium companies exploring affordable metal additive manufacturing.
What are some common misconceptions of your research area?
3D‑printed metals aren’t ‘just like’ traditional (wrought) metals. The layer‑by‑layer process creates a directional ‘grain’, so properties change with build direction, clearly shown in our work on steel and titanium. Process signatures matter. Printing can leave tiny pores (lack‑of‑fusion or keyhole) and locked‑in residual stresses; tuning scan strategy and energy helps, but these features still drive plasticity and fatigue if not managed.
An interesting debate I have with colleagues working in materials science is that 3D-printed material may appear to have uniform features at the microscale, but the higher-scale defects caused by the melting-solidification and re-melting can lead to a quite non-homogeneous part with differing mechanical properties in different loading directions (mechanical anisotropy).
Post‑processing can close the loop. Ageing/stress‑relief and especially hot isostatic pressing (HIP) homogenise the microstructure and seal pores, boosting ductility and fatigue, though outcomes depend on the as‑built quality and the budget available. A key target for the manufacturing industry is to make 3D printing not only accurate and consistent but also affordable, and we see that there is more work that has to be done there.
What has been the most significant development in your field since you started your academic career?
The big shift is the coming‑together of accessible metal 3D‑printing equipment with advanced, physics‑based modelling.
At UL, a milestone was obtaining a GE Concept Laser Mlab Cusing R metal 3D printer through a GE Additive award. Unlike other institutions in Ireland, our 3D printer is hosted within an industrial environment, through a collaborative agreement with our partner, Croom Medical. Our students and researchers can test ideas under realistic conditions, while both UL and Croom Medical leverage the advantages of this strategic partnership.
Can you tell me a bit about the Metal Plasticity and Additive Manufacturing Group at UL?
Our research group leads the metal additive manufacturing research activity in UL.
Our work is built around two main strands: metal plasticity modelling, where we turn lab data into reliable models of how metals actually deform; and metal additive manufacturing, where we study and improve metals such as titanium and steel, translating the results into practical build and heat‑treatment guidelines. Current projects and student work span physics‑informed yield prediction for steel 316L, laser powder bed fusion (the most widely used additive manufacturing method for metals) process optimisation, and corrosion-cyclic plasticity topics for aerospace‑grade alloys.
An interesting recent work involved showing that, by carefully retuning laser power, scan speed and hatch spacing, we can shift from the usual thin‑layer settings to much thicker layers in laser powder bed fusion of aerospace‑grade titanium, while keeping the process stable and parts dense. Led by one of our doctoral researchers who also works with Croom Medical, the study showed that those thicker‑layer builds delivered strength and ductility on a par with conventional settings, indicating that productivity can rise without an automatic hit to material performance.
Most importantly, after standard vacuum heat treatment and hot‑isostatic pressing, the parts satisfied the relevant industry standards, pointing to a practical path to higher throughput that still fits certification expectations.
In newly issued guidance, UK officials outlined the timeline for shutting down legacy mobile infrastructure. Operators have already switched off 3G services, and 2G is set to follow between 2029 and 2033. Users are being urged to prepare ahead of time, as not all devices will make the transition intact.
The Supreme Court tossed out a billion-dollar verdict against an internet service provider (ISP) on Wednesday, in a closely watched case that could have severely damaged many Americans’ access to the internet if it had gone the other way.
Wednesday’s decision in Cox Communications v. Sony Music Entertainment is part of a broader pattern. It is one of a handful of recent Supreme Court cases that threatened to break the internet — or, at least, to fundamentally harm its ability to function as it has for decades. In each case, the justices took a cautious and libertarian approach. And they’ve often done so by lopsided margins. All nine justices joined the result in Cox, although Justices Sonia Sotomayor and Ketanji Brown Jackson criticized some of the nuances of Justice Clarence Thomas’s majority opinion.
Some members of the Court have said explicitly that this wary approach stems from a fear that they do not understand the internet well enough to oversee it. As Justice Elena Kagan said in a 2022 oral argument, “we really don’t know about these things. You know, these are not like the nine greatest experts on the internet.”
Thomas’s opinion in Cox does a fine job of articulating why this case could have upended millions of Americans’ ability to get online. The plaintiffs were major music companies who, in Thomas’s words, have “struggled to protect their copyrights in the age of online music sharing.” It is very easy to pirate copyrighted music online. And the music industry has fought online piracy with mixed success since the Napster Wars of the late 1990s.
Before bringing the Cox lawsuit, the music company plaintiffs used software that allowed them to “detect when copyrighted works are illegally uploaded or downloaded and trace the infringing activity to a particular IP address,” an identification number assigned to online devices. The software informed ISPs when a user at a particular IP address was potentially violating copyright law. After the music companies decided that Cox Communications, the primary defendant in Cox, was not doing enough to cut off these users’ internet access, they sued.
Two practical problems arose from this lawsuit. One is that, as Thomas writes, “many users can share a particular IP address” — such as in a household, coffee shop, hospital, or college dorm. Thus, if Cox had cut off a customer’s internet access whenever someone using that customer’s IP address downloaded something illegally, it would also wind up shutting off internet access for dozens or even thousands of innocent people.
Imagine, for example, a high-rise college dormitory where just one student illegally downloads the latest Taylor Swift album. That student might share an IP address with everyone else in that building.
The other reason the Cox case could have fundamentally changed how people get online is that the monetary penalties for violating federal copyright law are often astronomical. Again, the plaintiffs in Cox won a billion-dollar verdict in the trial court. If these plaintiffs had prevailed in front of the Supreme Court, ISPs would likely have been forced into draconian crackdowns on any customer that allowed any internet users to pirate music online — because the costs of failing to do so would be catastrophic.
But that won’t happen. After Cox, college students, hospital patients, and hotel guests across the country can rest assured that they will not lose internet access just because someone down the hall illegally downloads “The Fate of Ophelia.” Thomas’s decision does not simply reject the music industry’s suit against Cox, it nukes it from orbit.
Cox, moreover, is the most recent of at least three decisions where the Court showed similarly broad skepticism of lawsuits or statutes seeking to regulate the internet.
The Supreme Court is an internet-based company’s best friend
The most striking thing about Thomas’s majority opinion in Cox is its breadth. Cox does not simply reject this one lawsuit, it cuts off a wide swath of copyright suits against internet service providers.
Thomas argues that, in order to prevail in Cox, the music industry plaintiffs would have needed to show that Cox “intended” for its customers to use its service for copyright infringement. To overcome this hurdle, the plaintiffs would have needed to show either that internet service providers “promoted and marketed their [service] as a tool to infringe copyrights” or that the only viable use of the internet is to illegally download copyrighted music.
Thomas also adds that the mere fact that Cox may have known that some of its users were illegally pirating copyrighted material is not enough to hold them liable for that activity.
As a legal matter, this very broad holding is dubious. As Sotomayor argues in a separate opinion, Congress enacted a law in 1998 which creates a safe harbor for some ISPs that are sued for copyright infringement by their customers. Under that 1998 law, the lawsuit fails if the ISP “adopted and reasonably implemented” a system to terminate repeat offenders of federal copyright law.
The fact that this safe harbor exists suggests that Congress believed that ISPs which do not comply with its terms may be sued. But Thomas’s opinion cuts off many lawsuits against defendants who do not comply with the safe harbor provision.
Still, while lawyers can quibble about whether Thomas or Sotomayor has the better reading of federal law, Thomas’s opinion was joined by a total of seven justices. And it is consistent with the Court’s previous decisions seeking to protect the internet from lawsuits and statutes that could undermine its ability to function.
In Twitter v. Taamneh (2023), a unanimous Supreme Court rejected a lawsuit seeking to hold social media companies liable for overseas terrorist activity. Twitter arose out of a federal law permitting suits against anyone “who aids and abets, by knowingly providing substantial assistance” to certain acts of “international terrorism.” The plaintiffs in Twitter claimed that social media companies were liable for an ISIS attack that killed 39 people in Istanbul, because ISIS used those companies’ platforms to post recruitment videos and other content.
Thomas also wrote the majority opinion in Twitter, and his opinion in that case mirrors the Cox decision’s view that internet companies generally should not be held responsible for bad actors who use their products. “Ordinary merchants,” Thomas wrote in Twitter, typically should not “become liable for any misuse of their goods and services, no matter how attenuated their relationship with the wrongdoer.”
Indeed, several key justices are so protective of the internet — or, at least, so cautious about interfering with it — that they’ve taken a libertarian approach to internet companies even when their own political party wants to control online discourse.
In Moody v. NetChoice (2024), the Court considered two state laws, one from Texas and one from Florida, that sought to force social media companies to publish conservative and Republican voices that those companies had allegedly banned or otherwise suppressed. As Texas’s Republican Gov. Greg Abbott said of his state’s law, it was enacted to stop a supposedly “dangerous movement by social media companies to silence conservative viewpoints and ideas.”
Both laws were blatantly unconstitutional. The First Amendment does not permit the government to force Twitter or Facebook to unban someone for the same reason the government cannot force a newspaper to publish op-eds disagreeing with its regular columnists. As the Court held in Miami Herald Publishing Co. v. Tornillo (1974), media outlets have an absolute right to determine “the choice of material” that they publish.
After Moody reached the Supreme Court, however, the justices uncovered a procedural flaw in the plaintiffs’ case that should have required them to send the case back down to the lower courts without weighing in on whether the two state laws are constitutional. Yet, while the Court did send the case back down, it did so with a very pointed warning that the US Court of Appeals for the Fifth Circuit, which had backed Texas’s law, “was wrong.”
Six justices, including three Republicans, joined a majority opinion leaving no doubt that the Texas and Florida laws violate the First Amendment. They protected the sanctity of the internet, even when it was procedurally improper for them to do so.
This Supreme Court isn’t normally so protective of institutions
One reason why the Court’s hands-off-the-internet approach in Cox, Twitter, and Moody is so remarkable is that the Supreme Court’s current majority rarely shows such restraint in other cases, at least when those cases have high partisan or ideological stakes.
In two recent decisions — Mahmoud v. Taylor (2025) and Mirabelli v. Bonta (2026) — for example, the Court’s Republican majority imposed onerous new burdens on public schools, which appear to be designed to prevent those schools from teaching a pro-LGBTQ viewpoint to students whose parents find gay or trans people objectionable. I’ve previously explained why public schools will struggle to comply with Mahmoud and Mirabelli, and why many might find compliance impossible. Neither opinion showed even a hint of the caution that the Court displayed in Cox and similar cases.
Similarly, in Medina v. Planned Parenthood (2025), the Court handed down a decision that is likely to render much of federal Medicaid law unenforceable. If taken seriously, Medina overrules decades of Supreme Court decisions shaping the rights of about 76 million Medicaid patients, including a decision the Court handed down as recently as 2023 — though it remains to be seen if the Court’s Republican majority will apply Medina’s new rule in a case that doesn’t involve an abortion provider.
The Court’s Republican majority, in other words, is rarely cautious. And it is often willing to throw important American institutions such as the public school system or the US health care system into turmoil, especially in highly ideological cases.
But this Court does appear to hold the internet in the same high regard that it holds religious conservatives and opponents of abortion. And that means that the internet is one institution that these justices will protect.
Astronomers have just released what may be the sharpest views of Saturn ever captured, courtesy of the Hubble and James Webb space telescopes working in tandem. One image was taken in visible light and is breathtaking on its own, while the other, captured in infrared, pulls back the curtain on an entirely different layer of detail across the planet’s clouds, rings, and poles.
Hubble captured its image on August 22nd during a routine weather monitoring sweep of the outer planets. Bands of clouds wrap around the globe with subtle shifts in tone where sunlight catches the upper atmosphere, and the rings cast long shadows across the planet’s face at that particular angle. Three of Saturn’s smaller moons, Janus, Mimas, and Epimetheus, sit quietly at the edges of the frame, adding a sense of scale to an already striking image.
The James Webb Space Telescope returned to the same spot a few months later, on November 29th, this time with its near-infrared camera. The rings respond brilliantly to infrared light, the water ice within them practically glowing in the exposure. The narrow outer F ring shows up with crisp definition alongside the broader B ring, which carries subtle spoke-like structures that are easy to miss at first glance. The wider field of view also reveals six of Saturn’s larger moons, including Titan off to one side and Dione and Enceladus sitting remarkably close together.
The two images were taken 14 weeks apart, during a period when Saturn was slowly approaching its 2025 equinox. The northern hemisphere is easing out of summer while the south is just beginning its transition into spring, and that gradual seasonal shift gives astronomers a rare window to track how the planet’s clouds, rings, and atmospheric features evolve over the coming decade.
Hubble’s visible light image captures Saturn’s surface and the cloud formations that scientists have been studying for decades, but Webb’s infrared view goes considerably deeper, revealing cloud structures and atmospheric compounds at multiple levels, from the dense lower layers all the way up to the thin air at the top. Together the two images give researchers something far more powerful than either could provide alone, allowing them to study the atmosphere in layers rather than as a single flat snapshot.
The Webb image reveals a wavy jet stream cutting across the northern mid-latitudes, bent by atmospheric waves churning beneath it. Further south, a handful of small storms dot the lower hemisphere, one of which appears to be the final remnant of the enormous storm system that raged for years after it first appeared in 2010. Over in the Hubble image, the famous north pole hexagon is faintly visible, the six-sided wind pattern that has persisted since the 1980s and shows no signs of fading yet, though it will eventually disappear as Saturn’s north pole descends into a 15-year winter by the 2040s.
The poles in the infrared image take on a grey-green tint that scientists believe could be caused by high-altitude aerosols or charged particles connected to auroral activity around Saturn’s magnetic field, details that are simply invisible in visible light. The rings tell their own story across both images as well. Visible light shows their structure and the shadows they cast across the planet’s surface, while infrared highlights just how reflective the ice particles within them are, making the entire ring system pop against the darkness of space. Subtle differences between the two images also reflect the different viewing angles and wavelengths each telescope works with, adding another layer of information for researchers to work through.
Intercom is taking an unusual gamble for a legacy software company: building its own AI model.
The 15-year-old customer service giant, based in Dublin, Ireland, announced Fin Apex 1.0 on Thursday: a small, purpose-built AI model that the company claims outperforms leading frontier models from OpenAI and Anthropic on the metrics that matter most for customer support.
According to benchmarks shared with VentureBeat, Fin Apex 1.0 achieves a 73.1% resolution rate—the percentage of customer issues fully resolved without human intervention—compared to 71.1% for both GPT-5.4 and Claude Opus 4.5, and 69.6% for Claude Sonnet 4.6. That roughly 2 percentage point margin may sound modest, but it’s wider than the typical gap between successive generations of frontier models.
Fin Apex 1.0 select benchmarks comparison chart. Credit: Intercom
“If you’re running large service operations at scale and you’ve got 10 million customers or a billion dollars in revenue, a delta of 2% or 3% is a really large amount of customers and interactions and revenue,” Intercom CEO Eoghan McCabe told VentureBeat in a video call interview earlier this week.
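To put that delta in concrete terms, here is a back-of-the-envelope sketch using the resolution rates cited above; the 10 million interactions figure is simply taken from McCabe’s hypothetical, not an Intercom disclosure.

```python
# Rough illustration of McCabe's point: a ~2-point resolution-rate delta at scale.
# The interaction count is hypothetical (from his quote), not a real Intercom figure.
interactions_per_year = 10_000_000

fin_apex_rate = 0.731   # Fin Apex 1.0 resolution rate cited above
frontier_rate = 0.711   # GPT-5.4 / Claude Opus 4.5 rate cited above

extra_resolved = interactions_per_year * (fin_apex_rate - frontier_rate)
print(f"~{extra_resolved:,.0f} additional issues resolved without a human each year")
# -> ~200,000
```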
The model also shows significant improvements in speed and accuracy. Fin Apex delivers responses in 3.7 seconds—0.6 seconds faster than the next-fastest competitor—and demonstrates a 65% reduction in hallucinations compared to Claude Sonnet 4.6.
Perhaps most striking for enterprise buyers: it runs at roughly one-fifth the cost of using frontier models directly, and it is included in Intercom’s existing per-outcome pricing for current customer plans.
What’s the base model? Does it even matter?
But there’s a catch. When asked to specify which base model Apex was built on—and its parameter size—Intercom declined.
“We’re not sharing the base model we used for Apex 1.0—for competitive reasons and also because we plan to switch base models over time,” a company spokesperson told VentureBeat. The company would only confirm that the model is “in the size of hundreds of millions of parameters.”
That’s a notably small model. For comparison, Meta’s Llama 3.1 ranges from 8 billion to 405 billion parameters; even efficient open-weights models like Mistral 7B dwarf the sub-billion scale Intercom describes.
Whether Apex’s performance claims hold up against that context—or whether the benchmarks reflect optimizations possible only in narrow, domain-specific applications—remains an open question.
Intercom says it learned from the backlash AI coding startup Cursor faced when critics accused the coding assistant of burying the fact that its Composer 2 model was built on fine-tuned open-weights models rather than proprietary technology. But the lesson Intercom drew may not satisfy skeptics: the company is transparent that it used an open-weights base, just not which one.
“We are very transparent that we have” used an open-weights model, the spokesperson said. Yet declining to name the model while claiming transparency is a contradiction that will likely draw scrutiny—particularly as more companies tout “proprietary” AI that amounts to post-trained open-source foundations.
Post-training as the new frontier
Intercom’s argument is that the base model simply doesn’t matter much anymore.
“Pre-training is kind of a commodity now,” McCabe said. “The frontier, if you will, is actually in post-training. Post-training is the hard part. You need proprietary data. You need proprietary sources of truth.”
The company post-trained its chosen foundation using years of proprietary customer service data accumulated through Fin, which now resolves 2 million customer queries per week. That process involved more than just feeding transcripts into a model. Intercom built reinforcement learning systems grounded in real resolution outcomes, teaching the model what successful customer service actually looks like—the appropriate tone, judgment calls, conversational structure, and critically, how to recognize when an issue is truly resolved versus when a customer is still frustrated.
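As a rough illustration of what “reinforcement learning grounded in real resolution outcomes” can mean in practice, here is a minimal, hypothetical sketch: each conversation gets a reward based on its outcome rather than its fluency, and only high-reward trajectories are kept as post-training targets. None of the names, fields, or weights below come from Intercom; this is a generic rejection-sampling-style filter, not a description of Fin Apex’s actual pipeline.

```python
# Hypothetical sketch of outcome-grounded reward for support conversations.
# All names, fields, and weights are illustrative, not Intercom's.
from dataclasses import dataclass
from typing import List

@dataclass
class SupportConversation:
    transcript: List[str]            # alternating customer/agent turns
    resolved: bool                   # was the issue actually fixed?
    escalated_to_human: bool         # did a human agent have to step in?
    customer_still_frustrated: bool  # e.g. from a post-chat survey or classifier

def resolution_reward(conv: SupportConversation) -> float:
    """Score a conversation by its real-world outcome, not its fluency."""
    if not conv.resolved:
        return 0.0
    reward = 1.0
    if conv.escalated_to_human:
        reward -= 0.5   # partial credit: resolved, but not autonomously
    if conv.customer_still_frustrated:
        reward -= 0.3   # "resolved" on paper isn't the same as a happy customer
    return max(reward, 0.0)

def select_for_post_training(convs: List[SupportConversation], threshold: float = 0.7):
    """Keep only high-reward trajectories as fine-tuning targets.

    This is a rejection-sampling-style filter; an RL variant would instead
    use the reward to weight policy updates on the same data.
    """
    return [c for c in convs if resolution_reward(c) >= threshold]

# Toy usage: only the cleanly resolved conversation survives the filter.
convs = [
    SupportConversation(["Where is my order?", "It ships Friday."], True, False, False),
    SupportConversation(["I want a refund.", "I can't help with that."], False, True, True),
]
print(len(select_for_post_training(convs)))  # -> 1
```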
“The generic models are trained on generic data on the internet. The specific models are trained on hyper-specific domain data,” McCabe explained. “It stands to reason therefore that the intelligence of the generic models is generic, and the intelligence of the specific models is domain-specific and therefore operates in a far superior way for that use case.”
If McCabe is right that the magic is entirely in post-training, the reluctance to name the base becomes harder to justify. If the foundation is truly interchangeable, what competitive advantage does secrecy protect?
A $100 million bet paying off
The announcement comes as Intercom’s AI-first pivot appears to be working. Fin is approaching $100 million in annual recurring revenue and growing at 3.5x, making it the fastest-growing segment of the company’s $400 million ARR business. Fin is projected to represent half of Intercom’s total revenue early next year.
That trajectory represents a remarkable turnaround. When Fin launched, its resolution rate was just 23%. Today it averages 67% across customers, with some large enterprise deployments seeing rates as high as 75%.
To make this happen, Intercom grew its AI team from roughly 6 researchers to 60 over the past three years—a significant investment for a company that McCabe admits was “in a really bad place” before its AI pivot. The average growth rate for public software companies sits around 11%; Intercom expects to hit 37% growth this year.
“We’re by far the first in the category to train our own model,” McCabe said. “There’s no one else that’s going to have this for a year or more.”
The speciation and specialization of AI
McCabe’s thesis aligns with a broader trend that Andrej Karpathy, former AI leader at Tesla and OpenAI, recently described as the “speciation” of AI models—a proliferation of specialized systems optimized for narrow tasks rather than general intelligence.
Customer service, McCabe argues, is uniquely suited for this approach. It’s one of only two or three enterprise AI use cases that have found genuine economic traction so far, alongside coding assistants and potentially legal AI. That’s attracted over a billion dollars in venture funding to competitors like Decagon and Sierra—and made the space, in McCabe’s words, “ruthlessly competitive.”
The question is whether domain-specific models represent a durable advantage or a temporary arbitrage that frontier labs will eventually close. McCabe believes the labs face structural limitations.
“Maybe the future is that Anthropic has a big offering of many different specialized models. Maybe that’s what it looks like,” he said. “But the reality is that I don’t think the generic models are going to be able to keep up with the domain-specific models right now.”
Beyond efficiency to experience
Early enterprise AI adoption focused heavily on cost reduction—replacing expensive human agents with cheaper automated ones. But McCabe sees the conversation shifting toward experience quality.
“Originally it was like, ‘Holy shit, we can actually do this for so much cheaper.’ And now they’re thinking, ‘Wait, no, we can give customers a far better experience,’” he said.
The vision extends beyond simple query resolution. McCabe imagines AI agents that function as consultants—a shoe retailer’s bot that doesn’t just answer shipping questions but offers styling advice and shows customers how different options might look on them.
“Customer service has always been pretty shit,” McCabe said bluntly. “Even the very best brands, you’re left waiting on a call, you’re bounced around different departments. There’s an opportunity now to provide truly perfect customer experience.”
Pricing and availability
For existing Fin customers, the upgrade to Apex comes at no additional cost. Intercom confirmed that customer pricing remains unchanged—users continue to pay per outcome as before, at $0.99 per resolved interaction, and automatically benefit from the new model.
Apex is not available as a standalone model or through an external API. It is accessible only through Fin, meaning businesses cannot license the model independently or integrate it into their own products. That constraint may limit Intercom’s ability to monetize the model beyond its existing customer base—but it also keeps the technology proprietary in a practical sense, regardless of what the underlying base model turns out to be.
What’s next
Intercom plans to expand Fin beyond customer service into sales and marketing—positioning it as a direct competitor to Salesforce’s Agentforce vision, which aims to provide AI agents across the customer lifecycle.
For the broader SaaS industry, Intercom’s move raises uncomfortable questions. If a 15-year-old customer service company can build a model that outperforms OpenAI and Anthropic in its domain, what does that mean for vendors still relying on generic API calls? And if “post-training is the new frontier,” as McCabe insists, will companies claiming breakthroughs face pressure to show their work—or continue hiding behind competitive secrecy while touting transparency?
McCabe’s answer to the first question, laid out in a recent LinkedIn post, is stark: “If you can’t become an agent company, your CRUD app business has a diminishing future.”
The most advanced of the new models, the Ring Battery Video Doorbell Pro 2nd Gen, is priced at $249.99 and features 4K video and a 10x digital zoom. The Battery Video Doorbell Plus 2nd Gen, at $179.99, offers 2K video resolution and 6x zoom. A third model, the $99.99 Battery…
Car manufacturers often strive to give their models unique names so that they can stand out in a crowded market. These could be as simple as a string of letters and numbers to indicate a model’s position in a brand’s lineup, like Audi, BMW, and Mercedes-Benz, or they could follow a company’s traditional naming structure, like how many Lamborghini models often reference bullfighting. There are also quite a few model names that have a special meaning attached to them.
But despite their efforts to create unique names, there are a few model names that have been used by multiple manufacturers. Note that these aren’t cars that have simply been rebadged to have a different logo on their grille but have kept their model name, like the Buick/Opel Cascada. Instead, we’re looking at car names that have been used by multiple car makers that aren’t related to each other at all.
There are multiple reasons for this — one infamous example is the Pontiac GTO, which was specifically named to evoke the performance of the legendary Ferrari GTO. Another explanation for the same model names is that they’re inspired by a place or a body style, or it could be that it’s been years, if not decades, since a particular name was last used, so current buyers are unlikely to mistake it for another vehicle. But whatever the case, these are a few car names that have been used by multiple brands.
California
If you’re a fan of sports cars and you hear the “California” model name, Ferrari would likely be the first brand that would come to mind. This name was first used on a car with the prancing horse logo in 1957 when it released the 250 California and was in production until 1963. Ferrari released the 365 California in 1966, but it only produced 14 examples and was only made for a year. The Italian carmaker named this convertible after the Golden State because it wanted buyers to imagine California and its open roads, hoping to sell more examples to the American market.
Ferrari revived the California model name in 2008, this time dropping the numbers and simply calling it “California.” This model still followed the 2+2 convertible formula used by the cars that inspired it but featured a hard-top roof. The carmaker also said that this was its first V8 road car to feature a mid-front layout that delivered excellent handling and performance.
On the other side of the spectrum, Volkswagen also released its own California model in 2005. But instead of a top-down sports car, this one is a campervan based on the VW Transporter. Ironically, despite being named after a U.S. state, you can’t buy this vehicle in the United States. The closest that you can get is the VW ID. Buzz — and even though it’s a rather nice passenger van, it still doesn’t give you everything you need for camping, including a kitchen sink.
Century
The Century model name is often attributed to Toyota, which uses it for the company’s most prestigious offering, introduced in 1967. Despite being continuously sold since then, the model has only had three generations and one SUV variant — a nod to its timeless elegance, with the latest model still resembling the first one produced nearly 60 years ago. Unfortunately, you cannot get the Toyota Century in the U.S. There is another Century that you can buy, though, but it’s not as luxurious as the one from Japan.
The American model that bears the Century name comes from Buick, and it’s even older than the one from Toyota. The Buick Century arrived in 1936, mating the small body of the Buick Special with a 120-horsepower straight-eight engine from the larger Roadmaster. This gave it an excellent power-to-weight ratio, with some people calling it “the banker’s hot rod.” The American car company released the second-generation Century from 1954 to 1958, and it used the same formula as the original model. In 1973, the third-generation Century arrived, with the car company continuously producing the model until 2005 through the sixth generation.
The Buick and Toyota Century catered to completely different markets. The Toyota Century focuses on comfort and craftsmanship, with the Japanese emperor using one as his official state car. On the other hand, the Buick Century is a bit more focused on performance, with the car brand stuffing the biggest engine it could find in the smallest chassis it has in its lineup. Unfortunately, the styling of the last-generation Century was rather bland, and it also had some engine problems, landing it in our list of Buick models you should steer clear of.
GT
GT stands for Gran Turismo in Italian, or Grand Touring in English, and, in theory, GT cars are designed to travel long distances at high speeds and offer all the luxuries and creature comforts that any driver would want while behind the wheel. Because of this, many carmakers add the GT moniker to their cars to indicate a sportier or performance-focused trim of an existing model. However, a few carmakers have decided to use “GT” as an actual model name instead of using it to denote a premium variant.
One of the most popular GT models is the Ford GT, which the company first introduced in 2004 to celebrate its 100th anniversary and pay homage to the Le Mans-winning GT40. The American carmaker eventually introduced a second generation in 2017, before ending production in 2022. But even though the Ford GT technically fits the description of a grand tourer, it’s more of a supercar than a GT. A better example would be the Mercedes-AMG GT — although the 2025 model is one of the fastest AMG models ever made, it’s still quite comfortable and luxurious.
We’ve also seen other European manufacturers use the GT badge, with the Opel GT being one of the most underrated German cars that deserves more attention. This two-door coupe, made from 1968 to 1973, somewhat resembles a Chevrolet Corvette, and it also received a second-generation model from 2007 to 2010 as a rebadged Saturn Sky/Pontiac Solstice. Italian manufacturer Alfa Romeo also had a GT model from 2003 to 2010, but because it was based on a compact hatchback, it lacked the space buyers wanted from a true GT.
Monza
The Ferrari Monza is one of the most interesting models to come out of the prancing horse’s stable. This vehicle comes with several cool features that any car enthusiast and collector would love, like its “virtual windscreen” and its single carbon-fiber seat for the SP1 model. The Ferrari Monza is inspired by the brand’s racing cars from the 1950s, with its retro styling and an 809 hp 6.5-liter V12 engine designed to make the car go from 0 to 62 mph in under 3 seconds.
On the flip side, Chevrolet produced a subcompact two-door muscle car from 1975 to 1980 bearing the same name. Although this isn’t some exotic sports car that cost millions of dollars, it’s still one of the most underappreciated Chevy muscle cars you can find. Even though it shared the same platform as the Vega, the American carmaker used the profile of the Ferrari 365 GTC/4 as an inspiration for this vehicle and even named it after an Italian racetrack.
The Chevrolet Monza had quite a good run in motorsport, with the vehicle being favored by drag racers for its small size and good aerodynamics, and also winning a couple of IMSA Camel GT titles — presumably with custom or tuned engines. But because it arrived in the mid-1970s, that meant it was caught at the height of the Malaise Era. Factory models came with anemic engine options, with tests showing that it took more than 13 seconds for the Monza to hit 60 mph from a standstill and required nearly 20 seconds to finish the quarter mile.
Sebring
Although it’s not as popular as the 24 Hours of Le Mans, the 12 Hours of Sebring is still one of America’s most iconic endurance races, first held in 1952. It’s for this reason that we see two models from different carmakers sport this name — Chrysler and Maserati. The Maserati Sebring is one of the classic cars from the 1960s that no one talks about today, although it still fetches six-digit bids at auctions. The Italian carmaker released this model in 1962 to commemorate its success at the Sebring endurance race, and it was in production until 1968. Despite that relatively long run, Maserati made fewer than 600 Sebrings, making it quite a rare vehicle.
Chrysler also made its own Sebring from 1995 until 2010. But unlike the Italian Sebring, which had a limited production run, this midsize model is a mass-market car designed to compete against the likes of the Honda Accord and Toyota Camry. The Chrysler vehicle is available either as a sedan or a coupe, and you can also get the latter as a convertible if you like feeling the wind in your hair. In 2011, the American carmaker discontinued the Sebring name and replaced it with the 200, although the 200 is a heavily revised Sebring and not an all-new model.