If you spend most of your working day with a mouse in hand, the Logitech MX Vertical is one of the more practical desk upgrades you can make, and at $74.99 it’s $45 off its $119.99 list price in a limited-time deal. The vertical design isn’t a gimmick: Logitech’s own testing shows a 10% reduction in muscular activity compared to a standard mouse, and the 57° wrist angle relieves the pressure points that build up over a long day in a way a conventional horizontal mouse simply doesn’t.
What you’re getting
The MX Vertical’s 57° angle puts your hand in a natural handshake position rather than the pronated grip that standard mice require. That rotation takes the pressure off the forearm and wrist, and the dedicated thumb rest positions your thumb comfortably without any adjustment period. Logitech worked with leading ergonomists on the design criteria, which is reflected in the result: this is a mouse that was built around how the human hand actually sits rather than retrofitted with a vertical angle as an afterthought.
The 4000 DPI high-precision sensor means less physical hand movement to cover the same cursor distance, which compounds the fatigue reduction across a full working day. A cursor speed switch on the top of the MX Vertical lets you adjust DPI on the fly without diving into software settings, which is a practical detail for anyone switching between tasks that need precision and those that don’t.
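The DPI-to-hand-travel relationship is simple arithmetic: to traverse a given number of on-screen pixels, the mouse must physically move pixels divided by DPI inches (before any OS pointer acceleration). A minimal sketch, with the display width assumed for the example:

```python
# Illustrative only: physical mouse travel needed to cross a given
# number of pixels at a given sensor DPI, ignoring OS pointer
# acceleration. The 4K display width is an assumed example figure.
def hand_travel_inches(pixels: int, dpi: int) -> float:
    return pixels / dpi

ACROSS_4K = 3840  # horizontal pixels on a 4K display (assumed)
print(hand_travel_inches(ACROSS_4K, 1000))  # 3.84 inches at 1000 DPI
print(hand_travel_inches(ACROSS_4K, 4000))  # 0.96 inches at 4000 DPI
```

Quadrupling the DPI cuts the required hand movement to a quarter, which is where the fatigue saving compounds over a day.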
Multi-device support covers up to three Windows and Apple computers simultaneously, with Easy-Switch toggling between them. The rechargeable battery removes the ongoing cost of disposables, and the textured rubber surface keeps the grip secure without feeling clinical. Wireless connectivity keeps the desk clean.
Why it’s worth it
Ergonomic mice with serious sensor specs and multi-device support typically hold their price well. The MX Vertical at $74.99 brings all of that to a price that makes the upgrade decision straightforward for anyone already experiencing wrist discomfort or looking to get ahead of it, and the limited-time pricing makes this worth acting on before it moves back up.
The bottom line
The Logitech MX Vertical at $74.99 is one of the more genuinely useful desk upgrades available at this price. The vertical design, 4000 DPI sensor, and multi-device support add up to a mouse that improves how your wrist feels at the end of the day without asking you to compromise on performance, and the $45 saving makes the timing right.
Powders, gels, and fermented nutrients could someday join the battlefield menu
Eating in the field has never been fun for US Army soldiers. And they may soon face even stranger field rations than today’s: alternative proteins delivered in formats ranging from powders and sauces to gels and semi-solids.
The Army on Monday published a sources sought announcement to gather submissions from industry and academic partners in the “alternative protein sector” willing to help the branch develop rations that are lighter, last longer on the shelf, and could potentially be produced in combat-forward environments.
According to the announcement, the Army is looking for submissions covering four areas: technologies for developing alternative proteins, such as fermentation and other biomanufacturing methods; meat-alternative products for ration inclusion; consumer research seeking to “enhance the acceptability … of alternative proteins within a military population”; and food samples for government taste and performance evaluations.
As an added element, the Army said that it wants ration products that meet its existing “stringent requirements for nutrition, shelf stability, and palatability,” though anyone who has served in the US Army and eaten field rations may have doubts about the military branch’s commitment to palatability on its Meal, Ready-to-Eat (MRE).
As a US Army veteran, this vulture can attest to an unfortunate level of familiarity with MREs, circa 2002. Beef frankfurters were famously one of the worst, as was the so-called “beef steak” meal that was more like a compressed loaf of meat leavings than an actual steak. The flavor didn’t matter at the end of the day, though, when you’d just marched 15 miles carrying 75 pounds on your back: you just needed sustenance, and even that five-pack of frankfurters with a taste I shudder to recall sounded good under the right circumstances.
The MRE menu lineup, which has changed several times in the past 20 years, includes a few vegetarian options, and it’s those that make the Army’s concern about the acceptability of alternative proteins seem overcautious. Civilians might be surprised to learn how popular the non-meat meals were, even among hardcore carnivores.
The four or so vegetarian options in the overall MRE lineup were always the first to go when I was in. Not only did they replace military mystery MRE meat with something more appealing to eat out of an envelope, but they were actually tasty – relatively, of course. Vegetarian MREs also tended to be slightly less calorically dense than their animal-derived counterparts, so they included extra bits that made them an even bigger hit.
Whether that would translate into soldiers embracing alternative proteins in future MREs isn’t guaranteed, of course. Most weren’t choosing the veggie MREs out of alignment with their personal ethics so much as because they wanted a meal that didn’t suck.
The Army’s goal for the program, developing “lightweight and nutrient-dense ration solutions to reduce logistical burdens and physical load on warfighter,” is definitely a noble one. MREs get heavy quickly on a long field expedition, but the latitude the Army is leaving in the announcement suggests appetizing solutions may not be the first to emerge.
“Gel/semi-solid formats, dry powder mixes, [and] sauce-style components” are all on the table, with the Army saying the format of “novel ready-to-eat formats … is at the offeror’s discretion.”
In other words, future ration components could include gel packs stuffed with fermented mushroom protein and other nutrients, some form of unholy shake, or whatever else food scientists can come up with.
Interested parties will need to move fast, though: submissions are due by Friday, May 15. As a sources sought announcement rather than a solicitation, it carries no promise that the ideas will receive a research grant or procurement dollars, and respondents get no assistance from the government in preparing their submissions.
The submissions the Army receives could help shape future solicitations in this space, however, meaning the MRE we currently know and … love … may eventually evolve into something rather more futuristic. Hopefully it tastes a bit better.
One thing that soldiers will probably be thrilled about? No bugs in whatever field rations come next.
“We are specifically excluding solutions related to cell-cultured, lab-grown meat or insect protein,” the Army said, though we note that’s only for the purposes of this particular announcement, so tomorrow’s soldiers might still be subsisting on crickets and ants. ®
Motorola has officially confirmed the launch date for the Razr Fold in India. The company’s first book-style foldable will debut in the country on May 13 after being introduced earlier this year at CES. Here’s everything you need to know.
Motorola Razr Fold Specifications
The Razr Fold has an 8.1-inch LTPO OLED foldable screen with 2K resolution and a 120Hz refresh rate. The outer screen is a 6.6-inch OLED panel with a 165Hz refresh rate. To improve durability, Motorola uses Gorilla Glass Ceramic 3 on the cover screen and Ultra Thin Glass on the foldable panel.
Instead of a compact flip design, the phone opens like a tablet for a bigger viewing experience. The smartphone will be available in Blackened Blue and Lily White shades, as well as a FIFA World Cup 26 special edition. For performance, Motorola has used Qualcomm’s Snapdragon 8 Gen 5 chip in the Razr Fold. Buyers can choose between 12GB of RAM with 256GB of storage and 16GB of RAM with 512GB of storage; Motorola also sells a higher 1TB storage version in global markets.
Furthermore, Motorola is offering Android 16 on the Razr Fold along with its Hello UX/My UX experience. It features a desktop-style layout, trackpad support, and stylus compatibility for work and multitasking. Motorola will also provide up to seven years of software updates for the device.
Camera and Battery
Motorola has focused heavily on cameras with the Razr Fold. The smartphone includes a 50MP main sensor, a 50MP ultra-wide camera, and a 50MP periscope telephoto lens for zoom photography. Users also get separate selfie cameras on both the outer and inner displays. The foldable supports macro shots and additional camera features for different shooting situations.
One of the main highlights of the Motorola Razr Fold is its large 6,000mAh battery. The phone supports both 80W wired charging and 50W wireless charging. With such a powerful battery and fast charging support, the Razr Fold promises better battery life than most competing folding-screen smartphones.
Price and Availability
Globally, the smartphone debuted at EUR 1,999, which is around Rs 2.14 lakh. Although Indian pricing is expected to be lower, the Razr Fold will likely remain a premium foldable smartphone, competing with devices like the Samsung Galaxy Z Fold7 and the Google Pixel 10 Pro Fold. Buyers will be able to purchase the foldable through Flipkart, the company’s official website, and retail stores across the country.
Western Digital claims spinning down hard drives no longer cripples application performance
WD’s solution offers lower storage power consumption without sacrificing consistent response times
Reduced drive power usage allows more storage capacity inside existing rack limits
Western Digital (WD) has developed a new power-optimized drive technology which allows hard disks to spin down without causing major performance penalties.
The company’s Chief Product Officer, Ahmed Shihab, said the technique lowers power consumption enough to matter to customers while preserving the performance they expect.
Traditional hard drives consume significant power even when they are not being actively accessed by users or applications, and at data center scale that idle draw is not sustainable in the long run.
Spinning down drives saves power
The technique allows drives to enter a low-power state without the lengthy spin-up delays that have made such approaches impractical in the past.
When a drive spins down, it consumes far less electricity, which directly reduces the operating costs of large storage arrays.
The capacity benefit comes from a secondary effect: lower power consumption per drive means data center operators can pack more drives into the same power and cooling envelope.
Western Digital claims the performance impact of spinning drives down and back up is small enough that most applications will not notice the difference.
The company has designed the technology to be sympathetic to the software stack running above it, requiring no major changes from customers.
Previous attempts to spin down hard drives for power savings have failed because the performance hit was simply too severe for production environments.
Applications expecting sub-millisecond access times would stall while waiting for disks to spin back up to full operating speed.
Western Digital’s new formula balances power savings against accessibility, keeping the delay short enough to stay within typical application timeouts.
The company says this is the first time it has seen genuine customer interest in, and positive feedback on, lower-power drive technology.
Hyperscale operators have been asking for storage solutions that do not force them to choose between energy efficiency and reliable performance.
A new storage tier between fast and slow drives
The technology effectively creates a new storage tier that sits between high-performance SSDs and traditional archival hard drives.
Data that is accessed frequently stays on fully spun-up drives, while less critical data can be parked on drives that spin down when idle.
The operating system and storage software determine which data belongs on which tier, not the drive itself.
Western Digital’s innovation is purely on the hardware side, making spin-down practical without waiting for software to catch up.
The capacity gains come from density, not from larger platters or new recording techniques.
More drives in the same power budget means more total terabytes per rack, and that is a math problem that every data center operator understands.
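That per-rack arithmetic can be sketched directly. Every figure below (rack power budget, per-drive draw, drive capacity) is an assumption for illustration, not a Western Digital specification:

```python
# Hypothetical illustration of drives-per-rack capacity under a
# fixed power budget. All numbers are assumptions for the example,
# not Western Digital figures.
RACK_POWER_BUDGET_W = 1_200   # power allotted to drives in one rack (assumed)
DRIVE_CAPACITY_TB = 24        # capacity per drive (assumed)

def rack_capacity_tb(watts_per_drive: float) -> int:
    # How many drives fit in the power budget, and total capacity
    drives = int(RACK_POWER_BUDGET_W // watts_per_drive)
    return drives * DRIVE_CAPACITY_TB

always_on = rack_capacity_tb(9.5)  # typical active-idle draw (assumed)
spun_down = rack_capacity_tb(6.0)  # average draw with spin-down (assumed)
print(always_on, spun_down)        # more TB inside the same envelope
```

With these assumed numbers, trimming average draw from 9.5W to 6W per drive lifts the rack from 126 to 200 drives, a capacity gain of nearly 60% without touching the power or cooling budget.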
The clever part is making the spin-down cycle fast enough that no one notices, and that is where Western Digital claims to have finally solved what has been an industry-wide headache.
That said, hyperscalers will test this solution aggressively, and their verdict will decide if the rest of the industry follows.
Satya Nadella testified in the Musk v. Altman trial that he feared Microsoft would become “the next IBM,” revealing that the $13B OpenAI investment was a survival bet backed by a $92B return projection, not a commitment to the nonprofit mission.
Satya Nadella told a federal jury on Monday that he feared Microsoft would become “the next IBM” while OpenAI became the next Microsoft. The admission, drawn from an April 2022 internal email presented by Elon Musk’s lead attorney, reveals the strategic anxiety that drove the largest corporate investment in artificial intelligence history. Microsoft did not put 13 billion dollars into OpenAI because it believed in a nonprofit mission to develop safe AI for the benefit of humanity. It invested because its CEO believed the company would become irrelevant if it did not.
A January 2023 memo from Microsoft president Brad Smith to the company’s board, also presented to the jury, projected a 92 billion dollar return on that cumulative investment, with a 20 per cent annual escalator starting in 2025. The document reframes the Microsoft-OpenAI partnership from a technology collaboration into what may be the largest financial hedge in corporate history: a bet by the world’s most valuable software company that it could not survive the AI era on its own.
The IBM analogy is not casual. In the 1980s, IBM built the personal computer and outsourced the operating system to a small software company in Redmond, Washington. That decision made Microsoft and unmade IBM. Nadella was telling his team that the same dynamic was forming in AI. OpenAI was building the reasoning engine. Microsoft was building the cloud infrastructure. If OpenAI became the platform and Microsoft became the commodity, the company that defined enterprise software for four decades would fade into the same irrelevance as the company that defined enterprise hardware for three.
Musk’s attorneys presented the email to suggest that Microsoft’s investment was commercially motivated from the beginning, undermining OpenAI’s nonprofit origins. Nadella’s response was to defend the partnership as mutually beneficial. But the email speaks for itself. The CEO of Microsoft was not writing about advancing AI safety. He was writing about survival.
The return
Brad Smith’s 92 billion dollar projection landed on the Microsoft board’s desks one month before the company publicly announced its expanded 10 billion dollar investment in OpenAI. The memo included a 20 per cent annual escalator from 2025, meaning the projected return would compound as OpenAI’s models became more commercially valuable. At the time, ChatGPT had been public for less than two months.
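As a back-of-envelope illustration of how such an escalator compounds, assuming it applies multiplicatively to the projected figure each year (an interpretation for arithmetic purposes, not a detail from the memo):

```python
# Back-of-envelope compounding of a 20% annual escalator on the
# $92B projection cited in the article. Figures in billions of
# dollars; purely illustrative arithmetic, not from the memo itself.
def escalated(base_billions: float, rate: float, years: int) -> float:
    return base_billions * (1 + rate) ** years

for n in range(4):  # 2025 through 2028
    print(2025 + n, round(escalated(92, 0.20, n), 2))
```

Under that reading, the projection grows past $110 billion after one year of escalation and past $132 billion after two, which is what makes a compounding escalator so much more valuable than a flat figure.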
The financial calculus was straightforward. Microsoft was the exclusive cloud provider for OpenAI’s models and held exclusive commercial rights to resell them through Azure. Every dollar of OpenAI revenue flowed through Microsoft infrastructure. The 13 billion dollars was not a donation to a nonprofit. It was a down payment on a distribution monopoly for the most important technology of the decade.
OpenAI is now valued at 852 billion dollars. Microsoft holds 27 per cent of the for-profit entity that emerged from the October 2025 conversion. The nonprofit foundation that was supposed to govern the technology retains 26 per cent. The alignment between mission and money that OpenAI’s founders promised has been replaced by a cap table.
The blind spots
Under cross-examination, Nadella acknowledged that he was not aware of any full-time employees at the OpenAI nonprofit before March 2026. He could not identify any grants, research, or open-sourced technology the nonprofit had produced. He was not informed in advance that the board planned to fire Sam Altman in November 2023. He was never given clarity on why Altman was removed.
The admissions paint a portrait of a partnership in which the investor knew everything about the commercial operation and nothing about the nonprofit governance. Musk’s legal team wants the jury to conclude that the nonprofit was a shell. Nadella’s testimony does not contradict that framing. It reinforces it from the perspective of the company that had the most to gain from the commercial side.
The witnesses
The trial has spent three weeks accumulating testimony that dismantles every participant’s stated motives. Greg Brockman, OpenAI’s co-founder and president, disputed Musk’s account of the startup’s early days and testified that Musk had OpenAI employees do secret work on self-driving technology at Tesla. Brockman’s own journals, presented as evidence, contained entries that called the nonprofit mission “a lie”, undermining both Musk’s claim that the mission was sacred and OpenAI’s claim that it was preserved.
Former board members Helen Toner and Natasha McCauley testified that Altman was untrustworthy, withheld information from the board, and sometimes lied. McCauley told the jury the board had “buckets of concerns” about Altman’s leadership, including an incident in which Altman falsely claimed that OpenAI’s legal department had cleared the GPT-4 Turbo launch in India without safety board review. The women who fired Altman in November 2023 told the jury why, and their reasons had nothing to do with Musk’s lawsuit.
The admissions
Musk took the stand during the trial’s first week and told the jury that OpenAI’s leaders had duped him into bankrolling the company. He repeated a phrase that became the trial’s refrain: “You can’t just steal a charity.” He argued he was not opposed to a small for-profit arm funding the nonprofit but lost trust in Altman when he learned about Microsoft’s 10 billion dollar investment, texting Altman in late 2022: “What the hell is going on? This is a bait and switch.”
Then came the question about distillation. Asked whether xAI uses OpenAI’s models to train Grok, Musk said it was a general industry practice. Asked whether that meant yes, he replied: “Partly.” The admission that his own AI company copies the technology he claims was stolen from a charity drew audible gasps in the courtroom. Musk told the jury the case would set a precedent for “looting every charity in America” while simultaneously acknowledging that he was using the charity’s output to build a competitor.
Shivon Zilis, a former OpenAI board member and the mother of four of Musk’s children, testified that Musk tried to recruit Altman to lead a new AI lab at Tesla. He offered Altman a Tesla board seat. He asked Andrej Karpathy to send a list of top OpenAI researchers to poach. The man suing for breach of charitable trust was, according to the testimony of his own witness, actively trying to strip the charity of its leadership and talent.
The defence
Altman took the stand on Monday. He testified that Musk’s departure from OpenAI’s board in 2018 was a “morale boost” for some employees because Musk had demotivated key researchers by ranking their accomplishments. Altman told the jury that Musk left because he lost confidence in the project and wanted long-term control that the other founders would not grant him.
In a tense exchange, Musk’s attorney confronted Altman with a text message he sent Musk on 18 February 2023: “I’m tremendously thankful for everything you’ve done to help. I don’t think that OpenAI would have happened without you.” The implication was that Altman privately acknowledged Musk’s contribution while publicly diminishing it. The text was sent three months after Musk learned about the Microsoft investment and seven months before the board fired Altman.
The trial began with 150 billion dollars at stake over whether OpenAI’s conversion from nonprofit to for-profit corporation was a breach of charitable trust. Musk wants the court to unwind the conversion, oust Altman and Brockman, and direct damages to the nonprofit. OpenAI argues Musk is suing because he wanted control of the most valuable AI company in the world and did not get it.
The hedge
While the trial plays out in Oakland, Microsoft is quietly proving that Nadella learned the IBM lesson. Microsoft dropped its exclusive licence to OpenAI’s technology, retaining only a non-exclusive agreement through 2032. It did so voluntarily, which makes sense only if Microsoft no longer needs exclusivity because it has alternatives.
It does. Microsoft launched three in-house AI models that directly challenge the partner it spent 13 billion dollars cultivating. The company that feared becoming IBM responded by doing what IBM never did: building its own operating system before the partner could lock it out. Nadella’s April 2022 fear that Microsoft would become dependent on OpenAI appears to have been the founding anxiety of an entire corporate strategy designed to ensure it never would.
The trial is expected to continue through 21 May before Judge Yvonne Gonzalez Rogers. The jury will decide whether OpenAI’s leaders breached a charitable trust and whether Musk is owed restitution. But Nadella’s testimony has already answered a different question. The most powerful corporate backer of the nonprofit AI mission invested because he was afraid his company would die without it. The 92 billion dollar return projection was not a byproduct of the partnership. It was the point. The nonprofit wrapper that Musk claims was stolen may never have contained what any of the parties involved believed it did.
For music and movie lovers who want the best musical and cinematic experiences at home, there have always been two camps: simplicity vs. performance. In the ’80s, that meant buying a basic one-brand integrated component stereo system – cables and stand included – vs. buying the individual separate components, potentially from different brands, and hooking everything up yourself. In the ’90s and beyond, companies started building amplifiers and source connectivity into the speakers themselves. Powered soundbars became popular, particularly for those who just wanted a little better sound from their TVs or projectors with minimal set-up required.
Today, you can spend thousands of dollars on “simple” wireless speaker systems and soundbar-based systems with integrated streaming and networking. Some of these actually sound pretty good, but they tend to be a bit lacking in installation flexibility, features and ultimate sonic performance. For those who want something a bit more flexible and powerful, AVRs (Audio Video Receivers) are popular, as these offer a plethora of inputs, plenty of power and expandability for multi-channel and multi-room systems as well as relatively simple hookup.
But packing all these functions into a single box can lead to compromises in performance. In a receiver, all of the various components – DACs, processors, preamplification, switching, streaming receivers and radio tuners – share the same power supply as the amplifiers. And with receivers frequently including seven, nine or even eleven channels of amplification, power management and thermal management can get tricky and can lead to limited dynamic range, heat buildup, distortion and loss of detail.
The Case for A/V Separates
For those who find that A/V receivers aren’t flexible enough or require too large a compromise in performance, there is a step above known as A/V separates: separate components for pre-amplification and power amplification. A/V separates typically include two types of components: a preamp/processor (pre/pro), which handles all of the low-voltage/low-current functions like audio decoding, processing, switching and volume adjustment/attenuation; and power amplifiers, which amplify the low-level output of the preamp into high-level output to drive the speakers. In theory, this reduces interference among components, simplifies thermal management and improves detail and dynamic range by giving the power amps “room to breathe.”
Marantz offers both A/V receivers and A/V separates, including preamp/processors and multi-channel power amplifiers. Their most recent product, the AV 30 preamplifier/processor ($4,000), is also their most affordable. And their AMP 30 ($4,000) is the company’s latest multi-channel power amplifier offering six channels of Class D amplification enhanced with Marantz HDAM modules for more refined sound. As we needed more than six channels for our test system, we paired the AV 30 with a Marantz AMP 20 ($6,000) 12-channel power amp, for the purposes of this review. We also checked out the AMP 30 and will cover that separately.
What Is It?
The Marantz AV 30 is an 11.4-channel A/V preamplifier/processor designed for high-end home theater and multi-channel surround sound systems. “11.4” means that it offers eleven channels of audio processing as well as four independently adjustable subwoofer outputs. The AV 30 supports all of the most popular immersive audio formats, including Dolby Atmos, DTS:X, Auro-3D and MPEG-H/360 Reality Audio. It is also IMAX Enhanced certified and can identify the IMAX DTS:X soundtracks on select UHD Blu-ray Discs and streaming services, applying IMAX processing and EQ for a more dynamic and theatrical sound. As of the publication of this review in May 2026, the AV 30 doesn’t decode Eclipsa Audio (a.k.a. “IAMF”), the open-source immersive audio format of choice on YouTube. But who knows what the future holds?
Marantz AV 30 preamp/processor with its lower control panel exposed.
The chassis itself is elegantly styled in black aluminum with textured side pieces. The unit was designed in Shirakawa, Japan and features the hallmark Marantz porthole, carried over from some of the company’s earliest products, back when Saul Marantz himself (R.I.P.) was at the helm. An integrated hideaway panel at the bottom keeps the look simple and understated with only two controls visible on the main front panel: dials for volume and input selection. The power button can be found on the left textured side panel and a ¼-inch headphone jack appears on the right. Opening that lower panel reveals a larger rectangular digital display, as well as a few more buttons and a 5-way control to navigate through set-up and other screens.
Even though the preamp does include a fairly robust screen at the bottom, you do still need to connect it to a display of some kind (monitor, TV or projector) in order to complete the set-up.
The Power To Back it Up
As robust as the AV 30 is, it wouldn’t be able to make much sound without a power amplifier. For the purposes of this review, I paired the AV 30 with a Marantz AMP 20 multi-channel power amp. This 12-channel amp features customized Class D power modules from ICEPower, enhanced by Marantz HDAM (Hyper Dynamic Amplifier Module) technology.
Marantz AMP 20 12-channel power amplifier.
First developed by Marantz in 1992, HDAM modules replace traditional IC-based op-amp stages with discrete circuits built from hand-selected transistors, resistors and other components, mounted on a small board to minimize interference. This design increases slew rate, enhancing accuracy and reducing noise and distortion compared to traditional op-amp designs.
One of Marantz’ custom Class D amplifiers with HDAM module (from the company’s Shirakawa Audio works factory).
When I asked Ogata-san, the Marantz soundmaster, why Marantz has been moving away from Class A/B amplifiers to Class D, he said it is a combination of efficiency, thermal management and weight, all of which are particularly important in multi-channel amplifiers. And Ogata-san says that with developments in Class D amplifiers, combined with Marantz HDAM, they are able to make smaller, lighter, more efficient, cooler-running amplifiers without any sacrifice of that Marantz signature immersive, accurate sound quality.
Also, when asked why Marantz’ multi-channel power amplifiers are offered with even numbers of channels (6-channel, 12-channel, 16-channel), while most home theater systems are designed with odd channel numbers (7-channel, 11-channel, 15-channel), a Marantz rep explained that each amplifier module is manufactured as a stereo pair, with each stereo pair bridgeable to double the output. So the AMP 20 includes six individual stereo amplifier modules, while the AMP 30 has three stereo modules.
Connectivity
The AV 30 offers a generous seven HDMI 2.1-compatible inputs, all of which support video resolutions up to 8K/60Hz or 4K/120Hz and HDCP 2.3. It includes three HDMI outputs for connection to multiple TVs, monitors or projectors. One HDMI port supports ARC/eARC for a single-cable connection (audio/video send and audio return) to an ARC/eARC-compatible TV or projector. The AV 30 also features legacy analog and digital audio and video connections, including composite and component video, analog RCA audio, and fiber-optic and coax digital audio. There’s even a built-in phono preamp with support for MM (moving magnet) phono cartridges.
The Marantz AV 30 offers a bevy of inputs, both analog and digital including 7 HDMI ports. It also offers both unbalanced RCA and balanced XLR output for up to 11 different channels as well as four independently adjustable subwoofer outputs.
As for output to the power amplifier, the AV 30 supports both unbalanced RCA output as well as balanced XLR. Initially, I set up the system with nine meter-long RCA cables, but this led to a bit of a rat’s nest of wiring. I ordered a 10-pack of 12-inch XLR cables on Amazon, and this cleaned things up a bit.
The Marantz AMP 20 power amp features 12 input channels (balanced and unbalanced) and 12 powered speaker outputs. Each of the six pairs of amplifiers can be bridged to mono, thereby doubling the power output.
Balanced XLR connections can lead to cleaner sound with less interference and a lower noise floor. The Marantz AMP 20 and AMP 30 both have switches on each pair of amp channels to switch between XLR balanced and RCA unbalanced inputs. The amps also offer bi-amp output as well as a bridging option which combines two amp channels into one in order to deliver twice as much power to half as many channels. If the 12 channels of the AMP 20 are more than you need, you can bridge one or more amplifier pairs for more power to your center channel or main left and right channel speakers, or use the extra amps to power speakers in other rooms of your home.
If you need more than the rated 200 watts per channel, you can bridge each of the six pairs of amps inside the AMP 20 and get 400 watts of power into 8 ohms.
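The bridging trade-off works out as simple channel arithmetic, sketched below using the 12-channel, 200-watt-per-channel figures from the text (the exact power doubling is a simplification of the published spec):

```python
# Illustrative sketch of bridging on a 12-channel amp rated at
# 200W per channel: bridging one stereo pair trades two independent
# channels for one channel at roughly double the rated power.
def bridged_config(total_channels: int, bridged_pairs: int, watts_per_ch: int):
    """Return (independent channels left, bridged channels, watts per bridged channel)."""
    independent = total_channels - 2 * bridged_pairs
    return independent, bridged_pairs, watts_per_ch * 2

# Bridge one pair to feed a power-hungry center channel:
print(bridged_config(12, 1, 200))  # (10, 1, 400)
```

So a 12-channel amp with one bridged pair still leaves ten independent channels, enough for a 7.x.4 speaker layout plus a 400-watt center.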
HEOS FTW!
A highlight of the AV 30 is its HEOS integration. This network streaming music platform allows you to play music from the top streaming services like TIDAL, Spotify, Qobuz and Amazon Music anywhere in your home on compatible devices like receivers, streaming amplifiers, soundbars and wireless speakers. HEOS is included in most Denon and Marantz receivers and pre/pros made in the past decade as well as select products from Classe (all part of the “Sound United” family of products). Install the HEOS app, link your music accounts and you’ll see the AV 30 appear as a supported device. It’s that easy.
The HEOS module is built into all of Denon's and Marantz's high-end receivers and preamp/processors, providing whole-home wireless streaming right out of the box.
For internet radio stations, HEOS supports TuneIn and iHeartRadio. My go-to internet radio station, the fabulous Radio Paradise, is available in HEOS through TuneIn. Sadly, TuneIn offers no support for lossless or high-resolution audio. So while Radio Paradise is available as a lossless FLAC stream on BluOS, Sonos and WiiM, the best quality you can currently get for Radio Paradise on HEOS is a 320 kbps AAC stream. That said, Radio Paradise does sound quite good through HEOS on the AV 30, whether in its pure stereo form or expanded into surround through one of the AV 30's many immersive sound options.
But where we gain HEOS, we lose Google Cast. Unlike some competing receivers and pre/pros, the Marantz AV 30 doesn't offer Google Cast (formerly "Chromecast Built-in") wireless connectivity. This means you won't be able to cast Dolby Atmos multi-channel music from the music streaming app on your phone to the AV 30. Marantz tells us this is an intentional choice and recommends using an external device like an Apple TV 4K or Amazon Fire TV Stick if you want to listen to music encoded in Dolby Atmos (or Sony 360 Reality Audio, which the Marantz pre/pro also supports).
You can connect your audio streaming apps to the pre/pro via the HEOS app or Bluetooth, but neither currently supports Dolby Atmos music. Apple AirPlay 2 is available, meaning you can connect an Apple device to the AV 30 wirelessly. The HEOS app also supports TIDAL Connect, Spotify Connect and Qobuz Connect for wireless lossless (and even hi-res) music, but again, only for two-channel stereo tracks.
Correction and Calibration
The AV 30 comes with Audyssey MultEQ XT32 room correction and calibration on board. This includes SubEQ HT for managing up to four independent subwoofers. It also includes Audyssey Dynamic EQ and Dynamic Volume as options. For those who prefer DIRAC room correction, the AV 30 can be upgraded to support DIRAC Live, DIRAC Live Bass Control or DIRAC ART (Active Room Treatment) via the additional purchase of a DIRAC license (currently $259-$799, depending on the options). For the purposes of our review, we used the built-in Audyssey calibration system with the included microphone. We may do a follow-up story on DIRAC for those interested.
The Set-Up
Initial set-up was fairly straightforward. As mentioned, you do need to connect the AV 30's HDMI output to a TV or projector in order to see the set-up screens. You can exit the set-up wizard after selecting a language; if you don't, the wizard takes you through the connection of every single channel, as well as the connection of speakers to an attached power amp. It was a bit tedious to go through all these screens, but I'm sure it would be helpful to someone new to the process. And I will say that even this seasoned "A/V guru" swapped the positive and negative leads on one of his speaker connections. Thankfully, this phase error was flagged during the Audyssey set-up process, so it was easy enough to fix. (Oopsy.)
The Marantz Setup Assistant will walk you through each element of set-up in sometimes painstaking detail.
Audyssey calibration is pretty straightforward: plug the included mic into the front panel and run through a series of test tones. Marantz recommends using eight distinct measuring points, but for our small theater room, three measurement points were sufficient. Post-calibration, the system sounded nicely balanced, though we did manually boost the subwoofer level slightly, as the sound was a bit thin in the lower octaves after Audyssey calibration.
The AV 30 supports multiple different speaker configurations, compatible with Dolby Atmos, DTS:X and AURO-3D immersive surround.
The set-up wizard also requires you to set up HEOS using the HEOS mobile app for iOS or Android. There is no "skip" option for this, though you can always manually exit the wizard by backing all the way out or hitting the Setup button on the front panel or remote control. This proved slightly problematic, as I had forgotten my HEOS account password and the link to reset it in the app failed repeatedly. Eventually I remembered the password and was able to complete the set-up. New HEOS users (or those who don't forget their passwords) shouldn't have an issue here. During set-up, a firmware update was found and applied to the AV 30, which took a few extra minutes to complete.
“Control, Control. You Must Learn Control!” -Yoda
The AV 30 can be controlled by its fully backlit remote control, by the Marantz AVR Remote app, or by controls on the hideaway front panel. Navigation among menus is quick and intuitive. The remote feels solid and having a full backlight on all buttons is a nice touch when watching or listening in the dark. The remote offers direct input buttons for all inputs as well as dedicated buttons for different surround modes (Movie, Music, Game and Pure). These cycle through various sound processing modes like Dolby Surround, DTS Neural:X and Auro-3D.
The AV 30's remote is fully backlit and offers direct access to each input as well as sound mode buttons for cycling through the various stereo and surround sound listening modes.
Listening Notes
Speakers used for testing included a Klipsch Reference series home theater system as well as a pair of KEF LS50s. For source devices, I connected several components, including an Xbox Series X, Fire TV 4K Max, Apple TV 4K, Samsung UHD Blu-ray player, OPPO Blu-ray player and a Kaleidescape Strato V 4K movie player, all via HDMI. I also hooked up a vintage cassette deck for those classic '80s and '90s mix tapes, using my granddaddy's analog RCA cables, as well as a Systemdek turntable so I could spin some '70s vinyl.
Much of my music listening these days is to multi-channel immersive tracks encoded either in Dolby Atmos or Sony 360 Reality Audio. With no Google Cast option, I used the Amazon Music app on the FireTV 4K Max to load up my Dolby Atmos and 360RA playlists. Dolby Atmos favorites like KX5/Deadmau5 “Alive,” Ed Sheeran “Shape of You,” Elton John “Rocket Man” and A-Ha “Take On Me” had a wonderful sense of air and spaciousness through the Marantz preamp/amp combo, with deep tight bass and pinpoint imaging precision. 360RA tracks like Daft Punk “Get Lucky” and Pink’s “Trustfall” were equally enveloping, with instruments and voices filling the listening room with a cohesive dome of sound.
Moving on to movies, I hit the system with everything I had: Dolby Atmos, DTS:X, IMAX Enhanced and even AURO-3D. The system decoded all of these formats, routing the sound to the appropriate speakers for full immersion. It was at least as good as, if not better than, the experience I've had in many premium movie theaters.
The screen on the bottom front panel on the AV 30 displays useful information like what input is selected and which audio codec is being received.
For Dolby Atmos, I started with Denis Villeneuve's "Dune" on UHD Blu-ray. In the worm attack on the spice crawler, about an hour and five minutes into the film, the soundtrack delivers a cacophony of sounds that can challenge the finest surround system. As the spice-saturated sand encompasses Paul Atreides, the sound abruptly cuts out, music swells, sand swirls and the voices of the Bene Gesserit build in a spice-induced vision. The line "Kwisatz Haderach awakes," which is difficult to make out on some systems, cuts cleanly through the sonic mayhem here.
Moving on to another Dolby Atmos soundtrack, I put on the first episode of "Andor" on Disney+. The rain falling from above in the opening scene permeated my listening room, and as our antihero enters a nightclub, his conversation with the club worker is easy to make out over the pulsing background music. Complex soundscapes are handled effortlessly by the Marantz AV 30/AMP 20 combination.
For DTS:X, I put on “Ex Machina” to verify that the claustrophobic feeling inside the compound when the power fails was conveyed realistically and effectively (it was). The UHD Blu-ray 4K remaster of “Blues Brothers” also features a DTS:X soundtrack which was particularly lively during the mall chase scene as glass and debris exploded all around the room.
I also watched some of the IMAX Enhanced movies on Disney+ which offer DTS:X soundtracks enhanced with IMAX EQ. On “Guardians of the Galaxy 3,” the preamp identified the IMAX Enhanced flag in the content and decoded the DTS:X track with excellent dynamics and thunderous bass. Switching to the IMAX Enhanced “Queen Rock Montreal” concert film, “We Will Rock You” was recreated in all its grandeur, making me feel like I was there in the stadium enjoying the show.
The concert film “Queen Rock Montreal” was recently remastered in IMAX for a theatrical release. It’s now available on select TVs and projectors on Disney+ in IMAX Enhanced format with DTS:X immersive sound.
For Auro-3D, software selection is limited, but I put on a few clips from an AURO-3D test Blu-ray Disc as well as the UHD Blu-ray of "Shine," the first major film with an AURO-3D soundtrack available on physical media in the U.S. For some reason, the preamp didn't identify the AURO-3D flag in the DTS-HD MA datastream, so it did not automatically engage AURO-3D surround. However, manually switching the AV 30 to AURO-3D allowed the preamp to decode the stream and process all the channels properly, including the overhead "voice of god" channel Auro-3D is known for (virtualized at the center point of my ceiling using the four Dolby height channels currently in my system).
For stereo music, I listened to high res and lossless tracks on both Qobuz (using Qobuz Connect) and Amazon Music (through the HEOS app). The Marantz offers both “Direct” and “Pure Direct” stereo listening modes that present stereo music as God intended – in standard two-channel mode with minimal to no additional processing.
"Direct" mode bypasses DSP processing (including EQ and Audyssey calibration) for the purest signal, while "Pure Direct" mode does this and also shuts down the front display and video circuitry to eliminate any potential electrical noise. Depending on how you've configured the speakers, these modes may also disable the subwoofer outputs, so they're better suited to larger tower speakers. In these modes, the AV 30 offered outstanding clarity. Paired with the Marantz power amp, stereo music came through cleanly and transparently, with excellent dynamics and a solid three-dimensional soundstage. There's also a standard stereo listening mode that does preserve DSP and subwoofer output.
Personally, I don't mind listening to stereo tracks with some ambiance added, so I experimented with Dolby Surround, DTS Virtual:X and the Auro-3D upmixer. Of these, I felt that Dolby Surround offered the best sense of space and presence for stereo music. It didn't sound artificial or gimmicky. It just sounded more substantial and full compared to the pure stereo reproduction. Vocals were still locked front and center on most cuts while subtle room reverberation emanated from the rear speakers, giving the music a more palpable presence.
All That and Black Vinyl, Too
While most of my listening was done with digital sources, I also fired up my Systemdek IIX turntable to spin a few classic LPs from the '70s and '80s. Using the AV 30's built-in moving-magnet phono stage, albums like Genesis' "And Then There Were Three" and ZZ Top's "Eliminator" were presented with nice detail, musicality and warmth. Still, having to get up to change sides, and enduring the clicks and pops of less-than-perfectly maintained vinyl, reminded me why I ultimately prefer high-quality lossless and high-res digital sources to analog.
I admit I did get a bit emotional when listening to "Star Wars and Other Galactic Funk," which includes a rock/disco version of the "Star Wars" soundtrack by Meco. Immersed in the thrumming sounds of drums, lightsabers, guitars and xylophone solos, I was transported to a galaxy far, far away, namely my parents' basement, where my friends and I would gather to listen to our latest records. But this record never sounded so good back then on my parents' old console system. If you've got a penchant for black vinyl or two-channel music listening in general, the AV 30 won't let you down.
Meanwhile, Back in Japan
I should note that on a recent trip to Denon and Marantz headquarters in Kawasaki, Japan, as well as the Shirakawa Audio Works, where many of the company’s high end audio products are manufactured (including the AV 30 and AMP 20), I experienced the attention to detail that goes into the construction of Marantz products. The manufacturing process includes equal parts high tech robotics and hand assembly by gifted craftsmen (and craftswomen).
And this commitment to quality doesn’t only go into the physical assembly. New designs like the AV 30 and AMP 20 only make it into production after they have passed the careful ears of Marantz Soundmaster Yoshinori Ogata (Ogata-san). Only after Ogata-san signs off on the sound quality and build quality of each new product does that product make it into mass production and ultimately into a customer’s home.
A peek inside the Marantz listening room in Kawasaki, Japan with Marantz soundmaster Ogata-san.
The Bottom Line
With the AV 30, Marantz is bringing the cost of entry for A/V separates down to a fairly reasonable level. With 11 channels of processing, 4K and 8K video passthrough, and decoding for virtually all of the popular surround codecs, the Marantz AV 30 makes an excellent choice for those who are ready to graduate from A/V receiver to the next level of performance, flexibility and sonic clarity. And for those with a 7.1.2-channel or 5.1.4-channel speaker layout, the 12-channel AMP 20 power amplifier is truly a match made in heaven.
Pros:
11 channels plus 4 independent subwoofer outputs make this suitable for medium to large home theater installations
Competitively priced for the category
Built-in Audyssey MultEQ XT32 room correction/calibration is effective at reducing negative room interactions
Option to add DIRAC room correction, including DIRAC ART (Active Room Treatment)
Comes with HEOS whole-home multi-room wireless music platform
Elegant design
Supports unbalanced RCA and balanced XLR output for maximum flexibility and improved performance
Includes all the popular immersive sound formats: Dolby Atmos, DTS:X, IMAX Enhanced, AURO-3D, MPEG-H and Sony 360 Reality Audio
Top notch sound quality
Cons:
Lacks Google Cast WiFi casting
HEOS implementation does not natively support Dolby Atmos Music
This sponsored article is brought to you by Ampace.
As AI workloads grow to gigascale levels, the global data center industry has hit a hidden physical wall. The real bottleneck is no longer just the thermal limit of the chip or the capacity of the cooling system — it is the dynamic resilience of the power chain.
Modern AI computing clusters, driven by massive banks of GPUs, generate high-frequency, abrupt, synchronized pulse loads. As rack densities soar beyond 100 kW, these fluctuations are amplified into a "power paradox": while the digital logic of AI is moving faster than ever, the physical infrastructure supporting it remains tethered to legacy response capabilities.
The sheer power draw of these gigascale sites, combined with drastic, high-frequency load surges from the AI GPU clusters, can trigger transient voltage events and frequency instability that put the entire local grid at risk. This is the infrastructure gap: the utility is not robust enough to support these loads, and traditional backup sources, such as diesel generators and gas turbines, simply cannot react to millisecond-level power spikes. Operators are often forced into a cycle of costly infrastructure oversizing just to buffer the volatility.
AI infrastructure requires energy systems capable of instantaneous response while safeguarding continuity and reliability.
The industry has explored various mitigations — from rack-level BBUs to 800V DC architectures — yet the mature, high-volume traditional UPS remains the most viable and scalable foundation for gigawatt-level facilities. Consequently, the UPS-integrated battery system has emerged as the critical "physical buffer" to neutralize these pulses at the source.
At Data Center World 2026 in Washington, D.C., Ampace led a pivotal technical dialogue with Eaton during the session “Powering Giga-scale AI.” Their exchange unveiled a fundamental paradigm shift: To bridge the AI power gap, energy storage must evolve from a passive insurance policy into an active, high-speed stabilizer. By aligning Ampace’s semi-solid-state battery innovation with Eaton’s proven system intelligence, we are moving beyond simple backup to solve the physical paradox of the AI era.
To move beyond simple backup and solve the physical paradox of the AI era, Ampace is aligning its semi-solid-state battery innovation with Eaton's proven system intelligence. Ampace
The “Shock Absorber” physics: semi-solid chemistry for AI pulses
Conventional power systems were designed for steady-state loads, not the rapid heartbeat of a massive AI GPU cluster. When thousands of GPUs synchronize their computing cycles, they generate high-frequency, abrupt pulse loads that can lead to voltage sags, frequency oscillations, and potential interruptions of critical AI training.
Ampace’s PU Series semi-solid and low-electrolyte cells address this challenge by acting as high-speed “shock absorbers.” Leveraging ultra-low internal resistance (DCR) and high cycle capability, these batteries neutralize millisecond-level power spikes at the source, stabilizing the local power loop before disturbances propagate upstream to the grid or on-site generators. These high-rate cells enable 100 kW+ racks to maintain peak performance without transmitting instability across the power chain.
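To make the "shock absorber" role concrete, here is a minimal sketch with purely hypothetical numbers (the 500 A pulse and the DCR figures are illustrative, not Ampace specifications): the instantaneous voltage dip a load pulse causes is roughly the pulse current times the battery string's internal resistance, which is why a low-DCR cell holds the bus steadier.

```python
# Hypothetical illustration: voltage sag on a battery-backed bus when a
# GPU rack draws a sudden current pulse. Sag is approximately I * DCR,
# so a lower internal resistance directly shrinks the dip the UPS sees.

def bus_sag(pulse_amps: float, dcr_ohms: float) -> float:
    """Approximate instantaneous voltage dip for a current pulse."""
    return pulse_amps * dcr_ohms

pulse = 500.0                   # A, illustrative rack-level transient
print(bus_sag(pulse, 0.010))    # 5.0  -> conventional string, larger dip
print(bus_sag(pulse, 0.002))    # 1.0  -> low-DCR string, 5x smaller dip
```

The same proportionality explains why DCR, rather than raw capacity, is the headline figure for pulse-buffering duty.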
This capability aligns closely with Eaton’s matured UPS architectures, such as double-conversion topologies and advanced power electronics upgrades, which have long prioritized rapid load responsiveness and high system stability.
Together, these approaches embody a shared industry philosophy: AI infrastructure requires energy systems capable of instantaneous response while safeguarding continuity and reliability.
Ampace's semi-solid state chemistry minimizes liquid electrolyte, greatly reducing the risk of leakage and thermal runaway under continuous AI high-load conditions. Ampace
Algorithmic intelligence: synchronizing energy and control
Hardware alone cannot solve the AI power paradox; the system also requires intelligent coordination between energy storage and power management. Sophisticated battery management systems (BMS) like Ampace's high-precision design track state-of-charge (SOC) with high-speed sampling, even during the rapid, shallow cycling typical of AI workloads.
Complementary algorithmic approaches in modern UPS platforms — such as ramp-rate control and average power management — effectively suppress sub-synchronous oscillations and optimize load smoothing. In large-scale AI training environments, where thousands of GPUs can trigger millisecond-level power pulses, these intelligent layers ensure that batteries buffer high-frequency fluctuations without compromising the mandatory emergency backup reserves.
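Here is a toy sketch of the ramp-rate idea, assuming nothing about Eaton's or Ampace's actual control logic: the slow source (grid or generator) is only allowed to change by a fixed step per tick, and the battery supplies or absorbs the fast residual of each GPU-driven pulse. The `split_load` helper and all figures are hypothetical.

```python
# Minimal ramp-rate limiting sketch (illustrative, not vendor code).
# The slow source tracks demand but may only move `max_step` per tick;
# the battery covers whatever high-frequency residual remains.

def split_load(demand, max_step, start=0.0):
    """Return (source_profile, battery_profile) for a demand series."""
    source, battery, level = [], [], start
    for d in demand:
        # Move the slow source toward demand, bounded by the ramp limit.
        delta = max(-max_step, min(max_step, d - level))
        level += delta
        source.append(level)
        battery.append(d - level)  # positive: battery discharges
    return source, battery

# Spiky AI-style demand in kW; the source may ramp 50 kW per tick.
demand = [100, 400, 400, 100, 400]
src, bat = split_load(demand, max_step=50, start=100.0)
print(src)  # [100.0, 150.0, 200.0, 150.0, 200.0]
print(bat)  # [0.0, 250.0, 200.0, -50.0, 200.0]
```

Note the negative battery entry: when demand briefly drops, the limiter lets the battery recharge instead of slamming the source back down, which is the smoothing behavior described above.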
By transforming energy storage from passive “standby insurance” into active, schedulable assets, the system simultaneously safeguards continuous AI training and maintains the long-term health of the data center infrastructure. In practical terms, this means that even during peak compute bursts, the infrastructure remains stable, training cycles continue uninterrupted, and operators avoid costly oversizing or grid stress.
Eaton’s dual-layer algorithms serve as a valuable benchmark in this space, demonstrating how advanced control logic can achieve similar objectives, reinforcing Ampace’s approach and philosophy within the broader data center power ecosystem.
Economic scalability: optimizing AI infrastructure efficiently
One of the largest costs in deploying AI infrastructure is "oversizing": procuring transformers, generators, and UPS systems to handle brief peak spikes. This traditional approach inflates total cost of ownership (TCO) and wastes capital on underutilized hardware.
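The oversizing penalty is easy to illustrate with made-up numbers (every figure below is hypothetical, not from Ampace or any real facility): sizing generation for a brief synchronized peak costs far more than sizing it near the sustained average and letting a battery cover the bursts.

```python
# Illustrative sizing arithmetic; all inputs are hypothetical.
# Without a buffer, capacity must cover the worst-case pulse; with a
# battery absorbing pulses, capacity can sit near the sustained average.

peak_kw = 150_000      # brief synchronized GPU burst
average_kw = 90_000    # sustained facility draw
cost_per_kw = 800      # assumed $ per kW of generation/UPS capacity

unbuffered = peak_kw * cost_per_kw                # sized for the peak
buffered = int(average_kw * 1.1) * cost_per_kw    # 10% headroom over average

print(unbuffered)             # 120000000 -> $120M sized to peak
print(buffered)               # 79200000  -> $79.2M sized near average
print(unbuffered - buffered)  # 40800000  -> capital avoided
```

Under these assumptions the battery buys back roughly a third of the capacity budget, which is the economic argument the section makes in prose.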
Ampace's turn-key cabinet design, developed in-house, is engineered for seamless compatibility with mature, high-volume UPS systems. By leveraging Eaton's double-conversion UPS topologies alongside intelligent ramp-rate and average power management algorithms, AI data centers can scale dynamically without requiring costly infrastructure redesigns. This approach allows the UPS and batteries to act as active load-shapers, smoothing AI-driven pulses while strictly maintaining mandatory emergency backup capacity.
By utilizing energy storage as an active, schedulable asset, operators can right-size their infrastructure, avoid unnecessary grid upgrades, and deploy gigascale AI clusters with unprecedented efficiency.
Safety first: protecting AI infrastructure while enabling innovation
In high-density AI facilities, safety is non-negotiable. Ampace’s semi-solid state chemistry minimizes liquid electrolyte, greatly reducing the risk of leakage and thermal runaway under continuous AI high-load conditions.
Ampace’s turn-key cabinet design developed by its independent R&D is engineered for seamless compatibility with mature, high volume UPS systems. Ampace
At the same time, Eaton’s UPS design emphasizes system-level energy scheduling that never sacrifices mandatory emergency backup reserves, ensuring thermal safety and uninterrupted operation.
This "safety-first" approach ensures that infrastructure can sustain aggressive performance targets without compromising the physical integrity of the facility. Coupled with over a decade of proven high-cycle-life design and operation under shallow pulse conditions, these systems extend operational lifespan, reduce replacement requirements, and give operators confidence that safety and reliability remain uncompromised as compute density continues to grow.
Remaining the scalable backbone of AI data centers
As AI computing scales over the next two to three years, the industry will face stricter grid requirements and even more demanding pulse load characteristics. This evolution demands a forward-looking design philosophy that harmonizes UPS, battery, and grid compatibility.
Ampace remains committed to this long-term technological roadmap. We view current low-electrolyte semi-solid technologies as the optimal transitional step toward a fully solid-state future — one that promises ultimate safety and performance. Whether through rack-level BBU, integrated UPS systems, or containerized storage, the universal core of the AI era remains constant: high-speed response, long shallow-cycle life, and refined energy management.
By engaging in deep technical exchanges with Eaton and leading energy innovators, Ampace ensures that its solutions not only meet today’s AI pulse challenges but also harmonize with broader infrastructure strategies and shared industry best practices.
Ultimately, as traditional diesel generators gradually give way to diversified alternatives, the integrated UPS-plus-energy-storage system will become the fundamental infrastructure standard.
The dialogue has just begun. Ampace will continue to engage in strategic exchanges with global industrial automation leaders and digital energy pioneers, co-authoring the playbook for a safer, more efficient, and more resilient AI-ready world.
Google is rolling out a sweeping update to Android Auto and cars powered by its Google Built-in software. The suite of changes and new features includes an overhaul of Google Maps, the addition of in-dash video playback and a full visual refresh rolling out to compatible vehicles and devices throughout 2026.
As cars get smarter and in-car screens get weirder, the previewed changes should keep Google’s automotive ambitions competitive with Apple’s CarPlay.
Design that literally fits your car
Android Auto is getting a full visual refresh built on Google's Material 3 Expressive design language, bringing new fonts, animations and wallpapers from the phone experience to the dashboard. The interface can now adapt to any screen shape: not only the familiar portrait and landscape orientations, but also new ultrawide and nonrectangular display geometries. Google showcased just how nonstandard Android Auto can get, filling the circular OLED display of the latest-generation Mini vehicles and the skewed hexagonal screen of BMW's Neue Klasse EVs.
Also new are home screen widgets, letting drivers keep glanceable information — such as favorite contacts, garage door controls and weather info — surfaced alongside active navigation.
Android Auto is now able to squeeze and stretch into nontraditional screen shapes, including circles, parallelograms and everything in between.
Google
The biggest Maps update in a decade
The centerpiece of the update is Immersive Navigation, which Google describes as its biggest Maps update in over a decade. The feature brings a 3D map view with rendered buildings, overpasses and terrain, and highlights lane markings, traffic lights and stop signs to aid complex maneuvers. The new look, to my eye, is not dissimilar to what I’ve seen on Apple’s Maps and is a welcome aesthetic and functional upgrade.
Cars running native Google Built-in get even more new navigation capability not available in standard Android Auto. The biggest new feature is Live Lane Guidance, which uses the vehicle’s front-facing camera to determine the driver’s current lane position and provide real-time guidance through lane changes and exits.
HD Video comes to Android Auto
Android Auto is adding full HD video playback at 60 frames per second when the car is parked, launching later this year on vehicles from BMW, Ford, Genesis, Hyundai, Kia, Mercedes-Benz and Volvo in the US. (Outside of the States, that list grows to include Mahindra, Renault, Skoda and Tata cars.) When your charge sesh is complete and the car shifts from park to drive, Android Auto will also be able to seamlessly transition content to audio-only in apps that support background audio, so you can keep listening to that video podcast you just started.
Android Auto now supports HD Video at 60 fps while parked. Shift into drive and the content automatically switches to background audio.
Google
Dolby Atmos spatial audio is also coming to Android Auto in supported apps and vehicles, starting with BMW, Genesis, Mercedes-Benz and Volvo. After hours of listening to Dolby Atmos in cars, this might be the feature I’m most excited about. Media app interfaces, including YouTube Music and Spotify, are also receiving visual updates, breaking out of the standard and familiar Android Auto template we’ve seen since the software’s launch.
Meanwhile, cars with Google Built-in will receive the same video and audio improvements, along with support for meeting apps like Zoom.
Gemini hits the road
Gemini is now broadly available in Android Auto for general driving assistance, having rolled out to drivers over the past year. Devices with Gemini Intelligence — Google's context-aware AI tier — will gain additional capabilities later this year, including Magic Cue, which can surface relevant information from messages, email and your calendar to respond to incoming texts in a single tap. In Google's demo, a driver receives a text message asking for their destination and replies with a single tap.
Google is also enabling in-car food ordering through DoorDash via voice command. I’m sure someone will find that useful.
On certain cars running Google Built-in software, Gemini will be able to answer knowledge-based questions about the vehicle and its capabilities.
Google
In cars with Google Built-in, Gemini integrates directly with vehicle hardware, enabling queries specific to the car itself. For example, a driver could ask Gemini to identify a dashboard warning light or to estimate whether the bulky TV they’re buying will fit within their car’s specific cargo dimensions.
The announcement comes as part of this year’s Gemini-fueled Android Show: I/O Edition and hot on the heels of General Motors’ April announcement that it’s rolling Gemini functionality into its Google Built-in infotainment stack. For GM alone, you’re talking about roughly 4 million Cadillac, Chevrolet and Buick vehicles in the US that will benefit from today’s updates. Globally and across all supported vehicle brands, Google boasted that 250 million cars currently support Android Auto at last count, with more than 50 models running Google Built-in natively — most of which will be getting these upgrades over the coming months.
ASUS India has unveiled new deals on its ExpertBook laptop range during the Flipkart SASA LELE Sale 2026. Alongside the flagship ExpertBook Ultra, ASUS has also introduced various AI-powered ExpertBook P Series laptops featuring enterprise-level security and financing options.
Over the last year, the Taiwanese laptop maker has expanded its ExpertBook range, which now comprises 34 models. The series starts at Rs 41,990 and is available during the Flipkart SASA LELE Sale 2026. Some top-range laptops get discounts of up to 34.5 percent. ASUS is also providing cashback of up to Rs 15,000 through bank offers and up to Rs 20,000 through exchange offers.
The flagship ExpertBook Ultra also comes with a special 5+5+5 support package. Under this offer, users receive 5 years of warranty, 5 years of battery support, and 5 years of accidental damage protection for long-term peace of mind.
ASUS ExpertBook Ultra
The ExpertBook Ultra comes with a lightweight 0.99kg body and an ultra-slim 10.9mm design for easy portability. ASUS uses aerospace-grade magnesium-aluminum alloy to enhance the laptop’s durability and strength. The company has also added a nano-ceramic coating for improved scratch resistance during travel and daily work.
The laptop runs on Intel Core Ultra Series 3 processors and delivers up to 180 TOPS AI performance for advanced AI-based workloads. It also includes Intel Arc graphics and ASUS ExpertCool Pro cooling technology. For visuals, the ExpertBook Ultra offers a 14-inch 3K Tandem OLED touchscreen display with a 144Hz refresh rate and up to 1400 nits of HDR brightness.
The panel also includes anti-glare technology and Gorilla Glass Victus protection. The laptop packs a 70Wh battery and offers up to 26 hours of usage on a single charge. It supports 90W USB-C fast charging along with Wi-Fi 7 and Bluetooth 6 connectivity. ASUS also includes Thunderbolt 4 ports and HDMI 2.1 support for improved connectivity options.
ASUS ExpertBook P Series
The ASUS ExpertBook P Series is intended for professionals, offices, startups, and expanding businesses. It comprises three models: the ExpertBook P1, ExpertBook P3, and ExpertBook P5, each available in 14-inch and 16-inch versions. These laptops run on Intel Core Ultra Series 2 chips with fast DDR5 memory and PCIe Gen 4 storage.
ASUS has also added dual RAM slots and expandable storage options on selected models, making the laptops more future-ready. Connectivity features include Wi-Fi 7, HDMI 2.1, RJ45 Ethernet ports, and USB-C charging support. ASUS says the laptops come with batteries of up to 63Wh and can also be charged using power banks and airplane chargers for better portability.
For security and support, ASUS includes TPM 2.0 protection, McAfee Premium security subscription, self-healing BIOS, and chassis intrusion alerts.
Top ASUS Laptop Deals During Flipkart SASA LELE Sale
ExpertBook P3405CVA: The laptop features a 14-inch WUXGA display, Intel Core i5-13420H processor, and 16GB DDR5 memory. It is currently available for Rs 63,990 after a Rs 10,000 price cut.
ExpertBook P5405CSA: Powered by an Intel Core Ultra 7 chip, the laptop features 32GB LPDDR5X RAM and a 1TB SSD for AI workloads. ASUS has priced it at Rs 1,03,990 in the sale.
ExpertBook Ultra B9406CAA: This is the flagship laptop from the brand, featuring a Core X7 CPU, PCIe 5.0 storage, and a 3K OLED display. The company is selling this laptop for Rs 2,39,990.
ASUS ExpertBook Flipkart SASA LELE Sale 2026 Full Price List
Quietly extends waivers to 2029 after realizing it was about to leave millions of devices unpatched
America’s telco regulator has seen some sense over its ban on foreign-made routers, deciding that existing devices should continue receiving software and firmware updates after all.
The Federal Communications Commission (FCC) has extended waivers covering certain foreign-made routers (and drones) already operating in the US, pushing the update deadline to at least January 1, 2029. Without the extension, updates would have been blocked as early as 2027.
Back in March, the FCC updated its Covered List to include all foreign-made consumer routers, prohibiting the approval of any new models. This effectively banned any new kit made in other countries from being sold, but did not prevent the import, sale, or use of existing models that had previously been authorized.
The policy stems from fears that foreign-made routers pose a security threat. Because they handle network traffic, they could introduce vulnerabilities exploitable against critical infrastructure, and in the words of the FCC represent “a severe cybersecurity risk that could harm Americans.”
Miscreants have exploited security flaws in routers to disrupt networks or steal intellectual property, and routers are implicated in the Volt, Flax, and Salt Typhoon cyberattacks.
The policy was widely regarded as flawed, not just because the vast majority of consumer router kit is made outside the US or built from components sourced abroad, but because vulnerabilities and security flaws are not limited to any particular geography, and appear in products from all brands and countries of origin, as noted by the Global Electronics Association (GEA).
Blocking firmware updates, which typically deliver security patches for newly discovered flaws, also seemed a peculiar own goal for a regulator whose stated motivation is reducing network vulnerability.
The FCC has belatedly recognized this, stating that its policies would have “had the effect of prohibiting permissive changes to the UAS, UAS critical components, and routers added to the Covered List in December and March.

“This prohibition would be in effect even for Class I and Class II permissive changes – such as software and firmware security updates that mitigate harm to US consumers – because previously authorized UAS, UAS critical components, and routers are now covered equipment.”
The waivers now run until at least January 1, 2029, falling in the final month of the Trump administration, when there is a chance the issue may be overlooked in the preparations for Trump’s successor.
The FCC extension was met with some approval. Doc McConnell, head of policy and compliance at security biz Finite State, said in a supplied remark: “I strongly support the FCC’s decision to allow firmware and software updates for already-authorized routers, including covered devices already deployed in the United States.”
“The biggest practical security risk with routers is not only who made them, but whether they remain patched. When they stop receiving updates, known vulnerabilities remain exposed, attackers gain durable footholds, and consumers are left with equipment they cannot realistically secure on their own.
“The original restriction risked creating exactly that problem: millions of deployed routers frozen in time, unable to receive security fixes. I appreciate the FCC recognizing that preventing updates could unintentionally make Americans less safe,” he added.
However, as previously reported by The Register, the FCC’s Conditional Approval framework explicitly requires vendors seeking approval for new routers to submit plans to establish or expand manufacturing in America, with quarterly progress updates.
As stated by the GEA, “The policy’s logic assumes that manufacturers can and will move production to the United States.” That might be an assumption too far.
®
Between May 6 and 7, four security research teams published findings about Anthropic’s Claude that most outlets covered as three separate stories. One involved a water utility in Mexico, another targeted a Chrome extension, and a third hijacked OAuth tokens through Claude Code. In one case, Claude identified a water utility’s SCADA gateway without being told to look for one.
These are not three bugs. They are one architectural question playing out on three surfaces. No single patch released so far addresses all of them.
The common thread is the confused deputy, a trust-boundary failure where a program with legitimate authority executes actions on behalf of the wrong principal. In each case, Claude held real capabilities on every surface and handed them to whoever showed up. An attacker probing a water utility’s network. A Chrome extension with zero permissions. A malicious npm package rewriting a config file.
Carter Rees, VP of Artificial Intelligence at Reputation, identified the structural reason this class of failure is so dangerous. The flat authorization plane of an LLM fails to respect user permissions, Rees told VentureBeat in an exclusive interview. An agent operating on that flat plane does not need to escalate privileges; it already has them.
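Hardy's confused deputy is easy to sketch. The toy below is illustrative only, not Claude's actual code: the deputy holds a capability (here a hypothetical OAuth token) and authorizes the action rather than the principal, so whoever reaches the interface inherits the deputy's full authority.

```python
# Toy confused deputy (illustrative; not Anthropic's implementation).
# The deputy legitimately holds a capability but runs on a "flat
# authorization plane": it checks WHAT is requested, never WHO asks.

SECRETS = {"oauth_token": "tok-123"}  # hypothetical capability the deputy holds

def deputy(request: str) -> str:
    """Serve any caller that phrases the request correctly."""
    if request == "read_token":
        return SECRETS["oauth_token"]  # no principal check happens here
    return "denied"

# The legitimate user and the attacker are indistinguishable to the deputy:
legit = deputy("read_token")
attacker = deputy("read_token")
```

The fix in capability terms is to bind each request to the principal that issued it, which is exactly the re-validation step the surfaces below all skip.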
Kayne McGladrey, an IEEE senior member who advises enterprises on identity risk, described the same dynamic independently in an interview with VentureBeat. Enterprises are cloning human permission sets onto agentic systems, McGladrey said. The agent does whatever it needs to do to get its job done, and sometimes that means using far more permissions than a human would.
Dragos found Claude targeting a water utility’s SCADA gateway without being told to look for one
Dragos published its analysis on May 6. Between December 2025 and February 2026, an unidentified adversary compromised multiple Mexican government organizations. In January 2026, the campaign reached Servicios de Agua y Drenaje de Monterrey, the municipal water and drainage utility serving the Monterrey metropolitan area.
Dragos analyzed more than 350 artifacts. The adversary used Claude as the primary technical executor and OpenAI’s GPT models for data processing. Claude wrote a 17,000-line Python framework containing 49 modules for network discovery, credential harvesting, privilege escalation, and lateral movement. Claude compressed what would traditionally take days or weeks of tooling development into hours, according to the Dragos analysis.
Without any prior ICS/OT context, Claude identified a server running a vNode SCADA/IIoT management interface, classified the platform as high-value, generated credential lists, and launched an automated password spray. The attack failed, and no OT breach occurred, but Claude did the targeting. Dragos noted that this was not a product vulnerability in the traditional sense because Claude performed exactly as designed. The architectural gap, as the firm described it, is that the model cannot distinguish an authorized developer from an adversary using the same interface.
Jay Deen, associate principal adversary hunter at Dragos, wrote that the investigation showed how commercial AI tools have made OT more visible to adversaries already operating within IT.
CrowdStrike CTO Elia Zaitsev told VentureBeat why this class of incident evades detection. Nothing bad has happened until the agent acts, Zaitsev said. It is almost always at the action layer. The Monterrey reconnaissance looked like a developer querying internal systems. The developer tool just had an adversary at the keyboard.
Stack blind spot: OT monitoring does not flag AI-generated recon from IT-side developer tools. EDR sees the process but has no visibility into intent.
LayerX proved any Chrome extension can hijack Claude through a trust boundary Anthropic partially patched
On May 7, LayerX researcher Aviad Gispan disclosed ClaudeBleed. Claude in Chrome uses Chrome’s externally connectable feature to allow communication with scripts on the claude.ai origin, but does not verify whether those scripts came from Anthropic or were injected by another extension. Any Chrome extension can inject commands into Claude’s messaging interface. Zero permissions required.
LayerX reported the flaw on April 27. Anthropic shipped version 1.0.70 on May 6. LayerX found that the patch did not remove the vulnerable handler. LayerX bypassed the new protections through the side-panel initialization flow and by switching Claude into “Act without asking” mode, which required no user notification. Anthropic’s patch survived less than a day.
Mike Riemer, SVP of Network Security Group and Field CISO at Ivanti, told VentureBeat that threat actors are now reverse engineering patches within 72 hours using AI assistance. If a vendor releases a patch and the customer has not applied it within that window, the vulnerability is already being exploited, Riemer said. Anthropic’s ClaudeBleed patch did not survive even a third of that window.
Stack blind spot: EDR watches files and processes but does not monitor extension-to-extension messaging within the browser. ClaudeBleed produces no file writes, no network anomalies, and no process spawns.
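A fleet audit for this vector can start from the extension manifests themselves. The helper below is a minimal sketch, assuming access to each extension's parsed manifest.json; `content_scripts`, `matches`, and `externally_connectable` are standard Chrome manifest keys, but the claude.ai substring match is only a heuristic, not a verdict.

```python
def reaches_claude_origin(manifest: dict) -> bool:
    """Heuristic check: does an extension manifest declare access to claude.ai?

    Inspects content_scripts match patterns and externally_connectable
    patterns, the two channels relevant to the ClaudeBleed report.
    """
    patterns: list[str] = []
    for script in manifest.get("content_scripts", []):
        patterns.extend(script.get("matches", []))
    patterns.extend(manifest.get("externally_connectable", {}).get("matches", []))
    return any("claude.ai" in p for p in patterns)
```

A flagged extension is not necessarily malicious; the point is to produce a review queue for the browser security team rather than rely on EDR, which never sees this channel.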
Mitiga showed a config file rewrite steals OAuth tokens and survives rotation
Also on May 7, Mitiga Labs researcher Idan Cohen published a man-in-the-middle attack chain targeting Claude Code. Claude Code stores MCP configuration and OAuth tokens in ~/.claude.json, a single user-writable file. A malicious npm postinstall hook can rewrite the MCP server URL to route traffic through an attacker’s proxy, capturing OAuth tokens for Jira, Confluence, and GitHub. Because the postinstall hook fires on every Claude Code load, it reasserts the malicious endpoint even after token rotation — meaning the standard incident response step of rotating credentials does not break the attack chain unless the hook itself is removed first.
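The defensive counterpart to this chain is mostly a file-integrity check. The sketch below is illustrative, not Anthropic or Mitiga tooling: it assumes MCP servers live under an `mcpServers` key mapping names to `url` fields (the real schema on disk may differ), and the allowlist entry is a made-up example.

```python
import json
from pathlib import Path

# Hypothetical allowlist of approved MCP endpoints (example entry only).
APPROVED_MCP_URLS = {
    "https://mcp.example-corp.internal/sse",
}

def check_mcp_endpoints(config_path: Path) -> list[str]:
    """Return MCP server URLs in a Claude Code config that are not allowlisted.

    Assumed schema: {"mcpServers": {"<name>": {"url": "..."}}}; verify
    against the actual file on disk before relying on this check.
    """
    config = json.loads(config_path.read_text())
    suspicious = []
    for name, server in config.get("mcpServers", {}).items():
        url = server.get("url", "")
        if url and url not in APPROVED_MCP_URLS:
            suspicious.append(f"{name}: {url}")
    return suspicious
```

Run on a schedule or from a file-integrity monitor, a check like this catches the rewritten endpoint even after the postinstall hook reasserts it, which credential rotation alone never will.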
Mitiga reported the finding on April 10. On April 12, Anthropic classified it as out of scope, according to Mitiga’s published disclosure.
Riemer described the principle this chain violates. I do not know you until I validate you, Riemer told VentureBeat. Until I know what it is and I know who is on the other side of the keyboard, I am not going to communicate with it. The ~/.claude.json rewrite substitutes the attacker’s endpoint for the legitimate one. Claude Code never re-validates.
Riemer has spent 21 years architecting the product he now leads and holds five patents on its security infrastructure. He applies the same defensive logic he built into his own platform. If a threat actor gets in, drop all connections. That is a fail-safe design. Anthropic’s architecture does the opposite. It fails open.
Stack blind spot: Web application firewalls never see local config rewrites. EDR treats JSON file writes as normal developer behavior. Rotating tokens does not break the chain unless responders also confirm the hook is removed.
Anthropic’s response pattern treats the user’s trust decision as the security boundary
Anthropic classified Mitiga’s MCP token theft as out of scope on April 12. The company called OX Security’s STDIO vulnerability affecting an estimated 200,000 MCP servers “expected” and by design. Anthropic declined Adversa AI’s TrustFall as outside its threat model, according to Adversa’s published disclosure. ClaudeBleed was partially patched. Across all four disclosures, the researchers say the underlying trust model remains exploitable.
Alex Polyakov, co-founder of Adversa AI, told The Register that each vulnerability gets patched in isolation, but the underlying class has not been fixed.
Zaitsev offered a frame for why consent alone cannot serve as the trust boundary. If you think you can always understand intent, Zaitsev told VentureBeat, then you would also think it is possible to write a program that reads a text transcript and figures out if someone is lying. That is intuitively an impossible problem to solve.
Adversa AI showed that a cloned repo can auto-execute arbitrary code the moment a developer clicks trust
Adversa AI researcher Alex Polyakov published TrustFall, demonstrating that project-scoped Claude configuration files in a cloned repository can silently authorize MCP servers to run as native OS processes with full user privileges. The moment a developer clicks the generic “Yes, I trust this folder” dialog, any MCP server defined in the project config launches. The dialog does not show what it authorizes.
In automated build pipelines where Claude Code runs without a screen, the trust dialog never appears. The attack executes with zero human interaction. Adversa confirmed the pattern is not unique to Claude Code. All four major coding agents (Claude Code, Cursor, Gemini CLI, and GitHub Copilot) can auto-execute project-defined MCP servers the moment a developer accepts that dialog.
Stack blind spot: No current security tooling can tell the difference between a legitimate project config and a malicious one. The trust dialog is the only thing standing between the developer and arbitrary code execution, and it does not show what it is about to authorize.
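A pre-open check for this vector is mostly file enumeration. The sketch below uses the config file names called out in the disclosures (.claude, .claude.json, .mcp.json, CLAUDE.md); treat that list as illustrative rather than exhaustive, since agents add config surfaces over time.

```python
from pathlib import Path

# Config names the disclosures associate with project-scoped agent trust;
# illustrative, not exhaustive.
AGENT_CONFIG_NAMES = (".claude", ".claude.json", ".mcp.json", "CLAUDE.md")

def scan_repo_root(repo_root: Path) -> list[str]:
    """Flag agent config files in a freshly cloned repo, before anyone
    clicks 'Yes, I trust this folder' or a headless pipeline skips the
    dialog entirely."""
    return [name for name in AGENT_CONFIG_NAMES
            if (repo_root / name).exists()]
```

Any hit routes the repo to DevSecOps review before a developer or CI job opens it in a coding agent; the scan substitutes an explicit gate for the blanket trust dialog.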
The matrix below maps each surface that Claude wrongly trusted, the stack blind spot, the detection signal, and the recommended action.
Claude Confused Deputy Audit Matrix

Surface: claude.ai / API (Dragos, May 6; 350+ artifacts analyzed)
Who Claude trusted: An attacker posing as an authorized user via Claude’s prompt interface. Claude cannot distinguish a developer mapping internal systems from an adversary doing the same thing through the same interface.
Why your stack misses it: OT monitoring watches ICS protocols and anomalous traffic patterns, but AI-generated recon originates from an IT-side developer tool, not from the OT network. The queries look identical to legitimate developer activity because they are legitimate developer activity with an adversary at the keyboard.
Detection signal: Query Claude API logs for requests referencing internal hostnames, IP ranges, or SCADA/ICS keywords. Alert on more than five credential generation requests against internal services in 60 minutes. Escalate to the OT team any AI-originated query touching vNode, SCADA, HMI, or PLC keywords.
Recommended action: Segment AI-assisted sessions from OT-adjacent network segments. Log all Claude API calls referencing internal hostnames or IP ranges. Alert on automated credential generation targeting internal authentication interfaces. Require explicit OT authorization for any AI tool with internal network access.

Surface: Claude in Chrome (LayerX, May 7; v1.0.70 patch bypassed in under 24 hours)
Who Claude trusted: Any script running in the claude.ai browser context, including scripts injected by zero-permission extensions. The externally_connectable manifest trusts the origin (claude.ai), not the execution context, so any extension can inject into that origin.
Why your stack misses it: EDR monitors file system activity, process execution, and network connections, but extension-to-extension messaging happens entirely within the browser runtime: no file writes, no network anomalies, no process spawns. EDR has zero visibility into Chrome’s internal messaging API.
Detection signal: Query the Chrome extension inventory for any extension with content scripts targeting claude.ai in the manifest. Alert when a new extension is installed with claude.ai in its permissions or content script targets. Escalate to the browser security team any extension communicating with Claude’s messaging interface.
Recommended action: Audit Chrome extensions across the fleet for claude.ai content script access. Disable “Act without asking” mode in Claude in Chrome enterprise-wide. Deploy browser security tooling that inspects extension messaging channels. Monitor for extensions injecting content scripts into the claude.ai domain.

Surface: Claude Code MCP (Mitiga, May 7; classified “out of scope” by Anthropic on April 12)
Who Claude trusted: A rewritten ~/.claude.json routing MCP traffic through an attacker-controlled proxy. Claude Code reads the MCP server URL from the config file on every load and never re-validates that the URL matches the endpoint the user originally authorized.
Why your stack misses it: A WAF inspects HTTP traffic between clients and servers and never sees a local config file rewrite. EDR treats JSON file writes in the user’s home directory as normal developer behavior. Token rotation feeds the chain because the npm postinstall hook reasserts the malicious URL on every Claude Code load.
Detection signal: Put a file integrity monitor on ~/.claude.json watching for MCP server URL changes. Alert when an MCP server URL changes to an endpoint not on the approved allowlist. The IR team confirms postinstall hook removal before closing the ticket; token rotation alone is insufficient.
Recommended action: Monitor ~/.claude.json for unexpected MCP endpoint changes against an allowlist. Block or alert on npm postinstall hooks that modify files outside the package directory. Maintain a centralized MCP server URL allowlist. Do not assume token rotation breaks the chain without confirming the malicious hook is removed first.

Surface: Claude Code project settings (Adversa AI, May 7; affects Claude, Cursor, Gemini CLI, and Copilot)
Who Claude trusted: A project-scoped .claude configuration file in a cloned repository. Clicking the generic “Yes, I trust this folder” dialog silently authorizes any MCP server defined in the project config, and the dialog does not show what it authorizes.
Why your stack misses it: No current security tooling can tell the difference between a legitimate project config and a malicious one. In automated build pipelines, Claude Code runs without a screen, so the attack executes with zero human interaction against pull-request branches.
Detection signal: Run a pre-clone scan for .claude, .claude.json, .mcp.json, and CLAUDE.md files in the repository root. Alert when a repo contains an MCP server definition not on the approved organizational list. DevSecOps reviews before any developer opens the repo in Claude Code or any coding agent.
Recommended action: Scan cloned repositories for .claude configuration files before opening in any AI coding agent. Require explicit per-server MCP approval rather than blanket folder trust. Flag repos that define custom MCP servers in project configuration. Audit CI/CD pipelines running Claude Code headless, where trust dialogs are skipped entirely.
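Several of the alert triggers in the matrix reduce to a sliding-window count, such as more than five credential generation requests inside 60 minutes. A minimal sketch, assuming each logged event is just a timestamp:

```python
from datetime import datetime, timedelta

def breaches_threshold(events: list[datetime],
                       threshold: int = 5,
                       window: timedelta = timedelta(minutes=60)) -> bool:
    """True if strictly more than `threshold` events fall inside any single
    window (mirroring the matrix trigger of >5 requests in 60 minutes)."""
    events = sorted(events)
    for i, start in enumerate(events):
        # Count events from this one forward that still fit in the window.
        count = sum(1 for t in events[i:] if t - start <= window)
        if count > threshold:
            return True
    return False
```

The O(n²) scan is fine for per-user alerting volumes; a production SIEM rule would express the same logic as a windowed aggregation.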
The deputy changed
Norm Hardy described the confused deputy in 1988. The deputy he had in mind was a compiler. This one writes 17,000-line exploitation frameworks, identifies SCADA gateways on its own, and holds OAuth tokens to Jira, Confluence, and GitHub. Four research teams found the same failure class on four surfaces in the same week. Anthropic’s response to each one was some version of “the user consented.” The matrix above is the audit Anthropic has not built. If your team runs Claude Code or Claude in Chrome, start there.