In the letter, which was also shared online, the executive said the board and its independent advisors thoroughly reviewed GameStop’s proposal before making a decision.
Building a railway tunnel through somewhere as historic as Varberg in Sweden meant the authorities couldn’t just send in the contractors straightaway. That’s because Swedish law requires archaeological digs first in these sensitive zones, since careless digging could destroy valuable artifacts.
So a team of archaeologists and marine archaeologists from Arkeologerna, Bohuslän Museum, Visual Archaeology, and Cultural Environment Halland got to work. They started digging in 2019 and, over the next few years, turned up a whopping six old ships, some dating all the way back to the Middle Ages.
The dig itself was part of the Varberg Tunnel project, a major undertaking that’s taking the main stretch of rail and burying it under Varberg itself, similar to the E39 Ferry Free project in Norway. This is a 3 km (1.86 mi) stretch, which, after moving underground, will give the waterfront back to the locals and smooth out commutes. The area itself was once a harbor with defensive structures, so old vessels showing up there makes sense. The ships were all found buried in mud, and four of them are from the Middle Ages, while another dates back to the 17th century. The sixth is a bit of a mystery, though, since the team couldn’t pin down its age.
The crew detailed their findings in a report published by Arkeologerna, though the initial version covers only three of the wrecks. Of these, the second wreck got the most thorough look, since it was the best preserved. Wrecks five and six, on the other hand, had to be lifted out of the mud in a hurry because of the tunnel construction’s tight schedule, and they weren’t in great shape.
A ship may have been set on fire on purpose
The second wreck was also the most interesting of the bunch, and a significant section of the ship was found in one piece. Overall, two starboard hull sections, a bunch of scattered timbers, and a berghult – a wooden strip bolted to the outside of the hull, mainly there to take a beating when the ship pulls up to a quay – were fished out.
The ship itself dates back to the late 1530s, putting it roughly in the same window as France’s deepest shipwreck. It’s made out of oak from the Halland and West Sweden timber stock. It’s also built clinker-style, meaning the planks overlap at their edges rather than sitting flush. Perhaps the oddest bit about the whole ship is the burn marks on that berghult. The team reckons the whole thing went up in flames before sinking, possibly torched on purpose.
Then there’s the fifth wreck, which has plenty in common with the second one. Even though it was built about a century later, in the 1600s, it uses the same kind of oak. This one probably worked the waters around Varberg and nearby Ny Varberg, another medieval city in the area, and likely sailed through the Baltic Sea too. Those are the same waters where another historic Navy shipwreck broke through the surface after 400 years under the sea. The final one in the report is Wreck 6, and it’s the odd one out. It’s a carvel-built vessel, meaning the planks sit edge to edge against the frame instead of overlapping.
The thing is, with large infrastructure projects popping up along Sweden’s West Coast, it’s likely that even more preserved shipwrecks will be unearthed in the region. After all, this area has served as a port for centuries.
Google identified the first zero-day exploit it believes was developed with AI and thwarted a planned mass exploitation event. The GTIG report documents state-sponsored actors from China, North Korea, and Russia using AI for vulnerability research, autonomous malware using Google’s Gemini API, and supply chain attacks targeting the AI software ecosystem.
Google has identified the first zero-day exploit it believes was developed with artificial intelligence. The criminal threat actor that built it planned to use it in a mass exploitation event. Google’s Threat Intelligence Group discovered the vulnerability before it was deployed, worked with the affected vendor to patch it, and disrupted the operation. The exploit, a Python script that bypasses two-factor authentication on a popular open-source system administration tool, contained hallucinated CVSS scores, educational docstrings, and the structured textbook formatting characteristic of large language model output. Google has high confidence that an AI model was used to find and weaponise the flaw.
The disclosure comes in a report published on Monday by the Google Threat Intelligence Group that documents a maturing transition from experimental AI-enabled hacking to what GTIG calls the “industrial-scale application of generative models within adversarial workflows.” State-sponsored actors from China and North Korea are using AI for vulnerability research. Russia-nexus threat actors are deploying AI-generated decoy code against Ukrainian targets. An Android malware called PROMPTSPY uses Google’s own Gemini API to autonomously navigate victim devices, capture biometric data, and block its own uninstallation. The AI cybersecurity arms race that experts warned about is no longer theoretical. It is in Google’s incident response logs.
The exploit targeted a semantic logic flaw, not a memory corruption bug or an input sanitisation error, but a high-level design mistake where the developer hardcoded a trust assumption into the two-factor authentication logic. Traditional vulnerability scanners and fuzzers are optimised to detect crashes and data-flow sinks. They miss this category of flaw. Large language models do not. Frontier models can perform contextual reasoning, reading the developer’s intent and correlating the authentication enforcement logic with hardcoded exceptions that contradict it. The model surfaced a dormant logic error that appeared functionally correct to every traditional scanner but was strategically broken from a security perspective.
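To make the category concrete, here is a hypothetical illustration of a semantic logic flaw of the kind described: the function names, client ID, and OTP value below are invented for the example and have nothing to do with the affected tool.

```python
# Hypothetical illustration of a "semantic logic flaw": a hardcoded trust
# assumption baked into two-factor authentication logic. Nothing here is from
# the affected product; all names and values are invented.

def check_otp(user: str, otp_code: str) -> bool:
    # Stand-in for a real TOTP comparison.
    return otp_code == "123456"

def verify_two_factor(user: str, otp_code: str, client_id: str) -> bool:
    """Check a user's one-time password before granting admin access."""
    # The dormant logic error: requests from the "legacy-monitoring" client
    # were assumed to be internal, so 2FA is silently skipped for them.
    # Nothing crashes, no input is malformed, yet the policy is broken.
    if client_id == "legacy-monitoring":
        return True
    return check_otp(user, otp_code)  # the intended enforcement path

# A fuzzer sees no crash here; a model reading the code can notice that the
# client_id branch contradicts the stated authentication policy.
print(verify_two_factor("admin", "wrong-code", "legacy-monitoring"))  # True
```

The point of the sketch is that the bypass is a deliberate-looking design decision, not a memory-safety defect, which is why crash-oriented tooling walks straight past it.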
GTIG worked with the impacted vendor to responsibly disclose the vulnerability. It does not believe Gemini was used. The criminal group behind the exploit has, according to Google, “a strong record of high-profile incidents and mass exploitation.” The planned mass exploitation event was prevented by proactive counter-discovery. The implication is that AI has crossed a threshold. It can now find vulnerabilities that humans and traditional tools miss, and it is being used by criminal actors to do so at scale.
The autonomous malware
PROMPTSPY is an Android backdoor first identified by ESET in February 2026. Initial reporting focused on its use of the Gemini API to maintain persistence by navigating the Android user interface to pin the malicious application in the recent apps list. Google’s analysis revealed capabilities that go significantly further.
The malware contains an autonomous agent module called GeminiAutomationAgent. It serialises the device’s visible user interface hierarchy into an XML-like format via the Accessibility API and sends it to the gemini-2.5-flash-lite model. The model returns structured JSON responses containing action types and spatial coordinates, which PROMPTSPY parses to simulate physical gestures: clicks, swipes, and navigation. The AI interprets the device’s state and generates commands in real time without human supervision.
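The response-parsing half of that loop can be sketched in a few lines. The field names ("action", "x", "y") and the response shape below are assumptions for illustration, not the actual PROMPTSPY protocol.

```python
import json
from dataclasses import dataclass

# Sketch of the loop GTIG describes: the model returns structured JSON with an
# action type and screen coordinates, which the malware turns into simulated
# gestures. Field names and shape here are invented for the example.

@dataclass
class Gesture:
    action: str  # e.g. "click" or "swipe"
    x: int
    y: int

def parse_model_response(raw: str) -> list[Gesture]:
    """Turn the model's structured JSON reply into gesture commands."""
    return [Gesture(a["action"], a["x"], a["y"]) for a in json.loads(raw)]

# Example reply: the model has "read" the serialised UI hierarchy and decided
# to tap a button at (540, 1710).
reply = '[{"action": "click", "x": 540, "y": 1710}]'
for g in parse_model_response(reply):
    print(g.action, g.x, g.y)  # a dispatcher would inject the gesture here
```

What makes this architecture notable is that the decision logic lives entirely in the remote model: the on-device code only serialises state out and replays coordinates back.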
PROMPTSPY can capture victim biometric data to replay authentication gestures and regain access to compromised devices. If a victim tries to uninstall it, the malware identifies the on-screen coordinates of the uninstall button and renders an invisible overlay that intercepts touch events, making the button appear unresponsive. Its command and control infrastructure, including Gemini API keys and VNC relay servers, can be updated dynamically at runtime, meaning that blocking specific endpoints does not disable the backdoor. Google has disabled the assets associated with this activity and confirmed that no apps containing PROMPTSPY are available on Google Play.
The state actors
Chinese and North Korean state-sponsored threat actors are using AI for vulnerability research with increasing sophistication. GTIG observed UNC2814, a Chinese-linked group, directing Gemini to act as a “senior security auditor” and “C/C++ binary security expert” to support vulnerability research into TP-Link firmware and file transfer protocol implementations. North Korea’s APT45 sent thousands of repetitive prompts that recursively analysed different CVEs and validated proof-of-concept exploits, building an arsenal of exploit capabilities that would be impractical to manage without AI assistance.
Chinese threat actors experimented with a specialised vulnerability repository called wooyun-legacy, a Claude code skill plugin containing a distilled knowledge base of more than 85,000 real-world vulnerability cases collected by the Chinese bug bounty platform WooYun between 2010 and 2016. By priming an AI model with this vulnerability data, the actors enabled in-context learning that steered the model to approach code analysis like an experienced researcher and identify logic flaws the base model would otherwise miss.
Russia-nexus actors targeting Ukrainian organisations are deploying malware families called CANFAIL and LONGSTREAM, both of which use AI-generated decoy code to obfuscate their malicious functionality. CANFAIL’s source code contains developer comments that explicitly identify unused blocks as filler content designed to disguise malicious activity. LONGSTREAM contains 32 instances of code querying the system’s daylight saving status, a repetitive benign-looking operation that exists solely to camouflage the downloader’s real purpose. APT27, a Chinese-linked group, used Gemini to accelerate development of an operational relay box network management tool with multi-hop proxy configurations designed to obfuscate intrusion origins.
The supply chain
A cyber crime group called TeamPCP claimed responsibility for multiple supply chain compromises of popular GitHub repositories and associated GitHub Actions in late March 2026, including Trivy, Checkmarx, LiteLLM, and BerriAI. The attackers gained initial access through compromised PyPI packages and malicious pull requests, then embedded credential-stealing malware to extract AWS keys and GitHub tokens from affected build environments. The stolen credentials were monetised through partnerships with ransomware and data theft extortion groups.
The compromise of LiteLLM, an AI gateway utility used to integrate multiple large language model providers, is particularly significant. Because the package is widely deployed, the breach could expose AI API secrets across the software supply chain. GTIG notes that attackers who gain access to an organisation’s AI systems through compromised dependencies could leverage internal models to identify, collect, and exfiltrate sensitive information at scale, or perform reconnaissance to move deeper within the network. The AI software ecosystem has become both a tool for attackers and a target.
Google announced its agent infrastructure at Cloud Next 2026, positioning Gemini as the reasoning backbone for autonomous AI workflows across enterprise. The same company is now documenting how adversaries are using agentic workflows to orchestrate attacks. The GTIG report describes threat actors deploying tools called Hexstrike and Strix against a Japanese technology firm and an East Asian cybersecurity platform, with Hexstrike using a temporal knowledge graph to maintain persistent state of the attack surface and autonomously pivot between reconnaissance tools. The agents that Google is selling to enterprises are being mirrored by agents that adversaries are deploying against them.
The defence
Google’s response includes Big Sleep, an AI agent developed by Google DeepMind and Google Project Zero that searches for unknown security vulnerabilities in software. Big Sleep found the vulnerability that the criminal group planned to exploit before the attack was launched. Google also introduced CodeMender, an AI-powered agent that uses Gemini’s reasoning capabilities to automatically fix critical code vulnerabilities. The defensive AI found the flaw. The offensive AI created the exploit. Google’s proactive discovery arrived first.
Google has repositioned Chrome as an enterprise security platform with real-time data loss prevention and AI governance controls, reporting a 50 per cent reduction in unauthorised AI data transfers. The investment in defensive infrastructure reflects the scale of the threat GTIG is documenting: 308 petabytes of industry telemetry in 2025 across more than four million identities, endpoints, and cloud assets, producing nearly 30 million investigative leads. No human team can process that volume. The defensive AI is not optional. It is the only way to match the speed of the offensive AI.
The policy gap
The Trump administration blocked the expansion of Anthropic’s Mythos, the most powerful vulnerability-discovery AI ever built, even as the GTIG report documents criminal and state-sponsored actors using AI to find and exploit the same types of flaws that Mythos was designed to detect. The policy contradiction is that the US government is simultaneously restricting access to defensive AI and facing an adversary landscape in which offensive AI is being deployed at industrial scale.
UK banks received their Mythos briefing within days of the European access crisis, illustrating the scramble among governments and financial institutions to gain access to AI security tools that can match the capabilities GTIG describes. Euro-area finance ministers convened to discuss the fact that no EU government had access to the most advanced vulnerability-discovery AI while the adversaries documented in the GTIG report, state-sponsored actors from China, North Korea, and Russia, were already using AI to find zero-days, generate autonomous malware, and attack the AI software supply chain.
The GTIG report is 33 pages of evidence that the AI cybersecurity arms race has moved from hypothesis to operational reality. Criminal actors are using AI to discover zero-day vulnerabilities and plan mass exploitation events. State-sponsored groups are building AI-augmented exploit arsenals. Autonomous malware is using commercial AI APIs to navigate victim devices without human supervision. The supply chain that connects AI models to enterprise systems is under active attack. Google’s defensive AI found the zero-day before the attackers could deploy it. The question the report does not answer is how many zero-days have been found by actors whose work Google has not yet detected.
The announcement caught many fans off guard, given the film’s epic scope and blockbuster-level cast.
What is the Voltron live-action movie based on?
For those unfamiliar, Voltron: Defender of the Universe is a beloved 1984 animated series that followed a group of pilots who commandeer five giant robotic lions that combine to form a colossal warrior robot called Voltron. The team uses this mighty machine to battle an intergalactic warlord named Zarkon and his army of monsters.
Henry Cavill, best known as Superman and The Witcher‘s Geralt of Rivia, will play King Alfur, a legendary warrior and former ruler of planet Altea. Sterling K. Brown plays Zarkon, the film’s primary villain and Alfur’s nemesis.
The rest of the cast includes Rita Ora, Alba Baptista, John Harlan Kim, Samson Kayo, Tharanya Tharan, Daniel Quinn-Toye, Laura Gordon, Tim Griffin, and Nathan Jones.
Everything about this movie screams big screen, so the streaming news hits differently
The idea of turning Voltron into a live-action movie has been floating around Hollywood since 2005, passing through several studios before Amazon MGM Studios finally secured the rights.
Filming wrapped last year, and the movie is expected to arrive sometime in 2027. Rawson Marshall Thurber, who directed Red Notice, helmed the project alongside a script he co-wrote with Ellen Shanman.
The production even built a massive physical rig called the “Lion’s Den” to throw actors around and capture their reactions during robot combat sequences, minimizing heavy CGI reliance.
Given all that ambition, bypassing theaters feels like an odd call, and fans will inevitably start asking whether the decision is about convenience or something more telling about the final product.
How are fans reacting to this?
Fan reaction to the streaming news has been mixed. Some called it “genuinely disappointing” for a film with obvious big-screen potential, while others were perfectly happy watching from home.
Hey, @AmazonMGMStudio It’s genuinely disappointing to hear the new live-action Voltron movie is skipping theaters and going straight to streaming on Amazon Prime. This felt like a film with real theatrical potential… A massive sci-fi spectacle that could’ve brought audiences…
Love the hell out of this. Going to theaters is in the past for me and the majority of us have a subscription to watch stuff. It’s time all new movies just go directly to streaming or a couple weeks after theater release.
Whether the streaming decision signals anything about the film’s quality remains to be seen, but Voltron is clearly one to watch when it finally lands on Prime Video.
Part of the Seattle skyline as seen from the waterfront. (GeekWire Photo / Kurt Schlosser)
Two new rules governing H-1B visas took effect last year, aiming to fix the program’s two most persistent criticisms: that foreign workers were taking jobs Americans wanted, and that visa holders were essentially indentured to their employers in order to remain in the country.
So are the changes solving the problems as hoped? Yes, said one expert.
“It is working as intended,” said Xiao Wang, co-founder and CEO of Boundless Immigration, a Seattle-based startup that helps companies and workers navigate the immigration process.
But there’s a new challenge: the system gives employers a strong incentive to offer higher salaries — not because the job requires it, but to improve their workers’ odds in the lottery. And that could make international hiring more expensive.
The new rules replaced a random, equal-odds lottery with a weighted system that gives the highest-wage H-1B applicants odds four times better than the lowest-wage workers. Employers also now pay a $100,000 fee per new application. The changes come as tech-sector layoffs have raised fresh questions about whether the program displaces American workers — one of the criticisms the new rules were designed to address.
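One simple way to realise a "four times better odds" lottery is to give each registration a number of entries proportional to its wage level. The sketch below assumes a one-entry-per-level weighting (Level I = 1 entry, Level IV = 4), which reproduces the 4x ratio described; the actual regulatory mechanism may differ.

```python
import random

# Sketch of a wage-weighted lottery: each registration gets as many entries as
# its wage level (I=1 ... IV=4), so a Level IV filing has 4x the odds of a
# Level I filing. The exact weighting scheme is assumed for illustration.

def build_entry_pool(registrations: dict[str, int]) -> list[str]:
    """registrations maps applicant id -> wage level (1-4)."""
    pool = []
    for applicant, level in registrations.items():
        pool.extend([applicant] * level)  # one entry per wage-level point
    return pool

def run_lottery(registrations: dict[str, int], cap: int, seed: int = 0) -> set[str]:
    """Draw until `cap` distinct applicants are selected (re-draws ignored)."""
    pool = build_entry_pool(registrations)
    rng = random.Random(seed)
    selected: set[str] = set()
    while len(selected) < min(cap, len(registrations)):
        selected.add(rng.choice(pool))
    return selected

regs = {"a_level1": 1, "b_level4": 4, "c_level2": 2}
print(sorted(run_lottery(regs, cap=2)))
```

Under this scheme the incentive the article describes falls straight out of the arithmetic: bumping a worker one wage level adds an entry to the pool, so inflating salaries directly buys lottery odds.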
Boundless, whose clients include companies applying for H-1B workers, released a report on the 2026 visa application cycle while federal data is still pending. Its findings show the wage weighting is having a real effect, but with notable tradeoffs:
Higher-wage workers were selected at significantly higher rates than entry-level candidates: 68% of Level III and 64% of Level IV workers were selected, compared to 40% for Level I and 36% for Level II. That pattern suggests the program is increasingly targeting harder-to-fill roles. Under last year’s random lottery, Boundless client approval rates ranged from 32% to 49% across wage levels with no consistent pattern.
But the $100,000 fee is proving prohibitive for rural hospitals and healthcare facilities trying to hire doctors and nurses. That’s also true for startups eager to employ international talent, such as foreign-born founders recruiting from their personal networks.
“This cycle was meaningfully different from last year,” said Priyanka Kulkarni, founder and CEO of Casium, a Seattle startup that also provides immigration services. “Registrations were down, and the weighted lottery is doing what it was designed to do. Level III and Level IV selection rates both came in above 50%.”
Many of the early-stage companies that Casium serves came out “in good shape,” she added.
While the H-1B program targets wide-ranging sectors including healthcare, finance, academia, architecture and sciences, Seattle-area tech companies remain among the program’s largest users. Amazon ranked first among all U.S. employers with 13,265 approved applicants in 2025, and Microsoft ranked third with 6,258, according to federal data. Meta came in second and an Indian IT outsourcing firm placed fourth.
India-born workers have dominated H-1B approvals, accounting for about 71% in recent years. Whether that share will shift remains to be seen. Bloomberg has documented how Cognizant, a major IT outsourcing firm, focused on securing H-1B visas for lower-level workers from India, a practice that the new rules could disrupt.
Wang cautioned that companies and workers are still adjusting their strategies, and he expects the numbers to keep moving. The government projected only 15% of Level I applicants would be approved, suggesting that the rate observed by Boundless in the first round could decline.
At the same time, the wage thresholds could also push employers to inflate salaries in order to move workers into higher tiers and improve their lottery odds.
For example, in King County — which includes Seattle, Redmond and Bellevue — the prevailing annual wage for a software engineer is $117,000, while a Level IV worker in the same area earns $212,000. That gap could also push companies to locate H-1B hires in lower-cost cities, choosing Cincinnati over Seattle or San Francisco to hit a given wage tier at a lower cost.
For highly skilled workers, though, the new system may actually be a draw. Better odds for higher earners make the visa process more predictable, which could attract top-tier international talent who previously avoided the program’s uncertainty.
“All of a sudden it has turned from a crapshoot of being able to stay in this country to an expectation that you can stay in this country if you’re at a certain wage level,” Wang said.
Three Amazon employees told The Financial Times that the company’s internal AI usage metrics are likely inflated. According to the sources, employees are using MeshClaw, Amazon’s internal AI platform, to perform non-essential tasks in an effort to make a stronger impression on managers.
Powders, gels, and fermented nutrients could someday join the battlefield menu
Eating in the field has never been fun for US Army soldiers. And they may soon face even stranger field rations than they do today: Alternative proteins delivered in formats ranging from powders and sauces to gels and semi-solids.
The Army on Monday published a sources sought announcement to gather submissions from interested industry and academic partners in the “alternative protein sector,” willing to help the branch develop rations that are lighter weight, have a longer shelf life, and could potentially be produced in combat-forward environments.
According to the announcement, the Army is looking for submissions covering four areas: Technologies for developing alternative proteins, like fermentation and other biomanufacturing methods, meat alternative products for ration inclusion, consumer research seeking to “enhance the acceptability … of alternative proteins within a military population,” and food samples for government taste and performance evaluations.
As an added element, the Army said that it wants ration products that meet its existing “stringent requirements for nutrition, shelf stability, and palatability,” though anyone who has served in the US Army and eaten field rations may have doubts about the military branch’s commitment to palatability on its Meal, Ready-to-Eat (MRE).
As a US Army veteran, this vulture can attest to an unfortunate level of familiarity with MREs, circa 2002. Beef frankfurters were famously one of the worst, as was the so-called “beef steak” meal that was more like a compressed loaf of meat leavings than an actual steak. The flavor didn’t matter at the end of the day, though, when you’d just marched 15 miles carrying 75 pounds on your back: You just needed sustenance, and even that five pack of frankfurters with a taste I shudder to recall sounded good under the right circumstances.
The MRE menu lineup, which has changed several times in the past 20 years, includes a few vegetarian options, and it’s those that make one of the Army’s requirements for this program so surprising. Civilians might be surprised to learn how popular the non-meat meals were, even among hardcore carnivores.
The four or so vegetarian options in the overall MRE lineup were always the first to go when I was in. Not only did they replace military mystery MRE meat with something more appealing to eat out of an envelope, but they were actually tasty – relatively, of course. Vegetarian MREs also tended to be slightly less calorically dense than their animal-derived counterparts, so they included extra bits that made them an even bigger hit.
Whether that would translate into soldiers embracing alternative proteins in future MREs isn’t a guarantee, of course. Most weren’t choosing the veggie MREs for alignment with their personal ethics so much as that they wanted a meal that didn’t suck.
The Army’s goal of developing “lightweight and nutrient-dense ration solutions to reduce logistical burdens and physical load on warfighter” through the program is definitely a noble one. MREs get heavy quickly on a long field expedition, but the open-ended wording of the announcement doesn’t suggest the most appetizing solutions will be the first to emerge.
“Gel/semi-solid formats, dry powder mixes, [and] sauce-style components” are all on the table, with the Army saying the format of “novel ready-to-eat formats … is at the offeror’s discretion.”
In other words, future ration components could include gel packs stuffed with fermented mushroom protein and other nutrients, some form of unholy shake, or whatever else food scientists can come up with.
Interested parties will need to move fast, though: as a sources sought announcement, this isn’t a solicitation and carries no promise of a research grant or procurement dollars, and responses are due by Friday, May 15, prepared at no cost to the government.
The submissions the Army receives could help shape future solicitations in this space, however, meaning the MRE we currently know and … love … may eventually evolve into something rather more futuristic. Hopefully it tastes a bit better.
One thing that soldiers will probably be thrilled about? No bugs in whatever field rations come next.
“We are specifically excluding solutions related to cell-cultured, lab-grown meat or insect protein,” the Army said, though we note that’s only for the purposes of this particular announcement, so tomorrow’s soldiers might still be subsisting on crickets and ants. ®
Motorola has officially confirmed the launch date for the Razr Fold in India. The company’s first book-style foldable will debut in the country on May 13 after being introduced earlier this year at CES. Here’s everything you need to know.
Motorola Razr Fold Specifications
The Razr Fold has an 8.1-inch LTPO OLED foldable screen with 2K resolution and a 120Hz refresh rate. The 6.6-inch outer OLED display refreshes at 165Hz. To improve durability, Motorola uses Gorilla Glass Ceramic 3 on the cover screen and Ultra Thin Glass on the foldable panel.
Instead of a compact flip design, the phone opens like a tablet for a bigger viewing experience. The smartphone will be available in Blackened Blue and Lily White shades, as well as a FIFA World Cup 26 special edition. For performance, Motorola has used Qualcomm’s Snapdragon 8 Gen 5 chip in the Razr Fold. Buyers will get storage options such as 12GB RAM with 256GB storage and 16GB RAM with 512GB storage. Motorola also sells a higher 1TB storage version in global markets.
Furthermore, Motorola is offering Android 16 on the Razr Fold along with its Hello UX/My UX experience. It features a desktop-style layout, trackpad support, and stylus compatibility for work and multitasking. Motorola will also provide up to seven years of software updates for the device.
Camera and Battery
Motorola has focused heavily on cameras with the Razr Fold. The smartphone includes a 50MP main sensor, a 50MP ultra-wide camera, and a 50MP periscope telephoto lens for zoom photography. Users also get separate selfie cameras on both the outer and inner displays. The foldable supports macro shots and additional camera features for different shooting situations.
One of the main highlights of the Motorola Razr Fold is its large 6,000mAh battery. The phone supports both 80W wired charging and 50W wireless charging. With such a powerful battery and fast charging support, the Razr Fold promises better battery life than most competing folding-screen smartphones.
Price and Availability
Globally, the smartphone debuted at EUR 1,999, which is around Rs 2.14 lakh. Although Indian pricing is expected to be lower, the Razr Fold will likely remain a premium foldable smartphone, competing with devices like the Samsung Galaxy Z Fold7 and the Google Pixel 10 Pro Fold. Buyers will be able to purchase the foldable through Flipkart, the company’s official website, and retail stores across the country.
Western Digital claims spinning down hard drives no longer cripples application performance
WD’s solution offers lower storage power consumption without sacrificing consistent response times
Reduced drive power usage allows more storage capacity inside existing rack limits
Western Digital (WD) has developed a new power-optimized drive technology which allows hard disks to spin down without causing major performance penalties.
The company’s Chief Product Officer, Ahmed Shihab, said the technique lowers power consumption enough to matter to customers while preserving the performance they expect.
Traditional hard drives consume significant power even when they are not actively being accessed by users or applications, and this is not sustainable in the long run.
Spinning down drives saves power
The technique allows drives to enter a low-power state without the lengthy spin-up delays that have made such approaches impractical in the past.
When a drive spins down, it consumes far less electricity, which directly reduces the operating costs of large storage arrays.
The capacity benefit comes from a secondary effect: lower power consumption per drive means data center operators can pack more drives into the same power and cooling envelope.
Western Digital claims the performance impact of spinning drives down and back up is small enough that most applications will not notice the difference.
The company has designed the technology to be sympathetic to the software stack running above it, requiring no major changes from customers.
Previous attempts to spin down hard drives for power savings have failed because the performance hit was simply too severe for production environments.
Applications expecting sub-millisecond access times would stall while waiting for disks to spin back up to full operating speed.
Western Digital’s new formula balances power savings against accessibility, keeping the delay short enough to stay within typical application timeouts.
The company says this is the first time it has seen genuine customer interest and positive feedback on lower-power storage technology.
Hyperscale operators have been asking for storage solutions that do not force them to choose between energy efficiency and reliable performance.
A new storage tier between fast and slow drives
The technology effectively creates a new storage tier that sits between high-performance SSDs and traditional archival hard drives.
Data that is accessed frequently stays on fully spun-up drives, while less critical data can be parked on drives that spin down when idle.
The operating system and storage software determine which data belongs on which tier, not the drive itself.
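That split of responsibility can be sketched in a few lines. The following is a hypothetical tiering policy; the one-hour idle threshold and the hot/cold labels are assumptions for illustration, not anything WD or an OS vendor specifies:

```python
import time

# Hypothetical tiering policy: the storage software, not the drive,
# decides which data lives on always-spinning ("hot") drives and which
# can be parked on drives that spin down when idle ("cold").
IDLE_THRESHOLD_S = 3600  # assumed: park data untouched for an hour

def choose_tier(last_access_ts: float, now: float) -> str:
    """Return 'hot' for fully spun-up drives, 'cold' for spin-down drives."""
    return "hot" if (now - last_access_ts) < IDLE_THRESHOLD_S else "cold"

now = time.time()
print(choose_tier(now - 60, now))     # recently read -> "hot"
print(choose_tier(now - 86400, now))  # untouched for a day -> "cold"
```

Real storage stacks track access frequency and migration cost rather than a single timestamp, but the drive itself never makes this call.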
Western Digital’s innovation is purely on the hardware side, making spin-down practical without waiting for software to catch up.
The capacity gains come from density, not from larger platters or new recording techniques.
More drives in the same power budget means more total terabytes per rack, and that is a math problem that every data center operator understands.
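That math is easy to sketch. Every number below is an illustrative assumption, not a Western Digital figure; the point is only how a lower average draw per drive compounds into more terabytes per rack:

```python
# Back-of-the-envelope rack math. All numbers are illustrative
# assumptions, not Western Digital specifications.
RACK_POWER_BUDGET_W = 1000   # power allotted to drives in one rack
DRIVE_CAPACITY_TB = 24       # capacity per drive

def drives_per_rack(avg_power_per_drive_w: float) -> int:
    return int(RACK_POWER_BUDGET_W // avg_power_per_drive_w)

always_spinning = drives_per_rack(9.0)  # drives kept at full spin
with_spin_down = drives_per_rack(6.0)   # lower average draw with idle spin-down

print(always_spinning * DRIVE_CAPACITY_TB)  # 2664 TB per rack
print(with_spin_down * DRIVE_CAPACITY_TB)   # 3984 TB per rack
```

Under these assumed numbers, a one-third reduction in average draw yields roughly 50 percent more capacity in the same power envelope.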
The clever part is making the spin-down cycle fast enough that no one notices, and that is where Western Digital claims to have finally solved what has been an industry-wide headache.
That said, hyperscalers will test this solution aggressively, and their verdict will decide if the rest of the industry follows.
Satya Nadella testified in the Musk v. Altman trial that he feared Microsoft would become “the next IBM,” revealing that the $13B OpenAI investment was a survival bet backed by a $92B return projection, not a commitment to the nonprofit mission.
Satya Nadella told a federal jury on Monday that he feared Microsoft would become “the next IBM” while OpenAI became the next Microsoft. The admission, drawn from an April 2022 internal email presented by Elon Musk’s lead attorney, reveals the strategic anxiety that drove the largest corporate investment in artificial intelligence history. Microsoft did not put 13 billion dollars into OpenAI because it believed in a nonprofit mission to develop safe AI for the benefit of humanity. It invested because its CEO believed the company would become irrelevant if it did not.
A January 2023 memo from Microsoft president Brad Smith to the company’s board, also presented to the jury, projected a 92 billion dollar return on that cumulative investment, with a 20 per cent annual escalator starting in 2025. The document reframes the Microsoft-OpenAI partnership from a technology collaboration into what may be the largest financial hedge in corporate history: a bet by the world’s most valuable software company that it could not survive the AI era on its own.
The IBM analogy is not casual. In the 1980s, IBM built the personal computer and outsourced the operating system to a small software company in Redmond, Washington. That decision made Microsoft and unmade IBM. Nadella was telling his team that the same dynamic was forming in AI. OpenAI was building the reasoning engine. Microsoft was building the cloud infrastructure. If OpenAI became the platform and Microsoft became the commodity, the company that defined enterprise software for four decades would fade into the same irrelevance as the company that defined enterprise hardware for three.
Musk’s attorneys presented the email to suggest that Microsoft’s investment was commercially motivated from the beginning, undermining OpenAI’s nonprofit origins. Nadella’s response was to defend the partnership as mutually beneficial. But the email speaks for itself. The CEO of Microsoft was not writing about advancing AI safety. He was writing about survival.
The return
Brad Smith’s 92 billion dollar projection landed on the Microsoft board’s desks one month before the company publicly announced its expanded 10 billion dollar investment in OpenAI. The memo included a 20 per cent annual escalator from 2025, meaning the projected return would compound as OpenAI’s models became more commercially valuable. At the time, ChatGPT had been public for less than two months.
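A 20 per cent annual escalator is straightforward compounding. The sketch below applies it to the reported 92 billion dollar base; the year-by-year schedule is an illustration of the arithmetic, not a figure from the memo:

```python
# The 92-billion-dollar base and 20 per cent escalator come from the
# reporting above; the schedule below just shows how that compounds.
BASE_PROJECTION_B = 92.0
ESCALATOR = 0.20

def escalated(years_after_2025: int) -> float:
    return BASE_PROJECTION_B * (1 + ESCALATOR) ** years_after_2025

for year in (2025, 2026, 2027, 2028):
    print(year, round(escalated(year - 2025), 1))  # 2028 prints 159.0
```

Three years of escalation turns 92 billion into roughly 159 billion, which is why the compounding clause mattered to the board.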
The financial calculus was straightforward. Microsoft was the exclusive cloud provider for OpenAI’s models and held exclusive commercial rights to resell them through Azure. Every dollar of OpenAI revenue flowed through Microsoft infrastructure. The 13 billion dollars was not a donation to a nonprofit. It was a down payment on a distribution monopoly for the most important technology of the decade.
OpenAI is now valued at 852 billion dollars. Microsoft holds 27 per cent of the for-profit entity that emerged from the October 2025 conversion. The nonprofit foundation that was supposed to govern the technology retains 26 per cent. The alignment between mission and money that OpenAI’s founders promised has been replaced by a cap table.
The blind spots
Under cross-examination, Nadella acknowledged that he was not aware of any full-time employees at the OpenAI nonprofit before March 2026. He could not identify any grants, research, or open-sourced technology the nonprofit had produced. He was not informed in advance that the board planned to fire Sam Altman in November 2023. He was never given clarity on why Altman was removed.
The admissions paint a portrait of a partnership in which the investor knew everything about the commercial operation and nothing about the nonprofit governance. Musk’s legal team wants the jury to conclude that the nonprofit was a shell. Nadella’s testimony does not contradict that framing. It reinforces it from the perspective of the company that had the most to gain from the commercial side.
The witnesses
The trial has spent three weeks accumulating testimony that dismantles every participant’s stated motives. Greg Brockman, OpenAI’s co-founder and president, disputed Musk’s account of the startup’s early days and testified that Musk had OpenAI employees do secret work on self-driving technology at Tesla. Brockman’s own journals, presented as evidence, contained entries that called the nonprofit mission “a lie”, undermining both Musk’s claim that the mission was sacred and OpenAI’s claim that it was preserved.
Former board members Helen Toner and Natasha McCauley testified that Altman was untrustworthy, withheld information from the board, and sometimes lied. McCauley told the jury the board had “buckets of concerns” about Altman’s leadership, including an incident in which Altman falsely claimed that OpenAI’s legal department had cleared the GPT-4 Turbo launch in India without safety board review. The women who fired Altman in November 2023 told the jury why, and their reasons had nothing to do with Musk’s lawsuit.
The admissions
Musk took the stand during the trial’s first week and told the jury that OpenAI’s leaders had duped him into bankrolling the company. He repeated a phrase that became the trial’s refrain: “You can’t just steal a charity.” He argued he was not opposed to a small for-profit arm funding the nonprofit but lost trust in Altman when he learned about Microsoft’s 10 billion dollar investment, texting Altman in late 2022: “What the hell is going on? This is a bait and switch.”
Then came the question about distillation. Asked whether xAI uses OpenAI’s models to train Grok, Musk said it was a general industry practice. Asked whether that meant yes, he replied: “Partly.” The admission that his own AI company copies the technology he claims was stolen from a charity drew audible gasps in the courtroom. Musk told the jury the case would set a precedent for “looting every charity in America” while simultaneously acknowledging that he was using the charity’s output to build a competitor.
Shivon Zilis, a former OpenAI board member and the mother of four of Musk’s children, testified that Musk tried to recruit Altman to lead a new AI lab at Tesla. He offered Altman a Tesla board seat. He asked Andrej Karpathy to send a list of top OpenAI researchers to poach. The man suing for breach of charitable trust was, according to the testimony of his own witness, actively trying to strip the charity of its leadership and talent.
The defence
Altman took the stand on Monday. He testified that Musk’s departure from OpenAI’s board in 2018 was a “morale boost” for some employees because Musk had demotivated key researchers by ranking their accomplishments. Altman told the jury that Musk left because he lost confidence in the project and wanted long-term control that the other founders would not grant him.
In a tense exchange, Musk’s attorney confronted Altman with a text message he sent Musk on 18 February 2023: “I’m tremendously thankful for everything you’ve done to help. I don’t think that OpenAI would have happened without you.” The implication was that Altman privately acknowledged Musk’s contribution while publicly diminishing it. The text was sent three months after Musk learned about the Microsoft investment and seven months before the board fired Altman.
The trial began with 150 billion dollars at stake over whether OpenAI’s conversion from nonprofit to for-profit corporation was a breach of charitable trust. Musk wants the court to unwind the conversion, oust Altman and Brockman, and direct damages to the nonprofit. OpenAI argues Musk is suing because he wanted control of the most valuable AI company in the world and did not get it.
The hedge
While the trial plays out in Oakland, Microsoft is quietly proving that Nadella learned the IBM lesson. Microsoft dropped its exclusive licence to OpenAI’s technology, retaining only a non-exclusive agreement through 2032. It did so voluntarily, which makes sense only if Microsoft no longer needs exclusivity because it has alternatives.
It does. Microsoft launched three in-house AI models that directly challenge the partner it spent 13 billion dollars cultivating. The company that feared becoming IBM responded by doing what IBM never did: building its own operating system before the partner could lock it out. Nadella’s April 2022 fear that Microsoft would become dependent on OpenAI appears to have been the founding anxiety of an entire corporate strategy designed to ensure it never would.
The trial is expected to continue through 21 May before Judge Yvonne Gonzalez Rogers. The jury will decide whether OpenAI’s leaders breached a charitable trust and whether Musk is owed restitution. But Nadella’s testimony has already answered a different question. The most powerful corporate backer of the nonprofit AI mission invested because he was afraid his company would die without it. The 92 billion dollar return projection was not a byproduct of the partnership. It was the point. The nonprofit wrapper that Musk claims was stolen may never have contained what any of the parties involved believed it did.
For music and movie lovers who want the best musical and cinematic experiences at home, there have always been two camps: simplicity vs. performance. In the 80s, that meant buying a basic one-brand integrated component stereo system – cables and stand included – vs. buying the individual separate components, potentially from different brands, and hooking everything up yourself. In the 90s and beyond, companies started building amplifiers and source connectivity into the speakers themselves. Powered soundbars became popular, particularly for those who just wanted a little better sound from their TVs or projectors with minimal set-up required.
Today, you can spend thousands of dollars on “simple” wireless speaker systems and soundbar-based systems with integrated streaming and networking. Some of these actually sound pretty good, but they tend to be a bit lacking in installation flexibility, features and ultimate sonic performance. For those who want something a bit more flexible and powerful, AVRs (Audio Video Receivers) are popular, as these offer a plethora of inputs, plenty of power and expandability for multi-channel and multi-room systems as well as relatively simple hookup.
But packing all these functions into a single box can lead to compromises in performance. In a receiver, all of the various components – DACs, processors, preamplification, switching, streaming receivers and radio tuners – share the same power supply as the amplifiers. And with receivers frequently including seven, nine or even eleven channels of amplification, power management and thermal management can get tricky and can lead to limited dynamic range, heat buildup, distortion and loss of detail.
The Case for A/V Separates
For those who find that A/V receivers aren’t flexible enough or require too large a compromise in performance, there is a step above known as A/V separates: separate components for pre-amplification and for power amplification. A/V separates typically comprise two types of components: a preamp/processor (pre/pro), which handles all of the low-voltage/low-current functions like audio decoding, processing, switching and volume adjustment/attenuation; and power amplifiers, which boost the preamp’s low-level output to the high-level signal needed to drive the speakers. In theory, this reduces interference among components, simplifies thermal management and improves detail and dynamic range by giving the power amps “room to breathe.”
Marantz offers both A/V receivers and A/V separates, including preamp/processors and multi-channel power amplifiers. Their most recent product, the AV 30 preamplifier/processor ($4,000), is also their most affordable. And their AMP 30 ($4,000) is the company’s latest multi-channel power amplifier offering six channels of Class D amplification enhanced with Marantz HDAM modules for more refined sound. As we needed more than six channels for our test system, we paired the AV 30 with a Marantz AMP 20 ($6,000) 12-channel power amp, for the purposes of this review. We also checked out the AMP 30 and will cover that separately.
What Is It?
The Marantz AV 30 is an 11.4-channel A/V preamplifier/processor designed for high-end home theater and multi-channel surround sound systems. “11.4” means that it offers eleven channels of audio processing as well as four independently adjustable subwoofer outputs. The AV 30 supports all of the most popular immersive audio formats including Dolby Atmos, DTS:X, Auro-3D and MPEG-H/360 Reality Audio. It also is IMAX Enhanced Certified and can identify the IMAX DTS:X soundtracks on select UHD Blu-ray Discs and streaming services, applying IMAX processing and EQ for a more dynamic and theatrical sound. As of the publication of this review in May, 2026, the AV 30 doesn’t decode Eclipsa Audio (a.k.a. “IAMF”), the open source immersive audio format of choice on YouTube. But who knows what the future holds?
Marantz AV 30 preamp/processor with its lower control panel exposed.
The chassis itself is elegantly styled in black aluminum with textured side pieces. The unit was designed in Shirakawa, Japan and features the hallmark Marantz porthole, carried over from some of the company’s earliest products, back when Saul Marantz himself (R.I.P.) was at the helm. An integrated hideaway panel at the bottom keeps the look simple and understated with only two controls visible on the main front panel: dials for volume and input selection. The power button can be found on the left textured side panel and a ¼-inch headphone jack appears on the right. Opening that lower panel reveals a larger rectangular digital display, as well as a few more buttons and a 5-way control to navigate through set-up and other screens.
Even though the preamp does include a fairly robust screen at the bottom, you do still need to connect it to a display of some kind (monitor, TV or projector) in order to complete the set-up.
The Power To Back it Up
As robust as the AV 30 is, it wouldn’t be able to make much sound without a power amplifier. For the purposes of this review, I paired the AV 30 with a Marantz AMP 20 multi-channel power amp. This 12-channel amp features customized Class D power modules from ICEPower, enhanced by Marantz HDAM (Hyper Dynamic Amplifier Module) technology.
Marantz AMP 20 12-channel power amplifier.
First developed by Marantz in 1992, HDAM modules replace traditional IC-based op-amp stages with discrete circuits built from hand-selected transistors, resistors and other components, mounted on a small board to minimize interference. This design increases slew rate, enhancing accuracy and reducing noise and distortion compared to off-the-shelf IC op-amps.
One of Marantz’ custom Class D amplifiers with HDAM module (from the company’s Shirakawa Audio works factory).
When I asked Ogata-san, the Marantz soundmaster, why Marantz has been moving away from Class A/B amplifiers to Class D, he said it is a combination of efficiency, thermal management and weight, which are particularly important on multi-channel amplifiers. And Ogata-san says with developments in Class D amplifiers, combined with Marantz HDAM, they are able to make smaller, lighter, more efficient, cooler running amplifiers without any sacrifice in that Marantz signature immersive, accurate sound quality.
Also, when asked why Marantz’ multi-channel power amplifiers are offered with even numbers of channels (6-channel, 12-channel, 16-channel), while most home theater systems are designed with odd channel numbers (7-channel, 11-channel, 15-channel), a Marantz rep explained that each amplifier module is manufactured as a stereo pair, with each stereo pair bridgeable to double the output. So the AMP 20 includes six individual stereo amplifier modules, while the AMP 30 has three stereo modules.
Connectivity
The AV 30 offers a generous seven HDMI 2.1-compatible inputs, all of which support video resolution up to 8K/60Hz or 4K/120 Hz and HDCP 2.3. It includes three HDMI outputs for connection to multiple TVs, monitors or projectors. One HDMI port includes support for HDMI ARC/eARC for single-cable connection (audio/video send and audio return) to an ARC/eARC compatible TV or projector. The AV 30 also features legacy analog and digital audio and video connections, including composite and component video, analog RCA audio, fiberoptic and coax digital audio. There’s even a built-in phono preamp with support for MM (moving magnet) phono cartridges.
The Marantz AV 30 offers a bevy of inputs, both analog and digital including 7 HDMI ports. It also offers both unbalanced RCA and balanced XLR output for up to 11 different channels as well as four independently adjustable subwoofer outputs.
As for output to the power amplifier, the AV 30 supports both unbalanced RCA output as well as balanced XLR. Initially, I set up the system with nine one-meter RCA cables, but this led to a bit of a rat’s nest of wiring. I ordered a 10-pack of 12-inch XLR cables on Amazon and this cleaned things up a bit.
The Marantz AMP 20 power amp features 12 input channels (balanced and unbalanced) and 12 powered speaker outputs. Each of the six pairs of amplifiers can be bridged to mono, thereby doubling the power output.
Balanced XLR connections can lead to cleaner sound with less interference and a lower noise floor. The Marantz AMP 20 and AMP 30 both have switches on each pair of amp channels to switch between XLR balanced and RCA unbalanced inputs. The amps also offer bi-amp output as well as a bridging option which combines two amp channels into one in order to deliver twice as much power to half as many channels. If the 12 channels of the AMP 20 are more than you need, you can bridge one or more amplifier pairs for more power to your center channel or main left and right channel speakers, or use the extra amps to power speakers in other rooms of your home.
If you need more than the rated 200 watts per channel, you can bridge each of the six pairs of amps inside the AMP 20 and get 400 watts of power into 8 ohms.
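The channel arithmetic works out like this. A minimal sketch using the figures quoted above (12 channels as six stereo pairs, 200 watts per channel, double power when a pair is bridged to mono); the helper function and its return shape are invented for illustration:

```python
# Figures from the review: 12 channels as six stereo pairs, 200 W per
# channel into 8 ohms, double power when a pair is bridged to mono.
# The helper and its return shape are invented for illustration.
CHANNELS = 12
WATTS_PER_CHANNEL = 200
STEREO_PAIRS = CHANNELS // 2

def bridged_config(pairs_bridged: int) -> tuple[int, int, int]:
    """Return (normal channels, bridged channels, watts per bridged channel)."""
    assert 0 <= pairs_bridged <= STEREO_PAIRS
    normal = CHANNELS - 2 * pairs_bridged
    return normal, pairs_bridged, 2 * WATTS_PER_CHANNEL

print(bridged_config(1))  # (10, 1, 400): ten 200 W channels plus one 400 W channel
```

Bridging one pair, for instance for a power-hungry center channel, still leaves ten conventional channels for the rest of the layout.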
HEOS FTW!
A highlight of the AV 30 is its HEOS integration. This network streaming music platform allows you to play music from the top streaming services like TIDAL, Spotify, Qobuz and Amazon Music anywhere in your home on compatible devices like receivers, streaming amplifiers, soundbars and wireless speakers. HEOS is included in most Denon and Marantz receivers and pre/pros made in the past decade as well as select products from Classe (all part of the “Sound United” family of products). Install the HEOS app, link your music accounts and you’ll see the AV 30 appear as a supported device. It’s that easy.
The HEOS module is built into all of Denon’s and Marantz’ high-end receivers and preamp/processors, providing whole home wireless streaming right out of the box.
For internet radio stations, HEOS supports TuneIn and iHeartRadio. My go-to internet radio station, the fabulous Radio Paradise, is available in HEOS through TuneIn. Sadly, TuneIn’s support for lossless or high-resolution audio is non-existent. So while Radio Paradise is available as a lossless FLAC stream on BluOS, Sonos and WiiM, the best quality you can currently get for Radio Paradise on HEOS is a 320 kbps AAC stream. That said, Radio Paradise does sound quite good through HEOS on the AV 30, whether in its pure stereo form or expanded into surround through one of the AV 30’s many immersive sound options.
But where we gain HEOS, we lose Google Cast. Unlike some competitive receivers and pre/pros, the Marantz AV 30 doesn’t offer Google Cast (formerly “Chromecast Built-in”) wireless connectivity. This means you won’t be able to cast Dolby Atmos multi-channel music from your music streaming app on your phone to the AV 30. Marantz tells us this is an intentional choice and they recommend using an external device like Apple TV 4K or Amazon FireTV stick if you want to listen to music encoded in Dolby Atmos (or Sony 360 Reality Audio, which the Marantz pre/pro also supports).
You can connect your audio streaming apps to the pre/pro via the HEOS app or Bluetooth, but these do not currently support Dolby Atmos music. Apple AirPlay 2 is available meaning you can connect an Apple device to the AV 30 wirelessly. The HEOS app also supports TIDAL Connect, Spotify Connect and Qobuz Connect, for wireless lossless (and even hi-res) music, but again, only for stereo 2-channel music tracks.
Correction and Calibration
The AV 30 comes with Audyssey MultEQ XT32 room correction and calibration on board. This includes SubEQ HT for managing up to four independent subwoofers. It also includes Audyssey Dynamic EQ and Dynamic Volume as options. For those who prefer DIRAC room correction, the AV 30 can be upgraded to support DIRAC Live, DIRAC Live Bass Control or DIRAC ART (Active Room Treatment) via the additional purchase of a DIRAC license (currently $259-$799, depending on the options). For the purposes of our review, we used the built-in Audyssey calibration system with the included microphone. We may do a follow-up story on DIRAC for those interested.
The Set-Up
Initial set-up was fairly straightforward. As mentioned, you do need to connect the AV 30’s HDMI output to a TV or projector in order to see the set-up screens. You can exit the set-up wizard after selecting a language; if you don’t, the wizard takes you through the connection of every single channel as well as the connection of speakers to an attached power amp. It was a bit tedious to go through all these screens, but I’m sure it would be helpful to someone new to the process. And I will say that even this seasoned “A/V guru” swapped the positive and negative leads on one of my speaker connections. Thankfully this phase error was identified during the Audyssey set-up process, so it was easy enough to fix. (Oopsy.)
The Marantz Setup Assistant will walk you through each element of set-up in sometimes painstaking detail.
Audyssey calibration is pretty straightforward: plug the included mic into the front panel and run through a series of test tones. Marantz recommends using eight distinct measuring points, but for our small theater room, three measurement points were sufficient. Post-calibration, the system sounded nicely balanced, though we did manually boost the subwoofer level slightly, as the sound was a bit thin in the lower octaves after Audyssey calibration.
The AV 30 supports multiple different speaker configurations, compatible with Dolby Atmos, DTS:X and AURO-3D immersive surround.
The set-up wizard also requires you to set up HEOS, using the HEOS mobile app for iOS or Android. There is no “skip” option for this, though you can always manually exit the set-up wizard by backing all the way out or hitting the Setup button on the front panel or on the remote control. This proved slightly problematic as I had forgotten my HEOS account password and the link to reset it in the app failed repeatedly. Eventually I remembered the password and was able to complete the set-up. New HEOS users (or those who don’t forget their passwords) probably wouldn’t have an issue here. During the set-up, a firmware update was found and applied to the AV 30, which took a few extra minutes to complete.
“Control, Control. You Must Learn Control!” -Yoda
The AV 30 can be controlled by its fully backlit remote control, by the Marantz AVR Remote app, or by controls on the hideaway front panel. Navigation among menus is quick and intuitive. The remote feels solid and having a full backlight on all buttons is a nice touch when watching or listening in the dark. The remote offers direct input buttons for all inputs as well as dedicated buttons for different surround modes (Movie, Music, Game and Pure). These cycle through various sound processing modes like Dolby Surround, DTS Neural:X and Auro-3D.
The AV 30’s remote is fully backlit and offers direct access to each input as well as sound mode buttons for cycling through the various stereo and surround sound listening modes.
Listening Notes
Speakers used for testing included a Klipsch Reference series home theater system as well as a pair of KEF LS50s. For source devices, I connected several components including an XBOX Series X, Fire TV 4K Max, Apple TV 4K, Samsung UHD Blu-ray Player, OPPO Blu-ray Player and a Kaleidescape Strato V 4K movie player all via HDMI cables. I also hooked up a vintage cassette deck for those classic 80s and 90s mix tapes using my grand-daddy’s analog RCA cables as well as a Systemdek turntable so I could spin some 70s vinyl.
Much of my music listening these days is to multi-channel immersive tracks encoded either in Dolby Atmos or Sony 360 Reality Audio. With no Google Cast option, I used the Amazon Music app on the FireTV 4K Max to load up my Dolby Atmos and 360RA playlists. Dolby Atmos favorites like KX5/Deadmau5 “Alive,” Ed Sheeran “Shape of You,” Elton John “Rocket Man” and A-Ha “Take On Me” had a wonderful sense of air and spaciousness through the Marantz preamp/amp combo, with deep tight bass and pinpoint imaging precision. 360RA tracks like Daft Punk “Get Lucky” and Pink’s “Trustfall” were equally enveloping, with instruments and voices filling the listening room with a cohesive dome of sound.
Moving onto movies, I hit the system with everything I had: Dolby Atmos, DTS:X, IMAX Enhanced and even AURO-3D. The system was able to decode all of these formats, routing the sound to the appropriate speakers for full immersion. It was at least as good if not better than the experience I’ve had in many premium movie theaters.
The screen on the bottom front panel on the AV 30 displays useful information like what input is selected and which audio codec is being received.
For Dolby Atmos, I started with Denis Villeneuve’s “Dune” on UHD Blu-ray. In the worm attack on the spice crawler about 1:05 into the film, the soundtrack includes a cacophony of different sounds that can challenge the finest surround system. As the spice-saturated sand encompasses Paul Atreides, the sound abruptly cuts out, music swells, sand swirls and the voices of the Bene Gesserit build in a spice-induced vision. The line “Kwisatz Haderach awakes,” which is difficult to make out on some systems, cuts cleanly through the sonic mayhem here.
Moving onto another Dolby Atmos soundtrack, I put on the first episode of “Andor” on Disney+. The rain falling from above in the opening scene permeated my listening room and as our antihero enters a nightclub, his conversation with the club worker is easy to make out over the pulsing background music. Complex soundscapes are handled effortlessly by the Marantz AV 30/AMP 20 combination.
For DTS:X, I put on “Ex Machina” to verify that the claustrophobic feeling inside the compound when the power fails was conveyed realistically and effectively (it was). The UHD Blu-ray 4K remaster of “Blues Brothers” also features a DTS:X soundtrack which was particularly lively during the mall chase scene as glass and debris exploded all around the room.
I also watched some of the IMAX Enhanced movies on Disney+ which offer DTS:X soundtracks enhanced with IMAX EQ. On “Guardians of the Galaxy 3,” the preamp identified the IMAX Enhanced flag in the content and decoded the DTS:X track with excellent dynamics and thunderous bass. Switching to the IMAX Enhanced “Queen Rock Montreal” concert film, “We Will Rock You” was recreated in all its grandeur, making me feel like I was there in the stadium enjoying the show.
The concert film “Queen Rock Montreal” was recently remastered in IMAX for a theatrical release. It’s now available on select TVs and projectors on Disney+ in IMAX Enhanced format with DTS:X immersive sound.
For Auro-3D, there isn’t as much selection as far as software goes, but I put on a few clips from an AURO-3D test Blu-ray Disc and also the UHD Blu-ray Disc of “Shine” which is the first major film with an AURO-3D soundtrack available on physical media in the U.S. For some reason, the preamp didn’t identify the AURO-3D flag in the DTS-HD MA datastream, so it did not automatically engage AURO-3D surround. However, manually switching the AV 30 to AURO-3D did allow the preamp to decode the stream and process all the channels properly, including the overhead voice of god speaker that Auro-3D is known for (which was virtualized at the center point of my ceiling, using the four Dolby height channels currently in my system).
For stereo music, I listened to high res and lossless tracks on both Qobuz (using Qobuz Connect) and Amazon Music (through the HEOS app). The Marantz offers both “Direct” and “Pure Direct” stereo listening modes that present stereo music as God intended – in standard two-channel mode with minimal to no additional processing.
“Direct” mode bypasses DSP processing (including EQ and Audyssey calibration) for the purest signal, while “Pure Direct” mode does this and also shuts down the front display and video circuitry to eliminate any potential electrical noise. Depending on how you’ve configured the speakers, these modes may also disable the subwoofer outputs, so they’re better suited to larger tower speakers. In these modes, the AV 30 offered outstanding clarity. Paired with the Marantz power amp, stereo music came through cleanly and transparently with excellent dynamics and a solid three-dimensional soundstage. There’s also a standard stereo listening mode that does preserve DSP and subwoofer output.
Personally, I don’t mind listening to stereo tracks with some ambiance added, so I experimented with Dolby Surround, DTS Virtual:X and the Auro-3D upmixer. Of these, I felt that Dolby Surround offered the best sense of space and presence for stereo music. It didn’t sound artificial or gimmicky. It just sounded more substantial and full compared to the pure stereo reproduction. Vocals were still locked front and center on most cuts while subtle room reverberation emanated from the rear speakers, giving the music a more palpable presence.
All That and Black Vinyl, Too
While most of my listening was done with digital sources, I also fired up my Systemdek IIX turntable to spin a few classic LPs from the 70s and 80s. Using the AV 30’s built-in moving magnet phono stage, albums like Genesis “And Then There Were Three” and ZZ Top “Eliminator” were presented with nice detail, musicality and warmth, though having to get up to change sides and endure the clicks and pops of less-than-perfectly maintained vinyl reminded me of why I ultimately prefer high quality lossless and high res audio digital sources to analog.
I admit I did get a bit emotional when listening to “Star Wars and Other Galactic Funk,” which includes a rock/disco version of the “Star Wars” soundtrack by Meco. Immersed in the thrumming sounds of drums, lightsabers, guitars and xylophone solos, I was transported to a galaxy far, far away, namely my parents’ basement, where my friends and I would gather to listen to our latest records. But this record never sounded so good back then on my parents’ old console system. If you’ve got a penchant for black vinyl or two-channel music listening in general, the AV 30 won’t let you down.
Meanwhile, Back in Japan
I should note that on a recent trip to Denon and Marantz headquarters in Kawasaki, Japan, as well as to the Shirakawa Audio Works, where many of the company’s high end audio products are manufactured (including the AV 30 and AMP 20), I experienced firsthand the attention to detail that goes into the construction of Marantz products. The manufacturing process includes equal parts high tech robotics and hand assembly by skilled craftspeople.
And this commitment to quality doesn’t stop at physical assembly. New designs like the AV 30 and AMP 20 make it into production only after they have passed the careful ears of Marantz Soundmaster Yoshinori Ogata (Ogata-san). Only once Ogata-san signs off on a new product’s sound and build quality does it go into mass production and, ultimately, into a customer’s home.
A peek inside the Marantz listening room in Kawasaki, Japan with Marantz soundmaster Ogata-san.
The Bottom Line
With the AV 30, Marantz is bringing the cost of entry for A/V separates down to a fairly reasonable level. With 11 channels of processing, 4K and 8K video passthrough, and decoding for virtually all of the popular surround codecs, the Marantz AV 30 makes an excellent choice for those who are ready to graduate from A/V receiver to the next level of performance, flexibility and sonic clarity. And for those with a 7.1.2-channel or 5.1.4-channel speaker layout, the 12-channel AMP 20 power amplifier is truly a match made in heaven.
Pros:
11 channels plus 4 independent subwoofer outputs make this suitable for medium to large home theater installations
Competitively priced for the category
Built-in Audyssey MultEQ XT32 room correction/calibration is effective at reducing negative room interactions
Option to add DIRAC room correction, including DIRAC ART (Active Room Treatment)
Comes with HEOS whole-home multi-room wireless music platform
Elegant design
Supports unbalanced RCA and balanced XLR output for maximum flexibility and improved performance
Supports all the popular immersive sound formats: Dolby Atmos, DTS:X, IMAX Enhanced, Auro-3D, MPEG-H and Sony 360 Reality Audio
Top notch sound quality
Cons:
Lacks Google Cast WiFi casting
HEOS implementation does not natively support Dolby Atmos Music