Yamaha’s current AVR lineup has been running on 2020 and 2021 hardware, with firmware updates doing the heavy lifting to keep things relevant. That trick only works for so long. At some point, HDMI, processing, wireless features, and home theater expectations move on, and no amount of software fairy dust changes the hardware underneath. For 2026, Yamaha appears ready to turn the page with the new RX300A and RX500A, two entry-level A/V receivers aimed at buyers who want a modern home theater upgrade without wandering into flagship pricing territory.
“The RX300A and RX500A close the gap between soundbars and true AV receiver-based home theater,” said Alex Sadeghian, director of marketing for consumer audio at Yamaha. “They include all the essential tech you need to build a modern home theater with phenomenal sound at an accessible price point, while offering simplified setup and operation that will appeal to both first-time AV receiver users and experienced enthusiasts alike.”
A New Look
Yamaha RX300A
The RX300A and RX500A also give Yamaha’s entry-level AVR design a needed visual reset. The front panels look cleaner than the outgoing models, with fewer buttons, simpler labeling, and less of the “command center from a 2004 cable box” energy. The essential controls are still there, but Yamaha has clearly tried to make the layout easier to read and less cluttered. It is not a radical redesign, but it does make the RX300A and RX500A look more current without alienating longtime Yamaha home theater owners.
On The Inside
Yamaha is leaning on more than four decades of AVR development with the RX300A and RX500A, and the engineering story is familiar in the best way. The company’s True Sound philosophy is not just marketing wallpaper here. In practical terms, it points to circuit layout, shorter signal paths, vibration control, and the kind of internal housekeeping that matters when an AVR is being asked to handle movies, music, gaming, and whatever else gets plugged into it before dinner.
Both models also inherit Yamaha’s Anti-Resonance Technology Wedge, a center-mounted fifth foot borrowed from the company’s flagship AVENTAGE models. The goal is simple: reduce chassis vibration and improve stability. Nobody should expect a $600 receiver to suddenly behave like a five-figure separates stack, but better mechanical control is still better mechanical control.
The bigger upgrade for most buyers will be HDMI 2.1 support. The RX300A and RX500A are built for modern video sources with 4K/120Hz and 8K/60Hz pass through, along with Dolby Vision and HDR10+. Gamers also get VRR and ALLM, which should help with smoother motion and lower input lag when used with current consoles.
Room Correction
The RX300A and RX500A include a setup microphone for automatic room correction, allowing the receivers to measure room acoustics and speaker behavior before adjusting performance for the space. Yamaha also includes an on screen setup guide that walks users through connections and configuration step by step, which should make installation less painful for first time AVR owners and anyone who would rather not spend Saturday afternoon decoding a manual like it was recovered from a Cold War dead drop.
Sound Setting Simplicity
To simplify the listening experience, both AVRs feature Scene buttons. These buttons enable users to recall system settings with a single press.
Each Scene button can be programmed to select an input, sound mode, and other key parameters, making it easy to switch seamlessly between activities like watching TV, streaming music, or gaming. The result is a more intuitive experience that keeps the user focused on enjoying content rather than getting distracted fiddling around trying to find the right settings.
RX300A: Great For Beginners
Building on the previous Yamaha RX-V385, the RX300A is a 5.2-channel AVR designed for those who are just getting started in home theater, want to upgrade from a soundbar, or are shopping on a budget, with a $399.95 MSRP.
New enhancements compared with the RX-V385 include support for Dolby Atmos and DTS Virtual:X, compatibility with 4K/120Hz and 8K/60Hz video, gaming support that includes ALLM and VRR, dual subwoofer outputs, Bluetooth Multipoint, enhanced build quality, and an updated on-screen setup guide with streamlined menus.
The RX300A supports Dolby Atmos in flexible speaker configurations, including 3.2.2-channel with up-firing or in-ceiling height speakers and virtualized rear channel sound, or with a traditional 5.1 or 5.2-channel setup in combination with virtual height processing to create sound from above without dedicated height speakers.
Bluetooth Multipoint allows two devices to remain paired simultaneously, making it easy to switch between sources without reconnecting.
RX500A: More Channels, Wi-Fi, and Streaming
The RX500A builds on the RX300A platform with 7.2-channel amplification and more flexible speaker layout options.
With seven channels of amplification, Dolby Atmos support allows the RX500A to work with real discrete speakers for both the height channels and the surround channels, creating a more convincing immersive sound field than you can get with a five-channel system. The RX500A supports multiple height speaker configurations, including in-ceiling speakers or up-firing height modules. And if you don’t want to bother with height channels, the RX500A can virtualize those with its speaker virtualization technology. This can leave two of your amplifier channels free for speakers in a second room. The RX500A also supports DTS:X, giving users access to the two major immersive audio formats without moving into Yamaha’s more expensive AVR models.
The RX500A also adds stronger network audio support. In addition to Bluetooth Multipoint, it includes built in Wi-Fi and Ethernet for music streaming through Spotify Connect, AirPlay 2, Google Cast, Qobuz Connect, TIDAL Connect, internet radio, and other supported services. That makes it the more complete option for buyers who want both home theater flexibility and everyday music streaming in one box.
The RX500A is a new model tier in the Yamaha AV receiver lineup, offering a step up from the RX300A for those who want more speaker channels and more advanced music streaming capabilities at an accessible MSRP of $599.95. The current Yamaha RX-V6A 7.2-channel AV receiver remains in the lineup—offering some additional features such as MusicCast capabilities (e.g., full app control and multi-room audio), more connectivity options, Zone 2, increased performance, and other features—at an MSRP of $799.95.
Specifications

| Feature | RX500A | RX300A | RX-V385 (outgoing) |
| --- | --- | --- | --- |
| Surround Sound Decoding Formats | Dolby Atmos, Dolby TrueHD, Dolby Digital Plus, Dolby Digital, DTS-HD Master Audio, DTS-HD High Resolution, DTS Express, DTS, DTS-ES Matrix 6.1, DTS-ES Discrete 6.1, DTS 96/24, DTS:X | Dolby Atmos, Dolby TrueHD, Dolby Digital Plus, Dolby Digital, DTS | Dolby TrueHD, Dolby Digital Plus, Dolby Digital, DTS-HD Master Audio, DTS-HD High Resolution, DTS, DTS 96/24, DTS Neo:6 |
| Surround Sound Post Decoding Formats | Dolby Surround, DTS Neural:X | Dolby Surround, DTS Virtual:X | Not indicated |
| Network Decoding Formats | MP3, MPEG-4 AAC, WMA, WAV, FLAC, Apple Lossless, AIFF | No | No |
| USB Decoding Formats | MP3, MPEG-4 AAC, WMA, WAV | MP3, MPEG-4 AAC, WMA, WAV | MP3, MPEG-4 AAC, WMA, WAV |
| HDMI Decoding Formats | PCM (8ch max) | PCM (8ch max) | PCM (8ch max) |
| Sound Modes | Pure Direct, Straight, Movie, All-Channel Stereo, 2-Channel Stereo, Music, Night | Pure Direct, Straight, Movie, All-Channel Stereo, 2-Channel Stereo, Music, Night | Direct, Straight, Enhancer, Bass, program presets (BD/DVD, TV, CD, Radio) |
| Zone B | Yes | Yes | Not indicated |
| Room Calibration | Room Correction | Room Correction | YPAO |
| Other Features | Dialogue Level, Subwoofer Trim, Extra Bass, Lip Sync | Dialogue Level, Subwoofer Trim, Extra Bass, Lip Sync | Dialogue Level, Subwoofer Trim, Extra Bass, Lip Sync |
| HDMI Connections | 4 inputs / 1 output | 4 inputs / 1 output | 4 inputs / 1 output |
| HDMI Features | HDMI 2.1, 8K/60Hz and 4K/120Hz, eARC/ARC, VRR, ALLM, QMS, HDCP 2.3, CEC, Auto Lip Sync, Deep Color, x.v.Color, HD audio playback | HDMI 2.1, 8K/60Hz and 4K/120Hz, eARC/ARC, VRR, ALLM, QMS, HDCP 2.3, CEC, Auto Lip Sync, Deep Color, x.v.Color, HD audio playback | 4K/60p, eARC/ARC, HDCP 2.2, CEC, Auto Lip Sync, Deep Color, x.v.Color, HD audio playback |
| High Dynamic Range (HDR) Support | HDR10+, HDR10, Dolby Vision, Hybrid Log-Gamma | HDR10+, HDR10, Dolby Vision, Hybrid Log-Gamma | HDR10, Dolby Vision, Hybrid Log-Gamma |
| Speaker Outputs | 7 (binding post terminals) | 5 (binding post terminals) | 5 (binding post terminals) |
| Headphone Output | 1 | 1 | 1 |
| Subwoofer Pre-outs | 2 | 2 | 1 |
| Analog RCA Inputs | 2 | 2 | 2 |
| Optical Input | 1 | 1 | 1 |
| Coaxial Input | 1 | 1 | 2 |
| USB | 1 (audio file playback from a mass storage device, firmware updates) | 1 (audio file playback from a mass storage device, firmware updates) | 1 (audio file playback from a mass storage device, firmware updates) |
| FM/AM Tuner | Yes / No | Yes / No | Yes / Yes |
| Bluetooth | Yes (Ver. 5.3, Multipoint) | Yes (Ver. 5.3, Multipoint) | Yes (Ver. 2.1) |
| Streaming | Spotify Connect, Qobuz Connect, TIDAL Connect, Google Cast, AirPlay 2, Net Radio, Podcasts | No (streaming through Bluetooth only) | No (streaming through Bluetooth only) |
| Wi-Fi / Ethernet Port | Yes / Yes | No | No |
| Power Consumption | 260W | 260W | Not indicated |
| Standby Power Consumption | ≤0.3W | ≤0.3W | Not indicated |
| Auto Power Standby | Yes | Yes | Not indicated |
| Dimensions (W x H x D) | 434 x 157 x 319 mm (17-1/8” x 6-1/8” x 12-1/2”) | 434 x 157 x 319 mm (17-1/8” x 6-1/8” x 12-1/2”) | 17.13” x 6.31” x 12.56” |
| Weight (Unit) | 8.0 kg (17.6 lbs) | 7.6 kg (16.8 lbs) | 17 lbs |
| App | Audio Connect | Not indicated | Not indicated |
| Included Accessories | Remote control, batteries, FM antenna, setup mic, microphone stand, quick guide, safety guide | Remote control, batteries, FM antenna, setup mic, microphone stand, quick guide, safety guide | Remote control, batteries, AM/FM antenna, setup mic, microphone stand, quick guide, safety guide |
The Bottom Line
Yamaha finally has new entry-level AVRs, and the RX300A and RX500A look like practical updates rather than a full reset. That is not a bad thing. HDMI 2.1 support, cleaner industrial design, automatic room correction, better setup tools, and broader gaming and streaming compatibility all matter for buyers moving beyond a soundbar without stepping into flagship AVR pricing.
The RX500A is the more interesting of the two, thanks to 7.2-channel amplification, Dolby Atmos, DTS:X, Wi-Fi, Ethernet, and support for Spotify Connect, AirPlay 2, Google Cast, Qobuz Connect, TIDAL Connect, and internet radio. That makes it the better fit for users who want a real home theater foundation and modern music streaming in one box.
What is missing? HDMI 2.2 would have been nice from a future proofing standpoint, but the current ecosystem does not really demand it yet. The bigger question is whether Yamaha follows these models with updated midrange and AVENTAGE AVRs. Denon, Marantz, Onkyo and others are not waiting around politely with tea and biscuits. Yamaha needed fresh hardware. The RX300A and RX500A are a solid first step.
As usual, Google delivered much of its consumer-focused news this week during the Android Show, ahead of its I/O developer conference. We’ve gotten a closer look at Android 17, which will sport a slew of new Gemini AI integrations, including some new agentic upgrades. The company also officially announced Googlebooks, its latest line of laptops built around AI features and Android interoperability. It looks like a major evolution on the concept of Chromebooks, though Google says those won’t be going anywhere.
Topics
What’s new at The Android Show: Googlebooks, Gemini Intelligence, and file sharing with iOS – 1:25
eBay rejects GameStop’s offer as “not credible or attractive” – 32:18
U.S. cell carriers form a joint venture to fix service dead spots – 33:41
OpenAI sued by spouse of FSU shooting victim, who used ChatGPT to plan shooting spree – 38:44
Apple is making the iOS Camera app more customizable – 44:06
RIP Rufus, we hardly knew ye: Amazon dubs Alexa its new shopping assistant – 44:58
Around Engadget – 47:14
Working on – 49:26
Pop culture picks – 51:15
Credits
Hosts: Devindra Hardawar and Igor Bonifacic
Producer: Ben Ellman
Music: Dale North and Terrence O’Brien
Landlords would rather hold out for higher-paying tenants than lower rent
For most Singaporeans, the mall is where life happens—groceries, dinner, a haircut, the kids’ enrichment class, all under one roof. It’s as much infrastructure as it is retail.
And we are not short of options. According to shopping mall directory SingMalls, there are at least 106 malls across the island—serving a total population of 6.11 million people. That works out to roughly one mall for every 58,000 people, in a country spanning just 700 square kilometres.
But in some once-bustling malls today, the silence is striking: empty shopfronts, boarded-up units, and “Coming Soon” signs left hanging for months.
Singapore’s retail vacancy rate has been rising—hitting 6.8% island-wide in Q1 2025, up from 6.2% the previous quarter, according to Savills Research. Some businesses have cited rising rental costs, yet higher vacancies alone do not seem to be forcing landlords to lower rents.
Aren’t landlords bleeding money by leaving units vacant? The uncomfortable answer, it turns out, is often no—and The Woodleigh Mall is the clearest illustration of why.
Vulcan Post examines what is going on behind the scenes.
An exodus of shops
Stores left empty and many “coming soon” boards put up for new tenants are a prominent sight at The Woodleigh Mall./ Image Credit: ConsiderationNo1619 via Reddit
The Woodleigh Mall soft-launched in May 2023 as the anchor of a brand-new estate, meant to be the beating heart of the Bidadari community—the only full-fledged mall serving thousands of residents in the area.
But less than two years later, the picture looks very different.
Residents and workers at The Woodleigh Mall have watched an exodus unfold, particularly in the basement food cluster known as fEAsT@Woodleigh.
According to Mothership, more than 15 shops vacated the mall within the span of a year. Former tenants include Burger King, Fish & Co., Lee Wee Brothers, and Swee Heng Bakery—all gone.
A 45-year-old shop employee who has worked at the mall for about three years told the publication that the high turnover among F&B tenants has been happening “since the start.”
“Since the start, the mall is not doing very well. The footfall is low over here compared to other malls,” she said. “Normally for a new mall, the crowd will slowly build up, but not for this mall. It has been very stagnant,” she added.
Moreover, some residents have cited expensive parking, a confusing layout, and limited retail options as reasons they go elsewhere. “People don’t come here to shop because there’s nothing to shop,” the employee said.
Many fingers have pointed to another key culprit: rent.
Constance Tan, director of bubble tea brand No.17 Tea, told Stomp that when her lease came up for renewal, the landlord quoted a 30% increase in rent, a figure she described as “totally unsustainable.” Rather than leave entirely, No.17 Tea downsized to a smaller kiosk-format unit within the same mall.
Residents’ instinct is to blame greedy landlords, and that isn’t entirely wrong. But it misses how commercial property actually works—and why holding out for higher-paying tenants can make perfect financial sense.
A mall is valued like a stock, rather than a business
The Woodleigh Mall./ Image Credit: BYKidO
The footfall problem and the rent problem feed into each other in a vicious cycle.
Mall rents are largely determined by foot traffic. A mall pulling in five million visitors a month, for example, can command far higher rents than one drawing 1.5 million. But when footfall disappoints, tenants struggle to generate the sales needed to justify their rent. Some eventually leave, making the mall even less attractive to shoppers—and even harder to lease out.
So why not simply lower rents to fill empty units?
Because for commercial landlords, lower rents do not just mean lower income. They can also reduce the mall’s overall value.
Unlike an HDB flat, which is valued mainly based on nearby transaction prices, commercial properties are typically valued based on the rental income they generate. According to CKS Property Consultants, this is calculated by taking a property’s net operating income—rental revenue minus operating expenses—and dividing it by a market capitalisation rate. In simple terms: the more rent a mall collects, the more valuable it is.
That creates a dilemma for landlords. If they cut rents across the board to attract tenants, the mall’s projected income falls, and so does its valuation.
This matters because banks lend against that valuation. In Singapore, commercial property loans are typically capped at around 75% of a property’s value. So if a mall’s valuation drops by S$50 million after rental cuts, the landlord’s borrowing capacity could shrink by S$37.5 million. That’s real money off the table.
Think about it from the landlord’s perspective.
An empty unit could cost them a few thousand dollars a month in lost rent. Dropping rents across the board to fill the mall could slash its valuation by millions overnight. Holding out isn’t stubbornness, but math.
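The arithmetic behind that trade-off is easy to sketch. Here is a toy calculation in Python; the NOI, cap rate, rent cut, and per-unit rents are invented for illustration, and only the roughly 75% loan-to-value cap on commercial loans comes from the figures above:

```python
# Toy illustration of income-based mall valuation (income approach).
# The NOI, cap rate, rent cut, and per-unit rents below are hypothetical;
# only the ~75% loan-to-value cap comes from the article.

def mall_valuation(annual_noi: float, cap_rate: float) -> float:
    """Income approach: net operating income divided by the capitalisation rate."""
    return annual_noi / cap_rate

before = mall_valuation(30_000_000, 0.05)   # S$600M valuation at full rents
after = mall_valuation(27_000_000, 0.05)    # after a 10% across-the-board rent cut

valuation_drop = before - after             # S$60M wiped off the books
borrowing_drop = 0.75 * valuation_drop      # up to S$45M less loan capacity

# Versus the cost of simply leaving a handful of units empty:
vacancy_cost = 10 * 5_000 * 12              # ten units at S$5k/month for a year

print(f"Valuation drop from rent cut: S${valuation_drop:,.0f}")
print(f"Lost borrowing capacity: S${borrowing_drop:,.0f}")
print(f"Annual cost of ten vacant units: S${vacancy_cost:,.0f}")
```

Under these assumed numbers, a rent cut costs the landlord tens of millions in valuation and borrowing capacity, while a year of vacancies costs well under a million in forgone rent.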
Who are the higher-paying tenants?
Clinics and enrichment centres make up about a quarter of the tenant mix in The Woodleigh Mall./ Image Credit: Parkway MediCentre, Mavis Tutorial Centre
Ironically, the tenants most willing to pay high rents are often tuition centres, clinics, and enrichment studios—businesses with steady income streams that generate little to no shopping footfall.
These tenants are attractive to landlords precisely because they are less dependent on walk-in traffic. A quiet Tuesday afternoon may hurt a restaurant or fashion retailer, but a tuition centre still collects its fees. They pay reliably, sign longer leases, and offer stable income streams, making them ideal tenants on paper, even if they gradually hollow out the mall’s retail atmosphere.
Healthcare and enrichment tenants have been part of The Woodleigh Mall’s mix since its soft launch—sitting alongside the F&B brands that were publicly celebrated. As those F&B tenants have left, the balance has shifted, and the education and medical presence has become increasingly prominent. One in four units at The Woodleigh Mall is now an enrichment centre or medical clinic.
On paper, occupancy still looks healthy because these units count as filled space. But to residents hoping to grab dinner, shop, or run errands, the mall can feel less like a vibrant retail hub and more like a ghost town, one where “everything is so expensive.”
Chinese restaurant chain Nong Geng Ji has over 100 stores worldwide, eight of which operate here./ Image Credit: EveC via Google Reviews
On the other end of the spectrum, landlords are also holding out for deep-pocketed foreign brands, particularly the wave of Chinese F&B chains currently expanding aggressively into Singapore.
As of Aug 2025, some 85 Chinese F&B brands were operating around 405 outlets in Singapore, more than double the 32 brands recorded just a year earlier. Many of these brands have reportedly offered higher rental bids to secure prime retail spots—precisely the kind of tenant a landlord waiting for a premium offer is hoping to attract.
As unfair as it sounds, there’s no mechanism in Singapore that forces landlords to fill units with retail tenants, or to fill them at all.
There is no vacancy tax on commercial retail space, nor any penalty for keeping units empty for extended periods while waiting for a better tenant. There is also no requirement that a mall serving a residential estate must maintain a minimum standard of retail or F&B options.
The Woodleigh Mall’s joint venture owners—Cuscaden Peak Investments and Kajima Development—even put the mall up for sale for an asking price of S$800 million in July 2024, less than a year after its grand opening. That’s likely not the behaviour of owners optimising for the community, but for exit.
What does this mean for residents?
The frustration residents feel at Woodleigh is real and legitimate. When you’re one of thousands of HDB residents with a single mall serving your estate, it matters what’s in it.
But the problem isn’t simply that landlords are greedy. It’s the entire valuation and financing system for commercial property that creates rational incentives to prioritise rent income over community function. A landlord who drops rent to fill their mall with popular F&B tenants may literally be destroying value on paper by doing so.
Until there’s a policy lever that changes those incentives, whether that’s a vacancy tax, use requirements for community-serving malls, or something else entirely, the cycle is likely to continue.
So the next time you walk past a shuttered unit in your neighbourhood mall, remember: it might not be sitting empty because nobody wants it. It might be sitting empty because the landlord is waiting for a tenant who makes better financial sense.
And right now, nothing is stopping them.
Read other articles we’ve written on Singaporean businesses here.
Featured Image Credit: The Woodleigh Mall, ConsiderationNo1619 via Reddit
VueBuds, a prototype developed by University of Washington researchers who have embedded a rice-grain-sized camera into each earbud of a standard pair of Sony wireless earbuds. (UW Photo)
Wireless earbuds seemingly sprang out of nowhere. Popularized by Apple’s AirPods, they were suddenly everywhere — on the subway, in the grocery store, in the ears of the person sitting across from you — until somewhere along the way, they became the thing nearly everyone wears without a second thought.
Could that popularity make earbuds better than smart glasses for AI? That is the bet behind VueBuds, a prototype developed by University of Washington researchers who have embedded a rice-grain-sized camera into each earbud of a standard pair of Sony wireless earbuds. The result is a visual AI assistant hiding in plain sight: look at a can of food and ask how many calories it has, hold up an unfamiliar kitchen tool and get an answer in about a second.
The system processes images on-device and responds through a connected AI model — no cloud required, no images stored.
The UW team believes it is the first to embed cameras directly in commercial wireless earbuds.
The earbuds don’t remember anything, but the people around you might not know that. That tension sits at the heart of what the UW team built and raises a question the researchers take seriously: what are the social norms when cameras are embedded in objects nobody thinks of as cameras?
The team’s answer is to lean hard on minimizing data collection. Images are processed and discarded; nothing is saved. But the system offers no outward signal to bystanders that a camera is present, which the researchers acknowledge is an open challenge rather than a solved one.
For technology like this to earn trust, Maruchi Kim, lead researcher and UW doctoral student in the Paul G. Allen School of Computer Science & Engineering, argued that privacy can’t be an afterthought.
“We don’t support saving the images,” Kim said. “It’s mainly just to bridge the interaction between a person and having access to AI on the go, especially in hands-free scenarios.”
The team’s other central argument is about form factor — and it’s a pointed challenge to Meta, which has spent years and hundreds of millions of dollars trying to make camera glasses a mainstream product.
The UW team’s position is that smart glasses will never fully shed their social baggage: the memory of Google Glass, the discomfort of being watched, the visible signal that the wearer has opted into something most people haven’t. Earbuds carry none of that history.
“From the get-go, we didn’t want to be associated with that,” Kim said.
Getting cameras into earbuds required solving a power problem first. Cameras consume far more energy than microphones, so the team opted for a low-power sensor that captures roughly one frame per second in black and white — slow by video standards, but fast enough for the question-and-answer style of interaction the researchers had in mind.
The cameras are angled five to 10 degrees outward, providing a 98- to 108-degree field of view, and images from both earbuds are stitched into a single frame before processing, cutting response time to about one second.
The applications range from the practical to the significant. The system can read text on food packaging, identify objects, and translate written Korean. But for people with low vision or cataracts, the implications run deeper.
The team received more than a dozen emails from people with visual impairments describing what they’d use it for: understanding facial expressions, reading books, watching television — tasks that existing AI tools can’t easily support in a hands-free, ambient way.
Kim sees another underserved group in the workforce. Electricians, plumbers, and workers in industrial settings often can’t pause to pull out a phone mid-task — a pipe fitting wedged in place, a live wire that needs both hands.
For those workers, a voice-queryable visual assistant that doesn’t require touching a screen is the difference between having access to AI and not having it at all.
“There’s a lot of blue collar work where those people aren’t really able to harness the benefits of recent AI advances,” Kim said. “They can’t just whip out their phones and take a photo.”
The hands-free framing extends broadly: surgeons, cooks, anyone who has ever tried to follow a recipe with wet hands.
The system remains experimental and isn’t available for purchase. Shyam Gollakota, a professor in the Allen School and the project’s senior researcher, said interest from technology companies has been significant, and camera-equipped earbuds could reach consumers within a few years.
On cost, Gollakota is optimistic. The camera sensor itself could run under a dollar at the component level, he said — meaning that at the scale of a major consumer electronics manufacturer, the price premium over standard earbuds would likely be modest.
A $10 figure Gollakota also cited refers to a more conservative estimate at smaller production volumes.
“What we do at the universities is show that you can solve technical problems,” Gollakota said. “Then we show a path for these companies and other people to say that this is actually possible.”
What’s left of CBS News recently landed an interview with Israeli Prime Minister Benjamin Netanyahu. It’s a bit of a doozy (transcript, video). There’s a part where Netanyahu tries to blame foreign social media bot farms for the rise in people disgusted by his government’s carpet bombing of children. There’s a part where he pretends to not actually want billions in U.S. taxpayer dollars.
And there’s this part where he likens himself to Churchill and makes some strange comments about Hitler:
PRIME MINISTER BENJAMIN NETANYAHU: They implant themselves among civilians, you know, so that they have civilian casualties and they can put it on the tube or in your cell phone. So, yes, I mean, I don’t know how to fight it. I mean, Churchill, without cell phones and without digital campaigns and farm bots was labeled a warmonger in the 1930s because he said, “You have to stand up to Hitler.”
MAJOR GARRETT: Hitler, right.
PRIME MINISTER BENJAMIN NETANYAHU: And they accused him of being a warmonger. And Hitler didn’t even say “death to America, death to Britain,” you know. I– I think he might have planned it, but he didn’t say it. And still they accused him of that.
The interviewer, Major Garrett, spends absolutely no serious time pushing back against the claims Netanyahu makes, or meaningfully addressing indisputable evidence that the Israeli government has engaged in widespread genocidal war crimes on the U.S. taxpayer dime. When Netanyahu tries to dismiss the massive civilian casualties in Gaza, Iran, and Lebanon as minor and innocent mistakes, Garrett has no response.
Garrett doesn’t normally work for 60 Minutes. He was brought on board from elsewhere within CBS because Netanyahu specifically asked for him. According to Oliver Darcy’s excellent media newsletter Status, 60 Minutes correspondent Lesley Stahl was trying to land the interview with Netanyahu when CBS News boss Bari Weiss intervened and shuffled the interview over to Garrett, causing (more) internal anger:
“But behind the scenes, Status has learned that famed “60 Minutes” correspondent Lesley Stahl had also been gunning for the interview but was upstaged by CBS News boss Bari Weiss, who booked Netanyahu herself and handed the interview to Garrett, who is notably not a “60 Minutes” correspondent. The move sparked hostility and amplified the already strained relationship between Weiss and the reporting team at the iconic newsmagazine.”
There’s been a mass exodus at CBS for months as actual journalists bristle at the obvious shift toward soggy corporatist agitprop under Weiss. While Weiss was hired on to modernize CBS and make autocratic billionaire ass kissing exciting, viral, and good for ratings, the whole experiment has been a monumental failure so far, with CBS News recently seeing its lowest ratings in a quarter century.
Weiss rose to prominence at her weird little troll blog Free Press, which obviously hasn’t translated well to running a television network. Case in point: Weiss’ preferred new CBS News anchor, Tony Dokoupil, is having to broadcast the network’s coverage on Trump’s China visit from Taiwan because Weiss and friends failed to secure his visa on time for the trip. This mirrors other similar competency issues like Weiss making last-minute unapproved changes to teleprompter text that screws up broadcasts.
Beyond the clownish nature of it all, it remains an open question who this sort of stuff is actually for (beyond the extremely rich people endlessly trying to control information flow). Despite having a massive fortune, Paramount boss David Ellison seems incapable of creating propaganda people actually want to watch, and even their target audience — center-right bigots with impaired critical thinking faculties — aren’t tuning in because they have a universe of other terrible (but far more entertaining) choices.
Like Jeff Bezos’ sad and desperate effort to repurpose the Washington Post into what now feels like a satirical billionaire-coddling rag, all the money in the world can’t seem to produce class warfare agitprop actual human beings want to consume. Almost as if the behaviors of the global authoritarian extraction class are starting to reach a point where they’re simply too heinous and ham-handed to spin.
Microsoft is introducing a new capability that will allow it to remotely roll back problematic Windows drivers delivered through Windows Update.
Called Cloud-Initiated Driver Recovery, the new feature will remove the need for hardware partners or end users to manually fix driver issues once drivers have been distributed to devices. The recovery process is entirely managed by Microsoft, with no partner-side actions required, and will only be initiated for Windows drivers rejected due to quality issues during shiproom evaluation.
Under the current system, if a driver distributed through Windows Update has quality issues, the hardware partner must submit a replacement, or users must manually uninstall the faulty driver, which can leave devices using subpar drivers for a long time.
With Cloud-Initiated Driver Recovery, Microsoft can directly trigger a rollback to a previous, stable driver version (or the next best version available on Windows Update) without requiring new software or actions from hardware partners.
“Today, when a driver published through Windows Update is identified after distribution to have quality issues, the remediation path relies on the hardware partner to submit an updated driver — or on end users to manually uninstall the problematic driver themselves. This creates a gap where devices may remain on a low-quality driver for an extended period,” Microsoft said.
“With Cloud-Initiated Driver Recovery, Microsoft can now trigger a recovery action directly from the Hardware Dev Center (HDC) Driver Shiproom, rolling back a problematic driver to the previously known-good version via the Windows Update pipeline. This is handled through coordinated updates to the PnP driver stack and the driver flighting and publishing services.”
The company also noted that:
Devices where a Driver Shiproom-approved driver cannot be located will not attempt Cloud-Initiated Driver Recovery.
Recovery is delivered through the existing Windows Update infrastructure — no new client agent or partner tooling is required.
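The recovery behavior Microsoft describes can be sketched as a small selection rule: roll back to the previous known-good (shiproom-approved) version, and skip recovery entirely when no approved driver can be located. The function below is a minimal illustrative model; none of these names or data shapes come from Microsoft's actual implementation.

```python
# Hypothetical sketch of the rollback selection described above.
# A catalog entry is (version_tuple, shiproom_approved); all names invented.

def pick_recovery_driver(installed, catalog):
    """Return the version to roll back to, or None to skip recovery.

    installed: version tuple of the problematic driver
    catalog:   list of (version, approved) pairs available on Windows Update
    """
    # Only consider shiproom-approved ("known-good") versions older than
    # the problematic one.
    candidates = [v for v, approved in catalog if approved and v < installed]
    if not candidates:
        return None  # no approved driver located: no recovery attempt
    # Previous known-good = the newest approved older version.
    return max(candidates)
```

For example, with a catalog of approved 1.0 and 1.5 releases plus a rejected 2.0, a device on 2.0 would be rolled back to 1.5.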
The new Windows Update feature is being tested between May and August and will begin rolling back drivers rejected during Flighting or Gradual Rollout starting September 2026.
Last week, at WinHEC 2026 (the Windows Hardware Engineering Conference) in Taipei, Microsoft unveiled a Driver Quality Initiative (DQI) to raise driver quality, reliability, and security across the Windows ecosystem, in coordination with OEM, silicon, and hardware partners.
“In the months ahead, we will keep investing in the fundamentals that matter most to customers: reliability, security, performance, compatibility and quality,” Microsoft said. “We’ll also keep collaborating with OEMs, silicon partners, IHVs, ODMs and the broader hardware ecosystem through the Windows Resiliency Initiative, the new Driver Quality Initiative and the work we do together every day.”
In June 2025, Microsoft also announced plans to periodically remove legacy drivers from the Windows Update catalog to mitigate compatibility issues and security risks.
Automated pentesting tools deliver real value, but they were built to answer one question: can an attacker move through the network? They were not built to test whether your controls block threats, your detection rules fire, or your cloud configs hold.
This guide covers the 6 surfaces you actually need to validate.
The round, led by Schroders Capital, follows January’s acquisition of Berlin-based StackFuel and 50% year-on-year revenue growth. Customers include the AA, Babcock, and Capita.
Multiverse, the London-based AI- and tech-upskilling platform founded by Euan Blair, said on Friday it had raised $70m in primary funding led by Schroders Capital, at a $2.1bn valuation.
Existing investors, including General Catalyst, Lightspeed, D1 Capital, Index Ventures, Bond and StepStone Group, joined the round.
The new valuation marks a $400m step up from the company’s $1.7bn Series D in 2022. Multiverse said revenue grew 50% year-on-year for the third consecutive year of accelerating growth, and the company reported its first cash-positive quarter, from January to March 2026. All employees are being offered equity in connection with the raise.
Multiverse is positioning the new round behind a category pitch rather than a product one. Chief executive Euan Blair said on the company’s blog that the firm wants to become ‘Europe’s AI adoption platform’, sitting between the businesses buying AI tools and the workforces meant to use them.
In his framing, the missing layer of the AI stack is not another model or another agent runtime; it is the workforce capable of operating them.
The European push has a foothold already. In January Multiverse completed the acquisition of StackFuel, a Berlin-based data and AI training provider with corporate customers including Mercedes-Benz, IAV and Telefónica, and a stated goal of training 100,000 German workers in AI skills.
StackFuel reports a 92% programme completion rate. Its founders, Leo Marose and Stefan Berntheisel, joined the senior leadership of the combined entity.
On the company’s own numbers, Multiverse has delivered more than £2bn in verified ROI for over 1,000 employers to date, including Babcock, the AA, Capita and Addison Lee.
Atlas, its AI coaching platform, tripled daily active users in the past year. Partnerships have moved upmarket on the tools side, with Microsoft, Palantir and Databricks now named as platform partners.
Multiverse is leaning into a familiar enterprise complaint: AI budgets are up, AI returns are uneven. The company cites BCG’s 2026 AI Radar, which reports corporate AI spend has doubled since last year, and notes that ‘trailblazer’ adopters invest about twice as much in workforce upskilling as ‘follower’ peers.
CEOs surveyed by Multiverse named skills gaps as the second-largest barrier to AI adoption, behind only regulation and ahead of data quality.
The raise comes with a political signal, of the sort British scale-ups now actively seek. Chancellor of the Exchequer Rachel Reeves said in a statement provided by the company that the UK government wants Britain to achieve the fastest rate of AI adoption in the G7, and called Multiverse ‘a fantastic example of a British company helping turn that ambition into reality’. The investment, Reeves added, would ‘support its expansion across Europe’.
Multiverse’s thesis cuts across a louder one in the enterprise AI conversation. Where companies including Klarna have frozen hiring on the argument that AI tools let them do more with fewer people, Multiverse is selling employers on the opposite trade: that the value of an AI deployment is determined by how well the existing workforce can operate it.
The new round is, in effect, a $70m bet that the latter view is the one enterprise buyers will be writing cheques against next.
Multiverse did not name the banks involved or disclose run-rate revenue. The company said the funding would be used to accelerate European expansion, with no further geographic breakdown.
The next time you refuel, you may notice several different grades of gasoline for sale. You are probably familiar with regular unleaded and premium unleaded. But there is another grade in between, often labeled “Plus” or sold under another name.
A gasoline’s octane rating is typically shown by a large number on the pump. According to the U.S. Energy Information Administration, 87 octane is regular, mid-grade gas is 89 or 90 octane, and premium is between 91 and 94. What the different octane numbers show is each fuel’s resistance to knocking, with higher octane numbers indicating a higher knock resistance.
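The EIA bands above map cleanly to a lookup rule. The helper below is purely illustrative (the function name and the treatment of out-of-band ratings are invented, not an industry standard):

```python
# Hypothetical helper mapping a pump octane rating to the EIA grade bands
# quoted above: 87 regular, 89-90 mid-grade, 91-94 premium.

def grade_for_octane(rating):
    if 91 <= rating <= 94:
        return "premium"
    if rating in (89, 90):
        return "mid-grade"
    if rating == 87:
        return "regular"
    return "unclassified"  # outside the bands the EIA lists
```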
Knocking coming from your engine, also called premature detonation, means your fuel is not burning evenly in one or more cylinders, which can have numerous causes and will result in serious damage if not fixed promptly. While engines with higher performance levels tend to require premium fuel that is less likely to knock, average, lower-stressed engines should be fine with regular gas. The 89 octane mid-grade falls in the middle.
The origin story of 89 octane unleaded starts in 1975, when the EPA began phasing out leaded gasoline and the transition to unleaded began, long before top tier gas existed. Because unleaded gas requires more processing than leaded gas, unleaded premium cost more than its leaded equivalent. Gas station operators marketed this 89-octane gas as a middle option: cheaper than premium, with higher octane than regular. It never really caught on, and it currently amounts to about eight percent of total gasoline sales. No vehicles made today require 89-octane fuel.
Is there ever a reason to use 89 octane gas instead of 87 octane regular?
If your vehicle runs fine on regular, without any pinging or knocking sounds, and you are maintaining it according to the manufacturer’s recommended service intervals, there is nothing to do. Continue using regular in your car, because you don’t have a problem, especially since gas prices just hit a four-year high. If you are in doubt, check your owner’s manual for the correct fuel recommended by the manufacturer.
But let’s say that your engine pings or knocks when using regular gas. While this can be caused by gas that is too low in octane, it can also be a result of a build-up of carbon deposits in your engine’s combustion chambers, worn spark plugs, an air/fuel mixture that’s too lean, bad spark timing, or an engine that is overheating. It’s probably a good idea to make a date with your mechanic, so that you can rule out all of the non-fuel-related issues first. Otherwise, you may be paying more for that higher octane fuel without seeing any benefits, which is a fuel myth you really should stop believing.
In case you are wondering about how 89-octane, a type of gasoline that makes up a measly eight percent of the market and is not required in any production vehicles, can still be offered at the pumps, there’s a little trick to how it is made. The 89-octane gas is made by blending high-octane premium with low-octane regular, which can usually be done right at the pump. So, whoever wants it is free to buy it.
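The blend itself is simple arithmetic: octane mixes roughly linearly by volume, so the premium fraction needed is (target − regular) / (premium − regular). Assuming example ratings of 87 for regular and 93 for premium (premium octane varies by region), 89 octane works out to one part premium to two parts regular:

```python
# Linear-blend arithmetic for mid-grade gasoline. Octane blends roughly
# linearly by volume, so the premium fraction f needed for a target is:
#   f = (target - regular) / (premium - regular)
# The 87/93 ratings are example values; actual premium octane varies by region.

def premium_fraction(target, regular=87.0, premium=93.0):
    return (target - regular) / (premium - regular)

f = premium_fraction(89.0)               # 1/3 premium, 2/3 regular
blended = 87.0 * (1 - f) + 93.0 * f      # recombining gives 89.0 back
```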
Nvidia is known for making some of the best graphics cards, and these days, a lot of them end up powering AI-related workloads instead of games. Graphics processing units (GPUs) are the very foundation of the data centers that make AI possible. The funny thing is that the circle of AI life goes on and on, as Nvidia now also uses AI to help create new chips, which later end up in GPUs. A recent interview revealed that this leads to benefits like faster chip design, fewer man-hours used on certain tasks, and even new, innovative, sometimes odd ways to approach existing problems.
In a discussion with Google’s Jeff Dean at the 2026 GPU Technology Conference (GTC), Bill Dally, Nvidia’s chief scientist and senior vice president of research, revealed that the chipmaker is trying to introduce AI at every step of GPU design. The headliner is, undoubtedly, the fact that Nvidia used AI to save so much time and money during one stage of the process.
Whenever a new semiconductor process is introduced (essentially the process node that Nvidia then builds its GPU around), the company needs to port its standard cell library to it, which adds up to around 2,500 to 3,000 cells. Completing this task used to take eight people around 10 months.
Nvidia then designed NVCell, a program that completes this time-consuming task in one night on just one GPU. There seems to be no catch here, as Dally clarified that the results were better than what human engineers produced.
Nvidia uses AI for more than just porting cell libraries
It’s unclear how long it took for Nvidia to develop the reinforcement learning-based NVCell, but it does seem to be paying off. Not only is Nvidia saving precious time, but it’s also making it easier to move on to a new process when the next generation of GPUs comes around. While replacing eight engineers with one GPU seemed like the biggest win Nvidia had to share, Dally also listed a couple more ways in which the company is leaning into AI in its design process.
The next tool is Prefix RL, and to spare you a highly technical explainer, its job is tackling various chip design options. The software tries out candidate designs in a process of trial and error, grading itself to learn from its attempts. Dally said the tool comes up with all sorts of odd ideas, but at the end of the day, they’re 20-30% better than human designs.
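The propose-grade-keep loop described above can be illustrated with a toy search. The sketch below is plain hill-climbing over an invented design/cost model, a deliberately simplified stand-in for the idea, not Nvidia's actual reinforcement-learning-based Prefix RL:

```python
# Toy stand-in for the trial-and-error loop described above: propose a small
# variant, grade it, keep it if no worse. This is simple hill-climbing over an
# invented "design" (a bit vector) and "cost" model, not Nvidia's Prefix RL.
import random

def cost(design):
    # Hypothetical grader: fewer active elements and a shorter span are better.
    active_positions = [i for i, bit in enumerate(design) if bit]
    return sum(design) + (max(active_positions) if active_positions else 0)

def search(n_bits=16, steps=2000, seed=0):
    rng = random.Random(seed)
    best = [rng.randint(0, 1) for _ in range(n_bits)]
    for _ in range(steps):
        candidate = best[:]
        candidate[rng.randrange(n_bits)] ^= 1   # propose a small change
        if cost(candidate) <= cost(best):       # grade against the incumbent
            best = candidate
    return best, cost(best)
```

Because only cost-reducing flips are ever accepted here, the loop descends monotonically; real RL-based design search adds learned policies and exploration on top of this basic propose-and-grade idea.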
Nvidia also uses AI to free up some time that its senior engineers had to spend helping more junior colleagues. Its internal large language models (LLMs), Chip Nemo and Bug Nemo, were trained on Nvidia’s proprietary database and codebase, so they know everything there is to know about the way Nvidia builds and designs GPUs. Armed with that knowledge, those LLMs can help junior engineers and explain complex concepts in an approachable manner. On the consumer side, Nvidia recently revealed Alpamayo, bringing AI models to self-driving cars.
Few pieces of technology capture attention quite like a device that launched an era. In September 2008 T-Mobile teamed up with Google to reveal the G1, a phone built from the ground up to run Android software. Available starting that October for customers on a two-year plan, it arrived at stores priced at $179 ($277 today) and immediately drew crowds eager to try something different.
When users slid the screen open, they discovered a full QWERTY keyboard stashed beneath it, a lifesaver for hammering out emails or jotting down quick notes without hunting for tiny on-screen buttons. The 3.2″ capacitive touchscreen sat above the keyboard, displaying 320 x 480 pixel pictures and menus that came to life with a tap or a swipe of the finger.
A little trackball tucked itself just below the screen, making it easy to scroll through long lists or zoom into maps, actions that were difficult to perform on the glass without it. Nearby buttons handled the typical navigation, while a dedicated search key quickly returned Google results. Power came from a Qualcomm MSM7201A processor running at 528MHz, supported by 192MB of memory. In the box, buyers would find a 1GB microSD card already installed and ready to go, with room to swap in an 8GB card if needed.
The 3.15MP rear camera took clear and detailed shots, and the autofocus quickly locked onto whatever you were trying to photograph. Despite the lack of a flash, daylight photographs typically looked far better than you’d expect for a phone of this period. The removable 1150 milliamp-hour battery pack provided around five hours of talk time and more than five days on standby. It took only a few seconds to swap in a spare, and you were back up and running without having to find a charger.
The G1 launched with new software, the first version of Android, and did an excellent job of connecting with Google services from the start. Gmail pushed new messages instantly, and Maps showed streets in sharp clarity and could even spin to mirror the real world using a built-in compass. YouTube videos played smoothly over Wi-Fi or a carrier data connection, and the new Android Market allowed you to download additional apps directly to your smartphone.
Connectivity-wise, the phone had all the bases covered, as it functioned on GSM networks, could do 3G speeds when available, easily snagged Wi-Fi networks, quickly paired with Bluetooth headsets, and provided accurate turn-by-turn directions via GPS. It weighed 158 grams and was just over half an inch thick, so it fit comfortably inside a pocket even with the sliding mechanism. You might choose between black, white, or brown to match your unique taste.
Pershing Square has taken a new position in Microsoft, with the size to be disclosed in a 13F filing later on Friday. The stock is down roughly 16% year-to-date.
Bill Ackman has bought Microsoft. Pershing Square’s chief executive said on X on Friday morning that the fund had taken a new position in the software company after its recent share-price decline, with the size to be disclosed in a regulatory filing later in the day.
Ackman’s stated rationale was that the market is mispricing the enterprise franchise rather than the AI one. Investors have underestimated Microsoft’s software ‘given its deeply embedded role across enterprises and highly attractive price-value proposition’, he wrote, framing the position as a quality-compounder bet on the installed base rather than a directional call on Azure capex.
The timing is the substance of the trade. Microsoft shares are down roughly 16% year-to-date and have traded near $413 since late April, when chief financial officer Amy Hood used the company’s fiscal Q3 results to raise full-year capital expenditure guidance to about $190bn, well above the roughly $155bn analysts had penciled in.
The results themselves were a beat. Azure grew 40%, the AI run-rate hit $37bn, and total revenue cleared $82.9bn. The stock fell anyway, on what one widely circulated buy-side note called ‘the $190bn capex plan that repriced AI’.
Ackman has run this play before this year. Pershing Square disclosed a new stake in Meta in February, three weeks after that company’s own capex-driven sell-off, with Ackman describing the position at the time as a ‘deeply discounted valuation’.
The Microsoft entry follows the same shape: a megacap dragged lower by an AI-spending guide, framed by Ackman as an opportunity to buy a high-quality franchise at a temporarily de-rated multiple.
Funds running more than $100m are required to file Form 13F disclosures of US-listed positions within 45 days of quarter-end, which makes Friday a heavy day for hedge-fund reading.
Pershing Square’s last 13F, covering the December quarter, showed eleven positions and roughly $16bn in disclosed US holdings concentrated in Brookfield, Uber, Amazon, Alphabet and Meta. Microsoft did not appear. Today’s filing will show whether the firm has trimmed any existing names to fund the new one or sized it from cash.
The trade also lands inside a wider AI-infrastructure debate. Hyperscalers have committed more than $650bn to AI capex across 2026, on the combined Q1 numbers from Microsoft, Alphabet, Amazon, Meta and Apple, and the market is now pricing the question of when, or whether, that spend converts cleanly into operating earnings.
Ackman is, in effect, arguing that Microsoft’s existing Office, Windows and Azure book of business is enough to clear the bar, separately from the AI optionality.
Microsoft’s deep integration of OpenAI’s models across Copilot, Azure and the developer stack has been the dominant narrative in the company’s repricing over the past three years (TNW has tracked the arc). The capex bill is the cost of holding that lead. Ackman’s bet is that the enterprise software business underneath it is being given less credit than it should.
Pershing Square has not disclosed the size of the position or the average purchase price. The 13F filing is expected later on Friday US time.