
Tech

NYT Connections hints and answers for Thursday, April 30 (game #1054)


Good morning! Let’s play Connections, the NYT’s clever word game that challenges you to group answers in various categories. It can be tough, so read on if you need Connections hints.



The Goodyear Tires Discount You Might Not Know USAA Members Have Access To






The United Services Automobile Association (USAA) is more than just an insurance service for veterans — it also comes with tons of rewards and offers through USAA Perks. This includes 25% off on new Goodyear tires, which could lead to some pretty impressive savings. Made right here in the United States, Goodyear’s wide range of tires has impressive treadwear ratings to ensure better safety in various driving conditions. Some of these tires can cost hundreds each, so USAA’s discount can help out a bit.

However, there are a few things to keep in mind if you want that 25% off. The tires must be purchased directly from Goodyear’s website and put on by one of the installers listed on the site — just enter your zip code so you can find one in your area. It can’t be installed by USAA. The 25% discount may be used in combination with a Goodyear tire rebate offer, but it can’t be combined with any other promotions.


USAA has other car care benefits

There are plenty of other vehicle-related benefits that come with USAA Perks. Members can utilize negotiated prices on vehicle maintenance and fuel from brands around the United States, including Pep Boys, Jiffy Lube, Firestone, Chevron, Exxon/Mobil, Kwik Trip, and Speedway. Thanks to a partnership with CarAdvise, USAA members won’t pay retail prices at these locations — instead, they will get pre-negotiated prices up to 26% off before arrival.


On top of that, you can get a CarAdvise Fuel Card to save 5 cents per gallon at over 60,000 gas stations across the nation, including a wide range of brands. However, you do need to pay a one-time $4.95 processing fee when you first get your card. You can use the card to track fuel purchases and see how much you’ve saved. With gas prices so high that some states are suspending their gas taxes, any savings are welcome when you’re looking for tips to save money at the pump.
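The fee pays for itself quickly. A quick back-of-the-envelope sketch of the break-even point (plain arithmetic, not anything from USAA's or CarAdvise's own materials):

```python
FUEL_CARD_FEE = 4.95        # one-time processing fee, in dollars
SAVINGS_PER_GALLON = 0.05   # 5 cents off per gallon

def breakeven_gallons(fee: float = FUEL_CARD_FEE,
                      per_gallon: float = SAVINGS_PER_GALLON) -> float:
    """Gallons of gas needed before the card's savings cover its fee."""
    return fee / per_gallon

print(breakeven_gallons())  # ≈ 99 gallons, roughly half a dozen fill-ups
```

After that point, every gallon is pure savings.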





Iodyne Pro Data 24TB review: Specs, features, price


The Iodyne Pro Data 24TB delivers enormous uninterrupted transfer speed, isn’t network attached, and it isn’t limited to one user. It’s also a $14,995 wallet-breaking money-saver for the right audience.

It’s not every day we get a second loaner for a review product years after the fact.

The market has changed, and workflows have changed, since we first reviewed the Iodyne Pro Data. Video workflows are getting bigger and bigger with 8K, HDR, 3D, and so forth. A single iPod like the one the Lord of the Rings dailies were shuttled around on is a thing of the past.

Even Thunderbolt 5 storage isn’t as fast as it could be. The media inside is held back by its cache, with writes slowing as that cache fills up during large transfers.


The Iodyne Pro Data aims to let the user have their cake and eat it too. It is, in effect, a giant external drive that can be accessed by multiple Macs at the same time.

All at Thunderbolt speeds, uninterrupted by full caches, and not throttled by transferring over a network.

It’s costly, of course. It’s also a money-saver if you’re moving enormous files around.

Iodyne Pro Data 24TB review: Physical design

The Pro Data is hefty. At 15.39 inches long by 10 inches wide, it has a considerable footprint on any desk. It’s also 1.22 inches thick, or 1.4 inches including the feet.


So, it’s fortunate that there’s a vertical stand included.

Closed dark blue MacBook with Apple logo resting on top of a larger gray device featuring horizontal ventilation slats, all placed on a light-colored surface

Iodyne Pro Data 24TB review: 13-inch MacBook Air for scale

It’s physically larger than a 16-inch MacBook Pro. It also happens to be heavier than a MacBook Pro, at 7.3 pounds. Its aluminum enclosure, which helps with thermal management, accounts for much of that weight.

I tested putting it into the ebags Pro Slim Laptop Backpack, a pretty typical tech bag capable of holding a 17-inch notebook. It fits, but only barely. If your bag is thick enough, you can cram in your 16-inch MacBook Pro, too, but don’t try this with one of the thinner bags.

Partially open gray laptop bag on a white surface, revealing the edge and cooling vents of a laptop or electronic device inside, with visible zippers and orange interior lining

Iodyne Pro Data 24TB review: It just about fits in a backpack.

For single-person use, this is really impractical compared to a much smaller and lighter external drive. And, a single person can store data locally.

But, in the context of being used by a group of people on a project, this is still relatively portable. At least, it’s better than your typical boxy NAS in this respect.

Rectangular iodyne Pro Data external storage device on a desk, with a black iodyne-5301 power supply brick resting on top, connected by a cable on the right

Iodyne Pro Data 24TB review: A relatively small power brick


The supplied power brick is relatively small and is a 180W Gallium Nitride (GaN) charger. It’s a merciful addition, given the overall mass of the unit.

Iodyne Pro Data 24TB review: Connectivity

The interesting thing about the Iodyne Pro Data is that it is intended as a fast storage device that runs off Thunderbolt, for multiple users. That lends itself to the relatively lean connection setup at hand here.

On one edge, there are eight Thunderbolt ports, each of which connects at 40Gbps. They are divided up into pairs, with each consisting of an upstream to a Mac and a downstream for other hardware to be connected.

Close-up of a sleek gray electronic dock with a ribbed metal top and several USBC or Thunderbolt ports lined along the curved front edge on a white surface

Iodyne Pro Data 24TB review: Port pairs


For the upstream ports, you’ve got two options. One: up to four Macs can each connect and access the storage.

And two, the more interesting use case: if you need even more speed, you can connect two of the upstream ports to a single Mac.

As originally reviewed, and is still the case today, each port is 40Gbps.

As for the downstream ports, each can be used to daisy-chain more Thunderbolt devices. You can connect up to six devices as a daisy-chain for each Thunderbolt pair, though that chain only works with the host connected to that pair’s upstream port.


That means if you have two upstream connections to one Mac, the host can also use two of the daisy chains, in what Iodyne calls Thunderbolt Multipathing.

It’s possible to use all four Thunderbolt connections with one host Mac. That’s really only practical if you want to maximize the daisy-chaining capability, and it isn’t possible at all on the MacBook Pro, since there are only three Thunderbolt ports now.

And yes, to be clear, all computers connected to the upstream ports can access the storage in the device.

As for host connectivity, a pair of 1-meter (3.3-foot) Thunderbolt cables is included. You will need to buy more, and longer, cables if you want to connect additional Macs.


There’s support for macOS 13.0 or later, with Windows 10 version 21H2 and Ubuntu 22.04 or later also capable of connecting to the device.

Iodyne Pro Data 24TB review: Storage

The Pro Data includes 12 NVMe SSDs and is sold in capacities from 12TB to 192TB. The unit supplied for this review is the 24TB version, holding twelve 2TB drives.

It is possible to expand the storage considerably, with Iodyne claiming it can go up to 6.9 petabytes. In practice, the built-in drives max out at 576TB, with the petabyte figure reached via daisy-chaining.

This would be an astronomically expensive thing to do, but at least there’s headroom.

Open electronic device with large metal heatsink on the left and right, exposing a blue circuit board full of chips, capacitors, connectors, and black cooling fins in a rectangular enclosure

Iodyne Pro Data 24TB review: You can take the cover off to access the drives.

If you do want to add more, it is possible to take the enclosure off and replace the NVMe drives yourself. There’s no fixed-in-place storage here.

The panel can be removed by loosening just two screws, with each NVMe M.2 SSD able to be pulled after removing one more. Each module also has its own heatsink to help cool each drive.

All of these drives are connected and configured under RAID-0 or RAID-6. RAID-0 stripes data across all drives with no redundancy, so it’s full-speed but without a failsafe option.


RAID-6 is the more favorable one, as it uses dual parity to allow for two drives in the array to fail and still keep the data intact, while sacrificing some capacity. This provides robust redundancy, which, for the kind of projects this sort of drive would be used for, is the best option.

For the 24TB version supplied to us, that equates to 20 terabytes of usable storage.
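The RAID arithmetic is simple enough to sketch. A minimal Python model of the two modes the Pro Data offers (the function and its RAID-0/RAID-6-only scope are my illustration, not Iodyne's management software):

```python
def usable_capacity_tb(drive_count: int, drive_tb: float, raid_level: int) -> float:
    """Usable capacity of a striped array.

    RAID-0 stripes across all drives with no parity, so every terabyte
    is usable. RAID-6 reserves the equivalent of two drives for dual
    parity, which is what lets two drives fail without data loss.
    """
    if raid_level == 0:
        return drive_count * drive_tb
    if raid_level == 6:
        if drive_count < 4:
            raise ValueError("RAID-6 needs at least 4 drives")
        return (drive_count - 2) * drive_tb
    raise ValueError("only RAID-0 and RAID-6 are modeled here")

# The 24TB review unit: twelve 2TB NVMe modules.
print(usable_capacity_tb(12, 2, 0))  # 24 TB usable under RAID-0
print(usable_capacity_tb(12, 2, 6))  # 20 TB usable under RAID-6
```

That two-drive parity overhead is the capacity sacrifice mentioned above, and it shrinks proportionally as drive counts grow.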

The supplied software to manage and configure the device lets you set up separate containers with different properties. For example, one container could have RAID-6 and a large capacity as well as a password, while another could be a RAID-0 scratch disk without a password.

Practically speaking, you can configure storage for specific users or Macs, or for multiple Macs to use, depending on the task.


You can enable per-container passwords, using XTS-AES-256 encryption and a hardware Secure Enclave. Up to 15 containers can be set up per unit, which should be more than enough for small teams.

The software management in the app is also used to monitor the health of each installed SSD, warning of hardware issues when they come up.

You can also register the unit with the Iodyne Cloud, though it’s not a cloud storage service. Really, it takes telemetry reports on the health of the Pro Data itself and the SSD modules, not stored data.

This is very handy since replacements for under-warranty drives can be sent to users automatically at no charge. Users are also guided on how to replace the drive to minimize downtime.


Iodyne Pro Data 24TB review: Performance

I want to put this right at the front of this section, as it is key to the entire product and why it exists.

This unit will run at maximum speed, essentially until the drive is full. You won’t be held back by slow SSD caches as the transfer size increases.

According to Iodyne, it is capable of up to 5.2 gigabytes per second for read speeds and up to 2.4 gigabytes per second for writes.

This sounds impressive, and it is. It’s also something we observed for ourselves, with 5.2 GBps on reads and 2.2 GBps for writes under multi-path RAID-0.


Single-path connections will be a little limited by the 40Gbps Thunderbolt connectivity. However, at 3.1 GBps for reads and 1.8GBps for writes, also under RAID-0, it’s still more than adequate for a single transfer.

Dark macOS application windows showing storage management: left panel provisioning a new RAID-6 APFS container named workspace; right panel displaying Pro Data 24T device status with twelve SSDs and fan indicators.

Iodyne Pro Data 24TB review: Management software.

Throw multiple users at it, and total bandwidth becomes the bottleneck as each connection consumes a share. But even that is an extreme case.

In our testing, the speeds aren’t linearly cut, but you do see a bit of a drop as more devices connect up. Connecting two Macs using two Thunderbolt cables each and with different containers, reads reached 2.6 gigabytes per second, and writes were at 950 megabytes per second.


At three devices, we saw 2.1 gigabytes per second reads and 700 megabytes per second writes.

Changing over to RAID-6 instead of RAID-0, performance does dip a tiny bit. But, at about 200 megabytes per second down for both reads and writes, and under single- and multi-path modes, this is still a pretty speedy connection here.

One key point to clarify here is that the connection speeds are sustained over several hours. The bandwidth doesn’t dip over time as data is thrown at it.

Single- or dual-drive units will hit a transfer wall quickly. Each SSD has an onboard cache, which absorbs as much of the inbound data as possible and feeds it into the main storage element over time.


Normally, this results in a fast transfer at first, either to DRAM or relatively faster flash media, before slowing as the cache gets full. However, since we’re talking about 12 drives and therefore 12 cache allocations, that’s constant cache availability, especially since the data is striped across drives.

The sheer number of drives and caches means that you’re just about always going to have this high level of transfer speed.

And that’s the key to the Iodyne Pro Data. If you’re moving 20TB of data, it can take half a day on a dual-drive enclosure. It will just take a few hours on this unit.
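Those time estimates check out with simple arithmetic. A quick sketch, assuming roughly 0.45 GB/s sustained for a cache-starved dual-drive enclosure (my estimate, not a measured figure) versus the 2.2 GB/s sustained writes observed on the Pro Data:

```python
def transfer_hours(data_tb: float, sustained_gb_per_s: float) -> float:
    """Hours to move data_tb terabytes at a sustained rate, decimal units."""
    gigabytes = data_tb * 1000
    return gigabytes / sustained_gb_per_s / 3600

# Assumed post-cache speed for a typical dual-drive enclosure:
print(round(transfer_hours(20, 0.45), 1))  # ≈ 12.3 hours, i.e. half a day
# Observed Pro Data multi-path RAID-0 write speed:
print(round(transfer_hours(20, 2.2), 1))   # ≈ 2.5 hours
```

The gap only widens on reads, where the Pro Data sustains 5.2 GB/s.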

If you buy one, take advantage of the container capabilities. There’s no versioning in play here, just bare RAID storage, so you have to be careful of users potentially overwriting the work of others if they are all working collaboratively on the same file.


Iodyne Pro Data 24TB review: It’s expensive and probably not for you

The idea of a massive and fast data store is a very appealing thought for most computer users. That said, the vast majority of people have no real need for this sort of device in the first place.

Partly because of the price, partly because of its utility.

It is safe to say that the cost is prohibitive for your average home user. To get the cheapest configuration at 12TB, you would have to pay $5,995.

The version sent to us, 24TB, would set you back a steep $14,995, with 48TB at $29,995, and 96TB for $58,995. The top-spec option, 192TB, is $117,995. The two new capacities were released after our first review, and the price of the smaller ones was half of what it is now.


Again, thanks to the AI data farms buying up all the flash media that’s made. This is your fault.

The key thing to remember here is that this is really specialized gear: Thunderbolt storage designed to work with multiple hosts at consistent data speeds, a genuinely narrow use case.

In the course of this second review, I’ve spoken to animation houses and filmmakers behind movies you’ve seen, military and federal folks who need consistent transfer speeds, and a few large YouTube channels to boot.

To a person, they all salivated at the hardware. They uniformly said that this would fix one workflow or another, where data ingestion speeds and access to that data by more than one user were major, major bottlenecks for production.


That said, home users working on just one Mac at home would find getting a NAS or a normal external drive to be a much more fiscally prudent approach.

Really, this sort of hardware is made for groups of people who deal with a ton of data and therefore need consistently high speed. That, as well as the pricing, puts it firmly into enterprise, federal, and creative industry offices.

If you’re producing a video and need to offload tons of video to a central store, so it can then be worked on by editors who are also on location, this device makes perfect sense. It’s more than fast enough to ingest footage and have that data available instantly for editors to immediately work on it.

Its size is also an advantage: that same team of people is likely already used to carrying around a lot of other equipment, and a seven-pound storage appliance shaped like a very large notebook wouldn’t be much of a burden in that instance.


The mention of small teams working closely together on location is also apt, since it’s all based on Thunderbolt connections. If you want to connect at the maximum speed the 40Gbps Thunderbolt connections can manage, you’re going to be limited to keeping your Mac within about nine feet of the device.

A NAS device using Ethernet can cover a very large area, but in 2026 and probably through 2035, will not come close to delivering this speed. If you want the speed, you’re going to have to play within the limitations of the Thunderbolt specifications, and shell out for some expensive cables too.

As it stands, the Iodyne Pro Data 24TB is a great tool for YouTubers and others who need both capacity and speed, and who can afford it. In that respect, there’s no complaint to be made.

Calling it overkill for a home user who happens to have the spare cash lying around for it is an understatement. Unless they happen to be working on projects that require high-speed storage access in a locally collaborative fashion, there’s no need for this.


For the kinds of groups and situations where the Iodyne Pro Data is useful, it is worth its weight in your choice of precious metal.

The average user, or even the most prosumer user, should not even begin to think about getting one.

Iodyne Pro Data 24TB review pros

  • Massive bandwidth and massive, fast storage
  • User serviceable
  • Per-host daisy-chaining

Iodyne Pro Data 24TB review cons

  • Usage range is limited by Thunderbolt cable specifications
  • Massively expensive

Rating: 4.5 out of 5

I hate giving scores because they will never be universal. It’s clear that this product is not for the home, not for the small office, and not even for most large companies.

To be clear, the score here is based on it being useful for the target market, its intended purpose being to move mass quantities of data around, as fast as possible, for as long as possible.


For that, it is an incredible product. For that, it is best in class, and it is not close right now.

There’s no better product in this capacity to do that. You know if you need it already, and if you’re on the fence, you probably don’t, and have better options.

It’s been incredibly fun showing this off to people, and having that kind of consistent speed has been a joy to play around with. I’m going to miss it when it goes back.

Where to buy the Iodyne Pro Data 24TB

Iodyne sells the Pro Data directly, starting from $5,995 for 12TB. The 24TB model loaned for this review costs $14,995.


It’s also available from B&H Photo, with the 12TB priced at $5,995 and the 24TB at $14,495.


How To Kill Humidity Sensors With Humidity


An often overlooked section in the datasheets for popular humidity sensors like the BME280 and DHT22 is the ‘non-condensing humidity’ bit, which puts an important constraint on the environments these sensors can be used in. This was the painful lesson that [Mellow Labs] recently learned when several such sensors kicked the bucket after being used in a nicely steamed-up bathroom. Fortunately, it introduced him to sensors rated for use in condensing humidity environments, such as the SHT40 that’s demonstrated in the video.

This particular sensor is made by Sensirion, and as we can see in the datasheet it features a built-in heater that allows it to keep working even in a condensing environment. This heater has three heating levels which are controlled via the I2C interface, though duration is limited to one second in order to prevent overheating the sensor.

Of note is that you cannot take measurements while the heater is operating, and its use obviously increases power draw significantly. That mostly leaves when to turn on the heater as an exercise for the engineer, with [Mellow Labs] opting to start the heater when relative humidity hits 70% as a conservative choice.
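That trigger logic can be sketched in a few lines of Python. The driver interface here (`read_rh()`, `heat_pulse()`, `heater_busy`) is hypothetical, standing in for whatever I2C library you actually use; only the 70% threshold and the one-second pulse cap come from the write-up above:

```python
RH_HEATER_THRESHOLD = 70.0  # percent RH; the conservative trigger point

def should_run_heater(relative_humidity: float, heater_busy: bool) -> bool:
    """Decide whether to fire the sensor's built-in heater.

    Measurements are unavailable while the heater runs, so we never
    trigger a new pulse while one is already in flight.
    """
    return relative_humidity >= RH_HEATER_THRESHOLD and not heater_busy

def control_step(sensor) -> float:
    """One pass of a hypothetical control loop over an I2C driver object."""
    rh = sensor.read_rh()
    if should_run_heater(rh, sensor.heater_busy):
        # the SHT40 limits each pulse to one second to avoid cooking itself
        sensor.heat_pulse(level="medium", duration_s=1.0)
    return rh
```

In a real deployment you would also want to discard the first reading after a heat pulse, since the heater skews the temperature (and therefore relative humidity) measurement until the die cools.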


In the comments to the video other options for suitable sensors were pitched, including the Bosch BME690 which is similarly rated for condensing environments. All of which condenses down to the importance of reading the datasheet for any part that you intend to use in possibly demanding environments.


Leading Cancer Charity Stops Funding Open Access Publishing Because It’s Just Not Working

Published

on

from the publishers-mess-up-everything dept

As numerous posts on this blog have emphasised, the underlying idea of open access (OA) – allowing anyone to read and share published academic research for free – is great in principle, but in practice has failed in important ways. That’s because traditional academic publishers have subverted the open access model to such an extent that the costs for research institutions of publishing in OA journals have barely changed at all. And yet one of the other key aims of open access was to save money while widening availability. Against that background, a natural question to ask is: if open access has failed to deliver savings, why bother supporting it? Cancer Research UK, the world’s leading cancer charity, has evidently asked itself that question and come up with an answer, which it explains in a post entitled “Why we won’t be funding open access publishing any more”:

We need efficient scholarly communications to spread scientific ideas via a fair economic model. We currently don’t have that. The open access movement was bold and promising, but ultimately disappointing. Now is the time to stop and call for a new way to make publishing work…

Ceasing to fund open access in the way we currently do will save us £5.2m of donors’ money over the next three years. That’s a substantial amount which can be put towards cancer research.

The post by Dan Burkwood, Director of Research Operations and Communications at Cancer Research UK, explains what exactly the problem is:


We currently fund open access publishing for our researchers in a number of ways. Despite hopes that this would enable a flourishing of open access dissemination of science, most of the growth has occurred in hybrid journals. These are publications that combine OA articles with those behind a paywall – this means the publishers will still charge for university and institute libraries to access them, even though researchers have paid for their work to be published. For us, this means we currently use donated money to fund our researchers, institutes and centres to publish OA research articles, yet they still have to pay to access the majority of journals in which those articles appear. The publishers are – so to speak – having their cake whilst also eating it.

These so-called “hybrid models” are discussed at length in Chapter 3 of Walled Culture the book (free digital versions available). They were presented as a transitional approach towards journals that were fully open access, but in many cases that transition hasn’t happened, not least because the hybrid model is so profitable for publishers, who therefore have little incentive to move to fully open access titles. Burkwood rightly points to a key reason why academic publishers continue to wield such power: the academic world’s insistence on using published articles in prestigious titles as a metric of success.

Cancer Research UK are working to widen the way we evaluate research in order to mitigate the heavy focus on publication outputs. It’s clear to us that a broader view of an applicant’s career is vital to gauge potential success. By signing up to DORA (San Francisco Declaration on Research Assessment), we encourage our reviewers to assess the quality and impact of research through means other than just journal impact factor. Additionally, we invite applicants to submit a narrative CV, allowing a more holistic view of their track record, research outputs and career progression.

But as he acknowledges, “Despite our, and others, attempts to limit the emphasis of the ‘publish-or-perish’ mindset, it will take time for the culture to change.” In the meantime, he suggests:

If researchers have no access to publishing funds they can publish their work for open access at no cost, but the publication will sit behind a paywall for 6 months (under embargo) before being deposited on Europe PMC open access – this is known as green open access.

Green open access provides full and free access to papers, but only after an embargo period, typically six months, but sometimes longer (gold open access provides instant access, but requires payment by researchers’ institutions.) That makes green OA a poor substitute for real, immediate open access.

The problem here is that such embargo periods have long been accepted as the norm, but that is only because a terrible blunder was made over two decades ago by the Research Councils UK (RCUK). In 2005, the RCUK stipulated that the work it funded would require open access publication. However, when the final version of the RCUK’s policy appeared in June 2006, it had a significant flaw, expressed in the following provision: ‘Full implementation of these requirements must be undertaken such that current copyright and licensing policies, for example embargo periods or provisions limiting the use of deposited content to non-commercial purposes, are respected by authors.’ As the leading open access scholar Peter Suber wrote at the time, this was a completely unnecessary concession:


Researchers sign funding contracts with the research councils long before they sign copyright transfer agreements with publishers. Funders have a right to dictate terms, such as mandated open access, precisely because they are upstream from publishers. If one condition of the funding contract is that the grantee will deposit the peer-reviewed version of any resulting publication in an open-access repository [immediately], then publishers have no right to intervene.

At the root of the issue of embargoes lies copyright. If researchers retained full control of the copyright of their articles, rather than assigning it to publishers, they could prevent any embargoes being applied to them.

Cancer Research UK’s decision is regrettable but understandable. The fear has to be that others will follow suit. While the hybrid model is not universal, it is widespread enough to undermine the open access idea. Until researchers refuse to publish in such hybrid titles, publishers will continue to profit from them. Given the unnecessary embargoes imposed on articles released under green open access, that leaves alternatives such as diamond open access, where there are no charges for anyone, an approach that has long been espoused on this blog.

Follow me @glynmoody on Mastodon and on Bluesky. Originally posted to Walled Culture.

Filed Under: academic publishing, cancer research, copyright, hybrid, knowledge, open access, research

Companies: cancer research uk


28 Years Later: The Bone Temple 4K UHD Review: Still Infected or Just Going Through the Motions?


They’re taking their commitment to the number 28 mighty seriously: First they gave us 28 Days Later, then Weeks… okay, they skipped months, but the leap to an entirely new generation brought us Danny Boyle’s 28 Years Later, and now Nia DaCosta’s direct sequel, The Bone Temple, filmed immediately after. Alex Garland, who created and wrote most of the “28 Universe” saga (he sat out Weeks), has scripted this entire new trilogy that’s been unfolding since last year, and he wastes no time reminding us what a savage world this is, never the same once the experimental rage virus got loose. The infected are still scattered across the British countryside, and we’re introduced to a sadistic gang of “Jimmys” who perform unspeakable acts on any innocent folk they encounter. (I had to avert my eyes at least once.)

Amid all of this post-apocalyptic tension, a lone National Health Service doctor (Ralph Fiennes) has been developing a means to overcome the effects of the virus and even befriends the current alpha, Samson, offering a glimmer of hope for the future. It’s a fine entry in the series, but there is an air of sameness to it, as certain characters and plot developments will seem strikingly familiar if you’ve watched more than a few tales of zombies and their ilk.

28 Years Later: The Bone Temple 4K UHD Blu-ray Steelbook Front Cover

We’ve come a long way since the hazy sub-HD video of the first film back in 2002, shot as it was on consumer-grade MiniDV camcorders. Such beautiful cinematography on Bone Temple for such brutal content is a bit ironic, giving us crisp, subjective focus on some lovely landscapes but also no shortage of disturbing sights. Colors are gorgeous, even when they’re in service of the indelible image of a bald, shirtless, iodine-covered Fiennes and his piercing blue eyes. The HDR earns its keep in the many dark, dark scenes, often lit by a plethora of small candles and lanterns.

The Atmos soundtrack displays an engrossing three-dimensional presence across a range of environments, most frequently the birds, insects and miscellany of the living forest. Trebles get their due, be it the sharp clang of a steel blade or the sometimes tinny renditions of Duran Duran hits playing on an old, entry-level turntable. Oscar-winner Hildur Guðnadóttir’s eclectic original musical compositions effectively enhance the emotions in a variety of scenes.

28 Years Later: The Bone Temple 4K UHD Blu-ray Steelbook Back Cover

Director DaCosta provides a solo audio commentary track, popping up in the trio of behind-the-scenes featurettes as well. A deleted scene centers on the boss Jimmy (Jack O’Connell) from late in the story, and the two minutes of bloopers are extra-funny, born as they are from such sinister doings. Presently, the 4K edition is only available in this lovely, single-disc SteelBook package; no HD Blu-ray is included, but there’s a digital copy with all of those same extras.

At turns gruesome and unabashedly cruel, but ultimately daring to suggest some optimism, 28 Years Later: The Bone Temple is a disquieting rumination on the nature of evil and what it actually means to be human.


Movie Details

  • STUDIO: Sony Pictures Home Entertainment
  • FORMAT: Ultra HD 4K Blu-ray (April 21, 2026)
  • THEATRICAL RELEASE YEAR: 2026
  • ASPECT RATIO: 2.39:1
  • HDR FORMATS: Dolby Vision, HDR10
  • AUDIO FORMAT: Dolby Atmos with TrueHD 7.1 core
  • LENGTH: 109 mins.
  • MPAA RATING: R
  • DIRECTOR: Nia DaCosta
  • STARRING: Ralph Fiennes, Jack O’Connell, Alfie Williams, Erin Kellyman, Chi Lewis-Parry




Wilson Audio’s Autobiography Speakers Are Built for Ultra Rich Music Listeners With No Apologies


In 2026, the high-end loudspeaker market has taken a hard turn into what can only be described as flagship meshugas. One week it is Wilson Audio unveiling the Autobiography floorstanding speakers, the next it is Børresen Acoustics pushing even further into the stratosphere, and not long before that, YG Acoustics dropped the Titan in active sub configuration with a nickel finish at a cool $910,000 per pair. At this level, the question is no longer about system matching or room treatment. It is whether you need a bigger listening room or a real estate agent.

Against that backdrop, Wilson Audio believes it has answered the “ultimate speaker” question with the new Autobiography. Standing 81 inches tall and tipping the scale at over 800 pounds per speaker, built from the company’s proprietary V Material, X Material, and S Material composites, this is not a product designed to blend in. It is a statement piece in every sense, and one that demands a closer look.

The Story Behind the Flagship Speaker

An autobiography is a personal account shaped by time, intent, and refinement. That framing is not accidental. With the Autobiography, Wilson Audio is presenting what it sees as a physical expression of its design history and engineering priorities.


The Autobiography draws on more than five decades of work inside the company, from the late David A. Wilson’s early experiments with time alignment, enclosure materials, and resonance control to the current generation’s continued focus on precision and consistency. Every aspect of the speaker, from cabinet geometry to material selection, is positioned as an extension of that ongoing development rather than a reset.

To be clear, this is not a retrospective product. Wilson Audio isn’t repackaging past ideas and calling it a day. The Autobiography builds on a lineage that stretches back to the original WAMM and WATT systems, but the goal here is forward momentum, taking what worked, understanding why it worked, and applying that knowledge with newer materials, tighter tolerances, and more advanced modeling.

If there’s a “story” here, it’s told through engineering choices rather than sentiment. The Autobiography is less about looking back and more about documenting where Wilson Audio believes it stands right now—and how far it can still push the envelope.

wilson-autobiography-ethereal-white-satin-front-rear-angle

The Drivers

At the core of the Autobiography is an entirely new driver complement. This is a five-way loudspeaker built around an MTM array, and none of the drivers are off the shelf or repurposed. Wilson Audio designed each one specifically for this system, with the expectation that they function as a unified acoustic platform rather than a collection of individual parts.

The vertical layout is deliberate. A 7-inch midrange driver anchors the top of the enclosure and another the bottom, while the center section uses a symmetrical MTM crescent array with dual 2-inch midrange drivers flanking Wilson’s CSLS front-firing tweeter. The goal here is controlled dispersion and consistent behavior through the critical midband, where most of the music actually lives.


Bass duties are handled by two dissimilar woofers, a 12-inch and a 15-inch unit, engineered to work together rather than cover separate ranges in isolation. Rounding things out is a rear-firing ambient tweeter intended to add spatial information without calling attention to itself.

On paper, the architecture is about maintaining timing, dynamic range, and tonal balance across the full spectrum. In practice, the intent is straightforward: every driver does its job without stepping on the others, so the system presents music as a single, coherent event rather than a stitched together performance.

CSLS Front Firing Tweeter

wilson-audio-autobiography-tweeter

The Convergent Synergy Laser Sintered (CSLS) front firing tweeter represents the latest evolution of Wilson Audio’s Convergent Synergy platform. It is not a cosmetic update. The CSLS unit incorporates a redesigned rear wave chamber intended to better manage back wave energy and reduce internal reflections that can smear high frequency detail.


The focus here is lowering mechanical and acoustic noise at the source. By improving energy dissipation behind the diaphragm, the tweeter operates with less interference from its own enclosure, which in turn helps preserve low level information and spatial cues.


In practice, the goal is refinement rather than emphasis. The CSLS front firing tweeter is engineered to extend high frequency performance without adding edge or artificial detail, allowing micro dynamics and harmonic texture to come through with greater stability and less effort.

2-inch Midrange

wilson-audio-autobiography-midrange

Flanking the CSLS tweeter are two newly developed 2-inch midrange drivers paired with optimized sonic faceplates. Referred to as the 2-inch MID (Midband Integration Driver), these units are designed to bridge the gap between the speed and articulation of the tweeter and the weight and texture of the larger midrange drivers. Their symmetrical placement supports even dispersion and consistent time alignment through the most sensitive region of human hearing, where small errors are easiest to detect. The intent is straightforward: Wilson Audio is using the 2-inch MID drivers to smooth the midband transition so the system behaves as a single acoustic source, allowing the listener to hear a continuous presentation rather than a collection of individual drivers.

PentaMag 7-inch Midrange Driver

wilson-audio-autobiography-pentamag

Above and below the MTM assembly sit two 7-inch PentaMag midrange drivers. These build on Wilson’s earlier QuadraMag platform, now using five AlNiCo (aluminum, nickel, cobalt) magnets arranged to increase motor strength, improve flux stability, and maintain linearity under dynamic load. The objective is better control through the midrange under real listening conditions, not just at lower levels. In practice, that translates into a presentation that can scale with volume while retaining clarity and tonal consistency, allowing voices and instruments to carry weight and detail without sounding congested or strained.

Rear Firing Tweeter (RFT)

wilson-audio-autobiography-rft

Autobiography incorporates an inverted dome rear firing tweeter designed to enhance spatial depth, ambient retrieval, and harmonic decay. The driver uses aerospace grade unidirectional spread carbon fiber, chosen for its stiffness, consistency, and predictable behavior under load. The diaphragm features a variable thickness profile to reduce inertia while maintaining structural integrity, which is critical for low level detail and decay.

This is a wide dispersion design intended to reproduce ambient information without drawing attention to itself. Its operating range extends from 6 kHz to 22 kHz, focusing on spatial cues rather than primary tonal content. An attenuation control allows adjustment from 0 dB to minus 40 dB, with the maximum setting calibrated to begin at minus 7 dB at 10 kHz relative to the front firing tweeter at typical listening distances.

The goal is flexibility without excess. Wilson Audio gives the user the ability to fine tune how much ambient energy is introduced into the room, allowing for subtle reinforcement of space and decay without compromising the system’s overall balance.

The Woofers

wilson-audio-autobiography-woofers

The low frequency architecture of Autobiography is built around a clear objective: deliver bass that is fast, controlled, and authoritative without losing tonal nuance as it transitions into the midrange. To achieve this, Wilson Audio employs two purpose built woofers, a 12-inch and a 15-inch unit, engineered to operate in parallel as a unified system rather than as separate contributors covering different bands.

Using different sized drivers in this way introduces challenges in timing, pressure loading, and harmonic consistency. Those are addressed through dedicated motor structures, tuned suspension geometries, and a shared enclosure that allows both woofers to function as a single acoustic source. The result is a low frequency foundation that prioritizes speed, control, and scale, delivering extension and impact without excess or overhang that can compromise integration with the rest of the system.


Port Integration 

wilson-audio-autobiography-loudspeaker-port

In addition to the woofers, Autobiography uses a slot type bass reflex port that can be sealed without tools. Adjustments to the port cover and port ring allow the user to fine tune how the system interacts with the room, without adding unnecessary complexity to setup. The cross load flow porting system is designed to provide controlled adjustment of low frequency behavior rather than a fixed response.

In a forward firing configuration, output in the 10 Hz to 75 Hz range is reduced by approximately 1.0 to 1.5 dB, while output between 75 Hz and 130 Hz increases by roughly 1.5 to 2.0 dB. Switching to a rear firing configuration reverses that balance. The intent is straightforward: give the user a practical way to adapt low frequency performance to room boundaries and placement constraints without resorting to external processing.

Addressing Alignment

wilson-audio-autobiography-loudspeaker-red-rock sunset-side

Wilson Audio designs have used different approaches to mechanical alignment over the years, but Autobiography introduces hardware developed specifically for this system, with a focus on improving how the individual driver modules are positioned for time alignment. From the module alignment sleds to the precision slide spikes, each element is designed to function as part of an integrated structure that supports consistent setup and repeatability. The emphasis here is on accuracy, durability, and ease of adjustment without adding unnecessary complexity.


Both the upper and lower 7-inch PentaMag midrange modules are independently adjustable using the alignment sled system. The indicators, gears, and reference scales are clearly marked and can be set using a rotating cam grip, allowing for precise positioning without specialized tools. The MTM crescent frame follows a similar approach. Compared to prior flagships like the WAMM Master Chronosonic and Chronosonic XVX, the goal is a higher degree of control over time domain alignment while making the process more straightforward to implement in an actual listening room.

Connectivity

wilson-audio-autobiography-loudspeaker-connections

Custom Wilson Audio spade connectors are used throughout, designed to mate with the company’s proprietary binding posts to ensure a secure and consistent electrical connection. Machined wire clasps are integrated into the gantry to provide structured cable management and reduce strain on the terminals.

Resistors are mounted to pure copper heatsinks to improve thermal dissipation and reduce the potential for performance drift under load. These components are accessible via a framed resistor mount plate on the rear of the woofer enclosure, allowing changes to be made without tools.


Wilson Audio Autobiography Specifications

Model: Autobiography
Product Type: Floorstanding Loudspeaker
MSRP: $788,000 per pair
Speaker Configuration: 5-Way
Forward Firing Tweeter: One x 1-inch, Dome (2.54 cm)
Upper Midrange: Two x 2-inch (5.08 cm)
Lower Midrange: Two x 7-inch (17.78 cm)
Woofers: One x 12-inch (30.48 cm), One x 15-inch (38.10 cm)
Rear-Firing Tweeter: One x 1-inch, Inverted Dome (2.54 cm)
Front Port: Yes – slot design (can be sealed or opened)
Enclosure: Forward Firing Tweeter – Sealed; Midrange (2 in) – Rear Vented; Midrange (7 in) – Bottom Vented; Woofers – Front or Rear Adjustable Ported
Sensitivity: 89.5 dB @ 1W @ 1m @ 1kHz
Nominal Impedance: 4 ohms (2.1 ohms minimum @ 293 Hz)
Minimum Amplifier Power: 100 watts/channel
Frequency Response: 18 Hz – 36 kHz, +/- 2 dB, Room Average Response [RAR]
Dimensions: Height 81 3/16 inches (206 cm) w/o spikes [variable]; Width 21 1/2 inches (55 cm); Depth 34 7/8 inches (89 cm)
System Weight Per Channel (uncrated): 821 lbs (372.40 kg)
wilson-audio-autobiography-loudspeaker-nz-black-sand-side
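As a back-of-envelope illustration of what two of those spec lines imply together (an estimate for context, not a Wilson Audio figure), the 89.5 dB sensitivity combined with the 100-watt minimum amplifier power works out to a peak of roughly 109.5 dB at 1 meter per speaker, before power compression, room gain, or listening distance are considered:

```python
import math

# Rough SPL estimate from the published specs (illustrative only):
# every 10x increase in amplifier power adds 10 dB over the 1 W sensitivity.
sensitivity_db = 89.5   # dB @ 1 W @ 1 m, from the spec table
amp_power_w = 100       # minimum recommended amplifier power, watts

peak_spl = sensitivity_db + 10 * math.log10(amp_power_w)
print(round(peak_spl, 1))  # 109.5
```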

The Bottom Line 

The Autobiography is clearly positioned as Wilson Audio’s current statement product and is less about chasing a single headline feature and more about consolidating decades of work in materials, driver integration, and time alignment into one platform. What makes it stand out is the level of system control: proprietary cabinet materials, a fully bespoke driver array, and a mechanical alignment scheme that is more refined than anything the company has previously offered.

It also sits in a very specific tier alongside products like the Børresen M8 Gold Signature and the Sonus faber Suprema. These are not incremental upgrades over “normal” high end speakers. They are part of a category where scale, cost, and engineering ambition are pushed to extremes, and where the conversation shifts from value to execution at any cost.

That leads to the real question: who is this for? Not someone building their first serious system, and not even most seasoned audiophiles. Buyers at this level already have the home, the space, and the budget. The issue is not whether you can afford the speakers — it is whether your room can support them. Systems built around speakers like the Autobiography typically involve six figure investments in amplification, sources, cabling, power, and acoustic work. You are not choosing between these and a more modest setup. You are deciding whether your environment can accommodate this level of scale and output without compromise.

Ultimately, the Autobiography and its peers are about removing limits as much as possible. Whether that translates into a meaningful improvement over less extreme systems will depend on setup and room more than anything else.

wilson-audio-autobiography-loudspeakers-crowned-rose-grilles

Price & Availability

The Wilson Audio Autobiography Floorstanding Loudspeakers are priced at $788,000 (US) per pair through Authorized Wilson Audio Dealers.


For more information: wilsonaudio.com



Tech

Taylor Swift Wants to Trademark Her Likeness. These TikTok Deepfake Ads Show Why


Last week, Taylor Swift filed a trio of trademark applications to protect her image and voice. One is meant to cover a well-known photograph of the pop singer holding a pink guitar during a concert on her record-breaking Eras tour, while the two sound trademarks are for simple identifying phrases: “Hey, it’s Taylor Swift” and “Hey, it’s Taylor.”

The move comes as AI deepfakes continue to proliferate across social media. Any individual stands to have their likeness exploited in the creation of nonconsensual AI-generated material; earlier this month, an Ohio man was the first person convicted under a new federal law criminalizing “intimate” visual deceptions of this sort. Celebrities, meanwhile, find themselves at risk of both explicit deepfakes and false endorsements.

A new report from the AI detection company Copyleaks shows that Swift and other stars have recently had their likenesses used in scammy advertisements. Researchers identified a cluster of sponsored videos on TikTok that appeared to show Swift, Kim Kardashian, Rihanna, and others promoting “potentially fraudulent or malicious services,” with the clips making use of what the researchers call “realistic-sounding voices” as well as “textured filters meant to mask some of the flaws in the AI-generated visuals.”

The fake ads show Swift et al. in what seem to be common interview settings—red carpet events or talk show sets. Rather than answering questions, however, the AI-generated celebrities talk up supposed rewards programs in which TikTok users are paid for offering feedback on content served to them.


“I was reading about digital behavior this week and came across a testing feature called TikTok Pay,” says a deepfaked Swift in an ad that uses manipulated footage from an appearance the real Swift made on The Tonight Show Starring Jimmy Fallon in October. “Certain users are being invited to watch videos and submit opinions.” The deepfaked Swift goes on to say that the program is in “limited rollout” for the moment but encourages viewers to see if they qualify for it, adding: “If the page opens for you, don’t overthink it.”

Naturally, anyone who clicks is accepted. These ads eventually lead the user to a third-party service that, despite the TikTok name and logo, has evidently been vibe coded using the AI platform Lovable, whose own branding appears on the page and in the URL. At this point, the researchers say, the user is prompted to begin entering their name and personal information.

While it’s not clear what the advertisers intend to with all the data mined through their celebrity deepfake promotion, scam ads with similar objectives are exceedingly common. Last week, the nonprofit Consumer Federation of America sued Meta, alleging that the tech giant misled Facebook and Instagram users about its efforts to crack down on scam ads—and profited by allowing them to proliferate. On Monday, the US Federal Trade Commission reported that social media scams have surged overall, with Facebook scams accounting for the highest total of financial losses.

It’s no surprise that Swift and her peers are taking legal steps to distance themselves from this fraudulent economy. While Swift hasn’t publicly commented on the reasoning behind her trademark filings, the reputational damage that deceitful deepfakes pose to her billion-dollar brand can hardly be overlooked. The trouble is, they grow more sophisticated by the day.




Tech

NYT Strands hints and answers for Thursday, April 30 (game #788)

Published

on

Looking for a different day?

A new NYT Strands puzzle appears at midnight each day for your time zone – which means that some people are always playing ‘today’s game’ while others are playing ‘yesterday’s’. If you’re looking for Wednesday’s puzzle instead then click here: NYT Strands hints and answers for Wednesday, April 29 (game #787).

Strands is the NYT’s latest word game after the likes of Wordle, Spelling Bee and Connections – and it’s great fun. It can be difficult, though, so read on for my Strands hints.




Tech

The retrieval rebuild: Why hybrid retrieval intent tripled as enterprise RAG programs hit the scale wall


Something shifted in enterprise RAG in Q1 2026. VB Pulse data spanning January through March tells a consistent story: the market stopped adding retrieval layers and started fixing the ones it already has. Call it the retrieval rebuild.

The survey covered three consecutive monthly waves from organizations with 100 or more employees, with between 45 and 58 qualified respondents per month across platform adoption, buyer intent, architecture outlook and evaluation criteria. The data should be treated as directional.

Enterprise intent to adopt hybrid retrieval tripled from 10.3% to 33.3% in a single quarter — even as 22% of qualified enterprise respondents reported having no production RAG systems at all. For data engineers and enterprise architects building agentic AI infrastructure, the data reveals a market in active transition: the RAG architecture most enterprises built to scale is not the one they expect to run by year-end. 

VB RAG study strategic direction

Credit: VentureBeat Pulse survey


Hybrid retrieval has become the consensus enterprise strategy. Unlike single-method RAG pipelines that rely on vector similarity alone, hybrid retrieval combines dense embeddings with sparse keyword search and reranking layers, trading simplicity for the retrieval accuracy and access control that production agentic workloads require.
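As a rough sketch of what that combination looks like in practice (illustrative only: the document IDs are invented, and reciprocal rank fusion is just one common way to merge the two rankings ahead of a reranking stage):

```python
# Hybrid retrieval sketch: fuse a dense (vector) ranking with a sparse
# (keyword/BM25) ranking using reciprocal rank fusion (RRF).

def reciprocal_rank_fusion(rankings, k=60):
    """Merge ranked lists of doc IDs; each doc scores sum(1 / (k + rank))."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Dense search favors semantic similarity; sparse search favors exact terms.
dense_hits = ["doc_a", "doc_c", "doc_b"]
sparse_hits = ["doc_b", "doc_a", "doc_d"]

fused = reciprocal_rank_fusion([dense_hits, sparse_hits])
print(fused)  # ['doc_a', 'doc_b', 'doc_c', 'doc_d']
```

In a production pipeline, the fused candidates would then typically pass through a cross-encoder or other reranker, with access-control filtering applied before anything reaches the model.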

The standalone vector database category is under pressure. Weaviate, Milvus, Pinecone and Qdrant each lost adoption share across the quarter in the VB Pulse data. Custom stacks and provider-native retrieval are absorbing their displaced share.

A growing minority of enterprises are stepping back from RAG altogether — a signal that the market’s maturity narrative has meaningful exceptions.

Organizations that went wide on RAG in 2025 are hitting the same failure point: the architecture built for document retrieval does not hold at agentic scale.


Enterprises that scaled RAG fast are now paying to rebuild it

The two largest intent movements in Q1 are directly connected — enterprises confronting retrieval quality problems at scale, and hybrid retrieval emerging as the consensus answer.

Investment priorities shifted in parallel. Evaluation and relevance testing led budget intent in January at 32.8% and fell to 15.6% by March. Retrieval optimization moved in the opposite direction, from 19.0% to 28.9% — overtaking evaluation as the top growth investment area for the first time. 

VB RAG survey investment priorities

Credit: VentureBeat Pulse survey

Steven Dickens, vice president and practice lead at HyperFRAME Research, described the operational burden enterprise data teams are facing in a VentureBeat interview in March on Oracle’s agentic AI data stack. “Data teams are exhausted by fragmentation fatigue,” Dickens said. “Managing a separate vector store, graph database and relational system just to power one agent is a DevOps nightmare.”


That fatigue shows directly in the platform data. The custom stack rise to 35.6% is not a rejection of managed retrieval — many organizations run both. It is a consolidation response from engineering teams that have hit the limits of assembling too many components.

Not every enterprise has made it that far. The VB Pulse data includes a signal that complicates the market’s overall growth narrative: 22.2% of qualified respondents reported no production RAG by March, up from 8.6% in January. The report attributes this cohort to organizations that have “not yet committed to any retrieval infrastructure, or have paused programs” — concentrated in Healthcare, Education and Government, the same sectors showing the highest rates of flat budgets.

Standalone vector databases are losing the adoption argument but winning the reliability one

Recent reporting by VentureBeat illustrates why the dedicated retrieval layer still matters in production. 

Two enterprises building on Qdrant show why purpose-built vector infrastructure still wins in production.


&AI builds patent litigation infrastructure and runs semantic search across hundreds of millions of documents. Grounding every result in a real source document is not optional — patent attorneys will not act on AI-generated text. That requirement makes the architectural choice clear.

“The agent is the interface,” Herbie Turner, &AI’s founder and CTO, told VentureBeat in March. “The vector database is the ground truth.”

GlassDollar, a startup that helps Siemens and Mahle evaluate startups, runs an agentic retrieval pattern across a corpus approaching 10 million indexed documents. A single user prompt fans out into multiple parallel queries, each retrieving candidates from a different angle before results are combined and re-ranked. That query volume and precision requirement is what drove the choice of purpose-built vector infrastructure.
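That fan-out pattern can be sketched in miniature. This is a hypothetical illustration, not GlassDollar's code: a naive term-overlap scorer stands in for real vector search, and counting how many sub-query pools surface each candidate stands in for a real reranker.

```python
# Fan-out retrieval sketch: one prompt expands into several sub-queries,
# each runs in parallel, and the candidate pools are merged and re-ranked.
from concurrent.futures import ThreadPoolExecutor

def retrieve(query, corpus):
    """Stand-in retriever: rank docs by naive term overlap with the query."""
    terms = set(query.lower().split())
    scored = [(len(terms & set(doc.lower().split())), doc) for doc in corpus]
    return [doc for score, doc in sorted(scored, reverse=True) if score > 0]

def fan_out_search(prompt, corpus, expand):
    sub_queries = expand(prompt)  # e.g. rephrasings, facets, keyword variants
    with ThreadPoolExecutor() as pool:
        pools = list(pool.map(lambda q: retrieve(q, corpus), sub_queries))
    # Re-rank: here, simply by how many sub-query pools found each candidate.
    counts = {}
    for candidates in pools:
        for doc in candidates:
            counts[doc] = counts.get(doc, 0) + 1
    return sorted(counts, key=counts.get, reverse=True)

corpus = ["battery startup in munich", "ev charging network", "artisan bakery chain"]
hits = fan_out_search("mobility companies", corpus,
                      expand=lambda p: ["battery startup", "ev charging"])
print(hits)  # ['battery startup in munich', 'ev charging network']
```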

“We measure success by recall,” Kamen Kanev, GlassDollar’s head of product, told VentureBeat in March. “If the best companies aren’t in the results, nothing else matters. The user loses trust.”
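Recall as Kanev uses it has a simple concrete form: recall@k is the fraction of known-relevant items that appear in the top k results. A minimal sketch, with invented company names:

```python
def recall_at_k(retrieved, relevant, k):
    """Fraction of relevant items found in the top-k retrieved results."""
    if not relevant:
        return 0.0
    hits = len(set(retrieved[:k]) & set(relevant))
    return hits / len(relevant)

retrieved = ["acme", "globex", "initech", "umbrella"]   # ranked results
relevant = {"acme", "initech", "hooli"}                 # ground-truth set

score = recall_at_k(retrieved, relevant, k=4)
print(round(score, 2))  # 0.67: two of the three relevant companies surfaced
```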


The VB Pulse data shows that framing — retrieval as ground truth rather than feature — is gaining traction across the broader enterprise market, even as standalone vector database adoption declines. 

The reasons enterprises cite for needing a dedicated vector layer shifted significantly across Q1. In January the top reasons were access control complexity (20.7%) and retrieval precision (19.0%). By March, operational reliability at scale had surged to 31.1% — more than doubling and overtaking everything else. Enterprises are no longer keeping vector infrastructure primarily for precision. They are keeping it because it is the part of the stack they can rely on when query volumes scale.

How enterprises are redefining what good retrieval means

How enterprises judge their retrieval systems shifted notably across Q1 — and the direction of that shift points to a market getting more sophisticated about what good retrieval actually means.

In January, response correctness dominated evaluation criteria at 67.2% — far above anything else. By March, response correctness (53.3%), retrieval accuracy (53.3%) and answer relevance (53.3%) had converged exactly. Getting the right answer is no longer enough if it came from the wrong document or missed the context of the question.


Answer relevance was the only criterion that rose across the quarter, gaining five percentage points. It is also the hardest to measure — whether the retrieved context is actually the right context for that specific question requires purpose-built evaluation infrastructure, not just pass-or-fail correctness checks. Its rise signals that a meaningful share of enterprise buyers have moved past basic RAG testing entirely. 

VB RAG survey top evaluation

Credit: VentureBeat Pulse survey

The market’s verdict: RAG isn’t dead. The original architecture is

The “RAG is dead” narrative had real momentum heading into 2026. It rested on two claims. The first: that long-context windows — models capable of processing hundreds of thousands of tokens in a single prompt — would make dedicated retrieval unnecessary. The second: that agentic memory systems, which store what an agent learns across sessions rather than retrieving it fresh each time, would absorb the knowledge access problem entirely.

The VB Pulse data is the enterprise market’s answer to the first claim. The long-context-as-dominant-architecture position collapsed from 15.5% in January to 3.5% in February before partially recovering to 6.7% in March. January’s sample was heavily weighted toward Technology and Software respondents — the segment most exposed to long-context model announcements in late 2025. As the sample diversified, the position evaporated.


On the memory question, Jonathan Frankle, chief AI scientist at Databricks, framed the architecture clearly in a March interview with VentureBeat: a vector database with millions of entries sits at the base of the agentic memory stack, too large to fit in context. The LLM context window sits at the top. Between them, new caching and compression layers are emerging — but none of them replace the retrieval layer at the base. New agentic memory systems like Hindsight, developed by Vectorize, and observational memory approaches like those in the Mastra framework address session continuity and agent context over time — a different problem than high-recall search across millions of changing enterprise documents.

The most consequential signal: the share of respondents not expecting large-scale RAG deployments by year-end grew from 3.4% to 15.6% — nearly 5x. That is not a verdict against retrieval. It is a verdict against the retrieval architecture most enterprises built first.

VB RAG survey expected dominant architecture

Credit: VentureBeat Pulse survey

The retrieval rebuild is not optional

The retrieval rebuild is the cost of scaling RAG without first deciding what architecture could actually support it.


If your organization is among the 43.1% that entered Q1 planning to expand RAG into more workflows, the VB Pulse data suggests that plan has already changed for many of your peers — and may need to change for you. Hybrid retrieval is the consensus destination. Custom stack growth to 35.6% reflects teams building retrieval infrastructure around requirements that off-the-shelf products do not fully address.

RAG is not dead. The architecture most enterprises used to implement it is. The data suggests the rebuild is not a future decision. For 33% of enterprises, the rebuild is already the stated priority.



Tech

Emergency First Responders Say Waymos Are Getting Worse


Emergency first-responder leaders told federal regulators in a private meeting last month that they were frustrated with the performance of autonomous vehicles on their streets—that city firefighters, police officers, EMTs, and paramedics are forced to spend time during emergencies resolving issues with frozen or stuck cars. One fire official called them “a safety issue for our crews as well as the victims.” WIRED obtained an audio recording of the meeting.

Officials from San Francisco and Austin, where Waymo has been ferrying passengers without drivers for more than a year, said the vehicles’ performance is getting worse. “We are actually seeing something interesting: backsliding of some things that had improved upon,” Mary Ellen Carroll, the executive director of San Francisco’s Department of Emergency Management, told officials with the National Highway Traffic Safety Administration (NHTSA), which oversees self-driving vehicle safety in the US. “They are committing more traffic violations.”

“We’ve seen some behavior we haven’t seen in a few years … Waymo is frequently now blocking our fire stations from access,” added Chief Patrick Rabbitt, the head of the San Francisco Fire Department. “Their default is to freeze.” The situation can prevent firetrucks from responding to emergencies in a “timely and appropriate” way, he said.

In Austin, first responders have been frequently stymied by Waymos “freezing up,” said Lieutenant William White, head of Highway Enforcement Command at the Austin Police Department. White said that, contrary to what Waymo had told first responders, the vehicles often fail to recognize or respond to officers’ hand signals, which can lead to cascading delays during emergencies or unusual road incidents.


“I believe the technology was deployed too quickly in too vast amounts, with hundreds of vehicles, when it wasn’t really ready,” White said. NHTSA did not respond to WIRED’s request for comment.

The complaints come as Waymo embarks on an ambitious expansion across the US and the world. Today, the company offers driverless rides in parts of 10 US cities, with plans to launch service in 10 more before the end of the year, including London. Waymo said last month that it’s now providing 500,000 paid rides weekly—a figure that’s still dwarfed by human-powered ride-hail services (Uber provides some 400 times that number weekly) but has grown tenfold since last year.

But these comments from cities where the service is already operating threaten to slow the rollout of driverless technology, which, according to Waymo’s data, reduces serious crashes compared to human-driven cars. Waymo is already facing political opposition, especially from organized labor, in several dense, blue, and potentially lucrative cities, including Boston, New York City, Seattle, and Washington, DC.

In a statement, Waymo spokesperson Julia Ilina wrote: “We deeply value our partnership with first responders and our shared commitment to safety. Their ongoing feedback has been instrumental in driving impactful improvements to the Waymo service.” The company says it has conducted in-person training for more than 35,000 emergency responders across the country.


Public Comment Periods

The comments made in the private meeting are blunter than what government officials have generally said in public. But they reflect long-simmering and sometimes vocal frustrations expressed by city leaders since at least late last year. Since autonomous vehicle operations are regulated in California and Texas by state rather than city officials, local first-responder departments and those who represent them can generally only request that developers like Waymo make specific changes to their operations.

On Wednesday, Austin first responders appeared before the City Council to discuss Waymo’s response to an incident last month in which a driverless vehicle blocked, for two minutes, an ambulance responding to a shooting in the city’s downtown that killed three people and injured at least 14. Though officers were able to connect quickly with Waymo operators to move the vehicle, they said that in the past it had taken up to three minutes to reach a remote agent. They reiterated that Waymos don’t always respond well to hand signals, especially ones from police mounted on motorcycles.

Waymo declined to attend the meeting, and two front-row chairs labeled “RESERVED FOR: WAYMO” remained empty throughout the two-hour session.



Copyright © 2025