Heat damage is the quiet enemy of healthy hair, and most dryers on the market will keep making that problem quietly worse with every single morning use.
That engineering starts with the V9 digital motor, which spins at up to 110,000rpm to generate a high-pressure jet of controlled air that dries hair quickly without ever needing to rely on extreme heat.
What makes that possible is the intelligent heat control system, which measures air temperature over 40 times per second and adjusts it continuously, so your hair never gets more heat than it actually needs.
The result is a dryer that genuinely protects natural shine rather than stripping it away, which is a meaningful distinction if you colour-treat your hair or already deal with dryness and breakage.
The Dyson Supersonic also has three speed settings and four heat settings, including a cold shot, giving you precise control over the finish whether you want a smooth blowout, added volume, or a more textured result.
The three included attachments extend that versatility further: the concentrator focuses airflow for a sleek, directed finish, the Gentle Air attachment dials down intensity for finer or more delicate hair, and the Flyaway Tool uses the Coanda effect to lift stray hairs and smooth them flat.
That last attachment is the one that genuinely separates the Dyson Supersonic from cheaper alternatives, delivering the kind of polished, salon-quality finish that would otherwise require a separate styling tool on top.
This edition comes in Prussian Blue and Rich Copper with an aluminium build, and the package is backed by a two-year limited warranty for peace of mind.
The Dyson Supersonic is the sort of purchase that tends to pay for itself over time, and at £218 the upfront cost is the lowest it has been in a while, making this a compelling moment for anyone who has been holding off.
Complaining about automagic enforcement of copyrights on the internet through copyright bots is so old hat at this point as to be cliché. But there is a very good reason for that. Even in the narrower realm of PC gaming, we have seen examples of how copyright enforcement has ensnared totally innocent games, or fallen victim to clear fraud and abuse, resulting in the delisting of those games from platforms like Steam. This often happens in the all-important early release windows for these games and, to be sure, it’s smaller indie studios that are hurt the most by this failed process. It’s incredibly frustrating to watch all of this in the macro and then witness the major platforms do absolutely nothing about it.
This, in fact, despite the absolutely absurd situations all of this produces. Take the demo for Wired Tokyo 2007 that was supposed to be released recently, but wasn’t, all because indie dev Daikichi dared to use intellectual property existing outside of the game. Except, of course, that said IP was the property of Daikichi itself.
As reported by VGC via GameSpark, a post to X from the developer lays out the situation, in which they explain that (via machine translation) “the motif of a board game I personally created in the past, placed within the game Wired Tokyo 2007, is getting caught by Steam’s side as third-party intellectual property.” As a result, Valve has blocked the release of the game’s promised demo which is currently listed as “Coming soon.”
The copyright-violating aspects, as claimed by Valve, include “dinosaur themed card-games shown on the environment within your app in gameplay,” which refers to a board game called Dinostone, created by one Daikichi. In Daikichi’s response, they link to the Board Game Geek page for their table-top game, which lists the same developer name.
“It’s not a third party,” says Daikichi on X. “It’s just me wanting to use my own intellectual property rights myself.” They add, “I have no idea what the meaning of this is at all.”
In an incredible response from Valve, the platform is demanding Daikichi provide some form of documented agreement to license the images used in the game, or else provide a documented letter of authorization from an attorney in order to get the demo approved for release. This is a “papers, please!” moment in video gaming, and it makes no sense.
The situation gets even more Monty-Python-esque from there. Daikichi decided their best course of action was to write themselves a letter, signed by themselves, authorizing themselves to use the assets they had created in the game they had also created.
Then over the weekend, rather wonderfully, the developer says they “created a signed document granting myself permission to use all of my created works, including board games, and resubmitted it for the demo review.”
We’ve crossed the Rubicon, folks. And now Valve is in the uncomfortable situation of having to choose between accepting this “evidence” of IP ownership when the evidence is literally a dev writing himself a letter like a crazy person, or else Valve refuses to accept it and a developer remains unable to release a game demo because they used their own property within it.
It’s a choice that only has two wrong outcomes. Sympathy is in short supply, however, as this is the result of the guilty-first process Valve has come up with for copyright takedowns on its platform.
The US Commerce Department has reportedly given 10 Chinese firms, including Alibaba, Tencent, TikTok parent company ByteDance, retailer JD.com, Lenovo and Foxconn, permission to purchase NVIDIA’s second-best H200 processors. According to Reuters, however, NVIDIA has yet to make a delivery.
In December 2025, the US government allowed NVIDIA to sell H200 processors to approved customers in China, after blocking its sales due to concerns that it would aid the development of the country’s military technologies. China agreed to import several hundred thousand H200 chips in January, Reuters reported at the time, with the first shipments being meant for three unnamed Chinese internet companies.
The H200 is one of the company’s most powerful AI chips, second only to its high-end B200 processors. While the B200 is faster, the H200 is still a lot more capable than the H20, which was cleared for the Chinese market half a year earlier than the H200. The companies approved to receive the H200 can buy up to 75,000 chips either directly from NVIDIA or from intermediaries, but the firms reportedly pulled back from making a purchase after getting guidance from the Chinese government.
Reuters says China’s guidance was triggered by changes on the US side, though those changes remain unclear. It’s also worth noting that local Chinese companies have been developing their own chips after the US blocked exports to their country. China has been encouraging local firms to use them in order to stimulate the homegrown chip industry. The Chinese government is also reportedly worried that the H200 chips sold to its companies have hidden vulnerabilities. That’s because in order for the US government to legally get its 25 percent cut from H200 sales, the chips have to pass through US territory.
NVIDIA CEO Jensen Huang, who previously warned the US government that its export restrictions are making his company lose its hold on China, recently flew to Beijing with President Donald Trump to attend a summit with Chinese President Xi Jinping. It remains to be seen whether the trip will lead to China giving local companies the green light to purchase the H200.
ForkLift combines the power of a robust file manager with versatile file transfer capabilities, seamlessly bridging the gap between your local and remote files. Whether you’re dealing with cloud services or more traditional FTP and SFTP servers, ForkLift streamlines your file management and transfers, making them smoother than ever.
ForkLift 4 is your all-in-one solution for efficient file management and seamless file transfers, across multiple platforms and services.
We understand that accessing, organizing, synchronizing, and sharing your files should be hassle-free, especially as the landscape of file sharing evolves with the increasing importance of cloud service providers such as Dropbox, Amazon S3, Google Drive, and OneDrive.
Is ForkLift worth it compared to alternatives like Finder or Nimble Commander?
Many users appreciate its dual-pane layout, batch operations, archives support, and advanced sync features. Compared to rivals, ForkLift offers better transfer management and preview behavior. Still, some feel the interface isn’t as lightweight or intuitive as others.
Is it easy to switch from Finder to ForkLift as the default file manager?
Yes. By running a simple defaults write -g NSFileViewer -string com.binarynights.ForkLift command in Terminal, you can set ForkLift as the system default for opening folders.
Which protocols can ForkLift connect to?
ForkLift connects to SFTP, FTP, WebDAV, Amazon S3, Backblaze B2, Google Drive, OneDrive, Dropbox, Rackspace Cloud Files, SMB, AFP, and NFS remote volumes. You can manage your files across multiple servers at a time and even copy between them with drag and drop.
Is ForkLift free?
You can download a time-limited trial of ForkLift, but you will need to buy a license in order to keep using it.
Features
Remote Connections
Connect to SFTP, FTP, WebDAV, Amazon S3, Backblaze B2, Google Drive, OneDrive, Dropbox, Rackspace Cloud Files, SMB, AFP, and NFS remote volumes. Manage your files efficiently across networks: connect to multiple servers simultaneously and even copy between them with drag and drop.
Sync
Compare local or remote source and target folders, identifying matching, modified, new, and deleted files. Synchronize them one-way or two-way with a single mouse click, or save the comparison as a favorite. Analysis is up to 20x faster than in ForkLift 3.
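The comparison step can be pictured with a short Python sketch (a hypothetical illustration of the idea, not ForkLift's actual implementation): walk a source and a target folder, hash each file, and bucket the paths into matching, modified, new, and deleted.

```python
import hashlib
import os

def compare_folders(source, target):
    """Classify files as matching, modified, new, or deleted (sketch)."""
    def listing(root):
        files = {}
        for dirpath, _, names in os.walk(root):
            for name in names:
                path = os.path.join(dirpath, name)
                rel = os.path.relpath(path, root)
                with open(path, "rb") as f:
                    # Hash contents so identical names with different data count as modified
                    files[rel] = hashlib.sha256(f.read()).hexdigest()
        return files

    src, dst = listing(source), listing(target)
    return {
        "matching": sorted(p for p in src if p in dst and src[p] == dst[p]),
        "modified": sorted(p for p in src if p in dst and src[p] != dst[p]),
        "new":      sorted(p for p in src if p not in dst),   # only in source
        "deleted":  sorted(p for p in dst if p not in src),   # only in target
    }
```

A real sync tool would also compare timestamps and sizes before hashing, but the four buckets above are the essence of the analysis.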
Favorite Paths
Experience enhanced efficiency for remote destinations. Think of it like having favorites within favorites – an organized way to keep track of paths you frequently use and want quick access to.
Preview
The preview panel shows you useful information about the selected file. Playback audio and video files, inspect images, PDFs and other popular document types. Quick edit text files in place, both on local drives and remote servers.
Activity View
Whether you’re copying, renaming, deleting, compressing, or handling other tasks, this feature lets you see exactly what’s going on. No more guesswork – watch your tasks progress in real-time and stay in control of your file management action.
Quick Open
Easily access your favorites, devices, menu commands, open a selected file with a preferred application, or apply a previously saved Multi Rename preset on selected files or folders.
Log View
Get valuable insights into your file management activities and their results, all in one easy-to-access place.
Favorite Sync
ForkLift will keep all your favorites synchronized across multiple computers via iCloud.
Dropbox Support
Copying Dropbox links to files located in your Dropbox directory is just a right-click away.
Transfers
Reorder transfers, set conflict management rules, error handling, limit download and upload bandwidth.
Tags
Organize your documents and files with tags: add, edit, remove, search, or filter them within ForkLift.
Sync Browsing
Given two identical folder structures, browse in one pane and let ForkLift mirror your navigation in the other pane.
Tabs
Open different folders in the same pane, instead of separate windows.
Search
Search and filter by name, extension, kind, tags or content, even on remote servers.
Quick Select
Select files by typing a filename, an extension, or a tag and add them or exclude them from the selection.
Remote Editing
Set your preferred editor in ForkLift to edit remote files and we take care of uploading your changes as you save.
Command Line Tools
Extend ForkLift’s capabilities by invoking command line tools and applying them with shortcuts.
Themes
A seamless way to personalize your interface. Choose from predefined themes that suit your taste, or let your creativity shine by crafting your very own themes.
App Deleter
ForkLift comes with an application deleter to remove the last morsels of an application you want to uninstall.
iCloud Support
Seamlessly access and manage your iCloud files through ForkLift.
Archive Management
Browse local and remote archives as if they were ordinary folders. You can even Quick Look, search and filter.
Keyboard Control
Control every operation straight from the keyboard and customize it to your preferences.
Multilingual
ForkLift speaks English, German and Hungarian. More languages are coming soon!
Workspaces
Save different layouts with opened tabs and locations and load what you need at the moment.
Git Support
ForkLift knows git and will show you the status of individual files. You can add, commit, push, and pull.
Open in Terminal
An absolute must for power users. Open a Terminal, iTerm, Hyper, Kitty or Warp window at your current local path.
Hidden Files
Make hidden files and folders visible easily by using a shortcut or pressing a button in the toolbar.
Share
Share gives you an easy way to share all kinds of documents and other files instantly.
Default File Viewer
Set ForkLift as the default file viewer and almost every app will point to ForkLift instead of Finder.
File Compare
Compare two text or image files with Xcode’s FileMerge, Kaleidoscope, Beyond Compare, or Araxis Merge.
What’s New
External Drive Tags, Comments, and Checksum Improvements
We recently introduced the ability to calculate checksums for files, making it easier to verify that files have not been altered or corrupted during transfers and that two files are truly identical. Checksums can be calculated for multiple files at the same time, with the file names and checksums displayed in a dedicated window.
In ForkLift 4.6.2, we improved the usability of the Checksum Window by adding support for selecting multiple items at once. You can now easily copy selected items or use Command-A and Command-C to copy all entries. The copied content is exported in CSV format, making it easy to paste into spreadsheet applications such as Microsoft Excel or Apple Numbers.
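The underlying operation is easy to picture in Python. The sketch below (a hypothetical illustration, not ForkLift's code; it assumes SHA-256, since the release notes don't name the algorithm) hashes multiple files and emits "name,checksum" rows as CSV text, ready to paste into a spreadsheet.

```python
import csv
import hashlib
import io

def checksum_csv(paths, algorithm="sha256"):
    """Hash each file and return 'name,checksum' rows as CSV text (sketch)."""
    out = io.StringIO()
    writer = csv.writer(out)
    for path in paths:
        h = hashlib.new(algorithm)
        with open(path, "rb") as f:
            # Stream in 64 KiB chunks so large files don't need to fit in memory
            for chunk in iter(lambda: f.read(1 << 16), b""):
                h.update(chunk)
        writer.writerow([path, h.hexdigest()])
    return out.getvalue()
```

Running the same function over a file before and after a transfer and comparing the two digests is exactly the integrity check the checksum feature is meant to enable.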
Changes to the Preview API on macOS Sequoia and Tahoe
In version 4.6.1, we introduced a new Preview API to support the new style of folder icon previews. Unfortunately, this API currently has several limitations that cannot be addressed by us directly. We have reported these issues to Apple and hope they will be resolved in future macOS updates.
The new style folder icons are only available on macOS Tahoe, so in ForkLift 4.6.2 we have disabled the new Preview API for users on macOS Sequoia because it caused multiple issues with icon previews on that version of macOS. As a result, users on macOS Sequoia will no longer experience the negative side effects of the new API.
Users on macOS Tahoe will continue using the new API and will still be able to see colored folders and icons added to folders in Finder. However, the API currently does not correctly handle certain folder icon customizations made in System Settings, which means some custom folder appearances may not be represented accurately in ForkLift. Unfortunately, this limitation is outside of our control.
We have also fixed a possible crash introduced in the previous version, along with several other possible crashes and hangs, and some other fixes and improvements.
Full List of changes:
Improvements
Displays tags on external drives
Comments added in ForkLift now show up in Finder as well
Adds an option to select multiple items in the Checksum Window; the copied list is in CSV format, so it can be easily pasted into spreadsheet applications
Adds “Edit” and “Hide from Sidebar” options to the context menu of Sync favorites in the Sidebar
Fixes
Fixes an issue with the movement of favorites in the sidebar that pointed to the same server
Fixes a possible hang caused by recent items in the sidebar
Fixes a possible crash in List View
Displays a folder icon on remote locations when the folder name contains a dot
Displays the Kind of folders correctly when the folder name contains a dot
Fixes an issue that made it impossible to delete the last remaining digit from the Time Offset Correction field in the Sync Window
Fixes a possible hang after a tab or window was closed during an unfinished search operation
Removes the new Preview API on macOS Sequoia introduced in version 4.6.1 due to several issues with icon previews
Previous Release Notes:
ForkLift 4.6.1 is available – Tahoe folder colors, improved memory usage, and restored PDF preview
Folder icon support on macOS Tahoe and performance improvements
Finder on macOS Tahoe introduced a big change by allowing users to customize folder icons. The last applied color tag now determines the folder color, and it is also possible to add icons to folders to make them stand out.
With ForkLift 4.6.1, we have updated how folder icons are handled. ForkLift now inherits folder colors from Finder, and when you assign a color tag in ForkLift, the folder color updates accordingly. ForkLift also displays custom icons that were added in Finder. However, due to missing system APIs, adding custom folder icons directly in ForkLift is not possible at this time.
We have also fixed multiple possible memory leaks and improved overall memory usage. This results in a more stable and efficient experience.
PDF preview fixes and iCloud tag improvements
In addition, this version includes several smaller improvements. You can now edit file favorites directly from the sidebar, use the Tab key to autocomplete tags more reliably, and benefit from improved information display in the Status bar and in the Preview pane.
We were also able to restore the full functionality of the PDF viewer in the Preview pane. It is now again possible to customize preview options through the right-click context menu, and text selection inside PDF previews has been re-enabled.
It is now also possible to remove the last remaining tag on iCloud Drive, and we have resolved an issue where pressing the Tab key during tag autocompletion could cause the tag to disappear.
New
ForkLift now colorizes folder icons on macOS Tahoe the same way as Finder, inheriting and applying the assigned colors
ForkLift displays customized folder icons set in Finder, however, customizing folder icons directly in ForkLift is not supported
Improvements
Improves the display of aggregated date information in the Preview pane when multiple files are selected
Adds an alert when iCloud Drive is busy and interaction is not possible, to clarify why an action cannot be executed
Adds an option to edit file favorites from the context menu in the Sidebar
Improves memory usage
Fixes
Fixes an issue that made the file view jump in List View after renaming or deleting files when Group by was enabled
Fixes multiple possible memory leaks
Fixes an issue that didn’t show the lock icon on locked folders when icon preview is enabled (the lock icon doesn’t show up on files starting from this version)
Fixes an issue in the Status bar, which didn’t update the displayed info correctly after folders were expanded and collapsed
Fixes an issue with PDF previews in the Preview Pane, introduced in version 4.3.5, which made it impossible to select text inside the preview and to use the right-click context menu to customize the preview options. This version restores the previous functionality
Fixes an issue that made it impossible to remove the last tag on iCloud Drive when there was only one tag remaining
Fixes an issue where pressing the Tab key while autocompleting a tag made the tag disappear
Two powerful new features in ForkLift 4.6
Exciting news! ForkLift 4.6 introduces two major features that power users will love, especially those who rely heavily on tags and those who create custom tools.
Smarter tagging and autocompletion
The first major improvement in ForkLift 4.6 focuses on tags.
Working with tags in ForkLift has traditionally not been as seamless as in Finder, mainly due to macOS limitations for third-party developers. Unlike Finder, third-party apps cannot access and edit tag data in the same way, which means features like native tag autocompletion are not directly available.
In earlier versions, we introduced the Tags section in the Settings to allow users to add their own tags to the sidebar, making them easily accessible. With ForkLift 4.6, we are expanding this feature and making it more powerful.
ForkLift now uses the tags listed in the Settings for autocompletion in the Tags section of the Preview Pane or in the info window. As you type, it suggests matching tags from your predefined list.
To take full advantage of this feature, you should add the tags you use most frequently to the Settings. This ensures they are always available for quick and consistent tagging.
Better control over sidebar tags
Since you may not want to display every tag in the sidebar, we have made it easier to manage their visibility. To hide a tag, right-click it in the Settings and select “Hide from Sidebar”. To show it again, right-click and select “Show in Sidebar”.
If you want to manage the visibility of multiple tags at once, it is easier to use the Sidebar Editor. Add the tags you want in the Settings, then close the Settings window. From the menu, select View > Show Sidebar Editor. When the Sidebar Editor appears, you can quickly hide tags by unchecking them. Once you are done, you can close the Sidebar Editor again from the View menu.
This gives you full control over which tags are visible without affecting their availability for autocompletion.
Assign tags directly from the sidebar
The sidebar is now more than just a way to filter files. You can still click a tag to view all associated items, but now you can also assign tags directly from the sidebar:
Select one or more files or folders
Drag them onto a tag in the sidebar
Drop them to assign that tag
This makes tagging faster and more intuitive, especially when working with multiple files.
Improved tag input field
We have also improved the tag input field in the sidebar. Not only does it now include a dropdown to suggest matching tags as you type, but it can also expand to display all assigned tags, making them easier to view and manage. Previously, this field had limited space and could not show all tags at once when working with a large number of tags.
Custom tools, now one click away
The second major improvement focuses on tools.
The Tools feature has always been one of the more powerful, yet somewhat hidden, features of ForkLift. It allows you to create your own tools in the settings using zsh scripts, so you can execute complex actions and workflows more easily.
Previously, these tools could only be executed from the Commands menu, and later they became available in the right-click context menu. Now, we are taking this a step further:
You can add your custom tools directly to the toolbar and execute them with a single click.
Customize your tool icons
When creating or editing a tool, you can assign an icon to represent it both in the Tools section and in the toolbar. By default, new tools use a gear icon, and existing tools are also assigned this icon automatically.
You can easily change it:
Click the icon in the top-left corner while editing or creating a tool
Choose a new icon from the pop-up window
This makes it much easier to visually distinguish your tools at a glance.
How to add tools to the toolbar
Adding your tools to the toolbar is simple, but it happens in two steps:
1. Enable the option
Go to the Tools tab in the Settings
While creating or editing a tool, enable “Show in Toolbar”
2. Add it manually to the toolbar
Right-click the toolbar
Select “Customize Toolbar…”
Find your enabled tool in the customization tray
Drag it into the toolbar
Note: Enabling “Show in Toolbar” does not automatically add the tool; it only makes it available for selection.
Full List of changes
New
Tags autocompletion feature in the Preview Pane and Info Window based on the list of tags added under ForkLift > Settings > Tags
Option to assign a tag to a file by dropping a file onto a tag in the sidebar
Option to add custom tools created under ForkLift > Settings > Tools to the toolbar
Option to hide and display tags in the sidebar under ForkLift > Settings > Tags through the right-click context menu
Improvements
Improved expandable Tags field in the Preview Pane, now able to display all tags
Option to exit the Tags field using the Esc key
Option to select multiple tools in the Tools section of the Settings
Fixes
Fixes an issue that indicated it was possible to add remote files as favorites to the sidebar by drag and drop
Fixes an issue in Column View where, after disconnecting from a remote location, the location did not change back to the starting directory
Fixes an issue on macOS Tahoe where hitting Esc in the tool editor while adding a new item did not cancel the operation and instead added the new item
CAD: Cost Anomaly Detection or Create Astounding Debt?
The world of AI is exciting, but there are plenty of expensive pitfalls ready to catch out the unwary, as one Register reader found when taking Anthropic’s Claude Opus for a spin courtesy of Amazon Bedrock.
Our reader managed to run up Bedrock charges totaling $30,141.33 in April 2026, despite using AWS Cost Anomaly Detection (CAD) to avoid any nasty surprises. Thirty-three days before our reader’s first use of Bedrock, the threshold in CAD was set to “Absolute ≥ $100 AND Relative ≥ 40%” so alerts should have fired if things got too spendy.
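The compound threshold the reader configured can be written as a simple predicate. The sketch below is a hypothetical model of the rule "Absolute ≥ $100 AND Relative ≥ 40%", not AWS's actual implementation; the point is that both conditions must hold before an alert fires.

```python
def anomaly_alert(expected, actual, abs_threshold=100.0, rel_threshold=0.40):
    """Fire only when the overrun exceeds BOTH thresholds (the AND rule)."""
    delta = actual - expected
    if expected <= 0:
        # No spending baseline to compute a percentage against: absolute test only
        return delta >= abs_threshold
    return delta >= abs_threshold and (delta / expected) >= rel_threshold
```

Note the AND makes the rule conservative: a $150 overrun on a $1,000 baseline is only 15% relative, so it would not alert. None of that matters, of course, if the charges never reach the detector in the first place, which is what happened here.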
As for which services to monitor, our reader chose “AWS Services,” which Amazon says “tracks all AWS services automatically.” Except it apparently doesn’t, at least not in the way our reader expected. The problem is that AWS Marketplace isn’t supported by CAD, so costs incurred wouldn’t trigger an alert.
And how are Anthropic Claude models billed? Through the AWS Marketplace.
After burning through our reader’s AWS Activate credits (totaling $8,026.54 in this case), Amazon started charging for model inference on the Bedrock Marketplace, racking up $30,141.33, plus another $675.07 in AWS infrastructure charges, without a peep from the CAD service.
“The credits masking made it worse,” our reader told us. “AWS Activate credits did cover the first ~$8k of charges, which meant the Marketplace billing was silently working for weeks before the credits ran out. There was no notification when credits were exhausted – the charges simply started accumulating as invoiced amounts.”
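The masking effect is easy to model. The toy sketch below (a hypothetical illustration, not AWS billing logic) offsets each charge against remaining credits and invoices only the uncovered remainder, which is why nothing surfaced until roughly $8k of credits were gone.

```python
def apply_credits(charges, credits):
    """Offset charges against credits; return (invoiced amounts, credits left)."""
    invoiced = []
    for amount in charges:
        covered = min(amount, credits)
        credits -= covered
        # Only the portion credits did not cover shows up on the invoice
        invoiced.append(round(amount - covered, 2))
    return invoiced, round(credits, 2)
```

With $8,026.54 in credits, the first weeks of Marketplace charges invoice at $0.00, so a customer watching the invoice sees nothing wrong until the credits are exhausted and the full run rate lands at once.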
The first warning that things were mounting up came in the form of a surprisingly large invoice.
Corey Quinn, a cloud economist at the Duckbill Group and occasional contributor to this publication, told The Register: “It’s unintuitive that Bedrock model spend is Marketplace unless you’re entirely too familiar with AWS.”
Quinn told us he does most of his Claude inference directly with Anthropic to take advantage of the company’s real-time billing, alerts, cutoffs, per-key limits, and so on. The approach has avoided some potentially expensive mistakes.
As far as AWS is concerned, the lack of CAD support for AWS Marketplace charges makes it all too easy to run up a big bill without realizing it, particularly when it comes to AI usage.
This could be regarded as a cautionary tale. If one digs deeply enough into the AWS documentation on CAD, there is a line that warns that AWS Marketplace is an unsupported service. However, it isn’t clear that Claude on Bedrock is billed through the AWS Marketplace. The fact that Marketplace billing bypasses the monitoring tools compounds the issue, and could easily leave a customer getting an unpleasant surprise at invoice time.
An AWS spokesperson told The Register: “AWS offers multiple tools to help customers manage spend, including AWS Budgets, which covers Amazon Bedrock spend on AWS Marketplace and other services. As noted in our documentation, AWS Marketplace charges are not currently supported by Cost Anomaly Detection. Customers with questions should reach out to AWS Support.” ®
Campbell Brown has spent her career chasing accurate information, first as a renowned TV journalist, then as Facebook’s first, and only, dedicated news chief. Now, watching AI reshape how people consume information, she sees history threatening to repeat itself. This time, she’s not waiting for someone else to fix it.
Her company, Forum AI — which she discussed recently with TechCrunch’s Tim Fernholz at a StrictlyVC evening in San Francisco — evaluates how foundation models perform on what she calls “high-stakes topics” — geopolitics, mental health, finance, hiring — subjects where “there are no clear yes-or-no answers, where it’s murky and nuanced and complex.”
The idea is to find the world’s foremost experts, have them architect benchmarks, then train AI judges to evaluate models at scale. For Forum AI’s geopolitics work, Brown has recruited Niall Ferguson, Fareed Zakaria, former Secretary of State Tony Blinken, former House Speaker Kevin McCarthy, and Anne Neuberger, who led cybersecurity in the Obama administration. The goal is to get AI judges to roughly 90% consensus with those human experts, a threshold she says Forum AI has been able to reach.
Brown traces the origin of Forum AI, founded 17 months ago in New York, to a specific moment. “I was at Meta when ChatGPT was first released publicly,” she recalled, “and I remember really shortly after realizing this is going to be the funnel through which all information flows. And it’s not very good.” The implications for her own children made the moment feel almost existential. “My kids are going to be really dumb if we don’t figure out how to fix this,” she recalled thinking.
What frustrated her most was that accuracy didn’t seem to be anyone’s priority. Foundation model companies, she said, are “extremely focused on coding and math,” whereas news and information are harder. But harder, she argued, doesn’t mean optional.
Indeed, when Forum AI began evaluating the leading models, the findings weren’t exactly encouraging. She cited Gemini pulling from Chinese Communist Party websites “for stories that have nothing to do with China,” and noted a left-leaning political bias across nearly all models. Subtler failures abound too, she said, including missing context, missing perspectives, straw-manning arguments without acknowledgment. “There’s a long way to go,” she said. “But I also think that there are some very easy fixes that would vastly improve the outcomes.”
Brown spent years at Facebook watching what happens when a platform optimizes for the wrong thing. “We failed at a lot of the things we tried,” she told Fernholz. The fact-checking program she built no longer exists. The lesson, even if social media has turned a blind eye to it, is that optimizing for engagement has been lousy for society and left many less informed.
Her hope is that AI can break that cycle. “Right now it could go either way,” she said; companies could give users what they want, or they could “give people what’s real and what’s honest and what’s truthful.” She acknowledged the idealistic version of that — AI optimizing for truth — might sound naive. But she thinks enterprise may be the unlikely ally here. Businesses using AI for credit decisions, lending, insurance, and hiring care about liability, and “they’re going to want you to optimize for getting it right.”
That enterprise demand is also what Forum AI is betting its business on, though turning compliance interest into consistent revenue remains a challenge, particularly given that much of the current market is still satisfied with checkbox audits and standardized benchmarks that Brown considers inadequate.
The compliance landscape, she said, is “a joke.” When New York City passed the first hiring bias law requiring AI audits, the state comptroller found more than half had violations that went undetected. Real evaluation, she said, requires domain expertise to work through not just known scenarios but edge cases that “can get you into trouble that people don’t think about.” And that work takes time. “Smart generalists aren’t going to cut it.”
Brown — whose company last fall raised $3 million led by Lerer Hippeau — is uniquely positioned to describe the disconnect between the AI industry’s self-image and the reality for most users. “You hear from the leaders of the big tech companies, ‘This technology is going to change the world,’ ‘it’s going to put you out of work,’ ‘it’s going to cure cancer,’” she said. “But then to a normal person who’s just using a chatbot to ask basic questions, they’re still getting a lot of slop and wrong answers.”
Trust in AI sits at extraordinarily low levels, and she thinks that skepticism is, in many cases, justified. “The conversation is sort of happening in Silicon Valley around one thing, and a totally different conversation is happening among consumers.”
AI already listening in to call handlers in real time, conducting live database searches
Police forces across England, Wales and Northern Ireland will add personalization and artificial intelligence (AI) to their jointly run digital contact systems through a £72 million contract to manage and develop those systems.
Almost all police forces in the three nations use the Digital Public Contact’s Single Online Home web platform for their own websites, with the platform also running Police.uk, a national information site, and Data.police.uk, which provides information on police-recorded crime.
The Metropolitan Police Service (MPS), which hosts Digital Public Contact services on behalf of the National Police Chiefs Council, hopes to find a single supplier for these under a new contract running from July 2027 to December 2029, with a possible three-year extension, according to a market engagement procurement notice published on 12 May.
Existing Digital Public Contact services include the Single Online Home websites, linked services that pass information on crimes and incidents from the public to relevant officers; and the National My Police Portal, a new service using GOV.UK’s One Login to link victims with officers in charge of cases, which South Yorkshire Police started using in January.
The new contract will also cover use of AI. In March West Yorkshire Police and Digital Public Contact started using AI to extract material from old control room calls, which at present are normally recorded but not transcribed.
In the procurement notice, the MPS said that AI could also be used in reporting, analysis, conversational interactions and staff assistance. In a speech on the development of Digital Public Contact last October, Cambridgeshire’s chief constable Simon Megicks said that the work also includes developing a natural language switchboard that can help direct incoming calls and live services to assist operators, which is being piloted by Humberside Police.
“It supports call handlers in real time, and as they converse, the AI listens in and conducts live database searches, surfacing relevant information instantly,” he said of the assistance service at a National Police Chiefs Council innovation event. “Operators are empowered to make better decisions, quicker: reducing risk and improving outcomes for the public.”
In the King’s Speech on 13 May the government confirmed plans to merge forces in England and Wales and establish a National Police Service. The procurement notice says that the new contract will provide “a robust foundation” supporting these structural changes, although they are likely to take place beyond the end of the contract.
Following a market engagement event on 9 June, the MPS plans to publish a tender notice for the work around the end of July.
For most of the history of software, planning was sacred. You had to plan before anyone touched a keyboard, because the cost of building the wrong thing could be so punishing, especially for startups, that getting it right upfront was the only rational strategy.
Implementation was expensive, engineering time was scarce, and changing direction once the team had committed to an approach could set you back months.
The entire apparatus of modern software development, the roadmaps, the prioritization frameworks, the quarterly planning rituals, grew up as a response to that single economic fact.
That fact is no longer true, and most engineering organizations haven’t caught up.
AI coding tools have collapsed the cost of turning an idea into working software. What used to take weeks of implementation can now be explored in hours.
You can ask an agent to prototype three competing approaches overnight, and throw away the two that don’t hold up when you wake up in the morning.
You can challenge an assumption with a working demo instead of a slide deck. The economics have inverted: planning and process used to be cheaper than building, and now building is cheaper than the meetings you’d hold to decide what to build or how to build it.
This changes everything about how engineering teams should operate. There is no such thing as a perfect plan anymore, and even if there were, the time it would take to produce one means you’ve already lost to someone who just started building.
At Synthesia, we decided to test this idea in the most direct way we could. Every quarter, our product, engineering, and R&D teams come together in London to plan the next three months of work.
Historically, we’d spend most of that time in rooms analyzing, debating, and prioritizing. The goal was to emerge with a plan that was good enough to justify the cost of implementation.
During our most recent meeting, we flipped the sequence. We replaced the first two days of planning with a hackathon. 200 people from across engineering, product, design, legal, research, and talent formed 70 teams and built for 28 hours straight.
The brief was simple: take an idea, build it, turn the result into a two-minute demo video. No detailed specs, no over-planning – just build.
What happened surprised us.
One of the winning teams, a group of five engineers, completely rebuilt our video editor from scratch. The video editor provides a PowerPoint-like interface where users of our platform create videos with AI avatars.
The engineers delivered a full end-to-end reimagining of the product, focused on interactivity, branching narratives, and multi-avatar storytelling.
This wasn’t an outlier; across all 70 teams, the same pattern emerged: when you give people focus and remove friction, they can move far faster than anyone expected.
The lesson we took from this experiment is that execution is no longer the constraint; judgement is.
This might contradict the operating assumption that most engineering leaders have been working with for their entire careers. We have spent years building organizations optimized for execution throughput: how many features shipped, how many story points completed, how quickly the backlog shrinks.
But when building becomes cheap, the bottleneck moves upstream. The hard part is no longer getting the code written. Instead, it is knowing what code is worth writing in the first place.
When I say judgement, I mean four specific things. First, the ability to help product managers address the right customer problem faster, which requires distinguishing between what’s intellectually interesting and what actually matters to users and to the business.
Second, defining what “great” looks like before you start, because if you can’t articulate that standard, you won’t recognize it when you see it.
Third, knowing when something is good enough to put in front of a user: not perfect, not polished, just sufficient to learn from. And finally, being able to kill ideas quickly.
When you can try many things in parallel, the most valuable skill becomes letting go of the ones that aren’t working, rather than falling in love with your first attempt because it costs so much to produce.
The best engineering teams in the next few years will not win on code output, they’ll win on taste.
This has real implications for how we think about the engineering role itself. We are moving from being builders to being orchestrators. AI agents can now execute large parts of the development process end to end.
The engineer’s job increasingly becomes choosing the right problems, reviewing outputs, and iterating at speed. Less time writing every line and more time directing systems that write lines for you.
Some people find this threatening. I think it’s the opposite. The tedious parts of engineering, the boilerplate, the repetitive wiring, the work that was never actually the interesting part, that’s what gets automated first.
What remains is the work that engineers have always wished they could spend more time on: understanding the problem deeply, designing elegant solutions, making the hard calls about what to build and what to throw away. The craft gets distilled to its essence.
We’re holding ourselves accountable to this shift at Synthesia. We’re tracking week-over-week usage of AI coding tools like Claude Code and Codex, and we’re measuring how quickly teams can move from idea to prototype to user feedback. The metric that matters now is the speed of the learning loop, not the volume of code produced.
The direction we’re headed is what I’d call auto-mode development: tight loops from prototype to user testing to shipping to refinement. Agile is being replaced by something faster still, something where the gap between having an insight and testing it against reality shrinks to nearly nothing.
So the question that matters for every engineering leader reading this is no longer “can we build this?” That question has been answered. You can build almost anything, remarkably fast, with a small team and the right tools.
The question now is: what should you build? And do you have the judgement to know?
Around the world, “sticker shock” is a growing concern for CIOs and IT leaders confronting the rising cost and limited availability of enterprise components, driven by insatiable demands for AI infrastructure.
Organizations are experiencing a structural shift in infrastructure economics, described as a “memory super-cycle,” whereby demand for infrastructure capable of supporting AI workloads is placing pressure on the supply of standard components. And it’s traditional enterprises that are struggling to keep pace.
Krish Prasad
Senior Vice President and General Manager, VCF Division at Broadcom.
While hyperscalers purchased years of capacity in advance, manufacturers are prioritizing high-bandwidth memory for GPUs, the specialist chips used to power AI and data-heavy workloads.
This is driving up infrastructure costs, constraining availability and extending lead times, creating challenges that not all enterprises are equipped to manage.
Scaling infrastructure used to be simpler – when demand increased, organizations added more hardware. But this is no longer an option. We are at an inflection point where enterprises can’t buy their way out of the problem. The solution to the hardware crisis isn’t more hardware; it’s smarter software.
The limits of hardware-first thinking
Enterprise IT has long relied on adding capacity to address performance challenges, but the current supply crunch is exposing the limitations of this approach. As demand for AI-ready infrastructure accelerates, memory costs have surged – often accounting for more than 50% of total system spend – while supply remains limited.
Consequently, simply adding capacity is becoming increasingly expensive and, in many cases, unsustainable. Enterprises are being forced to rethink their infrastructure strategy and try to do more with less.
This is where optimization comes in; using software to manage resources more intelligently and efficiently. What began as a response to cost pressure now represents a broader transformation in how infrastructure is designed and operated.
In practice, this means rebalancing resource utilization. In many environments, CPU capacity, the processing power provided by central processing units, remains underused, while workloads are constrained by memory availability.
Techniques such as high-speed NVMe memory tiering, which moves less active data from expensive DRAM to cost-effective NVMe storage, allow organizations to significantly reduce memory costs and increase VM density.
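The tiering idea described above is simple enough to sketch. Below is a toy two-tier cache in Python, purely an illustration of the concept rather than any vendor's implementation: hot pages stay in a small fast tier standing in for DRAM, and the least-recently-used pages are demoted to a larger slow tier standing in for NVMe instead of being discarded.

```python
from collections import OrderedDict

class TwoTierCache:
    """Toy model of memory tiering: hot pages live in a small fast tier
    (DRAM stand-in); least-recently-used pages are demoted to a larger
    slow tier (NVMe stand-in) rather than evicted outright."""

    def __init__(self, fast_capacity):
        self.fast_capacity = fast_capacity
        self.fast = OrderedDict()  # page -> data, maintained in LRU order
        self.slow = {}             # demoted (cold) pages

    def access(self, page, data=None):
        if page in self.fast:
            self.fast.move_to_end(page)          # mark as recently used
        else:
            if page in self.slow:                # promote cold page on access
                data = self.slow.pop(page)
            if len(self.fast) >= self.fast_capacity:
                cold_page, cold_data = self.fast.popitem(last=False)
                self.slow[cold_page] = cold_data  # demote, don't evict
            self.fast[page] = data
        return self.fast[page]

cache = TwoTierCache(fast_capacity=2)
cache.access("a", 1)
cache.access("b", 2)
cache.access("c", 3)          # "a" is cold, so it drops to the slow tier
print("a" in cache.slow)      # True: data survived demotion
print(cache.access("a"))      # 1: promoted back to the fast tier on access
```

Real tiering engines make this decision per page based on access frequency and latency budgets, but the payoff is the same: the expensive fast tier only holds the hot working set.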
At the same time, extending the life and value of existing infrastructure has become a priority. Approaches such as intelligent oversubscription, workload balancing, and memory optimization enable higher workload density without compromising performance.
Storage efficiency also plays a key role, with data reduction techniques increasing effective capacity while unlocking stranded CPU and memory trapped in rigid configurations.
Together, these software-led strategies are reinforcing the role of private cloud platforms as a control layer for modern infrastructure, empowering enterprises with greater visibility over how resources are allocated and optimized in response to industry constraints.
Smarter software is the answer
The structural supply crisis is a wake-up call, signaling a broader reset in how enterprise IT should be operated.
The long-standing model of addressing performance challenges by purchasing more hardware is no longer sustainable. In response to the memory super-cycle, a software-defined approach centered on optimization and flexibility is taking hold.
By adopting software-driven optimization within private cloud environments, enterprises can increase efficiency, improve agility, and scale more effectively, meeting evolving infrastructure demands without relying on continual hardware investment.
Organizations that embrace this shift will be better positioned to navigate ongoing constraints, control costs, and sustain digital transformation.
This article was produced as part of TechRadar Pro Perspectives, our channel to feature the best and brightest minds in the technology industry today.
The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/pro/perspectives-how-to-submit
Last month, Denon unveiled its first new A/V receiver (AVR) since 2023, the AV-S980H, the most powerful in their budget-priced “S” series lineup. But it turns out, this was just the beginning. Today, Denon unveiled two more new AVR models, this time in their upgraded “X Series,” which is known for its higher performance, broad feature set and punchy dynamic sound. The AVR-X2900H ($1,349) replaces the AVR-X2800H and the AVR-X3900H ($1,849) replaces the venerable AVR-X3800H, an eCoustics Editors’ Choice Award winner.
Denon’s AVR-X3900H (pictured) replaces the popular AVR-X3800H from 2022.
Stylistically, the new models bear a strong resemblance to their predecessors, though Denon reps tell us they have been tweaked internally for higher performance and add some interesting new features, like the ability to use wireless rear speakers.
We got a chance to check out these new receivers at Denon’s headquarters in Japan last month, and were impressed by what we saw and heard. In a series of movie clips, the AVR-X3900H filled a large listening room with immersive Dolby Atmos sound, powering a 7.2.2-channel Bowers & Wilkins speaker system featuring the company’s 800 series flagship speakers. The system was able to reach near cinematic reference output levels with no audible strain.
Denon’s new AVR-X3900H AVR put out some room-filling cinematic sound in the listening room at Denon’s headquarters in Kawasaki, Japan. Photo by Tony Ware.
Powered by HEOS
Both the AVR-X2900H and AVR-X3900H feature the latest HEOS module for whole home wireless music streaming, with support for lossless and high resolution audio from compatible streaming services such as TIDAL, Spotify, Amazon Music, Qobuz and Apple Music (via AirPlay 2). Also, both receivers will support the use of Denon’s new Home 200, Home 400 and Home 600 speakers as wireless rear channels, via a future free over-the-air software upgrade. This gives home theater fans and A/V hobbyists the ability to do real discrete immersive surround sound without having to run speaker wires to the back of their rooms.
Both new models feature updated internal components and a high performance 32-bit multi-channel DAC (digital-to-analog converter) architecture to deliver improved imaging, clearer high frequency detail and more authoritative low frequency energy to every speaker. Both models include Audyssey calibration on-board with the option to upgrade to DIRAC Live room correction for an additional fee.
“Denon’s X-Series has always been about uncompromised performance,” said Lyle Smith, President of Sound United at Denon’s parent company, HARMAN. “With our newest additions to the series, we’ve gone further by combining expanded room calibration capabilities with adaptable system designs to deliver enhanced audio quality and greater flexibility for people who take their sound seriously.”
A peek under the hood at the inside of the Denon AVR-X3900H A/V receiver.
We also had a chat with Denon’s Sound Master, Shinichi Yamauchi. His job is to listen to Denon’s new product designs throughout their development cycle at Shirakawa Audio Works in Japan. With a trained ear and a background in engineering, Yamauchi-San makes suggestions to the development and engineering team, based on his extensive listening sessions with each new product. A new Denon product only makes it into production once Yamauchi-San signs off on it. And both the X2900H and X3900H got his seal of approval earlier this year.
Denon Sound Master Yamauchi-San shares some of his favorite tracks with us at Denon’s listening room at the company’s headquarters in Kawasaki, Japan.
Denon AVR-X2900H
The Denon AVR-X2900H offers seven channels of amplification and dual subwoofer outputs. It’s rated at 95 watts/channel into 8 ohms (2 channels driven). It can decode Dolby Atmos and DTS:X immersive sound formats with discrete height and surround speaker outputs for a 5.2.2-channel implementation. While Audyssey MultEQ XT calibration software is included, the X2900H is Denon’s most affordable AVR that also offers an optional upgrade to DIRAC Live room correction and calibration software. The license for DIRAC Live currently costs $259 for the limited bandwidth version or $299 for the full bandwidth version.
Denon AVR-X2900H A/V receiver on display at Denon’s headquarters in Kawasaki, Japan.
The receiver offers six HDMI inputs with support for VRR, ALLM, 8K/60 Hz or 4K/120 Hz and dual HDMI outputs, including one with ARC/eARC audio return channel. It also offers multiple analog and digital inputs for legacy gear, including a turntable input with built-in moving magnet phono preamp.
In addition to a fully wired speaker set-up, the AVR-X2900H will support wireless rear channels using the Denon Home 200, Home 400 or Home 600 speakers (this will be delivered in a future software update). While the 95-watt power rating is measured with two channels driven, Denon guarantees that the AVR-X2900H will deliver at least 70% of that power rating when five channels are driven. It supports Bluetooth, Wi-Fi and hard-wired network connectivity with HEOS built-in for whole home music streaming from a wide selection of compatible streaming apps.
Denon AVR-X3900H
The Denon AVR-X3900H includes nine channels of amplification and four independently controlled subwoofer outputs for precise in-room bass optimization. It is rated at 105 watts/channel with two channels driven with a guarantee of at least 70% of that rated output when driving five channels. Out of the box, the X3900H supports 5.4.4 or 7.4.2-channel immersive sound applications.
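Those multi-channel guarantees translate directly into numbers. Here is a quick back-of-envelope calculation using the ratings quoted above (the function name is ours, for illustration only):

```python
def guaranteed_per_channel(two_channel_rating_w, guarantee_fraction=0.70):
    """Minimum per-channel output guaranteed with five channels driven,
    given the headline two-channel rating and Denon's 70% guarantee."""
    return two_channel_rating_w * guarantee_fraction

# AVR-X2900H: rated 95 W/channel (2 channels driven)
print(guaranteed_per_channel(95))   # 66.5 W minimum with 5 channels driven

# AVR-X3900H: rated 105 W/channel (2 channels driven)
print(guaranteed_per_channel(105))  # 73.5 W minimum with 5 channels driven
```

In other words, even under the heavier five-channel load, the X3900H is guaranteed to deliver at least 73.5 watts to each driven channel.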
With eleven channels of processing, the X3900H can be upgraded to support a 7.4.4 channel surround system by adding a 2-channel power amplifier. If you only need a five or seven channel surround system, you can use the additional on-board amps to drive speakers in another room. And if you need more power, the AVR-X3900H has a full set of eleven preamp outputs which can be connected to outboard power amplifiers.
Denon AVR-X3900H at Denon’s headquarters in Kawasaki, Japan.
In addition to Dolby Atmos and DTS:X, the AVR-X3900H can decode Sony 360 Reality Audio, MPEG-H and AURO-3D immersive surround formats, for maximum compatibility and flexibility. It is also IMAX Enhanced certified so it can apply IMAX EQ to DTS:X soundtracks that are IMAX Enhanced.
The X3900H includes Audyssey MultEQ XT32 calibration software on-board, and can be upgraded to DIRAC Live – all the way up to DIRAC Live ART – via an additional license purchase (currently priced at $259 to $799, depending on options).
Like the X2900H, the X3900H includes six HDMI inputs with support for VRR, ALLM and resolution up to 8K @ 60 Hz or 4K @ 120 Hz. But the X3900H adds a third HDMI output for a projector, monitor or TV. The X3900H also supports HDMI ARC/eARC for single cable connection to your display of choice. It also offers multiple analog and digital audio inputs for legacy gear, including a turntable input with built-in moving magnet phono preamp.
Interestingly enough, unlike some competitive models, like the Onkyo TX-RZ30, neither of these new Denon receivers offers any analog video inputs (composite, component or S-Video). So if you’re still rocking a VCR or LaserDisc player, you’ll need to plug these directly into your TV or get an analog to digital video converter. I should note that this is nothing new as the predecessor models also lacked analog video inputs. It’s also not particularly critical to most potential buyers. The current AVR-X4800H model does include analog video inputs so we’d imagine any future replacement step-up model may continue that tradition.
The AVR-X3900H offers a bevy of inputs and outputs including six HDMI ports, nine pairs of speaker outputs, four independent subwoofer outputs and preamp level outputs for eleven channels.
Like the X2900H, the AVR-X3900H will get an update later this year to support wireless rear surround channels by adding a pair of Home 200, Home 400 or Home 600 speakers.
Two things I had hoped to see on the new receivers (but didn’t) were support for Dolby Atmos music within the HEOS streaming platform and on-board decoding for Eclipsa Audio (IAMF) immersive surround. For now, Denon recommends connecting an outboard source device like an Apple TV 4K or FireTV stick to one of the receiver’s HDMI ports if you want to listen to Dolby Atmos music from streaming sources like Amazon Music Unlimited, TIDAL or Apple Music. I listen to Dolby Atmos and 360RA music this way on my current AVR-X3800H and it works reliably (which is nice).
With the company’s recent acquisition by Harman (which in turn is owned by Samsung), we believe it would be a natural fit for Denon receivers to support the new open IAMF/Eclipsa Audio immersive sound format, which was developed by Samsung, Google and others. Although it isn’t widely used yet, Eclipsa Audio is the only immersive surround format supported on YouTube so over time it could become more popular.
They say, “If it ain’t broke, don’t fix it” and this saying holds true for Denon’s latest X series receivers. The models they replaced were highly rated and were popular sellers in the category and these new models look and feel quite similar, offering everything from their predecessors and more. The AVR-X2900H brings DIRAC Live room correction to Denon’s most affordable price point yet, though it does require an additional license purchase to use. Also, both models will get the option for wireless rear speakers in a future software update.
While we were disappointed to find out that the receivers don’t support Dolby Atmos music in HEOS yet or Eclipsa Audio decoding, it’s possible that either or both of these features could be added at some time in the future via a software upgrade. In the meantime, the otherwise comprehensive immersive format support of the AVR-X3900H is unmatched in the industry at this price point and we’d highly recommend it to anyone in search of a robust and future-proofed A/V receiver.
Pricing & Availability:
Both new 2026 Denon X-Series AVRs will be available to order on May 14, 2026 at the following prices:
Linux distros are rolling out patches for a new high-severity kernel privilege escalation vulnerability that allows attackers to run malicious code as root.
Known as Fragnasia and tracked as CVE-2026-46300, this security flaw stems from a logic bug in the Linux XFRM ESP-in-TCP subsystem that can enable unprivileged local attackers to gain root privileges by writing arbitrary bytes to the kernel page cache of read-only files.
Zellic’s head of assurance, William Bowling, who discovered this new universal local privilege escalation flaw, also shared a proof-of-concept (PoC) exploit that achieves a memory-write primitive in the kernel that is used to corrupt the page cache memory of the /usr/bin/su binary to get a shell with root privileges on vulnerable systems.
Bowling said the flaw belongs to the Dirty Frag vulnerability class, which was disclosed last week, and affects all Linux kernels released before May 13, 2026. Like Fragnasia, Dirty Frag has a publicly available PoC exploit that local attackers can use to gain root privileges on major Linux distributions.
However, Dirty Frag works by chaining two separate kernel flaws, the xfrm-ESP Page-Cache Write vulnerability (CVE-2026-43284) and a RxRPC Page-Cache Write security issue (CVE-2026-43500), to achieve privilege escalation by modifying protected system files in memory.
“Fragnesia is a member of the Dirty Frag vulnerability class. This is a separate bug in the ESP/XFRM from dirtyfrag which has received its own patch. However, it is in the same surface and the mitigation is the same as for dirtyfrag,” Bowling said.
“It abuses a logic bug in the Linux XFRM ESP-in-TCP subsystem to achieve arbitrary byte writes into the kernel page cache of read-only files, without requiring any race condition.”
To secure systems against attacks, Linux users are advised to apply kernel updates for their environment as soon as possible.
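A reasonable first step is to compare the running kernel against the fixed version listed in your distribution's advisory. The sketch below is illustrative only: `FIXED_VERSION` is a placeholder, not a real threshold from any advisory, and distro kernels are often patched via backports, so always defer to the vendor's security tracker.

```python
import platform

def kernel_version_tuple(release: str):
    """Parse a kernel release string like '6.8.0-58-generic' into a
    comparable tuple (6, 8, 0), ignoring distro-specific suffixes."""
    base = release.split("-")[0]
    parts = []
    for piece in base.split("."):
        if piece.isdigit():
            parts.append(int(piece))
    return tuple(parts)

def is_patched(running: str, fixed: str) -> bool:
    """True if the running kernel version is at or above the fixed
    version. NOTE: this ignores backported fixes -- check your
    distribution's CVE-2026-46300 advisory for the authoritative answer."""
    return kernel_version_tuple(running) >= kernel_version_tuple(fixed)

# FIXED_VERSION is a hypothetical placeholder for illustration.
FIXED_VERSION = "6.15.0"
print(is_patched(platform.release(), FIXED_VERSION))
```

Because enterprise distros backport security fixes into older version numbers, treat a `False` result as a prompt to check the vendor advisory, not as proof of vulnerability.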
Those who can’t immediately patch their devices should apply the same mitigation used for Dirty Frag, running the commands that remove the vulnerable kernel modules (however, it’s important to note that this will break AFS distributed network file systems and IPsec VPNs):
Fragnasia’s disclosure comes as Linux distros are still rolling out patches for “Copy Fail,” another privilege escalation vulnerability now actively exploited in the wild.
“This type of vulnerability is a frequent attack vector for malicious cyber actors and poses significant risks to the federal enterprise,” the U.S. cybersecurity agency warned. “Apply mitigations per vendor instructions, follow applicable BOD 22-01 guidance for cloud services, or discontinue use of the product if mitigations are unavailable.”
In April, Linux distros patched another root-privilege escalation vulnerability (dubbed Pack2TheRoot) in the PackageKit daemon that had gone unnoticed for a decade.