A man in Ireland has figured out how to find the cheapest pint of Guinness using nothing but AI tools. Matt Cortland became frustrated after paying nearly $9 for a pint at a Dublin pub, and decided to find a way to track prices across the country.
The first step was to find out the prices. To do that, Cortland created “Rachel” using the AI voice-generation platform ElevenLabs, then had her call every pub across Ireland — with a Northern Irish accent, of course. She ended up calling over 6,000 pubs, asking each one what their price was for a pint of Guinness.
The second step was to sort the data. He used Claude to create a price index called “Guinndex,” which he can update himself, or bartenders can update whenever prices change. This offers Cortland — and anyone else craving a Guinness — up-to-date prices.
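The article doesn’t describe how Guinndex works under the hood, but the core idea, a continuously updatable price index that can always surface the cheapest pint, is simple to sketch. Below is a minimal, hypothetical Python illustration; the class, pub names and prices are all invented for the example.

```python
# Hypothetical sketch of a crowd-updatable price index like "Guinndex".
# Names and structure are illustrative; the real implementation isn't described.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class PubPrice:
    pub: str
    county: str
    price_eur: float
    updated: datetime = field(default_factory=datetime.utcnow)

class Guinndex:
    def __init__(self):
        self._prices: dict[str, PubPrice] = {}

    def update(self, pub: str, county: str, price_eur: float) -> None:
        """Record a new price, e.g. phoned in by 'Rachel' or a bartender."""
        self._prices[pub] = PubPrice(pub, county, price_eur)

    def cheapest(self, n: int = 5) -> list[PubPrice]:
        """Return the n cheapest pints currently on record."""
        return sorted(self._prices.values(), key=lambda p: p.price_eur)[:n]

index = Guinndex()
index.update("The Long Hall", "Dublin", 6.80)   # invented prices
index.update("Sean's Bar", "Westmeath", 5.60)
print([(p.pub, p.price_eur) for p in index.cheapest()])
```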
The key was making AI feel authentic over the phone
While it all sounds pretty methodical, the most successful part of Cortland’s AI procedure was making Rachel feel human. Rachel was inspired by Rachel Duffy, the winner of the U.K. reality show “The Traitors” – but given a Northern Irish accent. Cortland reported that most pubs across Ireland couldn’t even tell Rachel was an AI over the phone, which likely yielded more results.
A wide range of industries has started using AI to make phone calls. A study of car dealerships found that when AI handled customer service calls, it seemed more successful than the industry’s average calls. Data from Regal found that people actually appear to prefer talking to AI representatives over human ones, staying on the phone longer and providing longer responses. Rachel’s phone calls with pubs appeared to reflect this, with bartenders happily telling her that she could even come in and get a pint for free.
It seemed like they didn’t even know she was an AI — but not every use of AI callers is finding the same success. People have reported not enjoying AI-led job interviews, likely biased from the start because they know it’s an AI on the other end. Maybe let’s stick to the AI pint trackers.
The company also launched the latest iteration of its TPUs.
Google has made a series of new enterprise-focused launches, including a new platform to build and manage AI agents and the latest generation of its AI-specific Tensor Processing Units (TPU), as competition between tech giants targeting the lucrative enterprise sector continues to intensify.
The announcements were made at the company’s annual Cloud Next conference in Las Vegas yesterday (22 April), with around 32,000 in attendance.
There is no shortage of companies offering agentic AI services, including OpenAI, Anthropic, Microsoft and China’s Alibaba, among others, with Google being the latest to join the enterprise AI race.
To bolster its positioning, Google launched a new Gemini Enterprise Agent platform to build, scale, govern, and optimise agents.
Users can manage aspects of the agents and deliver them through the company’s existing Gemini Enterprise platform, which saw 40pc quarter-on-quarter growth in paid monthly active users in Q1.
The new launch is Google’s answer to Amazon’s Bedrock AgentCore and Microsoft Foundry.
The agent platform provides access to Gemini 3.1 Pro – Google’s most advanced model yet – the viral Nano Banana 2, audio model Lyria 3, and leading models from Anthropic, including Claude Opus 4.7, Sonnet and Haiku.
Plus, a central monitoring unit lets users oversee and guide all agents from one location.
“The agentic enterprise is real – and deployed at a scale the world has never before seen,” said Thomas Kurian, Google Cloud’s CEO.
In a blogpost on the company’s site, CEO Sundar Pichai noted: “The conversation has gone from ‘Can we build an agent?’ to ‘How do we manage thousands of them?’”
Alongside this, Google is adding to its vertical stack offerings with a new cybersecurity platform that combines Google’s Threat Intelligence and Security Operations with Wiz’s cloud and AI security platform to detect and respond to threats.
Moreover, the company also launched the latest iteration of its TPUs, but this time, separating them into two distinct processors. Both chips will become available later this year.
TPU 8t will be used for “accelerated” training, while 8i will be used for “near-zero latency” inference, the company said.
These new systems are key components of Google Cloud’s AI Hypercomputer, an integrated supercomputing architecture that combines hardware, software and networking to power the full AI life cycle, Google said.
Don’t miss out on the knowledge you need to succeed. Sign up for the Daily Brief, Silicon Republic’s digest of need-to-know sci-tech news.
Jury selection begins Monday in Musk v. Altman, the federal trial over whether OpenAI’s nonprofit-to-profit conversion constitutes unjust enrichment and breach of charitable trust. Musk dropped fraud claims Friday to sharpen focus on the two remaining counts. The most damaging evidence is Greg Brockman’s 2017 diary entry calling the nonprofit commitment “a lie.” Judge Gonzalez Rogers found “ample evidence” and rejected nearly every dismissal attempt. The advisory jury will hear testimony from Musk, Altman, Nadella, Murati, and Sutskever, but the judge alone decides remedies, which could include $150 billion in damages and the unwinding of the conversion.
Jury selection begins Monday in Oakland federal court for the trial that will determine whether OpenAI’s conversion from a nonprofit to one of the most valuable companies in the world was a breach of charitable trust. Elon Musk, who co-founded OpenAI in 2015 and donated at least $38 million to it, is suing Sam Altman, Greg Brockman, and OpenAI on two remaining claims: unjust enrichment and breach of charitable trust. He wants up to $150 billion in damages directed to the nonprofit arm, the ouster of Altman and Brockman from leadership, and a court order unwinding the for-profit conversion. On Friday, Musk voluntarily dropped his fraud and constructive fraud claims, narrowing the case from 26 claims to two but sharpening the focus on the question that has defined the dispute since it began: did OpenAI’s leadership promise a nonprofit and build an $852 billion company instead?
The evidence
The most damaging piece of evidence in the case is not an email from Sam Altman. It is a diary entry from Greg Brockman, OpenAI’s co-founder and president, written in 2017: “I cannot believe that we committed to non-profit if three months later we’re doing b-corp then it was a lie.” Judge Yvonne Gonzalez Rogers, who is presiding over the case and who will make the final ruling on remedies if the jury finds liability, cited that entry directly in her January 15 ruling that sent the case to trial. She found “ample evidence” supporting Musk’s claims and rejected “nearly every attempt by OpenAI and Microsoft to make the lawsuit disappear.” The ruling was a 28-page signal that the court considers the case serious enough for a jury to hear, which in itself is a significant validation of the underlying allegations.
Musk’s legal team has also produced a 2017 email in which Altman claimed he remained “enthusiastic about the non-profit structure” after Musk threatened to cut off funding, a statement that Musk’s attorneys frame as a misrepresentation designed to keep donations flowing while leadership privately planned a different path. Hundreds of pages of discovery materials unsealed from depositions in the autumn of 2025 include emails, texts, and Slack messages that Musk’s team says show leadership “said one thing publicly and planned something completely different privately.” A February 2023 text from Altman to Musk, sent after Musk had publicly criticised OpenAI, read: “You’re my hero and that’s what it feels like when you attack OpenAI.” The witness list reads like a Silicon Valley tell-all: Musk, Altman, Microsoft CEO Satya Nadella, former OpenAI CTO Mira Murati, co-founder Ilya Sutskever, and Shivon Zilis.
OpenAI has called the lawsuit “baseless” and described it as a “harassment campaign that’s driven by ego, jealousy and a desire to slow down a competitor.” The competitor is xAI, the AI company Musk founded in 2023 and recently folded into SpaceX in a $1.25 trillion all-stock deal that raised its own corporate governance questions, a fact OpenAI’s defence team will use to argue that Musk’s motivations are competitive rather than charitable. OpenAI contends that Musk left the board in February 2018, reneged on a larger planned donation, and has no standing to dictate the organisation’s structure years after his departure. Judge Gonzalez Rogers herself noted that “this country likes competition,” flagging the potential self-interest in Musk’s claims.
The structural defence is that OpenAI’s conversion was reviewed by attorneys general in both California and Delaware, that the nonprofit entity now operates as the OpenAI Foundation holding approximately 26% of the company’s valuation, roughly $130 billion, and that the Foundation retains oversight of mission alignment and the ability to appoint members of the for-profit board. The $25 billion commitment the Foundation announced when OpenAI completed its recapitalisation makes it one of the most well-endowed philanthropic organisations in the world. OpenAI argues this structure preserves the charitable mission while enabling the scale of investment required to pursue artificial general intelligence. Altman, Brockman, and Microsoft have all denied wrongdoing.
The structure
The trial structure is unusual. The nine-member jury’s verdict on liability will be advisory only. Judge Gonzalez Rogers, not the jury, will make the final determination on both liability and remedies. Opening arguments are expected Tuesday. The liability phase runs through mid-May. If OpenAI is found liable, the remedies phase begins May 18, where the court will consider Musk’s requests for damages, the ouster of leadership, and the unwinding of the conversion. The advisory jury format means that even a unanimous jury verdict does not bind the judge, but a strong jury consensus would carry significant moral authority in the judge’s deliberations.
Musk’s decision to drop the fraud claims on Friday was strategic, not a concession. Fraud requires proving intentional deception, a higher evidentiary bar that would have diverted the trial into arguments about Altman’s state of mind. Unjust enrichment and breach of charitable trust focus on outcomes rather than intent: did the conversion enrich insiders at the expense of the charitable mission, and did it violate the trust under which the nonprofit’s assets were held? These claims are easier to prove because the facts are largely undisputed. OpenAI was founded as a nonprofit. It converted to a for-profit. Its leaders hold equity in the for-profit entity. The question is whether that sequence constitutes a legal violation, not whether anyone intended it to be one. In his April 2026 amendment, Musk asked that Altman and Brockman be required to hand over “all equity and other personal financial benefits they obtained as a result of OpenAI’s for-profit operations” to the OpenAI charity.
OpenAI quickly stepped in to fill Anthropic’s Pentagon contract with no usage restrictions after Anthropic refused the military work on principled grounds, a contrast that has become part of the broader governance debate about whether OpenAI’s “benefit all of humanity” charter survived the conversion. Eyes on OpenAI, a coalition of more than 60 California nonprofits, has separately argued that the restructuring deal is “full of holes” and could establish a precedent for startups to use nonprofit status for tax advantages before converting to for-profit. Public Citizen and the San Francisco Foundation have urged the California attorney general to ensure that conversion payments go to a new, independent charitable enterprise rather than one controlled by the same leadership that approved the conversion.
The trial is not only about OpenAI. It is about whether the nonprofit-to-profit conversion model is legally sustainable in AI. OpenAI was not the first technology organisation to start as a nonprofit and accumulate enormous value. Mozilla did. Wikipedia resisted. The question the Oakland courtroom will address over the next month is whether the people who built OpenAI with charitable donations and a stated commitment to benefit humanity can legally convert that work into an $852 billion for-profit enterprise and keep the equity. Musk says they cannot. Altman says the conversion serves the mission better than the original structure ever could. Brockman’s diary says it was a lie. The jury will hear all of it, and the judge will decide what it means.
Shooting high-quality 4K footage, the Arlo Ultra 3 4K is one of the best wireless security cameras you can get. This new version charges via USB-C and has a higher-density battery for longer life. Arlo remains one of the best security camera platforms, packed with intelligence. The downsides are that you have to subscribe to the highest tier to get 4K footage, and the improvement from Arlo’s 2K to 4K cameras isn’t as big a jump as you might hope for.
Excellent video
Highly customisable detection
Brilliant app
Expensive cloud subscription required for most features
Expensive to buy
Key Features
Review Price: £259.99
4K resolution
Shoot high quality footage day and night
Battery powered
Up to six months of battery life on a full charge
Requires a SmartHub
Wireless signal from the camera must go to a SmartHub
Introduction
The Arlo Ultra was the first 4K security camera that I remember reviewing. Now we’re onto the third generation of the product with the Arlo Ultra 3. Offering better battery life than the previous generation, along with better range, this new version is more of a tweak than a revamped camera.
Launching in a world with more competition and where 2K footage is pretty standard, does the Arlo Ultra 3 4K do enough to stand out? Read on for my verdict.
Design and Installation
USB charging
New high-density battery
Needs a SmartHub to work
Externally, the Arlo Ultra 3 looks very much like all of the other wireless cameras in Arlo’s line-up. In fact, the casing and mounting options are the same as the Arlo Pro 6’s, and the mount is compatible with cameras from a few generations back.
That’s actually handy, as I could swap out an older Arlo camera for the new one without having to change the mount, as the same one is provided with the Ultra 3.
This mount is a decent one, offering a good degree of movement, so it’s easy to line the camera up with the area that you want to watch.
One big change between the Arlo Ultra 3 and the previous version, the Arlo Ultra 2, is that this newer model is charged via USB-C, rather than the old proprietary magnetic connector.
There’s a flap under the camera to access this slot, and any USB-C cable can be used, which makes charging easier, and means no hunting for that proprietary cable.
Inside the camera is a new higher-density battery, which has 15% more capacity than the previous model’s.
As with all Ultra cameras I’ve reviewed, the Ultra 3 4K has to connect via the Smarthub VM5000. If you’ve got one already, you can buy the camera on its own (£259.99); otherwise, you have to buy the Ultra 3 in at least a two-pack with the Smarthub (from £529.99). That makes this system very expensive.
As well as providing the connection, the Smarthub has a microSD card slot underneath, which you can use for offline recording. Just be aware that doing this cuts out many of the more advanced features that you only get by subscribing to Arlo Secure.
I think that this camera is best with a cloud subscription. If you don’t want to pay for cloud storage, then buy the EufyCam S4 instead.
Features
Requires an expensive subscription for the main features
Excellent list of object detection
Custom AI detection
As with Arlo’s other cameras, the Ultra 3 is controlled via the excellent and flexible Arlo app. If you have storage inserted into the Smarthub, you can record video offline, but you miss out on all of the detection features.
For all practical purposes, you need to have an Arlo Secure plan. To get 4K recording you need Arlo Secure Plus, which costs £19.99 a month, and supports unlimited cameras with 14 days of storage and a host of AI features.
That is a very expensive subscription, particularly as the recording history is so short. Buy the 4K Ring Outdoor Cam Pro, and you can get cloud storage from £4.99 a month.
Expensive as it may be, the overall Arlo experience is one of the best. From the main screen I can choose which cameras I want to see by adding widgets. Widgets let me jump into a camera’s live feed, and see what’s going on, turning on two-way talk if needed. You do need to enable local 4K streaming if you want the best quality.
This home page also gives quick access to the three modes: Arm Away, Arm Home, and Standby. Similar to the modes in an alarm system, I can configure what cameras do in each mode. I tend to have indoor cameras off for the Home and Standby modes, and on for Away; outdoor cameras are on unless the system is in Standby. Modes can be scheduled or automated via your location.
With the recommended Arlo Secure Plus subscription, the level of control over motion detection is incredible. I could set motion zones and then choose to record and get alerts about my choice of people, animals and vehicles.
This high-level package also adds person recognition, which can be trained by uploading photos of people you know and then refined in-app. There’s also vehicle recognition, which works similarly but is designed to spot cars that you know.
Oddly, the vehicle recognition is available on all cameras connected to your account, but the person recognition can only be implemented on a single camera.
Custom Detection is a brand new feature, with up to three custom detection events available per camera. Give the system a before and after screenshot, say showing a gate closed and open, and the system looks for the change.
Notifications can then be triggered when motion is detected, at a set time, or when the mode changes. For example, I can tell the system to alert me if the back door is open when I change the mode to Arm Away.
It’s a very flexible system that makes Arlo more powerful than its competition, but it is an expensive choice. And Custom Detection, in my experience, needs a big enough change to look for in order to work properly: trying to get the system to recognise when my glass kitchen door was open proved hard.
Video is recorded to the cloud, and is available in the Feed section, which is organised by day. Events can be filtered by date, camera and event type, the latter of which has too many choices to list. What I can say is that it’s remarkably easy to find an event, although I would like the option of an AI search, as you get with the high-end Ring plan.
Performance
Sharp video
Not a huge step up from 2K
Clean full-colour night vision
When the Arlo Ultra came out, it was a huge step up from the 1080p video cameras that were around. Today, a lot of cameras have a 2K resolution. While 4K has around 2.25 times the pixels of 2K, the Arlo Ultra 3 records at the same bitrate as the Arlo Pro 6 (55kbit/s), so there’s more compression going on with the 4K camera.
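The compression point is easier to see with quick arithmetic. This back-of-the-envelope sketch assumes “2K” means 2560 x 1440, which is our assumption rather than an Arlo-published figure.

```python
# At the same bitrate, a 4K stream has fewer bits available per pixel than 2K.
pixels_4k = 3840 * 2160   # 8,294,400
pixels_2k = 2560 * 1440   # 3,686,400 (assuming "2K" means 1440p)
print(pixels_4k / pixels_2k)   # 2.25 -> 4K has 2.25x the pixels
# Bits per pixel scale inversely with pixel count at a fixed bitrate:
print(pixels_2k / pixels_4k)   # ~0.44 -> each 4K pixel gets ~56% fewer bits
```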
During the day, the Arlo Ultra 3 does look great. The image is very sharp, with detail right to the back of the frame. Is it that much better than the footage shot on the Arlo Pro 6? No. There’s a definite improvement with having 4K, but not by as much as you’d expect. The main difference is that the Arlo Ultra 3 has wider 180° field of view (the Arlo Pro 6 has a 160° field of view).
With a spotlight, the Ultra 3 can shoot full colour nighttime footage. As the light turns on, it takes a second or so for the image to stabilise before it settles down and delivers the full-colour image.
Moving images blur more than during the day, and the overall image is a touch softer, but I could always find a frame or two where faces were in detail. If anything, the gap between the Pro 6 and Ultra 3 is narrower here; the 4K camera is better, but only by a little.
Arlo says that the Ultra 3 can last up to six months on a charge, although that depends on how many times the camera is triggered each day. In my garden, four months or more seems more realistic.
Should you buy it?
You want high-quality video
If you want sharp 4K footage and brilliant AI detection, this camera is for you.
You want something better value
The Ultra 3 is expensive to buy and expensive to run; there are far more wallet-friendly options to choose from.
Final Thoughts
There’s no denying that the Arlo Ultra 3 is a great security camera: it shoots high-quality footage, and the Arlo app is great. The issue is that you have to pay much more for the camera than for the 2K Pro 6, but the footage isn’t that much better.
And, to get 4K footage, you have to subscribe to the highest tier of the cloud subscription package, so it’s a big price commitment. If you want the best, buy this camera; otherwise, go for the Arlo Pro 6 2K or another option in my guide to the best outdoor security cameras.
How we test
Unlike other sites, we test every security camera we review thoroughly over an extended period of time. We use industry standard tests to compare features properly. We’ll always tell you what we find. We never, ever, accept money to review a product.
Find out more about how we test in our ethics policy.
Used as our main security camera for the review period
We test compatibility with the main smart systems (HomeKit, Alexa, Google Assistant, SmartThings, IFTTT and more) to see how easy each camera is to automate.
We take samples during the day and night to see how clear each camera’s video is.
FAQs
Does the Arlo Ultra 3 4K need a cloud subscription?
If you want the main detection options, you need a cloud subscription, and you need the most expensive tier to record 4K footage.
Test Data
Full Specs
Arlo Ultra 3 4K Review
Manufacturer: –
Size (Dimensions): 52 x 78 x 89 mm
Release Date: 2026
First Reviewed Date: 17/03/2026
Model Number: Arlo Ultra 3 4K
Resolution: 3840 x 2160
Battery Length: 6 months
Smart assistants: Yes
App Control: Yes
Camera Type: Indoor/outdoor wireless
Mounting option: Wall
View Field: 180 degrees
Recording option: Cloud (subscription required), local (via SmartHub)
Apple’s iPhones are known for many things, one of which is the quality of their cameras. The iPhone 16 Pro and 16 Pro Max made their way onto our list of the best smartphone cameras of 2025, and their successor, the iPhone 17 Pro Max, produced photos that impressed us greatly when we tested one in late 2025.
The hardware is just part of the photographic process, though; Apple’s default Camera app also plays a major role in the overall experience. It offers a streamlined, easy-to-use experience — especially in iOS 26 — that ensures that almost anyone can start taking photos after unboxing their iPhone without spending any time going through menus and setting things up.
That said, while it’s entirely possible to take excellent photos with the Camera app the way it is from the factory, sticking with the defaults won’t necessarily offer an in-depth enough experience for the hardcore photographers out there. Thankfully, there are many things you may not have realized the Camera app can do, and Apple’s app also has advanced settings that you can adjust. These range from changing the image resolution to tweaking the app’s user interface. Of course, Apple’s app will never be as flexible as Camera alternatives like Halide, but depending on your needs, these may be all the changes you’ll ever have to make.
Change the main camera’s resolution
A modern iPhone takes photos at 24 MP by default. On an iPhone 17, this results in 5,712 x 4,284 photos that generally take up about 3 MB or so of storage. While this will be adequate for most people, those who want to either maximize storage space or eke out some more resolution can change this.
On iOS 26, go into Settings, then Camera. To change resolution, tap on Formats. You’ll see a few more options here, but the one you want to look for is Photo Mode. Tapping on that will let you choose between 24 MP and 12 MP, with the latter dropping file sizes to around 2 MB or so. While that’s not a huge reduction in file size, it will add up, especially if you’re rocking an older iPhone with less built-in storage.
You can also increase the resolution to 48 MP if you desire by enabling the Resolution Control and ProRAW settings (depending on the iPhone you have). This doesn’t, however, let you select 48 MP in the above submenu. Instead, it’ll expose a new control in the top-left section of the Camera app, which is where you’ll choose the higher resolution. Now, while this may seem appealing, do remember that more megapixels don’t always make for a better image, and you won’t suddenly be taking photos as detailed as those of a full-frame camera. This also comes with a big storage penalty, with these 8,064 x 6,048 photos capable of hitting 9 MB or more.
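To put those per-photo figures in perspective, here’s some rough storage arithmetic based on the approximate file sizes cited above (the photo count is an arbitrary example):

```python
# Rough storage math for the three photo settings discussed above.
sizes_mb = {"12 MP": 2, "24 MP": 3, "48 MP": 9}
photos = 5000  # e.g. a few years of casual shooting
for mode, mb in sizes_mb.items():
    print(f"{mode}: ~{photos * mb / 1024:.1f} GB for {photos:,} photos")
# 12 MP: ~9.8 GB, 24 MP: ~14.6 GB, 48 MP: ~43.9 GB
```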
Customize the main camera lens
The Camera app usually lets you choose between 0.5x, 1x, 2x, and 3x (or 5x) zoom settings, but certain iPhone models offer a couple of extra “lenses” you can toggle and even set as default if you prefer them. This setting isn’t available on all iPhones — it’s only accessible on the Pro and Pro Max versions of the iPhone 15, 16, and 17, as well as the iPhone Air.
From the Camera settings menu, tap Main Camera or Fusion Camera, depending on your model. Here, you’ll find a section called Additional Lenses, where you can toggle two extra “lenses” (which are really zoomed-in versions of the main camera’s image). If you’re using any of the Pro or Pro Max iPhones, you’ll see 1.2x and 1.5x lenses, while the iPhone Air offers 1.1x and 1.4x lenses. Regardless, both pairs equate to the same simulated 28mm and 35mm lenses.
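The reason two different zoom factors land on the same simulated lenses is simple: equivalent focal length scales linearly with the crop factor. A quick sketch, assuming the commonly cited base focal lengths of roughly 24mm for the Pro models and 26mm for the iPhone Air (Apple doesn’t publish this pairing explicitly):

```python
# Equivalent focal length scales linearly with the digital crop factor.
# Base focal lengths below are assumptions, not Apple-published pairings.
for base, crops in {24: (1.2, 1.5), 26: (1.1, 1.4)}.items():
    print(base, [round(base * c) for c in crops])
# 24 [29, 36] and 26 [29, 36] -> both land on the ~28mm and ~35mm simulated lenses
```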
Swapping between these variants of the main 1x lens is easy, and you can just quickly tap in the Camera app to cycle between them. However, if you prefer one of the two closer zooms, you can also make use of the Default Lens section below Additional Lenses. As the name suggests, this section lets you choose the zoom level your Camera app uses by default. Do note, however, that this feature requires you to set the Camera to 24 MP.
Decide whether you see what’s beyond the frame
By default, the iOS 26 Camera app shows you a darkened preview of what’s beyond the current frame on all iPhones since the iPhone 11. This is generally a very useful feature, acting as a reminder of all the extra image content you could capture by swapping to a different lens.
Let’s say, for example, that you’re trying to take a big group photo using the 1x lens but are having trouble squeezing everyone in. You could, of course, move back or try a different angle, but this feature, which Apple calls View Outside the Frame, means that the app will also show you what would be in the image if you swapped to the 0.5x lens instead. Depending on the situation, that may be a much quicker solution than repositioning yourself.
This feature can also be useful when taking more artistic photos, as it lets you orient yourself and remain aware of other ways to compose a photo. That said, if you prefer a distraction-free Camera preview, you can disable this feature by going to the Camera settings menu and toggling View Outside the Frame.
Enable or disable lens correction
Ever since the iPhone 12, the iOS Camera app has had a feature — enabled by default — that helps compensate for some of the optical faults present in the front and ultra-wide rear cameras. Wide lenses, such as the iPhone 17’s ultra-wide camera (which has a 13 mm focal length), are prone to barrel distortion, which can make images look warped and fish-eyed, especially toward the edges of the frame.
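For the curious, barrel distortion is usually modelled as a radial polynomial. The sketch below uses a generic Brown–Conrady-style model to show the kind of warp being corrected; it is purely illustrative, not Apple’s actual pipeline, and the coefficients are made up.

```python
import numpy as np

# Generic radial (Brown-Conrady) barrel-distortion model -- illustrative only.
# A negative k1 bows straight lines outward; correction applies the inverse map.
def distort(x, y, k1=-0.15, k2=0.02):
    """Map ideal normalised image coordinates to distorted ones."""
    r2 = x**2 + y**2
    factor = 1 + k1 * r2 + k2 * r2**2
    return x * factor, y * factor

# A point near the frame edge shifts noticeably; one near the centre barely moves.
print(distort(np.array([0.1, 0.9]), np.array([0.0, 0.0])))
```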
How severe this distortion is depends on the subject matter, but you’ll generally want to keep this feature enabled to nip any issues in the bud — especially if you like to take photos of architecture. However, the feature isn’t always perfect, and iPhone 12 reviews from the likes of The Verge even revealed some situations where the correction made things worse. While Apple has likely improved things over the years, those of you who want to avoid any possible processing issues can disable the lens correction feature by going into the Camera’s settings and toggling Lens Correction.
There are, of course, other reasons to disable this feature, too. For instance, you may want to experiment with your photos, leaning into the ultra-wide lens’ character and working with, not against, the distortion. Conversely, more experienced photographers could conceivably want to perform their own lens correction in a photo-editing program like Adobe Lightroom instead. No matter the reason, though, it’s thankfully a very simple setting to change.
Choose between rapid photos or better processing
Smartphones, including the iPhone, rely heavily on image processing to compensate for the relatively small size of their sensors. Thus, instead of capturing a ton of raw data, they use their beefy CPUs to enhance what the sensors can capture, generating the high-quality photos we’re all familiar with. While the results are generally desirable, this process takes time, which can cause problems in situations where a user needs to take multiple photos in succession.
To that end, iOS 26 enables a feature called Prioritize Faster Shooting out of the box. This reduces the quality of the image processing when you press the shutter repeatedly, ensuring you can snap more photos faster. While disabling this may sound like an easy image quality win, it’s not really that simple: PetaPixel’s 2023 test of the feature found that not only were the quality reductions minimal, the changes in processing only happened once the user tapped the shutter “roughly more than three times in a second.” That is very quick and likely not what most users will be doing, especially since Burst mode lets iPhone owners do just that without any image quality downgrades.
That said, those who want to eliminate the possibility, however remote, of having worse images than they would otherwise get — even at the risk of missing out on rapid-fire photos — can disable this feature. To do so, head into the Camera’s settings menu and scroll down until you find the Prioritize Faster Shooting toggle, and disable it. We think it’s a crucial iPhone Camera setting for better photos, but hey, you do you.
The Samsung Galaxy Z TriFold is, by almost every measure, a phone that shouldn’t exist in the first place, and yet here we are: a massive 10-inch screen, two hinges, and a price tag that might make your wallet cry.
Samsung knew it was a first-generation device, which is why it kept production intentionally limited, a controlled showcase of engineering ambition rather than a full market rollout.
However, “more hits than misses” is not the bar you set for a device that costs almost as much as two or three conventional smartphones. For now, the TriFold is gone, but its successor — the Galaxy Z TriFold 2 — is reportedly on the company’s roadmap, perhaps being sketched, argued over, and stress-tested in a lab.
Samsung Galaxy Z TriFold specs:
Display: 10-inch main (AMOLED, 120Hz) + 6.5-inch cover
Peak Brightness: 1,600 nits (main) / 2,600 nits (cover)
Chipset: Snapdragon 8 Elite for Galaxy
RAM / Storage: 16GB RAM / 512GB or 1TB
Rear Cameras: 200MP wide + 12MP ultrawide + 10MP 3x telephoto
Front Cameras: 10MP (cover) + 10MP (foldable screen)
Battery / Charging: 5,600mAh / 45W wired, 15W wireless
Ingress Protection: IP48
Dimensions: 3.9–4.2mm unfolded / 12.9mm folded / 309g
5 things that the Galaxy Z TriFold 2 desperately needs to fix
When the Galaxy Z TriFold 2 arrives, it needs to arrive differently, not just as a thinner, shinier version of the current-generation foldable, but more as a phone that earns its place in more pockets.
Here is a list of things that need to change in the Galaxy Z TriFold 2, in my frank opinion, as they could seriously make the difference between a phone people admire (from a distance) and one that they actually want to buy.
A thinner, more durable hinge and chassis
The original TriFold’s dual-hinge system was, in my opinion, an engineering marvel, but it was also the most obvious compromise. At 12.9mm thick when folded and weighing 309 grams, the TriFold seemed gargantuan compared to Samsung’s Fold 7. For those catching up, the Fold 7 measures 8.9 mm thick and weighs just 215 grams.
Now, I understand that two hinges will always take up more space than one, which explains the TriFold’s thickness. However, this is where the single-fold Fold 7 feels more like a polished product, and the TriFold doesn’t. The good news is that the company already knows this.
Galaxy Z TriFold’s side profile (John McCann / Digital Trends); Galaxy Z Fold 7’s thickness (Nirave Gondhia / Digital Trends)
Recent rumors suggest that Samsung is developing an “entirely new hinge solution” from the ground up for the TriFold 2, with the objective of making it meaningfully slimmer. Thinness alone, however, is not enough. If the phone wants to be considered as a daily driver, it needs to survive the brutal reality of everyday life.
Dust, drops, the unorganized items inside a bag, and the pressure that tight jeans pockets apply on a phone: the TriFold 2 must be able to survive all of this better than the TriFold, and slimming the hinge shouldn’t come at the cost of structural integrity.
Phone | Type | Unfolded Thickness | Folded Thickness | Weight
Samsung Galaxy Z TriFold | Tri-fold | 3.9–4.2mm | 12.9mm | 309g
Huawei Mate XT Ultimate | Tri-fold | 3.6–4.8mm | 12.8mm | 298g
Samsung Galaxy Z Fold 7 | Dual-fold | 4.2mm | 8.9mm | 215g
Google Pixel 10 Pro Fold | Dual-fold | 5.2mm | 10.8mm | 257g
A better ingress protection rating
The Galaxy Z TriFold shipped with an IP48 rating, the same as the Fold 7, and already better than the Huawei Mate XT (which came with an IPX8 rating without any dust protection).
However, “better than Huawei’s Mate XT” isn’t exactly a glorifying benchmark, especially when the Pixel 10 Pro Fold has become the first foldable to achieve a full IP68 rating, the same as conventional flagships.
For a device positioned as the pinnacle of Samsung’s engineering, an IP48 rating feels underwhelming. The TriFold 2, in my opinion, needs to match IP68 as a baseline, and so does the Fold 7.
Phone | Type | IP Rating | Dust Protection | Water Protection
Samsung Galaxy Z TriFold | Tri-fold | IP48 | Partial (particles over 1mm) | Up to 1.5m for 30 mins
Samsung Galaxy Z Fold 7 | Dual-fold | IP48 | Partial (particles over 1mm) | Up to 1.5m for 30 mins
Huawei Mate XT Ultimate | Tri-fold | IPX8 | None | Up to 1.5m for 30 mins
Google Pixel 10 Pro Fold | Dual-fold | IP68 | Full (dust-tight) | Up to 1.5m for 30 mins
Higher peak brightness for the inner display
Screen real estate is the TriFold’s entire argument. It’s the reason you pay the premium: the idea of fitting a large-screen foldable smartphone in your pocket (and technically, you can). However, to me, it’s genuinely baffling that the phone’s main 10-inch screen peaks at just 1,600 nits, lower than the Galaxy Z Fold 5’s inner screen from 2023.
For context, the Galaxy Z Fold 7’s inner screen hits 2,600 nits, as do the Galaxy S26 Ultra and the TriFold’s own outer screen. And while these might sound like abstract numbers, they matter a great deal when you’re using the smartphone outdoors, under direct sunlight.
It’s the difference between confidently holding the phone in the street on a bright sunny day and ducking into the shade to read a notification and reply. Given the company’s strength in displays, I would really appreciate a brighter panel for everyday use, on par with modern flagships and regular foldables.
Phone | Type | Inner Display Brightness | Cover Display Brightness
Samsung Galaxy Z TriFold | Tri-fold | 1,600 nits | 2,600 nits
Samsung Galaxy Z Fold 7 | Dual-fold | 2,600 nits | 2,600 nits
Google Pixel 10 Pro Fold | Dual-fold | 3,000 nits | 3,000 nits
A more powerful chip for better multitasking
The Galaxy Z TriFold featured the Snapdragon 8 Elite chip, which, at the time, was the most powerful smartphone chip. However, due to thermal constraints, the device ran slower than the other 8 Elite-powered smartphones, such as the S25 Ultra.
While I’m not expecting the TriFold 2 to fix that issue entirely, given that it would also feature a thin chassis with very limited space for a dedicated cooling mechanism, a chipset upgrade could surely improve multitasking, gaming, and overall responsiveness.
This year, the TriFold 2 should feature the Snapdragon 8 Elite Gen 5 chip, the one we’ve seen on the Galaxy S26 Ultra (globally) and the S26 and S26 Plus (in the U.S., China, and Japan). Even with thermal throttling, the chipset could surely unlock a meaningful performance upgrade.
Phone | Type | Chipset | Availability
Samsung Galaxy Z TriFold | Tri-fold | Snapdragon 8 Elite for Galaxy | Global
Samsung Galaxy Z Fold 7 | Dual-fold | Snapdragon 8 Elite for Galaxy | Global
Samsung Galaxy S25 Ultra | Slab | Snapdragon 8 Elite for Galaxy | Global
Samsung Galaxy S26 Ultra | Slab | Snapdragon 8 Elite Gen 5 for Galaxy | Global
The TriFold 2 desperately needs better selfie cameras
The Galaxy Z TriFold’s rear camera setup still holds up well: a 200MP main camera, a 10MP telephoto, and a 12MP ultrawide all let users play with multiple perspectives and zoom levels to get the picture they want without moving around too much. The selfie cameras, however, are a slightly different story.
The TriFold’s selfie camera setup is symmetrical: a 10MP (f/2.2) camera on the cover screen and another 10MP (f/2.2) camera on the main 10-inch foldable screen. In my opinion, that isn’t what buyers expect from one of the most expensive smartphones money can buy.
Selfie quality might not be a dealbreaker for most buyers, and the results are passable, but it’s the software that’s doing the heavy lifting there.
I appreciate the ultrawide field of view from the inner-screen sensor, I really do, as it helps get more people in a selfie, but I sincerely want Samsung to increase the resolution of both sensors. Additionally, the selfie cameras could use slightly larger sensors for better low-light performance.
— Former Microsoft quantum lead Jeff Henshaw has joined IonQ as senior vice president of quantum compute products.
Henshaw said on LinkedIn that he has advised “dozens of quantum companies, from early-stage startups to industry titans,” and cited IonQ’s rapidly scaling quantum systems and a business “roadmap grounded in practical engineering” as draws to the Maryland-based company.
Henshaw spent most of his career at Microsoft, working there from 1989 to 2022 with a two-year break in the mid-2000s to serve as CTO of music tech venture DeepRockDrive. At Microsoft he started on Internet Explorer and Xbox programs before eventually leading the creation of Microsoft’s Quantum Development Kit. Henshaw is also co-owner of the Seattle Seawolves rugby team.
Joe Beda. (LinkedIn Photo)
— Joe Beda is now CTO of Stacklok, a Seattle startup developing AI assistants, agents and models. Beda and Stacklok co-founder and CEO Craig McLuckie previously teamed up to launch Heptio, a cloud tech company acquired by VMware for $550 million in 2018.
Following the acquisition, Beda served as principal engineer at VMware for more than three years and then stepped away from the workforce in 2022.
Earlier in their careers, Beda and McLuckie worked together at Google where they helped create Kubernetes, the open-source container system that simplified how developers deploy software.
Beda shared his thoughts on joining Stacklok in a LinkedIn post, saying he was eager to connect AI “in a safe way, with the rest of our world.”
“It took a lot for me to exit ‘semi-retirement,’” he added. “I was recovering from burnout and learning how to slow down. But this was just too good of an opportunity for me to pass up.”
Harshit Shah. (LinkedIn Photo)
— Harshit Shah is now CTO of LVT (LiveView Technologies). Shah was CTO of both Kyruus Health and mental health startup Spring Health and previously served as head of engineering at Amazon Web Services. He also brings more than a decade of Microsoft experience, where he was a group PM manager for products including Bing Search SaaS services and Microsoft Edge.
“Harshit is already diving deep into our mission, using his expertise in GenAI and machine learning to help LVT build the future of edge intelligence,” the Utah-based physical security company said on LinkedIn.
Kevin Malgesini. (LinkedIn Photo)
— Pacific Science Center named Kevin Malgesini as its next CEO and president. He succeeds Will Daugherty, who is departing after more than a decade leading the Seattle science education nonprofit.
Malgesini joins from Seattle Children’s Theatre, where he has served for more than eight years and is currently managing director. He was previously development director at Town Hall Seattle and led significant fundraising campaigns at both institutions.
“Kevin is the right person to carry our mission of igniting curiosity and fostering a passion for discovery forward,” said Jembaa Mai, chair of the PacSci board of directors. Mai added that the board was “profoundly grateful to Will Daugherty for the extraordinary foundation he has built.”
PacSci has been navigating serious financial challenges, including reduced admissions during Covid and long-deferred upgrades and maintenance of facilities built for the 1962 World’s Fair. Under Daugherty’s leadership, the organization struck a deal to sell some of its real estate and sharpened its focus on hands-on innovation experiences.
Before leading PacSci, Daugherty held executive positions at Amazon, Expedia and AT&T. He was featured as a GeekWire Working Geek in 2019.
The leadership transition will take place June 1.
Stefan Karisch. (LinkedIn Photo)
— Stefan Karisch, who has led Amazon‘s Air Science & Tech organization as executive director for the past five years, is leaving the company at the end of April. In a LinkedIn post, he called the role a “privilege” and praised his colleagues for making “all the difference.”
Karisch joined Amazon from Boeing, serving in the Seattle area as a chief engineer in digital solutions and analysis for the aerospace giant’s global services division. He did not share his next move.
— John Doyle, global CTO for Healthcare & Life Sciences at Microsoft, has joined the board of Helio Genomics. The California-based healthcare company is developing a blood-based tool for early cancer detection.
Kevin Varadian. (LinkedIn Photo)
— Kevin Varadian is now chief revenue officer for avante. The Seattle startup’s software helps companies reduce HR administrative workload and gives employees an AI assistant for benefits guidance.
“Making sense of benefits requires real two-way communication and the ability to handle messy, unstructured benefits data,” Varadian said on LinkedIn. “Until recently, the underlying technology just wasn’t there.”
Varadian, who is based in New York City, was CRO at HiredScore until its 2024 acquisition by Workday, after which he became head of go-to-market for Workday’s HiredScore AI. He has also held leadership roles at LinkedIn, WeWork and CoachHub.
The redesigned Tesla Model Y Juniper builds on the success of the first-generation car, but it’s mostly evolutionary rather than revolutionary. Tesla improved the car’s range and efficiency, and gave the exterior a makeover with new headlight and taillight designs. It’s certainly not a radical departure from the original, but the new taillights were different enough to get one Model Y owner in trouble with law enforcement.
As first reported by Tesla Oracle, one owner said that a police officer pulled them over while they were driving and claimed that their car’s taillights weren’t illuminated. The cop was reportedly confused by the Model Y’s taillight bar, claiming that the light cluster that houses the brake lights should also be illuminated at night. The owner pointed out that the taillight bar was in fact a design feature, and was supposed to serve as the primary taillight, but the officer still wasn’t convinced.
In the end, the driver couldn’t convince the officer that the taillight bar was a factory installation, and they were given a warning to get their car fixed, although they escaped a ticket. Although the officer was ultimately wrong in thinking that the taillight was faulty, it’s not difficult to see where the confusion came from. The taillight bar reflects red light off the surface of the car rather than directly projecting it, which could easily confuse drivers, or cops, about its intended purpose.
Tesla says its taillight meets federal requirements
Despite the potential for confusion, Tesla says that the design meets the necessary federal design requirements. When legendary collector Jay Leno pointed out the unusual design on an episode of Jay Leno’s Garage in 2025, Tesla’s chief designer Franz von Holzhausen said that it was a “first in the industry,” calling the design an “indirect running light.” Leno then questioned the brand’s lead engineer Lars Moravy about the taillight meeting government regulations, and Moravy said that the regulation stipulates “how much lumens come off the surface, but it never defines what kind of surface that has to be.”
Moravy added that the taillight design was so unusual that Tesla had to work with its suppliers to source entirely new machines to construct it, since nothing like it had been built before. It’s certainly innovative, but the question is whether that innovation comes at the cost of safety. While ugly taillights can ruin a car’s design, they still need to fulfill their primary purpose as a safety feature, regardless of styling. If a cop can’t figure out how the Model Y’s taillights are supposed to work, there’s a chance that other drivers who aren’t as familiar with modern car design won’t know what they’re looking at either.
The company reported $13.6 billion in revenue for the quarter, up 7% year over year and well above analyst expectations. Intel also raised its current-quarter revenue guidance to between $13.8 billion and $14.8 billion, exceeding the roughly $13 billion analysts had projected.
Honor has unveiled its mid-range 600 series in Malaysia, and we’re keen to see how the specs measure up to its competitors ahead of its UK launch.
We’ve compared the Honor 600’s specs to the four-star Google Pixel 10a, and highlighted the key differences between the two handsets below.
We’ll be sure to update this versus once we review the Honor 600. In the meantime, visit our list of the best Android phones and best mid-range phones to find your next investment.
Price and Availability
At the time of writing, the Honor 600 and Honor 600 Pro are only available to buy in Malaysia. While they will eventually launch in the UK and Europe, Honor is yet to reveal the RRP for the series.
Having launched earlier this year, the Pixel 10a is available to buy now and has a starting RRP of £499/$499.
Snapdragon 7 Gen 4 vs Tensor G4
Powering the Honor 600 is Qualcomm’s Snapdragon 7 Gen 4 chip, the same processor that’s behind the Nothing Phone 4a Pro. We found that the Phone 4a Pro was able to handle everyday, casual use-cases with enough speed and responsiveness for most users, while less-demanding games can be played reliably too. With this in mind, we’d expect a similar performance with the Honor 600, though we’ll have to wait until we get our hands on the phone to confirm this.
Honor 600. Image Credit (Honor)
However, it’s worth noting that the Snapdragon 7 Gen 4 is a mid-range chip and can’t compete with the likes of the Snapdragon 8 Elite Gen 5. In addition, during our benchmark tests, the Nothing Phone 4a Pro couldn’t quite match the results of the Pixel 10a.
Speaking of which, the Pixel 10a runs on Google’s Tensor G4 chip – the same as the Pixel 9a and the rest of the Pixel 9 series. While it’s a shame Google didn’t fit its budget-friendly handset with the newer Tensor G5 processor, the G4 is still perfectly capable and can handle just about anything you can throw at it with ease.
Google Pixel 10a. Image Credit (Trusted Reviews)
While Tensor G4 doesn’t quite measure up to Qualcomm’s 2025 flagship, Snapdragon 8 Elite, it’s still fast and smooth in everyday use and can handle basic gaming too.
Honor 600’s bezels are narrower
One of our biggest issues with the Pixel 10a’s design is its thick bezels. Sure, they’re slimmer than the ridiculously large ones on the Pixel 9a, but overall the bezels make the handset look more dated than many of the best Android phones.
With this in mind, Honor’s promise that the 600 series boasts the “narrowest black bezel on the market” is all the more impressive. At just 0.98mm, the Honor 600’s bezel is near-on invisible and should help the handset feel more premium as a result.
Honor 600 bezel. Image Credit (Honor)
Honor 600 has a larger battery and supports faster charging
Unsurprisingly, the Honor 600 is equipped with a significantly larger battery and faster charging than the Pixel 10a. While the Pixel 10a’s 5100mAh cell is pretty average – albeit larger than the premium Samsung Galaxy S26 Ultra’s – the Honor 600’s battery is a 7000mAh unit. Even so, it’s worth noting that we found the Pixel 10a’s battery life to be solid; it saw us comfortably through a day’s use before conking out.
Fast charging on Pixel 10a. Image Credit (Trusted Reviews)
Honor promises that the 600 should also offer a full day of battery life, alongside five years of battery health protection.
When it does come time to recharge, the Pixel 10a supports 45W wired and 10W wireless speeds, whereas the Honor 600 boasts support for 80W wired speeds. While Honor has disclosed that the 600 supports 27W reverse charging, its exact wireless speeds are yet to be confirmed.
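For a rough sense of what those wattage figures mean in practice, here’s a back-of-the-envelope charge-time estimate. It assumes a nominal 3.85V cell and that the charger sustains its peak rating, which real phones don’t, so treat the results as idealised lower bounds:

```python
# Idealised charge-time estimate: energy (Wh) divided by charger power (W).
def ideal_charge_minutes(mah, watts, volts=3.85):
    return (mah / 1000 * volts) / watts * 60

print(round(ideal_charge_minutes(7000, 80)))  # Honor 600: ~20 min (idealised)
print(round(ideal_charge_minutes(5100, 45)))  # Pixel 10a: ~26 min (idealised)
```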
Honor 600 has a 200MP main camera
Both handsets are equipped with two rear lenses: a main and an ultrawide. However, the Honor 600 sports a whopping 200MP main lens while the Pixel 10a is fitted with a 48MP main instead.
Although the difference may seem pretty hefty, we should note that the Pixel 10a is a brilliant camera phone, especially when you consider its price tag. We found that pictures are detailed with true-to-life colours, while the lenses can handle even complex lighting conditions with ease.
Captured on Pixel 10a. Image Credit (Trusted Reviews)
In comparison, Honor promises the main lens offers an “industry-leading” low-light performance, true-to-life authentic colour reproduction and AI enhanced night photography too. However, as we’re yet to review the Honor 600, we’ll have to wait and see how its camera fares.
Perhaps one of the key reasons to opt for a Pixel phone is its plethora of AI-powered features. Alongside the likes of Circle to Search, Live Translate and Call Assist, there’s built-in Gemini and Google’s Photo Editing tools too.
While the Honor 600 isn’t quite as equipped, that’s not to say there aren’t AI tools to play around with – including Gemini. In fact, one of Honor’s headline features is AI Image to Video 2.0 which allows users to turn up to three images and prompts into a video.
Early Verdict
It’s difficult to give even an early verdict as we don’t know how much the Honor 600 will cost in the UK. However, with a 200MP main lens, a near-invisible bezel and mighty battery, the Honor 600 is undoubtedly a promising Android phone.
On the other hand, the Pixel 10a is one of the best mid-range phones you can get your hands on, thanks to its solid and reliable camera set-up, all-day battery life and plethora of AI tools.
We’ll be sure to update this versus once we review the Honor 600.
The Shiller CAPE ratio stands at 38-40, the second-highest in 155 years behind only the dot-com peak of 44.19, and S&P 500 top-10 concentration exceeds dot-com levels by nearly 50%. But AI companies are massively profitable unlike their dot-com predecessors, with Nvidia alone earning $120 billion in net income and the tech sector trading at 30x forward earnings versus 50x at the 2000 peak. The resolution depends on whether $660-690 billion in annual hyperscaler capex generates returns that justify the investment, a question that cannot be answered until the infrastructure cycle produces results.
The Shiller cyclically adjusted price-to-earnings ratio for the S&P 500 stands at approximately 38 to 40, depending on the day you check. In 155 years of recorded data, the CAPE has been higher exactly once: March 2000, when it reached 44.19, one month before the Nasdaq began a decline that would erase 78% of its value over the following two and a half years. The ten largest companies in the S&P 500 now account for 36% to 40% of the index’s total market capitalisation, nearly 50% above the dot-com peak concentration of roughly 27%. Deutsche Bank’s latest fund manager survey found that 57% of institutional investors now identify an AI valuation crash as the single greatest risk to markets. Jeremy Grantham, the co-founder of GMO who correctly called the dot-com and housing bubbles, has said there is “slim to none” chance the current AI rally does not end in a bust. These are the numbers that make the comparison to 2000 feel inevitable. They are also, by themselves, incomplete.
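For readers unfamiliar with the metric, the CAPE is simply price divided by a ten-year average of inflation-adjusted earnings. A one-line sketch with illustrative numbers (not actual index data):

```python
# Shiller CAPE: price over the trailing 10-year average of real earnings.
def cape(price, real_earnings_10y):
    return price / (sum(real_earnings_10y) / len(real_earnings_10y))

# Illustrative inputs chosen to land in the 38-40 range discussed above:
print(round(cape(6800, [150, 155, 160, 165, 170, 175, 180, 185, 190, 195]), 1))  # 39.4
```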
The case for alarm
The structural parallels between the current AI equity rally and the dot-com bubble are not superficial. They are mechanical. Market concentration has exceeded dot-com levels by a wide margin. The Nasdaq-100’s performance is dominated by a handful of companies whose valuations are predicated on AI revenue growth that has not yet fully materialised at the scale the market is pricing. Hyperscaler capital expenditure, the combined infrastructure spending of Microsoft, Google, Amazon, and Meta, is approaching $660 billion to $690 billion in 2026, a figure that represents the largest corporate investment programme in history outside of wartime mobilisation. That spending is being funded, in part, by converting human labour into AI infrastructure: Meta and Microsoft collectively cut up to 23,000 jobs while simultaneously committing to record capital expenditure, a direct transfer from payroll to data centre construction.
Bank of America’s Savita Subramanian has set a year-end S&P 500 target of 7,100, with a bear case of 5,500, and expects multiple compression as earnings growth slows in the second half of 2026. The Motley Fool identified four factors it associates with bubble conditions: retail investor euphoria, speculative capital concentration, decoupling of valuations from fundamentals, and a narrative so compelling that scepticism feels intellectually disreputable. All four are present. OpenAI’s $852 billion valuation prices a company that has never earned a profit at roughly double the market capitalisation of Coca-Cola, a company that has earned profits continuously since the 1890s. Accel’s $5 billion AI-focused fund, the largest in venture capital history, exemplifies the capital flooding into AI at the private market level. The public and private markets are reinforcing each other: venture-backed AI companies raise at extraordinary valuations, public AI companies spend at extraordinary rates to stay ahead of them, and the cycle pushes both valuations and capital expenditure higher.
The most important difference between 2000 and 2026 is profitability. At the dot-com peak, the technology companies driving the market were, in aggregate, destroying capital. Cisco traded at 200 times earnings. Pets.com had no earnings. The entire thesis rested on future revenue from an internet economy that, while real, was years from generating the cash flows the market was discounting. In 2026, the companies driving the AI rally are among the most profitable in corporate history. Nvidia reported net income exceeding $120 billion for fiscal 2026. Its forward price-to-earnings ratio is approximately 41, elevated but not in the same postcode as Cisco at 200. The technology sector’s aggregate forward P/E is roughly 30, compared with 50 at the dot-com peak. Apple, Microsoft, Alphabet, Amazon, and Meta generated a combined $350 billion in free cash flow in their most recent fiscal years. These are not speculative enterprises burning venture capital. They are cash-generating machines that have chosen to reinvest at historically unusual rates.
Capital Economics analyst John Higgins has made the most nuanced version of this argument. He distinguishes between a “stock bubble” and a “fundamental bubble.” The stock bubble, in his analysis, may already be deflating: the Nasdaq-100 corrected more than 10% from its February 2026 highs before recovering on trade deal optimism and strong earnings. But the fundamental bubble, the one built on actual earnings growth, is still expanding. Nasdaq-100 earnings grew 19% year over year in the most recent quarter. As long as AI-related revenue continues growing at that pace, the earnings justify elevated multiples. The bubble pops not when P/E ratios are high, but when the “E” stops growing. JPMorgan has suggested the S&P 500 could reach 8,000 if earnings momentum continues. Goldman Sachs sees a multi-year AI “supercycle.” The bull case is not that valuations are reasonable. It is that earnings growth will make today’s prices look reasonable in retrospect, the same argument that was wrong about Cisco in 2000 and right about Amazon.
The capex question
The variable that will determine which analogy holds is capital expenditure returns. Hyperscalers are spending $660 billion to $690 billion this year building AI infrastructure. Meta’s $27 billion deal with Nebius for AI cloud capacity is one transaction among dozens, each individually larger than most companies’ entire capital budgets. The question is not whether this infrastructure will be used. It almost certainly will. The question is whether it will generate returns that justify the investment at the price paid. The fibre-optic cables laid in 1999 carry today’s internet. The companies that laid them went bankrupt. The technology was correct. The financial model was not.
There are structural reasons to believe the AI capex cycle is better supported than the fibre-optic buildout. Cloud computing operates on a consumption model where customers pay for usage, providing revenue visibility that speculative fibre networks lacked. The hyperscalers building the infrastructure are also the primary consumers of it, reducing the demand uncertainty that destroyed independent fibre companies. Oracle’s $553 billion in remaining performance obligations, Microsoft’s Azure backlog, and Amazon’s AWS contract pipeline all represent committed future revenue. But committed revenue is not collected revenue, and the concentration of AI demand in a small number of large model developers and enterprise customers creates fragility. If OpenAI, the anchor tenant of Oracle’s Stargate project, were to experience financial difficulty, the ripple effect through the infrastructure financing chain would be severe. If enterprise AI adoption plateaus at the “copilot” stage without progressing to the autonomous agent workflows that justify the next order of magnitude in compute spending, the return on $660 billion in annual capex would fall below the cost of capital.
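One way to make the return question concrete is to annualise the outlay. The sketch below uses purely assumed inputs, a five-year hardware life and a 10% cost of capital, to show the order of magnitude of incremental annual return the spending has to generate:

```python
# Rough bar for annual hyperscaler capex to clear. All inputs are assumptions
# for illustration: 5-year useful hardware life, 10% cost of capital.
capex = 675e9                     # midpoint of the $660-690B range
life_years = 5
cost_of_capital = 0.10
annual_hurdle = capex / life_years + capex * cost_of_capital
print(f"~${annual_hurdle / 1e9:.0f}B per year")   # ~$202B of incremental return needed
```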
The verdict the market cannot reach
Both sides of the debate are correct, which is what makes the current moment so difficult to navigate. The bears are right that market concentration, CAPE ratios, and speculative euphoria have reached or exceeded dot-com levels. The bulls are right that the underlying companies are profitable in ways their dot-com predecessors were not. The resolution depends on a variable that neither side can observe directly: the long-term return on the hundreds of billions being invested in AI infrastructure this year. If those returns materialise, the current valuations will be seen as fair prices paid early for a genuine technological transformation. If they do not, the CAPE chart will add a second peak to match the one from March 2000, and the comparisons that feel alarmist today will feel prescient.
The Federal Reserve’s benchmark rate sits at 3.50% to 3.75%, providing less of a cushion than the near-zero rates that inflated asset prices between 2020 and 2022 but not the restrictive rates that typically trigger corrections. Section 122 tariffs of 10% to 15% on a range of imports expire on July 24, 2026, and their renewal or escalation will affect corporate earnings forecasts and consumer spending. The trajectory that brought technology markets to this point, a year of accelerating AI investment, record venture funding, and corporate restructuring around artificial intelligence, has created conditions that resemble a late-stage expansion more than an early-stage bubble. Late-stage expansions can last longer than sceptics expect. They also end more abruptly than optimists imagine. The honest answer to whether AI stocks are in a bubble is that the question cannot be answered until the capex cycle produces results, and the capex cycle has barely begun. Grantham is betting it ends badly. Goldman is betting it does not. The market is pricing in both possibilities simultaneously, which is why it has been volatile in both directions, and will remain so until the revenue either arrives or does not.