Tech
Big on screen but light on thrills
Verdict
The Samsung Galaxy A57 5G nails the basics with a slim, premium-feeling design, a bright 6.7-inch AMOLED display and dependable all-round performance for everyday use. Its weak points are easier to forgive at this price, but middling cameras, average battery life and a steep jump for extra storage stop it from being a true standout.
-
Lightweight and thin design for a phone of its size
-
Brilliant, big display that’s great for media
-
IP68 water and dust resistance
-
Fingerprint sensor is slow and unreliable
-
Battery life not as strong as expected
-
Not the smoothest, fastest performance around
-
A bump in storage costs a fortune
Key Features
-
Review Price:
£529
-
Slim, lightweight build
The Galaxy A57 measures just 6.9mm thick and weighs a mere 179g, an impressive combination considering its large screen.
-
Premium-looking screen
The 6.7-inch AMOLED screen looks more premium than ever, with slimmed down, (nearly) symmetrical bezels.
-
Full dust and water resistance
The Galaxy A57 is the first in Samsung’s A-series to offer full IP68 dust and water resistance.
Introduction
For the right person, a mid-range phone can be the perfect balance of features and cost. It’s a delicate balance, because you’ll inevitably lose out on something when you compare it to more expensive phones.
Samsung walks that tightrope every year, focusing on a couple of key parts of the experience while compromising on a few others to bring the price down to a more palatable level.
The Samsung Galaxy A57 is the latest in a long line of mid-market phones, and while not perfect, it hits the mark in a few areas. Let’s get into it.
Design
- Premium metal and glass build stands out
- Lightweight design
- A bit of a fingerprint magnet
There are both good and bad elements in the design department, but for the most part, the A57 does a really good job of disguising the fact that it’s not one of the more expensive phones. There’s no plastic to be found anywhere, with both front and back adorned with Gorilla Glass Victus+.


I really like how Samsung’s played with glass finishes to add some visual contrast. The dark, glossy back plays off nicely against the slightly opaque, frosted finish on the camera island, making it look better than the more expensive Galaxy S series in some ways.


At least, it does until you actually pick it up and use it, because that glossy, dark finish on the back is a proper fingerprint magnet. One touch and smears appear. It’s the reason so many more expensive phones now use a frosted, matte glass finish, which tends to hide fingerprints much better. A case, perhaps, of giving with one hand and taking away with the other.


It’s a similar story with the aluminium frame and the front of the phone. I really like the raised area on the right edge where the volume and side buttons live; it makes that aluminium edge feel less boring somehow. But then there’s a bit of a chin in the bezel, where Samsung hasn’t quite managed to give us a uniform border on all four sides and corners of the display.
Still, it comes across as a well-thought-through and purposeful design. The first thing I noticed when I unboxed it was how thin and lightweight it seemed. For a phone with such a large display, it has a nimbleness that belies the numbers on a spec sheet. It’s even slightly thinner and lighter than the Galaxy S26 Plus, and considerably more so than the Galaxy S26 Ultra.


It’s obviously still a way off being as skinny as the S25 Edge, but when you realise it packs in a battery with the same capacity as the larger and heavier S26 Ultra, it’s impossible not to be impressed.
It shares the same water and dust resistance rating as its more expensive cousins too. So if you happen to like walks in the rain, it should cope just fine.
Display
- 6.7-inch 120Hz AMOLED display
- A fantastic panel for the price
- Optical fingerprint sensor is hit-and-miss
Just like its design and build, the display is a highlight on the A57. Using it to watch movies or game on, I was never left with a sense that I was using an inferior display, even though technically I was.


It doesn’t quite reach the same brightness levels as the S26 series, but it’s still very bright, vivid and colour-rich, making it a joy for streaming video and scrolling social media feeds. The fact that it’s 6.7 inches diagonally helps too. It’s an expansive canvas with few noticeable weaknesses.
Any weaknesses it does have only show in other areas. As an example, it can hit a 120Hz refresh rate, meaning it can ramp up to be super smooth and sharp, even with quick animations. However, because it’s not an LTPO panel, it can’t adapt its refresh rate in fine increments right down to 1Hz like the top-tier phones can.


You might not notice it at all while watching video or even gaming, but you might when moving quickly from a static page to a moving one, like swiping from a browser page back to the Home Screen. That jump from static to moving content often causes the odd dropped frame and stuttering animation. It’s not horrendous, and maybe not even noticeable if you’re not used to the most premium devices on the market.
This lack of an ultra-adaptive refresh rate also affects battery life, but I’ll get into that later on. The short version: the less efficient display means that heavy use drains the battery faster than it would on a more expensive Galaxy S-series phone.


One other weakness in the display has nothing to do with the display itself, but rather with the fingerprint sensor built into it. Unlike the more premium models, it doesn’t have an ultrasonic sensor, but uses an optical one. And it’s not a great optical sensor, in my experience. It takes a comparatively long time to set up, and you have to hold your finger on it for a second or two before it registers. Plus, in my experience, the first attempt fails fairly frequently.
There’s a possibility I’ve just become too accustomed to the high-end ultrasonic scanners on more premium devices, but I’ve also used mid-range devices with optical scanners that aren’t as slow and finicky as this one.


Despite the minor compromises, however, I will say this: if your time is spent mostly on social media, YouTube and video watching or casual games, I think you’ll struggle to find a better display than the A57’s for that. It’s a really wonderful canvas for just about everything.
Software
One UI 8.5 based on Android 16
- Smattering of AI features, but not full Galaxy AI support
- Six years of OS upgrades
There’s not a huge amount to cover on the software side that hasn’t already been addressed in our other recent Samsung reviews. The One UI 8.5 version of Samsung’s Android skin is largely the same as what you’ll find on the Galaxy S26 series.


That includes a smattering of AI features built into apps like the Gallery app for editing photos using voice dictation, or getting Bixby (Samsung’s oft-neglected built-in assistant) to change your phone settings for you.
It does lack some of the features that require more power, though. You’re not going to get DeX, Samsung’s desktop-like interface for external monitors, as an example. The more proactive and pervasive system-wide AI features are also missing. Elements like ‘Now Nudge’ can remind you of upcoming appointments, but they lack the agentic feel of the built-in AI tools.


Cameras
- Three cameras, but includes a junk macro lens
- Solid performance, but can struggle with HDR processing
- No telephoto lens for zoom, but digital zoom is solid
It’s in the camera department that you start to see obvious signs that we’re dealing with a mid-range phone. There are three cameras, as is fairly typical, but one of them is a low-resolution macro camera that’s only really useful for close-up photography. Alongside the main camera, you also get an ultrawide lens.


How well this camera serves you will likely depend on when you usually take photos, and in what conditions. Outside in bright daylight, it does a pretty good job of delivering sharp, bright and vibrant photographs.
There’s a little over-sharpening and contrast boosting in the processing that makes images ‘pop’ on screen. Being critical, it can often appear overexposed, particularly in the brighter parts of the image, but that’s being quite nitpicky.
It struggles at times with scenes where there’s bright backlighting and HDR needs to kick in, often leaving the shadowed foreground object a little too dark. On that note, there are times when shadowed areas in not-so-well-lit indoor scenes, or even grey clouds in the skies, can be a little grainy and noisy.
One of the plus points is that the main sensor is large enough and pixel dense enough that you can punch in to 2x zoom and still get a pretty decent image that doesn’t obviously lack in sharpness.
It makes up for the lack of a dedicated zoom camera slightly, but put it side-by-side with the Galaxy S26 and its 3x zoom camera, and the A57 struggles with anything beyond 2x. Image quality falls away quite rapidly above that mark and, in my testing, it’s really not worth going beyond 5x zoom.
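To put rough numbers on why the 2x punch-in holds up, here’s a back-of-the-envelope sketch that treats digital zoom as a simple centre crop of the 50MP main sensor from the spec sheet. Real phones also bin pixels and upscale, which this deliberately ignores, so it’s an illustration rather than a measurement:

```python
# Illustrative model only: digital zoom as a centre crop of a 50MP sensor.
# Ignores pixel binning and upscaling that real camera pipelines apply.

def cropped_megapixels(sensor_mp: float, zoom: float) -> float:
    """A z-times zoom crops to 1/z of the width and 1/z of the height,
    so the remaining pixel count falls by a factor of z squared."""
    return sensor_mp / (zoom ** 2)

for zoom in (1, 2, 5):
    print(f"{zoom}x zoom: ~{cropped_megapixels(50, zoom):.1f}MP remaining")
# 1x zoom: ~50.0MP remaining
# 2x zoom: ~12.5MP remaining
# 5x zoom: ~2.0MP remaining
```

At 2x you’re still left with roughly 12.5MP, in the same ballpark as a dedicated 12MP sensor, but by 5x only about 2MP remains, which lines up with how quickly the quality falls away.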
Indoors, away from super bright light sources, it does a decent job of capturing colour and detail. You will probably start seeing that aforementioned noise and grain in darker parts of the image, and see the camera struggle a bit with focusing, especially moving objects like Richard Parker – my pet cat.
At night time, launching into the dedicated night mode can result in some bright, in-focus images. The primary camera is definitely stronger than the ultrawide, which can sometimes struggle to contain details in brighter parts of the image. But it’s hard to be too critical. The important thing to note is that regardless of the conditions, it’s possible to get a good enough image that you’d be happy to share on social media.
Of course, it’s not as strong or versatile as phones that cost twice as much, but I suspect anyone buying this will be happy enough with the results.
As a video maker, the lack of 4K recording at 60fps was a tad disappointing. Shooting 4K at 30fps is okay, but I often found the footage a little grainy, lacking in sharpness and smoothness. Particularly when panning across a scene, there was some stuttering and a rolling shutter-like effect. Having to jump down to 1080p to get 60fps means you effectively have the choice between sharp footage or smooth footage; you can’t have both. So I’d say it’s definitely not the phone for wannabe content creators.
Performance
- Mid-range Exynos 1680 power
- Can handle most daily tasks just fine
- Not quite powerful enough for high-res gaming
Performance, like cameras, is another area where you’ll see a difference between these mid-range phones and the top-tier models. But, just like the camera department, whether or not it’s got enough juice depends very much on what you do.
For the most part, the experience of using the A57 is fluid and smooth. As mentioned when I was talking about the display, there’s a little bit of stutter and frame-dropping in the user interface when going between static and moving content on screen, but once it’s going, it’s responsive and quick.


Inside, the phone has the Exynos 1680, which is a very middle-of-the-road type of processor. That said, it’s got enough grunt that it’ll handle most of your casual tasks easily enough.
Casual games aren’t a struggle at all, but I did notice it would often drop the resolution in some games to keep gameplay smooth. Mario Kart Tour, which has long been my go-to game on mobile, didn’t look as sharp as it does on more powerful phones. But, crucially, the gameplay isn’t hampered by frame-dropping at all, and so it feels pretty smooth.
It’s a powerful enough chipset that it can also handle quite a lot of the AI-based tasks on the phone. Using Bixby to call up settings options wasn’t as snappy and instant as the S26, but it wasn’t too much of a hindrance either.


As is always the case, tempering expectations is advised with performance. You’re not going to be able to play the highest fidelity games in their highest settings. If you did, you’d soon find the phone chugging to a halt. But if your game time mostly involves games like Block Blast, Mario Kart or something more casual, the A57 has more than enough grunt for those.
Battery Life
- Same 5000mAh battery as S26 Ultra
- One day for most users, but can squeeze more out
- Full charge in 75 minutes
Tempering expectations is also advised with the battery. Samsung advertises this as a two-day phone, and whether that’s achievable depends very much on your screen time and the type of user you are. Busy power users should expect one full day, not two.


When the screen’s on, even playing the casual games I mentioned earlier, the battery seems to drain a little quicker than it does on the more powerful S26 models, even the smallest one, which has a smaller battery. My suspicion is that because the display can’t drop to 1Hz on static pages, and sits at a minimum of 60Hz all the time, it uses considerably more energy, especially considering how bright it is.
On my lighter days (I’m a pretty light user already) I could make it last two days. But that’s true of most phones these days. It’s very conservative with battery use in standby mode with the always-on display disabled, so if your screen use is 2-3 hours a day and mostly low-intensity tasks, I think two days might just be possible.


Charging speeds when empty are fast enough to be convenient, but not market-leading. A full charge takes about 75 minutes, but you can get 50% topped up in 25 minutes in those times when you’ve run empty and you’re in a rush to get out again. You just need to make sure you have a compatible 45W charger to get those speeds.
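Those two timings already imply a charging taper; a quick bit of arithmetic using only the figures above makes it explicit:

```python
# Charging-rate comparison built purely from the review's timings:
# 0-50% in 25 minutes, 0-100% in 75 minutes.

first_half_rate = 50 / 25          # percent per minute for 0-50%
second_half_rate = 50 / (75 - 25)  # percent per minute for 50-100%

print(f"0-50%:   {first_half_rate:.1f}%/min")   # 2.0%/min
print(f"50-100%: {second_half_rate:.1f}%/min")  # 1.0%/min
```

The second half of the charge arrives at half the rate of the first, the typical taper that eases stress on the battery as it approaches full.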
Should you buy it?
You want a premium-feeling mid-ranger
With its combination of aluminium frame and glass rear, the A57 5G doesn’t feel as cheap as most mid-range phones.
You want great performance
The Exynos chipset inside the A57 is fine for day-to-day tasks, but it can’t compete with the most powerful mid-rangers around.
Final Thoughts
On the whole, Samsung’s Galaxy A57 shares many of the same strengths as previous models. It’s a capable phone with a brilliant display, in a big body that’s remarkably lightweight and thin-feeling.
Any compromises, like imperfect cameras, performance and battery life, are largely expected at this price point. At just over £/$500 for the base model, it’s priced about where you’d expect a phone like this from Samsung to sit.
What’s a little harder to accept is that if you want more storage than the 256GB base model, you’re going to need nearly £/$200 more to get it. And at that price, you can get a much better phone from just about anyone.
How We Test
We test every mobile phone we review thoroughly. We use industry-standard tests to compare features properly and we use the phone as our main device over the review period. We’ll always tell you what we find and we never, ever, accept money to review a product.
- Used as a main phone for over a week
- Thorough camera testing in a variety of conditions
- Tested and benchmarked using respected industry tests and real-world data
FAQs
How long will the Samsung Galaxy A57 5G be supported?
Samsung has committed to six years of OS upgrades and security patches.
Does the Samsung Galaxy A57 5G come with a charger?
No, despite offering 45W fast charge support, you won’t get a charging brick in the box in most regions.
Test Data
| Samsung Galaxy A57 5G | |
|---|---|
| Geekbench 6 single core | 1375 |
| Geekbench 6 multi core | 4503 |
| Geekbench 6 GPU | 6642 |
| Time from 0-100% charge | 75 min |
| Time from 0-50% charge | 25 min |
| 30-min recharge (no charger included) | 59% |
| 15-min recharge (no charger included) | 32% |
| 3D Mark – Wild Life | 1697 |
| 3D Mark – Wild Life Stress Test | 99.6% |
Full Specs
| Samsung Galaxy A57 5G Review | |
|---|---|
| UK RRP | £529 |
| USA RRP | $549 |
| Manufacturer | Samsung |
| Screen Size | 6.7 inches |
| Storage Capacity | 256GB, 512GB |
| Rear Camera | 50MP + 12MP + 5MP |
| Front Camera | 12MP |
| Video Recording | Yes |
| IP rating | IP68 |
| Battery | 5000 mAh |
| Fast Charging | Yes |
| Size (Dimensions) | 76.8 x 6.9 x 161.5 mm |
| Weight | 179 g |
| Operating System | One UI 8.5 (Android 16) |
| Release Date | 2026 |
| First Reviewed Date | 21/04/2026 |
| Resolution | 1080 x 2340 |
| HDR | Yes |
| Refresh Rate | 120 Hz |
| Ports | USB-C |
| Chipset | Samsung Exynos 1680 |
| RAM | 8GB, 12GB |
| Colours | Lilac, Navy, Icyblue and Grey |
| Stated Power | 45 W |
This smart pillow ensures you never sleep through an emergency alarm, or even a phone call
Sleeping through a phone call is annoying. Sleeping through a fire alarm is a whole different level of bad. So this new smart pillow idea feels a lot more useful than gimmicky. Researchers at Nottingham Trent University have developed a smart pillow sleeve designed to help deaf users wake up to important nighttime alerts.
Rather than a full smart pillow, the team developed a smart sleeve that fits over a standard pillow. It slips inside a normal pillowcase and vibrates when connected alarms or calls come through.
What problem does it solve?
The project came out of feedback from members of the Deaf community, who told the researchers that existing under-pillow alert devices are often too bulky and uncomfortable to sleep on. In response, the team built a much thinner electronic textile sleeve with four tiny haptic actuators embedded into a yarn-like structure.

Each actuator measures just 3.4mm by 12.7mm, and the electronics are small enough that users shouldn’t feel them while sleeping, making the safety product comfortable as well as practical.
How it can even save lives
The sleeve connects to a smartphone through a microcontroller, and that setup can then link wirelessly to household alarms. When something goes off, the pillow vibrates intensely enough to wake the user, with distinct patterns used to signal different kinds of alerts. This means a user with a hearing impairment can be alerted of a fire alarm, a burglar alarm, or even an incoming phone call.
That extra layer of information is what makes the feature genuinely thoughtful. The goal is not just to wake someone up, but to tell them why they are being woken in the first place.
The researchers say the yarn used in the sleeve has already passed durability testing, including multiple washing cycles, which suggests they are treating this as a real product concept rather than a lab-only demo. The work was presented at the ACM CHI conference in Barcelona, and the team is now looking for an industrial partner to help bring it to market. Tech Xplore also quotes supervisor Theo Hughes-Riley calling it a significant step toward more inclusive emergency alert systems for deaf and deaf-blind individuals.
Android finally gets a fitting answer to the iPad mini, and it looks stunning
Apple has owned the compact premium tablet segment for years, but there’s a new contender on the market that runs Android and challenges the iPad mini at everything it stands for.
Unveiled alongside the Find X9 Ultra, the Oppo Pad Mini comes with an 8.8-inch 2.5K OLED panel (2520 x 1680 pixels) in a 3:2 aspect ratio. This is the same, near-square aspect ratio that makes the iPad mini ideal for reading, note-taking, consuming content, and other productive workflows.

What makes the Oppo Pad Mini worth comparing to the iPad mini?
The tablet’s bezels are remarkably thin at 2.99 mm, and the screen can achieve up to 1,600 nits of peak brightness with a variable refresh rate between 60 and 144 Hz. There’s an optional matte version of the tablet that mimics a paper-like surface, something that the iPad mini doesn’t offer.
Where Apple puts an A17 Pro inside its mini, the Oppo Pad Mini comes with a Snapdragon 8 Gen 5 (3nm) chipset paired with up to 12GB of LPDDR5X RAM and 512GB of UFS 4.1 storage, which, in my opinion, is a capable combination.
For those wondering, the Snapdragon chip provides better multi-core performance, but its single-core performance matches that of the A17 Pro. In addition, the type of memory and storage should make the Oppo tablet feel more responsive and snappy.

How does it hold up in terms of portability and battery?
At just 5.39mm thick and weighing 279 grams, the Oppo Pad Mini is designed for portability, to the extent that it can fit in larger pockets and small bags. The iPad mini, by comparison, weighs 293 grams and measures 6.3mm.
The 8,000 mAh battery supports 67W wired charging (full charge in around an hour), something that the iPad mini lacks. Pricing starts at CNY 3,199, which is around $470 for the baseline variant with 8GB of RAM and 256GB of storage, rising to around $590 for the variant with 12GB of RAM and 512GB of storage.
Sales of the iPad mini alternative commence on April 24, 2026, but it won’t be available in the United States, at least for now. To me, Oppo’s entry into the premium small-screen tablet segment signals that Android OEMs are taking the category seriously.

For now, the Oppo Pad Mini isn’t a direct competitor to the iPad mini, primarily because it isn’t available in the United States. However, if and when the product arrives in the region, it could easily take up a good chunk of iPad mini’s sales, providing Android users with a top-notch experience in a smaller form factor without paying a hefty price.
Notification bug that let FBI access messages patched with iOS 26.4.2
People being investigated by the FBI deleted Signal, but some messages were still retrievable from the iPhone’s notification database. The latest iOS update patches this vulnerability.

iPhones may be secure, but they aren’t invulnerable to bugs
Users should reasonably expect that deleting an app from their iPhone will remove all associated data. However, a recent case involving the FBI showed that some notification data was being retained by mistake.
The iOS 26.4.2, iPadOS 26.4.2, iOS 18.7.8, and iPadOS 18.7.8 updates released on Wednesday address the notification database issue directly. The notes simply say that “a logging issue was addressed with improved data redaction.”
Tesla Plaid Owner Learns The Hard Way It Can’t Keep Up With A Corvette
Car enthusiasts love comparing vehicle performance, especially when you can see it play out on a drag strip. A YouTube video recently went viral of a very unlikely matchup: a Tesla Model S Plaid versus a Chevrolet Corvette ZR1X. In the video shared by DragTimes, the ZR1X took on three Model S Plaids in the quarter mile at the TX2K event at Texas Motorplex in Ennis, Texas.
The first Tesla Model S Plaid driver wasn’t sure if he’d beat the ZR1X, but he felt it would be really close. However, it was clear from the launch that it wasn’t close at all — the ZR1X left the Plaid far behind. The ZR1X was able to get up to 60 miles per hour in 1.95 seconds, beating the Plaid’s 2.26 seconds. The ZR1X finished the quarter mile in 8.92 seconds, hitting nearly 154 miles per hour. The Plaid finished in 9.65 seconds, with a top speed of 140 mph. It was a similar story with the other two Plaids.
Why is the Corvette ZR1X better than the Model S Plaid on the drag strip?
The Corvette ZR1X and the Model S Plaid that raced that day were both stock, on all-season tires, meaning the quarter-mile race was a true indicator of the vehicles’ performance without enhancements. To be fair to the Model S Plaid, it beat the Corvette ZR1 in a previous video thanks to its incredible speed, which is why Brooks Weisblat brought out the ZR1X, which pairs a twin-turbo 5.5L LT7 V8 engine with a front-axle electric motor for 1,250 horsepower. That’s more than the Plaid’s tri-motor setup, which produces 1,020 hp. The Plaid also weighs 4,802 pounds, about 1,000 more than the ZR1X.
With more horsepower and less weight, it’s no surprise the ZR1X launched faster. The Plaid still impressed given it had 70,000 miles on the odometer and was at 85% battery; EVs tend to slow slightly as their batteries age.
The Tesla Model S Plaid has a top speed of 163 mph without the $20,000 Track Package, while the ZR1X can reach 225 mph. With the ZR1X already ahead, it’s no surprise it stayed far ahead of the Plaid down the track. The Plaid is so fast it was previously banned from NHRA races, yet it was no match for what Chevrolet considers a track-focused “hypercar.”
Anthropic’s Mythos Model Is Being Accessed by Unauthorized Users
Bloomberg reports that a small group of unauthorized users gained access to Anthropic’s restricted Mythos model through a mix of contractor-linked access and online sleuthing. Anthropic says it is investigating and has no evidence the access extended beyond a third-party vendor environment or affected its own systems. From the report: The users relied on a mix of tactics to get into Mythos. These included using access the person had as a worker at a third-party contractor for Anthropic and trying commonly used internet sleuthing tools often employed by cybersecurity researchers, the person said. The users are part of a private Discord channel that focuses on hunting for information about unreleased models, including by using bots to scour for details that Anthropic and others have posted on unsecured websites such as GitHub. […] To access Mythos, the group of users made an educated guess about the model’s online location based on knowledge about the format Anthropic has used for other models, the person said, adding that such details were revealed in a recent data breach from Mercor, an AI training startup that works with a number of top developers.
Crucially, the person also has permission to access Anthropic models and software related to evaluating the technology for the startup. They gained this access from a company for which they have performed contract work evaluating Anthropic’s AI models. Bloomberg is not naming the company for security reasons. The group is interested in playing around with new models, not wreaking havoc with them, the person said. The group has not run cybersecurity-related prompts on the Mythos model, the person said, preferring instead to try tasks like building simple websites in an attempt to avoid detection by Anthropic. The person said the group also has access to a slew of other unreleased Anthropic AI models.
Intel’s upcoming gaming CPU specs have leaked
Pointing squarely at AMD’s Ryzen range, Intel’s next-generation desktop CPU lineup has leaked, with the Nova Lake-S architecture set to arrive with up to 288MB of L3 cache across a range expected to carry the Core Ultra 400 branding.
That cache figure dwarfs the 36MB found in Intel’s current flagship Core Ultra 9 285K, and comfortably exceeds the 96MB and 192MB L3 totals found in AMD’s Ryzen 7 9850X3D and Ryzen 9 9950X3D, respectively.
The leak originates from X user Jaykihn, an established source of CPU specification information, who claims the flagship Nova Lake-S chip will carry 16 P-Cores, 32 E-Cores, and four LPE-Cores alongside that 288MB L3 cache figure, with LPE-Cores representing a new low-power efficiency core tier introduced specifically with this architecture.
That core configuration marks a substantial step up from the Core Ultra 9 285K’s eight P-Cores and 16 E-Cores, with the addition of LPE-Cores extending the architectural complexity beyond what Intel’s current Arrow Lake desktop lineup offers at any price point.
Cache capacity matters in gaming because the processor can access it far faster than system RAM, reducing latency in scenarios where data retrieval speed determines frame-time consistency. That’s why AMD’s X3D chips have maintained a lead in gaming workloads despite competitive core counts from Intel.
Two unnamed chips sitting above the Core Ultra 9 designation in the leaked table carry 52 and 44 total cores respectively, suggesting Intel plans a tiered flagship structure that extends beyond its current naming scheme for the Nova Lake-S generation.
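As a quick sanity check on the leaked figures, the flagship configuration described above sums to exactly one of those two totals:

```python
# Sum the leaked Nova Lake-S flagship core configuration:
# 16 P-Cores + 32 E-Cores + 4 LPE-Cores.
p_cores, e_cores, lpe_cores = 16, 32, 4
total = p_cores + e_cores + lpe_cores
print(total)  # 52
```

That match suggests the 52-core entry in the leaked table is the flagship configuration itself, leaving the 44-core part as a step-down tier.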
Intel has not confirmed any specifications for the Nova Lake-S lineup, though Computex in early June represents a credible window for an official announcement, with AMD also expected to reveal details of its next-generation Zen 6 architecture at the same event.
Mozilla fixes 271 Firefox vulnerabilities found by Anthropic’s Claude Mythos in a single evaluation pass
Summary: Mozilla released Firefox 150 with fixes for 271 security vulnerabilities identified by Anthropic’s Claude Mythos Preview, an unreleased frontier AI model distributed under the restricted Project Glasswing programme. The collaboration began with Claude Opus 4.6 finding 22 bugs in Firefox 148 earlier this year; Mythos produced more than twelve times as many. Firefox CTO Bobby Holley said the defects are “finite” and that defenders can “finally find them all,” while the UK AI Security Institute confirmed Mythos can also execute autonomous multi-stage network attacks, making the dual-use tension the central policy question.
Mozilla released Firefox 150 on Monday with fixes for 271 security vulnerabilities identified by Anthropic’s Claude Mythos Preview, an unreleased frontier AI model restricted to a handful of organisations under Project Glasswing. The number is striking not because the bugs were exotic but because they were not. “We haven’t seen any bugs that couldn’t have been found by an elite human researcher,” Mozilla said in a blog post titled “The zero-days are numbered.” The point is that no human team could have found 271 of them this fast.
The collaboration between Mozilla and Anthropic began earlier this year with a more modest effort. Starting in February, Firefox’s security team used Claude Opus 4.6 to scan nearly 6,000 C++ files across the browser’s codebase. That pass produced 112 unique reports, of which 22 were confirmed as security-sensitive bugs and shipped as fixes in Firefox 148. Fourteen were classified as high severity, representing almost a fifth of all high-severity Firefox vulnerabilities remediated in 2025. The Mythos evaluation, which followed as part of the continued partnership, produced more than twelve times as many confirmed vulnerabilities. Bobby Holley, Firefox’s chief technology officer, described the experience as giving the team “vertigo.”
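The multiplier in that comparison checks out against the article’s own numbers:

```python
# Confirmed vulnerability counts from the two evaluation passes
# described above: Claude Opus 4.6 pass vs Claude Mythos Preview pass.
opus_bugs = 22
mythos_bugs = 271
ratio = mythos_bugs / opus_bugs
print(f"{ratio:.1f}x")  # 12.3x, i.e. "more than twelve times as many"
```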
What Mythos is, and who gets to use it
Claude Mythos Preview is the model at the centre of Anthropic’s restricted Mythos model programme, Project Glasswing, announced on 7 April. It is a general-purpose frontier model, not a security-specific tool, but its coding capabilities have crossed a threshold that Anthropic considers significant enough to warrant controlled distribution. The UK’s AI Security Institute evaluated the model and found it capable of executing multi-stage network attacks autonomously, completing a 32-step corporate network attack simulation called “The Last Ones” in three out of ten attempts. It can chain multiple small vulnerabilities into a single devastating attack, reconstruct source code from deployed software to find exploitable weaknesses, and build custom tools for lateral movement and data extraction once inside a network.
Access is restricted to 12 named launch partners, including Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, Nvidia, and Palo Alto Networks, with roughly 40 additional organisations granted access for defensive security work. Anthropic committed up to $100 million in usage credits and $4 million in direct donations to open-source security organisations, including $2.5 million to Alpha-Omega and OpenSSF through the Linux Foundation and $1.5 million to the Apache Software Foundation. The model is available to Glasswing participants at $25 per million input tokens and $125 per million output tokens through the Claude API, Amazon Bedrock, Google Cloud’s Vertex AI, and Microsoft Foundry.
The restricted rollout has already been tested. On the same day Anthropic announced Glasswing, a group of unauthorised users gained access to Mythos Preview by guessing the model’s URL through a third-party vendor environment, an incident Anthropic said it is investigating.
The defender’s argument
Holley framed the 271 vulnerabilities not as an indictment of Firefox’s code quality but as evidence that the security landscape is shifting in favour of defenders for the first time. “A gap between machine-discoverable and human-discoverable bugs favors the attacker, who can concentrate many months of costly human effort to find a single bug,” he wrote. “Closing this gap erodes the attacker’s long-term advantage by making all discoveries cheap.”
The logic is straightforward. A zero-day vulnerability is valuable to an attacker precisely because it is unknown. If a defender can find and patch the same bug before an attacker discovers it, the bug has no offensive value. The cost asymmetry has historically favoured attackers: a browser like Firefox has millions of lines of code, and a single undiscovered flaw in any of them is enough for exploitation. An elite human security researcher might spend weeks or months finding one such flaw. A model like Mythos can scan the entire codebase in a fraction of that time. Mozilla’s thesis is that this changes the economics permanently. “Software like Firefox is designed in a modular way for humans to be able to reason about its correctness,” the blog post stated. “It is complex, but not arbitrarily complex. The defects are finite, and we are entering a world where we can finally find them all.”
The claim is bold and deliberately so. Mozilla is arguing that the age of zero-day vulnerabilities in well-structured software has an expiration date, not because attackers will stop looking, but because defenders will get there first.
The numbers in context
The 271 figure requires some unpacking. Mozilla’s official security advisory for Firefox 150, MFSA 2026-30, lists 41 CVE entries, three of which are standard memory-safety roll-ups that aggregate multiple individual bugs under a single identifier. The 271 number represents the total count of discrete code defects identified by Mythos during its evaluation, many of which were grouped into those CVE bundles. The distinction matters because the headline number and the formal advisory number measure different things: one counts what the AI found, the other counts what was formally disclosed through the industry’s standard vulnerability disclosure process.
The most dangerous flaws include use-after-free vulnerabilities in the DOM and WebRTC components, the kinds of memory safety bugs that have been the bread and butter of browser exploitation for two decades. These are not novel attack surfaces. They are the same categories of bugs that Google’s Project Zero has been finding across browsers since 2014. Google’s own AI vulnerability research programme, Big Sleep, a collaboration between Project Zero and DeepMind, found a zero-day in SQLite in October 2024 and has since expanded to discover multiple flaws in widely used software. The difference with Mozilla’s effort is scale: 271 bugs in a single evaluation pass, patched before release, across a codebase that has accumulated technical debt over more than two decades.
The dual-use problem
The UK AI Security Institute’s evaluation of Mythos Preview confirmed what the Mozilla results imply from the other direction: the same capabilities that make the model effective at finding vulnerabilities make it effective at exploiting them. The model became the first AI to complete “The Last Ones,” a benchmark designed to simulate a full corporate network compromise. It succeeded in three out of ten attempts, averaging 22 of 32 steps across all runs. Independent testing confirmed that Mythos cannot reliably execute autonomous attacks against organisations with well-hardened defences, but the trajectory is clear. Each generation of frontier model has performed better on offensive security benchmarks than the last.
This is the tension that Project Glasswing is designed to manage. By restricting Mythos to vetted organisations with defensive mandates, Anthropic is attempting to give defenders a structural head start, a window in which the good actors can scan and patch before the capabilities proliferate. The strategy depends on the restriction holding. The vendor breach on launch day suggests that containment is harder than access control. Anthropic has also identified thousands of zero-day vulnerabilities across every major operating system and every major web browser using Mythos, findings it is disclosing to the affected vendors through Glasswing.
Anthropic’s expanding enterprise footprint, from legal contract review in Microsoft Word to cybersecurity through Glasswing, reflects a company that is monetising Claude across every professional vertical where accuracy matters. The Mozilla partnership is the most dramatic demonstration yet, not because the model did something no human could do, but because it did what only a handful of humans can do, and did it 271 times in a single pass.
Holley’s conclusion captures both the promise and the vertigo: “Our work isn’t finished, but we’ve turned the corner and can glimpse a future much better than just keeping up. Defenders finally have a chance to win, decisively.” Whether that future arrives depends on whether the models that find the bugs remain in the hands of the people who fix them, or whether the capabilities leak faster than the patches ship. For now, Firefox 150 has 271 fewer ways to be broken. That is not a small thing. The question is how long that advantage lasts when the tool that found them is commanding extraordinary valuations precisely because of what it can do.
Tech
The 'Missing-Scientist' Story Is Unbelievably Dumb
Longtime Slashdot reader mmarlett writes: The Atlantic has a long article on the story of missing scientists recently featured here on Slashdot. In short, it is an incoherent conspiracy theory that spreads wide and far, not paying any attention to boundaries of time, space, or area of expertise. “Which is all to say that another piece of flagrant nonsense has ascended to the highest levels of U.S. politics and media,” writes the Atlantic’s Daniel Engber. “To call it a conspiracy theory would be far too kind, because no comprehensive theory has been floated to explain the pattern of events. But then, even the phrase pattern of events is imprecise, because there is no pattern here at all. Given all the people who could have been roped into this narrative but weren’t, any hope of finding meaning falls away. Barring any dramatic new disclosures, the mystery of the missing scientists has the dubious honor of being a sham in every way at once.”
Tech
Stop Begging Big Tech To Fix Your Social Media Experience. You Can Do It Yourself.
from the vibe-code-your-social-experience dept
Disclaimer: This post talks about Bluesky and an offering from Bluesky and I am on the Bluesky board. Take everything I say with whatever size grains of salt you feel is appropriate.
I’ve written a few times now about how I think that AI tools, used carefully and thoughtfully, represent our best chance at taking back control over the open web. I know this is not a popular opinion with many Techdirt readers, but I’m hoping some of you will read through this to try to understand and engage with the points I’m making here. I truly do believe that if used well and appropriately, these tools can serve to put power back into the hands of users, rather than giant centralized companies who are more interested in exploiting your attention.
Over the last few weeks I’ve been playing around with an AI-powered tool that Bluesky has released (much to the chagrin of many users) to a relatively small group of early beta testers. I think the negative reaction to the product announcement is understandable, given the general distrust of all AI tools, but it’s really worth examining what this tool is and what it can enable, including really empowering people to take back control over their own social experience. It literally gives you a path to routing around Bluesky’s own design features if you don’t like them.
Yes, a lot of AI is overhyped garbage being shoved at people who don’t want it — but that doesn’t mean the underlying tools can’t be useful when applied carefully by those who choose to use the tools appropriately.
It means not outsourcing your brain to the tool, but rather using it the way any skilled person automates some aspect of work that they do. I’ve sanded and restained the floors of my house, and while I could have done the whole thing by hand with a stack of sandpaper, it was helpful to rent a floor sander from a local hardware store, learn how to use it properly, and then use it so that I could finish the job in a day rather than weeks. I view AI tools the same way. If you learn how to use them properly, as an assistive tool rather than a replacement for your brain, they can help you accomplish useful things.
Let me give an example: a couple of weeks ago, law professor Blake Reid wrote a short thread on Bluesky about how he needed to take a break from social media. He worried it was eating up too much of his time, and that he was better off just stopping cold turkey to avoid getting sucked into unproductive discussions that push him to (as he put it) “get over my skis,” weighing in on conversations where he doesn’t have much expertise (a common thing on social media). It’s a worthwhile thread.
But in that thread he mentioned that he was hopeful that maybe some day technology itself could help him use social media in a healthier way, to dial back how much time he spent on it, and get him focused on the more productive and useful discussions (which he admits also happen regularly on Bluesky).
What was amusing to me was that the only reason I saw that post by Reid was because I’ve been beta testing a new tool that… kinda does that. When he wrote that thread, I was actually on vacation, hiking in the National Parks in Utah, and mostly offline. But in the evenings, I would check in, and rather than sorting through everything that had happened on social media that day, I had a tool just show me the things I’d find useful.
Using an AI tool, I had built an entirely personalized news aggregator, which had access to my Bluesky account, Techdirt’s RSS feed, and the knowledge that I had been out all day and wanted not just a summary of what news might be interesting to me as the editor of Techdirt, but also a sense of what people on Bluesky were saying about it. Here’s a screenshot of what my first attempt at this looks like:

The tool that let me do this is an advanced version of Attie, which I also recognize is extremely controversial among users on Bluesky, many of whom vocally expressed their hatred of the very idea when it was announced last month. But my main interest is in figuring out how to empower users who want to take control over their own social experience, and this seems like a clear example of that. I’ll note that this version of Attie has not yet rolled out to most of the beta testers (I believe some have access to it — but this is one small benefit of being on the board).
Honestly, I think the way Bluesky announced Attie may have done it an injustice, positioning it as a kind of AI-powered feed generator. There are multiple other feed generator tools for Bluesky out there, many of which are really fantastic. For a while now I’ve used both Graze.social and Surf.social to make AI-powered feeds (which never seemed to generate much controversy).
But generating feeds alone isn’t all that interesting. With the more advanced version of Attie, I can take much more control over my entire social experience. That with a single prompt I could build that personalized aggregator (based not just on my own feed, but on Techdirt’s RSS) is something more powerful, right down to the tool knowing to summarize a whole day’s worth of posts, because I’d been offline the entire day and wanted to see at a glance whether there was anything relevant for Techdirt.
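To make the idea concrete: this is not Attie’s implementation (which isn’t public), just a minimal sketch of the kind of digest logic such a prompt generates behind the scenes. Every field name, source label, and keyword below is invented for illustration:

```python
from dataclasses import dataclass

# Hypothetical data model: neither the field names nor the filtering
# logic come from Attie; this is just the shape of the automation.
@dataclass
class Item:
    source: str  # e.g. "techdirt-rss" or "bluesky"
    title: str
    text: str

def build_digest(items: list[Item], interests: list[str]) -> dict[str, list[Item]]:
    """Bucket a day's worth of items under the first interest they mention;
    drop everything that matches no interest at all."""
    digest: dict[str, list[Item]] = {topic: [] for topic in interests}
    for item in items:
        blob = f"{item.title} {item.text}".lower()
        for topic in interests:
            if topic.lower() in blob:
                digest[topic].append(item)
                break  # one bucket per item keeps the summary short
    return {topic: hits for topic, hits in digest.items() if hits}

day = [
    Item("techdirt-rss", "Section 230 ruling", "A new appeals court decision..."),
    Item("bluesky", "thread on open protocols", "Why federation matters..."),
    Item("bluesky", "lunch photo", "Great sandwich today."),
]
print(build_digest(day, ["section 230", "protocols"]))
```

The real tool layers summarization on top, but the skeleton is the same shape: filter a day’s worth of items down to what the user said they care about, and silently drop the rest.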
Rather than just letting a single company (in this case Bluesky) define my entire experience for me, I can vibe-code my social experience. I can tell it not just the types of content I want to see, but how I want to see it. And for what reason. And how much (or how little) content to show me. And with what context around it. It’s all based on what I expressly want. Not what any company thinks I want.
And I keep experimenting with other versions of this as well. In one test, I had it also try to summarize stories and tell me why it thought I’d find them useful for Techdirt:

In this case it not only found a story that is interesting to me, but it suggested multiple sources for me to read about it, even noting (for example) that Professor Eric Goldman’s blog post is “the definitive blog post” for my coverage (it’s not wrong).
I go back to the piece I wrote a little while back about the kind of learned helplessness of social media users. We’ve had two decades of billionaires deciding exactly how they wanted to intermediate your social experience. How your feed looks. What kind of algorithm you’ll see. What sorts of content will be put in your feed. They got to focus on engagement maxxing. You just had to deal with it.
In such a world, the only thing users felt they could do in response was to yell. They could yell at the CEOs of these platforms. Or at the government, telling them to yell at the CEOs of these platforms.
But with an AI tool that explores an open social ecosystem, you don’t need to yell at a CEO or a regulator. You can just tell the tool what you want, what you don’t want, how you want (or don’t want) to see it, and what context would be useful. It puts you in control.
And yes, sometimes it makes mistakes. It can recommend a story I’m not interested in. But, then I can just tell it that such and such story isn’t useful and why… and it will update the system for me.
Once again, I understand that some people hate any and all uses of AI. And I’m not suggesting you have to run out and use the tools yourself. You do you. But showing concrete use cases where these tools actually deliver more user agency — more control over your online environment, rather than deferring to the whims of any particular company — matters.
The larger point here isn’t really about Attie specifically (indeed, anyone could build their own version of this thanks to open protocols). It’s that for two decades, users have been trained to believe their only options are to accept whatever a platform gives them, or yell loudly enough that someone powerful might change it. That’s the learned helplessness I wrote about earlier, and it’s corrosive.
Tools like this — built on open protocols, not locked inside a corporate walled garden — represent a different path. One where you don’t petition a billionaire for a better feed algorithm. You don’t petition the government to try to put time limits on social media. You just build the experience you want. You tell it to make you a better interface that matches what you want. You tell it you don’t want to spend that much time. That’s what “protocols, not platforms” actually looks like in practice, helped along by agentic tools, and it’s why I think this matters well beyond whether any particular AI tool is good or not.
Filed Under: ai, attie, customization, decentralization, vibe coding
Companies: bluesky
Tech
Pichai opens Cloud Next 2026 with $240B backlog, 750M Gemini users, and a plan to turn Search into an agent manager
Summary: Sundar Pichai opened Cloud Next 2026 with Google Cloud at $70 billion in annual revenue, 48% growth, a $240 billion backlog up 55% in a year, and $175-185 billion in planned capital expenditure. The Gemini app has 750 million monthly users, AI Overviews reach two billion, and the Gemini API processed 85 billion requests in January alone. Pichai framed the conference around Search evolving from a retrieval engine into an “agent manager” and announced the Universal Commerce Protocol with Shopify, Target, and Walmart, while positioning Google’s full-stack integration from custom silicon to consumer distribution as the advantage competitors cannot replicate.
Sundar Pichai opened Google Cloud Next 2026 on Tuesday with a set of numbers that reframe the competitive dynamics of enterprise AI. Google Cloud is now generating more than $70 billion in annual revenue, growing at 48% year on year, with a backlog of $240 billion, up 55% from roughly $155 billion a year ago. The number of billion-dollar deals Google Cloud signed in 2025 exceeded the combined total of the three previous years. Existing customers are outpacing their own commitments by 30%, spending faster than they contracted. Google has committed $175 billion to $185 billion in capital expenditure for 2026, nearly doubling the $91.4 billion it spent last year. Pichai described the moment as “a fundamental rewiring of technology and an accelerant of human ingenuity.” The money suggests he may not be exaggerating.
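The derived percentages track the raw figures. A quick sanity pass over the two growth rates, using only numbers quoted in this article:

```python
def pct_change(new: float, old: float) -> float:
    """Percentage growth from old to new."""
    return (new - old) / old * 100

# Backlog: $240B from ~$155B a year ago.
print(f"{pct_change(240, 155):.1f}%")  # 54.8%, matching the reported "up 55%"
# Gemini API requests: 85B in Jan 2026 from 35B in Mar 2025.
print(f"{pct_change(85, 35):.1f}%")    # 142.9%, reported as a 142% increase
```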
The keynote, titled “The Agentic Cloud,” was less a product launch than a thesis statement. Google is positioning itself not as a cloud provider that offers AI but as the operating system for what it calls the agentic enterprise: a model in which AI agents handle routine business operations autonomously, communicate with each other across platforms, and interact with the physical world through commerce, search, and real-time data. The pitch is that Google is the only company that controls every layer of that stack, from the custom silicon that runs inference, to the frontier models that power reasoning, to the cloud platform that hosts the agents, to the productivity suite and search engine through which three billion users interact with them.
The scale of the machine
The Gemini app has reached 750 million monthly active users as of the fourth quarter of 2025, up 100 million from the previous quarter. AI Overviews, Google’s AI-generated search summaries, reach two billion monthly users across more than 200 countries and drive 10% more search queries globally. AI Overviews now trigger on approximately 48% of all tracked queries, up from 31% in February 2025. The Gemini API processed 85 billion requests in January 2026, a 142% increase from 35 billion in March 2025. Eight million paid Gemini Enterprise seats are deployed across 2,800 companies. Thirteen million developers are building with Google’s generative models. Gemini 3 Pro has had, in Pichai’s words, “the fastest adoption of any model in our history.”
These are not cloud metrics. They are platform metrics. Google is arguing that its advantage over AWS, Azure, OpenAI, and Anthropic lies not in any single product but in the fact that it reaches more users, processes more queries, and touches more surfaces than any competitor. Search alone handles more than a billion shopping interactions per day. Workspace has more than three billion users. Android runs on billions of devices. The thesis is that when AI agents become the primary interface for work and commerce, the company with the largest existing surface area wins, because the agents need somewhere to run, something to connect to, and someone to serve.
Search becomes the agent manager
Pichai’s most consequential framing may have come in a podcast appearance earlier this month: “A lot of what are just information-seeking queries will be agentic in Search. You’ll be completing tasks. You’ll have many threads running.” He described Search evolving from a retrieval engine into an “agent manager,” an orchestration layer that dispatches AI agents to complete tasks on a user’s behalf rather than returning a list of links.
The infrastructure for this is already being built. Google announced the Universal Commerce Protocol at NRF in January, an open-source standard for agentic commerce co-developed with Shopify, Etsy, Wayfair, Target, and Walmart. More than 20 partners have endorsed it, including Adyen, American Express, Best Buy, Flipkart, Macy’s, Mastercard, Stripe, The Home Depot, Visa, and Zalando. UCP is built on REST and JSON-RPC transports with the Agent2Agent protocol, Model Context Protocol, and a new Agent Payments Protocol built in. It lets AI agents treat any participating store as a programmable service, with the merchant remaining the merchant of record. Pichai, who described himself as “an indecisive shopper,” said he is “looking forward to the day when agents can help me get from discovery to purchase.”
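The article names UCP’s transports (REST and JSON-RPC) but does not reproduce the spec. As a rough sketch of what “any participating store as a programmable service” could look like at the JSON-RPC layer, with the method name, parameter fields, and values all invented for illustration rather than drawn from UCP itself:

```python
import json
import uuid

def jsonrpc_request(method: str, params: dict) -> dict:
    """Wrap a call in a standard JSON-RPC 2.0 request envelope."""
    return {"jsonrpc": "2.0", "id": str(uuid.uuid4()),
            "method": method, "params": params}

# Everything inside params is guesswork: UCP's real method and field
# names are not described in this article.
order = jsonrpc_request(
    "checkout.create",  # hypothetical method name
    {
        "merchant": "example-store",  # the merchant stays merchant of record
        "items": [{"sku": "ABC-123", "qty": 1}],
        "payment": {"protocol": "agent-payments", "mandate": "user-approved"},
    },
)
print(json.dumps(order, indent=2))
```

Only the envelope (`jsonrpc`, `id`, `method`, `params`) is standard JSON-RPC 2.0; the payload is a placeholder for whatever a real checkout call would carry.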
The implications for the advertising industry are significant. If Search shifts from showing links that users click to dispatching agents that complete purchases, the entire cost-per-click model that funds Google’s advertising business, and by extension the businesses of every company that advertises on Google, changes. Retailers are already deploying AI-powered shopping through Gemini, ChatGPT, and Copilot. The question is whether agentic commerce cannibalises Google’s own advertising revenue or whether Google can capture a larger share of the transaction itself. UCP suggests Google is betting on the latter.
The full-stack argument
The competitive positioning at Cloud Next was unusually direct. Thomas Kurian said competitors are “handing you the pieces, not the platform,” leaving enterprise teams to integrate components themselves. The claim rests on Google’s vertical integration: Ironwood TPUs and the forthcoming eighth-generation split into Broadcom-designed training chips and MediaTek-designed inference chips provide the silicon. Gemini 3 Pro, 3 Flash, and 3.1 Pro provide the models. The Gemini Enterprise Agent Platform, formerly Vertex AI, provides the developer tools and runtime. Workspace Studio provides the no-code agent builder. Search and Android provide the consumer distribution. No other company assembles all of these under one roof.
The argument has a specific target: Microsoft Copilot, which despite being embedded in virtually every Fortune 500 company has struggled with adoption. Only 3.3% of Microsoft 365 users with Copilot access actually pay for it, and its accuracy net promoter score deteriorated to negative 24.1 by September 2025. Google’s eight million paid Gemini Enterprise seats in roughly four months represents a faster trajectory, though from a much smaller base. GitHub has frozen new Copilot sign-ups because agentic coding sessions consume more compute than users pay for, illustrating why owning the silicon layer, as Google does, is not just a technical advantage but an economic one.
The capital question
The $175 billion to $185 billion in planned capital expenditure is the number that makes the rest of the strategy credible or alarming, depending on how the next two years unfold. Roughly 60% goes to servers and 40% to data centres and networking equipment. Combined with Microsoft, Meta, and Amazon, total big tech AI infrastructure spending is approaching $700 billion this year, a figure large enough to reshape energy markets and strain power grids. Pichai acknowledged on the fourth-quarter earnings call that the “top question is definitely around compute capacity and all the constraints, be it power, land, supply chain,” and expects Google to remain supply-constrained through 2026.
The backlog provides the justification. At $240 billion, it represents more than three years of current revenue contracted but not yet delivered. Thirteen product lines each generate more than $1 billion in annual revenue. The ServiceNow deal alone was worth $1.2 billion over five years. If the demand is real, and the backlog suggests it is, then the capital expenditure is not a gamble but an obligation: the cost of building the infrastructure to fulfil commitments already made.
Google Cloud holds roughly 11% of the cloud infrastructure market, behind AWS at 31% and Azure at 25%. The gap has narrowed: Google grew at 48% in the fourth quarter of 2025, the fastest of the three, and achieved sustained profitability for the first time. But the gap remains. What Pichai presented at Cloud Next is not a plan to close that gap through incremental cloud sales. It is a plan to redefine what the cloud is, from a place where companies store data and run workloads to a platform where AI agents perform work, make decisions, complete purchases, and coordinate with each other across organisational boundaries. If that transition happens, the company that built the agents, the models, the chips, the protocols, and the distribution channels stands to capture a share of the value that the current market share numbers do not reflect. That is the bet. Cloud Next 2026 is the moment Google made it explicit.