Bug-fix updates for iOS 26.4.2, iPadOS 26.4.2 are out now

Apple has released updates for iOS 26.4.2 and iPadOS 26.4.2, as well as version 18.7.8 for older devices, providing bug fixes and security updates to all users.

Apple’s new update can be applied to all current-gen iPhones.

Incremental updates for Apple’s operating systems provide some much-needed bug fixes, security updates, and performance improvements between major updates. On Wednesday, Apple issued the second incremental update of version 26.4, bringing iOS and iPadOS up to 26.4.2.
The previous incremental update, for iOS 26.4.1 and iPadOS 26.4.1, landed on April 8.


Android finally gets a fitting answer to the iPad mini, and it looks stunning

Apple has owned the compact premium tablet segment for years, but there's a new contender on the market that runs Android and challenges the iPad mini at everything it stands for.

Unveiled alongside the Find X9 Ultra, the Oppo Pad Mini comes with an 8.8-inch 2.5K OLED panel (2520 x 1680 pixels) in a 3:2 aspect ratio. This is the same near-square aspect ratio that makes the iPad mini ideal for reading, note-taking, consuming content, and other productive workflows.

What makes the Oppo Pad Mini worth comparing to the iPad mini?

The tablet’s bezels are remarkably thin at 2.99 mm, and the screen can achieve up to 1,600 nits of peak brightness with a variable refresh rate between 60 and 144 Hz. There’s an optional matte version of the tablet that mimics a paper-like surface, something that the iPad mini doesn’t offer.

Where Apple puts an A17 Pro inside its mini, the Oppo Pad Mini comes with a Snapdragon 8 Gen 5 (3nm) chipset paired with up to 12GB of LPDDR5X RAM and 512GB of UFS 4.1 storage, which, in my opinion, is a capable combination. 

For those wondering, the Snapdragon chip provides better multi-core performance, but its single-core performance matches that of the A17 Pro. In addition, the type of memory and storage should make the Oppo tablet feel more responsive and snappy. 


How does it hold up in terms of portability and battery?

At just 5.39mm thick and weighing 279 grams, the Oppo Pad Mini is built for portability, to the point that it can fit in larger pockets and small bags. The iPad mini, by comparison, weighs 293 grams and measures 6.3mm thick.

The 8,000 mAh battery supports 67W wired charging (full charge in around an hour), something that the iPad mini lacks. Pricing starts at CNY 3,199, which is around $470 for the baseline variant with 8GB of RAM and 256GB of storage, rising to around $590 for the variant with 12GB of RAM and 512GB of storage. 

While sales of the iPad mini alternative commence on April 24, 2026, the tablet won't be available in the United States, at least for now. To me, Oppo's entry into the premium small-screen tablet segment signals that Android OEMs are taking the category seriously.

For now, the Oppo Pad Mini isn't a direct competitor to the iPad mini, primarily because it isn't available in the United States. However, if and when the product arrives in the region, it could easily take a good chunk of the iPad mini's sales, giving Android users a top-notch experience in a smaller form factor without a hefty price.


Notification bug that let FBI access messages patched with iOS 26.4.2

People being investigated by the FBI deleted Signal, but some messages were still retrievable from the iPhone’s notification database. The latest iOS update patches this vulnerability.

iPhones may be secure, but they aren’t invulnerable to bugs

Users should reasonably expect that deleting an app from their iPhone will remove all associated data. However, a recent case involving the FBI showed that some notification data was being retained by mistake.
The iOS 26.4.2, iPadOS 26.4.2, iOS 18.7.8, and iPadOS 18.7.8 updates released on Wednesday address the notification database issue directly. The notes simply say that “a logging issue was addressed with improved data redaction.”


Tesla Plaid Owner Learns The Hard Way It Can’t Keep Up With A Corvette


Car enthusiasts love comparing vehicle performance, especially when you can see it play out on a drag strip. A YouTube video recently went viral of a very unlikely matchup: a Tesla Model S Plaid versus a Chevrolet Corvette ZR1X. In the video shared by DragTimes, the ZR1X took on three Model S Plaids in the quarter mile at the TX2K event at Texas Motorplex in Ennis, Texas. 

The first Tesla Model S Plaid driver wasn’t sure if he’d beat the ZR1X, but he felt it would be really close. However, it was clear from the launch that it wasn’t close at all — the ZR1X left the Plaid far behind. The ZR1X was able to get up to 60 miles per hour in 1.95 seconds, beating the Plaid’s 2.26 seconds. The ZR1X finished the quarter mile in 8.92 seconds, hitting nearly 154 miles per hour. The Plaid finished in 9.65 seconds, with a top speed of 140 mph. It was a similar story with the other two Plaids. 


Why is the Corvette ZR1X better than the Model S Plaid on the drag strip?




The Corvette ZR1X and the Model S Plaids that raced that day were all stock on all-season tires, making the quarter mile a fair indicator of each vehicle's performance without enhancements. To be fair to the Model S Plaid, it had beaten the Corvette ZR1 in a previous video thanks to its incredible speed, which is why Brooks Weisblat brought out the ZR1X, which pairs the twin-turbo 5.5L LT7 V8 engine with a front-axle electric motor for 1,250 horsepower. That's more than the Plaid's tri-motor setup, which produces 1,020 hp, and the Plaid is also heavier at 4,802 pounds, about 1,000 pounds more than the ZR1X.

With more horsepower and less weight, it's no surprise that the ZR1X launched harder. The Plaid still impressed, considering it had 70,000 miles on it and was at 85% battery; EV performance degrades slightly over time.

The Tesla Model S Plaid has a top speed of 163 mph without the $20,000 Track Package, while the ZR1X can reach 225 mph. With the ZR1X already ahead off the line, it's no surprise that it stayed far ahead of the Plaid as they raced down the track. The Plaid is so fast that it was previously banned from NHRA races, but it was no match for what Corvette considers a track-focused "hypercar."


Anthropic’s Mythos Model Is Being Accessed by Unauthorized Users

Bloomberg reports that a small group of unauthorized users gained access to Anthropic’s restricted Mythos model through a mix of contractor-linked access and online sleuthing. Anthropic says it is investigating and has no evidence the access extended beyond a third-party vendor environment or affected its own systems. From the report: The users relied on a mix of tactics to get into Mythos. These included using access the person had as a worker at a third-party contractor for Anthropic and trying commonly used internet sleuthing tools often employed by cybersecurity researchers, the person said. The users are part of a private Discord channel that focuses on hunting for information about unreleased models, including by using bots to scour for details that Anthropic and others have posted on unsecured websites such as GitHub. […] To access Mythos, the group of users made an educated guess about the model’s online location based on knowledge about the format Anthropic has used for other models, the person said, adding that such details were revealed in a recent data breach from Mercor, an AI training startup that works with a number of top developers.

Crucially, the person also has permission to access Anthropic models and software related to evaluating the technology for the startup. They gained this access from a company for which they have performed contract work evaluating Anthropic’s AI models. Bloomberg is not naming the company for security reasons. The group is interested in playing around with new models, not wreaking havoc with them, the person said. The group has not run cybersecurity-related prompts on the Mythos model, the person said, preferring instead to try tasks like building simple websites in an attempt to avoid detection by Anthropic. The person said the group also has access to a slew of other unreleased Anthropic AI models.


Intel’s upcoming gaming CPU specs have leaked

Intel's next-generation desktop CPU lineup has leaked, and it points squarely at AMD's Ryzen range: the Nova Lake-S architecture is set to arrive with up to 288MB of L3 cache across a range expected to carry the Core Ultra 400 branding.

That cache figure dwarfs the 36MB found in Intel’s current flagship Core Ultra 9 285K, and comfortably exceeds the 96MB and 192MB L3 totals found in AMD’s Ryzen 7 9850X3D and Ryzen 9 9950X3D, respectively.

The leak originates from X user Jaykihn, an established source of CPU specification information, who confirmed that the flagship Nova Lake-S chip will carry 16 P-Cores, 32 E-Cores, and four LPE-Cores alongside the 288MB L3 cache figure, with LPE-Cores representing a new low-power efficiency core tier introduced specifically with this architecture.

That core configuration marks a substantial step up from the Core Ultra 9 285K’s eight P-Cores and 16 E-Cores, with the addition of LPE-Cores extending the architectural complexity beyond what Intel’s current Arrow Lake desktop lineup offers at any price point.


Cache capacity matters in gaming because the processor can access it far faster than system RAM, reducing latency in scenarios where data retrieval speed determines frame-time consistency. That is why AMD's X3D chips have maintained a performance lead in gaming workloads despite competitive core counts from Intel.


Two unnamed chips sitting above the Core Ultra 9 designation in the leaked table carry 52 and 44 total cores respectively, suggesting Intel plans a tiered flagship structure that extends beyond its current naming scheme for the Nova Lake-S generation.

Intel has not confirmed any specifications for the Nova Lake-S lineup, though Computex in early June represents a credible window for an official announcement, with AMD also expected to reveal details of its next-generation Zen 6 architecture at the same event.


Mozilla fixes 271 Firefox vulnerabilities found by Anthropic’s Claude Mythos in a single evaluation pass

Summary: Mozilla released Firefox 150 with fixes for 271 security vulnerabilities identified by Anthropic’s Claude Mythos Preview, an unreleased frontier AI model distributed under the restricted Project Glasswing programme. The collaboration began with Claude Opus 4.6 finding 22 bugs in Firefox 148 earlier this year; Mythos produced more than twelve times as many. Firefox CTO Bobby Holley said the defects are “finite” and that defenders can “finally find them all,” while the UK AI Security Institute confirmed Mythos can also execute autonomous multi-stage network attacks, making the dual-use tension the central policy question.

Mozilla released Firefox 150 on Monday with fixes for 271 security vulnerabilities identified by Anthropic’s Claude Mythos Preview, an unreleased frontier AI model restricted to a handful of organisations under Project Glasswing. The number is striking not because the bugs were exotic but because they were not. “We haven’t seen any bugs that couldn’t have been found by an elite human researcher,” Mozilla said in a blog post titled “The zero-days are numbered.” The point is that no human team could have found 271 of them this fast.

The collaboration between Mozilla and Anthropic began earlier this year with a more modest effort. Starting in February, Firefox’s security team used Claude Opus 4.6 to scan nearly 6,000 C++ files across the browser’s codebase. That pass produced 112 unique reports, of which 22 were confirmed as security-sensitive bugs and shipped as fixes in Firefox 148. Fourteen were classified as high severity, representing almost a fifth of all high-severity Firefox vulnerabilities remediated in 2025. The Mythos evaluation, which followed as part of the continued partnership, produced more than twelve times as many confirmed vulnerabilities. Bobby Holley, Firefox’s chief technology officer, described the experience as giving the team “vertigo.”

What Mythos is, and who gets to use it

Claude Mythos Preview is the model at the centre of Anthropic’s restricted Mythos model programme, Project Glasswing, announced on 7 April. It is a general-purpose frontier model, not a security-specific tool, but its coding capabilities have crossed a threshold that Anthropic considers significant enough to warrant controlled distribution. The UK’s AI Security Institute evaluated the model and found it capable of executing multi-stage network attacks autonomously, completing a 32-step corporate network attack simulation called “The Last Ones” in three out of ten attempts. It can chain multiple small vulnerabilities into a single devastating attack, reconstruct source code from deployed software to find exploitable weaknesses, and build custom tools for lateral movement and data extraction once inside a network.


Access is restricted to 12 named launch partners, including Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, Nvidia, and Palo Alto Networks, with roughly 40 additional organisations granted access for defensive security work. Anthropic committed up to $100 million in usage credits and $4 million in direct donations to open-source security organisations, including $2.5 million to Alpha-Omega and OpenSSF through the Linux Foundation and $1.5 million to the Apache Software Foundation. The model is available to Glasswing participants at $25 per million input tokens and $125 per million output tokens through the Claude API, Amazon Bedrock, Google Cloud’s Vertex AI, and Microsoft Foundry.


The restricted rollout has already been tested. On the same day Anthropic announced Glasswing, a group of unauthorised users gained access to Mythos Preview by guessing the model’s URL through a third-party vendor environment, an incident Anthropic said it is investigating.

The defender’s argument

Holley framed the 271 vulnerabilities not as an indictment of Firefox’s code quality but as evidence that the security landscape is shifting in favour of defenders for the first time. “A gap between machine-discoverable and human-discoverable bugs favors the attacker, who can concentrate many months of costly human effort to find a single bug,” he wrote. “Closing this gap erodes the attacker’s long-term advantage by making all discoveries cheap.”

The logic is straightforward. A zero-day vulnerability is valuable to an attacker precisely because it is unknown. If a defender can find and patch the same bug before an attacker discovers it, the bug has no offensive value. The cost asymmetry has historically favoured attackers: a browser like Firefox has millions of lines of code, and a single undiscovered flaw in any of them is enough for exploitation. An elite human security researcher might spend weeks or months finding one such flaw. A model like Mythos can scan the entire codebase in a fraction of that time. Mozilla’s thesis is that this changes the economics permanently. “Software like Firefox is designed in a modular way for humans to be able to reason about its correctness,” the blog post stated. “It is complex, but not arbitrarily complex. The defects are finite, and we are entering a world where we can finally find them all.”

The claim is bold and deliberately so. Mozilla is arguing that the age of zero-day vulnerabilities in well-structured software has an expiration date, not because attackers will stop looking, but because defenders will get there first.


The numbers in context

The 271 figure requires some unpacking. Mozilla's official security advisory for Firefox 150, MFSA 2026-30, lists 41 CVE entries, three of which are standard memory-safety roll-ups that aggregate multiple individual bugs under a single identifier. The 271 number represents the total count of discrete code defects identified by Mythos during its evaluation, many of which were grouped into those CVE bundles. The distinction matters because the headline number and the formal advisory number measure different things: one counts what the AI found, the other counts how those findings were disclosed through the industry's standard vulnerability disclosure process.

The most dangerous flaws include use-after-free vulnerabilities in the DOM and WebRTC components, the kinds of memory safety bugs that have been the bread and butter of browser exploitation for two decades. These are not novel attack surfaces. They are the same categories of bugs that Google’s Project Zero has been finding across browsers since 2014. Google’s own AI vulnerability research programme, Big Sleep, a collaboration between Project Zero and DeepMind, found a zero-day in SQLite in October 2024 and has since expanded to discover multiple flaws in widely used software. The difference with Mozilla’s effort is scale: 271 bugs in a single evaluation pass, patched before release, across a codebase that has accumulated technical debt over more than two decades.

The dual-use problem

The UK AI Security Institute’s evaluation of Mythos Preview confirmed what the Mozilla results imply from the other direction: the same capabilities that make the model effective at finding vulnerabilities make it effective at exploiting them. The model became the first AI to complete “The Last Ones,” a benchmark designed to simulate a full corporate network compromise. It succeeded in three out of ten attempts, averaging 22 of 32 steps across all runs. Independent testing confirmed that Mythos cannot reliably execute autonomous attacks against organisations with well-hardened defences, but the trajectory is clear. Each generation of frontier model has performed better on offensive security benchmarks than the last.

This is the tension that Project Glasswing is designed to manage. By restricting Mythos to vetted organisations with defensive mandates, Anthropic is attempting to give defenders a structural head start, a window in which the good actors can scan and patch before the capabilities proliferate. The strategy depends on the restriction holding. The vendor breach on launch day suggests that containment is harder than access control. Anthropic has also identified thousands of zero-day vulnerabilities across every major operating system and every major web browser using Mythos, findings it is disclosing to the affected vendors through Glasswing.


Anthropic’s expanding enterprise footprint, from legal contract review in Microsoft Word to cybersecurity through Glasswing, reflects a company that is monetising Claude across every professional vertical where accuracy matters. The Mozilla partnership is the most dramatic demonstration yet, not because the model did something no human could do, but because it did what only a handful of humans can do, and did it 271 times in a single pass.

Holley’s conclusion captures both the promise and the vertigo: “Our work isn’t finished, but we’ve turned the corner and can glimpse a future much better than just keeping up. Defenders finally have a chance to win, decisively.” Whether that future arrives depends on whether the models that find the bugs remain in the hands of the people who fix them, or whether the capabilities leak faster than the patches ship. For now, Firefox 150 has 271 fewer ways to be broken. That is not a small thing. The question is how long that advantage lasts when the tool that found them is commanding extraordinary valuations precisely because of what it can do.


The 'Missing-Scientist' Story Is Unbelievably Dumb

Longtime Slashdot reader mmarlett writes: The Atlantic has a long article on the story of missing scientists recently featured here on Slashdot. In short, it is an incoherent conspiracy theory that spreads wide and far, not paying any attention to boundaries of time, space, or area of expertise. “Which is all to say that another piece of flagrant nonsense has ascended to the highest levels of U.S. politics and media,” writes the Atlantic’s Daniel Engber. “To call it a conspiracy theory would be far too kind, because no comprehensive theory has been floated to explain the pattern of events. But then, even the phrase pattern of events is imprecise, because there is no pattern here at all. Given all the people who could have been roped into this narrative but weren’t, any hope of finding meaning falls away. Barring any dramatic new disclosures, the mystery of the missing scientists has the dubious honor of being a sham in every way at once.”



Stop Begging Big Tech To Fix Your Social Media Experience. You Can Do It Yourself.

from the vibe-code-your-social-experience dept

Disclaimer: This post talks about Bluesky and an offering from Bluesky and I am on the Bluesky board. Take everything I say with whatever size grains of salt you feel is appropriate.

I’ve written a few times now about how I think that AI tools, used carefully and thoughtfully, represent our best chance at taking back control over the open web. I know this is not a popular opinion with many Techdirt readers, but I’m hoping some of you will read through this to try to understand and engage with the points I’m making here. I truly do believe that if used well and appropriately, these tools can serve to put power back into the hands of users, rather than giant centralized companies who are more interested in exploiting your attention.

Over the last few weeks I've been playing around with an AI-powered tool that Bluesky has released (much to the chagrin of many users) to a relatively small group of early beta testers. I think the negative reaction to the product announcement is understandable, given the general distrust of all AI tools, but it's worth examining what this tool is and what it can enable, including genuinely empowering people to take back control over their own social experience. It literally gives you a path to route around Bluesky's own design features if you don't like them.

Yes, a lot of AI is overhyped garbage being shoved at people who don’t want it — but that doesn’t mean the underlying tools can’t be useful when applied carefully by those who choose to use the tools appropriately.


It means not outsourcing your brain to the tool, but rather using it the way any skilled person automates some aspect of work that they do. I’ve sanded and restained the floors of my house, and while I could have done the whole thing by hand with a stack of sandpaper, it was helpful to rent a floor sander from a local hardware store, learn how to use it properly, and then use it so that I could finish the job in a day rather than weeks. I view AI tools the same way. If you learn how to use them properly, as an assistive tool rather than a replacement for your brain, they can help you accomplish useful things.

Let me give an example: a couple of weeks ago, law professor Blake Reid wrote a short thread on Bluesky about how he needed to take a break from social media, because he worried that it was eating up too much of his time and he was better off just stopping cold turkey, to avoid getting sucked into unproductive discussions that push him to (as he put it) “get over my skis” in engaging in conversations where he’s tempted to weigh in despite not having much expertise (a common thing on social media). It’s a worthwhile thread.

But in that thread he mentioned that he was hopeful that maybe some day technology itself could help him use social media in a healthier way, to dial back how much time he spent on it, and get him focused on the more productive and useful discussions (which he admits also happen regularly on Bluesky).

What was amusing to me was that the only reason I saw that post by Reid was because I’ve been beta testing a new tool that… kinda does that. When he wrote that thread, I was actually on vacation, hiking in the National Parks in Utah, and mostly offline. But in the evenings, I would check in, and rather than sorting through everything I missed on social media that day, I had a tool just show me things that I would find useful that I might have missed.


More specifically, using an AI tool, I had built an entirely personalized news aggregator with access to my Bluesky account, Techdirt's RSS feed, and the knowledge that I had been out all day and wanted not just a summary of the news that might interest me as the editor of Techdirt, but also what people on Bluesky were saying about it. Here's a screenshot of what my first attempt at this looks like:

The tool that let me do this is an advanced version of Attie, which I also recognize is extremely controversial among users on Bluesky, many of whom vocally expressed their hatred of the very idea when it was announced last month. But my main interest is in figuring out how to empower users who want to take control over their own social experience, and this seems like a clear example of that. I'll note that this version of Attie has not yet rolled out to most of the beta testers (I believe some have access to it; this is one small benefit of being on the board).

Honestly, I think the way Bluesky announced Attie may have done it an injustice, positioning it as a kind of AI-powered feed generator. There are multiple other feed generator tools for Bluesky out there, many of which are really fantastic. For a while now I’ve used both Graze.social and Surf.social to make AI-powered feeds (which never seemed to generate much controversy).

But generating feeds alone isn't all that interesting. With the more advanced version of Attie, I can take much more control over my entire social experience. The fact that, with a single prompt, I could build that personalized aggregator (based not just on my own feed, but Techdirt's RSS) is something more powerful, right down to the tool knowing to summarize a whole day's worth of posts because I'm trying to see at a glance whether there's anything relevant for Techdirt after being offline the entire day.
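For readers curious about the plumbing, here is a minimal sketch (an assumption about the ingredients, not a description of how Attie works) of what that paragraph implies: pull a day's worth of Techdirt RSS items, pull recent public Bluesky posts, and hand both to a summarization step. The feedparser library and Bluesky's public getAuthorFeed endpoint are real, but the handle, the response handling, and the placeholder summarize() function are illustrative assumptions.

```python
import time
import feedparser   # pip install feedparser
import requests

# A rough, hypothetical sketch of a "catch me up on today" aggregator.
# It is NOT Attie; it only illustrates the ingredients described above.

def techdirt_today():
    """Titles and links from Techdirt's RSS feed published in the last 24 hours."""
    feed = feedparser.parse("https://www.techdirt.com/feed/")
    cutoff = time.time() - 24 * 3600
    return [f"{e.title} ({e.link})"
            for e in feed.entries
            if getattr(e, "published_parsed", None)
            and time.mktime(e.published_parsed) > cutoff]

def bluesky_posts(actor: str):
    """Recent post text from a public Bluesky account via the public AppView API."""
    resp = requests.get(
        "https://public.api.bsky.app/xrpc/app.bsky.feed.getAuthorFeed",
        params={"actor": actor, "limit": 50}, timeout=10)
    resp.raise_for_status()
    return [item["post"]["record"]["text"] for item in resp.json()["feed"]]

def summarize(items):
    """Placeholder: this is where the LLM call that writes the digest would go."""
    return "\n".join(f"- {text[:120]}" for text in items[:10])

if __name__ == "__main__":
    # "some.handle.example" is a placeholder; use whichever account you follow.
    print(summarize(techdirt_today() + bluesky_posts("some.handle.example")))
```

The interesting part of a tool like Attie is that a prompt, rather than hand-written code, decides which sources to combine and how to present them; the sketch above is just the fixed-function version of that idea.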

Rather than just letting a single company (in this case Bluesky) define my entire experience for me, I can vibe-code my social experience. I can tell it not just the types of content I want to see, but how I want to see it. And for what reason. And how much (or how little) content to show me. And with what context around it. It’s all based on what I expressly want. Not what any company thinks I want.


And I keep experimenting with other versions of this as well. In one test, I had it also try to summarize stories and tell me why it thought I’d find them useful for Techdirt:

In this case it not only found a story that is interesting to me, but it suggested multiple sources for me to read about it, even noting (for example) that Professor Eric Goldman’s blog post is “the definitive blog post” for my coverage (it’s not wrong).

I go back to the piece I wrote a little while back about the kind of learned helplessness of social media users. We’ve had two decades of billionaires deciding exactly how they wanted to intermediate your social experience. How your feed looks. What kind of algorithm you’ll see. What sorts of content will be put in your feed. They got to focus on engagement maxxing. You just had to deal with it.

In such a world, the only thing users felt they could do in response was to yell. They could yell at the CEOs of these platforms. Or at the government, telling them to yell at the CEOs of these platforms.

But with an AI tool that explores an open social ecosystem, you don’t need to yell at a CEO or a regulator. You can just tell the tool what you want, what you don’t want, how you want (or don’t want) to see it, and what context would be useful. It puts you in control.


And yes, sometimes it makes mistakes. It can recommend a story I'm not interested in. But then I can just tell it that such-and-such story isn't useful and why… and it will update the system for me.

Once again, I understand that some people hate any and all uses of AI. And I’m not suggesting you have to run out and use the tools yourself. You do you. But showing concrete use cases where these tools actually deliver more user agency — more control over your online environment, rather than deferring to the whims of any particular company — matters.

The larger point here isn’t really about Attie specifically (indeed, anyone could build their own version of this thanks to open protocols). It’s that for two decades, users have been trained to believe their only options are to accept whatever a platform gives them, or yell loudly enough that someone powerful might change it. That’s the learned helplessness I wrote about earlier, and it’s corrosive.

Tools like this — built on open protocols, not locked inside a corporate walled garden — represent a different path. One where you don’t petition a billionaire for a better feed algorithm. You don’t petition the government to try to put time limits on social media. You just build the experience you want. You tell it to make you a better interface that matches what you want. You tell it you don’t want to spend that much time. That’s what “protocols, not platforms” actually looks like in practice, helped along by agentic tools, and it’s why I think this matters well beyond whether any particular AI tool is good or not.


Filed Under: ai, attie, customization, decentralization, vibe coding

Companies: bluesky


Pichai opens Cloud Next 2026 with $240B backlog, 750M Gemini users, and a plan to turn Search into an agent manager

Summary: Sundar Pichai opened Cloud Next 2026 with Google Cloud at $70 billion in annual revenue, 48% growth, a $240 billion backlog that doubled in a year, and $175-185 billion in planned capital expenditure. The Gemini app has 750 million monthly users, AI Overviews reach two billion, and the Gemini API processed 85 billion requests in January alone. Pichai framed the conference around Search evolving from a retrieval engine into an “agent manager” and announced the Universal Commerce Protocol with Shopify, Target, and Walmart, while positioning Google’s full-stack integration from custom silicon to consumer distribution as the advantage competitors cannot replicate.

Sundar Pichai opened Google Cloud Next 2026 on Tuesday with a set of numbers that reframe the competitive dynamics of enterprise AI. Google Cloud is now generating more than $70 billion in annual revenue, growing at 48% year on year, with a backlog of $240 billion, up 55% and more than double the roughly $155 billion of a year ago. The number of billion-dollar deals Google Cloud signed in 2025 exceeded the combined total of the three previous years. Existing customers are outpacing their own commitments by 30%, spending faster than they contracted. Google has committed $175 billion to $185 billion in capital expenditure for 2026, nearly doubling the $91.4 billion it spent last year. Pichai described the moment as “a fundamental rewiring of technology and an accelerant of human ingenuity.” The money suggests he may not be exaggerating.

The keynote, titled “The Agentic Cloud,” was less a product launch than a thesis statement. Google is positioning itself not as a cloud provider that offers AI but as the operating system for what it calls the agentic enterprise: a model in which AI agents handle routine business operations autonomously, communicate with each other across platforms, and interact with the physical world through commerce, search, and real-time data. The pitch is that Google is the only company that controls every layer of that stack, from the custom silicon that runs inference, to the frontier models that power reasoning, to the cloud platform that hosts the agents, to the productivity suite and search engine through which three billion users interact with them.

The scale of the machine

The Gemini app has reached 750 million monthly active users as of the fourth quarter of 2025, up 100 million from the previous quarter. AI Overviews, Google’s AI-generated search summaries, reach two billion monthly users across more than 200 countries and drive 10% more search queries globally. AI Overviews now trigger on approximately 48% of all tracked queries, up from 31% in February 2025, a 58% increase in a year. The Gemini API processed 85 billion requests in January 2026, a 142% increase from 35 billion in March 2025. Eight million paid Gemini Enterprise seats are deployed across 2,800 companies. Thirteen million developers are building with Google’s generative models. Gemini 3 Pro has had, in Pichai’s words, “the fastest adoption of any model in our history.”


These are not cloud metrics. They are platform metrics. Google is arguing that its advantage over AWS, Azure, OpenAI, and Anthropic lies not in any single product but in the fact that it reaches more users, processes more queries, and touches more surfaces than any competitor. Search alone handles more than a billion shopping interactions per day. Workspace has more than three billion users. Android runs on billions of devices. The thesis is that when AI agents become the primary interface for work and commerce, the company with the largest existing surface area wins, because the agents need somewhere to run, something to connect to, and someone to serve.

Search becomes the agent manager

Pichai’s most consequential framing may have come in a podcast appearance earlier this month: “A lot of what are just information-seeking queries will be agentic in Search. You’ll be completing tasks. You’ll have many threads running.” He described Search evolving from a retrieval engine into an “agent manager,” an orchestration layer that dispatches AI agents to complete tasks on a user’s behalf rather than returning a list of links.

The infrastructure for this is already being built. Google announced the Universal Commerce Protocol at NRF in January, an open-source standard for agentic commerce co-developed with Shopify, Etsy, Wayfair, Target, and Walmart. More than 20 partners have endorsed it, including Adyen, American Express, Best Buy, Flipkart, Macy’s, Mastercard, Stripe, The Home Depot, Visa, and Zalando. UCP is built on REST and JSON-RPC transports with the Agent2Agent protocol, Model Context Protocol, and a new Agent Payments Protocol built in. It lets AI agents treat any participating store as a programmable service, with the merchant remaining the merchant of record. Pichai, who described himself as “an indecisive shopper,” said he is “looking forward to the day when agents can help me get from discovery to purchase.”
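The article describes UCP's transports but not its wire format, so any concrete example is necessarily speculative. As a rough sketch under that caveat, the snippet below builds a standard JSON-RPC 2.0 request of the kind an agent might send to a participating merchant; the endpoint, method name, identifiers, and parameters are invented placeholders, not part of any published UCP specification.

```python
import json
# import requests   # would be needed to actually send the request

# Hypothetical sketch only. The JSON-RPC 2.0 envelope is standard; the method
# name, identifiers, and params below are invented placeholders meant to show
# how an agent might address a participating store as a programmable service
# while the merchant remains the merchant of record.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "ucp.cart.create",                        # placeholder method
    "params": {
        "buyer_agent": "did:example:shopping-agent",    # placeholder identity
        "items": [{"sku": "EXAMPLE-SKU-123", "quantity": 1}],
        "payment": {"protocol": "agent-payments", "token": "<opaque>"},
    },
}

print(json.dumps(request, indent=2))
# requests.post("https://merchant.example/ucp", json=request, timeout=10)
# would return a JSON-RPC "result" (or "error") object from the merchant.
```

The structural point is simply that the store becomes an addressable service: the agent sends a structured request, and the merchant returns a structured result while staying the merchant of record.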

The implications for the advertising industry are significant. If Search shifts from showing links that users click to dispatching agents that complete purchases, the entire cost-per-click model that funds Google’s advertising business, and by extension the businesses of every company that advertises on Google, changes. Retailers are already deploying AI-powered shopping through Gemini, ChatGPT, and Copilot. The question is whether agentic commerce cannibalises Google’s own advertising revenue or whether Google can capture a larger share of the transaction itself. UCP suggests Google is betting on the latter.

The full-stack argument

The competitive positioning at Cloud Next was unusually direct. Thomas Kurian said competitors are “handing you the pieces, not the platform,” leaving enterprise teams to integrate components themselves. The claim rests on Google’s vertical integration: Ironwood TPUs and the forthcoming eighth-generation split into Broadcom-designed training chips and MediaTek-designed inference chips provide the silicon. Gemini 3 Pro, 3 Flash, and 3.1 Pro provide the models. The Gemini Enterprise Agent Platform, formerly Vertex AI, provides the developer tools and runtime. Workspace Studio provides the no-code agent builder. Search and Android provide the consumer distribution. No other company assembles all of these under one roof.


The argument has a specific target: Microsoft Copilot, which despite being embedded in virtually every Fortune 500 company has struggled with adoption. Only 3.3% of Microsoft 365 users with Copilot access actually pay for it, and its accuracy net promoter score deteriorated to negative 24.1 by September 2025. Google’s eight million paid Gemini Enterprise seats in roughly four months represents a faster trajectory, though from a much smaller base. GitHub has frozen new Copilot sign-ups because agentic coding sessions consume more compute than users pay for, illustrating why owning the silicon layer, as Google does, is not just a technical advantage but an economic one.

The capital question

The $175 billion to $185 billion in planned capital expenditure is the number that makes the rest of the strategy credible or alarming, depending on how the next two years unfold. Roughly 60% goes to servers and 40% to data centres and networking equipment. Combined with Microsoft, Meta, and Amazon, total big tech AI infrastructure spending is approaching $700 billion this year, a figure large enough to reshape energy markets and strain power grids. Pichai acknowledged on the fourth-quarter earnings call that the “top question is definitely around compute capacity and all the constraints, be it power, land, supply chain,” and expects Google to remain supply-constrained through 2026.

The backlog provides the justification. At $240 billion, it represents more than three years of current revenue contracted but not yet delivered. Thirteen product lines each generate more than $1 billion in annual revenue. The ServiceNow deal alone was worth $1.2 billion over five years. If the demand is real, and the backlog suggests it is, then the capital expenditure is not a gamble but an obligation: the cost of building the infrastructure to fulfil commitments already made.

Google Cloud holds roughly 11% of the cloud infrastructure market, behind AWS at 31% and Azure at 25%. The gap has narrowed: Google grew at 48% in the fourth quarter of 2025, the fastest of the three, and achieved sustained profitability for the first time. But the gap remains. What Pichai presented at Cloud Next is not a plan to close that gap through incremental cloud sales. It is a plan to redefine what the cloud is, from a place where companies store data and run workloads to a platform where AI agents perform work, make decisions, complete purchases, and coordinate with each other across organisational boundaries. If that transition happens, the company that built the agents, the models, the chips, the protocols, and the distribution channels stands to capture a share of the value that the current market share numbers do not reflect. That is the bet. Cloud Next 2026 is the moment Google made it explicit.


The Ghost in the Machine: How AI is Crafting the Future of Gaming Worlds

For decades, playing a video game meant following someone else's elaborate script. Every character and branching path was meticulously created by a developer. While impressive, these environments were ultimately finite and predictable; they had boundaries, not just on the map, but in their very code. That is now changing. Artificial intelligence is transforming virtual worlds from static landscapes into dynamic systems with no pre-written steps. Game environments are becoming smart, and players enjoy deeper immersion and engagement as a result.

Beyond the script: creating characters that think

The most noticeable impact of AI falls on the inhabitants of these virtual worlds, Non-Player Characters (NPCs). We’ve all seen a classic city guard who repeats the same line of dialogue endlessly or an enemy running along a predictable path. Modern AI leaves these simplistic automatons behind.

Instead of following a rigid script, today's NPCs perceive and react to the world around them. They use complex algorithms to navigate difficult environments, find cover, or coordinate group attacks. More impressively, they learn from player behavior: imagine an enemy that notices you always use stealth and begins setting traps. This creates a much more engaging experience; the world feels less like a set of programmed challenges and more like a place inhabited by intelligent agents with their own goals.

  • Dynamic pathfinding: Characters don’t follow predefined routes. They analyze the environment in real time and work out the best way to their destination, even when the terrain changes suddenly.
  • Behavioral trees: Developers apply complex decision-making models that let NPCs choose from a wide range of actions based on the current situation, making them highly unpredictable (see the sketch after this list).
  • Machine learning: Some advanced systems train NPCs by having them observe human players. This allows them to adopt effective strategies that a developer might never have programmed manually.
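To make the behavioral-tree bullet concrete, here is a minimal, self-contained Python sketch. It is not drawn from any particular engine or middleware; the node types and the guard NPC are illustrative assumptions about how selector and sequence nodes combine conditions and actions.

```python
# Minimal behavior-tree sketch (illustrative only): a guard NPC that chases
# the player when it can see them and otherwise keeps patrolling.

class Node:
    def tick(self, npc):
        """Return True on success, False on failure."""
        raise NotImplementedError

class Selector(Node):
    """Tries children in order until one succeeds."""
    def __init__(self, *children):
        self.children = children
    def tick(self, npc):
        return any(child.tick(npc) for child in self.children)

class Sequence(Node):
    """Succeeds only if every child succeeds, evaluated in order."""
    def __init__(self, *children):
        self.children = children
    def tick(self, npc):
        return all(child.tick(npc) for child in self.children)

class Condition(Node):
    def __init__(self, test):
        self.test = test
    def tick(self, npc):
        return self.test(npc)

class Action(Node):
    def __init__(self, act):
        self.act = act
    def tick(self, npc):
        self.act(npc)
        return True

guard_brain = Selector(
    Sequence(Condition(lambda npc: npc["sees_player"]),
             Action(lambda npc: print("Chase the player"))),
    Action(lambda npc: print("Continue patrol route")),
)

# Tick the tree once per frame (or on a timer) with the NPC's current state.
guard_brain.tick({"sees_player": False})   # prints: Continue patrol route
guard_brain.tick({"sees_player": True})    # prints: Chase the player
```

Real systems add many more node types (decorators, parallel nodes, cooldowns) and tooling for designers, but the core pattern of ticking a tree every frame or on a timer is the same.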

Worlds without end: the magic of procedural generation

Creating an entire world for players to explore and survive in takes enormous time and effort, and building every tree or mountain by hand is a painstaking task. AI-driven procedural content generation (PCG) offers a solution. Designers, technical artists, and engineers use PCG as a toolset of helpful components: the framework creates game content automatically and generates believable environments.

Rather than manually scattering random trees to depict a credible forest landscape, designers can let an AI algorithm learn the rules of a forest ecosystem and fill in the scene within the constraints they set. The combination of realistic views and the designer's original intent is what captures players and makes the game enjoyable. No Man's Sky, for example, used PCG to create a virtual galaxy with billions of planets, each with its own flora and fauna; players can fight or trade with alien species to acquire resources and equipment, and the game fosters a sense of exploration while impressing with its sheer scale. The future of AI in game development lies in this ability to create believable worlds.
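As a toy illustration of the seed-plus-rules approach PCG relies on (and of how a game can regenerate terrain on demand instead of storing an entire world), here is a small Python sketch; the tile set, chunk size, and seed-mixing constants are arbitrary choices made for the example.

```python
import random

WORLD_SEED = 42
CHUNK_SIZE = 8
TILES = [".", ".", ".", ".", "T", "^"]   # mostly grass, some trees (T) and rock (^)

def generate_chunk(cx, cy):
    """Deterministically generate the CHUNK_SIZE x CHUNK_SIZE chunk at (cx, cy)."""
    # Mix the world seed with the chunk coordinates into one integer seed, so
    # the same chunk can be recreated at any time without storing it.
    seed = (WORLD_SEED * 73856093) ^ (cx * 19349663) ^ (cy * 83492791)
    rng = random.Random(seed)
    return [[rng.choice(TILES) for _ in range(CHUNK_SIZE)]
            for _ in range(CHUNK_SIZE)]

# Chunks are generated only as the player approaches them, and revisiting a
# chunk reproduces exactly the same terrain.
for row in generate_chunk(0, 0):
    print("".join(row))
assert generate_chunk(3, -2) == generate_chunk(3, -2)   # reproducible
```

Production systems layer noise functions, biome rules, and hand-authored constraints on top of this, but determinism from a shared seed is what makes "billions of planets" storable as a formula rather than as data.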


A game that knows you: the personalized experience


A game stays interesting as long as it is unpredictable. AI makes it possible to tailor the experience to each individual by analyzing a player's skill level, performance, and preferences, then adapting to their play style with subtle adjustments in real time. This is far more than a simple "easy, normal, hard" difficulty setting.

  • Dynamic difficulty adjustment: The system tracks your performance and adjusts the game accordingly. For example, it might slightly reduce enemy numbers or provide more resources when you struggle; conversely, if you’re doing well, it keeps the challenge up (a minimal sketch follows after this list).
  • Personalized content: It’s great to know your decisions shape the game’s storyline. AI might notice you prefer a certain weapon type and start dropping more powerful versions of it. In narrative games, it can alter future plot points based on the choices and emotional reactions it observes from the player. The system can also adapt in-game rewards to your preferences, offering new gear, abilities, or characters.
  • Social customization: AI can match you with players of a similar skill level to keep the competition fair. At the same time, it can offer personalized NPCs, which adds to the overall sense of immersion.
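As a simplified illustration of the dynamic difficulty adjustment described in the first bullet above, here is a short Python sketch; the target win rate, window size, and tuning knobs are invented for the example rather than taken from any shipped game.

```python
from collections import deque

# Toy dynamic-difficulty-adjustment loop (illustrative assumptions only).

TARGET_WIN_RATE = 0.6              # aim for the player winning ~60% of encounters
recent_results = deque(maxlen=10)  # 1 = player won the encounter, 0 = player lost
difficulty = 1.0                   # scales enemy count, damage, drop rates, etc.

def record_encounter(player_won: bool) -> float:
    """Update the difficulty scalar after each encounter and return it."""
    global difficulty
    recent_results.append(1 if player_won else 0)
    win_rate = sum(recent_results) / len(recent_results)
    if win_rate < TARGET_WIN_RATE - 0.1:      # player struggling: ease off
        difficulty = max(0.5, difficulty - 0.05)
    elif win_rate > TARGET_WIN_RATE + 0.1:    # player cruising: ramp up
        difficulty = min(2.0, difficulty + 0.05)
    return difficulty

def spawn_parameters():
    """Translate the difficulty scalar into concrete tuning knobs."""
    return {"enemy_count": round(3 * difficulty),
            "health_drop_chance": round(min(0.5, 0.3 / difficulty), 2)}

for won in [False, False, True, False, False, True]:
    record_encounter(won)
print(difficulty, spawn_parameters())
```

Real implementations tend to adjust things players rarely notice, such as drop luck, spawn pacing, or aim assist, precisely so the adaptation stays invisible.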

Conclusion

In short, AI makes it possible to never have the same gaming experience twice. That keeps gameplay exciting for players, but it also makes development more demanding and requires highly skilled specialists, which is why game studios partner with a specialized AI development company in the United States to create unforgettable playing grounds. And the good news is that this is only the beginning: AI continues to develop and drive improvements in every sphere where it is applied.
