
Pichai opens Cloud Next 2026 with $240B backlog, 750M Gemini users, and a plan to turn Search into an agent manager


Summary: Sundar Pichai opened Cloud Next 2026 with Google Cloud at $70 billion in annual revenue, 48% growth, a $240 billion backlog up 55% year on year, and $175-185 billion in planned capital expenditure. The Gemini app has 750 million monthly users, AI Overviews reach two billion, and the Gemini API processed 85 billion requests in January alone. Pichai framed the conference around Search evolving from a retrieval engine into an “agent manager” and announced the Universal Commerce Protocol with Shopify, Target, and Walmart, while positioning Google’s full-stack integration from custom silicon to consumer distribution as the advantage competitors cannot replicate.

Sundar Pichai opened Google Cloud Next 2026 on Tuesday with a set of numbers that reframe the competitive dynamics of enterprise AI. Google Cloud is now generating more than $70 billion in annual revenue, growing at 48% year on year, with a backlog of $240 billion, up 55% from roughly $155 billion a year ago. The number of billion-dollar deals Google Cloud signed in 2025 exceeded the combined total of the three previous years. Existing customers are outpacing their own commitments by 30%, spending faster than they contracted. Google has committed $175 billion to $185 billion in capital expenditure for 2026, nearly doubling the $91.4 billion it spent last year. Pichai described the moment as “a fundamental rewiring of technology and an accelerant of human ingenuity.” The money suggests he may not be exaggerating.

The keynote, titled “The Agentic Cloud,” was less a product launch than a thesis statement. Google is positioning itself not as a cloud provider that offers AI but as the operating system for what it calls the agentic enterprise: a model in which AI agents handle routine business operations autonomously, communicate with each other across platforms, and interact with the physical world through commerce, search, and real-time data. The pitch is that Google is the only company that controls every layer of that stack, from the custom silicon that runs inference, to the frontier models that power reasoning, to the cloud platform that hosts the agents, to the productivity suite and search engine through which three billion users interact with them.

The scale of the machine

The Gemini app has reached 750 million monthly active users as of the fourth quarter of 2025, up 100 million from the previous quarter. AI Overviews, Google’s AI-generated search summaries, reach two billion monthly users across more than 200 countries and drive 10% more search queries globally. AI Overviews now trigger on approximately 48% of all tracked queries, up from 31% in February 2025, a 58% increase in a year. The Gemini API processed 85 billion requests in January 2026, a 142% increase from 35 billion in March 2025. Eight million paid Gemini Enterprise seats are deployed across 2,800 companies. Thirteen million developers are building with Google’s generative models. Gemini 3 Pro has had, in Pichai’s words, “the fastest adoption of any model in our history.”


These are not cloud metrics. They are platform metrics. Google is arguing that its advantage over AWS, Azure, OpenAI, and Anthropic lies not in any single product but in the fact that it reaches more users, processes more queries, and touches more surfaces than any competitor. Search alone handles more than a billion shopping interactions per day. Workspace has more than three billion users. Android runs on billions of devices. The thesis is that when AI agents become the primary interface for work and commerce, the company with the largest existing surface area wins, because the agents need somewhere to run, something to connect to, and someone to serve.

Search becomes the agent manager

Pichai’s most consequential framing may have come in a podcast appearance earlier this month: “A lot of what are just information-seeking queries will be agentic in Search. You’ll be completing tasks. You’ll have many threads running.” He described Search evolving from a retrieval engine into an “agent manager,” an orchestration layer that dispatches AI agents to complete tasks on a user’s behalf rather than returning a list of links.

The infrastructure for this is already being built. Google announced the Universal Commerce Protocol at NRF in January, an open-source standard for agentic commerce co-developed with Shopify, Etsy, Wayfair, Target, and Walmart. More than 20 partners have endorsed it, including Adyen, American Express, Best Buy, Flipkart, Macy’s, Mastercard, Stripe, The Home Depot, Visa, and Zalando. UCP is built on REST and JSON-RPC transports with the Agent2Agent protocol, Model Context Protocol, and a new Agent Payments Protocol built in. It lets AI agents treat any participating store as a programmable service, with the merchant remaining the merchant of record. Pichai, who described himself as “an indecisive shopper,” said he is “looking forward to the day when agents can help me get from discovery to purchase.”
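The article does not reproduce UCP's wire format, but since the protocol is described as running over JSON-RPC transports, a request of the general shape an agent might send can be sketched. To be clear about assumptions: the method name `checkout.create` and the parameter names below are invented for illustration and are not taken from the published spec.

```python
import json

def build_checkout_request(request_id, sku, quantity, merchant):
    """Build a JSON-RPC 2.0 envelope of the rough shape an agentic
    commerce call might take. Method and param names are hypothetical."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "checkout.create",   # hypothetical method name
        "params": {
            "merchant": merchant,      # the merchant stays merchant of record
            "line_items": [{"sku": sku, "quantity": quantity}],
        },
    }

payload = build_checkout_request(1, "SKU-123", 2, "example-store")
print(json.dumps(payload, indent=2))
```

Whatever the final schema looks like, the structural point stands: the storefront becomes a programmable endpoint that any compliant agent can call, while payment and identity ride on the companion Agent Payments Protocol.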

The implications for the advertising industry are significant. If Search shifts from showing links that users click to dispatching agents that complete purchases, the entire cost-per-click model that funds Google’s advertising business, and by extension the businesses of every company that advertises on Google, changes. Retailers are already deploying AI-powered shopping through Gemini, ChatGPT, and Copilot. The question is whether agentic commerce cannibalises Google’s own advertising revenue or whether Google can capture a larger share of the transaction itself. UCP suggests Google is betting on the latter.

The full-stack argument

The competitive positioning at Cloud Next was unusually direct. Thomas Kurian said competitors are “handing you the pieces, not the platform,” leaving enterprise teams to integrate components themselves. The claim rests on Google’s vertical integration: Ironwood TPUs, with a forthcoming eighth generation split into Broadcom-designed training chips and MediaTek-designed inference chips, provide the silicon. Gemini 3 Pro, 3 Flash, and 3.1 Pro provide the models. The Gemini Enterprise Agent Platform, formerly Vertex AI, provides the developer tools and runtime. Workspace Studio provides the no-code agent builder. Search and Android provide the consumer distribution. No other company assembles all of these under one roof.


The argument has a specific target: Microsoft Copilot, which despite being embedded in virtually every Fortune 500 company has struggled with adoption. Only 3.3% of Microsoft 365 users with Copilot access actually pay for it, and its accuracy net promoter score deteriorated to negative 24.1 by September 2025. Google’s eight million paid Gemini Enterprise seats in roughly four months represent a faster trajectory, though from a much smaller base. GitHub has frozen new Copilot sign-ups because agentic coding sessions consume more compute than users pay for, illustrating why owning the silicon layer, as Google does, is not just a technical advantage but an economic one.

The capital question

The $175 billion to $185 billion in planned capital expenditure is the number that makes the rest of the strategy credible or alarming, depending on how the next two years unfold. Roughly 60% goes to servers and 40% to data centres and networking equipment. Combined with Microsoft, Meta, and Amazon, total big tech AI infrastructure spending is approaching $700 billion this year, a figure large enough to reshape energy markets and strain power grids. Pichai acknowledged on the fourth-quarter earnings call that the “top question is definitely around compute capacity and all the constraints, be it power, land, supply chain,” and expects Google to remain supply-constrained through 2026.

The backlog provides the justification. At $240 billion, it represents more than three years of current revenue contracted but not yet delivered. Thirteen product lines each generate more than $1 billion in annual revenue. The ServiceNow deal alone was worth $1.2 billion over five years. If the demand is real, and the backlog suggests it is, then the capital expenditure is not a gamble but an obligation: the cost of building the infrastructure to fulfil commitments already made.

Google Cloud holds roughly 11% of the cloud infrastructure market, behind AWS at 31% and Azure at 25%. The gap has narrowed: Google grew at 48% in the fourth quarter of 2025, the fastest of the three, and achieved sustained profitability for the first time. But the gap remains. What Pichai presented at Cloud Next is not a plan to close that gap through incremental cloud sales. It is a plan to redefine what the cloud is, from a place where companies store data and run workloads to a platform where AI agents perform work, make decisions, complete purchases, and coordinate with each other across organisational boundaries. If that transition happens, the company that built the agents, the models, the chips, the protocols, and the distribution channels stands to capture a share of the value that the current market share numbers do not reflect. That is the bet. Cloud Next 2026 is the moment Google made it explicit.


SpaceX S-1 warns orbital AI data centres may not be viable, months after Musk called space-based AI a no-brainer


Summary: SpaceX’s confidential S-1 pre-IPO filing warns that its orbital AI data centre plans “involve significant technical complexity and unproven technologies, and may not achieve commercial viability,” contradicting Elon Musk’s January claim at Davos that space-based AI was a “no-brainer” achievable within two to three years. The filing comes as SpaceX targets a $1.75 trillion IPO valuation and has applied to the FCC for one million data centre satellites, while competitors Starcloud, Google (Project Suncatcher), and Blue Origin pursue their own orbital compute programmes.

SpaceX told prospective investors in its confidential S-1 pre-IPO filing that its plans for orbital AI data centres “involve significant technical complexity and unproven technologies, and may not achieve commercial viability.” The company warned that any future space-based compute infrastructure will operate “in the harsh and unpredictable environment of space, exposing them to a wide and unique range of space-related risks that could cause them to malfunction or fail.” The disclosure, first reported by Reuters on Monday, is legally standard for a company approaching what could be the largest initial public offering in history. It is also a remarkable piece of bureaucratic candour from the same organisation whose chief executive described data centres in orbit as a “no-brainer” three months ago.

At the World Economic Forum in Davos in January, Elon Musk said the lowest-cost place to put AI would be in space “within two years, maybe three at the latest.” He called space-based solar “10 times cheaper than terrestrial solar” because “you don’t need any batteries,” described the cooling problem as solved by simply pointing a radiator away from the sun, toward the 3-kelvin background of deep space, and predicted that more AI capacity would sit in orbit than on Earth within five years. In February, SpaceX filed with the Federal Communications Commission to launch and operate up to one million satellites as the “SpaceX Orbital Data Center system” at altitudes between 500 and 2,000 kilometres. The filing described satellites that would “directly harness near-constant solar power with little operating or maintenance cost.” The S-1, filed confidentially with the Securities and Exchange Commission ahead of a targeted June listing at a $1.75 trillion valuation and a $75 billion raise, says something different.

The physics of the problem

The contradiction between Musk’s public statements and SpaceX’s legal disclosures maps onto a set of engineering constraints that have not changed since Davos. In vacuum, all heat dissipation happens through radiation. There is no convection, no liquid cooling, no fans. To radiate just one megawatt of heat at 20 degrees Celsius, an orbital data centre would need roughly 1,200 square metres of radiator surface, the area of four tennis courts. The International Space Station’s entire electrical system produces only 0.2 megawatts; ground-based hyperscale data centres are racing toward gigawatt scale. The three-degree background temperature of space is irrelevant if the radiators needed to exploit it weigh more than the servers they are cooling.
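The radiator figure follows directly from the Stefan-Boltzmann law. A back-of-envelope check, assuming an ideal panel (emissivity 1) radiating from both faces at 20°C and ignoring absorbed sunlight and Earthshine:

```python
# Stefan-Boltzmann sanity check of the ~1,200 m² radiator claim.
SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W·m⁻²·K⁻⁴
T = 293.15            # 20 °C in kelvin
EMISSIVITY = 1.0      # ideal black-body panel (real coatings are lower)
SIDES = 2             # a flat panel radiates from both faces

flux = EMISSIVITY * SIGMA * T**4   # ≈ 419 W per m² per face
area = 1e6 / (SIDES * flux)        # m² needed to reject 1 MW of heat
print(f"{area:.0f} m^2")           # ≈ 1,200 m², matching the article
```

Real radiators would need more area still: emissivity below 1, sunlight and Earthshine absorbed on the hot side, and plumbing mass all push the number up, which is why the comparison to tennis courts understates the problem.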


Power is equally constrained. Solar panels in orbit receive roughly five times more energy than on the ground, with no atmosphere, weather, or nighttime in certain orbits. But it would take approximately one square mile of solar array in Earth orbit to produce one gigawatt at 30% cell efficiency. The ISS produces 0.2 megawatts from arrays that span the length of a football field. Scaling to the gigawatts that a single hyperscale data centre consumes on Earth would require deploying and maintaining solar infrastructure orders of magnitude larger than anything humans have built in space.
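The square-mile claim also checks out arithmetically: dividing one gigawatt by the above-atmosphere solar constant times 30% cell efficiency gives an area very close to a square mile.

```python
# Rough check of the one-square-mile-per-gigawatt claim.
SOLAR_CONSTANT = 1361.0      # W/m² above the atmosphere
EFFICIENCY = 0.30            # assumed cell efficiency from the article
SQ_M_PER_SQ_MILE = 2.59e6

array_m2 = 1e9 / (SOLAR_CONSTANT * EFFICIENCY)   # m² of array for 1 GW
sq_miles = array_m2 / SQ_M_PER_SQ_MILE
print(f"{sq_miles:.2f} square miles")            # ≈ 0.95 square miles
```

That is the area under ideal pointing, before accounting for packing factor, degradation, or the structure needed to hold the array flat, all of which push the real figure higher.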

Hardware obsolescence may be the most underappreciated constraint. GPUs depreciate as new architectures emerge every two to three years. On Earth, racks are swapped continuously. In orbit, every hardware replacement requires a launch, docking, or robotic servicing mission. Radiation exposure causes bit flips and permanent circuit damage. Radiation-hardened chips lag multiple generations behind commercial processors. Triple modular redundancy, running three parallel systems and taking the majority vote, would triple the hardware requirements. AI’s soaring energy demands, which the IEA projects will push data centre electricity consumption past 1,000 terawatt-hours by the end of 2026, are real. The question is whether solving them in orbit creates more problems than it solves.
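Triple modular redundancy reduces to a majority vote. A minimal sketch of the idea, with the voting helper and sample values invented for illustration (flight systems vote in hardware, per word, not in Python):

```python
from collections import Counter

def tmr(results):
    """Majority vote over three redundant units, so a single
    radiation-induced fault is outvoted by the two good copies."""
    value, votes = Counter(results).most_common(1)[0]
    if votes < 2:
        raise RuntimeError("no majority: more than one unit disagrees")
    return value

# One unit suffers a bit flip (42 -> 46); the other two outvote it.
print(tmr([42, 42, 46]))   # → 42
```

The cost structure is visible in the sketch: three complete computations buy tolerance of exactly one fault, which is why TMR triples the hardware bill while a second simultaneous fault still defeats it.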


The competitive landscape in orbit

SpaceX is not the only company pursuing orbital compute, which makes the S-1 disclaimer more strategically significant than a standard risk factor. Starcloud, formerly Lumen Orbit, launched the first high-powered GPU into orbit in November 2025, an Nvidia H100 that represented 100 times more compute than had ever operated in space. In December, Starcloud became the first company to run a large language model, Google’s Gemma, and the first to perform in-orbit LLM training. By March 2026 it had raised $170 million at a $1.1 billion valuation, the fastest unicorn in Y Combinator’s history. Its next satellite, targeting 200 kilowatts and a cost of roughly $0.05 per kilowatt-hour, is planned for October.

Google’s Project Suncatcher, a partnership with Planet Labs, plans to launch two test satellites carrying Google TPUs by early 2027 and envisions one-kilometre arrays of 81-satellite compute clusters in dawn-dusk sun-synchronous orbit. Google’s analysis suggests launch costs may fall below $200 per kilogram by the mid-2030s, at which point orbital data centres would become roughly cost-competitive with terrestrial ones. Nvidia announced Vera Rubin Space-1, a chip system designed specifically for orbital data centres. Blue Origin filed its own FCC application for 51,600 data centre satellites. The a16z-funded startup Orbital is building an AI satellite constellation. The idea is not fringe. It is attracting serious capital and serious engineering talent. SpaceX’s S-1 is notable precisely because the company that controls the launch vehicles and the satellite internet constellation, the company best positioned to make orbital compute work, is the one telling investors it might not.

The terrestrial alternatives

The S-1 disclosure arrives in a week when the terrestrial alternatives are absorbing enormous investment. Massive AI infrastructure deals like Meta’s $27 billion commitment to Nebius illustrate the scale of spending on ground-based compute. Nuclear-powered AI data centres are attracting dedicated funding, with Valar Atomics raising $450 million at a $2 billion valuation to build small modular reactors purpose-built for AI workloads. The US Department of Energy has identified 16 federal sites for data centre construction adjacent to existing nuclear facilities. Trackers count 18 nuclear-powered AI facilities worldwide for 2026, with a combined capacity of 31.2 gigawatts. Microsoft’s Project Natick deployed an undersea data centre capsule designed for AI workloads in February 2025. The tech industry spent roughly $580 billion in 2025 turning deserts and abandoned factories into GPU-packed facilities.

The pattern is consistent: every approach to the AI power problem that keeps the servers on Earth, or at most underwater, is attracting more capital and progressing faster than the orbital alternatives. Nuclear reactors are a proven technology being adapted to a new use case. Orbital data centres are an unproven technology being proposed for a use case that may not require them. The S-1 language suggests SpaceX’s own engineers and lawyers recognise the distinction, even if the company’s public messaging has not caught up.


The IPO context

The S-1 filing serves two masters. SpaceX needs to present orbital data centres as a credible growth story to justify a $1.75 trillion valuation, the highest ever for a pre-IPO company. It also needs to disclose the risks clearly enough to protect itself from securities litigation if the plans do not materialise. The result is a document that simultaneously promotes and disclaims the same initiative. This is not unusual in IPO filings. It is unusual when the chief executive has spent the preceding three months describing the initiative as inevitable, obvious, and cheaper than the alternatives.

The SpaceX-xAI merger in February, an all-stock transaction valuing the combined entity at $1.25 trillion, was explicitly motivated by orbital data centres. Musk said integrating Starlink’s global satellite mesh with xAI’s large language models was a primary rationale. Musk’s AI chip ambitions through the Terafab project with Intel include dedicated processors for orbital deployments. The one million satellites in the FCC filing would represent a hundred-fold increase over the current population of low Earth orbit. Ars Technica estimated the barebones deployment cost at “at least $1 trillion.” The vast majority of more than 1,000 public comments to the FCC opposed the plan, citing debris, light pollution, and the risk of Kessler syndrome, a cascading chain of collisions that could render entire orbital altitudes unusable.

SpaceX may eventually prove that orbital compute works. The company has a record of achieving what others said was impossible, most notably reusable rockets. But the S-1 filing is not the language of a company that has solved the problem. It is the language of a company that wants credit for trying and protection if it fails. The gap between Davos in January and the SEC in April is the gap between a pitch and a prospectus. Both are real. Only one carries legal liability.


Washington state flunks school phone policy rankings



A new scorecard rating school cellphone policies nationwide gave Washington state and four others a failing grade. Washington lacks statewide rules setting limits on phone use in the classroom and on campus, allowing districts to set their own policies.

The rankings, first covered by Axios, evaluated states on how strictly they limit phone use during the school day. Four states received “A” grades — North Dakota, Kansas, Rhode Island and Indiana — for requiring that cellphones be fully inaccessible during the entire school day, or “bell-to-bell.”

Here’s how the rest broke out in the Phone-Free Schools State Report Card:

  • The 19 states earning a “B” have all-day restrictions on phone use, but the devices are stored in lockers or backpacks, making them potentially accessible.
  • The eight states with “C” grades have rules that restrict phone use during instructional time only.
  • The nine “D” states require policies, but don’t say what those rules should include.
  • Four states have pending legislation and didn’t receive a grade.

Within Washington, OSPI reports that 53% of districts in the state have policies limiting smart devices during instructional time only, while 31% require phones to be stored from bell to bell.

At the local level, Seattle Public Schools has not issued a district-wide policy, though at least three public middle schools in the district have banned phones at school, and at least one high school prohibits their use during classes.

The urgency behind these policies is backed by recent research. A study published in January from the University of Washington School of Medicine and others found that U.S. adolescents ages 13–18 spend more than one hour per day on phones during school hours, with “addictive” social media apps accounting for the largest share of use.


Despite mounting concern, Washington has moved cautiously on the issue. Last month, legislators passed a law requiring the Office of the Superintendent of Public Instruction (OSPI) to study the issue, produce a report on district policies, review research on phone impacts, and gather student input on regulations. The analysis is due at the end of 2027.

The UW’s Youth Advisory Board, a group of approximately 20 teens from Seattle-area schools, recently published a memo tackling the contentious issue of phones in school. The document weighs the pros and cons of phone bans and offers recommendations on how schools should draft and communicate their policies. 


LinkedIn CEO change: Daniel Shapero takes the helm as Microsoft broadens leadership team


New LinkedIn CEO Daniel Shapero, left, and Ryan Roslansky, EVP of LinkedIn and Microsoft Office, at LinkedIn headquarters. (LinkedIn Photo)

LinkedIn has a new CEO for the first time in six years.

Daniel Shapero, the company’s chief operating officer since 2021, is stepping into the top job, reporting to Ryan Roslansky, who was elevated last year to executive vice president overseeing both LinkedIn and Microsoft Office.

The changes come as LinkedIn crosses $5 billion in quarterly revenue for the first time, putting it on an annual run rate of more than $20 billion. The business social network has been owned by Microsoft since its acquisition by the Seattle-area tech giant for $26.2 billion in 2016.

Roslansky announced the changes Wednesday (in a post on LinkedIn, of course) saying he also asked Mohak Shroff, LinkedIn’s longtime engineering leader, to take on the new role of president of platforms and digital work. Both Shapero and Shroff report to Roslansky.

Roslansky put the moves in the context of the accelerating impact of AI on the labor market.


“Last year when Satya Nadella asked me to lead LinkedIn and Microsoft Office, I knew what he was betting on: AI is going to transform how people work and grow in their careers faster than most people expect,” he wrote. “And LinkedIn and Office would be at the center of that.”

As LinkedIn CEO since 2020, Roslansky nearly tripled the company’s revenue and grew the platform to more than 1.3 billion members, 70 million companies, and 42,000 skills, according to the company.

He took on the additional role of EVP of Office last June, adding oversight of Outlook, Word, Excel, PowerPoint, and Microsoft 365 Copilot as Microsoft pursued its “agentic web” AI strategy.

The new leadership structure is designed to free Roslansky to focus on that broader portfolio. Shapero will run LinkedIn day to day, while Shroff will work across LinkedIn and Microsoft on longer-term technology strategy and innovation.


Shapero joined LinkedIn in 2008 as roughly its 300th employee. He rose through sales, product, and operations before becoming COO in 2021. In his own LinkedIn post, he called his time at the company “one of the most meaningful experiences of my life” and said he would start by “learning and listening.”


Arkansas Tried To Pass An Unconstitutional Social Media Law. Again. It Lost. Again.


from the maybe-read-your-laws-before-passing-them dept

Back in 2023, Arkansas passed a social media age verification law so poorly drafted that the bill’s own sponsor couldn’t accurately describe who it covered. The law appeared to exempt TikTok, Snapchat, and YouTube while the sponsor publicly claimed those were the exact platforms being targeted. When the state’s own expert witness testified that Snapchat was covered, the state’s own attorney disagreed with his own witness in the same hearing. That law was struck down on First Amendment and vagueness grounds, and then permanently enjoined earlier this year in a suit brought by the trade group NetChoice.

So Arkansas went back to the drawing board and passed Act 900, which was supposed to fix all the problems with the original. Judge Timothy Brooks of the Western District of Arkansas has now preliminarily enjoined that law too, in a ruling that reads like a patient teacher explaining to a student why the homework still doesn’t work despite a rewrite.

The legislature did manage to fix the content-based definition problem that sank the first law, but the progress stops there. Act 900 imposes four main new requirements on social media platforms: a prohibition on “addictive practices,” default settings for minors (including a nighttime notification blackout), privacy default settings at the most protective level, and a parental dashboard requirement. Every single one of these provisions fell apart on review, each in its own special way.

The “addictive practices” provision might be the most impressively broken. Here’s what it actually says platforms must do:


Consistent with contemporary understanding of addiction, compulsory behavior, and child cognitive development, ensure that the social media platform does not engage in practices to evoke any addiction or compulsive behaviors in an Arkansas user who is a minor, including without limitation through notifications, recommended content, artificial sense of accomplishment, or engagement with online bots that appear human.

“Contemporary understanding of addiction” is doing a lot of work here, and it’s not up to the job. There is no consensus that social media constitutes addiction in any clinical sense. So it’s entirely unclear what a company would need to do here, which is fatal in a First Amendment context. And yet, the law is designed such that violations are strict liability and ridiculously broad. A plain reading of the law shows that it is not limited to addiction to the platform itself; a platform can apparently be held liable if its practices “evoke” addiction to off-platform activities. And the statute uses the singular “user,” meaning a single child’s response triggers liability.

As the court puts it:

Not only does Act 900 impose liability based on a single child’s response to the platform, it does so on a strict liability basis—a platform is liable for a practice that evokes addiction in a single child even if it could not have known through the exercise of reasonable care that the practice would have such an effect. “Businesses of ordinary intelligence cannot reliably determine what compliance requires.”

The state, realizing belatedly that it had written an unworkable law, asked the court to just sort of ignore the strict liability language and read in a specific intent requirement that doesn’t exist anywhere in the text. As the judge notes, that’s not how any of this works. The courts interpret the law as written and are not there to fix the legislature’s mistakes:

Instead of defending the statute the General Assembly enacted, Defendants ask the Court to rewrite it by ignoring the strict liability provision altogether and inserting a specific intent requirement that appears nowhere in the text. The Court cannot do so.

Then there are the default provisions. The court was actually somewhat sympathetic to the idea that the state has a legitimate interest in helping kids sleep. The problem is that the law itself undermines that interest by letting parents flip the nighttime notification blackout off. And the government is not there to fix what parents refuse to do:


While Defendants justify the notification default as an aid to parental authority, they ignore their own evidence that parents are part of the problem. If parents wanted to prevent their children’s sleep from being disrupted by late-night notifications, they have a readily available, free, no-tech solution already at their disposal: taking devices away at night. Yet “86% of adolescents sleep with their phone in the bedroom.” …. The State has provided no evidence that parents lack the tools to assert their authority in this domain, so it appears unlikely that the State’s deferential approach to restricting nighttime notifications will actually serve its stated interest in ensuring minors get enough sleep. This “is not how one addresses a serious social problem.”

The privacy default is worse. It requires platforms to set privacy controls to their most restrictive level for minors — but says nothing about who can change them. Meaning, as the court notes, the minor can just… change them. The state argued this was necessary to protect children from sexual exploitation online. The court points out the obvious problem:

On the other hand, because the default can be changed by the minor, this provision is also wildly underinclusive. Defendants say children need this law to protect them from sexual exploitation online. But the law, in effect, allows children to decide whether they need protection from sexual exploitation online because they are free to depart from the protective default. As Defendants’ evidence shows, teenagers’ developing brains make them less likely than adults to appreciate the risks associated with, for example, making their profiles public… Like the notification default, while the burdens imposed by the privacy default may be slight, they do not appear likely to serve the State’s asserted interest at all. Imposing small burdens on vast quantities of speech for no appreciable benefit is not consistent with the First Amendment. Arkansas cannot sentence speech on the internet to death by a thousand cuts.

Any law that burdens First Amendment speech has to be tailored precisely to a compelling goal. And if it’s either under or over-inclusive, it’s going to have problems surviving. Making it such that kids could just turn off the privacy controls fails that test.

But the dashboard provision is where things get genuinely hilarious, in that dark way where you wonder if anyone read the bill before voting on it. Act 900 has three separate definitions for people who interact with platforms: “account holders,” “users,” and “Arkansas users.” The problem is that, according to the statute’s own definitions, a “user” is specifically someone who is not an account holder — in other words, just a visitor to the site who doesn’t have an account. Yes, it’s confusing. The court is confused. Everyone is confused.

Act 900 has one particularly noteworthy problem: “users.” Act 900 has three different definitions for relationships a person can have with a platform. First, an “account holder” is “an individual who primarily uses, manages, or otherwise controls an account or a profile to use a social media platform.” Id. sec. 1, § 4-88-1401(1). “Account holder” is not used in any of the Act’s operative provisions. Second, a “user” is “a person who has access to view all or some of the posts and content on a social media platform but is not an account holder.” Id. § 1401(12). Third, an “Arkansas user” is “an individual who is a resident of the State of Arkansas and who accesses or attempts to access a social media platform while present in this state.” Id. § 1401(2). “Arkansas users” include both “account holders” and “users,” but “users” are definitionally not “account holders.” The addictive practices provision and the default provisions therefore apply to all Arkansas minors, whether they have a social media account or are merely a website visitor. Worse, the dashboard provision applies only to minor “users,” not account holders.

Again: the dashboard provision requires platforms to build parental supervision tools for minor “users.”


Not account holders. Users. Which, as the court notes, definitionally does not include “account holders.” Meaning it only applies to… random anonymous visitors to the website. Those who have accounts… apparently aren’t covered?

As the court explains, taking the statute at its word would require platforms to:

(1) collect age information from everyone who visits a covered platform to identify minors; and (2) collect and store identity information for every minor who visits a platform to track their “use habits,” connect them with their parents, and effectuate “tools for a parent to restrict his or her minor child’s access.”

This is a law that claims to be about children’s privacy that accidentally requires mass surveillance and identity collection on every anonymous visitor to a website, just in case one of them turns out to be an Arkansas minor. The court openly “questions whether this was the General Assembly’s intended result” but notes it can’t just rewrite the statute because the legislature picked the wrong word. That’s on them. Just like the earlier provision that the state asked the court to quietly rewrite.

The Arkansas legislature does not appear to be a detail-oriented body.


Oh, and there’s also an audit requirement directing platforms to conduct quarterly audits to ensure their products aren’t “causing minors to engage in compulsory or addiction-driven behavior” — again, including off-platform behavior, apparently. How a platform is supposed to audit for behaviors that happen when users aren’t on the platform is left as an exercise for the reader.

What makes this all so maddening is that none of these problems are subtle. The “user” vs. “account holder” mix-up is the kind of thing that any lawyer should catch on a close read. The strict liability plus singular “user” combination in the addictive practices provision is exactly the drafting error that made the 2023 law fail. The defaults that can be changed by the very minor they’re supposed to protect — that’s not a hard problem to spot.

There is a reason this pattern keeps repeating.

Passing an unconstitutional law to “protect the kids” from Big Tech generates headlines, press conferences, and signing ceremonies. Governor Sarah Huckabee Sanders got to tweet about how “social media companies have gotten away with exploiting kids for profit” when she signed the original law. That made the news. The permanent injunction three years later, overturning that same law? Barely a ripple. Act 900 itself got its own round of celebratory press. The injunction we’re discussing here will get a fraction of that coverage.


The political asymmetry is kind of the point. State legislatures have figured out that there is essentially no downside to passing obviously unconstitutional social media laws. The upside is maximal: you get to posture as tough on Big Tech, protective of children, and responsive to moral panics about screens and teens. The downside — losing in federal court, wasting state resources on legal fees, and getting lectured by judges about basic First Amendment doctrine — happens quietly, years later, long after the political benefits have been banked.

Arkansas will almost certainly lose its appeal, and either way the legislature will be back next session with a new hastily drafted law that fixes some of Act 900’s problems while introducing fresh ones. And then that will get struck down. And then they’ll try again. Texas, Florida, California, Ohio, Utah, Mississippi, Tennessee, Georgia, and a growing list of other states are running the same play on roughly the same schedule.

The courts keep doing their jobs. NetChoice keeps winning. Judges keep writing careful opinions explaining, for what feels like the hundredth time, that strict scrutiny means what it means, vagueness doctrine exists for a reason, and you cannot simply compel platforms to do whatever you want because you have invoked The Children.

None of it matters to the incentive structure. The headline from the signing ceremony is worth more than the opinion from the courthouse. Until that changes — until voters start holding legislators accountable for passing laws that can’t survive even the most basic constitutional review — we’re going to keep reading rulings like this one. Arkansas just provided the latest installment. There will be more.


Filed Under: 1st amendment, arkansas, free speech, privacy, protect the children, social media, social media addiction, social media safety act

Companies: netchoice



Tech

Kalshi suspended three political candidates from its platform for insider trading


Prediction market Kalshi has taken action against three political candidates, alleging that each engaged in insider trading on information about their own campaigns. The company implemented new rules last month aimed at preventing politicians and athletes from placing bets on events they can control, and it said those guardrails helped flag this trio of cases.

The three candidates are Mark Moran of Virginia, Matt Klein of Minnesota and Ezekiel Enriquez of Texas. Kalshi reached settlements with Klein and Enriquez, both of whom cooperated in the platform’s investigations; each will face a fine of less than $1,000 and a suspension of up to five years. Moran’s case resulted in disciplinary action, with a five-year suspension and a fine of more than $6,000. He posted on X about the situation and claimed this was essentially a stunt to see if he’d be caught and “to highlight how this company is destroying young men.”

Kalshi and other prediction markets have been the subject of several lawsuits by state attorneys general that are attempting to regulate the sector as gambling. Nevada, Arizona and New York have cases underway, but the state-level attempts are not looking promising. An appeals court ruled against New Jersey’s effort to govern this industry, and the US Commodity Futures Trading Commission has launched a lawsuit of its own in an effort to ensure it will be the only party to regulate prediction markets.



Tech

Digital Hopes, Real Power: The Rise Of Network Shutdowns


from the the-internet-is-a-political-weapon dept

Iran’s internet has been intermittently disrupted for months. After years of bombardment, Gaza’s telecommunications infrastructure remains fragile. In India, recurring shutdowns and throttling have become a routine response to protests and unrest, cutting millions off from news, work, and basic services. Across dozens of other countries, governments increasingly treat connectivity itself as something that can be weaponized—cut, slowed, or selectively restored to shape what people can see, say, and share. In 2024 alone, authorities imposed 304 internet shutdowns across 54 countries—the highest number ever recorded.

In 2011, when protesters in Tunisia, Egypt, and beyond used social media to broadcast their uprisings to the world, many observers heralded a new era of networked freedom. Governments, however, responded quickly by developing and refining systems of control that have only grown more sophisticated over time. Today’s landscape of regulation, blackouts, and degraded networks reflects that trajectory: early experiments in censorship and disruption have hardened into a durable system of control, and what began as an emergency measure has become normalized infrastructure.

A Brief History of Internet Shutdowns

Egypt’s 2011 internet shutdown wasn’t the first. Although the government’s heavy-handed response after just two days of protests caught the world’s attention, Guinea, Nepal, Myanmar, and a handful of other countries had previously enacted shutdowns. But Egypt marked a turning point. In the years that followed, shutdowns increased sharply worldwide, suggesting that governments had taken note—adopting network disruptions as a tactic for suppressing dissent and limiting the flow of information within and beyond their borders.


On January 28, 2011, at 12:34 a.m. local time, five of Egypt’s internet service providers (ISPs) shut down their networks. At least one provider—Noor, which also hosted the Egyptian stock exchange—remained online, leaving only about 7% of the country connected. 

In the aftermath of President Hosni Mubarak’s resignation, rights groups sought to understand how such a sweeping shutdown had been possible—and how future incidents might be prevented. There was no centralized “kill switch.” Instead, authorities leveraged the country’s highly consolidated telecommunications sector, whose providers all operate under government license. With only a handful of ISPs, a small number of directives was enough to bring most of the network offline.

In the years following Egypt’s 2011 shutdown, telecommunications companies—many of which had been directly implicated in enabling state-ordered disruptions—began to organize around a shared set of human rights challenges. Beginning that same year, a group of operators and vendors quietly convened to examine how the UN Guiding Principles on Business and Human Rights applied to their sector, particularly in contexts where government demands could translate into sweeping restrictions on access. By 2013, this effort had formalized into the Telecommunications Industry Dialogue, bringing together major global firms to develop common principles on freedom of expression and privacy and, through a partnership with the Global Network Initiative, engage more directly with civil society. The initiative reflected a growing recognition that telecom companies—unlike platforms—operate at a critical chokepoint in the network. But it also underscored the limits of voluntary approaches: while the Dialogue helped establish shared norms, it did little to constrain the legal and political pressures that continue to drive shutdowns—or to prevent companies from complying with them.

From Emergency Measure to Legal Authority

If the early aughts were defined by improvised shutdowns, the years since have seen governments formalize their power to control networks. What was once exceptional is now often embedded in law.


In India, the 2017 Temporary Suspension of Telecom Services Rules—issued under the Telegraph Act—provided a clear legal pathway for cutting connectivity. The Telecommunications Act, 2023, further entrenched the government’s ability to enact shutdowns, granting the central and state governments, or “authorised officers,” the power to suspend telecommunications services in the interest of public safety or sovereignty, or during emergencies. The government has used these measures repeatedly, particularly in Jammu and Kashmir. India’s Software Freedom Law Centre’s Shutdown Tracker records more than 900 shutdowns in India, 447 of them in Jammu and Kashmir.

In Kazakhstan, shutdowns have also become common. Over the years, the government has passed legislation that allows state agencies to shut down the internet. The 2012 law on national security enabled the government to disrupt communications channels during anti-terrorist operations and to contain riots. In 2014 and 2016, laws were further amended to expand the number of actors able to shut down the internet without a court decision, and a government decree in 2018 enabled shutdowns in the event of a “social emergency.” 

Elsewhere, governments have built or expanded legal and technical frameworks that enable similar control over information flows. Ethiopia’s state-dominated telecom sector has facilitated sweeping shutdowns during periods of conflict, including the war in Tigray, where the internet was disconnected for more than two years. In Iran, authorities have developed regulatory and infrastructural capacity to isolate domestic networks from the global internet, allowing them to restrict external visibility while maintaining limited internal connectivity. Iranians have spent roughly a third of this year offline. And amidst the ongoing war, Iranian officials have made it clear that the internet is a privilege for those who toe the government’s official line.

Even where laws do not explicitly authorize shutdowns, broadly worded provisions around national security or public order are routinely used to justify them. The result is a growing legal architecture that treats network disruptions not as extraordinary measures, but as standard tools for managing populations.


When that authority is exercised over a population beyond a state’s own citizens, the consequences can be even more severe. Israel’s Ministry of Communications controls the flow of communications in and out of Palestine and has used that power to shut down internet access during periods of conflict. Over the past two and a half years, Gaza has experienced repeated outages, and experts now estimate that roughly 75% of its telecommunications infrastructure has been damaged—leaving essential services severely disrupted.

Elections and the Expansion of Control

Historically, most blackouts have occurred during moments of intense political tension. But authorities are increasingly using them as a tool to preempt dissent.

In 2024, as more than half the world’s population headed to the polls, shutdowns followed. That year alone, authorities imposed 304 internet shutdowns across 54 countries—the highest number ever recorded, surpassing the previous record set just a year earlier. The geographic spread also widened significantly, with shutdowns affecting more countries than ever before. The Comoros imposed a shutdown for the first time, while other countries, such as Mauritius, instituted broad bans on social media platforms during elections.


At least 24 countries holding elections in 2024 had a prior history of shutdowns, putting billions of people at risk of disruptions during critical democratic moments.

What stands out is not just the scale, but the normalization. Notably, the number of shutdowns in 2025 broke the record set the year prior. Whereas network disruptions were once a rare occurrence, they are now a routine measure, increasingly treated by authorities as a standard response to periods of heightened political sensitivity. 

Civil Society Fights Back

Governments use all sorts of justifications—national security, curbing the spread of disinformation, and even preventing students from cheating on exams—for internet shutdowns. But civil society is watching, and documenting, network disruptions and their impact on citizens.


In 2016, as shutdowns became an increasingly common tool of state control, Access Now launched the #KeepItOn campaign to coordinate global advocacy against network disruptions. The campaign includes a coalition composed of 345 advocacy groups (including EFF), research centers, detection networks, and others who work together to report on, and fight back against, internet shutdowns. Anyone can get involved by signing on to campaign action alerts, sharing their story, or reporting a shutdown in their jurisdiction.

Ending this harmful practice remains the goal. In 2016, the UN passed a landmark resolution supporting human rights online and condemning internet shutdowns, and UN agencies have continued to warn against the practice. But the fight to change government practices remains an uphill battle, leading civil society—and even companies—to get creative. 

During repeated shutdowns in Gaza, grassroots efforts mobilised to distribute eSIMs so Palestinians could stay connected. In 2024, EFF recognized Connecting Humanity, a Cairo-based non-profit providing eSIM access in Gaza, with its annual award for its vital work. Satellite internet such as Starlink has been supplied to people in Ukraine and Iran, though it, too, is not immune to state control. Alongside these efforts, civil society continues to share practical guidance on circumventing shutdowns and maintaining access to information.

EFF’s mission is to ensure that technology supports freedom, justice, and innovation for all people of the world—and we’ll continue to fight back against internet shutdowns wherever they occur.


Republished from the EFF’s Deeplinks blog. This is the fourth installment of a blog series reflecting on the global digital legacy of the 2011 Arab uprisings. Read the rest of the series here.

Filed Under: democracy, elections, internet shutdowns, protests



Tech

UL looking for ‘changemakers’ amid Research Week 2026


UL’s vice-president for research and innovation Prof Kevin Ryan discusses the university’s Changemakers initiative and what people can expect for Research Week 2026.

Every year, University of Limerick (UL) hosts a week-long event that highlights a variety of innovative research being carried out on its campus.

This year’s Research Week will begin next Monday (27 April), with numerous projects exploring areas ranging from sustainability to cancer research set to be presented to attendees on UL’s campus.

While the annual event is underpinned by UL’s ‘Wisdom for Action’ strategy – a five-year plan to build, support and boost the university’s research community – 2026 has also seen the introduction of a new initiative to expand its research ecosystem.


In February, UL launched an internationally focused recruitment campaign designed to attract exceptional researchers to the university.

The multimillion-euro ‘Changemakers’ initiative was launched with an initial 35 academic posts available across the organisation in areas such as social justice, AI, pharmaceutical science and health services research, to name a few.

But what defines a changemaker?


UL’s vice-president of research and innovation Prof Kevin Ryan says a changemaker is somebody with an excellent research profile who is willing to come to the university to “essentially develop their research to the next level and create those innovations”.

“So they have to have that excellence, that curiosity in terms of new research discoveries, and that drive to continue that research excellence and grow that research excellence at the University of Limerick,” he adds.

Speaking to SiliconRepublic.com, Ryan says there are a number of reasons why UL wants leading international researchers to consider the university for their career.

“It’s an open, innovative university,” he says. “We have a high level of academic freedom.


“We have a very collaborative environment where we have researchers who work in very multidisciplinary activities.”

As an example of what is currently happening at UL, Ryan talks about the ageing research work of Prof Rose Galvin and her research group, which won the President’s Research Excellence Award in 2023.

“[Ageing research] has particular importance in our local environments, but also nationally and internationally, because it’s dealing with how the ageing population interact with the hospital system and ensuring that you’re getting better outcomes for healthcare,” explains Ryan.

Spotlight on innovation

But that’s just one example of academic investigation happening at the university, with UL’s upcoming Research Week 2026 set to highlight a total of 29 different projects over the course of five days, according to Ryan.


“Essentially that’s 29 different research areas that are covered and that covers right through from ageing, cancer research, health and wellbeing, through to battery research,” he says.

The importance of Research Week, Ryan says, is the opportunity it provides researchers to showcase their work for UL’s community, as well as the general public.

And a significant focus of UL’s Research Week is not only spotlighting the research itself, but the reason the projects are instigated in the first place, and the long-term results of the work.

“So the range and the breadth of research is significant, but in each of these you’ll really see an inspiring story of where that research originated, the impact of that research in terms of nationally, internationally,” explains Ryan.


“I suppose that’s something we’re always working on, is to grow our research base and ensure that we can have sufficient funding to support our PhD students, to support research activities, to support the teams that are required to generate those discoveries.”




Tech

How Roku’s 55″ Select Series Smart TV Delivers Everyday Wins That Keep You Coming Back


Roku Select Smart TV screen sizes range from 43 to 85 inches, a spread that fits almost any living room or apartment without requiring a compromise. When it’s on sale, like today, you can get the 55-inch model for roughly $250 (was $350), and it arrives ready to use without any additional devices. Everything is set up right out of the box using the familiar Roku system, which quickly launches your apps and otherwise stays out of the way.



The picture on the 4K panel is stunning, especially with HDR10 enabled for movies and shows. The Roku Smart Picture setting does its job, adjusting the image on the fly to give the most natural look for your environment. With a 60Hz refresh rate, a game mode that reduces lag for console gaming, and motion handling that keeps sports and action scenes free of blur, this TV has a lot going for it. It handles casual daytime viewing just fine, though direct sunlight does wash out the picture slightly.

The remote feels silky smooth from the first press, and you can drag and arrange the apps on the home screen to keep your favorites front and center. Voice search is also very effective; you can quickly find the channel or show you’re looking for. Shortcut buttons on the remote take you directly to Netflix or live news with a single click, and if you misplace the remote, the built-in finder will help you locate it, much like an AirTag.

The calibrated speakers and Dolby Audio processing keep dialogue-heavy shows and movies crystal clear, and volume reaches comfortable levels for normal rooms without distortion. If you want to listen privately while the rest of the household goes about its business, simply pair your Bluetooth headphones. For larger rooms, or if you simply want a little extra oomph, an HDMI port lets you connect a soundbar in seconds.

Setting up the TV takes only a few minutes, as it scans for Wi-Fi, downloads the most recent software, and only asks for accounts when necessary. Apple AirPlay makes it simple to share photographs or movies from your phone, and you can even use voice commands with Siri, Alexa, or Google Assistant to change the channel or turn up the volume. You get over 500 free live channels that provide live news, sports highlights, and the occasional movie, all without having to pay for a subscription, and the TV will even auto-update with new apps and features over time.

The connections include three HDMI inputs for all your consoles and other devices, an Ethernet port for wired connectivity, and a USB slot for playing media from a drive. The frameless design is sleek and sits flat against the wall or on a stand, drawing your attention to the image rather than the edges. And weighing only a few pounds, the TV is light enough to mount on your own.



Tech

Microsoft issues emergency update for macOS and Linux ASP.NET threat


Microsoft released an emergency patch for ASP.NET Core to fix a high-severity vulnerability that allows unauthenticated attackers to gain SYSTEM privileges on devices that use the web development framework to run apps on Linux or macOS.

The software maker said Tuesday evening that the vulnerability, tracked as CVE-2026-40372, affects versions 10.0.0 through 10.0.6 of Microsoft.AspNetCore.DataProtection, a NuGet package that’s part of the framework. The critical flaw stems from faulty verification of cryptographic signatures. It can be exploited by unauthenticated attackers to forge authentication payloads during HMAC validation, the process used to verify the integrity and authenticity of data exchanged between a client and a server.
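Microsoft’s advisory doesn’t spell out the exact verification bug, but the general class of flaw is well understood. A minimal Python sketch (the key, function names, and truncated-prefix bug here are invented for illustration, not DataProtection’s actual code) shows how a broken HMAC check lets forged payloads through, while a correct check compares the full MAC in constant time:

```python
import hashlib
import hmac

# Hypothetical server-side signing key -- for illustration only.
SECRET = b"server-side-signing-key"

def sign(payload: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag over the payload."""
    return hmac.new(SECRET, payload, hashlib.sha256).digest()

def verify_correct(payload: bytes, tag: bytes) -> bool:
    # Correct: compare the full 32-byte MAC in constant time.
    return hmac.compare_digest(sign(payload), tag)

def verify_faulty(payload: bytes, tag: bytes) -> bool:
    # Illustrative bug: only a prefix of the MAC is checked, so an
    # attacker can brute-force two bytes and forge any payload.
    return sign(payload)[:2] == tag[:2]

# A forged tag: the attacker guesses the first two MAC bytes and pads
# the rest. The faulty check accepts it; the correct check rejects it.
forged_tag = sign(b"user=admin")[:2] + b"\x00" * 30
```

This is why the advisory stresses purging attacker-issued credentials: once a forged payload is accepted even once, the damage outlives the verification fix.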

Beware: Forged credentials survive patching

While running a vulnerable version of the package, users were open to an attack allowing unauthenticated parties to gain SYSTEM privileges and fully compromise the underlying machine. Even after the vulnerability is patched, devices may still be compromised if authentication credentials created by a threat actor aren’t purged.

“If an attacker used forged payloads to authenticate as a privileged user during the vulnerable window, they may have induced the application to issue legitimately-signed tokens (session refresh, API key, password reset link, etc.) to themselves,” Microsoft said. “Those tokens remain valid after upgrading to 10.0.7 unless the DataProtection key ring is rotated.”

Advertisement

Microsoft describes ASP.NET Core as a “high-performance” web development framework for writing .NET apps that run on Windows, macOS, Linux, and Docker. The open-source package is “designed to allow runtime components, APIs, compilers, and languages [to] evolve quickly, while still providing a stable and supported platform to keep apps running.”



Tech

This smart pillow ensures you never sleep through an emergency alarm, or even a phone call


Sleeping through a phone call is annoying. Sleeping through a fire alarm is a whole different level of bad. So this new smart pillow idea feels a lot more useful than gimmicky. Researchers at Nottingham Trent University have developed a smart pillow sleeve designed to help deaf users wake up to important nighttime alerts.

Rather than building a full smart pillow, the team developed a smart sleeve designed to fit over a standard pillow. It slips inside a normal pillowcase and vibrates when connected alarms or calls come through.

What problem does it solve?

The project came out of feedback from members of the Deaf community, who told the researchers that existing under-pillow alert devices are often too bulky and uncomfortable to sleep on. In response, the team built a much thinner electronic textile sleeve with four tiny haptic actuators embedded into a yarn-like structure.

Each actuator measures just 3.4mm by 12.7mm, and the electronics are small enough that users are not supposed to feel them while sleeping. That makes the safety product both handy and comfortable to use.

How it can even save lives

The sleeve connects to a smartphone through a microcontroller, and that setup can then link wirelessly to household alarms. When something goes off, the pillow vibrates intensely enough to wake the user, with distinct patterns used to signal different kinds of alerts. This means a user with a hearing impairment can be alerted of a fire alarm, a burglar alarm, or even an incoming phone call.


This extra layer is what makes the feature thoughtful: the goal is not just to wake someone up, but to give them enough information to know why they are being woken in the first place.

The researchers say the yarn used in the sleeve has already passed durability testing, including multiple washing cycles, which suggests they are treating this as a real product concept rather than a lab-only demo. The work was presented at the ACM CHI conference in Barcelona, and the team is now looking for an industrial partner to help bring it to market. Tech Xplore also quotes supervisor Theo Hughes-Riley calling it a significant step toward more inclusive emergency alert systems for deaf and deaf-blind individuals.


