
Tech

Epstein Files: Ex-Windows chief Sinofsky wanted to meet Tim Cook


Tim Cook has been dragged into the Epstein email fiasco after the discovery that former Microsoft Windows chief Steven Sinofsky used Epstein to get a meeting with the Apple CEO.

Tim Cook was mentioned in some released Epstein emails.

Friday’s release of emails by the Justice Department has led to many news stories about billionaires, politicians, and royalty, and their dealings with the convicted sexual predator and financier Jeffrey Epstein. While Apple CEO Tim Cook has stayed out of trouble, he is briefly mentioned in some communications with the deceased scandal magnet.
However, while dirt-rakers may be keen to see some juicy Apple tidbits in the document trove, the reality is much more pedestrian. Really, it’s about career prospects.


Epic vs. Apple lawsuit over App Store fees is moving to the Supreme Court, again


The Apple vs. Epic Games saga over App Store fees continues, as Apple hopes the Supreme Court will rule in its favor the second time around and possibly stop previous punishments from being enforced.

Apple’s control of the App Store on iPhone continues to be challenged in court

The Supreme Court will soon have to weigh in on Apple’s fees for app-related external purchases, after the United States Court of Appeals for the Ninth Circuit denied a request for a rehearing in March 2026.
Apple has been fighting a December 2025 decision that sought to lower its 27% fee on purchases made outside the App Store.


Trump Administration Bans Chinese Routers. Phones and Cameras Could Follow


The Federal Communications Commission continued its crackdown on Chinese tech on Friday, issuing a new proposal that would extend an existing ban on certain companies' products to cover devices the agency had previously authorized.

In 2021, companies such as Huawei, Hikvision, Dahua, Hytera and ZTE were added to the FCC’s Covered List, a record of companies and products that the FCC believes pose a national security risk to the US, under the Secure Networks Act. The Chinese companies produce mobile phones, security cameras and other tech products.

But the 2021 ban applied only to new models that the FCC hadn't authorized, and companies were free to keep selling models that had already received the FCC's stamp of approval. If approved, the new proposal would ban sales of these companies' products entirely, including previously approved models.


“Older models of covered equipment pose an unacceptable risk today when imported or marketed in the United States, not only when such equipment is new to the market,” an FCC report from October said.

The proposal will be open for comment until May 6, after which the commission will vote on whether to adopt the rules. The ban won’t affect devices already owned by Americans.


Millions of consumers and businesses rely on Wi-Fi routers, telecommunications equipment and security cameras every day, making these devices critical links in both home and office networks. The Federal Communications Commission shocked the broadband industry on March 23 by effectively banning the sale of future foreign-made Wi-Fi routers (including some of the biggest router brands). 


In recent years, Chinese telecommunications companies have faced restrictions on operating in the US. In 2020, The Wall Street Journal cited US officials who reportedly said that Chinese companies, including Huawei, used backdoor access intended for law enforcement to track sensitive information.

But this ban could be implemented quickly. The FCC proposes that “all parties [will have to] cease all importation and marketing activities within 30 days of the effective date of the prohibition.”

This proposal doesn't reflect a final legal ruling on telecommunications imports, but it does reflect how the Trump administration has increasingly pressured Chinese tech companies in recent months.

The foreign-made router ban was only the latest in a string of decisions that have placed restrictions on Chinese tech companies operating in the US.


In December, the FCC banned the importation of Chinese-made drones into the US. Just months before that, the agency voted to block new approvals for any device containing parts manufactured by companies on the Covered List.

Representatives from the FCC and Huawei didn’t immediately respond to requests for comment.



Google’s quantum warning suggests Bitcoin encryption may fail sooner as reduced qubit requirements shift assumptions about future cybersecurity risks



  • Quantum resource estimates suggest encryption barriers may fall faster than expected
  • Reduced qubit requirements bring theoretical attacks closer to practical reality
  • Bitcoin’s cryptographic foundations face pressure from advancing quantum algorithm efficiency

Google researchers have revised expectations around the computational requirements needed to break widely used cryptographic systems protecting cryptocurrencies.

The company’s latest whitepaper claims a future quantum machine could solve the elliptic curve discrete logarithm problem using significantly fewer resources than previously assumed.
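To see what is at stake, here is a toy sketch of the elliptic curve discrete logarithm problem (ECDLP) that underpins Bitcoin's signatures. This is a conceptual illustration, not Google's construction: the curve below (y² = x³ + 2x + 2 over F₁₇, a standard textbook example) is deliberately tiny, whereas real curves such as secp256k1 work modulo a prime near 2²⁵⁶, where exhaustive search is hopeless classically but Shor's algorithm on a sufficiently large quantum machine is not.

```python
# Toy ECDLP demo on the textbook curve y^2 = x^3 + 2x + 2 over F_17.
# Given a base point P and public point Q = k*P, recovering k is the
# problem a large quantum computer could solve efficiently via Shor.

P_MOD = 17          # tiny prime field; secp256k1 uses a ~2^256 prime
A, B = 2, 2         # curve coefficients

def ec_add(p1, p2):
    """Add two curve points (None is the point at infinity)."""
    if p1 is None:
        return p2
    if p2 is None:
        return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None  # a point plus its inverse is infinity
    if p1 == p2:
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (s * s - x1 - x2) % P_MOD
    return (x3, (s * (x1 - x3) - y1) % P_MOD)

def scalar_mul(k, point):
    """Compute k*P by double-and-add."""
    result = None
    while k:
        if k & 1:
            result = ec_add(result, point)
        point = ec_add(point, point)
        k >>= 1
    return result

def brute_force_dlog(P, Q, limit=200):
    """Recover k from Q = k*P by exhaustive search -- feasible only on toy curves."""
    acc = None
    for k in range(1, limit):
        acc = ec_add(acc, P)
        if acc == Q:
            return k
    return None

P = (5, 1)                     # generator of the group (order 19 on this curve)
secret_k = 13                  # the private key
Q = scalar_mul(secret_k, P)    # the public key
print(brute_force_dlog(P, Q))  # prints 13: trivial here, infeasible at 256 bits
```

The asymmetry the whitepaper addresses is exactly this: doubling the field size roughly squares the classical brute-force work, while the quantum resource cost grows only polynomially.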



New Jersey has no right to ban Kalshi’s prediction market, US appeals court rules


Kalshi can’t be stopped in New Jersey. A 3rd US Circuit Court of Appeals panel ruled on Monday that New Jersey has no authority to regulate Kalshi’s prediction market allowing people to bet on the outcome of sports events. That power rests with the Commodity Futures Trading Commission, the panel ruled 2-1.

The CFTC is headed by President Donald Trump appointee Michael Selig, who vocally and actively supports prediction markets like Kalshi and Polymarket, calling them “exciting products.” The Trump family agrees: Donald Trump Jr. is a paid adviser to Kalshi and an unpaid adviser to Polymarket, and Truth Social, which is run by the Trump Media and Technology Group, is set to start a prediction market of its own.

Online prediction markets are an emerging phenomenon that allow users to bet on the outcome of basically anything, from local athletic competitions to lethal military invasions. Though they’re new, these marketplaces have already shown evidence of insider trading on an extreme scale, with suspicious bets and big payouts tied to the US and Israel’s military strikes in Iran, and also the US’ brief invasion in Venezuela. According to blockchain analyst DeFi Oasis, fewer than 0.04 percent of Polymarket accounts captured more than 70 percent of profits, totaling $3.7 billion.

Multiple state gaming regulators have filed legal challenges against Kalshi and Polymarket in recent months, and just last week the CFTC sued Arizona, Connecticut and Illinois over their attempts to regulate prediction markets. While each state has its own angle of attack, from election issues to underage betting, they’re all broadly claiming that prediction markets are just illegal gambling businesses. Today’s ruling marks the first federal-level decision in one of these cases and it’s in favor of the prediction markets.


New Jersey sent Kalshi a cease and desist letter in 2025, claiming the service violated the state’s ban on collegiate sports betting. Kalshi escalated the situation and sued New Jersey, arguing that its sports contracts are actually swaps, a type of financial investment that’s (conveniently) regulated by the CFTC. A lower-court judge previously sided with Kalshi, prompting New Jersey to appeal. Two of the three judges in that appeal ruled that Kalshi’s sports-related event contracts were indeed swaps. Kalshi CEO Tarek Mansour called Monday’s ruling “a big win for the industry.”

US Circuit Judge Jane Richards Roth dissented, writing that Kalshi's "offerings were virtually indistinguishable from the betting products available on online sportsbooks, such as DraftKings and FanDuel."

New Jersey Attorney General Jennifer Davenport has the option to ask the full 3rd Circuit to rehear the case, and the issue is also pending in several other courts.



New GPUBreach attack enables system takeover via GPU rowhammer



A new attack, dubbed GPUBreach, can induce Rowhammer bit-flips in GPU GDDR6 memory to escalate privileges and achieve full system compromise.

GPUBreach was developed by a team of researchers at the University of Toronto, and full details will be presented at the upcoming IEEE Symposium on Security & Privacy on April 13 in Oakland.

The researchers demonstrated that Rowhammer-induced bit flips in GDDR6 can corrupt GPU page tables (PTEs) and grant arbitrary GPU memory read/write access to an unprivileged CUDA kernel.


An attacker may then chain this into a CPU-side escalation by exploiting memory-safety bugs in the NVIDIA driver, potentially leading to complete system compromise without the need to disable Input-Output Memory Management Unit (IOMMU) protection.

GPUBreach attack steps (Source: University of Toronto)
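To make the page-table angle concrete, here is a simplified, hypothetical model of a page-table entry. The field layout below is invented for illustration and is not NVIDIA's actual PTE format, but it shows why a single well-placed bit flip is so powerful: one flipped bit can redirect a mapping to a different physical frame, or silently make a read-only page writable.

```python
# Hypothetical, simplified PTE model (not NVIDIA's real format): low bits
# hold permission flags, high bits hold the physical frame number that a
# virtual page maps to. A single Rowhammer flip in either field breaks
# the isolation the page table is supposed to enforce.

PTE_PRESENT  = 1 << 0
PTE_WRITABLE = 1 << 1
FRAME_SHIFT  = 12          # assume 4 KiB pages

def make_pte(frame, writable=False):
    pte = (frame << FRAME_SHIFT) | PTE_PRESENT
    if writable:
        pte |= PTE_WRITABLE
    return pte

def decode_pte(pte):
    return {
        "frame":    pte >> FRAME_SHIFT,
        "writable": bool(pte & PTE_WRITABLE),
        "present":  bool(pte & PTE_PRESENT),
    }

victim = make_pte(frame=0x41, writable=False)   # a read-only mapping

# One flip in the frame field redirects the mapping entirely:
flipped_frame = victim ^ (1 << (FRAME_SHIFT + 5))
# ...and one flip in bit 1 silently turns the page writable:
flipped_perm = victim | PTE_WRITABLE

print(decode_pte(victim))         # original mapping, read-only
print(decode_pte(flipped_frame))  # now points at a different physical frame
print(decode_pte(flipped_perm))   # same frame, but writable
```

In the real attack the flipped entry lives in GPU memory the attacker has hammered into a vulnerable physical row, which is what lets an unprivileged CUDA kernel end up with arbitrary read/write.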

IOMMU is a hardware unit that protects against direct memory attacks. It controls and restricts how devices access memory by managing which memory regions are accessible to each device.

Despite being an effective measure against most direct memory access (DMA) attacks, IOMMU does not stop GPUBreach.


“GPUBreach shows that GPU Rowhammer attacks can move beyond data corruption to real privilege escalation,” the researchers explain.

“By corrupting GPU page tables, an unprivileged CUDA kernel can gain arbitrary GPU memory read/write, and then chain that capability into CPU-side escalation by exploiting newly discovered memory-safety bugs in the NVIDIA driver.”

“The result is system-wide compromise up to a root shell, without disabling IOMMU, unlike contemporary works, making GPUBreach a more potent threat.”

Overview of how GPUBreach works (Source: University of Toronto)

The same researchers previously demonstrated GPUHammer, the first attack showing that Rowhammer attacks on GPUs are practical, prompting NVIDIA to issue a warning to users and recommend enabling the System Level Error-Correcting Code mitigation to block such attempts on GDDR6 memory.

However, GPUBreach takes the threat to the next level, showing that it is possible not only to corrupt data but also to gain root privileges with IOMMU enabled.


The researchers demonstrated the attack on an NVIDIA RTX A6000 GPU with GDDR6, a model widely used in AI development and training workloads.

Comparison of GPUBreach to other GPU attacks (Source: University of Toronto)

Disclosure and mitigations

The University of Toronto researchers reported their findings to NVIDIA, Google, AWS, and Microsoft on November 11, 2025.

Google acknowledged the report and awarded the researchers a $600 bug bounty.

NVIDIA stated that it may update its existing security notice from July 2025 to include the newly discovered attack possibilities.

As demonstrated by the researchers, IOMMU alone is insufficient if GPU-controlled memory can corrupt trusted driver state, so users at risk should not rely solely on that security measure.


Error Correcting Code (ECC) memory helps correct single-bit flips and detect double-bit flips, but it is not reliable against multi-bit flips.

Ultimately, the researchers underlined that GPUBreach is completely unmitigated for consumer GPUs without ECC.
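The single-bit versus multi-bit distinction comes from how ECC codes are built. Here is a minimal sketch using a Hamming(7,4) code extended with an overall parity bit (SECDED: single-error-correct, double-error-detect); real GDDR6 ECC is more elaborate, but the failure mode against multi-bit flips is the same.

```python
# SECDED sketch: Hamming(7,4) plus an overall parity bit. One flipped bit
# is located by the syndrome and corrected; two flips leave an even total
# parity with a nonzero syndrome, so they are detected but uncorrectable.

def encode(nibble):
    """Encode 4 data bits as 8 bits: Hamming(7,4) plus overall parity."""
    d = [(nibble >> i) & 1 for i in range(4)]
    p1 = d[0] ^ d[1] ^ d[3]          # covers codeword positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]          # covers positions 2,3,6,7
    p3 = d[1] ^ d[2] ^ d[3]          # covers positions 4,5,6,7
    bits = [p1, p2, d[0], p3, d[1], d[2], d[3]]   # positions 1..7
    overall = 0
    for b in bits:
        overall ^= b
    return bits + [overall]

def decode(bits):
    """Return (status, nibble). Corrects 1 flip; detects (not fixes) 2."""
    syndrome = 0
    for pos in range(1, 8):          # XOR of set-bit positions
        if bits[pos - 1]:
            syndrome ^= pos
    parity = 0
    for b in bits:
        parity ^= b
    if syndrome == 0 and parity == 0:
        status = "ok"
    elif parity == 1:                # odd number of flips: single-bit error
        status = "corrected"
        if syndrome:
            bits = bits[:]
            bits[syndrome - 1] ^= 1  # flip the offending bit back
    else:                            # even flips, nonzero syndrome
        return "double-bit error detected", None
    d = [bits[2], bits[4], bits[5], bits[6]]
    return status, sum(b << i for i, b in enumerate(d))

word = encode(0b1011)
one_flip = word[:]; one_flip[3] ^= 1
two_flips = word[:]; two_flips[0] ^= 1; two_flips[5] ^= 1
print(decode(word))       # ('ok', 11)
print(decode(one_flip))   # ('corrected', 11)
print(decode(two_flips))  # ('double-bit error detected', None)
```

Rowhammer attacks defeat this class of defense by inducing three or more flips in one protected word, which the code can neither correct nor, in the worst case, even detect.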

The researchers will publish the full details of their work, including a technical paper and a GitHub repository with the reproduction package and scripts, on April 13.



Spain’s Xoople raises $130m to build the data infrastructure AI needs to understand Earth


In short: Xoople, a Madrid-based geospatial data company founded in 2019, has raised a $130 million Series B led by Nazca Capital, bringing its total funding to $225 million and pushing its valuation into unicorn territory. The round was co-invested by MCH Private Equity, CDTI (the Spanish government’s technology development fund), Buenavista Equity Partners, and Endeavor Catalyst. Alongside the raise, Xoople announced a partnership with US space and defence contractor L3Harris Technologies to build sensors for its own satellite constellation, designed to produce Earth surface data it says will be “two orders of magnitude better than existing monitoring systems.” The company’s EarthAI platform, built on Microsoft Azure and distributed through Microsoft and Esri, delivers continuous surface intelligence for insurers, farmers, governments, and infrastructure operators.

Xoople has spent seven years building something that did not previously exist in a commercially deployable form: a continuous, AI-native data layer for the Earth's surface. The Madrid startup, founded in 2019, emerged from that development period with €115 million in prior funding, a platform embedded in the two most widely used enterprise geospatial ecosystems in the world, and a thesis that the AI era will require a fundamentally different approach to Earth observation — one designed from the ground up for machine learning rather than adapted from satellite imagery workflows built for human analysts. The $130 million Series B, led by Nazca Capital, confirms that investors believe that thesis is credible enough to back at scale.

CEO and co-founder Fabrizio Pirondini told TechCrunch the raise brings Xoople’s total funding to $225 million and puts the company in unicorn territory on valuation. The round was joined by MCH Private Equity, CDTI, the Spanish government-backed technology development fund that has also backed Nazca Capital’s aerospace and defence fund, Buenavista Equity Partners, and Endeavor Catalyst.

What EarthAI actually does

Xoople’s core product, EarthAI, is an end-to-end Earth intelligence system. It ingests continuous surface data, currently sourced from government spacecraft and third-party satellite networks, and processes it into AI-ready datasets that can be queried for change detection, risk prediction, and environmental monitoring. The key design choice is continuity: rather than producing point-in-time images for human review, EarthAI is built to stream a persistent, structured view of the planet’s surface into AI models that need regular, reliable ground truth.
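As a rough illustration of that design choice (a hypothetical sketch, not Xoople's actual EarthAI pipeline), continuous change detection boils down to diffing successive structured observations of the same surface cells and emitting only the cells whose measured value moved beyond a noise threshold, rather than handing a human analyst two images to compare.

```python
# Hypothetical sketch of continuous change detection over structured
# surface data: each pass is a mapping of cell id -> measured value
# (e.g. a vegetation index), and only significant deltas are reported.

def detect_changes(previous, current, threshold=0.1):
    """Return (cell_id, old, new) for cells that changed significantly."""
    changes = []
    for cell_id, new_value in current.items():
        old_value = previous.get(cell_id)
        if old_value is None or abs(new_value - old_value) > threshold:
            changes.append((cell_id, old_value, new_value))
    return changes

# Two passes over the same grid of surface cells:
pass_1 = {"cell_00": 0.82, "cell_01": 0.79, "cell_02": 0.31}
pass_2 = {"cell_00": 0.81, "cell_01": 0.52, "cell_02": 0.30}

for cell, old, new in detect_changes(pass_1, pass_2):
    print(f"{cell}: {old} -> {new}")   # only cell_01 crossed the threshold
```

The "AI-ready" framing in the article amounts to guaranteeing that the inputs to this kind of comparison arrive continuously, aligned to the same cells, so downstream models never have to reconcile ad-hoc imagery.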


The use cases span industries that share a dependence on understanding what is happening on the physical surface of the Earth. For agriculture, EarthAI provides early detection of crop stress, monitors soil health and water conditions, and generates data that enables farmers to participate in carbon credit markets. For insurance, it enables more precise climate risk pricing and real-time verification of natural disaster claims, removing the delay and subjectivity of ground-based assessments. For infrastructure operators, it monitors physical assets for signs of stress or degradation before failures occur. For governments, it supports emergency planning, environmental enforcement, and humanitarian response. Capital flowing into specialised AI applications at the intersection of science, data, and infrastructure has accelerated considerably over the past year, and Xoople sits precisely at that intersection.


The satellite play

The $130 million will fund Xoople’s transition from a platform built on others’ data to one powered by its own. Alongside the Series B, the company announced a partnership with L3Harris Technologies, a US space and defence contractor, to design and manufacture sensors for Xoople’s own satellite constellation. The sensors will collect optical data. Pirondini told TechCrunch that the constellation is designed to produce “a stream of data that is going to be two orders of magnitude better than existing monitoring systems”, a claim that, if borne out, would represent a substantial leap over the imagery quality currently available from commercial earth observation operators.

That claim is where Xoople meets its competitive reality. The company is entering a market that includes Vantor (formerly Maxar Intelligence, rebranded in October 2025), Planet Labs, BlackSky, Airbus Defence and Space, ICEYE, and Capella Space — all of which have satellites already in orbit and established AI-focused data processing pipelines. Companies building the hardware and data layers that AI depends on face a lengthy gap between the announcement of a new approach and its delivery in deployable form, and Xoople’s constellation is not yet in orbit. For now, EarthAI runs on data it did not produce. The L3Harris partnership signals that the proprietary data supply is the next phase.

Distribution before data

Xoople’s strategic sequencing is unusual for an Earth observation company. Most competitors in the space led with hardware — launching satellites, then figuring out distribution. Xoople did the reverse: it spent its first seven years embedding its platform into Microsoft and Esri, the two dominant environments where enterprise buyers, governments, and GIS professionals already live. Neither Microsoft nor Esri has its own proprietary satellite data. Xoople positioned itself to supply that gap from inside the platforms where the purchasing decisions are made.

The Microsoft relationship is structural: Xoople’s platform runs on Azure, and the company is integrated with Microsoft’s Planetary Computer Pro, which delivers AI-powered geospatial insights for enterprise use. Esri, the world’s largest geospatial software company, is a partner distributor. The implication is that when Xoople’s own constellation is operational and its data quality delivers on the “two orders of magnitude” promise, it will have distribution in place that its newer competitors would need years to replicate. The investment flowing into cloud-based AI data infrastructure has made the ability to process and deliver petabytes of Earth surface data at low latency a tractable problem; the scarcity is in the quality and continuity of the underlying data itself.


A Spanish unicorn in a European context

Xoople’s raise is one of the larger deep tech rounds to come out of Spain in recent years, and it lands at a moment when European space and defence investment has been accelerating. Nazca Capital, which led the Series B, runs Spain’s largest private equity fund specialised in aerospace and defence, a fund that also received a €294 million commitment from CDTI and a €40 million investment from the European Investment Fund. The investor composition of the Xoople round (government-backed funds, European private equity, and Endeavor Catalyst, which focuses on high-impact technology entrepreneurs) reflects the persistent tension in European technology between deep technical ambition and the capital required to realise it: the funding is patient, multi-source, and has a public interest dimension that pure venture rounds often lack.

The earth observation market was valued at $7.04 billion in 2025 and is projected to reach $14.55 billion by 2034, growing at just over 8% annually. Xoople is betting that as AI models grow more capable and more dependent on real-world data, the market for continuous, structured Earth surface intelligence, rather than periodic imagery, will grow faster than that aggregate. A year in which the appetite for AI applications in climate, infrastructure, and environmental risk grew considerably provided the validation Xoople needed; the $130 million is the bet that the second half of the decade will prove it right at scale.



Closing the data security maturity gap: Embedding protection into enterprise workflows


Presented by Capital One


Data security remains one of the least mature domains in enterprise cybersecurity. According to IBM, 35% of breaches in 2025 involved unmanaged data sources or “shadow data.” This reveals a systemic lack of basic data awareness. It’s not because of a lack of tooling or investment. It’s because many organizations still struggle with the most fundamental questions: What data do we have? Where does it live? How does it move? And who is responsible for it?

In an increasingly complex ecosystem of data sources, cloud platforms, SaaS applications, APIs, and AI models, those questions are only becoming more difficult to answer. Closing the maturity gap in data security demands a cultural shift where security is no longer treated as an afterthought. Instead, protection is embedded throughout the full data lifecycle, grounded in a robust inventory, clear classification, and scalable mechanisms that translate policy into automated guardrails.

Visibility as the foundation

The most persistent barrier to data security maturity is basic visibility. Organizations often focus on how much data they hold, but not on what that data is made up of. Does it contain personally identifiable information (PII)? Financial data? Health information? Intellectual property? Without this level of understanding and inventory, it’s a lot tougher to implement meaningful protection.
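A toy stand-in for that kind of detection and classification capability is sketched below. The patterns and category names are illustrative only, not a production-grade classifier; real enterprise scanners use far richer detectors and validation.

```python
# Minimal sketch of classifying records by sensitive-data category.
# Patterns here are illustrative, not exhaustive or production-grade.
import re

PATTERNS = {
    "ssn":         re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
    "email":       re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def classify(text):
    """Return the set of sensitive-data categories found in a string."""
    return {name for name, rx in PATTERNS.items() if rx.search(text)}

records = [
    "Customer note: reached at alice@example.com",
    "Payment failed for card 4111 1111 1111 1111",
    "Shipment delayed due to weather",
]
for record in records:
    print(classify(record) or "no sensitive data")
```

The point of the sketch is the output shape: each record gains a classification label, and it is that label (not the raw bytes) that downstream policy acts on.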


This can be avoided, however, by prioritizing enterprise capabilities that detect sensitive data at scale across a large and varied footprint. Detection must be paired with action: deleting data that is no longer needed, and securing the data that remains by aligning enforcement with a well-defined policy.

Mature organizations should start by treating data security as an “understanding your environment” problem. Maintain an inventory, classify what’s in the ecosystem, and align protections with the classification rather than solely relying on perimeter controls or point solutions to scale.

Securing chaotic data

One reason data security has lagged behind other security domains is that data itself is inherently chaotic. Unlike perimeter security, which relies on explicit ports and defined boundaries, data is largely unpredictable. That is to say, the same underlying information may appear across very different formats: structured databases, unstructured documents, chat transcripts, or analytics pipelines. Each may have slightly different encodings or transformations that introduce unforeseen, and often undetected, changes to the data itself.

Human behavior compounds the challenge, with different actions introducing risks in ways that perimeter controls simply can’t anticipate. This could be anything from a credit card number copied into a free-form comment field, a spreadsheet emailed outside its intended audience, or a dataset repurposed for a new workflow.


When protection is bolted on at the end of a workflow, organizations create blind spots. They rely on downstream checks to catch upstream design flaws. Over time, complexity accumulates and the risk of exposure becomes a question of when, not if.

A more resilient model assumes that sensitive data will surface in unexpected places and formats, so protection is embedded from the moment data is captured. Defense-in-depth becomes a design principle: segmentation, encryption at rest and in transit, tokenization, and layered access controls.

Critically, these safeguards travel with the data lifecycle, from ingestion to processing, analytics and publishing. Instead of retrofitting controls, organizations design for chaos. They accept variability as a given and build systems that remain secure even when data diverges from expectations.

Scaling governance with automation

Data security becomes operationally sustainable when governance is enforced through automation from the outset. Coupled with clear expectations, this creates bounded contexts: teams understand what is permitted, under what conditions, and with what protections data can be used effectively.


This matters more than ever today. AI systems often require access to huge volumes of data across domains, which makes policy implementation particularly challenging. Doing so effectively and safely requires deep understanding, strong governance policies, and automated protection.

Security techniques such as synthetic data and token replacement enable organizations to preserve analytical context while making sensitive values harder to read. Policy-as-code patterns, APIs, and automation can handle tokenization, deletion, retention constraints, and dynamic access controls. With guardrails built into the platforms they use, engineers can focus more on innovating with data and elevating business outcomes securely.
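A minimal sketch of the token-replacement idea follows. It is illustrative only: real tokenization services use hardened, access-controlled vaults or format-preserving encryption rather than an in-memory dictionary, but the core contract is the same.

```python
# Toy tokenization vault (illustrative only). Sensitive values are swapped
# for opaque tokens; the same value always maps to the same token, so
# joins and analytics still work, while only the vault can reverse it.
import secrets

class TokenVault:
    def __init__(self):
        self._by_value = {}   # sensitive value -> token
        self._by_token = {}   # token -> sensitive value

    def tokenize(self, value):
        """Return a stable opaque token for a sensitive value."""
        if value not in self._by_value:
            token = "tok_" + secrets.token_hex(8)
            self._by_value[value] = token
            self._by_token[token] = value
        return self._by_value[value]

    def detokenize(self, token):
        """In a real system: authorized, audited callers only."""
        return self._by_token[token]

vault = TokenVault()
t1 = vault.tokenize("4111-1111-1111-1111")
t2 = vault.tokenize("4111-1111-1111-1111")
assert t1 == t2                               # same value -> same token
assert vault.detokenize(t1) == "4111-1111-1111-1111"
print(t1)                                     # opaque, safe to store downstream
```

Because the token is deterministic per value, datasets remain joinable and analyzable while the sensitive plaintext stays confined to the vault boundary.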

AI systems must also operate within the same governance and monitoring expectations as human workflows. Permissions, telemetry, and controls around what models can access, along with the information they can publish, are essential. Governance will always introduce a degree of friction. The goal is to make that friction well understood, navigable and increasingly automated. Confirming purpose, registering a use case, and provisioning access dynamically based on role and need should be clear, repeatable processes.
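What such a repeatable access decision can look like as policy-as-code is sketched below. The roles, data categories, and rules are invented for the example; the design point is that the grant (and whether tokenization is required) is derived mechanically from role, data class, and declared purpose, so it is auditable and identical for humans and AI systems.

```python
# Illustrative policy-as-code sketch (roles, categories, and rules are
# invented for this example). Access is derived from role + data class +
# declared purpose, making every decision repeatable and auditable.

POLICY = {
    # (role, data_class) -> conditions under which access is granted
    ("analyst",  "pii"):    {"allowed_purposes": {"fraud_review"},   "tokenized": True},
    ("analyst",  "public"): {"allowed_purposes": None,               "tokenized": False},
    ("ml_model", "pii"):    {"allowed_purposes": {"registered_use"}, "tokenized": True},
}

def authorize(role, data_class, purpose):
    """Return (granted, must_tokenize) for a data-access request."""
    rule = POLICY.get((role, data_class))
    if rule is None:
        return False, False          # no rule: deny by default
    purposes = rule["allowed_purposes"]
    if purposes is not None and purpose not in purposes:
        return False, False          # purpose not registered for this pairing
    return True, rule["tokenized"]

print(authorize("analyst", "pii", "fraud_review"))    # (True, True)
print(authorize("analyst", "pii", "marketing"))       # (False, False)
print(authorize("ml_model", "pii", "registered_use")) # (True, True)
```

Deny-by-default plus explicit purpose registration is what turns governance friction into the "well understood, navigable" kind the article describes.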

At enterprise scale, this requires centralized capabilities that implement cybersecurity policy in the data domain. This includes detection and classification engines, tokenization and detokenization services, retention enforcement, and ownership and taxonomy mechanisms that cascade risk management expectations into daily execution.


When done well, governance becomes an enablement layer rather than a bottleneck. Metadata and classification drive protection decisions automatically while accelerating business discovery and usage. Data is protected across its lifecycle by strong defenses like tokenization and deleted when required by regulation or internal policy. There should be no need for teams to “touch the data” manually for every control decision, with policy enforced by design.

Building for the future

Put simply, closing the data security maturity gap is less about adopting a single breakthrough technology and more about operational discipline. Build the map. Classify what you have. Embed protection into workflows so that security is repeatable at scale.

For business leaders seeking measurable progress over the next 18–24 months, three priorities stand out.

First, establish a robust inventory and metadata-rich map of the data ecosystem. Visibility is non-negotiable. Second, implement classification tied to clear, actionable policy expectations. Make it obvious what protections each category demands. And finally, invest in scalable, automated protection schemes that integrate directly into development and data workflows.


When protection shifts from reactive bolt-on controls to proactive built-in guardrails, compliance becomes simpler, governance becomes stronger, and AI readiness becomes achievable, without compromising rigor.

Learn more about how Capital One Databolt, the enterprise data security solution from Capital One Software, can help your business become AI-ready by securing sensitive data at scale.


Andrew Seaton is Vice President, Data Engineering – Enterprise Data Detection & Protection, Capital One.


Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. For more information, contact sales@venturebeat.com.



UK Politicians Continue To Miss The Point In Latest Social Media Ban Proposal


from the does-no-one-remember-being-a-teen? dept

The UK is moving forward with its efforts to ban social media for young people. Ahead of this week’s House of Lords debate on the topic, we’re getting you situated with a primer on what’s been happening and what it all means.

What was the last vote about? 

On 9 March, the House of Commons discussed amendments tabled by the House of Lords in the government’s flagship legislation, the Children’s Wellbeing and Schools Bill. 

The House of Lords previously tabled an amendment to “prevent children under the age of 16 from becoming or being users” of “all regulated user-to-user services,” to be implemented by “highly-effective age assurance measures,” which effectively banned under-16s from social media. When this proposal came before the House of Commons, MPs defeated it by 307 votes to 173. 

Instead, the Commons proposed its own amendment: enabling the Secretary of State to introduce provisions “requiring providers of specified internet services” to prevent access by children, under age 18 rather than 16, to specified internet services or to specified features; and to restrict access by children to specified internet services which ministers provide. 


Who does this give powers to?

The Commons proposal redirects power from the UK Parliament and the UK’s independent telecom regulator Ofcom to the Secretary of State for Science, Innovation and Technology, currently Liz Kendall, who will be able to restrict internet access for young people and determine what content is considered harmful…just because she can. The amendment also empowers the Secretary of State to limit VPN use for under-18s, as well as restrict access to addictive features and change the age of digital consent in the country; for example, preventing under-18s from playing games online after a certain time.

Why is this a problem? 

This process is devoid of checks or accountability mechanisms, as ministers will not be required to demonstrate specific harms to young people, which essentially unravels years-long efforts by Ofcom to assess online services according to their risks. And given the moment the UK is currently in, such as refusing to protect trans and LGBTQ+ communities and inflaming hostile and racist discourse, it is not unlikely that we’ll see ministers start restricting content that they ideologically or morally oppose, rather than content that is harmful as established by evidence and assessed pursuant to established human rights principles.

We know from other jurisdictions like the United States that legislation seeking to protect young people typically sweeps up a slew of broadly-defined topics. Some block access to websites that contain some “sexual material harmful to minors,” which has historically meant explicit sexual content. But some states are now defining the term more broadly so that “sexual material harmful to minors” could encompass material like sex education; others simply list a variety of vaguely-defined harms. In either instance, this bill would enable ministers to target LGBTQ+ content online by pushing it behind an under-18s age gate, and this risk is especially clear given what we already know about platform content policies.

How will this impact young people? 

The internet is an essential resource for young people (and adults) to access information, explore community, and find themselves. Beyond being spaces where people can share funny videos and engage with enjoyable content, social media enables young people to engage with the world in a way that transcends their in-person realm, as well as find information they may not feel safe to access offline, such as about family abuse or their sexuality. In severing this connection to people and information by banning social media, politicians are forcing millions of young people into a dark and censored world. 


How did each party vote? 

The initial push to ban under-16s from social media came from the Conservative Party, who have since accused the UK’s Prime Minister Keir Starmer of “dither and delay” for not committing to the ban. The Liberal Democrats have also called this “not good enough.” The Labour Party itself is split, with 107 Labour Party MPs abstaining in the vote on the House of Lords amendment. 

But we know that the issue of young people’s online safety is a polarizing topic that politicians have—and will continue to—weaponize for public support, regardless of their actual intentions. This is why we will continue to urge policymakers and regulators to protect people’s rights and freedoms online at all moments, and not just take the easy route for a quick boost in the polls.

How does this bill connect to the Online Safety Act?

The draft Children’s Wellbeing and Schools Bill that came from the Lords provided that any regulation pertaining to the well-being of young people on social media “must be treated as an enforceable requirement” under the Online Safety Act. The Commons amendment, however, starts out by inserting a new clause that amends the Online Safety Act.

For more than six years, we’ve been calling on the UK government to pass better legislation around regulating the internet, and when the Online Safety Act passed we continued to advocate for the rights of people on the internet—including young people—as Ofcom implemented the legislation. This has been a protracted effort by civil society groups, technologists, tech companies, and others participating in Ofcom’s consultation process and urging the regulator to protect internet users in the UK.


The MPs’ amendment essentially rips this up. Technology Secretary Liz Kendall recently said that ministers intended to go further than the existing Online Safety Act because it was “never meant to be the end point, and we know parents still have serious concerns. That is why I am prepared to take further action.” But when this further action means empowering herself to make arbitrary decisions on content and access, and banning under-18s from social media, it causes much more harm than it solves.

Is the UK alone in pushing legislation like this? 

Sadly, no. Calls to ban social media access for young people have gained traction since Australia became the first country in the world to enforce one back in December. On 5 March, Indonesia announced a ban on social media and other “high-risk” online platforms for users under 16. A few days later, new measures came into effect in Brazil that restrict social media access for under-16s, who must now have their accounts linked to a legal guardian. Other countries like Spain and the Philippines have this year announced plans to ban social media for under-16s, with legislation currently pending to implement this.

What are the next steps?

The Children’s Wellbeing and Schools Bill returns to the House of Lords on 25 March for consideration of the new Commons amendments. The bill will only become law if both Houses agree to the final draft. 

We will continue to stand up against these proposals, not only to defend young people’s free expression rights, but also to safeguard the free flow of information that is vital to a democratic society. The issue of online safety is not solved through technology alone, and certainly not through a ban; young people deserve a more intentional approach to protecting their safety and privacy online, not a lazy strategy that causes more harm than it solves.


We encourage politicians in the UK to look into what is best, not what is easy, and explore less invasive approaches to protect all people from online harms. 

Republished from the EFF’s Deeplinks blog.

Filed Under: social media, social media ban, teens, uk


Source link

Continue Reading

Tech

AI data centers are cooking the planet, creating extreme heat islands that affect millions in cities and rural regions alike

Published

on


  • AI data centers are producing extreme heat islands that extend miles beyond facilities
  • Over 340 million people experience elevated temperatures due to hyperscale AI facilities
  • Extreme temperature spikes of up to 16.4 °F have been recorded near data centers

The expansion of AI-driven data centers is having a more immediate environmental impact than previously understood, experts have warned.

A research team led by Andrea Marinoni at the University of Cambridge claims these facilities, often sprawling over a million square feet, are not only consuming massive amounts of energy but also generating extreme local heating effects, known as heat islands.

Source link

Continue Reading

Tech

Netflix is expanding into kids’ games with a new standalone app

Published

on

Netflix is launching a new standalone app for kids’ games called Netflix Playground, the company announced on Monday. Netflix Playground is available as part of a Netflix subscription, and doesn’t have any ads or in-app purchases.

Netflix says the app gives children access to an “ever-growing” library of games for kids. Netflix Playground is launching with titles featuring characters from popular kids’ shows.

The app, which is designed for children ages eight and under, is now available in the U.S., Canada, the U.K., Australia, the Philippines, and New Zealand. It will roll out worldwide on April 28. The app is available on both iOS and Android.

It can be accessed offline without a mobile or Wi-Fi connection, which the company says makes it the “perfect companion for long airplane rides or grocery trips.”

Image Credits: Netflix

For example, one game is titled “Playtime With Peppa Pig,” and sees players “jump into Peppa’s world with a collection of playful activities.” There’s also a “Sesame Street” game where players practice matching with memory cards or coordination with connect-the-dots. Other titles include “Let’s Color,” “Storybots,” “Bad Dinosaurs,” and more.

“We’re building a world where kids can not only watch their favorite stories, they can step inside them and interact with their favorite characters,” said John Derderian, Netflix Vice President of Animation Series + Kids & Family TV, in a press release. “We’re creating a seamless destination for discovery, learning, and play. Whether it’s reuniting with Hank and the ‘Trash Truck’ crew for new adventures or making a smoothie with ‘Peppa Pig,’ watching and playing on Netflix can be the fun and easiest part of every family’s day.”

Netflix first launched games in 2021 and had ambitious plans for the space, but has since dialed them back after its titles failed to gain traction. The streaming giant has also shut down several video game studios like Boss Fight, Spry Fox, and an AAA studio.



Late last year, Netflix forayed into TV gaming with a slate of new party titles meant to be played in groups, including TV versions of Tetris and Pictionary. The company has also said it will prioritize cloud gaming, but has noted that it’s still in the early stages of these plans.

Source link

Continue Reading
