Building the next generation of robots for successful integration into our homes, offices, and factories is more than just solving the hardware and software problems – we also need to understand how they will be perceived and how they can work effectively with people in those spaces.
In summer 2025, RAI Institute set up a free popup robot experience in the CambridgeSide mall, designed to let people experience state-of-the-art robotics firsthand. While news stories about robots and AI are common, with some being overly critical and some overly optimistic, most people have not encountered robots in the flesh (or metal), as it were. With no direct experience, their opinions are largely shaped by pop culture and social media, both of which focus more on sensational stories than on accurate information about how the robots might be used effectively and where the technology still falls short. Our goal with the popup was two-fold: first, to give people an opportunity to see robots that they would otherwise not have a chance to experience and second, to better understand how the public feels about interacting with these robots.
Designing a Robot Experience for the General Public
Some earlier versions of legged robots, built by the RAI Institute’s Executive Director, Marc Raibert. RAI Institute
The ANYmal by ANYbotics (left) and a previous model of the RAI Institute’s UMV (right). RAI Institute
The pop-up space had two areas: a museum area where people could see historical and modern robots, including some RAI Institute builds like the UMV, and an interactive experience called “Drive-a-Spot.” The latter was a driving arena where anyone who came by could take the controls of a Spot quadruped, one of the more recognizable commercially available robots today.
The guest robot drivers used a custom controller built on an adaptive video game controller, designed so that anyone of any age could use it. It featured basic controls: move forward, back, left, right, adjust height, sit, stand, and tilt. The buttons were large so that tiny or elderly hands could use the controller, and the people who drove Spot ranged in age from two to over 90.
The guest robot drivers used a custom controller built on an adaptive video game controller that was designed so that anyone of any age could use it. RAI Institute
The demo area was designed to be a bit challenging for the Spot robot to maneuver in – it contained tight passages, low obstacles to step over, a barrier to crouch under, and taller objects the robot had to avoid. Much to the surprise of many of our guests, Spot can autonomously adjust itself to traverse or avoid those obstacles even while being driven with the joystick.
The driving arena’s theme rotated every few weeks across four scenarios: a factory, a home, a hospital, and an outdoor/disaster environment. These were chosen to contrast settings where robots are broadly accepted (industrial, emergency response) with settings where public ambivalence is well-documented (domestic, healthcare).
The visitors who chose to drive the Spot robot could also participate in a short survey before and after their driving experience. The survey focused on two core dimensions:
Comfort: how comfortable would you feel if you encountered a robot in a factory, home, hospital, office, or outdoor/disaster scenario?
Suitability: how well would this robot work in each of those contexts?
The survey also recorded emotional reactions immediately after driving, likelihood to recommend the experience, and open-ended responses about what they found memorable or surprising. The researchers were careful to separate the environment participants drove through from the scenarios they were asked to evaluate in the survey. This distinction is important for interpreting the results given below.
Did Interacting with the Robot Change People’s Feelings about Robots?
Of the approximately 10,000 guests who visited the Robot Lab, 10 percent drove the Spot and opted in to our surveys. Of those surveyed, more than 65% had seen images or videos of Spot robots online, but most had never seen one of the robots in person.
Increased Comfort Through Experience
Across all five contexts presented in the survey (factory, home, hospital, office, and outdoor/disaster scenarios), comfort scores increased significantly after the driving session. The effects were small to moderate in magnitude, but they were consistent and statistically robust after correcting for multiple comparisons, across all participants from children to older adults.
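The article doesn’t say which correction procedure was used. Purely as an illustration of what “correcting for multiple comparisons” involves, the sketch below applies a Holm-Bonferroni step-down adjustment to a set of invented p-values, one per survey context; the numbers are hypothetical, not the study’s results.

```python
# Hypothetical illustration only: Holm-Bonferroni step-down correction across
# the five survey contexts. The p-values are invented for the example; the
# article does not publish per-context statistics.

def holm_bonferroni(pvalues, alpha=0.05):
    """Return {label: significant?} under the Holm step-down procedure."""
    m = len(pvalues)
    ordered = sorted(pvalues.items(), key=lambda kv: kv[1])  # ascending p-values
    results, failed = {}, False
    for rank, (label, p) in enumerate(ordered):
        threshold = alpha / (m - rank)  # alpha/m, then alpha/(m-1), ...
        if failed or p > threshold:
            failed = True               # once one test fails, all later tests fail too
            results[label] = False
        else:
            results[label] = True
    return results

# Made-up p-values for pre/post comfort changes in each context.
p_by_context = {
    "factory": 0.21, "home": 0.004, "hospital": 0.012,
    "office": 0.019, "outdoor/disaster": 0.0008,
}

for context, significant in holm_bonferroni(p_by_context).items():
    print(f"{context:17s} significant after correction: {significant}")
```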
The largest gain appeared in the outdoor/disaster context, which started with low comfort despite high perceived suitability. People already thought Spot would be useful in search-and-rescue scenarios; they just weren’t comfortable with it performing in that scenario. This discomfort may stem from media portrayals of quadruped robots in military contexts. A few minutes of hands-on control appears to partially dissolve that apprehension.
Participants who drove through the factory-themed arena showed no significant increase in comfort, but that scenario already had the highest baseline rating of any context, leaving little room for improvement.
No matter their previous experience, most people were neutral about having a Spot robot in their home before their driving experience. However, after the experience of controlling the Spot robot, people had a statistically significant increase in their comfort at having a Spot in their home and also felt that a Spot robot was more suitable for work in any environment, not just the one they had driven it in.
Better Understanding of Where Robots Can Fit into Daily Life
Perceived suitability for Spot to operate in each context also increased. However, the pattern in the data is different. The largest gains weren’t in the high-baseline industrial and outdoor contexts. They were in home, office, and hospital – the very environments where people started out most skeptical.
Participants who drove the Spot robot in a home-themed environment didn’t just consider homes more suitable for robots; they also rated hospitals and offices as more suitable. This result suggests that hands-on control alters something more fundamental than just context-specific familiarity. It may change a person’s underlying understanding of a robot’s capabilities and, consequently, where they believe robots are appropriate.
Results by Demographic
The hands-on experience seems to be similarly effective across genders, although it does not completely eliminate existing disparities. For example, men reported higher baseline comfort than women across all five contexts. However, all genders improved at similar rates after interaction. The gap didn’t significantly widen or close in most contexts, though it did narrow in factory and office settings.
Age effects were more context dependent. Children (aged 8–17) rated factory environments as less comfortable and less suitable before the study. However, this could be because most children do not have experience with factory settings or industrial environments. After interaction, this gap largely persisted. By contrast, children showed stronger gains in office comfort than older adults and entered the study rating home contexts more favorably than adults did.
Participants ranged from age 8 to over age 75. RAI Institute
Participants who had previously driven Spot (mainly robotics professionals) began with higher comfort across the board. But after the hands-on session, people with no prior exposure caught up to experienced drivers. This level of familiarity would be difficult to replicate with images and videos alone.
Post-Interaction Results
Post-interaction emotional data was overwhelmingly positive. “Excitement” was reported by 74% of participants, “happiness” by 50%, and only 12% reported “nervousness.” Over 55% rated the experience as “brilliant” and 62% said they were very likely to recommend it to a friend.
The open-ended responses added a lot more color. The most commonly mentioned moments were locomotion and terrain adaptation (22%), including the way Spot navigated steps, tight spaces, and uneven ground, and expressive tilt movements (also 22%), which people found surprisingly dog-like or dance-like. A smaller set of responses (3%) described anthropomorphic reactions: worrying about “hurting” the robot or finding its behavior “silly” in a way that prompted genuine emotional response.
When asked what tasks they’d want a robot to perform, responses shifted meaningfully. Before driving, answers clustered around domestic assistance and heavy or hazardous labor. After driving, domestic help remained prominent, but entertainment and play jumped from 7.5% to 19.4%. Companionship also appeared at 5%. References to hazardous or industrial tasks declined as people who had operated the robot began imagining it as a companion and playmate, not just a labor-replacement tool.
Key Takeaways from The Robot Lab
In the not-so-distant future, robots will become more common in public and private spaces. But whether that integration into daily life will be accepted by the general public remains to be seen. The standard approach to building acceptance has been passive exposure such as videos, exhibits, and articles. This study suggests that giving people agency and letting them actually operate a robot is a qualitatively different intervention.
Short, well-designed, hands-on encounters can raise comfort in precisely the social domains where ambivalence is highest and where future robotics deployment will likely take place. This hands-on experience shouldn’t be limited to tech conferences and museums; its value may go well beyond entertainment.
Fun for all ages! RAI Institute
We consider the popup a success, but as with all experiments, we also learned a lot along the way. In addition to the increased comfort with robots, we found that the guests to our space really enjoyed talking to the robotics experts who staffed the location. For many people, the opportunity to talk to a roboticist was as unique as the opportunity to drive a robot, and in the future, we are excited to continue to share our technical work as well as the experiences of our humans in addition to our humanoids.
Does building a space where folks can experience robots firsthand have the potential to create meaningful, long-term attitude shifts? That remains an open question. But the effect’s direction and consistency across different situations, ages, and genders are hard to ignore.
The Apple vs. Epic Games saga over App Store fees continues, as Apple hopes the Supreme Court will rule in its favor the second time around and possibly stop previous punishments from being enforced.
Apple’s control of the App Store on iPhone continues to be challenged in court
The Supreme Court will soon have to weigh in on Apple’s fees for app-related external purchases, after the United States Court of Appeals for the Ninth Circuit denied a request for a rehearing in March 2026. Apple has been fighting a December 2025 decision that sought to lower its 27% fee on purchases made outside the App Store.
The Federal Communications Commission continued its crackdown on Chinese tech on Friday, issuing a new proposal that would extend an existing ban on certain companies’ products to models that were previously authorized.
In 2021, companies such as Huawei, Hikvision, Dahua, Hytera and ZTE were added to the FCC’s Covered List, a record of companies and products that the FCC believes pose a national security risk to the US, under the Secure Networks Act. The Chinese companies produce mobile phones, security cameras and other tech products.
But the 2021 ban applied only to new models that the FCC hadn’t authorized, and companies were free to keep selling models that had already received the FCC’s stamp of approval. If approved, the new proposal would ban these companies entirely, including their previously approved products.
“Older models of covered equipment pose an unacceptable risk today when imported or marketed in the United States, not only when such equipment is new to the market,” an FCC report from October said.
The proposal will be open for comment until May 6, after which the commission will vote on whether to adopt the rules. The ban won’t affect devices already owned by Americans.
Millions of consumers and businesses rely on Wi-Fi routers, telecommunications equipment and security cameras every day, making these devices critical links in both home and office networks. The Federal Communications Commission shocked the broadband industry on March 23 by effectively banning the sale of future foreign-made Wi-Fi routers (including some of the biggest router brands).
In recent years, Chinese telecommunications companies have faced restrictions on operating in the US. In 2020, The Wall Street Journal cited US officials who reportedly said that Chinese companies, including Huawei, used backdoor access intended for law enforcement to track sensitive information.
But this ban could be implemented quickly. The FCC proposes that “all parties [will have to] cease all importation and marketing activities within 30 days of the effective date of the prohibition.”
This proposal doesn’t reflect a final legal ruling on telecommunications imports, but it does reflect how the Trump administration has been increasingly pressuring Chinese tech companies in recent months.
The foreign-made router ban was only the latest in a string of decisions that have placed restrictions on Chinese tech companies operating in the US.
Quantum resource estimates suggest encryption barriers may fall faster than expected
Reduced qubit requirements bring theoretical attacks closer to practical reality
Bitcoin’s cryptographic foundations face pressure from advancing quantum algorithm efficiency
Google researchers have revised expectations around the computational requirements needed to break widely used cryptographic systems protecting cryptocurrencies.
The company’s latest whitepaper claims a future quantum machine could solve the elliptic curve discrete logarithm problem using significantly fewer resources than previously assumed.
Earlier estimates suggested millions of qubits would be required to break encryption schemes such as secp256k1, which underpins Bitcoin security.
New quantum findings reduce crypto security timelines
The new findings indicate fewer than 500,000 physical qubits could be sufficient, representing a substantial reduction in expected hardware requirements.
The research outlines two quantum circuit designs capable of executing Shor’s algorithm, requiring under 1,500 logical qubits and tens of millions of quantum gate operations.
Under standard assumptions about hardware performance, these computations could be completed within minutes on a sufficiently advanced system.
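Taken at face value, the quoted figures imply a rough overhead of physical to logical qubits. Here is a quick back-of-the-envelope check using only the numbers reported above; the baseline used for comparison is a hypothetical stand-in, since the article says only "millions."

```python
# Back-of-the-envelope arithmetic from the figures quoted in the article.
# The comparison baseline is a hypothetical stand-in, not a number from the whitepaper.

physical_qubits = 500_000   # "fewer than 500,000 physical qubits"
logical_qubits = 1_500      # "under 1,500 logical qubits"

overhead = physical_qubits / logical_qubits
print(f"Implied physical qubits per logical qubit: ~{overhead:.0f}")  # ~333

hypothetical_earlier_estimate = 2_000_000  # stand-in for the "millions" cited previously
reduction = hypothetical_earlier_estimate / physical_qubits
print(f"Reduction vs. a hypothetical 2M-qubit estimate: ~{reduction:.0f}x")
```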
This marks a continuation of incremental improvements in quantum algorithm efficiency, rather than a sudden breakthrough in hardware capabilities.
Google states that the intent behind publishing these findings is not to create alarm but to encourage preparation within the cryptocurrency ecosystem.
“We want to raise awareness on this issue and are providing the cryptocurrency community with recommendations to improve security and stability before this is possible, including transitioning blockchains to post-quantum cryptography,” Google executives Ryan Babbush and Hartmut Neven said.
The company adopted a controlled disclosure strategy, sharing verifiable findings through a zero-knowledge proof mechanism without exposing sensitive implementation details that could enable misuse.
This approach reflects established practices in cybersecurity, where vulnerabilities are disclosed in a coordinated manner to allow time for mitigation.
However, disclosure in blockchain systems introduces additional complexity, as confidence in the network plays a direct role in asset value.
Researchers note that exaggerated or poorly substantiated claims could contribute to instability through fear and uncertainty, even in the absence of immediate technical risk.
Most blockchain systems currently rely on elliptic curve cryptography, which remains secure against classical computing attacks but is vulnerable in a quantum scenario.
Google points to post-quantum cryptography as a viable pathway, emphasizing that alternative algorithms based on more complex mathematical structures are already under development.
These methods aim to resist quantum attacks while maintaining compatibility with existing systems.
Despite the availability of potential solutions, implementation across decentralized networks is expected to be gradual.
The researchers stress the importance of early planning, including reducing exposure of vulnerable wallet addresses and considering policies for inactive or abandoned digital assets.
Kalshi can’t be stopped in New Jersey. A 3rd US Circuit Court of Appeals panel ruled on Monday that New Jersey has no authority to regulate Kalshi’s prediction market allowing people to bet on the outcome of sports events. That power rests with the Commodity Futures Trading Commission, the panel ruled 2-1.
The CFTC is headed by President Donald Trump appointee Michael Selig, who vocally and actively supports prediction markets like Kalshi and Polymarket, calling them “exciting products.” The Trump family agrees: Donald Trump Jr. is a paid adviser to Kalshi and an unpaid adviser to Polymarket, and Truth Social, which is run by the Trump Media and Technology Group, is set to start a prediction market of its own.
Online prediction markets are an emerging phenomenon that allows users to bet on the outcome of basically anything, from local athletic competitions to lethal military invasions. Though they’re new, these marketplaces have already shown evidence of insider trading on an extreme scale, with suspicious bets and big payouts tied to the US and Israel’s military strikes in Iran, and also the US’s brief invasion of Venezuela. According to blockchain analyst DeFi Oasis, fewer than 0.04 percent of Polymarket accounts captured more than 70 percent of profits, totaling $3.7 billion.
Multiple state gaming regulators have filed legal challenges against Kalshi and Polymarket in recent months, and just last week the CFTC sued Arizona, Connecticut and Illinois over their attempts to regulate prediction markets. While each state has its own angle of attack, from election issues to underage betting, they’re all broadly claiming that prediction markets are just illegal gambling businesses. Today’s ruling marks the first federal-level decision in one of these cases and it’s in favor of the prediction markets.
New Jersey sent Kalshi a cease and desist letter in 2025, claiming the service violated the state’s ban on collegiate sports betting. Kalshi escalated the situation and sued New Jersey, arguing that its sports contracts are actually swaps, a type of financial investment that’s (conveniently) regulated by the CFTC. A lower-court judge previously sided with Kalshi, prompting New Jersey to appeal. Two of the three judges in that appeal ruled that Kalshi’s sports-related event contracts were indeed swaps. Kalshi CEO Tarek Mansour called Monday’s ruling “a big win for the industry.”
US Circuit Judge Jane Richards Roth dissented, writing that Kalshi’s “offerings were virtually indistinguishable from the betting products available on online sportsbooks, such as DraftKings and FanDuel.”
New Jersey Attorney General Jennifer Davenport has the option to ask the full 3rd Circuit to rehear the case, and the issue is also pending in several other courts.
A new attack, dubbed GPUBreach, can induce Rowhammer bit-flips on GPU GDDR6 memories to escalate privileges and lead to a full system compromise.
GPUBreach was developed by a team of researchers at the University of Toronto, and full details will be presented at the upcoming IEEE Symposium on Security & Privacy on April 13 in Oakland.
The researchers demonstrated that Rowhammer-induced bit flips in GDDR6 can corrupt GPU page table entries (PTEs) and grant arbitrary GPU memory read/write access to an unprivileged CUDA kernel.
An attacker may then chain this into a CPU-side escalation by exploiting memory-safety bugs in the NVIDIA driver, potentially leading to complete system compromise without the need to disable Input-Output Memory Management Unit (IOMMU) protection.
GPUBreach attack steps Source: University of Toronto
IOMMU is a hardware unit that protects against direct memory attacks. It controls and restricts how devices access memory by managing which memory regions are accessible to each device.
Despite being an effective measure against most direct memory access (DMA) attacks, IOMMU does not stop GPUBreach.
“GPUBreach shows that GPU Rowhammer attacks can move beyond data corruption to real privilege escalation,” the researchers explain.
“By corrupting GPU page tables, an unprivileged CUDA kernel can gain arbitrary GPU memory read/write, and then chain that capability into CPU-side escalation by exploiting newly discovered memory-safety bugs in the NVIDIA driver.”
“The result is system-wide compromise up to a root shell, without disabling IOMMU, unlike contemporary works, making GPUBreach a more potent threat.”
Overview of how GPUBreach works Source: University of Toronto
The same researchers previously demonstrated GPUHammer, the first attack showing that Rowhammer attacks on GPUs are practical, prompting NVIDIA to issue a warning to users and suggesting the activation of the System Level Error-Correcting Code mitigation to block such attempts on GDDR6 memory.
However, GPUBreach is taking the threat to the next level, showing that it is possible not only to corrupt data but also to gain root privileges with IOMMU enabled.
The researchers demonstrated the results on an NVIDIA RTX A6000 GPU with GDDR6, a model widely used in AI development and training workloads.
Comparison to other GPU attacks Source: University of Toronto
Disclosure and mitigations
The University of Toronto researchers reported their findings to NVIDIA, Google, AWS, and Microsoft on November 11, 2025.
Google acknowledged the report and awarded the researchers a $600 bug bounty.
NVIDIA stated that it may update its existing security notice from July 2025 to include the newly discovered attack possibilities.
As demonstrated by the researchers, IOMMU alone is insufficient if GPU-controlled memory can corrupt trusted driver state, so users at risk should not rely solely on that security measure.
Error Correcting Code (ECC) memory helps correct single-bit flips and detect double-bit flips, but it is not reliable against multi-bit flips.
Ultimately, the researchers underlined that GPUBreach is completely unmitigated for consumer GPUs without ECC.
The researchers will publish the full details of their work, including a technical paper and a GitHub repository with the reproduction package and scripts, on April 13.
In short: Xoople, a Madrid-based geospatial data company founded in 2019, has raised a $130 million Series B led by Nazca Capital, bringing its total funding to $225 million and pushing its valuation into unicorn territory. The round was co-invested by MCH Private Equity, CDTI (the Spanish government’s technology development fund), Buenavista Equity Partners, and Endeavor Catalyst. Alongside the raise, Xoople announced a partnership with US space and defence contractor L3Harris Technologies to build sensors for its own satellite constellation, designed to produce Earth surface data it says will be “two orders of magnitude better than existing monitoring systems.” The company’s EarthAI platform, built on Microsoft Azure and distributed through Microsoft and Esri, delivers continuous surface intelligence for insurers, farmers, governments, and infrastructure operators.
Xoople has spent seven years building something that did not previously exist in a commercially deployable form: a continuous, AI-native data layer for the Earth’s surface. The Madrid startup, founded in 2019, emerged from that development period with €115 million in prior funding, a platform embedded in the two most widely used enterprise geospatial ecosystems in the world, and a thesis that the AI era will require a fundamentally different approach to Earth observation — one designed from the ground up for machine learning rather than adapted from satellite imagery workflows built for human analysts. The $130 million Series B, led by Nazca Capital, confirms that investors believe that thesis is credible enough to back at scale.
CEO and co-founder Fabrizio Pirondini told TechCrunch the raise brings Xoople’s total funding to $225 million and puts the company in unicorn territory on valuation. The round was joined by MCH Private Equity, CDTI, the Spanish government-backed technology development fund that has also backed Nazca Capital’s aerospace and defence fund, Buenavista Equity Partners, and Endeavor Catalyst.
What EarthAI actually does
Xoople’s core product, EarthAI, is an end-to-end Earth intelligence system. It ingests continuous surface data, currently sourced from government spacecraft and third-party satellite networks, and processes it into AI-ready datasets that can be queried for change detection, risk prediction, and environmental monitoring. The key design choice is continuity: rather than producing point-in-time images for human review, EarthAI is built to stream a persistent, structured view of the planet’s surface into AI models that need regular, reliable ground truth.
The use cases span industries that share a dependence on understanding what is happening on the physical surface of the Earth. For agriculture, EarthAI provides early detection of crop stress, monitors soil health and water conditions, and generates data that enables farmers to participate in carbon credit markets. For insurance, it enables more precise climate risk pricing and real-time verification of natural disaster claims, removing the delay and subjectivity of ground-based assessments. For infrastructure operators, it monitors physical assets for signs of stress or degradation before failures occur. For governments, it supports emergency planning, environmental enforcement, and humanitarian response. Capital flowing into specialised AI applications at the intersection of science, data, and infrastructure has accelerated considerably over the past year, and Xoople sits precisely at that intersection.
The satellite play
The $130 million will fund Xoople’s transition from a platform built on others’ data to one powered by its own. Alongside the Series B, the company announced a partnership with L3Harris Technologies, a US space and defence contractor, to design and manufacture sensors for Xoople’s own satellite constellation. The sensors will collect optical data. Pirondini told TechCrunch that the constellation is designed to produce “a stream of data that is going to be two orders of magnitude better than existing monitoring systems,” a claim that, if borne out, would represent a substantial leap over the imagery quality currently available from commercial earth observation operators.
That claim is where Xoople meets its competitive reality. The company is entering a market that includes Vantor (formerly Maxar Intelligence, rebranded in October 2025), Planet Labs, BlackSky, Airbus Defence and Space, ICEYE, and Capella Space — all of which have satellites already in orbit and established AI-focused data processing pipelines. Companies building the hardware and data layers that AI depends on face a lengthy gap between the announcement of a new approach and its delivery in deployable form, and Xoople’s constellation is not yet in orbit. For now, EarthAI runs on data it did not produce. The L3Harris partnership signals that the proprietary data supply is the next phase.
Distribution before data
Xoople’s strategic sequencing is unusual for an Earth observation company. Most competitors in the space led with hardware — launching satellites, then figuring out distribution. Xoople did the reverse: it spent its first seven years embedding its platform into Microsoft and Esri, the two dominant environments where enterprise buyers, governments, and GIS professionals already live. Neither Microsoft nor Esri has its own proprietary satellite data. Xoople positioned itself to supply that gap from inside the platforms where the purchasing decisions are made.
The Microsoft relationship is structural: Xoople’s platform runs on Azure, and the company is integrated with Microsoft’s Planetary Computer Pro, which delivers AI-powered geospatial insights for enterprise use. Esri, the world’s largest geospatial software company, is a partner distributor. The implication is that when Xoople’s own constellation is operational and its data quality delivers on the “two orders of magnitude” promise, it will have distribution in place that its newer competitors would need years to replicate. The investment flowing into cloud-based AI data infrastructure has made the ability to process and deliver petabytes of Earth surface data at low latency a tractable problem; the scarcity is in the quality and continuity of the underlying data itself.
A Spanish unicorn in a European context
Xoople’s raise is one of the larger deep tech rounds to come out of Spain in recent years, and it lands at a moment when European space and defence investment has been accelerating. Nazca Capital, which led the Series B, runs Spain’s largest private equity fund specialised in aerospace and defence, a fund that also received a €294 million commitment from CDTI and a €40 million investment from the European Investment Fund. The investor composition of the Xoople round, government-backed funds, European private equity, and Endeavor Catalyst, which focuses on high-impact technology entrepreneurs, reflects the persistent tension in European technology between deep technical ambition and the capital required to realise it: the funding is patient, multi-source, and has a public interest dimension that pure venture rounds often lack.
The earth observation market was valued at $7.04 billion in 2025 and is projected to reach $14.55 billion by 2034, growing at just over 8% annually. Xoople is betting that as AI models grow more capable and more dependent on real-world data, the market for continuous, structured Earth surface intelligence, rather than periodic imagery, will grow faster than that aggregate. A year in which the appetite for AI applications in climate, infrastructure, and environmental risk grew considerably provided the validation Xoople needed; the $130 million is the bet that the second half of the decade will prove it right at scale.
Data security remains one of the least mature domains in enterprise cybersecurity. According to IBM, 35% of breaches in 2025 involved unmanaged data sources or “shadow data.” This reveals a systemic lack of basic data awareness. It’s not because of a lack of tooling or investment. It’s because many organizations still struggle with the most fundamental questions: What data do we have? Where does it live? How does it move? And who is responsible for it?
In an increasingly complex ecosystem of data sources, cloud platforms, SaaS applications, APIs, and AI models, those questions are only becoming more difficult to answer. Closing the maturity gap in data security demands a cultural shift where security is no longer treated as an afterthought. Instead, protection is embedded throughout the full data lifecycle, grounded in a robust inventory, clear classification, and scalable mechanisms that translate policy into automated guardrails.
Visibility as the foundation
The most persistent barrier to data security maturity is basic visibility. Organizations often focus on how much data they hold, but not on what that data is made up of. Does it contain personally identifiable information (PII)? Financial data? Health information? Intellectual property? Without this level of understanding and inventory, it’s a lot tougher to implement meaningful protection.
This can be avoided, however, by prioritizing enterprise capabilities that can detect sensitive data at scale across a large and varied footprint. Detection must be paired with action: deleting data where it’s no longer needed, and securing the data that remains by aligning enforcement to a well-defined policy.
Mature organizations should start by treating data security as an “understanding your environment” problem. Maintain an inventory, classify what’s in the ecosystem, and align protections with the classification rather than solely relying on perimeter controls or point solutions to scale.
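As a minimal sketch of what “classify, then align protections with the classification” can look like, the snippet below tags a record using a couple of common PII patterns and maps the result to a required protection. The rules, category names, and policy table are illustrative placeholders under our own assumptions, not a description of any particular detection engine.

```python
import re

# Illustrative detection rules only. Real classification engines pair pattern
# matching with validation (e.g. Luhn checks), ML models, and context signals.
RULES = {
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn":       re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email":        re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

# Hypothetical policy: the protection each classification demands.
POLICY = {"payment_card": "tokenize", "us_ssn": "tokenize", "email": "mask", "none": "allow"}

def classify(text: str) -> str:
    """Return the first matching classification label, or 'none'."""
    for label, pattern in RULES.items():
        if pattern.search(text):
            return label
    return "none"

record = "charge 4111 1111 1111 1111 was declined, contact a.user@example.com"
label = classify(record)
print(f"classification={label}, required_protection={POLICY[label]}")
```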
Securing chaotic data
One reason data security has lagged behind other security domains is that data itself is inherently chaotic. Unlike perimeter security, which relies on explicit ports and defined boundaries, data is largely unpredictable. That is to say, the same underlying information may appear across very different formats: structured databases, unstructured documents, chat transcripts, or analytics pipelines. Each may have slightly different encodings or transformations that introduce unforeseen, and often undetected, changes to the data itself.
Human behavior compounds the challenge, with different actions introducing risks in ways that perimeter controls simply can’t anticipate. This could be a credit card number copied into a free-form comment field, a spreadsheet emailed outside its intended audience, or a dataset repurposed for a new workflow.
When protection is bolted on at the end of a workflow, organizations create blind spots. They rely on downstream checks to catch upstream design flaws. Over time, complexity accumulates and the risk of exposure becomes a question of when, not if.
A more resilient model assumes that sensitive data will surface in unexpected places and formats, so protection is embedded from the moment data is captured. Defense-in-depth becomes a design principle: segmentation, encryption at rest and in transit, tokenization, and layered access controls.
Critically, these safeguards travel with the data lifecycle, from ingestion to processing, analytics and publishing. Instead of retrofitting controls, organizations design for chaos. They accept variability as a given and build systems that remain secure even when data diverges from expectations.
Scaling governance with automation
Data security becomes operationally sustainable when governance is enforced through automation from the start and coupled with clear expectations that create bounded contexts: teams understand what is permitted, under what conditions, and with what protections data can be used effectively.
This matters more than ever today. AI systems often require access to huge volumes of data, across domains. This makes policy implementation particularly challenging. To do so effectively and safely requires deep understanding, strong governance policies, and automated protection.
Security techniques such as synthetic data and token replacement enable organizations to preserve analytical context while making sensitive values harder to read. Policy-as-code patterns, APIs, and automation can handle tokenization, deletion, retention constraints, and dynamic access controls. With guardrails built into the platforms they use, engineers can focus more on innovating with data and elevating business outcomes securely.
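To make the “guardrails built into the platform” idea concrete, here is a toy sketch of the tokenization pattern described above: sensitive values are swapped for random tokens, the mapping lives in a vault, and detokenization is gated by role. It illustrates the general pattern under our own simplifying assumptions; it is not the Databolt product or any real API.

```python
import secrets

class TokenVault:
    """Toy vault: maps opaque tokens back to original values, gated by role."""

    def __init__(self, roles_allowed_to_detokenize=("fraud_ops",)):
        self._store = {}
        self._allowed = set(roles_allowed_to_detokenize)

    def tokenize(self, value: str) -> str:
        # Random token with no mathematical relationship to the original value;
        # downstream analytics can still join and count on it.
        token = "tok_" + secrets.token_hex(8)
        self._store[token] = value
        return token

    def detokenize(self, token: str, role: str) -> str:
        if role not in self._allowed:
            raise PermissionError(f"role '{role}' may not detokenize")
        return self._store[token]

vault = TokenVault()
card_token = vault.tokenize("4111 1111 1111 1111")
print("stored in pipelines as:", card_token)            # safe to move around
print(vault.detokenize(card_token, role="fraud_ops"))   # permitted role recovers the value

try:
    vault.detokenize(card_token, role="marketing")      # blocked by policy
except PermissionError as err:
    print("blocked:", err)
```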
AI systems must also operate within the same governance and monitoring expectations as human workflows. Permissions, telemetry, and controls around what models can access, along with the information they can publish, are essential. Governance will always introduce a degree of friction. The goal is to make that friction well understood, navigable and increasingly automated. Confirming purpose, registering a use case, and provisioning access dynamically based on role and need should be clear, repeatable processes.
At enterprise scale, this requires centralized capabilities that implement cybersecurity policy in the data domain. This includes detection and classification engines, tokenization and detokenization services, retention enforcement, and ownership and taxonomy mechanisms that cascade risk management expectations into daily execution.
When done well, governance becomes an enablement layer rather than a bottleneck. Metadata and classification drive protection decisions automatically while accelerating business discovery and usage. Data is protected across its lifecycle by strong defenses like tokenization and deleted when required by regulation or internal policy. There should be no need for teams to “touch the data” manually for every control decision, with policy enforced by design.
Building for the future
Put simply, closing the data security maturity gap is less about adopting a single breakthrough technology and more about operational discipline. Build the map. Classify what you have. Embed protection into workflows so that security is repeatable at scale.
For business leaders seeking measurable progress over the next 18–24 months, three priorities stand out.
First, establish a robust inventory and metadata-rich map of the data ecosystem. Visibility is non-negotiable. Second, implement classification tied to clear, actionable policy expectations. Make it obvious what protections each category demands. And finally, invest in scalable, automated protection schemes that integrate directly into development and data workflows.
When protection shifts from reactive bolt-on controls to proactive built-in guardrails, compliance becomes simpler, governance becomes stronger, and AI readiness becomes achievable, without compromising rigor.
Learn more about how Capital One Databolt, the enterprise data security solution from Capital One Software, can help your business become AI-ready by securing sensitive data at scale.
Andrew Seaton is Vice President, Data Engineering – Enterprise Data Detection & Protection, Capital One.
Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. For more information, contact sales@venturebeat.com.
The UK is moving forward with its efforts to ban social media for young people. Ahead of this week’s House of Lords debate on the topic, we’re getting you situated with a primer on what’s been happening and what it all means.
What was the last vote about?
On 9 March, the House of Commons discussed amendments tabled by the House of Lords in the government’s flagship legislation, the Children’s Wellbeing and Schools Bill.
The House of Lords previously tabled an amendment to “prevent children under the age of 16 from becoming or being users” of “all regulated user-to-user services,” to be implemented by “highly-effective age assurance measures,” which effectively banned under-16s from social media. When this proposal came before the House of Commons, MPs defeated it by 307 votes to 173.
Instead, the Commons proposed its own amendment: enabling the Secretary of State to introduce provisions “requiring providers of specified internet services” to prevent access by children, under age 18 rather than 16, to specified internet services or to specified features; and to restrict access by children to specified internet services which ministers provide.
Who does this give powers to?
The Commons proposal redirects power from the UK Parliament and the UK’s independent telecom regulator Ofcom to the Secretary of State for Science, Innovation and Technology, currently Liz Kendall, who will be able to restrict internet access for young people and determine what content is considered harmful…just because she can. The amendment also empowers the Secretary of State to limit VPN use for under 18s, as well as restrict access to addictive features and change the age of digital consent in the country; for example, preventing under-18s from playing games online after a certain time.
Why is this a problem?
This process is devoid of checks or accountability mechanisms, as ministers will not be required to demonstrate specific harms to young people, which essentially unravels years-long efforts by Ofcom to assess online services according to their risks. And given the moment the UK is currently in, such as refusing to protect trans and LGBTQ+ communities and inflaming hostile and racist discourses, it is not unlikely that we’ll see ministers start restricting content that they are ideologically or morally opposed to, rather than content that is harmful as established by evidence and assessed pursuant to established human rights principles.
We know from other jurisdictions like the United States that legislation seeking to protect young people typically sweeps up a slew of broadly-defined topics. Some laws block access to websites that contain “sexual material harmful to minors,” which has historically meant explicit sexual content. But some states are now defining the term more broadly so that “sexual material harmful to minors” could encompass things like sex education; others simply list a variety of vaguely-defined harms. In either instance, this bill would enable ministers to target LGBTQ+ content online by pushing it behind an under-18 age gate, and this risk is especially clear given what we already know about platform content policies.
How will this impact young people?
The internet is an essential resource for young people (and adults) to access information, explore community, and find themselves. Beyond being spaces where people can share funny videos and engage with enjoyable content, social media enables young people to engage with the world in a way that transcends their in-person realm, as well as find information they may not feel safe to access offline, such as about family abuse or their sexuality. In severing this connection to people and information by banning social media, politicians are forcing millions of young people into a dark and censored world.
How did each party vote?
The initial push to ban under-16s from social media came from the Conservative Party, who have since accused the UK’s Prime Minister Keir Starmer of “dither and delay” for not committing to the ban. The Liberal Democrats have also called this “not good enough.” The Labour Party itself is split, with 107 Labour Party MPs abstaining in the vote on the House of Lords amendment.
But we know that the issue of young people’s online safety is a polarizing topic that politicians have—and will continue to—weaponize for public support, regardless of their actual intentions. This is why we will continue to urge policymakers and regulators to protect people’s rights and freedoms online at all moments, and not just take the easy route for a quick boost in the polls.
How does this bill connect to the Online Safety Act?
The draft Children’s Wellbeing and Schools Bill that came from the Lords provided that any regulation pertaining to the well-being of young people on social media “must be treated as an enforceable requirement” under the Online Safety Act. The Commons amendment, however, starts out by inserting a new clause that amends the Online Safety Act.
For more than six years, we’ve been calling on the UK government to pass better legislation around regulating the internet, and when the Online Safety Act passed we continued to advocate for the rights of people on the internet—including young people—as Ofcom implemented the legislation. This has been a protracted effort by civil society groups, technologists, tech companies, and others participating in Ofcom’s consultation process and urging the regulator to protect internet users in the UK.
The MPs’ amendment essentially rips this up. Technology Secretary Liz Kendall recently said that ministers intended to go further than the existing Online Safety Act because it was “never meant to be the end point, and we know parents still have serious concerns. That is why I am prepared to take further action.” But when this further action is empowering herself to make arbitrary decisions on content and access, and banning under-18s from social media, this causes much more harm than it solves.
Is the UK alone in pushing legislation like this?
Sadly, no. Calls to ban social media access for young people have gained traction since Australia became the first country in the world to enforce one back in December. On 5 March, Indonesia announced a ban on social media and other “high-risk” online platforms for users under 16. A few days later, new measures came into effect in Brazil that restricts social media access for under-16s, who must now have their accounts linked to a legal guardian. Other countries like Spain and the Philippines have this year announced plans to ban social media for under-16s, with legislation currently pending to implement this.
What are the next steps?
The Children’s Wellbeing and Schools Bill returns to the House of Lords on 25 March for consideration of the new Commons amendments. The bill will only become law if both Houses agree to the final draft.
We will continue to stand up against these proposals, not only to protect young people’s free expression rights but also to safeguard the free flow of information that is vital to a democratic society. The issue of online safety is not solved through technology alone, especially not through a ban, and young people deserve a more intentional approach to protecting their safety and privacy online, not this lazy strategy that causes more harm than it solves.
We encourage politicians in the UK to look into what is best, not what is easy, and explore less invasive approaches to protect all people from online harms.
AI data centers are producing extreme heat islands that extend miles beyond facilities
Over 340 million people experience elevated temperatures due to hyperscale AI facilities
Extreme temperature spikes of up to 16.4 °F have been recorded near data centers
The expansion of AI-driven data centers is having a more immediate environmental impact than previously understood, experts have warned.
A research team led by Andrea Marinoni at the University of Cambridge claims these facilities, often sprawling over a million square feet, are not only consuming massive amounts of energy but also generate extreme local heating effects, known as heat islands.
Marinoni claims, “there are still big gaps in our understanding of the impacts of data centers,” emphasizing these effects have been largely overlooked.
Measuring heat impacts across global AI data centers
The team analysed temperature data from more than 6,000 hyperscale facilities over the past two decades, carefully accounting for global warming trends, seasonal changes, and other local influences.
The study found surface temperatures near data centers increased on average by 3.6 °F after operations began, with extreme cases recording rises to 16.4 °F.
These heat increases extend far beyond the immediate facility, sometimes affecting areas up to 6.2 miles away.
When the affected zones were mapped against population data, over 340 million people across North America, Europe, and Asia were affected, experiencing elevated local temperatures.
Observations in Mexico’s Bajio region and Aragon, Spain, revealed temperature increases that were inconsistent with those in the surrounding provinces.
This suggests that the heat effects were directly attributable to the data centers themselves rather than other environmental factors.
“The planned scale-up of data centers could have dramatic impacts on society,” Marinoni said.
Experts express concern over the rapid pace of AI infrastructure development, which may be outpacing sustainable planning.
“The ‘rush for AI-gold’ appears to be overriding good practice and systemic thinking…and is developing far more rapidly than any broader, more sustainable systems,” said Deborah Andrews, emeritus professor at London South Bank University.
However, experts argue that further research is required to confirm these findings, particularly given the unusually high local temperature spikes reported.
The long-term consequences of energy-intensive AI operations warrant greater attention, as climate discussions have historically focused on emissions rather than direct heat effects.
Rethinking data center design and operational strategies could enable continued AI expansion while minimizing additional heat stress on neighboring communities and ecosystems.
In a world already experiencing intensified extreme weather events, the rapid proliferation of ultra-hot data centers may amplify local and regional environmental challenges.
Energy emissions remain a primary concern, but the localized warming caused by hyperscale facilities adds a new dimension of environmental risk that needs evaluation.
Netflix is launching a new standalone app for kids’ games called Netflix Playground, the company announced on Monday. Netflix Playground is available as part of a Netflix subscription, and doesn’t have any ads or in-app purchases.
Netflix says the app gives children access to an “ever-growing” library of games for kids. Netflix Playground is launching with titles featuring characters from popular kids’ shows.
The app, which is designed for children ages eight and under, is now available in the U.S., Canada, the U.K., Australia, the Philippines, and New Zealand. It will roll out worldwide on April 28. The app is available on both iOS and Android.
It can be accessed offline without a mobile or Wi-Fi connection, which the company says makes it the “perfect companion for long airplane rides or grocery trips.”
For example, one game is titled “Playtime With Peppa Pig,” and sees players “jump into Peppa’s world with a collection of playful activities.” There’s also a “Sesame Street” game where players practice matching with memory cards or coordination with connect-the-dots. Other titles include “Let’s Color,” “Storybots,” “Bad Dinosaurs,” and more.
“We’re building a world where kids can not only watch their favorite stories, they can step inside them and interact with their favorite characters,” said John Derderian, Netflix Vice President of Animation Series + Kids & Family TV, in a press release. “We’re creating a seamless destination for discovery, learning, and play. Whether it’s reuniting with Hank and the ‘Trash Truck’ crew for new adventures or making a smoothie with ‘Peppa Pig,’ watching and playing on Netflix can be the fun and easiest part of every family’s day.”
Netflix first launched games in 2021 and had ambitious plans for the space, but has since dialed them back after its titles failed to gain traction. The streaming giant has also shut down several video game studios like Boss Fight, Spry Fox, and an AAA studio.
Late last year, Netflix forayed into TV gaming with a slate of new party titles meant to be played in groups, including TV versions of Tetris and Pictionary. The company has also said it will prioritize cloud gaming, but has noted that it’s still in the early stages of these plans.