Samsung has spent the better part of the last decade dominating the TV market and building a soundbar empire, but dedicated two-channel speakers and a whole home music ecosystem have never really been part of the conversation, until now. With the $499 Music Studio 7 (LS70H) and $299 Music Studio 5 (LS50H), Samsung is making a direct move into wireless whole home audio for 2026, and it’s not doing it quietly.
Following its latest OLED, Neo QLED, MiniLED, and Frame TV launches, these new Wi-Fi speakers, first previewed at CES 2026 and now fully detailed, pair a more refined, room-friendly sound with a distinctive “dot” design from Erwan Bouroullec that actually gives them an identity in a sea of forgettable boxes. Samsung isn’t chasing louder or flashier. It’s aiming for flexible multi-room and true two-channel performance wrapped in something people might actually want to look at for more than five minutes.
What sets Samsung’s Music Studio speakers apart from most competitors is that they can serve both as whole-home audio speakers (up to 10 speakers in the home) and as part of a multi-speaker home theater audio system (up to 5 speakers).
Music Studio 7 and 5 Shared Features
Here are some key features that the Music Studio 7 and 5 have in common:
Style: The Music Studio 7 and 5 feature a distinctive “dot” design concept created by renowned designer Erwan Bouroullec. The idea draws from a universal symbol found throughout music and visual art, while remaining rooted in Samsung’s current industrial design language. The result is a speaker that blends into a room naturally—doing its job without screaming for attention, which is how most people actually want their speakers to behave.
Wireless Streaming: Music Studio speakers support both Bluetooth and Wi-Fi streaming, with compatibility for Google Cast, AirPlay, and Roon Ready systems. That gives users real flexibility across platforms without being locked into a single ecosystem.
Voice Assistants and Control: Users can control the Music Studio 7 and 5 via voice commands using Alexa, Google Assistant, and Bixby. Non-voice control is available through onboard controls and the Samsung Sound App (coming soon). There is also a dedicated Spotify Connect button for direct playback. A traditional remote control is not included.
Audio Lab Pattern Control: This technology manages how sound is distributed across channels, reducing overlap and congestion so effects, music, and dialogue remain clearly defined.
AI Dynamic Bass Control: Designed to deliver deeper, more controlled low frequencies with minimal distortion, this system dynamically adjusts bass output in real time while supporting high-resolution audio up to 24-bit/96kHz.
Active Voice Amplifier Pro: Samsung’s AVA analyzes ambient noise in real time so voice audio remains clear and intelligible. Enabling this feature boosts dialogue from the Music Studio 7 and 5, making it easier to hear over background noise without cranking the overall volume. This is particularly handy for listening to podcasts, audiobooks, weather and news reports in a busy home.
Wireless Dolby Atmos: The Music Studio 7 includes a Dolby Atmos-compatible HDMI eARC connection and an up-firing driver for height effects; the Music Studio 5 has neither. Both speakers can reproduce Dolby Atmos music over a wireless connection from compatible streaming services; however, the Music Studio 5 virtualizes the height effects while the Music Studio 7 uses its discrete up-firing driver for the height channel. Both speakers can be part of a Wireless Dolby Atmos system over Wi-Fi when used with compatible Samsung TVs and select streaming sources.
Pro Tip: Samsung’s Wireless Dolby Atmos implementation is not the same thing as Dolby Atmos FlexConnect. Although the two systems share some features and functionality, they are entirely different implementations.
Eclipsa Audio: Samsung’s Music Studio wireless speakers incorporate Eclipsa Audio, an open immersive surround sound format developed by Samsung in partnership with Google and other companies. Similar to Dolby Atmos, Eclipsa Audio expands on traditional surround sound with the addition of height information. With Eclipsa Audio-encoded content, sound can come from all around and above the listener. This enables a more enveloping and immersive listening experience with sound emanating from all three dimensions, just like in real life. Eclipsa Audio is currently the only immersive surround sound format supported on YouTube.
Q-Symphony: This feature allows the Music Studio speakers to work in tandem with compatible Samsung TVs, soundbars, and Wi-Fi speakers to create a more immersive home theater system. Q-Symphony supports pairing up to five Samsung audio devices and can automatically optimize sound based on speaker placement within the room.
SpaceFit Sound Pro: Samsung’s room calibration technology is built into both Music Studio models via onboard microphones. SpaceFit analyzes your listening environment and adjusts output accordingly. It can recalibrate automatically on a daily basis or whenever the speaker is moved.
Waveguide: This design technology helps direct and disperse sound more evenly throughout the room, improving coverage so audio remains consistent regardless of where you’re sitting.
Music Studio 7 (LS70H)
The Music Studio 7 (LS70H) is the flagship of Samsung’s 2026 Wi-Fi speaker lineup, designed to deliver a more immersive listening experience from a single enclosure.
On the outside, it features a curved rectangular form that aligns with the series’ distinctive design language. Inside, Samsung has implemented a 3.1.1-channel configuration, including a built-in subwoofer, with left, center, right, and top-firing drivers working together to create a convincing sense of height and spatial depth without the need for a full surround system.
The LS70H measures 7.28 x 10.59 x 7.50 inches and weighs 12.35 pounds.
Music Studio 5 (LS50H)
The Music Studio 5 (LS50H) sits below the Music Studio 7 in Samsung’s 2026 Wi-Fi speaker lineup and takes a different design approach, with a rounded top half and rectangular base that feels more decor-friendly than most wireless speakers. It can reproduce stereo sound on its own or be paired with a second unit for a wider, more enveloping soundstage. Though it has no built-in height driver, it can reproduce virtualized Dolby Atmos immersive sound.
While it looks different from the Music Studio 7, the LS50H is still engineered to deliver controlled bass with minimal distortion and supports modern connectivity options, including Wi-Fi casting, streaming services, voice control, and Bluetooth for seamless everyday use.
Inside, the Music Studio 5 uses a 2-channel configuration with a 4-inch woofer and dual tweeters, balancing clarity, low end presence, and a form factor that fits more easily into real living spaces.
The LS50H measures 9.88 x 11.18 x 5.39 inches and weighs 5.29 pounds.
The Music Studio 7 and Music Studio 5 mark Samsung’s most credible push yet into wireless whole-home audio and two-channel audio. What makes them stand out isn’t just the feature list; it’s the combination of design, flexibility, and ecosystem integration. The Bouroullec “dot” design gives them a visual identity most wireless speakers lack, while support for Wi-Fi streaming, Roon, AirPlay, Google Cast, and Q-Symphony makes them far more adaptable than the average plug-and-play box.
Samsung appears to be intentionally blurring categories here. The Music Studio speakers aren’t just lifestyle speakers. They can run in stereo mode, pair with each other for wider stereo separation, handle Dolby Atmos music, slot into a multi-room system, or integrate into a home theater setup with Samsung TVs. That kind of versatility is where Samsung is clearly aiming to separate itself.
But there are tradeoffs. No analog input, no USB playback, and no phono stage means traditional sources are completely off the table without workarounds. If your system still revolves around physical media or external components, these aren’t built for you.
Competition is stiff. Sonos, Bluesound, Denon HEOS, Apple HomePod, and even higher end lifestyle brands like Naim all play in this space, and many offer deeper ecosystems or better support for wired sources. Samsung is betting that its design, TV integration, and Harman backed tuning will be enough to pull people in.
Who are these for? Not the purist with racks of gear and a Thorens spinning in the corner. These are for people building a modern system around streaming, multi room audio, and a Samsung TV who want something that looks good, sounds better than a soundbar on its own, and doesn’t require a weekend to set up.
Samsung isn’t just filling a gap here. It’s trying to create a new lane between soundbars and traditional stereo. Whether that lane gets crowded depends on how good they actually sound (our initial listening sessions have us optimistic), but for the first time, it feels like Samsung is at least asking the right questions.
In short: Xoople, a Madrid-based geospatial data company founded in 2019, has raised a $130 million Series B led by Nazca Capital, bringing its total funding to $225 million and pushing its valuation into unicorn territory. The round was co-invested by MCH Private Equity, CDTI (the Spanish government’s technology development fund), Buenavista Equity Partners, and Endeavor Catalyst. Alongside the raise, Xoople announced a partnership with US space and defence contractor L3Harris Technologies to build sensors for its own satellite constellation, designed to produce Earth surface data it says will be “two orders of magnitude better than existing monitoring systems.” The company’s EarthAI platform, built on Microsoft Azure and distributed through Microsoft and Esri, delivers continuous surface intelligence for insurers, farmers, governments, and infrastructure operators.
Xoople has spent seven years building something that did not previously exist in a commercially deployable form: a continuous, AI-native data layer for the Earth’s surface. The Madrid startup, founded in 2019, emerged from that development period with €115 million in prior funding, a platform embedded in the two most widely used enterprise geospatial ecosystems in the world, and a thesis that the AI era will require a fundamentally different approach to Earth observation — one designed from the ground up for machine learning rather than adapted from satellite imagery workflows built for human analysts. The $130 million Series B, led by Nazca Capital, confirms that investors believe that thesis is credible enough to back at scale.
CEO and co-founder Fabrizio Pirondini told TechCrunch the raise brings Xoople’s total funding to $225 million and puts the company in unicorn territory on valuation. The round was joined by MCH Private Equity; CDTI, the Spanish government-backed technology development fund that has also backed Nazca Capital’s aerospace and defence fund; Buenavista Equity Partners; and Endeavor Catalyst.
What EarthAI actually does
Xoople’s core product, EarthAI, is an end-to-end Earth intelligence system. It ingests continuous surface data, currently sourced from government spacecraft and third-party satellite networks, and processes it into AI-ready datasets that can be queried for change detection, risk prediction, and environmental monitoring. The key design choice is continuity: rather than producing point-in-time images for human review, EarthAI is built to stream a persistent, structured view of the planet’s surface into AI models that need regular, reliable ground truth.
The use cases span industries that share a dependence on understanding what is happening on the physical surface of the Earth. For agriculture, EarthAI provides early detection of crop stress, monitors soil health and water conditions, and generates data that enables farmers to participate in carbon credit markets. For insurance, it enables more precise climate risk pricing and real-time verification of natural disaster claims, removing the delay and subjectivity of ground-based assessments. For infrastructure operators, it monitors physical assets for signs of stress or degradation before failures occur. For governments, it supports emergency planning, environmental enforcement, and humanitarian response. Capital flowing into specialised AI applications at the intersection of science, data, and infrastructure has accelerated considerably over the past year, and Xoople sits precisely at that intersection.
The satellite play
The $130 million will fund Xoople’s transition from a platform built on others’ data to one powered by its own. Alongside the Series B, the company announced a partnership with L3Harris Technologies, a US space and defence contractor, to design and manufacture sensors for Xoople’s own satellite constellation. The sensors will collect optical data. Pirondini told TechCrunch that the constellation is designed to produce “a stream of data that is going to be two orders of magnitude better than existing monitoring systems”, a claim that, if borne out, would represent a substantial leap over the imagery quality currently available from commercial Earth observation operators.
That claim is where Xoople meets its competitive reality. The company is entering a market that includes Vantor (formerly Maxar Intelligence, rebranded in October 2025), Planet Labs, BlackSky, Airbus Defence and Space, ICEYE, and Capella Space — all of which have satellites already in orbit and established AI-focused data processing pipelines. Companies building the hardware and data layers that AI depends on face a lengthy gap between the announcement of a new approach and its delivery in deployable form, and Xoople’s constellation is not yet in orbit. For now, EarthAI runs on data it did not produce. The L3Harris partnership signals that the proprietary data supply is the next phase.
Distribution before data
Xoople’s strategic sequencing is unusual for an Earth observation company. Most competitors in the space led with hardware — launching satellites, then figuring out distribution. Xoople did the reverse: it spent its first seven years embedding its platform into Microsoft and Esri, the two dominant environments where enterprise buyers, governments, and GIS professionals already live. Neither Microsoft nor Esri has its own proprietary satellite data. Xoople positioned itself to supply that gap from inside the platforms where the purchasing decisions are made.
The Microsoft relationship is structural: Xoople’s platform runs on Azure, and the company is integrated with Microsoft’s Planetary Computer Pro, which delivers AI-powered geospatial insights for enterprise use. Esri, the world’s largest geospatial software company, is a partner distributor. The implication is that when Xoople’s own constellation is operational and its data quality delivers on the “two orders of magnitude” promise, it will have distribution in place that its newer competitors would need years to replicate. The investment flowing into cloud-based AI data infrastructure has made the ability to process and deliver petabytes of Earth surface data at low latency a tractable problem; the scarcity is in the quality and continuity of the underlying data itself.
A Spanish unicorn in a European context
Xoople’s raise is one of the larger deep tech rounds to come out of Spain in recent years, and it lands at a moment when European space and defence investment has been accelerating. Nazca Capital, which led the Series B, runs Spain’s largest private equity fund specialised in aerospace and defence, a fund that also received a €294 million commitment from CDTI and a €40 million investment from the European Investment Fund. The investor composition of the Xoople round (government-backed funds, European private equity, and Endeavor Catalyst, which focuses on high-impact technology entrepreneurs) reflects the persistent tension in European technology between deep technical ambition and the capital required to realise it: the funding is patient, multi-source, and has a public interest dimension that pure venture rounds often lack.
The Earth observation market was valued at $7.04 billion in 2025 and is projected to reach $14.55 billion by 2034, growing at just over 8% annually. Xoople is betting that as AI models grow more capable and more dependent on real-world data, the market for continuous, structured Earth surface intelligence, rather than periodic imagery, will grow faster than that aggregate. A year in which the appetite for AI applications in climate, infrastructure, and environmental risk grew considerably provided the validation Xoople needed; the $130 million is the bet that the second half of the decade will prove it right at scale.
Data security remains one of the least mature domains in enterprise cybersecurity. According to IBM, 35% of breaches in 2025 involved unmanaged data sources or “shadow data.” This reveals a systemic lack of basic data awareness. It’s not because of a lack of tooling or investment. It’s because many organizations still struggle with the most fundamental questions: What data do we have? Where does it live? How does it move? And who is responsible for it?
In an increasingly complex ecosystem of data sources, cloud platforms, SaaS applications, APIs, and AI models, those questions are only becoming more difficult to answer. Closing the maturity gap in data security demands a cultural shift where security is no longer treated as an afterthought. Instead, protection is embedded throughout the full data lifecycle, grounded in a robust inventory, clear classification, and scalable mechanisms that translate policy into automated guardrails.
Visibility as the foundation
The most persistent barrier to data security maturity is basic visibility. Organizations often focus on how much data they hold, but not on what that data is made up of. Does it contain personally identifiable information (PII)? Financial data? Health information? Intellectual property? Without this level of understanding and inventory, it’s a lot tougher to implement meaningful protection.
This can be avoided, however, by prioritizing enterprise capabilities that can detect sensitive data at scale across a large and varied footprint. Detection must be paired with action: deleting data where it’s no longer needed, and securing the data that remains by aligning enforcement with a well-defined policy.
Mature organizations should start by treating data security as an “understanding your environment” problem. Maintain an inventory, classify what’s in the ecosystem, and align protections with the classification rather than solely relying on perimeter controls or point solutions to scale.
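To make the “understanding your environment” framing concrete, here is a minimal sketch of detection plus inventory. Everything in it is illustrative: the pattern names, the `classify` and `inventory` helpers, and the regexes are hypothetical stand-ins, and real deployments layer checksums, ML detectors, and contextual validation on top of pattern matching rather than relying on regexes alone.

```python
import re

# Hypothetical pattern set; production detectors are far richer
# (Luhn checks, ML models, context validation), not regex alone.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def classify(record: str) -> set[str]:
    """Return the set of sensitive-data categories detected in a record."""
    return {name for name, pat in PATTERNS.items() if pat.search(record)}

def inventory(records: list[str]) -> dict[str, int]:
    """Count records per category -- a crude version of the 'data map'."""
    counts: dict[str, int] = {}
    for rec in records:
        for label in classify(rec):
            counts[label] = counts.get(label, 0) + 1
    return counts
```

The point of the sketch is the sequencing: classification output feeds the inventory, and the inventory is what protection policy attaches to, rather than the other way around.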
Securing chaotic data
One reason data security has lagged behind other security domains is that data itself is inherently chaotic. Unlike perimeter security, which relies on explicit ports and defined boundaries, data is largely unpredictable. That is to say, the same underlying information may appear across very different formats: structured databases, unstructured documents, chat transcripts, or analytics pipelines. Each may have slightly different encodings or transformations that introduce unforeseen, and often undetected, changes to the data itself.
Human behavior compounds the challenge, with different actions introducing risks in ways that perimeter controls simply can’t anticipate. This could be anything from a credit card number copied into a free-form comment field, a spreadsheet emailed outside its intended audience, or a dataset repurposed for a new workflow.
When protection is bolted on at the end of a workflow, organizations create blind spots. They rely on downstream checks to catch upstream design flaws. Over time, complexity accumulates and the risk of exposure becomes a question of when, not if.
A more resilient model assumes that sensitive data will surface in unexpected places and formats, so protection is embedded from the moment data is captured. Defense-in-depth becomes a design principle: segmentation, encryption at rest and in transit, tokenization, and layered access controls.
Critically, these safeguards travel with the data lifecycle, from ingestion to processing, analytics and publishing. Instead of retrofitting controls, organizations design for chaos. They accept variability as a given and build systems that remain secure even when data diverges from expectations.
Scaling governance with automation
Data security becomes operationally sustainable when governance is enforced through automation from the outset. Coupled with clear expectations, automation creates bounded contexts: teams understand what is permitted, under what conditions, and with what protections, so data can be used effectively.
This matters more than ever today. AI systems often require access to huge volumes of data, across domains. This makes policy implementation particularly challenging. To do so effectively and safely requires deep understanding, strong governance policies, and automated protection.
Security techniques such as synthetic data and token replacement enable organizations to preserve analytical context while making sensitive values harder to read. Policy-as-code patterns, APIs, and automation can handle tokenization, deletion, retention constraints, and dynamic access controls. With guardrails built into the platforms they use, engineers can focus more on innovating with data and elevating business outcomes securely.
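As a rough sketch of the token-replacement idea, the class below swaps sensitive values for stable, meaningless tokens that only the vault can reverse. The `TokenVault` name, its structure, and the `tok_` prefix are all invented for illustration; it is not Capital One’s implementation, and a production service would add key management, format preservation, access control, and audited detokenization.

```python
import hashlib
import hmac

class TokenVault:
    """Minimal tokenization sketch (illustrative, not production-grade)."""

    def __init__(self, key: bytes):
        self._key = key
        self._store: dict[str, str] = {}  # token -> original value

    def tokenize(self, value: str) -> str:
        # Deterministic token: the same input always yields the same
        # token, so joins and aggregations still work on tokenized columns.
        digest = hmac.new(self._key, value.encode(), hashlib.sha256).hexdigest()
        token = "tok_" + digest[:16]
        self._store[token] = value
        return token

    def detokenize(self, token: str) -> str:
        # In production this call would be policy-gated and logged.
        return self._store[token]
```

Deterministic tokens are the design choice worth noting: they preserve analytical context (equality, joins, counts) while keeping the raw value out of downstream systems, which is exactly the trade-off the policy-as-code guardrails are meant to automate.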
AI systems must also operate within the same governance and monitoring expectations as human workflows. Permissions, telemetry, and controls around what models can access, along with the information they can publish, are essential. Governance will always introduce a degree of friction. The goal is to make that friction well understood, navigable and increasingly automated. Confirming purpose, registering a use case, and provisioning access dynamically based on role and need should be clear, repeatable processes.
At enterprise scale, this requires centralized capabilities that implement cyber security policy in the data domain. This includes detection and classification engines, tokenization and detokenization services, retention enforcement, and ownership and taxonomy mechanisms that cascade risk management expectations into daily execution.
When done well, governance becomes an enablement layer rather than a bottleneck. Metadata and classification drive protection decisions automatically while accelerating business discovery and usage. Data is protected across its lifecycle by strong defenses like tokenization and deleted when required by regulation or internal policy. There should be no need for teams to “touch the data” manually for every control decision, with policy enforced by design.
Building for the future
Put simply, closing the data security maturity gap is less about adopting a single breakthrough technology and more about operational discipline. Build the map. Classify what you have. Embed protection into workflows so that security is repeatable at scale.
For business leaders seeking measurable progress over the next 18–24 months, three priorities stand out.
First, establish a robust inventory and metadata-rich map of the data ecosystem. Visibility is non-negotiable. Second, implement classification tied to clear, actionable policy expectations. Make it obvious what protections each category demands. And finally, invest in scalable, automated protection schemes that integrate directly into development and data workflows.
When protection shifts from reactive bolt-on controls to proactive built-in guardrails, compliance becomes simpler, governance becomes stronger, and AI readiness becomes achievable, without compromising rigor.
Learn more about how Capital One Databolt, the enterprise data security solution from Capital One Software, can help your business become AI-ready by securing sensitive data at scale.
Andrew Seaton is Vice President, Data Engineering – Enterprise Data Detection & Protection, Capital One.
Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. For more information, contact sales@venturebeat.com.
The UK is moving forward with its efforts to ban social media for young people. Ahead of this week’s House of Lords debate on the topic, we’re getting you situated with a primer on what’s been happening and what it all means.
What was the last vote about?
On 9 March, the House of Commons discussed amendments tabled by the House of Lords in the government’s flagship legislation, the Children’s Wellbeing and Schools Bill.
The House of Lords previously tabled an amendment to “prevent children under the age of 16 from becoming or being users” of “all regulated user-to-user services,” to be implemented by “highly-effective age assurance measures,” which effectively banned under-16s from social media. When this proposal came before the House of Commons, MPs defeated it by 307 votes to 173.
Instead, the Commons proposed its own amendment: enabling the Secretary of State to introduce provisions “requiring providers of specified internet services” to prevent access by children, under age 18 rather than 16, to specified internet services or to specified features, and to restrict children’s access to internet services that ministers specify.
Who does this give powers to?
The Commons proposal redirects power from the UK Parliament and the UK’s independent telecom regulator Ofcom to the Secretary of State for Science, Innovation and Technology, currently Liz Kendall, who will be able to restrict internet access for young people and determine what content is considered harmful…just because she can. The amendment also empowers the Secretary of State to limit VPN use for under 18s, as well as restrict access to addictive features and change the age of digital consent in the country; for example, preventing under-18s from playing games online after a certain time.
Why is this a problem?
This process is devoid of checks or accountability mechanisms, as ministers will not be required to demonstrate specific harms to young people, which essentially unravels years-long efforts by Ofcom to assess online services according to their risks. And given the moment the UK is currently in, such as refusing to protect trans and LGBTQ+ communities and inflaming hostile and racist discourses, it is not unlikely that we’ll see ministers start restricting content that they ideologically or morally feel opposed to, rather than because the content is harmful, as established by evidence and assessed pursuant to established human rights principles.
We know from other jurisdictions like the United States that legislation seeking to protect young people typically sweeps up a slew of broadly-defined topics. Some block access to websites that contain some “sexual material harmful to minors,” which has historically meant explicit sexual content. But some states are now defining the term more broadly so that “sexual material harmful to minors” could encompass content like sex education; others simply list a variety of vaguely-defined harms. In either instance, this bill would enable ministers to target LGBTQ+ content online by pushing it behind an under-18s age gate, and this risk is especially clear given what we already know about platform content policies.
How will this impact young people?
The internet is an essential resource for young people (and adults) to access information, explore community, and find themselves. Beyond being spaces where people can share funny videos and engage with enjoyable content, social media enables young people to engage with the world in a way that transcends their in-person realm, as well as find information they may not feel safe to access offline, such as about family abuse or their sexuality. In severing this connection to people and information by banning social media, politicians are forcing millions of young people into a dark and censored world.
How did each party vote?
The initial push to ban under-16s from social media came from the Conservative Party, who have since accused the UK’s Prime Minister Keir Starmer of “dither and delay” for not committing to the ban. The Liberal Democrats have also called this “not good enough.” The Labour Party itself is split, with 107 Labour Party MPs abstaining in the vote on the House of Lords amendment.
But we know that the issue of young people’s online safety is a polarizing topic that politicians have—and will continue to—weaponize for public support, regardless of their actual intentions. This is why we will continue to urge policymakers and regulators to protect people’s rights and freedoms online at all moments, and not just take the easy route for a quick boost in the polls.
How does this bill connect to the Online Safety Act?
The draft Children’s Wellbeing and Schools Bill that came from the Lords provided that any regulation pertaining to the well-being of young people on social media “must be treated as an enforceable requirement” under the Online Safety Act. The Commons amendment, however, starts out by inserting a new clause that amends the Online Safety Act.
For more than six years, we’ve been calling on the UK government to pass better legislation around regulating the internet, and when the Online Safety Act passed we continued to advocate for the rights of people on the internet—including young people—as Ofcom implemented the legislation. This has been a protracted effort by civil society groups, technologists, tech companies, and others participating in Ofcom’s consultation process and urging the regulator to protect internet users in the UK.
The MPs’ amendment essentially rips this up. Technology Secretary Liz Kendall recently said that ministers intended to go further than the existing Online Safety Act because it was “never meant to be the end point, and we know parents still have serious concerns. That is why I am prepared to take further action.” But when that further action means empowering herself to make arbitrary decisions on content and access, and banning under-18s from social media, it causes much more harm than it solves.
Is the UK alone in pushing legislation like this?
Sadly, no. Calls to ban social media access for young people have gained traction since Australia became the first country in the world to enforce one back in December. On 5 March, Indonesia announced a ban on social media and other “high-risk” online platforms for users under 16. A few days later, new measures came into effect in Brazil that restrict social media access for under-16s, who must now have their accounts linked to a legal guardian. Other countries like Spain and the Philippines have this year announced plans to ban social media for under-16s, with legislation currently pending to implement this.
What are the next steps?
The Children’s Wellbeing and Schools Bill returns to the House of Lords on 25 March for consideration of the new Commons amendments. The bill will only become law if both Houses agree to the final draft.
We will continue to stand up against these proposals, not only to protect young people’s free expression rights but also to safeguard the free flow of information that is vital to a democratic society. The issue of online safety is not solved through technology alone, especially not through a ban, and young people deserve a more intentional approach to protecting their safety and privacy online, not this lazy strategy that causes more harm than it solves.
We encourage politicians in the UK to pursue what is best, not what is easy, and to explore less invasive approaches to protecting all people from online harms.
AI data centers are producing extreme heat islands that extend miles beyond facilities
Over 340 million people experience elevated temperatures due to hyperscale AI facilities
Extreme temperature spikes of up to 16.4 °F have been recorded near data centers
The expansion of AI-driven data centers is having a more immediate environmental impact than previously understood, experts have warned.
A research team led by Andrea Marinoni at the University of Cambridge claims these facilities, often sprawling over a million square feet, are not only consuming massive amounts of energy but also generate extreme local heating effects, known as heat islands.
Marinoni claims, “there are still big gaps in our understanding of the impacts of data centers,” emphasizing these effects have been largely overlooked.
Measuring heat impacts across global AI data centers
The team analysed temperature data from more than 6,000 hyperscale facilities over the past two decades, carefully accounting for global warming trends, seasonal changes, and other local influences.
The study found surface temperatures near data centers increased on average by 3.6 °F after operations began, with extreme cases recording rises of up to 16.4 °F.
These heat increases extend far beyond the immediate facility, sometimes affecting areas up to 6.2 miles away.
When the affected zones were mapped against population data, over 340 million people across North America, Europe, and Asia were affected, experiencing elevated local temperatures.
Observations in Mexico’s Bajio region and Aragon, Spain, revealed temperature increases out of step with those recorded in the surrounding provinces.
This suggests that the heat effects were directly attributable to the data centers themselves rather than other environmental factors.
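The core of the before-and-after comparison the researchers describe is simple: remove the background warming trend, then measure how the local mean shifts once a facility comes online. The simple linear detrend and the numbers below are illustrative assumptions, not the study’s actual methodology or data:

```python
import statistics

def warming_effect(temps, start_index, trend_per_year):
    """Estimate the local temperature rise after a data center starts up,
    after removing an assumed linear background warming trend.

    temps: yearly mean surface temperatures (deg F) at one site
    start_index: index of the year operations began
    trend_per_year: assumed background warming per year (deg F)
    """
    # Detrend: subtract the background warming from each observation.
    detrended = [t - trend_per_year * i for i, t in enumerate(temps)]
    before = detrended[:start_index]
    after = detrended[start_index:]
    # Any residual shift in the mean is attributed to the facility.
    return statistics.mean(after) - statistics.mean(before)

# Synthetic site: 0.05 F/year background trend, +3.6 F jump at year 10.
temps = [60.0 + 0.05 * i + (3.6 if i >= 10 else 0.0) for i in range(20)]
print(round(warming_effect(temps, 10, 0.05), 2))  # → 3.6
```

A real analysis across 6,000 sites would also control for seasonality and other local influences, but the detrend-then-difference idea is the same.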
“The planned scale-up of data centers could have dramatic impacts on society,” Marinoni said.
Experts express concern over the rapid pace of AI infrastructure development, which may be outpacing sustainable planning.
“The ‘rush for AI-gold’ appears to be overriding good practice and systemic thinking…and is developing far more rapidly than any broader, more sustainable systems,” said Deborah Andrews, emeritus professor at London South Bank University.
However, experts argue that further research is required to confirm these findings, particularly given the unusually high local temperature spikes reported.
The long-term consequences of energy-intensive AI operations warrant greater attention, as climate discussions have historically focused on emissions rather than direct heat effects.
Rethinking data center design and operational strategies could enable continued AI expansion while minimizing additional heat stress on neighboring communities and ecosystems.
In a world already experiencing intensified extreme weather events, the rapid proliferation of ultra-hot data centers may amplify local and regional environmental challenges.
Energy emissions remain a primary concern, but the localized warming caused by hyperscale facilities adds a new dimension of environmental risk that needs evaluation.
Netflix is launching a new standalone app for kids’ games called Netflix Playground, the company announced on Monday. Netflix Playground is available as part of a Netflix subscription, and doesn’t have any ads or in-app purchases.
Netflix says the app gives children access to an “ever-growing” library of games for kids. Netflix Playground is launching with titles featuring characters from popular kids’ shows.
The app, which is designed for children ages eight and under, is now available in the U.S., Canada, the U.K., Australia, the Philippines, and New Zealand. It will roll out worldwide on April 28. The app is available on both iOS and Android.
It can be accessed offline without a mobile or Wi-Fi connection, which the company says makes it the “perfect companion for long airplane rides or grocery trips.”
Image Credits: Netflix
For example, one game is titled “Playtime With Peppa Pig,” and sees players “jump into Peppa’s world with a collection of playful activities.” There’s also a “Sesame Street” game where players practice matching with memory cards or coordination with connect-the-dots. Other titles include “Let’s Color,” “Storybots,” “Bad Dinosaurs,” and more.
“We’re building a world where kids can not only watch their favorite stories, they can step inside them and interact with their favorite characters,” said John Derderian, Netflix Vice President of Animation Series + Kids & Family TV, in a press release. “We’re creating a seamless destination for discovery, learning, and play. Whether it’s reuniting with Hank and the ‘Trash Truck’ crew for new adventures or making a smoothie with ‘Peppa Pig,’ watching and playing on Netflix can be the fun and easiest part of every family’s day.”
Netflix first launched games in 2021 and had ambitious plans for the space, but has since dialed them back after its titles failed to gain traction. The streaming giant has also shut down several video game studios, including Boss Fight, Spry Fox, and an in-house AAA studio.
Late last year, Netflix forayed into TV gaming with a slate of new party titles meant to be played in groups, including TV versions of Tetris and Pictionary. The company has also said it will prioritize cloud gaming, but has noted that it’s still in the early stages of these plans.
Although the Sega Dreamcast had many good qualities that made it beloved by the thousands of people who bought the console, one glaring omission was the lack of DVD video capabilities. Despite its optical drive being theoretically capable of such a feat, Sega had opted to use the GD-ROM disc format to not have to cough up DVD licensing fees, while the PlayStation 2 could play DVD movies. Fortunately it’s possible to hack DVD capability into the Dreamcast if you aren’t too fussy about the details, as [Throaty Mumbo] recently demonstrated.
For the TL;DW folks among us, there’s a GitHub repository that contains the basic summary and all needed files. Suffice it to say that it is a bit of a kludge, but on the bright side it does not require one to modify the Dreamcast. Instead it uses a Pico 2 board that emulates a Sega DreamEye camera on the Dreamcast’s Maple bus via the controller port. The Dreamcast then requests image data as if from said camera.
On the DVD side of things there’s a Raspberry Pi 5 that connects to an external USB DVD drive and which encodes the video for transmission via USB to the Pico 2 board. Although somewhat sketchy, it totally serves to get DVDs playing on the Dreamcast. If only Sega had not skimped on those license fees, perhaps.
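Whatever the exact protocol, the heart of such a setup is pushing each encoded frame across links whose transfer sizes are far smaller than a frame, so the data has to be chunked and reassembled. The toy sketch below shows only that step; the chunk size and header layout here are invented for illustration, and the real formats live in [Throaty Mumbo]’s repository:

```python
import struct

CHUNK = 1024  # hypothetical per-transfer payload size, not the real one

def split_frame(frame_id, data):
    """Split one encoded frame into numbered chunks, each prefixed with a
    6-byte header: frame id (u16), chunk index (u16), total chunks (u16)."""
    total = (len(data) + CHUNK - 1) // CHUNK
    for i in range(total):
        payload = data[i * CHUNK:(i + 1) * CHUNK]
        yield struct.pack("<HHH", frame_id, i, total) + payload

def reassemble(chunks):
    """Strip headers and reorder chunks by index to recover the frame."""
    parts = {}
    for chunk in chunks:
        _frame_id, index, _total = struct.unpack("<HHH", chunk[:6])
        parts[index] = chunk[6:]
    return b"".join(parts[i] for i in range(len(parts)))

frame = bytes(range(256)) * 10        # 2560-byte stand-in "encoded frame"
chunks = list(split_frame(7, frame))  # yields 3 chunks of up to 1 KiB
assert reassemble(list(reversed(chunks))) == frame  # order doesn't matter
```

The index-and-total header is what lets the receiving side tolerate chunks arriving out of order, which matters once two hops (Pi to Pico, Pico to console) are involved.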
We may receive a commission on purchases made from links.
The workshop has become a place with specialized gadgets for just about every task you can imagine. However, all this niche inventory often makes your workspace more complicated. It leaves you with a cluttered toolbox packed with pricey, single-purpose items that rarely get used. For many hobbyists and pros, that high-tech solution or a really specific manual tool can be tough to pass up when you’re browsing the hardware store aisles.
If you take a closer look at how useful these items actually are, you’ll see that the classic, versatile tools that have helped tradespeople for generations are often superior to modern, specialized versions. Many of these niche items aren’t good investments because they lack the adaptability of standard equipment.
By taking a close look at these pricey novelties, you can better appreciate the value of a streamlined, multipurpose tool kit. Tools like speed squares, bungee cords, and extraction sockets can handle a wide range of problems across different projects and have many uses, unlike tools designed for a single use. Even with professional marketing and shiny finishes, you’re probably better off leaving these on the shelf.
Digital Angle Gauge
The Craftsman Digital Angle Gauge is impressive, but it’s a lot more than you probably need. It’s built as a four-function tool, so it works as an angle finder, a compound cut calculator, a protractor, and a standard level. It can measure angles from 0 to 220 degrees and stays accurate to the nearest 0.1 degree. It’s made from durable aluminum, but is still pretty heavy at 2.7 pounds.
This is the kind of tool you could find at Home Depot without ever realizing it existed. Digital gauges are great if you need decimal-point precision, but you don’t really need that for framing walls or building furniture. A standard speed square or a sliding T-bevel will give you plenty of accuracy for almost any project. Bringing a device with two delicate LCD screens onto a dusty, rough job site is just asking for problems.
One dropped board or a misplaced hammer swing can shatter those screens, turning your expensive tool into useless aluminum. You’re also going to get tired of dealing with batteries and electronic quirks. Even though the tool is built to be tough, an analog version will never run out of power in the middle of a measurement.
Universal Nut Cracker
The Craftsman Auto Universal Nut Cracker is meant to save you when a nut is stuck and just won’t budge. It uses a hardened steel cutter to split the hardware, working on sizes from 5/16-inch to 7/8-inch across the flats. It’s designed to break rusted or frozen nuts without messing up the threads on the bolt underneath. While that sounds pretty good, it’s often tough to use in real-world situations, like in a cramped engine bay where the frame just won’t fit.
Even though it looks small, it measures 8.35 inches long, 3.35 inches wide, and 1.34 inches high. The maker says you can’t use power tools with it, so you’re stuck using your hands in tight spots where you probably can’t get much leverage anyway. A good set of extraction sockets is usually a better pick for rounded or stuck nuts, since those work on many sizes and aren’t hard to find. Instead of fighting with this tricky gadget, you could just grab a hacksaw or a torch to get that hardware off.
Even the few people who bought it from Craftsman have given it an average rating of 1 out of 5 stars. Store reviews, like the poor ones from Ace Hardware, often offer valuable insight from buyers.
Auto Caliper Hanger Set
The Craftsman Auto Caliper Hanger Set is a classic example of a tool you just don’t need to pick up. This universal kit works for cars with disc brakes, and it’s supposed to hold the calipers securely while you’re doing brake work. It’s designed to keep the heavy caliper from hanging on your rubber brake lines, which could really damage them. It’s basically a heavy-duty S-hook with a tough coating, so you can reuse it.
Even with all that in mind, it’s really just a single-purpose item that’ll mostly just clutter up your toolbox, which shouldn’t have tools you never use anyway. You can get the same result with things you probably already have in your garage. A basic bungee cord from Tractor Supply, or even a piece of scrap wire from an old coat hanger works just as well. You just bend the wire into an S-shape, and you’re good to go.
This is basically just a simple piece of bent metal made in China. The set does come with a limited lifetime warranty, and the company says it’ll replace it for any reason, even without a receipt. Still, there’s really no reason to spend your money on a dedicated hanger when alternatives you probably have will work similarly.
Auto LED Inspection Mirror
The Craftsman Auto LED Inspection Mirror might seem like a smart way to check dark engine corners or behind walls, but it’s mostly a gimmick. It comes with a telescoping wand that has a rubber handle, a 2-inch mirror, and a swivel joint to help you get into tricky spots. The shaft begins at 6-1/4 inches and can stretch out to 37-1/2 inches.
The big selling point is its built-in LED light, which is meant to help you spot leaks or dropped bolts. However, that light is actually its main problem. Since it has an LED, the mirror needs a CR2032 battery to operate. These batteries last a while in a key fob, but drain relatively quickly with larger devices.
For daily work, a standard telescoping mirror along with a basic headlamp or flashlight is plenty. When you separate the light from the mirror, you actually get better lighting angles. You can bounce the light off the glass to see what you’re checking out without the glare from the built-in LED messing up the reflection. You could even just put a separate light source in the engine bay to light up the whole area instead of counting on one tiny light on a stick.
3-Jaw Oil Filter Wrench
The Craftsman 3-jaw Oil Filter Wrench is another niche item that most people can live without. It’s marketed as a universal way to handle oil changes on different vehicles, promising to make the job simpler for anyone, regardless of their skill level. The tool uses metal jaws made from heat-treated steel. It’s designed to handle filters from 2 inches to 4-1/2 inches in diameter. It’s a low-profile item that’s 1.61 inches high and about 6.85 inches long, weighing in at 0.82 pounds.
Even with those specs and a lifetime warranty, this gadget isn’t a necessary purchase. It uses a gear mechanism to grip the filter while you turn it with a 3/8-inch or 1/2-inch drive ratchet. While it technically works, it’s not as versatile as some options. You likely already have many of the basic oil change tools from a store like Harbor Freight. A pair of filter pliers can handle the same job and will fit a much wider range of filter sizes.
This wrench is a heavy chunk of metal that takes up space. Sticking to a reliable strap wrench or standard pliers will save you money and keep your collection uncomplicated. Those tools also work for basic plumbing repairs, whereas this wrench does only one thing.
Why these were picked
YAKOBCHUK VIACHESLAV/Shutterstock
The hardware aisle is filled with specialized gadgets, like those in the Craftsman catalog, that solve singular problems rather than being multi-function tools. While these get marketed as revolutionary solutions to common mechanical hurdles, they can be a poor investment. These niche items tend to prioritize flashy, single-purpose engineering over the rugged adaptability that has defined the trades for generations.
Standard equipment like speed squares, extraction sockets, bungee cords, and basic strap wrenches gives you a level of durability and broad utility that specialized gear can’t match. These classic alternatives aren’t just way more affordable; they also do the same job without electronic glitches or taking up too much space. Being smart in the workshop is often about being clever, not about buying the fanciest gadgets.
This model essentially bridges the gap between the standard and Ultra Galaxy phones with high-end features, minus the S Pen. Some of these premium features could include the S26 Ultra’s new Privacy Display feature.
All of this sounds smart on paper, but it also sounds like acceptance.
After spending time with the Galaxy S26, I have a recurring thought. This compact phone has a solid software experience, reliable cameras, and is generally easy to recommend as a base flagship. But “reliable” is no longer enough when these devices carry flagship pricing.
The Samsung Galaxy S26 Ultra, angled for its Privacy Display.Tom Bedford / Digital Trends
The regular Galaxy S phones are where the problem is
Samsung’s own S26 comparison page shows the base S26 stuck at 25W charging, while the S26+ goes to 45W, and the Ultra got upgraded to 60W. The camera story lands the same way. Samsung’s Galaxy S26 and Galaxy S26+ share the same 12MP ultrawide, 50MP wide, and 10MP telephoto setup, while the Ultra gets the far more ambitious 50MP ultrawide, 200MP main, and 50MP + 10MP telephoto mix.
So apart from the Ultra, the other two models feel like afterthoughts, expensive flagship afterthoughts at that. This is why a Galaxy S27 Pro could make the S27 lineup feel less lethargic. Just as Google distinguishes the base Pixel 10 from the Pixel 10 Pro, and Apple the base iPhone 17 from the iPhone 17 Pro, a clear distinction could define the intermediate model. Right now, the base and Plus models do just enough. The Ultra does everything.
Digital Trends
The Galaxy S27 Pro needs to be a course correction, not a rebrand
But a Pro model only works if Samsung uses it to create a truly convincing middle. One with faster charging, stronger camera hardware, and a better reason to exist below the Ultra.
I think Samsung is definitely in need of this change. But the name alone won’t be enough. If Samsung wants the Pro phone to matter, it has to make this non-Ultra Galaxy S phone feel like more than just a safe default, and make it feel worth the premium money again. Otherwise, the S27 Pro will just be another label slapped onto a lineup where all the excitement lives at the top.
The app is free to download, and once its Gemma-based automatic speech recognition (ASR) models are downloaded, you can start dictating on your phone. In the app, you can see the live transcription, and when you hit pause, the app automatically filters out filler words like “um” and “ah” and polishes the text.
Below the transcript are options like “Key points”, “Formal”, “Short”, and “Long” to transform the text.
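Conceptually, the filler-word cleanup is a filtering pass over the raw transcript. A crude regex sketch gives the flavor, though it is nothing like Google’s actual approach, which relies on Gemma and Gemini models rather than pattern matching:

```python
import re

# A few common fillers; a model-based cleaner handles far more than this.
FILLERS = r"\b(um+|uh+|ah+|er+)\b[,.]?\s*"

def polish(transcript):
    """Strip filler words and tidy spacing in a raw dictation transcript."""
    cleaned = re.sub(FILLERS, "", transcript, flags=re.IGNORECASE)
    cleaned = re.sub(r"\s+", " ", cleaned).strip()
    # Recapitalize in case a filler opened the sentence.
    return cleaned[:1].upper() + cleaned[1:]

print(polish("Um so the uh quarterly numbers look ah solid."))
# → So the quarterly numbers look solid.
```

The hard part, and the reason Eloquent leans on language models, is handling mid-sentence self-corrections, where the right output depends on meaning rather than a fixed word list.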
Image Credits: Screenshot by TechCrunch
You can also turn off the cloud mode to use local-only processing. (When cloud mode is on, the app uses cloud-based Gemini models for text cleanup.) Google AI Edge Eloquent can import certain keywords, names, and jargon from your Gmail account, if desired. Plus, you can add your own custom words to the list.
The app keeps a history of your transcription sessions and lets you search through all of them. It can show you the words dictated in the last session, your words-per-minute speed, and the total number of words spoken.
“Google AI Edge Eloquent is an advanced dictation app engineered to bridge the gap between natural speech and professional, ready-to-use text. Unlike standard dictation software that transcribes stumbles and filler words verbatim, Eloquent utilizes AI to capture your intended meaning. It automatically edits out ‘ums,’ ‘uhs,’ and mid-sentence self-corrections, outputting clean, accurate prose,” the company’s App Store description reads.
I was saying “Transcription.” Still early days for this app. Image Credits: Screenshot by TechCrunch
While the app is currently only available on iOS, the App Store description references an Android version. (We have reached out to Google for more information, and will update the story if we hear back.)
According to the description, Eloquent offers “seamless Android integration,” where it can be set as users’ default keyboard for system-wide access across any text field. Plus, the app will be able to use the floating button feature, similar to the one Wispr Flow uses on Android, for easy access to transcription from anywhere.
AI-powered transcription apps are gaining popularity among users as speech-to-text models get better. With this experimental app, Google is joining the trend. If this test is successful, we could see improved transcription features across Android, too.
Enterprise AI programs rarely fail because of bad ideas. More often, they get stuck in ungoverned pilot mode and never reach production. At a recent VentureBeat event, technology leaders from MassMutual and Mass General Brigham explained how they avoided that trap — and what the results look like when discipline replaces sprawl.
At MassMutual, the results are concrete: 30% developer productivity gains, IT help desk resolution times reduced from 11 minutes to one, and customer service calls cut from 15 minutes to just one or two.
“We’re always starting with why do we care about this problem?” Sears Merritt, MassMutual’s head of enterprise technology and experience, said at the event. “If we solve the problem, how are we gonna know we solved it? And, how much value is associated with doing that?”
MassMutual, a 175-year-old company serving millions of policy owners and customers, has pushed AI into production across the business — customer support, IT, customer acquisition, underwriting, servicing, claims, and other areas.
Merritt said his team follows the scientific method, beginning with a hypothesis and testing whether it has an outcome that will tangibly drive the business forward. Some ideas are great, but they may be “intractable in the business” due to factors like lack of data or access, or regulatory constraint.
“We won’t go any further with an idea until we get crystal clear on how we’re going to measure, and how we’re going to define success.”
Ultimately, it’s up to different departments and leaders to define what quality means: Choose a metric and define the minimum level of quality before a tool is placed into the hands of teams and partners.
That starting point creates a quick feedback loop. “The things that we find slow us down is where there isn’t shared clarity on what outcome we’re trying to achieve,” which can lead to confusion and constant re-adjusting, said Merritt. “We don’t go to production until there is a business partner that says, ‘Yes, that works.’”
His team is strategic about evaluating emerging tools, and “extremely rigorous” when testing and measuring what “good” means. For instance, they perform trust scoring to lower hallucination rates, establish thresholds and evaluation criteria, and monitor for feature and output drift.
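Output-drift monitoring of the kind described here can start as simply as comparing a live window of model confidence scores against a baseline distribution and alerting past a threshold. The metric and numbers below are an illustrative sketch, not MassMutual’s actual tooling:

```python
import statistics

def drift_alert(baseline, live, threshold=2.0):
    """Flag output drift when the live window's mean score sits more than
    `threshold` baseline standard deviations away from the baseline mean."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    shift = abs(statistics.mean(live) - mu) / sigma
    return shift > threshold

baseline = [0.90, 0.92, 0.91, 0.89, 0.93, 0.90, 0.91, 0.92]
steady = [0.91, 0.90, 0.92, 0.89]
drifted = [0.70, 0.72, 0.69, 0.71]  # confidence scores collapsing

print(drift_alert(baseline, steady))   # → False
print(drift_alert(baseline, drifted))  # → True
```

Production systems typically track richer statistics than a mean shift, but the pattern is the same: define the threshold up front, then let the monitor decide when a model stops behaving like its baseline.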
Merritt also operates with a no-commitment policy — meaning the company doesn’t lock itself into using a particular model. It has what he calls an “incredibly heterogeneous” technology environment combining best of breed models alongside mainframes running on COBOL. That flexibility isn’t accidental. His team built common service layers, microservices and APIs that sit between the AI layer and everything underneath — so when a better model comes along, swapping it in doesn’t mean starting over.
Because, Merritt explained, “the best of breed today might be the worst of breed tomorrow, and we don’t want to set ourselves up to fall behind.”
Credit: Brian Malloy Photo
Weeding instead of letting a thousand flowers bloom
Mass General Brigham (MGB), for its part, took more of a spray and pray approach — at first.
Around 15,000 researchers in the not-for-profit health system have been using AI, ML, and deep learning for the last 10 to 15 years, CTO Nallan “Sri” Sriraman said at the same VB event.
But last year, he made a bold choice: His team shut down a sprawl of non-governed AI pilots. Initially, “we did follow the thousand flowers bloom [methodology], but we didn’t have a thousand flowers, we had probably a few tens of flowers trying to bloom,” he said.
Like Merritt’s team at MassMutual, MGB pivoted to a more holistic view, examining why they were developing certain tools for specific departments or workflows. They questioned what capabilities they wanted and needed, and what investment those required.
Sriraman’s team also spoke with their primary platform providers — Epic, Workday, ServiceNow, Microsoft — about their roadmaps. This was a “pivotal moment,” he noted, as they realized they were building in-house tools that vendors were already providing (or were planning to roll out).
As Sriraman put it: “Why are we building it ourselves? We are already on the platform. It is going to be in the workflow. Leverage it.”
That said, the marketplace is still nascent, which can make for difficult decisions. “The analogy I will give is when you ask six blind men to touch an elephant and say, what does this elephant look like?” Sriraman said. “You’re gonna get six different answers.”
There’s nothing wrong with that, he noted; it’s just that everybody is discovering and experimenting as the landscape keeps shifting.
Instead of a wild West environment, Sriraman’s team distributes Microsoft Copilot to users across the business, and uses a “small landing zone” where they can safely test more sophisticated products and control token use.
They also began “consciously embedding AI champions” across business groups. “This is kind of a reverse of letting a thousand flowers bloom, carefully planting and nourishing,” Sriraman said.
Observability is another big consideration; he describes real-time dashboards that manage model drift and safety and allow IT teams to govern AI “a little more pragmatically.” Health monitoring is critical with AI systems, he noted, and his team has established principles and policies around AI use, not to mention least access privileges.
In clinical settings, the guardrails are absolute: AI systems never issue the final decision. “There’s always going to be a doctor or a physician assistant in the loop to close the decision,” Sriraman said. He cited radiology report generation as one area where AI is used heavily, but where a radiologist always signs off.
Sriraman was clear: “Thou shall not do this: Don’t show PHI [protected health information] in Perplexity. As simple as that, right?”
And, importantly, there must be safety mechanisms in place. “We need a big red button, kill it,” Sriraman emphasized. “We don’t put anything in the operational setting without that.”
Ultimately, while agentic AI is a transformative technology, the enterprise approach to it doesn’t have to be dramatically different. “There is nothing new about this,” Sriraman said. “You can replace the word BPM [business process management] from the ’90s and 2000s with AI. The same concepts apply.”