

Data visualization all-stars unveil Ridge AI with $2.6M to fix the analytics problem for SaaS apps


Ridge AI co-founders Jeffrey Heer and Ellie Fields. (Ridge AI Photo)

Ellie Fields and Jeffrey Heer know data visualization from the inside: Fields spent more than 12 years as a product and marketing leader at Tableau, and Heer is the University of Washington professor whose open-source tools are widely used for web-based visualization.

But even as they and their colleagues pushed the field forward, they couldn’t escape the same conclusion: presenting and analyzing data on the web is still fundamentally broken.

Their solution: Ridge AI, a Seattle-based startup that uses AI and browser-based technology to help software companies build and deploy interactive dashboards and data agents in hours instead of days or months, embedding them directly in their products for use by their customers.

The company calls its core product a “ridge” — a dashboard and a data agent that share a common data set, letting users get visual context from the dashboard and ask follow-up questions through the agent.

Funding: Ridge AI is emerging from stealth Monday with $2.6 million in pre-seed funding led by Madrona. The Seattle venture capital firm’s investment was spearheaded by Managing Director Tim Porter and Venture Partner Mark Nelson, the former CEO of Tableau.


Joining in the funding is a roster of angel investors that reads like a who’s who of analytics, AI and data: Chris Stolte, Tableau co-founder and former CTO; Carlos Guestrin, co-founder of Turi and director of Stanford’s AI Lab; Adrien Treuille, founder of Streamlit; Elissa Fink, former Tableau CMO; and Jeff Hammerbacher, Cloudera founder, among others.

Target market: Although their technology could be applied broadly, Ridge AI is focusing at the outset on serving software-as-a-service (SaaS) companies, giving them a way to present rich, interactive analytics to the people and businesses that use their products.

In an interview, Fields said the need is especially acute when a SaaS company is trying to renew a customer’s contract. The product might be delivering real results, but if the people making the buying decision can’t see that in the data, the deal can be at risk.

“The CFO is going to be asking, is anyone even using this?” Fields said, calling it one of the use cases where Ridge AI’s technology could be of significant value to SaaS firms.


The pressure to prove this value has intensified amid the “SaaS-pocalypse,” as it’s known — as companies consolidate their software spending and the rise of custom AI-coded apps makes many of them question whether existing tools are worth keeping.

What they’re solving: Madrona’s Nelson said he experienced the larger problem during his time as CTO of Concur, where the company built an analytics product on top of IBM Cognos, giving customers the ability to glean insights into employee travel and spending.

It was important to the business, he said, but it was a pain to maintain, and it wasn’t in Concur’s core skillset. The problem persists for many SaaS companies to this day.

SaaS companies have historically had to choose between heavyweight business intelligence platforms like Tableau and Power BI, specialty embedded analytics tools, or building their own. Fields said none of those options was purpose-built for the problem Ridge is solving.


Founders: Ridge AI was co-founded by Fields, who serves as CEO, and Heer, chief scientist, who will continue as a UW professor in addition to working on the company.

Also on the team: Andy Caley, a founding engineer who previously worked at Tableau, and Fritz Lekschas, a founding research engineer with a Ph.D. from Harvard and more than 20 publications in data visualization.

From left, Madrona’s Tim Porter, Ridge AI CEO Ellie Fields, and Madrona’s Mark Nelson. (Madrona Photo)

Fields and Heer were introduced by Madrona’s Nelson and Porter. Nelson had known Fields since she worked for him at Tableau and he had separately kept in touch with Heer through his UW work. Porter, meanwhile, had gone to Stanford Business School with Fields. 

“I can’t think of two people I like more, and would bet on more, than Jeff and Ellie,” Nelson said, describing the pairing as an example of what’s possible in Seattle’s tight-knit tech community.

Heer previously co-founded Trifacta, a data transformation company acquired by Alteryx in 2022. He and his academic collaborators have produced some of the most widely used open-source tools in data visualization, including Vega and Vega-Lite, D3.js, and the Mosaic framework that serves as Ridge AI’s technical foundation.


Fields joined Tableau as its first product marketer and rose to senior vice president of product development over more than 12 years, spanning the company’s IPO and its acquisition by Salesforce. She went on to serve as chief product and engineering officer at SalesLoft, where she experienced firsthand the problem Ridge is now trying to solve.

Technology: Ridge runs in the user’s web browser rather than on a remote server, using Heer’s open-source Mosaic framework and DuckDB, an in-browser database. That architecture delivers near-instant interactivity and means the software company embedding it doesn’t incur cloud computing costs with every dashboard interaction.
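To make the client-side query pattern concrete, here is a deliberately rough sketch. It uses Python’s standard-library sqlite3 as a stand-in for an embedded engine like DuckDB, and every table, column, and function name is invented for the example; it illustrates the architecture, not Ridge’s actual APIs.

```python
import sqlite3

# Illustrative stand-in: an in-process database playing the role of an
# embedded, in-browser engine like DuckDB. Dashboard interactions become
# local queries instead of round-trips to a cloud backend.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (account TEXT, feature TEXT, uses INTEGER)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [("acme", "reports", 42), ("acme", "exports", 7), ("globex", "reports", 3)],
)

def dashboard_tile(account):
    """A 'dashboard tile' is just a local aggregation; re-filtering by
    account re-runs instantly because the data is already client-side."""
    cur = conn.execute(
        "SELECT feature, SUM(uses) FROM events"
        " WHERE account = ? GROUP BY feature ORDER BY feature",
        (account,),
    )
    return cur.fetchall()

print(dashboard_tile("acme"))  # [('exports', 7), ('reports', 42)]
```

Because the query engine and the data sit together on the client, each slider drag or filter click is a local query rather than a billable server request, which is the cost argument the architecture rests on.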

On the creation side, AI agents handle the visualization design, so product managers can describe what they want in business terms rather than learning a specialized tool.

What’s next: Fields said Ridge AI plans to focus on its SaaS wedge for at least a couple of years before expanding, noting that the market has historically been under-served. 


The company has been working with a small number of pilot customers, and is now inviting additional companies into a closed beta, accepting applications at ridgedata.ai.



Spain’s Xoople raises $130m to build the data infrastructure AI needs to understand Earth


In short: Xoople, a Madrid-based geospatial data company founded in 2019, has raised a $130 million Series B led by Nazca Capital, bringing its total funding to $225 million and pushing its valuation into unicorn territory. Co-investors included MCH Private Equity, CDTI (the Spanish government’s technology development fund), Buenavista Equity Partners, and Endeavor Catalyst. Alongside the raise, Xoople announced a partnership with US space and defence contractor L3Harris Technologies to build sensors for its own satellite constellation, designed to produce Earth surface data it says will be “two orders of magnitude better than existing monitoring systems.” The company’s EarthAI platform, built on Microsoft Azure and distributed through Microsoft and Esri, delivers continuous surface intelligence for insurers, farmers, governments, and infrastructure operators.

Xoople has spent seven years building something that did not previously exist in a commercially deployable form: a continuous, AI-native data layer for the Earth’s surface. The Madrid startup, founded in 2019, emerged from that development period with €115 million in prior funding, a platform embedded in the two most widely used enterprise geospatial ecosystems in the world, and a thesis that the AI era will require a fundamentally different approach to Earth observation — one designed from the ground up for machine learning rather than adapted from satellite imagery workflows built for human analysts. The $130 million Series B, led by Nazca Capital, confirms that investors believe that thesis is credible enough to back at scale.

CEO and co-founder Fabrizio Pirondini told TechCrunch the raise brings Xoople’s total funding to $225 million and puts the company in unicorn territory on valuation. The round was joined by MCH Private Equity, CDTI, the Spanish government-backed technology development fund that has also backed Nazca Capital’s aerospace and defence fund, Buenavista Equity Partners, and Endeavor Catalyst.

What EarthAI actually does

Xoople’s core product, EarthAI, is an end-to-end Earth intelligence system. It ingests continuous surface data, currently sourced from government spacecraft and third-party satellite networks, and processes it into AI-ready datasets that can be queried for change detection, risk prediction, and environmental monitoring. The key design choice is continuity: rather than producing point-in-time images for human review, EarthAI is built to stream a persistent, structured view of the planet’s surface into AI models that need regular, reliable ground truth.
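The “queried for change detection” idea can be illustrated with a toy sketch. This is not Xoople’s API; the grid representation, threshold, and function name are all invented. Two observation passes over the same cells are compared, and only cells whose values shifted sharply are flagged, which is the structured-query view of the surface that the article describes.

```python
# Illustrative toy, not Xoople's API: "continuous" surface data modeled as
# successive grids of values, queried for change rather than viewed as images.
def detect_change(before, after, threshold):
    """Return (row, col) cells whose value shifted by more than `threshold`
    between two observation passes over the same area."""
    flagged = []
    for r, (row_a, row_b) in enumerate(zip(before, after)):
        for c, (a, b) in enumerate(zip(row_a, row_b)):
            if abs(b - a) > threshold:
                flagged.append((r, c))
    return flagged

before = [[0.2, 0.2], [0.3, 0.2]]
after  = [[0.2, 0.9], [0.3, 0.2]]  # one cell changed sharply between passes
print(detect_change(before, after, threshold=0.5))  # [(0, 1)]
```

The real system works on petabyte-scale rasters with calibrated sensors, but the consumer-facing contract is the same shape: a query over time-ordered surface data returning structured change events, not an image for a human to inspect.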


The use cases span industries that share a dependence on understanding what is happening on the physical surface of the Earth. For agriculture, EarthAI provides early detection of crop stress, monitors soil health and water conditions, and generates data that enables farmers to participate in carbon credit markets. For insurance, it enables more precise climate risk pricing and real-time verification of natural disaster claims, removing the delay and subjectivity of ground-based assessments. For infrastructure operators, it monitors physical assets for signs of stress or degradation before failures occur. For governments, it supports emergency planning, environmental enforcement, and humanitarian response. Capital flowing into specialised AI applications at the intersection of science, data, and infrastructure has accelerated considerably over the past year, and Xoople sits precisely at that intersection.


The satellite play

The $130 million will fund Xoople’s transition from a platform built on others’ data to one powered by its own. Alongside the Series B, the company announced a partnership with L3Harris Technologies, a US space and defence contractor, to design and manufacture sensors for Xoople’s own satellite constellation. The sensors will collect optical data. Pirondini told TechCrunch that the constellation is designed to produce “a stream of data that is going to be two orders of magnitude better than existing monitoring systems,” a claim that, if borne out, would represent a substantial leap over the imagery quality currently available from commercial Earth observation operators.

That claim is where Xoople meets its competitive reality. The company is entering a market that includes Vantor (formerly Maxar Intelligence, rebranded in October 2025), Planet Labs, BlackSky, Airbus Defence and Space, ICEYE, and Capella Space — all of which have satellites already in orbit and established AI-focused data processing pipelines. Companies building the hardware and data layers that AI depends on face a lengthy gap between the announcement of a new approach and its delivery in deployable form, and Xoople’s constellation is not yet in orbit. For now, EarthAI runs on data it did not produce. The L3Harris partnership signals that the proprietary data supply is the next phase.

Distribution before data

Xoople’s strategic sequencing is unusual for an Earth observation company. Most competitors in the space led with hardware — launching satellites, then figuring out distribution. Xoople did the reverse: it spent its first seven years embedding its platform into Microsoft and Esri, the two dominant environments where enterprise buyers, governments, and GIS professionals already live. Neither Microsoft nor Esri has its own proprietary satellite data. Xoople positioned itself to supply that gap from inside the platforms where the purchasing decisions are made.

The Microsoft relationship is structural: Xoople’s platform runs on Azure, and the company is integrated with Microsoft’s Planetary Computer Pro, which delivers AI-powered geospatial insights for enterprise use. Esri, the world’s largest geospatial software company, is a partner distributor. The implication is that when Xoople’s own constellation is operational and its data quality delivers on the “two orders of magnitude” promise, it will have distribution in place that its newer competitors would need years to replicate. The investment flowing into cloud-based AI data infrastructure has made the ability to process and deliver petabytes of Earth surface data at low latency a tractable problem; the scarcity is in the quality and continuity of the underlying data itself.


A Spanish unicorn in a European context

Xoople’s raise is one of the larger deep tech rounds to come out of Spain in recent years, and it lands at a moment when European space and defence investment has been accelerating. Nazca Capital, which led the Series B, runs Spain’s largest private equity fund specialised in aerospace and defence, a fund that also received a €294 million commitment from CDTI and a €40 million investment from the European Investment Fund. The investor composition of the Xoople round (government-backed funds, European private equity, and Endeavor Catalyst, which focuses on high-impact technology entrepreneurs) reflects the persistent tension in European technology between deep technical ambition and the capital required to realise it: the funding is patient, multi-source, and has a public interest dimension that pure venture rounds often lack.

The earth observation market was valued at $7.04 billion in 2025 and is projected to reach $14.55 billion by 2034, growing at just over 8% annually. Xoople is betting that as AI models grow more capable and more dependent on real-world data, the market for continuous, structured Earth surface intelligence, rather than periodic imagery, will grow faster than that aggregate. A year in which the appetite for AI applications in climate, infrastructure, and environmental risk grew considerably provided the validation Xoople needed; the $130 million is the bet that the second half of the decade will prove it right at scale.



Closing the data security maturity gap: Embedding protection into enterprise workflows


Presented by Capital One


Data security remains one of the least mature domains in enterprise cybersecurity. According to IBM, 35% of breaches in 2025 involved unmanaged data sources, or “shadow data,” revealing a systemic lack of basic data awareness. It’s not because of a lack of tooling or investment. It’s because many organizations still struggle with the most fundamental questions: What data do we have? Where does it live? How does it move? And who is responsible for it?

In an increasingly complex ecosystem of data sources, cloud platforms, SaaS applications, APIs, and AI models, those questions are only becoming more difficult to answer. Closing the maturity gap in data security demands a cultural shift where security is no longer treated as an afterthought. Instead, protection is embedded throughout the full data lifecycle, grounded in a robust inventory, clear classification, and scalable mechanisms that translate policy into automated guardrails.

Visibility as the foundation

The most persistent barrier to data security maturity is basic visibility. Organizations often focus on how much data they hold, but not on what that data is made up of. Does it contain personally identifiable information (PII)? Financial data? Health information? Intellectual property? Without this level of understanding and inventory, it’s a lot tougher to implement meaningful protection.


This can be avoided, however, by prioritizing enterprise capabilities that can detect sensitive data at scale across a large and varied footprint. Detection must be paired with action: deleting data where it’s no longer needed, and securing the data that remains by aligning enforcement to a well-defined policy.

Mature organizations should start by treating data security as an “understanding your environment” problem. Maintain an inventory, classify what’s in the ecosystem, and align protections with the classification rather than solely relying on perimeter controls or point solutions to scale.
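The inventory-and-classify step can be sketched as a detection pass that maps findings to class labels. This is a minimal illustration with invented class names and naive regex detectors; real systems layer on validation (e.g. Luhn checks), context analysis, and ML models.

```python
import re

# Minimal sketch of detection-plus-classification. The detectors and class
# labels here are invented for illustration; production scanners are far
# more robust, but the shape (detector -> class label -> policy) is the same.
DETECTORS = {
    "PII:email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PII:ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "Financial:card": re.compile(r"\b\d{4}(?:[ -]?\d{4}){3}\b"),
}

def classify(text):
    """Return the set of data classes found in a text field. The class
    label is what downstream policy keys on (tokenize, restrict, delete)."""
    return {label for label, rx in DETECTORS.items() if rx.search(text)}

record = "Contact jane@example.com, card 4111 1111 1111 1111"
print(sorted(classify(record)))  # ['Financial:card', 'PII:email']
```

Running a pass like this over every store is what turns “how much data do we hold” into “what is that data made of,” which is the inventory the rest of the program depends on.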

Securing chaotic data

One reason data security has lagged behind other security domains is that data itself is inherently chaotic. Unlike perimeter security, which relies on explicit ports and defined boundaries, data is largely unpredictable. That is to say, the same underlying information may appear across very different formats: structured databases, unstructured documents, chat transcripts, or analytics pipelines. Each may have slightly different encodings or transformations that introduce unforeseen, and often undetected, changes to the data itself.

Human behavior compounds the challenge, with different actions introducing risks in ways that perimeter controls simply can’t anticipate: a credit card number copied into a free-form comment field, a spreadsheet emailed outside its intended audience, or a dataset repurposed for a new workflow.


When protection is bolted on at the end of a workflow, organizations create blind spots. They rely on downstream checks to catch upstream design flaws. Over time, complexity accumulates and the risk of exposure becomes a question of when, not if.

A more resilient model assumes that sensitive data will surface in unexpected places and formats, so protection is embedded from the moment data is captured. Defense-in-depth becomes a design principle: segmentation, encryption at rest and in transit, tokenization, and layered access controls.

Critically, these safeguards travel with the data lifecycle, from ingestion to processing, analytics and publishing. Instead of retrofitting controls, organizations design for chaos. They accept variability as a given and build systems that remain secure even when data diverges from expectations.

Scaling governance with automation

Data security becomes operationally sustainable when governance is enforced through automation from the start. Coupled with clear expectations, this creates bounded contexts: teams understand what is permitted, under what conditions, and with what protections, so data can be used effectively.


This matters more than ever today. AI systems often require access to huge volumes of data across domains, which makes policy implementation particularly challenging. Doing it effectively and safely requires deep understanding, strong governance policies, and automated protection.

Security techniques such as synthetic data and token replacement enable organizations to preserve analytical context while making sensitive values harder to read. Policy-as-code patterns, APIs, and automation can handle tokenization, deletion, retention constraints, and dynamic access controls. With guardrails built into the platforms they use, engineers can focus more on innovating with data and elevating business outcomes securely.
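A vault-style token-replacement scheme of the kind described above can be sketched as follows. This is illustrative only (the class and token format are invented); production tokenization runs as a hardened, audited service, often with format preservation so tokens fit existing schemas.

```python
import secrets

# Vault-style tokenization sketch: sensitive values are swapped for random
# tokens, and the mapping lives only inside the vault. Analytics pipelines
# see tokens; only a tightly gated path can reverse them.
class TokenVault:
    def __init__(self):
        self._forward = {}   # value -> token
        self._reverse = {}   # token -> value

    def tokenize(self, value):
        if value not in self._forward:   # stable: same value, same token
            token = "tok_" + secrets.token_hex(8)
            self._forward[value] = token
            self._reverse[token] = value
        return self._forward[value]

    def detokenize(self, token):
        return self._reverse[token]      # privileged operation in practice

vault = TokenVault()
t = vault.tokenize("4111-1111-1111-1111")
assert t != "4111-1111-1111-1111"                    # raw value never leaves
assert vault.tokenize("4111-1111-1111-1111") == t    # joins/grouping still work
assert vault.detokenize(t) == "4111-1111-1111-1111"
```

The stable value-to-token mapping is what preserves analytical context: counts, joins, and group-bys over tokens give the same answers as over the raw values, without exposing them.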

AI systems must also operate within the same governance and monitoring expectations as human workflows. Permissions, telemetry, and controls around what models can access, along with the information they can publish, are essential. Governance will always introduce a degree of friction. The goal is to make that friction well understood, navigable and increasingly automated. Confirming purpose, registering a use case, and provisioning access dynamically based on role and need should be clear, repeatable processes.

At enterprise scale, this requires centralized capabilities that implement cybersecurity policy in the data domain: detection and classification engines, tokenization and detokenization services, retention enforcement, and ownership and taxonomy mechanisms that cascade risk management expectations into daily execution.


When done well, governance becomes an enablement layer rather than a bottleneck. Metadata and classification drive protection decisions automatically while accelerating business discovery and usage. Data is protected across its lifecycle by strong defenses like tokenization and deleted when required by regulation or internal policy. There should be no need for teams to “touch the data” manually for every control decision, with policy enforced by design.
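“Metadata and classification drive protection decisions automatically” is essentially policy-as-code: a lookup from class label to action, with retention enforced in the same decision. A minimal sketch, with class names and retention periods invented for the example:

```python
# Policy-as-code sketch: classification metadata drives the control decision,
# so no team has to "touch the data" manually per case. Class names and
# retention periods below are invented for illustration.
POLICY = {
    "PII": {"action": "tokenize", "retention_days": 365},
    "Financial": {"action": "tokenize", "retention_days": 2555},
    "Public": {"action": "allow", "retention_days": None},
}

def decide(data_class, age_days):
    """Map a classified record to a control action; unknown classes are
    quarantined rather than silently allowed."""
    rule = POLICY.get(data_class, {"action": "quarantine", "retention_days": None})
    limit = rule["retention_days"]
    if limit is not None and age_days > limit:
        return "delete"                  # retention enforcement takes priority
    return rule["action"]

print(decide("PII", age_days=30))      # tokenize
print(decide("PII", age_days=400))     # delete
print(decide("Unknown", age_days=1))   # quarantine
```

Because the decision is a pure function of classification and age, it can run inside pipelines and platforms as an automated guardrail, which is what turns governance from a bottleneck into an enablement layer.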

Building for the future

Put simply, closing the data security maturity gap is less about adopting a single breakthrough technology and more about operational discipline. Build the map. Classify what you have. Embed protection into workflows so that security is repeatable at scale.

For business leaders seeking measurable progress over the next 18–24 months, three priorities stand out.

First, establish a robust inventory and metadata-rich map of the data ecosystem. Visibility is non-negotiable. Second, implement classification tied to clear, actionable policy expectations. Make it obvious what protections each category demands. And finally, invest in scalable, automated protection schemes that integrate directly into development and data workflows.


When protection shifts from reactive bolt-on controls to proactive built-in guardrails, compliance becomes simpler, governance becomes stronger, and AI readiness becomes achievable, without compromising rigor.

Learn more about how Capital One Databolt, the enterprise data security solution from Capital One Software, can help your business become AI-ready by securing sensitive data at scale.


Andrew Seaton is Vice President, Data Engineering – Enterprise Data Detection & Protection, Capital One.


Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. For more information, contact sales@venturebeat.com.



UK Politicians Continue To Miss The Point In Latest Social Media Ban Proposal


from the does-no-one-remember-being-a-teen? dept

The UK is moving forward with its efforts to ban social media for young people. Ahead of this week’s House of Lords debate on the topic, we’re getting you situated with a primer on what’s been happening and what it all means.

What was the last vote about? 

On 9 March, the House of Commons discussed amendments tabled by the House of Lords in the government’s flagship legislation, the Children’s Wellbeing and Schools Bill. 

The House of Lords previously tabled an amendment to “prevent children under the age of 16 from becoming or being users” of “all regulated user-to-user services,” to be implemented by “highly-effective age assurance measures,” which effectively banned under-16s from social media. When this proposal came before the House of Commons, MPs defeated it by 307 votes to 173. 

Instead, the Commons proposed its own amendment: enabling the Secretary of State to introduce provisions “requiring providers of specified internet services” to prevent access by children, under age 18 rather than 16, to specified internet services or to specified features; and to restrict access by children to specified internet services which ministers provide. 


Who does this give powers to?

The Commons proposal redirects power from the UK Parliament and the UK’s independent telecom regulator Ofcom to the Secretary of State for Science, Innovation and Technology, currently Liz Kendall, who will be able to restrict internet access for young people and determine what content is considered harmful…just because she can. The amendment also empowers the Secretary of State to limit VPN use for under-18s, as well as restrict access to addictive features and change the age of digital consent in the country; for example, preventing under-18s from playing games online after a certain time.

Why is this a problem? 

This process is devoid of checks or accountability mechanisms, as ministers will not be required to demonstrate specific harms to young people, which essentially unravels years-long efforts by Ofcom to assess online services according to their risks. And given the moment the UK is currently in, such as refusing to protect trans and LGBTQ+ communities and fanning hostile and racist discourse, it is not unlikely that we’ll see ministers start restricting content that they ideologically or morally oppose, rather than content that is harmful as established by evidence and assessed pursuant to established human rights principles.

We know from other jurisdictions like the United States that legislation seeking to protect young people typically sweeps up a slew of broadly defined topics. Some laws block access to websites that contain “sexual material harmful to minors,” which has historically meant explicit sexual content. But some states are now defining the term more broadly, so that “sexual material harmful to minors” could encompass material like sex education; others simply list a variety of vaguely defined harms. In either instance, this bill would enable ministers to target LGBTQ+ content online by pushing it behind an under-18 age gate, and this risk is especially clear given what we already know about platform content policies.

How will this impact young people? 

The internet is an essential resource for young people (and adults) to access information, explore community, and find themselves. Beyond being spaces where people can share funny videos and engage with enjoyable content, social media enables young people to engage with the world in a way that transcends their in-person realm, as well as find information they may not feel safe to access offline, such as about family abuse or their sexuality. In severing this connection to people and information by banning social media, politicians are forcing millions of young people into a dark and censored world. 


How did each party vote? 

The initial push to ban under-16s from social media came from the Conservative Party, who have since accused the UK’s Prime Minister Keir Starmer of “dither and delay” for not committing to the ban. The Liberal Democrats have also called this “not good enough.” The Labour Party itself is split, with 107 Labour Party MPs abstaining in the vote on the House of Lords amendment. 

But we know that the issue of young people’s online safety is a polarizing topic that politicians have—and will continue to—weaponize for public support, regardless of their actual intentions. This is why we will continue to urge policymakers and regulators to protect people’s rights and freedoms online at all moments, and not just take the easy route for a quick boost in the polls.

How does this bill connect to the Online Safety Act?

The draft Children’s Wellbeing and Schools Bill that came from the Lords provided that any regulation pertaining to the well-being of young people on social media “must be treated as an enforceable requirement” under the Online Safety Act. The Commons amendment, however, starts by inserting a new clause that amends the Online Safety Act.

For more than six years, we’ve been calling on the UK government to pass better legislation around regulating the internet, and when the Online Safety Act passed we continued to advocate for the rights of people on the internet—including young people—as Ofcom implemented the legislation. This has been a protracted effort by civil society groups, technologists, tech companies, and others participating in Ofcom’s consultation process and urging the regulator to protect internet users in the UK.


The MPs’ amendment essentially rips this up. Technology Secretary Liz Kendall recently said that ministers intended to go further than the existing Online Safety Act because it was “never meant to be the end point, and we know parents still have serious concerns. That is why I am prepared to take further action.” But when that further action means empowering herself to make arbitrary decisions on content and access, and banning under-18s from social media, it causes much more harm than it solves.

Is the UK alone in pushing legislation like this? 

Sadly, no. Calls to ban social media access for young people have gained traction since Australia became the first country in the world to enforce one back in December. On 5 March, Indonesia announced a ban on social media and other “high-risk” online platforms for users under 16. A few days later, new measures came into effect in Brazil that restrict social media access for under-16s, who must now have their accounts linked to a legal guardian. Other countries like Spain and the Philippines have this year announced plans to ban social media for under-16s, with legislation currently pending to implement this.

What are the next steps?

The Children’s Wellbeing and Schools Bill returns to the House of Lords on 25 March for consideration of the new Commons amendments. The bill will only become law if both Houses agree to the final draft. 

We will continue to stand up against these proposals, not only to defend young people’s free expression rights, but also to safeguard the free flow of information that is vital to a democratic society. The issue of online safety is not solved through technology alone, and especially not through a ban; young people deserve a more intentional approach to protecting their safety and privacy online, not this lazy strategy that causes more harm than it solves.


We encourage politicians in the UK to look into what is best, not what is easy, and explore less invasive approaches to protect all people from online harms. 

Republished from the EFF’s Deeplinks blog.

Filed Under: social media, social media ban, teens, uk



AI data centers are cooking the planet, creating extreme heat islands that affect millions in cities and rural regions alike



  • AI data centers are producing extreme heat islands that extend miles beyond facilities
  • Over 340 million people experience elevated temperatures due to hyperscale AI facilities
  • Extreme temperature spikes of up to 16.4 °F have been recorded near data centers

The expansion of AI-driven data centers is having a more immediate environmental impact than previously understood, experts have warned.

A research team led by Andrea Marinoni at the University of Cambridge claims these facilities, often sprawling over a million square feet, are not only consuming massive amounts of energy but also generating extreme local heating effects, known as heat islands.



Netflix is expanding into kids’ games with a new standalone app


Netflix is launching a new standalone app for kids’ games called Netflix Playground, the company announced on Monday. Netflix Playground is available as part of a Netflix subscription, and doesn’t have any ads or in-app purchases.

Netflix says the app gives children access to an “ever-growing” library of games for kids. Netflix Playground is launching with titles featuring characters from popular kids’ shows.

The app, which is designed for children ages eight and under, is now available in the U.S., Canada, the U.K., Australia, the Philippines, and New Zealand. It will roll out worldwide on April 28. The app is available on both iOS and Android.

It can be accessed offline without a mobile or Wi-Fi connection, which the company says makes it the “perfect companion for long airplane rides or grocery trips.”


For example, one game is titled “Playtime With Peppa Pig,” and sees players “jump into Peppa’s world with a collection of playful activities.” There’s also a “Sesame Street” game where players practice matching with memory cards or coordination with connect-the-dots. Other titles include “Let’s Color,” “Storybots,” “Bad Dinosaurs,” and more.

“We’re building a world where kids can not only watch their favorite stories, they can step inside them and interact with their favorite characters,” said John Derderian, Netflix Vice President of Animation Series + Kids & Family TV, in a press release. “We’re creating a seamless destination for discovery, learning, and play. Whether it’s reuniting with Hank and the ‘Trash Truck’ crew for new adventures or making a smoothie with ‘Peppa Pig,’ watching and playing on Netflix can be the fun and easiest part of every family’s day.”

Netflix first launched games in 2021 and had ambitious plans for the space, but has since dialed them back after its titles failed to gain traction. The streaming giant has also shut down several video game studios, including Boss Fight and Spry Fox, as well as an unnamed AAA studio.


Late last year, Netflix forayed into TV gaming with a slate of new party titles meant to be played in groups, including TV versions of Tetris and Pictionary. The company has also said it will prioritize cloud gaming, but has noted that it’s still in the early stages of these plans.


Tech

Playing DVDs On The Sega Dreamcast


Although the Sega Dreamcast had many good qualities that made it beloved by the thousands of people who bought the console, one glaring omission was the lack of DVD video playback. Despite its optical drive being theoretically capable of such a feat, Sega opted for the GD-ROM disc format to avoid paying DVD licensing fees, even as the PlayStation 2 could play DVD movies. Fortunately it's possible to hack DVD capability into the Dreamcast if you aren't too fussy about the details, as [Throaty Mumbo] recently demonstrated.

For the TL;DW folk among us, there's a GitHub repository that contains the basic summary and all needed files. Suffice it to say that it is a bit of a kludge, but on the bright side it does not require one to modify the Dreamcast. Instead it uses a Pico 2 board that emulates a Sega DreamEye camera on the Dreamcast's Maple bus via the controller port. The Dreamcast then requests image data as if from said camera.

On the DVD side of things there’s a Raspberry Pi 5 that connects to an external USB DVD drive and which encodes the video for transmission via USB to the Pico 2 board. Although somewhat sketchy, it totally serves to get DVDs playing on the Dreamcast. If only Sega had not skimped on those license fees, perhaps.
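At its core, the transport described above amounts to chunking each encoded frame into fixed-size transfers that the Dreamcast pulls as if they were camera image data. The sketch below is a rough, hypothetical illustration of that framing; the payload size is illustrative, not the actual DreamEye protocol.

```python
def chunk_frame(frame: bytes, payload_size: int = 512) -> list[bytes]:
    """Split one encoded video frame into fixed-size payloads, the kind of
    framing a Maple-bus camera emulation would serve on request.
    Sizes here are illustrative, not the real DreamEye transfer size."""
    return [frame[i:i + payload_size] for i in range(0, len(frame), payload_size)]

frame = bytes(1300)  # stand-in for one encoded video frame
chunks = chunk_frame(frame)
print(len(chunks), [len(c) for c in chunks])  # 3 chunks: 512, 512, 276
```

The Pico 2 would hold the current frame's chunks and answer each Maple-bus read with the next one, which is why the Pi 5 has to keep the encode rate ahead of the console's polling.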



Tech

5 Niche Craftsman Tools You Probably Shouldn’t Waste Your Money On






We may receive a commission on purchases made from links.

The workshop has become a place with specialized gadgets for just about every task you can imagine. However, all this niche inventory often makes your workspace more complicated. It leaves you with a cluttered toolbox packed with pricey, single-purpose items that rarely get used. For many hobbyists and pros, that high-tech solution or a really specific manual tool can be tough to pass up when you’re browsing the hardware store aisles.

If you take a closer look at how useful these items actually are, you’ll see that the classic, versatile tools that have helped tradespeople for generations are often superior to modern, specialized versions. Many of these niche items aren’t good investments because they lack the adaptability of standard equipment.


By taking a close look at these pricey novelties, you can better appreciate the value of a streamlined, multipurpose tool kit. Tools like speed squares, bungee cords, and extraction sockets can handle a wide range of problems across different projects and have many uses, unlike tools designed for a single use. Even with professional marketing and shiny finishes, you’re probably better off leaving these on the shelf.


Digital Angle Gauge

The Craftsman Digital Angle Gauge is impressive, but it’s a lot more than you probably need. It’s built as a four-function tool, so it works as an angle finder, a compound cut calculator, a protractor, and a standard level. It can measure angles from 0 to 220 degrees and stays accurate to the nearest 0.1 degree. It’s made from durable aluminum, but is still pretty heavy at 2.7 pounds.

This is the kind of tool you could find at Home Depot without ever realizing it existed. Digital gauges are great if you need decimal-point precision, but you don't really need it for framing walls or building furniture. A standard speed square or a sliding T-bevel will give you plenty of accuracy for almost any project. Bringing a device with two delicate LCD screens onto a dusty, rough job site is just asking for problems.

One dropped board or a misplaced hammer swing can shatter those screens, turning your expensive tool into useless aluminum. You’re also going to get tired of dealing with batteries and electronic quirks. Even though the tool is built to be tough, an analog version will never run out of power in the middle of a measurement.


Universal Nut Cracker

The Craftsman Auto Universal Nut Cracker is meant to save you when a nut is stuck and just won’t budge. It uses a hardened steel cutter to split the hardware, working on sizes from 5/16-inch to 7/8-inch across the flats. It’s designed to break rusted or frozen nuts without messing up the threads on the bolt underneath. While that sounds pretty good, it’s often tough to use in real-world situations, like in a cramped engine bay where the frame just won’t fit.

Even though it looks small, it measures 8.35 inches long, 3.35 inches wide, and 1.34 inches high. The maker says you can’t use power tools with it, so you’re stuck using your hands in tight spots where you probably can’t get much leverage anyway. A good set of extraction sockets is usually a better pick for rounded or stuck nuts, since those work on many sizes and aren’t hard to find. Instead of fighting with this tricky gadget, you could just grab a hacksaw or a torch to get that hardware off.


Even the few people who bought it from Craftsman have given it an average rating of 1 star out of 5. Store reviews, like these bad ones from Ace Hardware, often offer valuable insight from buyers.


Auto Caliper Hanger Set

The Craftsman Auto Caliper Hanger Set is a classic example of a tool you just don’t need to pick up. This universal kit works for cars with disc brakes, and it’s supposed to hold the calipers securely while you’re doing brake work. It’s designed to keep the heavy caliper from hanging on your rubber brake lines, which could really damage them. It’s basically a heavy-duty S-hook with a tough coating, so you can reuse it.

Even with all that in mind, it’s really just a single-purpose item that’ll mostly just clutter up your toolbox, which shouldn’t have tools you never use anyway. You can get the same result with things you probably already have in your garage. A basic bungee cord from Tractor Supply, or even a piece of scrap wire from an old coat hanger works just as well. You just bend the wire into an S-shape, and you’re good to go.

This is basically just a simple piece of bent metal made in China. The set does come with a limited lifetime warranty, and the company says it’ll replace it for any reason, even without a receipt. Still, there’s really no reason to spend your money on a dedicated hanger when alternatives you probably have will work similarly.


Auto LED Inspection Mirror

The Craftsman Auto LED Inspection Mirror might seem like a smart way to check dark engine corners or behind walls, but it’s mostly a gimmick. It comes with a telescoping wand that has a rubber handle, a 2-inch mirror, and a swivel joint to help you get into tricky spots. The shaft begins at 6-1/4 inches and can stretch out to 37-1/2 inches.

The big selling point is its built-in LED light, which is meant to help you spot leaks or dropped bolts. However, that light is actually its main problem. Since it has an LED, the mirror needs a CR2032 battery to operate. These batteries last a while in a key fob, but drain relatively quickly with larger devices.


For daily work, a standard telescoping mirror along with a basic headlamp or flashlight is plenty. When you separate the light from the mirror, you actually get better lighting angles. You can bounce the light off the glass to see what you’re checking out without the glare from the built-in LED messing up the reflection. You could even just put a separate light source in the engine bay to light up the whole area instead of counting on one tiny light on a stick.


3-Jaw Oil Filter Wrench

The Craftsman 3-jaw Oil Filter Wrench is another niche item that most people can live without. It’s marketed as a universal way to handle oil changes on different vehicles, promising to make the job simpler for anyone, regardless of their skill level. The tool uses metal jaws made from heat-treated steel. It’s designed to handle filters from 2 inches to 4-1/2 inches in diameter. It’s a low-profile item that’s 1.61 inches high and about 6.85 inches long, weighing in at 0.82 pounds.

Even with those specs and a lifetime warranty, this gadget isn’t a necessary purchase. It uses a gear mechanism to grip the filter while you turn it with a 3/8-inch or 1/2-inch drive ratchet. While it technically works, it’s not as versatile as some options. You likely already have many of the basic oil change tools from a store like Harbor Freight. A pair of filter pliers can handle the same job and will fit a much wider range of filter sizes.

This wrench is a heavy chunk of metal that takes up space. Sticking to a reliable strap wrench or standard pliers will save you money and keep your collection uncomplicated. Those tools also work for basic plumbing repairs, whereas this wrench does only one thing.


Why these were picked

The hardware aisle is filled with specialized gadgets, like those in the Craftsman catalog, that solve singular problems rather than being multi-function tools. While these get marketed as revolutionary solutions to common mechanical hurdles, they can be a poor investment. These niche items tend to prioritize flashy, single-purpose engineering over the rugged adaptability that has defined the trades for generations.

Standard equipment like speed squares, extraction sockets, bungee cords, and basic strap wrenches gives you a level of durability and broad utility that specialized gear can’t match. These classic alternatives aren’t just way more affordable; they also do the same job without electronic glitches or taking up too much space. Being smart in the workshop is often about being clever, not about buying the fanciest gadgets.





Tech

If Samsung launches a Galaxy S27 Pro, the name alone won’t save it


New rumors have revived chatter about a new Galaxy Pro flagship phone, and it immediately makes sense to me, though not for flattering reasons. Samsung may be adding a fourth Galaxy S27 model next year, with a "Pro" variant expected to sit right below the top-of-the-line S27 Ultra.

This model would essentially bridge the gap between the standard and Ultra Galaxy phones with high-end features, minus the S Pen. Among the premium features it could inherit is the S26 Ultra's new Privacy Display.

All of this sounds smart on paper, but it also sounds like acceptance.

After spending time with the Galaxy S26, I have a recurring thought. This compact phone has a solid software experience, reliable cameras, and is generally easy to recommend as a base flagship. But “reliable” is no longer enough when these devices carry flagship pricing.


The regular Galaxy S phones are where the problem is

Samsung’s own S26 comparison page shows the base S26 stuck at 25W charging, while the S26+ goes to 45W, and the Ultra got upgraded to 60W. The camera story lands the same way. Samsung’s Galaxy S26 and Galaxy S26+ share the same 12MP ultrawide, 50MP wide, and 10MP telephoto setup, while the Ultra gets the far more ambitious 50MP ultrawide, 200MP main, and 50MP + 10MP telephoto mix.

So apart from the Ultra, the other two models feel like an afterthought, albeit an expensive, four-digit-priced one. This is why a Galaxy S27 Pro could make the S27 lineup feel less lethargic. Just as the Pixel 10 and iPhone 17 lines draw a clear distinction between their base and Pro models, Samsung could give the intermediate model a clear identity. Right now, the base and Plus models do just enough. The Ultra does everything.

The Galaxy S27 Pro needs to be a course correction, not a rebrand

But a Pro model only works if Samsung uses it to create a truly convincing middle. One with faster charging, stronger camera hardware, and a better reason to exist below the Ultra.

I think Samsung is definitely in need of this change. But the name alone won’t be enough. If Samsung wants the Pro phone to matter, it has to make this non-ultra Galaxy S phone feel like more than just a safe default and start making it feel worth the premium money again. Otherwise, the S27 Pro will just be another label slapped onto a lineup where all the excitement only lives at the top.


Tech

Google quietly launched an AI dictation app that works offline


Google on Monday quietly released an offline-first dictation app called “Google AI Edge Eloquent” on iOS to take on the likes of Wispr Flow, SuperWhisper, Willow, and others.

The app is free to download, and once its Gemma-based automatic speech recognition (ASR) models are downloaded, you can start dictating on your phone. In the app, you can see the live transcription, and when you hit pause, the app automatically filters out filler words like “um” and “ah” and polishes the text.
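The cleanup step the app describes, stripping fillers and tidying the remaining text, can be sketched in a few lines. The function below is a hypothetical illustration of the idea, not Google's implementation.

```python
import re

# Hypothetical sketch of filler-word cleanup, loosely modeled on what the
# app's description implies. Not Google's actual pipeline.
FILLERS = {"um", "uh", "ah", "er", "hmm"}

def polish(transcript: str) -> str:
    """Drop filler words, normalize spacing, and capitalize the result."""
    words = [w for w in transcript.split() if w.lower().strip(",.") not in FILLERS]
    text = " ".join(words)
    text = re.sub(r"\s+([,.!?])", r"\1", text)  # no space before punctuation
    return text[:1].upper() + text[1:] if text else text

print(polish("um so the uh quarterly numbers look ah solid"))
# → "So the quarterly numbers look solid"
```

A production system would also need to handle mid-sentence self-corrections (which the app claims to edit out), a much harder problem that a word blocklist alone can't solve.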

Below the transcript are options like “Key points”, “Formal”, “Short”, and “Long” to transform the text.

Image Credits: Screenshot by TechCrunch

You can also turn off the cloud mode to use local-only processing. (When cloud mode is on, the app uses cloud-based Gemini models for text cleanup.) Google AI Edge Eloquent can import certain keywords, names, and jargon from your Gmail account, if desired. Plus, you can add your own custom words to the list.

The app displays your transcription session history and lets you search through all of it. It can show you words dictated in the last session, your words-per-minute speed, and the total number of words spoken.


“Google AI Edge Eloquent is an advanced dictation app engineered to bridge the gap between natural speech and professional, ready-to-use text. Unlike standard dictation software that transcribes stumbles and filler words verbatim, Eloquent utilizes AI to capture your intended meaning. It automatically edits out ‘ums,’ ‘uhs,’ and mid-sentence self-corrections, outputting clean, accurate prose,” the company’s App Store description reads.

I was saying "Transcription." Still early days for this app. (Image Credits: Screenshot by TechCrunch)

While the app is currently only available on iOS, the App Store description references an Android version. (We have reached out to Google for more information, and will update the story if we hear back.)

According to the description, Eloquent offers “seamless Android integration,” where it can be set as users’ default keyboard for system-wide access across any text field. Plus, the app will be able to use the floating button feature, similar to the one Wispr Flow uses on Android, for easy access to transcription from anywhere.

AI-powered transcription apps are gaining popularity among users as speech-to-text models get better. With this experimental app, Google is joining the trend. If this test is successful, we could see improved transcription features across Android, too.


Tech

How MassMutual and Mass General Brigham turned AI pilot sprawl into production results


Enterprise AI programs rarely fail because of bad ideas. More often, they get stuck in ungoverned pilot mode and never reach production. At a recent VentureBeat event, technology leaders from MassMutual and Mass General Brigham explained how they avoided that trap — and what the results look like when discipline replaces sprawl.

At MassMutual, the results are concrete: 30% developer productivity gains, IT help desk resolution times reduced from 11 minutes to one, and customer service calls cut from 15 minutes to just one or two.

“We’re always starting with why do we care about this problem?” Sears Merritt, MassMutual’s head of enterprise technology and experience, said at the event. “If we solve the problem, how are we gonna know we solved it? And, how much value is associated with doing that?”

Defining metrics, establishing strong feedback loops

MassMutual, a 175-year-old company serving millions of policy owners and customers, has pushed AI into production across the business — customer support, IT, customer acquisition, underwriting, servicing, claims, and other areas.


Merritt said his team follows the scientific method, beginning with a hypothesis and testing whether it has an outcome that will tangibly drive the business forward. Some ideas are great, but they may be “intractable in the business” due to factors like lack of data or access, or regulatory constraint.

“We won’t go any further with an idea until we get crystal clear on how we’re going to measure, and how we’re going to define success.”

Ultimately, it’s up to different departments and leaders to define what quality means: Choose a metric and define the minimum level of quality before a tool is placed into the hands of teams and partners.

That starting point creates a quick feedback loop. “The things that we find slow us down is where there isn’t shared clarity on what outcome we’re trying to achieve,” which can lead to confusion and constant re-adjusting, said Merritt. “We don’t go to production until there is a business partner that says, ‘Yes, that works.’”

Advertisement

His team is strategic about evaluating emerging tools, and “extremely rigorous” when testing and measuring what “good” means. For instance, they perform trust scoring to lower hallucination rates, establish thresholds and evaluation criteria, and monitor for feature and output drift.
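Threshold-based gating of this kind reduces to a simple rule: an answer ships only if its quality score clears the bar agreed with the business partner. The sketch below is hypothetical; the scoring scale and threshold are illustrative, not MassMutual's.

```python
from dataclasses import dataclass

@dataclass
class EvalResult:
    answer: str
    trust_score: float  # 0.0-1.0, e.g. from grounding checks or a judge model

# Hypothetical minimum quality bar, the kind a business partner would sign off on
TRUST_THRESHOLD = 0.85

def gate(result: EvalResult) -> str:
    """Release answers that clear the agreed threshold; escalate the rest."""
    if result.trust_score >= TRUST_THRESHOLD:
        return result.answer
    return "ESCALATE: below trust threshold, route to a human reviewer"

print(gate(EvalResult("Your policy renews on March 1.", 0.93)))
print(gate(EvalResult("Your premium is waived forever.", 0.41)))
```

The important part is not the number itself but that it is fixed and measured before launch, which is what makes the "is it working?" question answerable.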

Merritt also operates with a no-commitment policy — meaning the company doesn’t lock itself into using a particular model. It has what he calls an “incredibly heterogeneous” technology environment combining best of breed models alongside mainframes running on COBOL. That flexibility isn’t accidental. His team built common service layers, microservices and APIs that sit between the AI layer and everything underneath — so when a better model comes along, swapping it in doesn’t mean starting over.

Because, Merritt explained, “the best of breed today might be the worst of breed tomorrow, and we don’t want to set ourselves up to fall behind.”


Weeding instead of letting a thousand flowers bloom

Mass General Brigham (MGB), for its part, took more of a spray and pray approach — at first.

Around 15,000 researchers in the not-for-profit health system have been using AI, ML, and deep learning for the last 10 to 15 years, CTO Nallan “Sri” Sriraman said at the same VB event.

But last year, he made a bold choice: His team shut down a sprawl of non-governed AI pilots. Initially, “we did follow the thousand flowers bloom [methodology], but we didn’t have a thousand flowers, we had probably a few tens of flowers trying to bloom,” he said.

Like Merritt's team at MassMutual, MGB pivoted to a more holistic view, examining why they were developing certain tools for specific departments or workflows. They questioned what capabilities they wanted and needed and what investment those required.


Sriraman’s team also spoke with their primary platform providers — Epic, Workday, ServiceNow, Microsoft — about their roadmaps. This was a “pivotal moment,” he noted, as they realized they were building in-house tools that vendors were already providing (or were planning to roll out).

As Sriraman put it: “Why are we building it ourselves? We are already on the platform. It is going to be in the workflow. Leverage it.”

That said, the marketplace is still nascent, which can make for difficult decisions. “The analogy I will give is when you ask six blind men to touch an elephant and say, what does this elephant look like?” Sriraman said. “You’re gonna get six different answers.”

There’s nothing wrong with that, he noted; it’s just that everybody is discovering and experimenting as the landscape keeps shifting.


Instead of a Wild West environment, Sriraman's team distributes Microsoft Copilot to users across the business, and uses a "small landing zone" where they can safely test more sophisticated products and control token use.

They also began "consciously embedding AI champions" across business groups. "This is kind of a reverse of letting a thousand flowers bloom, carefully planting and nourishing," Sriraman said.

Observability is another big consideration; he describes real-time dashboards that manage model drift and safety and allow IT teams to govern AI “a little more pragmatically.” Health monitoring is critical with AI systems, he noted, and his team has established principles and policies around AI use, not to mention least access privileges.
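Output-drift monitoring of the kind described can be approximated with a basic statistical check that compares a live window of some numeric output metric against a baseline. The sketch below is a crude stand-in for a production drift monitor; the metric, window sizes, and z-limit are all hypothetical.

```python
from statistics import mean, stdev

def drift_alert(baseline: list[float], live: list[float], z_limit: float = 3.0) -> bool:
    """Flag drift when the live window's mean falls outside z_limit standard
    errors of the baseline mean. Illustrative only, not MGB's dashboard logic."""
    base_mu, base_sigma = mean(baseline), stdev(baseline)
    stderr = base_sigma / (len(live) ** 0.5)
    return abs(mean(live) - base_mu) > z_limit * stderr

baseline = [0.82, 0.79, 0.81, 0.80, 0.83, 0.78, 0.80, 0.82]  # e.g. historical model confidence
print(drift_alert(baseline, [0.81, 0.80, 0.79, 0.82]))  # stable window → False
print(drift_alert(baseline, [0.55, 0.52, 0.58, 0.50]))  # shifted window → True
```

A real dashboard would track several such metrics continuously and tie an alert to the "big red button" escalation path Sriraman describes.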

In clinical settings, the guardrails are absolute: AI systems never issue the final decision. “There’s always going to be a doctor or a physician assistant in the loop to close the decision,” Sriraman said. He cited radiology report generation as one area where AI is used heavily, but where a radiologist always signs off.


Sriraman was clear: “Thou shall not do this: Don’t show PHI [protected health information] in Perplexity. As simple as that, right?”

And, importantly, there must be safety mechanisms in place. “We need a big red button, kill it,” Sriraman emphasized. “We don’t put anything in the operational setting without that.”

Ultimately, while agentic AI is a transformative technology, the enterprise approach to it doesn’t have to be dramatically different. “There is nothing new about this,” Sriraman said. “You can replace the word BPM [business process management] from the ’90s and 2000s with AI. The same concepts apply.”


Copyright © 2025