The mantra of the modern tech industry was arguably coined by Facebook (before it became Meta): “move fast and break things.”
But as enterprise infrastructure has shifted into a dizzying maze of hybrid clouds, microservices, and ephemeral compute clusters, the “breaking” part has become a structural tax that many organizations can no longer afford to pay. Today, two-year-old startup NeuBird AI is launching a full-scale offensive against this “chaos tax,” announcing a $19.3 million funding round alongside the release of its Falcon autonomous production operations agent.
The launch isn’t just a product update; it is a philosophical pivot. For years, the industry has focused on “Incident Response”—making the fire trucks faster and the hoses bigger. NeuBird AI is arguing that the only sustainable path forward is “Incident Avoidance”.
As Venkat Ramakrishnan, President and COO of NeuBird AI, put it in a recent interview: “Incident management is so old school. Incident resolution is so old school. Incident avoidance is what is going to be enabled by AI”.
By grounding AI in real-time enterprise context rather than in large language model reasoning alone, the company aims to move site reliability engineering and DevOps teams from a reactive posture to a predictive one.
The AI divide: a reality check on automation
Accompanying the launch is NeuBird AI’s 2026 State of Production Reliability and AI Adoption Report, a survey of over 1,000 professionals that reveals a massive disconnect between the boardroom and the server room.
While 74% of C-suite executives believe their organizations are actively using AI to manage incidents, only 39% of the practitioners—the engineers actually on-call at 2:00 AM—agree.
This 35-point “AI Divide” suggests that while leadership is writing checks for AI platforms, the technology is often failing to reach the frontline.
For engineers, the reality remains manual and grueling: the study found that engineering teams spend an average of 40% of their time on incident management rather than building new products.
Gou Rao, co-founder and CEO of NeuBird AI, told VentureBeat that this is a persistent operational reality: “Over the past 18 months that we have been in production, this is not a marketing slide. We have concretely been able to demonstrate a massive reduction in time to incident response and resolution”.
The consequences of this “toil” are more than just lost productivity. Alert fatigue has transitioned from a morale issue to a direct reliability risk.
According to the report, 83% of organizations have teams that ignore or dismiss alerts occasionally, and 44% of companies experienced an outage in the past year tied directly to a suppressed or ignored alert. In many cases, the systems are so noisy that customers discover failures before the monitoring tools do.
Introducing NeuBird AI Falcon
NeuBird AI’s answer to this systemic failure is the Falcon engine. While the company’s previous iteration, Hawkeye, focused on autonomous resolution, Falcon extends that capability into predictive intelligence. “When we launched NeuBird AI in 2023, our first version of the agent was called Hawkeye,” Rao explains. “What we’re announcing next week at HumanX is our next-generation version of the agent, codenamed Falcon. Falcon is easily three times faster than Hawkeye and is averaging around 92% in confidence scores”.
This level of accuracy allows engineers to trust the agent’s output at face value. Falcon represents a significant leap over previous generative AI applications in the space, particularly in its ability to forecast failure. “Falcon is really good at preventive prediction, so it can tell you what can go wrong,” Rao says. “It’s pretty accurate on a 72-hour window, even better at 48 hours, and by 24 hours it gets really, really accurate”.
One of the standout features of the new release is the Advanced Context Map. Unlike static dashboards, this is a real-time view of infrastructure dependencies and service health. It allows teams to visualize the “blast radius” of an issue as it propagates across an environment, helping engineers understand not just what is broken, but why it is failing in the context of its neighbors.
NeuBird AI Advanced Context Map, zoomed in. Credit: NeuBird AI
‘Minority Report’ for incident management
While many AI tools favor flashy web interfaces, NeuBird AI is leaning into the developer’s native habitat with NeuBird AI Desktop. This allows engineers to invoke the production ops agent directly from a command-line interface to explore root causes and system dependencies.
“Falcon has a desktop mode which allows it to interact with a developer’s local tools,” Rao noted. “We’re getting a lot more traction from a hands-on developer audience, especially as people go to Claude Desktop and Cursor. They’re completing the loop by using production agents talking to their coding agents”.
NeuBird Desktop CLI view. Credit: NeuBird AI
This integration enables a “multi-agent” workflow where an engineer can use NeuBird AI’s agent to diagnose a root cause in production and then hand off that diagnosis to a coding agent like Claude Code to implement the fix.
During a live demo, Rao showcased how the agent could be set to “Sentinel Mode,” constantly sweeping a cluster for risks. If it detects an anomaly—such as a projected 5% spike in AWS costs or a misconfigured Kubernetes pod—it can flag the specific engineer on-call who has the domain expertise to fix it.
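Mechanically, a sweep like this boils down to recurring threshold checks over cluster telemetry. NeuBird has not published Falcon's internals, so the sketch below is purely illustrative, with invented function and field names:

```python
# Illustrative only: a minimal threshold check of the kind a
# "Sentinel Mode" sweep might run. All names here are invented;
# NeuBird has not published Falcon's internals.

def flag_cost_spike(current_spend: float, projected_spend: float,
                    threshold_pct: float = 5.0):
    """Return an alert payload if projected spend rises past the threshold."""
    change_pct = (projected_spend - current_spend) / current_spend * 100
    if change_pct >= threshold_pct:
        return {"risk": "cost_spike", "change_pct": round(change_pct, 1)}
    return None  # below threshold: nothing to route to the on-call engineer

print(flag_cost_spike(1000.0, 1060.0))  # a 6% projected rise trips the flag
print(flag_cost_spike(1000.0, 1030.0))  # a 3% rise does not
```

The interesting part of the product is not the check itself but the routing: pairing a flagged risk with the on-call engineer who has the matching domain expertise.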
“This is like ‘Minority Report for Incident Management’,” one financial services executive reportedly told the team after a demo.
Context engineering: a gateway for security
A primary concern for enterprises deploying AI is security—ensuring large language models don’t go “crazy” or exfiltrate sensitive data. NeuBird AI addresses this through a proprietary approach to “context engineering”.
“The way we implemented our agent is that the large language models themselves are never actually touching the data directly,” Rao explains. “We become the gateway for how the context can be accessed”. This means the model is the reasoning engine, but NeuBird AI is the middleman that wraps the data.
Furthermore, the company has implemented strict guardrails on what the agent can actually execute. “We’ve created a language that confines and restricts the agent from what it can do,” says Rao. “If it comes up with something anomalous, or something we don’t know, it won’t run. We won’t do it”.
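The two safeguards Rao describes, a data gateway and an execution allowlist, are recognizable patterns in agent design. Below is a minimal, purely illustrative sketch; NeuBird's actual implementation is proprietary, and every field and action name here is hypothetical:

```python
# Illustrative sketch of the two safeguards described above:
# (1) a gateway that redacts data before the model ever sees it, and
# (2) an allowlist that refuses any action outside a known-safe set.
# All names are hypothetical; NeuBird's implementation is proprietary.

SENSITIVE_FIELDS = {"api_key", "password", "customer_email"}
ALLOWED_ACTIONS = {"restart_pod", "scale_deployment", "collect_logs"}

def gateway_context(record: dict) -> dict:
    """Strip sensitive fields so the reasoning model never touches raw data."""
    return {k: v for k, v in record.items() if k not in SENSITIVE_FIELDS}

def execute(action: str, runner) -> str:
    """Refuse any proposed action that is not explicitly allowed."""
    if action not in ALLOWED_ACTIONS:
        return f"refused: '{action}' is not in the allowlist"
    return runner(action)
```

In this pattern the model reasons over redacted context and proposes actions, but the platform, not the model, decides what data is visible and which commands ever run.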
This architectural choice allows NeuBird AI to remain model-agnostic. If a newer model from Anthropic or Google outperforms the current reasoning engine, NeuBird AI can simply switch it out without requiring the customer to change their platform. “Customers don’t want to be tied to a specific way of reasoning,” Rao asserts. “They want to be tied to a platform from which they can get the value of an agentic system”.
Displacing the “army”: reducing reliance on expensive observability
One of the most radical claims NeuBird AI makes is that agentic systems can actually reduce the amount of data enterprises need to store in the first place. Currently, teams rely on massive storage platforms with complex query languages.
“People use very complex observability tools like Datadog, Dynatrace, and Sysdig,” Rao says. “This is the norm today, which is why it takes an army of people to solve a problem. What we’ve been able to demonstrate with agentic systems is that you don’t need to store all that data in the first place”. Because the agent can reason across raw data sources, it can identify which signals are junk and which are critical. This shift, Rao argues, “reduces human toil and effort while simultaneously reducing your reliance on these insanely expensive observability tools”.
The practical impact of this “incident avoidance” was recently demonstrated at Deep Health. Rao recounts how their agent detected a systemic issue that was invisible to traditional tools: “Our agent was able to go in and prevent an issue from happening which would have caused this company, Deep Health, a major production outage. The customer is completely beside themselves and happy about what it could do”.
FalconClaw: operationalizing ‘tribal knowledge’
One of the most persistent problems in IT operations is the loss of “tribal knowledge”—the hard-won expertise of senior engineers that exists only in their heads. NeuBird AI is attempting to solve this with FalconClaw, a curated, enterprise-grade skills hub compatible with the OpenClaw ecosystem.
FalconClaw allows teams to capture best practices and resolution steps as “validated and compliant skills”. The tech preview launched today with 15 initial skills that work natively with NeuBird AI’s toolchain.
NeuBird FalconClaw Skills view. Credit: NeuBird AI
According to Francois Martel, Field CTO at NeuBird AI, this turns hard-won expertise into a reusable asset that the AI can use automatically.
It’s an attempt to standardize how agents interact with infrastructure, moving away from proprietary “black box” systems toward a multi-agent world where different AI tools can share a common set of operational abilities.
Scaling the moat: funding and leadership
The $19.3 million round was led by Xora Innovation, a Temasek-backed firm, with participation from Mayfield, M12, StepStone Group, and Prosperity7 Ventures. This brings NeuBird AI’s total funding to approximately $64 million.
The investor interest is fueled largely by the pedigree of the founding team. Gou Rao and Vinod Jayaraman previously co-founded Portworx, which was acquired by Pure Storage, and Ocarina Networks, acquired by Dell. They have recently bolstered their leadership with Venkat Ramakrishnan, another Pure Storage veteran, as President and COO.
For investors like Phil Inagaki of Xora, the value lies in NeuBird AI’s “best-in-class results across accuracy, speed and token consumption”. As cloud costs continue to spiral, the ability of an AI agent to not only fix bugs but also optimize infrastructure capacity is becoming a “must-have” rather than a “nice-to-have”. NeuBird AI claims its agent can save enterprise teams more than 200 engineering hours per month.
The path to ‘self-healing’ infrastructure
As the State of Production Reliability report notes, current incident management practices are “no longer sustainable”. With 61% of organizations estimating that a single hour of downtime costs $50,000 or more, the financial stakes of staying in a reactive loop are enormous.
NeuBird AI’s launch of Falcon and FalconClaw marks a definitive attempt to break that loop. By focusing on prevention and the “context engineering” required to make AI trustworthy for enterprise production, the company is positioning itself as the critical intelligence layer for the modern stack.
While the “AI Divide” between executives and practitioners remains a significant hurdle for the industry, NeuBird AI is betting that as engineers see the value of a CLI-driven, 92%-accurate agent that can “see around corners,” the skepticism will fade. For the site reliability engineers currently drowning in a flood of non-actionable alerts, the arrival of a reliable AI teammate couldn’t come soon enough.
NeuBird AI Falcon is available starting today, with organizations able to sign up for a free trial at neubird.ai.
The UK is moving forward with its efforts to ban social media for young people. Ahead of this week’s House of Lords debate on the topic, we’re getting you situated with a primer on what’s been happening and what it all means.
What was the last vote about?
On 9 March, the House of Commons discussed amendments tabled by the House of Lords in the government’s flagship legislation, the Children’s Wellbeing and Schools Bill.
The House of Lords previously tabled an amendment to “prevent children under the age of 16 from becoming or being users” of “all regulated user-to-user services,” to be implemented by “highly-effective age assurance measures,” which effectively banned under-16s from social media. When this proposal came before the House of Commons, MPs defeated it by 307 votes to 173.
Instead, the Commons proposed its own amendment: enabling the Secretary of State to introduce provisions “requiring providers of specified internet services” to prevent access by children, here meaning under-18s rather than under-16s, to specified internet services or to specified features, and to otherwise restrict children’s access to services that ministers specify.
Who does this give powers to?
The Commons proposal redirects power from the UK Parliament and the UK’s independent telecom regulator Ofcom to the Secretary of State for Science, Innovation and Technology, currently Liz Kendall, who will be able to restrict internet access for young people and determine what content is considered harmful…just because she can. The amendment also empowers the Secretary of State to limit VPN use for under 18s, as well as restrict access to addictive features and change the age of digital consent in the country; for example, preventing under-18s from playing games online after a certain time.
Why is this a problem?
This process is devoid of checks or accountability mechanisms, as ministers will not be required to demonstrate specific harms to young people; it essentially unravels years of effort by Ofcom to assess online services according to their risks. And given the moment the UK is currently in, with the government refusing to protect trans and LGBTQ+ communities and fanning hostile and racist discourse, it is not unlikely that we’ll see ministers start restricting content they are ideologically or morally opposed to, rather than content that is demonstrably harmful, as established by evidence and assessed pursuant to established human rights principles.
We know from other jurisdictions like the United States that legislation seeking to protect young people typically sweeps up a slew of broadly defined topics. Some laws block access to websites that contain “sexual material harmful to minors,” which has historically meant explicit sexual content. But some states are now defining the term so broadly that it could encompass material like sex education; others simply list a variety of vaguely defined harms. In either instance, this bill would enable ministers to target LGBTQ+ content online by pushing it behind an under-18s age gate, a risk that is especially clear given what we already know about platform content policies.
How will this impact young people?
The internet is an essential resource for young people (and adults) to access information, explore community, and find themselves. Beyond being spaces where people can share funny videos and engage with enjoyable content, social media enables young people to engage with the world in a way that transcends their in-person realm, as well as find information they may not feel safe to access offline, such as about family abuse or their sexuality. In severing this connection to people and information by banning social media, politicians are forcing millions of young people into a dark and censored world.
How did each party vote?
The initial push to ban under-16s from social media came from the Conservative Party, who have since accused the UK’s Prime Minister Keir Starmer of “dither and delay” for not committing to the ban. The Liberal Democrats have also called this “not good enough.” The Labour Party itself is split, with 107 Labour Party MPs abstaining in the vote on the House of Lords amendment.
But we know that the issue of young people’s online safety is a polarizing topic that politicians have—and will continue to—weaponize for public support, regardless of their actual intentions. This is why we will continue to urge policymakers and regulators to protect people’s rights and freedoms online at all moments, and not just take the easy route for a quick boost in the polls.
How does this bill connect to the Online Safety Act?
The draft Children’s Wellbeing and Schools Bill that came from the Lords provided that any regulation pertaining to the well-being of young people on social media “must be treated as an enforceable requirement” under the Online Safety Act. The Commons amendment, however, starts out by inserting a new clause that amends the Online Safety Act.
For more than six years, we’ve been calling on the UK government to pass better legislation around regulating the internet, and when the Online Safety Act passed we continued to advocate for the rights of people on the internet—including young people—as Ofcom implemented the legislation. This has been a protracted effort by civil society groups, technologists, tech companies, and others participating in Ofcom’s consultation process and urging the regulator to protect internet users in the UK.
The MPs’ amendment essentially rips this up. Technology Secretary Liz Kendall recently said that ministers intended to go further than the existing Online Safety Act because it was “never meant to be the end point, and we know parents still have serious concerns. That is why I am prepared to take further action.” But when this further action is empowering herself to make arbitrary decisions on content and access, and banning under-18s from social media, it causes much more harm than it solves.
Is the UK alone in pushing legislation like this?
Sadly, no. Calls to ban social media access for young people have gained traction since Australia became the first country in the world to enforce one back in December. On 5 March, Indonesia announced a ban on social media and other “high-risk” online platforms for users under 16. A few days later, new measures came into effect in Brazil that restrict social media access for under-16s, who must now have their accounts linked to a legal guardian. Other countries like Spain and the Philippines have this year announced plans to ban social media for under-16s, with legislation currently pending to implement this.
What are the next steps?
The Children’s Wellbeing and Schools Bill returns to the House of Lords on 25 March for consideration of the new Commons amendments. The bill will only become law if both Houses agree to the final draft.
We will continue to stand up against these proposals, not only to defend young people’s free expression rights, but also to safeguard the free flow of information that is vital to a democratic society. The issue of online safety is not solved through technology alone, especially not through a ban, and young people deserve a more intentional approach to protecting their safety and privacy online, not this lazy strategy that causes more harm than it solves.
We encourage politicians in the UK to look into what is best, not what is easy, and explore less invasive approaches to protect all people from online harms.
AI data centers are producing extreme heat islands that extend miles beyond facilities
Over 340 million people experience elevated temperatures due to hyperscale AI facilities
Extreme temperature spikes of up to 16.4 °F have been recorded near data centers
The expansion of AI-driven data centers is having a more immediate environmental impact than previously understood, experts have warned.
A research team led by Andrea Marinoni at the University of Cambridge claims these facilities, often sprawling over a million square feet, are not only consuming massive amounts of energy but also generate extreme local heating effects, known as heat islands.
Marinoni claims, “there are still big gaps in our understanding of the impacts of data centers,” emphasizing these effects have been largely overlooked.
Measuring heat impacts across global AI data centers
The team analysed temperature data from more than 6,000 hyperscale facilities over the past two decades, carefully accounting for global warming trends, seasonal changes, and other local influences.
The study found surface temperatures near data centers increased on average by 3.6 °F after operations began, with extreme cases recording rises of up to 16.4 °F.
These heat increases extend far beyond the immediate facility, sometimes affecting areas up to 6.2 miles away.
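For readers who work in metric, the article's figures convert cleanly: temperature differences scale by 5/9 with no 32-degree offset, which suggests the underlying study may have reported 2 °C, roughly 9 °C, and 10 km. A quick sanity check in Python:

```python
# Convert the article's Fahrenheit *differences* to Celsius.
# Temperature deltas use only the 5/9 scale factor (no 32-degree offset).

def delta_f_to_c(delta_f: float) -> float:
    """Convert a temperature difference from degrees F to degrees C."""
    return delta_f * 5 / 9

MILES_TO_KM = 1.609344

print(round(delta_f_to_c(3.6), 1))    # average rise near facilities
print(round(delta_f_to_c(16.4), 1))   # extreme case
print(round(6.2 * MILES_TO_KM, 1))    # reach of the heat effect, in km
```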
When the affected zones were mapped against population data, over 340 million people across North America, Europe, and Asia were found to be experiencing elevated local temperatures.
Observations in Mexico’s Bajio region and Aragon, Spain, revealed temperature increases that were inconsistent with those in the surrounding provinces.
This suggests that the heat effects were directly attributable to the data centers themselves rather than other environmental factors.
“The planned scale-up of data centers could have dramatic impacts on society,” Marinoni said.
Experts express concern over the rapid pace of AI infrastructure development, which may be outpacing sustainable planning.
“The ‘rush for AI-gold’ appears to be overriding good practice and systemic thinking…and is developing far more rapidly than any broader, more sustainable systems,” said Deborah Andrews, emeritus professor at London South Bank University.
However, experts argue that further research is required to confirm these findings, particularly given the unusually high local temperature spikes reported.
The long-term consequences of energy-intensive AI operations warrant greater attention, as climate discussions have historically focused on emissions rather than direct heat effects.
Rethinking data center design and operational strategies could enable continued AI expansion while minimizing additional heat stress on neighboring communities and ecosystems.
In a world already experiencing intensified extreme weather events, the rapid proliferation of ultra-hot data centers may amplify local and regional environmental challenges.
Energy emissions remain a primary concern, but the localized warming caused by hyperscale facilities adds a new dimension of environmental risk that needs evaluation.
Netflix is launching a new standalone app for kids’ games called Netflix Playground, the company announced on Monday. Netflix Playground is available as part of a Netflix subscription, and doesn’t have any ads or in-app purchases.
Netflix says the app gives children access to an “ever-growing” library of games for kids. Netflix Playground is launching with titles featuring characters from popular kids’ shows.
The app, which is designed for children ages eight and under, is now available in the U.S., Canada, the U.K., Australia, the Philippines, and New Zealand. It will roll out worldwide on April 28. The app is available on both iOS and Android.
It can be accessed offline without a mobile or Wi-Fi connection, which the company says makes it the “perfect companion for long airplane rides or grocery trips.”
Image Credits: Netflix
For example, one game is titled “Playtime With Peppa Pig,” and sees players “jump into Peppa’s world with a collection of playful activities.” There’s also a “Sesame Street” game where players practice matching with memory cards or coordination with connect-the-dots. Other titles include “Let’s Color,” “Storybots,” “Bad Dinosaurs,” and more.
“We’re building a world where kids can not only watch their favorite stories, they can step inside them and interact with their favorite characters,” said John Derderian, Netflix Vice President of Animation Series + Kids & Family TV, in a press release. “We’re creating a seamless destination for discovery, learning, and play. Whether it’s reuniting with Hank and the ‘Trash Truck’ crew for new adventures or making a smoothie with ‘Peppa Pig,’ watching and playing on Netflix can be the fun and easiest part of every family’s day.”
Netflix first launched games in 2021 and had ambitious plans for the space, but has since dialed them back after its titles failed to gain traction. The streaming giant has also shut down several video game studios, including Boss Fight and Spry Fox, as well as an in-house AAA studio.
Late last year, Netflix forayed into TV gaming with a slate of new party titles meant to be played in groups, including TV versions of Tetris and Pictionary. The company has also said it will prioritize cloud gaming, but has noted that it’s still in the early stages of these plans.
Although the Sega Dreamcast had many good qualities that made it beloved by the millions of people who bought the console, one glaring omission was the lack of DVD video capabilities. Despite its optical drive being theoretically capable of such a feat, Sega had opted to use the GD-ROM disc format to avoid coughing up DVD licensing fees, while the PlayStation 2 could play DVD movies. Fortunately it’s possible to hack DVD capability into the Dreamcast if you aren’t too fussy about the details, as [Throaty Mumbo] recently demonstrated.
For the TL;DW folks among us, there’s a GitHub repository that contains the basic summary and all needed files. Suffice it to say that it is a bit of a kludge, but on the bright side it does not require one to modify the Dreamcast. Instead it uses a Pico 2 board that emulates a Sega DreamEye camera on the Dreamcast’s Maple bus via the controller port. The Dreamcast then requests image data as if from said camera.
On the DVD side of things there’s a Raspberry Pi 5 that connects to an external USB DVD drive and which encodes the video for transmission via USB to the Pico 2 board. Although somewhat sketchy, it totally serves to get DVDs playing on the Dreamcast. If only Sega had not skimped on those license fees, perhaps.
The workshop has become a place with specialized gadgets for just about every task you can imagine. However, all this niche inventory often makes your workspace more complicated. It leaves you with a cluttered toolbox packed with pricey, single-purpose items that rarely get used. For many hobbyists and pros, that high-tech solution or a really specific manual tool can be tough to pass up when you’re browsing the hardware store aisles.
If you take a closer look at how useful these items actually are, you’ll see that the classic, versatile tools that have helped tradespeople for generations are often superior to modern, specialized versions. Many of these niche items aren’t good investments because they lack the adaptability of standard equipment.
By taking a close look at these pricey novelties, you can better appreciate the value of a streamlined, multipurpose tool kit. Tools like speed squares, bungee cords, and extraction sockets can handle a wide range of problems across different projects and have many uses, unlike tools designed for a single use. Even with professional marketing and shiny finishes, you’re probably better off leaving these on the shelf.
Digital Angle Gauge
The Craftsman Digital Angle Gauge is impressive, but it’s a lot more than you probably need. It’s built as a four-function tool, so it works as an angle finder, a compound cut calculator, a protractor, and a standard level. It can measure angles from 0 to 220 degrees and stays accurate to the nearest 0.1 degree. It’s made from durable aluminum, but is still pretty heavy at 2.7 pounds.
This is the kind of tool you could find at Home Depot without ever realizing it existed. Digital gauges are great if you need decimal-point precision, but you don’t really need that for framing walls or building furniture. A standard speed square or a sliding T-bevel will give you plenty of accuracy for almost any project. Bringing a device with two delicate LCD screens onto a dusty, rough job site is just asking for problems.
One dropped board or a misplaced hammer swing can shatter those screens, turning your expensive tool into useless aluminum. You’re also going to get tired of dealing with batteries and electronic quirks. Even though the tool is built to be tough, an analog version will never run out of power in the middle of a measurement.
Universal Nut Cracker
The Craftsman Auto Universal Nut Cracker is meant to save you when a nut is stuck and just won’t budge. It uses a hardened steel cutter to split the hardware, working on sizes from 5/16-inch to 7/8-inch across the flats. It’s designed to break rusted or frozen nuts without messing up the threads on the bolt underneath. While that sounds pretty good, it’s often tough to use in real-world situations, like in a cramped engine bay where the frame just won’t fit.
Even though it looks small, it measures 8.35 inches long, 3.35 inches wide, and 1.34 inches high. The maker says you can’t use power tools with it, so you’re stuck using your hands in tight spots where you probably can’t get much leverage anyway. A good set of extraction sockets is usually a better pick for rounded or stuck nuts, since those work on many sizes and aren’t hard to find. Instead of fighting with this tricky gadget, you could just grab a hacksaw or a torch to get that hardware off.
Even the few people who bought it from Craftsman have given it an average rating of 1 out of 5 stars. Store reviews, like the negative ones on Ace Hardware, often offer valuable insight from buyers.
Auto Caliper Hanger Set
The Craftsman Auto Caliper Hanger Set is a classic example of a tool you just don’t need to pick up. This universal kit works for cars with disc brakes, and it’s supposed to hold the calipers securely while you’re doing brake work. It’s designed to keep the heavy caliper from hanging on your rubber brake lines, which could really damage them. It’s basically a heavy-duty S-hook with a tough coating, so you can reuse it.
Even with all that in mind, it’s really just a single-purpose item that’ll mostly just clutter up your toolbox, which shouldn’t have tools you never use anyway. You can get the same result with things you probably already have in your garage. A basic bungee cord from Tractor Supply, or even a piece of scrap wire from an old coat hanger works just as well. You just bend the wire into an S-shape, and you’re good to go.
This is basically just a simple piece of bent metal made in China. The set does come with a limited lifetime warranty, and the company says it’ll replace it for any reason, even without a receipt. Still, there’s really no reason to spend your money on a dedicated hanger when alternatives you probably have will work similarly.
Auto LED Inspection Mirror
The Craftsman Auto LED Inspection Mirror might seem like a smart way to check dark engine corners or behind walls, but it’s mostly a gimmick. It comes with a telescoping wand that has a rubber handle, a 2-inch mirror, and a swivel joint to help you get into tricky spots. The shaft begins at 6-1/4 inches and can stretch out to 37-1/2 inches.
The big selling point is its built-in LED light, which is meant to help you spot leaks or dropped bolts. However, that light is actually its main problem. Since it has an LED, the mirror needs a CR2032 battery to operate. These batteries last a while in a key fob, but drain relatively quickly in a more demanding device like this.
For daily work, a standard telescoping mirror along with a basic headlamp or flashlight is plenty. When you separate the light from the mirror, you actually get better lighting angles. You can bounce the light off the glass to see what you’re checking out without the glare from the built-in LED messing up the reflection. You could even just put a separate light source in the engine bay to light up the whole area instead of counting on one tiny light on a stick.
3-Jaw Oil Filter Wrench
The Craftsman 3-jaw Oil Filter Wrench is another niche item that most people can live without. It’s marketed as a universal way to handle oil changes on different vehicles, promising to make the job simpler for anyone, regardless of their skill level. The tool uses metal jaws made from heat-treated steel. It’s designed to handle filters from 2 inches to 4-1/2 inches in diameter. It’s a low-profile item that’s 1.61 inches high and about 6.85 inches long, weighing in at 0.82 pounds.
Even with those specs and a lifetime warranty, this gadget isn’t a necessary purchase. It uses a gear mechanism to grip the filter while you turn it with a 3/8-inch or 1/2-inch drive ratchet. While it technically works, it’s not as versatile as some options. You likely already have many of the basic oil change tools from a store like Harbor Freight. A pair of filter pliers can handle the same job and will fit a much wider range of filter sizes.
This wrench is a heavy chunk of metal that takes up space. Sticking to a reliable strap wrench or standard pliers will save you money and keep your collection uncomplicated. Those tools also work for basic plumbing repairs, whereas this wrench does only one thing.
Advertisement
Why these were picked
YAKOBCHUK VIACHESLAV/Shutterstock
The hardware aisle is filled with specialized gadgets, like those in the Craftsman catalog, that solve singular problems rather than being multi-function tools. While these get marketed as revolutionary solutions to common mechanical hurdles, they can be a poor investment. These niche items tend to prioritize flashy, single-purpose engineering over the rugged adaptability that has defined the trades for generations.
Standard equipment like speed squares, extraction sockets, bungee cords, and basic strap wrenches gives you a level of durability and broad utility that specialized gear can’t match. These classic alternatives aren’t just way more affordable; they also do the same job without electronic glitches or taking up too much space. Being smart in the workshop is often about being clever, not about buying the fanciest gadgets.
This model essentially bridges the gap between the standard and Ultra Galaxy phones with high-end features, minus the S Pen. Some of these premium features could include the S26 Ultra’s new Privacy Display feature.
All of this sounds smart on paper, but it also sounds like acceptance.
After spending time with the Galaxy S26, I have a recurring thought. This compact phone has a solid software experience, reliable cameras, and is generally easy to recommend as a base flagship. But “reliable” is no longer enough when these devices carry flagship pricing.
Advertisement
The Samsung Galaxy S26 Ultra, angled for its Privacy Display. Tom Bedford / Digital Trends
The regular Galaxy S phones are where the problem is
Samsung’s own S26 comparison page shows the base S26 stuck at 25W charging, while the S26+ goes to 45W, and the Ultra got upgraded to 60W. The camera story lands the same way. Samsung’s Galaxy S26 and Galaxy S26+ share the same 12MP ultrawide, 50MP wide, and 10MP telephoto setup, while the Ultra gets the far more ambitious 50MP ultrawide, 200MP main, and 50MP + 10MP telephoto mix.
So apart from the Ultra, the other two models feel like an afterthought, and an expensive, four-digit-flagship afterthought at that. This is why a Galaxy S27 Pro could inject some energy into the S27 lineup. Just as Google separates the Pixel 10 from the Pixel 10 Pro, and Apple the iPhone 17 from the iPhone 17 Pro, Samsung could give its intermediate model a clear identity. Right now, the base and Plus models do just enough. The Ultra does everything.
Digital Trends
The Galaxy S27 Pro needs to be a course correction, not a rebrand
But a Pro model only works if Samsung uses it to create a truly convincing middle: one with faster charging, stronger camera hardware, and a better reason to exist below the Ultra.
I think Samsung definitely needs this change. But the name alone won't be enough. If Samsung wants the Pro phone to matter, it has to make the non-Ultra Galaxy S feel like more than a safe default, and make it feel worth the premium money again. Otherwise, the S27 Pro will just be another label slapped onto a lineup where all the excitement lives at the top.
The app is free to download, and once its Gemma-based automatic speech recognition (ASR) models are downloaded, you can start dictating on your phone. In the app, you can see the live transcription, and when you hit pause, the app automatically filters out filler words like “um” and “ah” and polishes the text.
Below the transcript are options like “Key points”, “Formal”, “Short”, and “Long” to transform the text.
Image Credits: Screenshot by TechCrunch
You can also turn off the cloud mode to use local-only processing. (When cloud mode is on, the app uses cloud-based Gemini models for text cleanup.) The Google AI Edge Eloquent can import certain keywords, names, and jargon from your Gmail account, if desired. Plus, you can add your own custom words to the list.
The app also keeps a history of your transcription sessions and lets you search through all of them. It can show you the words dictated in the last session, your words-per-minute speed, and the total number of words spoken.
Advertisement
“Google AI Edge Eloquent is an advanced dictation app engineered to bridge the gap between natural speech and professional, ready-to-use text. Unlike standard dictation software that transcribes stumbles and filler words verbatim, Eloquent utilizes AI to capture your intended meaning. It automatically edits out ‘ums,’ ‘uhs,’ and mid-sentence self-corrections, outputting clean, accurate prose,” the company’s App Store description reads.
I was saying “Transcription.” Still early days for this app. Image Credits: Screenshot by TechCrunch
While the app is currently only available on iOS, the App Store description references an Android version. (We have reached out to Google for more information, and will update the story if we hear back.)
According to the description, Eloquent offers “seamless Android integration,” where it can be set as users’ default keyboard for system-wide access across any text field. Plus, the app will be able to use the floating button feature, similar to the one Wispr Flow uses on Android, for easy access to transcription from anywhere.
AI-powered transcription apps are gaining popularity among users as speech-to-text models get better. With this experimental app, Google is joining the trend. If this test is successful, we could see improved transcription features across Android, too.
Enterprise AI programs rarely fail because of bad ideas. More often, they get stuck in ungoverned pilot mode and never reach production. At a recent VentureBeat event, technology leaders from MassMutual and Mass General Brigham explained how they avoided that trap — and what the results look like when discipline replaces sprawl.
At MassMutual, the results are concrete: 30% developer productivity gains, IT help desk resolution times reduced from 11 minutes to one, and customer service calls cut from 15 minutes to just one or two.
“We’re always starting with why do we care about this problem?” Sears Merritt, MassMutual’s head of enterprise technology and experience, said at the event. “If we solve the problem, how are we gonna know we solved it? And, how much value is associated with doing that?”
MassMutual, a 175-year-old company serving millions of policy owners and customers, has pushed AI into production across the business — customer support, IT, customer acquisition, underwriting, servicing, claims, and other areas.
Advertisement
Merritt said his team follows the scientific method, beginning with a hypothesis and testing whether it has an outcome that will tangibly drive the business forward. Some ideas are great, but they may be “intractable in the business” due to factors like lack of data or access, or regulatory constraints.
“We won’t go any further with an idea until we get crystal clear on how we’re going to measure, and how we’re going to define success.”
Ultimately, it’s up to different departments and leaders to define what quality means: Choose a metric and define the minimum level of quality before a tool is placed into the hands of teams and partners.
That starting point creates a quick feedback loop. “The things that we find slow us down is where there isn’t shared clarity on what outcome we’re trying to achieve,” which can lead to confusion and constant re-adjusting, said Merritt. “We don’t go to production until there is a business partner that says, ‘Yes, that works.’”
Advertisement
His team is strategic about evaluating emerging tools, and “extremely rigorous” when testing and measuring what “good” means. For instance, they perform trust scoring to lower hallucination rates, establish thresholds and evaluation criteria, and monitor for feature and output drift.
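The gating described above, a defined metric with a minimum quality floor that a tool must clear before it ships, can be sketched in a few lines. This is a minimal illustration of the pattern, not MassMutual's actual tooling; the metric names and threshold values are assumptions invented for the example.

```python
# Sketch of a threshold-based quality gate: a release candidate ships only if
# every tracked metric meets its agreed-upon floor. Metric names and numbers
# are illustrative, not taken from any real evaluation suite.

def passes_quality_gate(eval_scores: dict[str, float],
                        thresholds: dict[str, float]) -> bool:
    """Return True only if every metric meets or exceeds its minimum floor."""
    return all(eval_scores.get(metric, 0.0) >= floor
               for metric, floor in thresholds.items())

# Floors agreed with the business partner before any testing begins.
thresholds = {"groundedness": 0.95, "answer_relevance": 0.90}

# A candidate that misses even one floor does not go to production.
candidate = {"groundedness": 0.97, "answer_relevance": 0.88}
print(passes_quality_gate(candidate, thresholds))
```

The point of the sketch is the ordering: the thresholds exist, and are signed off, before the candidate is scored, which is what makes "yes, that works" an objective call rather than a negotiation after the fact.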
Merritt also operates with a no-commitment policy — meaning the company doesn’t lock itself into using a particular model. It has what he calls an “incredibly heterogeneous” technology environment, combining best-of-breed models with mainframes running COBOL. That flexibility isn’t accidental. His team built common service layers, microservices and APIs that sit between the AI layer and everything underneath — so when a better model comes along, swapping it in doesn’t mean starting over.
Because, Merritt explained, “the best of breed today might be the worst of breed tomorrow, and we don’t want to set ourselves up to fall behind.”
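The service-layer idea Merritt describes can be sketched as a small abstraction: downstream code talks to a common interface, and the model behind it can be swapped without touching callers. The class and vendor names below are invented for illustration; real providers would be vendor SDK calls behind the same interface.

```python
# Sketch of a model-agnostic service layer. The rest of the stack codes
# against CompletionProvider, never against a specific vendor SDK, so a
# "best of breed" swap is a one-line change. All names are hypothetical.
from abc import ABC, abstractmethod

class CompletionProvider(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class VendorAProvider(CompletionProvider):
    def complete(self, prompt: str) -> str:
        # A real implementation would call vendor A's API; stubbed here.
        return f"[vendor-a] {prompt}"

class VendorBProvider(CompletionProvider):
    def complete(self, prompt: str) -> str:
        return f"[vendor-b] {prompt}"

class AIServiceLayer:
    """The layer owns the provider; callers never see which model is behind it."""
    def __init__(self, provider: CompletionProvider):
        self._provider = provider

    def swap_provider(self, provider: CompletionProvider) -> None:
        self._provider = provider  # no downstream code changes

    def answer(self, prompt: str) -> str:
        return self._provider.complete(prompt)

service = AIServiceLayer(VendorAProvider())
print(service.answer("summarize claim 123"))
service.swap_provider(VendorBProvider())  # yesterday's best is replaceable
print(service.answer("summarize claim 123"))
```

The design choice is the indirection itself: the cost of switching vendors is paid once, in the interface, instead of repeatedly across every consuming application.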
Credit: Brian Malloy Photo
Advertisement
Weeding instead of letting a thousand flowers bloom
Mass General Brigham (MGB), for its part, took more of a spray and pray approach — at first.
Around 15,000 researchers in the not-for-profit health system have been using AI, ML, and deep learning for the last 10 to 15 years, CTO Nallan “Sri” Sriraman said at the same VB event.
But last year, he made a bold choice: His team shut down a sprawl of non-governed AI pilots. Initially, “we did follow the thousand flowers bloom [methodology], but we didn’t have a thousand flowers, we had probably a few tens of flowers trying to bloom,” he said.
Like Merritt’s team at MassMutual, MGB pivoted to a more holistic view, examining why they were developing certain tools for specific departments or workflows. They questioned what capabilities they wanted and needed and what investment those required.
Advertisement
Sriraman’s team also spoke with their primary platform providers — Epic, Workday, ServiceNow, Microsoft — about their roadmaps. This was a “pivotal moment,” he noted, as they realized they were building in-house tools that vendors were already providing (or were planning to roll out).
As Sriraman put it: “Why are we building it ourselves? We are already on the platform. It is going to be in the workflow. Leverage it.”
That said, the marketplace is still nascent, which can make for difficult decisions. “The analogy I will give is when you ask six blind men to touch an elephant and say, what does this elephant look like?” Sriraman said. “You’re gonna get six different answers.”
There’s nothing wrong with that, he noted; it’s just that everybody is discovering and experimenting as the landscape keeps shifting.
Advertisement
Instead of a wild West environment, Sriraman’s team distributes Microsoft Copilot to users across the business, and uses a “small landing zone” where they can safely test more sophisticated products and control token use.
They also began “consciously embedding AI champions” across business groups. “This is kind of a reverse of letting a thousand flowers bloom, carefully planting and nourishing,” Sriraman said.
Observability is another big consideration; he describes real-time dashboards that manage model drift and safety and allow IT teams to govern AI “a little more pragmatically.” Health monitoring is critical with AI systems, he noted, and his team has established principles and policies around AI use, not to mention least access privileges.
In clinical settings, the guardrails are absolute: AI systems never issue the final decision. “There’s always going to be a doctor or a physician assistant in the loop to close the decision,” Sriraman said. He cited radiology report generation as one area where AI is used heavily, but where a radiologist always signs off.
Advertisement
Sriraman was clear: “Thou shall not do this: Don’t show PHI [protected health information] in Perplexity. As simple as that, right?”
And, importantly, there must be safety mechanisms in place. “We need a big red button, kill it,” Sriraman emphasized. “We don’t put anything in the operational setting without that.”
Ultimately, while agentic AI is a transformative technology, the enterprise approach to it doesn’t have to be dramatically different. “There is nothing new about this,” Sriraman said. “You can replace the word BPM [business process management] from the ’90s and 2000s with AI. The same concepts apply.”
Three YouTube channels have banded together and filed a class action lawsuit against Apple, as first spotted by MacRumors. According to the lawsuit, the creators behind h3h3 Productions, MrShortGameGolf and Golfholics have accused Apple of violating the Digital Millennium Copyright Act by scraping copyrighted videos on YouTube to train its AI models.
While the YouTubers’ videos are available to watch on the platform, the lawsuit alleged that Apple illegally circumvented the “controlled streaming architecture” that regular users are limited to. The creators claimed that Apple’s video scraping was used to train its generative AI products, adding that the tech giant’s “massive financial success would not have been possible without the video content created” by the YouTubers. MacRumors noted that these YouTube channels have also filed similar lawsuits against other tech companies, including Meta, Nvidia, ByteDance and Snap.
It’s not the first time a company’s alleged AI training methods have landed it in legal trouble. OpenAI and Microsoft were both accused of using copyrighted New York Times articles to train their AI chatbots. Similarly, Perplexity was recently sued by Reddit and Encyclopedia Britannica for alleged copyright and trademark infringements. Last year, Apple was also named in a separate class action lawsuit from two neuroscience professors who claimed their copyrighted works were used without permission. We reached out to Apple for comment and will update the story when we hear back.
The latest Pew Research Center survey, conducted Jan. 20-26, 2026, finds that most White evangelicals (69%) approve of the way Trump is handling his job as president. And a majority (58%) say they support all or most of his plans and policies.
Let that sink in for a bit. The operative term here is probably “white,” but Trump has been embraced by the evangelical community, despite his being about as far removed from the ideals of Christianity as their arch-nemesis, the Devil. (And let’s not forget I’m talking about the ideals, which are often preached but rarely practiced.)
Here’s how Trump handled Easter morning, one of the holiest (no pun intended) holidays observed by the people most likely to support him no matter what:
In Trump’s own words, at 5:03 am on Easter Sunday:
Tuesday will be Power Plant Day, and Bridge Day, all wrapped up in one, in Iran. There will be nothing like it!!! Open the Fuckin’ Strait, you crazy bastards, or you’ll be living in Hell – JUST WATCH! Praise be to Allah. President DONALD J. TRUMP
Now, I have to admit that when I first read this, I thought Trump was announcing some new celebration of US infrastructure before derailing his own train of thought. But it’s definitely not that.
Both sides have threatened and hit civilian targets like oil fields and desalination plants critical for drinking water. Iran’s U.N. mission on social media called Trump’s threat “clear evidence of intent to commit war crime.”
Iran’s military joint command warned of stepped-up retaliatory attacks on regional oil and civilian infrastructure if the U.S. and Israel attack such targets there, according to state television.
Advertisement
The laws of armed conflict allow attacks on civilian infrastructure only if the military advantage outweighs the civilian harm, legal scholars say. It’s considered a high bar to clear, and causing excessive suffering to civilians can constitute a war crime.
While it looks like both sides in this war are willing to strike civilian infrastructure, the United States should be trying to take the high road (the one without war crimes). And if it can’t be bothered to do that, the administration should — at the very least — try to keep the president from publicly saying we’re going to commit war crimes.
But, alas, there’s no one willing to stop him. Pete Hegseth is definitely relishing his unearned role as the Secretary of Defense (“Back to the Stone Age.”) And he appears to be firing anyone who disagrees with things like drone-killing people in international waters and, you know, engaging in war crimes.
Shamefully, they won’t see a drop in support despite Trump threatening war crimes, dropping an F-bomb, and promising to send people halfway around the world to hell, as if he were a god himself. And that’s a damning indictment of an entire segment of Americans who choose to treat their religion as a weapon and want the world to be remade in their own image — something they often accuse Muslims of doing. The irony is lost on them, along with the man they’ve chosen to treat as God’s appointed leader.
We’ve had a lot of low points as a nation, but usually we’ve at least tried to improve. That’s no longer the case. We’re under the rule of people who debase and abuse the nation they claim to love. Happy Fuckin’ Easter, you crazy bastards. Welcome to Hell.