Not only will it save you time and energy, but in the case of a vacuum upgrade, you’ll soon realise just how much cleaner your home can be once it’s freed from dust, hair, and larger debris.
For the most part, you’ll want to clean your home in its standard mode, which allows the vacuum to run for up to 40 minutes at a time.
That’s easily enough time to give all of your rooms a proper clean and help to keep the dust on larger floors at bay.
When it is required, however, the Dyson V8 features a Max mode for a deeper clean that’s ideal for high-particle areas like car interiors or floor mats.
Sadly, running the Dyson V8 in Max mode does cut down battery life, which isn’t ideal if you have a larger-scale clean in mind.
It’s worth mentioning that the Dyson V8 remains one of the most beloved vacuums money can buy. Once you’ve run the V8 Absolute through a clean, you’ll be pleased to know that its filtration system keeps any captured allergens safely stored.
Dyson vacuums have always set the standard when it comes to delivering a genuinely cleaner air output after use, so you can always feel assured that your V8 is doing the important work of letting you breathe fresh air.
For any vacuum that costs under £200, the Dyson V8 Absolute is a true bargain.
Lasers are cool and all, but they can be somewhat difficult to control at times. This is especially true when you have hundreds, thousands, or millions of lasers you need to steer. Fortunately, the MITRE Corporation might have created exactly what’s needed to accomplish this feat. While you might expect this to be done in a similar fashion to a DLP micromirror array, these researchers have created something a bit different.
A ski-slope-like MEMS array is used to contort light as needed. Each slope can be controlled so precisely that entire images can be displayed by the arrays. This is done using a “piezo-opto-mechanical photonic integrated circuit” (POMPIC). Each slope is constructed from SiO2, Al, AlN, and Si3N4, all deposited in such a way as to allow the specific bending needed for control.
While quantum computing hasn’t hit these slopes yet, that doesn’t mean you can’t look into the other puzzles needed for the quantum revolution. Quantum computing is something that people have been trying for a long time to get right. Big claims come from all the big players. Take Microsoft, for example, with claims of using Majorana zero mode anyons for topological quantum computing.
In 1964, science fiction writer Arthur C. Clarke predicted that computers would overtake human evolution. “Present-day electronic brains are complete morons, but this will not be true in another generation,” he told the BBC. “They will start to think, and eventually, they will completely out-think their makers.”
Daniel Roher opens his new documentary The AI Doc: Or How I Became An Apocaloptimist (2026) with this cheerful prophecy. And in the hundred-some minutes that follow, he tries to make sense of a technology that, by his own admission, he does not understand — and a world that is rapidly being changed by it. Explaining that he conceives of AI as a “magic box floating in space,” he enlists the help of experts to provide him with a crash course in what, exactly, AI is.
Roher’s real concern, however, isn’t so much about the workings of AI — though some of his subjects do attempt to explain them for him — but whether it might displace us, as Clarke’s prediction suggests it will.
While making the film, Roher learns that his wife Caroline is pregnant with their first child. He tracks his wife’s pregnancy and the birth of his son in parallel with the advent of AI. It’s a smart choice that builds on a fear all parents share: What sort of world are we making for our children? And behind that question is another, vibrating in anxious silence: What happens after our offspring replace us? This twinned existential angst drives his efforts to hear from the doomers, the techno-optimists, and the in-between “apocaloptimists” whose ranks he ultimately joins.
The AI Doc, as its sweeping title suggests, wants to shape and lead the narrative around AI. It’s certainly set up to do that — Roher is fresh off an Oscar win for his documentary Navalny, and the film opened in nearly 800 theaters, which counts as a wide release for a nonfiction title. The final product is indicative of the ways that public attitudes around AI are in massive flux. Roher hopes to reach people of my grandmother’s generation who conflate AI with smartphones and spellcheck, as well as people who don’t seem to care whether a video was AI-generated.
But I think that this documentary has come too late to steer the conversation, something the film itself acknowledges. For all its transformative potential, AI isn’t actually unique among emerging technologies yet — it has not been cataclysmic or ushered in a golden age of prosperity — but Roher and many of those he interviews tend to treat it as a radical break with all that has come before. As a result, they tend to fixate on the binary extremes of doom or salvation. It’s an approach that reinforces our own helplessness in the face of AI-driven change, while also muddying our understanding of what we might yet be able to do as we seek to adapt, mitigate harm, and shape the world that AI could otherwise truly start remaking.
Roher, contemplating his child’s future, opts to hear the bad news first. Tristan Harris, the cofounder of the Center for Humane Technology, doesn’t mince words: “I know people who work on AI risk who don’t expect their children to make it to high school.”
Many of the film’s other interviewees are similarly gloomy. Geoffrey Hinton, the “godfather of AI,” for example, argues that as AI becomes smarter, it will become better at manipulating humanity. But no one is more pessimistic than Eliezer Yudkowsky, the well-known AI doomer and co-author of the controversial book If Anyone Builds It, Everyone Dies. As the title suggests, Yudkowsky believes that superintelligent AI would wipe out humanity — a position that he stands by and lays out for Roher.
Turning his back on these storm clouds — and taking the advice of his wife, Caroline, who tells him that he needs to find hope for the future — Roher tunes into the chorus of AI optimists. They tell him, variously, that there are more potential benefits than downsides to AI; that technology has made the world better in every way; that this will be the tool that helps us solve all our greatest problems. Not to mention: AI will bring the best health care on the planet to the poorest people on Earth, extend our healthspan by decades, and enable us to live in a postscarcity utopia free of drudgery. Oh, and: We will become an interplanetary species, all thanks to AI.
These promises initially reassure Roher, perhaps because he seems easily led by whomever he’s spoken to most recently. It is Harris who ultimately convinces him that we can’t separate the promise of AI from the peril it presents. The conclusions that result will be obvious to anyone who’s thought about these issues for more than a moment or two: If AI automates work, for example, how will people make a living?
It doesn’t help that many of the most invested players reflect on these questions superficially, if at all. OpenAI CEO Sam Altman tells Roher that he’s worried about how authoritarian governments will use AI — a claim that is followed in the film by a cut to images of Altman posing with authoritarian leaders. Other tech CEOs fall back on PR pleasantries in response to the filmmaker’s questions, and Roher too often goes easy on them, never diving deeper when they admit that even they aren’t confident that everything will go well. That these are the leaders of AI companies racing against each other to make the technology more and more advanced does little to inspire confidence.
(Some of the techno-pessimistic people interviewed for the documentary have expressed their strong displeasure with the final result.)
“Why can’t we just stop?” Roher asks these tech CEOs. He’s told that a moratorium is a pipe dream: Many groups around the world are building advanced AI, all with different motivations. Legislation lags far behind the rate of technological progress. Even if we could pass laws in the US and EU that would stop or slow things down, says Anthropic CEO Dario Amodei, we’d have to convince the Chinese government to follow suit.
If we don’t create it, the thinking goes, our enemies will. It’s best to get ahead of them.
This is, of course, the logic of nuclear deterrence: If we don’t mitigate the risk of ending the world through mutually assured destruction, there’s nothing stopping someone else from pressing the button first.
An apocalypse in every generation
The atomic comparison is apt, if only because Roher sees the stakes in similarly stark terms. “Will my son live in a utopia, or will we go extinct in 10 years?” he wonders aloud. It’s a question that’s central to the film. But he never really sits with the more likely scenario that AI will neither lead to human extinction nor end all disease and drudgery. Every generation faces the specter of its own annihilation — and yet the ends of days keep accumulating, no matter how close the doomsday clock gets to apocalypse.
The point, then, isn’t that AI won’t be bad for us, but that by framing the question in strictly utopian or dystopian terms, we miss the messy reality that lies between hell on earth and heaven in the stars. Although The AI Doc tries to chart an “apocaloptimist” course between two extremes, it doesn’t grasp the real stakes. AI doesn’t really create new risks as such — it’s a force multiplier for existing ones like the threat of nuclear warfare and the development and use of biological weapons. The chief existential risks of AI are human-made and human-driven. And that means, as Caroline says in the film’s ending narration, “We get to decide how this goes.” She’s right, but her husband never seems to understand how she’s right.
Like too many Big Issue Documentaries, Roher’s film is heavy on problems and light on solutions. It does offer some, calling for international cooperation, transparency, legal liabilities for companies if something goes wrong, testing before release, and adaptive rules to match the speed of progress. But just as this is a strictly introductory course in AI — one that will probably irritate those who’ve already moved on to AI 102 — these recommendations are only a starting point. For Roher, they offer reason to be hopeful. For the rest of us, they’re just the beginning of an opportunity to meaningfully steer the course of our future.
They started by mixing sour plum vodka at home; now they produce 1,800 bottles every month
At every party, people would pull Alexander Cheong aside and ask the same question: can I get a bottle of that?
The “that” was his homemade Sour Plum Vodka. Sick of alcohol brands that felt serious and disconnected from the actual experience of drinking, Alexander started mixing his own at social gatherings, rooted in the Southeast Asian flavours he grew up with.
“Alcohol is a social lubricant,” he said. “It’s about making the worries and stress clear up. We want to embody not taking anything too seriously.”
That philosophy became the foundation of Clumzy—a spirits brand he built with two friends, Kenneth Tan and Daniel Lim, on one simple idea: bottling Southeast Asian flavours with a bit of a kick. Five years later, with just three flavours and no outside investment, the brand has crossed S$1 million in cumulative revenue.
We spoke to the founders to find out how a kitchen experiment became a million-dollar business.
Starting a spirits brand from scratch
Clumzy is known for its signature Sour Plum Vodka./ Image Credit: Clumzy
The origins of Clumzy trace back to 34-year-old Alexander’s natural flair for mixology. Always the life of the party, he rarely enjoyed what he called “cold, hard, and serious” alcohol.
At social gatherings, to save money, he would mix his own cocktails, making his own flavourings from whatever he could imagine. Eventually, Alexander became known for his experimental jungle juices and punches, but one creation in particular stood out: a Sour Plum Vodka that became an instant crowd favourite at every party.
When COVID-19 hit, and social gatherings were restricted, demand for Alexander’s creations didn’t disappear—instead, it intensified. Friends and even friends of friends wouldn’t stop asking him to bottle his drinks for family occasions and casual nights in.
So, Alexander roped in his friend Kenneth to launch Clumzy in early 2021, taking orders via Instagram DMs only. Without any real push, word of mouth spread faster than they could keep up with.
Clumzy’s early “medicine bottle” look (left) vs its revamped packaging (right) after Daniel came on board./ Image Credit: Clumzy
But there was one clear problem: branding.
The product started with a simple label and packaging that wasn’t particularly eye-catching. About a month in, they brought in Daniel, an experienced marketer and close friend, as the third co-founder to help turn the product into a proper brand.
Daniel’s entry laid down Clumzy’s branding foundations. He redesigned the labels and packaging—joking that the original bottle looked like a “medicine bottle”—along with new photography, the website, and the e-commerce platform, shaping the brand identity Clumzy is known for today.
From there, while Daniel handled brand and marketing, Alexander and Kenneth focused on the business, trade relationships, and operations.
“If it didn’t work out, we had nothing to fall back on”
Clumzy started out with one sole offering: the Sour Plum Vodka. The product is marketed for its versatility—great for shots, on the rocks, or in cocktails. Each bottle retails at S$58 with an alcohol by volume of under 20%.
In the beginning, the spirit was made by the trio in Kenneth’s kitchen, which was stocked with giant Cambro containers and bottles. Supplies were stored in a small rented warehouse at S$200 per month, meaning they had to physically lug stock to and from Kenneth’s home for every production run.
At the time, they were making around 180 bottles a month—a capacity they quickly outgrew as demand surged beyond what a home setup could sustain.
Image Credit: Clumzy
Within just a few months, Clumzy became a legitimate side hustle generating real extra income on top of their day jobs. That created a genuine decision point for the co-founders: should they stay comfortable with some pocket money, or risk everything to grow it into a real business?
Operating from a home kitchen came with clear limitations. They could only sell directly to consumers, with B2B opportunities completely off the table without a licensed commercial facility.
But upgrading wasn’t a small leap. Setting up a proper production space required tens of thousands of dollars—essentially their entire annual revenue at the time.
If it didn’t work out, we had nothing to fall back on.
Daniel Lim, co-founder of Clumzy
Still, they chose to take the risk. The trio secured a liquor licence and set up a dedicated production facility, moving from manual preparation to automated mixing and bottling processes.
They hosted pop-up booths almost every weekend
From the start, the founders have been prudent with their spending, purchasing only what was necessary. They bootstrapped the venture with “a couple of thousand dollars,” and the business generated revenue, paid for itself, and broke even within a year.
(Left): Alexander and Daniel at Loky’s and the Crew in 2025; (Right): Clumzy also dispenses slushies of its offerings at some events./ Image Credit: Clumzy
Early on, they also committed to building Clumzy through direct consumer engagement.
They signed up for pop-ups and hosted booths “almost every weekend,” becoming familiar faces at curated events like ArtBox 2023, Boutiques Singapore 2024, and Christmas Atelier 2025, amongst smaller pop-ups at bars and cafes since 2021.
“Demand has been very strong,” Daniel noted. “We sold out on the first day of our first Boutiques Singapore back in 2022, which lasted two weeks. We made in one day what we normally would make over three to four days at other events.”
As Clumzy’s presence at weekend pop-ups grew, restaurants and eateries began taking notice, often after seeing strong customer demand at events or encountering the brand through word of mouth and social media.
At the same time, another key growth driver was how the founders expanded the ways customers could experience the product. They diversified into events, offering on-tap services for weddings and house parties, and experimented with more flexible formats, including slushie versions of their drinks at events.
“Diversifying into slushies made sense for us to create an option to make drinks that are fun for people who may not want to drink alcohol that tastes like alcohol,” Daniel shared.
He added that 65% of their customers are women, reflecting how Clumzy has resonated with a demographic that traditional spirits brands have historically underserved.
Hitting the S$1M milestone
(Left): Clumzy’s booth at Boutiques Singapore 2025; (Right): Clumzy stocks at various partners like The Liquor Store./ Image Credit: Clumzy
Today, Clumzy produces around 1,800 bottles of spirits each month and has expanded to a team of eight. Its offerings are stocked at 11 retail partners, including Pat’s Music Pub and The Liquor Shop.
The business model currently sees revenue split roughly into 70% B2C and 30% B2B. After four years of operation, Clumzy crossed S$1 million in cumulative revenue in 2025, a significant milestone for a bootstrapped local spirits brand.
Clumzy launched with its Sour Plum Vodka, before the Chrysanthemum Lychee Gin and finally its Coconut Pandan Rum./ Image Credit: Clumzy
The brand has also since expanded its product line to three spirits, with customer feedback playing a crucial role in shaping its development.
While the Sour Plum Vodka developed a devoted following, it became clear that the polarising flavour was an acquired taste—some loved it instantly, while others needed time to warm up to it. This insight led to the development of the brand’s second flavour: the Chrysanthemum Lychee Gin, designed for those who find the sour plum variety too intense.
After persistent calls from customers for a third offering, the trio recently released the Coconut Pandan Rum, a creation they felt was important to Clumzy’s identity as a business inspired by Southeast Asian flavours.
Protecting what they’ve built
The trio believe strongly in what they’ve built and are looking to grow it organically./ Image Credit: Clumzy
Around the time Clumzy turned one, several parties approached the founders with offers: investment for equity stakes, rebranding under a larger umbrella, or outright acquisition.
At the time, all three founders still had day jobs. These offers initially felt genuinely attractive—a chance to take the brand one step up without carrying all the risk alone.
But Alex had the foresight to see what they’d be giving up: the offers undervalued everything that they stood for as a small, founder-led brand. As such, they turned them down.
“In hindsight, some of those offers really shortchanged us,” Alex said. “I’m glad we trusted our gut in those decisions, and I’m glad we saw it all pay off eventually through our hard work.”
Clumzy at CellarFiesta 2025 and Artbox 2023./ Image Credit: Clumzy
Today, Clumzy is run by Alexander and Daniel, with Kenneth having taken a backseat in 2024.
The next chapter for the brand involves significantly expanded distribution. After months of negotiations with NTUC FairPrice, the brand’s spirits are set to hit the chain’s supermarket shelves soon, marking its entry into mainstream retail.
Encouraged by strong demand in Singapore, the founders also have international expansion on the horizon. They plan to bring Clumzy to Thailand and Australia, driven by interest from Singaporeans abroad who have discovered the brand and want access in their new home countries.
That said, the team has also observed a broader shift in drinking culture, with more consumers becoming intentional about their alcohol consumption. While nightlife has seen a decline, overall alcohol consumption remains relatively steady, as more people drink at home or during daytime social occasions.
Daniel noted that it’s all about a healthier relationship with socialising.
“People aren’t drinking less. They’re bored with sameness. Alcohol offerings have felt largely unchanged for decades. What people are genuinely hungry for is novelty and a sense of connection to something familiar,” Daniel explained.
That sense of novelty and familiarity is what Clumzy aims to deliver.
The Bremen startup’s platform deploys teams of AI agents that autonomously execute engineering tasks across more than 75 existing tools, without replacing any of them. Revaia led the Series B; Capgemini joined through its ISAI Cap Venture vehicle. All Series A investors returned.
Synera, the Bremen-based agentic AI platform for industrial engineering, has raised $40 million (approximately €35 million) in a Series B round led by Revaia, with participation from Capgemini through ISAI Cap Venture.
All of the company’s existing Series A investors returned, including UVC Partners with a substantial commitment from its growth fund, BMW iVentures, Cherry Ventures, Venture Stars, and Spark Capital.
The round is intended to accelerate Synera’s expansion in the US and internationally, building on existing deployments at NASA, BMW, Airbus, Volvo Trucks, and Hyundai.
Synera was founded in 2018 in Bremen by Dr. Moritz Maier, Sebastian Möller-Lafore, and Daniel Siegel, a team that had been working together since 2006, initially under the name ELISE (Evolutionary Lightweight Structure Engineering), before rebranding in 2022 to reflect the company’s expanded scope.
The platform connects more than 75 existing engineering tools, including software from Altair, Autodesk, Hexagon, PTC, and Siemens, into a unified orchestration layer, allowing AI agents to execute complex engineering tasks autonomously across design, simulation, optimisation, costing, and reporting without requiring companies to replace their existing infrastructure.
The platform is deployed on-premises, keeping engineering intellectual property and sensitive data within customers’ own environments. Synera has also established a US presence in Boston, Massachusetts.
The company describes its approach as deploying a virtual engineering team: agents that don’t merely assist but autonomously execute, running iterative simulations, generating reports, responding to RFQs, and progressing through approval workflows without human intervention at each step.
The platform has been internally described as “JARVIS for engineers.” Quantified outcomes cited by Synera and independently validated by Frost & Sullivan in a 2025 analysis include a 95% reduction in finite element simulation time at engineering consultancy EDAG, and a 30% weight reduction in 3D-printed robot gripper designs at BMW’s Additive Manufacturing Campus.
NASA has deployed multiple Synera agents to transform requirements into validated part designs, completing hundreds of design iterations in an hour.
The investment context is a structural mismatch between AI investment and manufacturing deployment. Gartner’s 2025 CIO survey found that 86% of manufacturing respondents plan to increase generative AI investment in 2026 and 97% expect to have deployed it by 2028, yet only 41% of AI and generative AI prototypes currently reach production, according to Gartner’s 2024 AI Mandates for the Enterprise survey.
Synera’s proposition is that the gap exists because most AI tools treat engineering as a chat interface problem rather than an infrastructure problem: the agents need to connect to the actual tools where the work happens, not sit alongside them.
The company has also been recognised by Frost & Sullivan with its 2025 Global AI Agents for Engineering Transformational Innovation Leadership award.
The Series A, raised in September 2022, was $14.8 million, led by Spark Capital with BMW iVentures, Cherry Ventures, UVC Partners, and Venture Stars participating. The Series B brings total funding to approximately $58 million.
Capgemini’s entry through ISAI Cap Venture is strategically notable: Capgemini is one of the world’s largest IT services firms and a significant engineering services provider to the automotive and aerospace sectors Synera targets, making it both an investor and a potential channel partner.
Pillar, founded in 2023, automates hedging processes for such businesses. Hedging is the practice of placing trades that offset or cancel out losses from other positions. Geopolitics has not been kind to the commodities market, which has seen much volatility in the past year.
Others in Pillar’s seed round include Crucible Capital, Gallery Ventures, and Uber CEO Dara Khosrowshahi. The company has raised $23 million to date.
Harsha Ramesh, the company’s co-founder and CEO (founded alongside Chinmay Deshpande, the company’s CTO), said the company uses AI to ingest and parse data from client contracts, cash flows, inventories, ERP software, spreadsheets, and even WhatsApp messages to “continuously analyze exposure across commodities, FX, and freight.”
It can then build and manage a hedge portfolio for its clients, and adjust positions automatically based on “market conditions, volatility, and the client’s risk tolerance,” Ramesh continued. The platform executes trades and continuously monitors risk and exposure, turning hedging from a “static, periodic decision to a continuous, autonomous system,” Ramesh said.
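To make that concrete, here is a minimal, hypothetical Python sketch of the netting-and-sizing logic such a system automates. The commodities, quantities, and the flat 90% hedge ratio below are illustrative assumptions, not Pillar’s actual model or API.

```python
# Hypothetical exposure data: (commodity, tonnes), where positive values
# are long positions (inventory, purchase contracts) and negative values
# are short positions (forward sales). All numbers are made up.
exposures = [
    ("copper", 120.0),    # tonnes held in inventory
    ("copper", -80.0),    # tonnes already sold forward
    ("aluminium", 40.0),  # tonnes under a purchase contract
]

def net_exposure(positions):
    """Aggregate positions per commodity to find what is actually at risk."""
    totals = {}
    for commodity, tonnes in positions:
        totals[commodity] = totals.get(commodity, 0.0) + tonnes
    return totals

def hedge_orders(totals, hedge_ratio=0.9):
    """Size offsetting futures orders; hedging less than 100% leaves room
    for forecast error and the client's risk tolerance."""
    return {c: round(-t * hedge_ratio, 1) for c, t in totals.items() if t}

totals = net_exposure(exposures)
print(totals)                # {'copper': 40.0, 'aluminium': 40.0}
print(hedge_orders(totals))  # {'copper': -36.0, 'aluminium': -36.0}
```

A production system would presumably layer live market data, volatility estimates, and the approval workflows mentioned later in this piece on top of this core arithmetic.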
Pillar’s clients include Shibuya Sakura Industries, a trading firm that buys and sells commodities like metals; the recyclable materials company Sigma Recycling; and United Metals Solution Group, which also recycles and trades metals.
Ramesh was once a macro trader, managing large derivative trading books and working with some of the largest companies in the world as they sought to hedge foreign exchange rates and interest rate exposure, he said. “I also spent time at a medium-sized physical business in import-export,” he recalled.
“What stood out was that sophisticated institutions had access to tools, infrastructure, and talent, while the actual producers, importers, and manufacturers driving global trade had little to no access to this,” he said. “Risk management was treated as a luxury, despite being essential.”
Pillar hopes to offer sophisticated, institutional-grade tools to small and medium-sized enterprises. “Our goal is to make hedging as accessible and ubiquitous as payments or accounting software,” he said.
Image Credits: Harsha Ramesh and Chinmay Deshpande.
Others in this business include the legacy desks at big banks and commodity risk platforms like Topaz and RadarRadar.
Ramesh said humans are still in the loop in some ways at Pillar, handling “approvals, oversight, and strategic decisions.” Humans also help with more “complex situations” — like large transactions, where a human team will mix their judgment with the machine’s execution.
Even though China has gotten very close to its first-ever manned moon landing, NASA remains the only space agency to land people on the moon as of mid-2026. One of those NASA astronauts, Charles M. Duke Jr., left a very special memento behind to mark his landing: a photograph of his beloved family.
Along with Thomas K. Mattingly II and John W. Young, Duke Jr. was part of the Apollo 16 mission, which departed Earth on April 16, 1972. Apollo 16’s goal was the region of Descartes, a crater in the moon’s highlands that previous missions had not visited. Their objectives were to collect rock samples, which would allow scientists to learn more about the moon’s composition, as well as to place instruments and conduct experiments to obtain more detailed readings of solar winds and other forces on the moon’s surface.
Before leaving the moon, Duke Jr. left a small scrap of cloth on its surface marked “64-C,” the name of the test pilot class he’d graduated with, and a commemorative coin marking the 25th anniversary of the United States Air Force’s founding. He didn’t just honor his military family, though; he also left something to commemorate the family waiting for him at home, placing a photograph of himself with his two young sons, Charles and Tom, and his wife Dotty. On the back, according to Air & Space Forces Magazine, was a simple message: “This is the family of Astronaut Duke from Planet Earth. Landed on the Moon, April 1972.”
How Astronaut Duke’s family joined him on the perilous mission
Charles M. Duke Jr.’s photograph of himself with his family was taken to the moon, it seems, largely for the same reason dads do most things: to score cool points with his children. Speaking to Business Insider in 2015, Duke Jr. recalled asking his kids, “Would y’all like to go to the Moon with me?” to get them interested in the mission. Taking the photograph was his way of allowing them to do just that.
It was for the best, perhaps, that his sons didn’t physically join him and John Young on the Moon’s surface. This is the sort of perilous journey on which so much hinges on luck and timing. Indeed, Apollo 16 came within a hair’s breadth of having to cancel the landing entirely. Speaking to Fox Carolina News in April 2026, Duke Jr explained: “[A] serious problem happened about an hour before we landed on the Moon. [Thomas K] Mattingly reports a problem with the main engine … in the Command Module, which was your ride home.”
This happened on the far side of the moon and could have aborted the landing. Thankfully, Mission Control found a workaround that brought the engine back to life and saved the mission. As for the photograph, it remains there, over half a century later. Though Duke wrapped the photo in plastic, it’s unclear how well it held up to solar radiation in the decades between the Apollo 16 landing and NASA’s Artemis II lunar mission, which took place in early April 2026.
Despite what it sounds like, a concrete calculator is not a mathematical device made of cement. A concrete calculator is actually a digital (or mental) tool for estimating how much concrete a construction or landscaping project will need. Because concrete is typically sold by volume (most often in cubic yards), you should figure out how much you need before you start work on your project. Order too much, and you’ll be overpaying for a ton of excess material spinning around in the cement truck. Order too little, and you’ll have to put the project on pause until you can get another delivery. Neither is the end of the world, but both are major inconveniences.
There are plenty of online concrete calculators you can use to make sure neither scenario becomes your reality on the job. That way, you get a precise estimate based on your project’s specific dimensions without having to spitball it. Just take your basic project dimensions (length, width, and depth), and the calculator converts those figures into cubic volume. Whether you’re pouring driveways, patios, foundations, or slabs, the calculator ensures you always know your total cost and the materials required. Do the simple math, grab your DIY concrete tools, and get to work.
How a concrete calculator works
Microgen/Getty Images
Concrete calculators use a pretty straightforward mathematical formula: length times width times depth, which gives you volume. For most rectangular areas, just measure these three dimensions, multiply them, and you’re good to go. For circular areas, measure the diameter, then multiply pi by the radius squared and by the depth. For more irregular shapes, it’s best to divide the total area into smaller sections, do separate calculations, and add them all up for the grand total before starting your construction and concrete jobs.
Once the measurements are entered into the concrete calculator, it’ll probably give you the results in cubic feet. From there, you can convert them into cubic yards by dividing by 27 (there are 27 cubic feet in a cubic yard). Plenty of concrete calculators will do that step for you, but it still helps to know. Don’t forget to account for real-world variables as well. For example, adding an extra 5% to 10% to the final estimate helps cover potential spillage, uneven surfaces, bad mixtures, or slight miscalculations.
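For anyone who wants to check a calculator’s output by hand, here is a short Python sketch of that same arithmetic. The 10% waste allowance and the example patio dimensions are just the rules of thumb mentioned above, not figures from any particular calculator.

```python
import math

CUBIC_FEET_PER_CUBIC_YARD = 27  # a cubic yard is 3 ft x 3 ft x 3 ft

def slab_cubic_yards(length_ft, width_ft, depth_in, waste=0.10):
    """Rectangular slab: length x width x depth, plus a waste allowance."""
    volume_ft3 = length_ft * width_ft * (depth_in / 12)  # inches -> feet
    return volume_ft3 * (1 + waste) / CUBIC_FEET_PER_CUBIC_YARD

def circle_cubic_yards(diameter_ft, depth_in, waste=0.10):
    """Circular pad: pi x radius squared x depth, plus the same allowance."""
    radius_ft = diameter_ft / 2
    volume_ft3 = math.pi * radius_ft**2 * (depth_in / 12)
    return volume_ft3 * (1 + waste) / CUBIC_FEET_PER_CUBIC_YARD

# A 12 ft x 10 ft patio poured 4 inches deep, with 10% extra for spillage:
print(round(slab_cubic_yards(12, 10, 4), 2))  # ~1.63 cubic yards
```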
The software industry is racing to write code with artificial intelligence. It is struggling, badly, to make sure that code holds up once it ships.
A survey of 200 senior site-reliability and DevOps leaders at large enterprises across the United States, United Kingdom, and European Union paints a stark picture of the hidden costs embedded in the AI coding boom. According to Lightrun’s 2026 State of AI-Powered Engineering Report, shared exclusively with VentureBeat ahead of its public release, 43% of AI-generated code changes require manual debugging in production environments even after passing quality assurance and staging tests. Not a single respondent said their organization could verify an AI-suggested fix with just one redeploy cycle; 88% reported needing two to three cycles, while 11% required four to six.
The findings land at a moment when AI-generated code is proliferating across global enterprises at a breathtaking pace. Both Microsoft CEO Satya Nadella and Google CEO Sundar Pichai have claimed that around a quarter of their companies’ code is now AI-generated. The AIOps market — the ecosystem of platforms and services designed to manage and monitor these AI-driven operations — stands at $18.95 billion in 2026 and is projected to reach $37.79 billion by 2031.
Yet the report suggests the infrastructure meant to catch AI-generated mistakes is badly lagging behind AI’s capacity to produce them.
“The 0% figure signals that engineering is hitting a trust wall with AI adoption,” said Or Maimon, Lightrun’s chief business officer, referring to the survey’s finding that zero percent of engineering leaders described themselves as “very confident” that AI-generated code will behave correctly once deployed. “While the industry’s emphasis on increased productivity has made AI a necessity, we are seeing a direct negative impact. As AI-generated code enters the system, it doesn’t just increase volume; it slows down the entire deployment pipeline.”
Amazon’s March outages showed what happens when AI-generated code ships without safeguards
The dangers are no longer theoretical. In early March 2026, Amazon suffered a series of high-profile outages that underscored exactly the kind of failure pattern the Lightrun survey describes. On March 2, Amazon.com experienced a disruption lasting nearly six hours, resulting in 120,000 lost orders and 1.6 million website errors. Three days later, on March 5, a more severe outage hit the storefront — lasting six hours and causing a 99% drop in U.S. order volume, with approximately 6.3 million lost orders. Both incidents were traced to AI-assisted code changes deployed to production without proper approval.
The fallout was swift. Amazon launched a 90-day code safety reset across 335 critical systems, and AI-assisted code changes must now be approved by senior engineers before they are deployed.
Maimon pointed directly to the Amazon episodes. “This uncertainty isn’t based on a hypothesis,” he said. “We just need to look back to the start of March, when Amazon.com in North America went down due to an AI-assisted change being implemented without established safeguards.”
The Amazon incidents illustrate the central tension the Lightrun report quantifies in survey data: AI tools can produce code at unprecedented speed, but the systems designed to validate, monitor, and trust that code in live environments have not kept pace. Google’s own 2025 DORA report corroborates this dynamic, finding that AI adoption correlates with an increase in code instability, and that 30% of developers report little or no trust in AI-generated code.
Maimon cited that research directly: “Google’s 2025 DORA report found that AI adoption correlates with an almost 10% increase in code instability. Our validation processes were built for the scale of human engineering, but today, engineers have become auditors for massive volumes of unfamiliar code.”
Developers are losing two days a week to debugging AI-generated code they didn’t write
One of the report’s most striking findings is the scale of human capital being consumed by AI-related verification work. Developers now spend an average of 38% of their work week — roughly two full days — on debugging, verification, and environment-specific troubleshooting, according to the survey. For 88% of the companies polled, this “reliability tax” consumes between 26% and 50% of their developers’ weekly capacity.
This is not the productivity dividend that enterprise leaders expected when they invested in AI coding assistants. Instead, the engineering bottleneck has simply migrated. Code gets written faster, but it takes far longer to confirm that it works.
“In some senses, AI has made the debugging problem worse,” Maimon said. “The volume of change is overwhelming human validation, while the generated code itself frequently does not behave as expected when deployed in Production. AI coding agents cannot see how their code behaves in running environments.”
The redeploy problem compounds the time drain. Every surveyed organization requires multiple deployment cycles to verify a single AI-suggested fix — and according to Google’s 2025 DORA report, a single redeploy cycle takes a day to one week on average. In regulated industries such as healthcare and finance, deployment windows are often narrow, governed by mandated code freezes and strict change-management protocols. Requiring three or more cycles to validate a single AI fix can push resolution timelines from days to weeks.
Maimon rejected the idea that these multiple cycles represent prudent engineering discipline. “This is not discipline, but an expensive bottleneck and a symptom of the fact that AI-generated fixes are often unreliable,” he said. “If we can move from three cycles to one, we reclaim a massive portion of that 38% lost engineering capacity.”
AI monitoring tools can’t see what’s happening inside running applications — and that’s the real problem
If the productivity drain is the most visible cost, the Lightrun report argues the deeper structural problem is what it calls “the runtime visibility gap” — the inability of AI tools and existing monitoring systems to observe what is actually happening inside running applications.
Sixty percent of the survey’s respondents identified a lack of visibility into live system behavior as the primary bottleneck in resolving production incidents. In 44% of cases where AI SRE or application performance monitoring tools attempted to investigate production issues, they failed because the necessary execution-level data — variable states, memory usage, request flow — had never been captured in the first place.
The report paints a picture of AI tools operating essentially blind in the environments that matter most. Ninety-seven percent of engineering leaders said their AI SRE agents operate without significant visibility into what is actually happening in production. Approximately half of all companies (49%) reported their AI agents have only limited visibility into live execution states. Only 1% reported extensive visibility, and not a single respondent claimed full visibility.
This is the gap that turns a minor software bug into a costly outage. When an AI-suggested fix fails in production — as 43% of them do — engineers cannot rely on their AI tools to diagnose the problem, because those tools cannot observe the code’s real-time behavior. Instead, teams fall back on what the report calls “tribal knowledge”: the institutional memory of senior engineers who have seen similar problems before and can intuit the root cause from experience rather than data. The survey found that 54% of resolutions to high-severity incidents rely on tribal knowledge rather than diagnostic evidence from AI SREs or APMs.
In finance, 74% of engineering teams trust human intuition over AI diagnostics during serious incidents
The trust deficit plays out with particular intensity in the finance sector. In an industry where a single application error can cascade into millions of dollars in losses per minute, the survey found that 74% of financial-services engineering teams rely on tribal knowledge over automated diagnostic data during serious incidents — far higher than the 44% figure in the technology sector.
“Finance is a heavily regulated, high-stakes environment where a single application error can cost millions of dollars per minute,” Maimon said. “The data shows that these teams simply do not trust AI not to make a dangerous mistake in their Production environments. This is a rational response to tool failure.”
The distrust extends beyond finance. Perhaps the most telling data point in the entire report is that not a single organization surveyed — across any industry — has moved its AI SRE tools into actual production workflows. Ninety percent remain in experimental or pilot mode. The remaining 10% evaluated AI SRE tools and chose not to adopt them at all. This represents an extraordinary gap between market enthusiasm and operational reality: enterprises are spending aggressively on AI for IT operations, but the tools they are buying remain quarantined from the environments where they would deliver the most value.
Maimon described this as one of the report’s most significant revelations. “Leaders are eager to adopt these new AI tools, but they don’t trust AI to touch live environments,” he said. “The lack of trust is shown in the data; 98% have lower trust in AI operating in production than in coding assistants.”
The observability industry built for human-speed engineering is falling short in the age of AI
The findings raise pointed questions about the current generation of observability tools from major vendors like Datadog, Dynatrace, and Splunk. Seventy-seven percent of the engineering leaders surveyed reported low or no confidence that their current observability stack provides enough information to support autonomous root cause analysis or automated incident remediation.
Maimon did not shy away from naming the structural problem. “Major vendors often build ‘closed-garden’ ecosystems where their AI SREs can only reason over data collected by their own proprietary agents,” he said. “In a modern enterprise, teams typically have a multi-tool stack to provide full coverage. By forcing a team into a single-vendor silo, these tools create an uncomfortable dependency and a strategic liability: if the vendor’s data coverage is missing a specific layer, the AI is effectively blind to the root cause.”
The second issue, Maimon argued, is that current observability-backed AI SRE solutions offer only partial visibility — defined by what engineers thought to log at the time of deployment. Because failures rarely follow predefined paths, autonomous root cause analysis using only these tools will frequently miss the key diagnostic evidence. “To move toward true autonomous remediation,” he said, “the industry must shift toward AI SRE without vendor lock-in; AI SREs must be an active participant that can connect across the entire stack and interrogate live code to capture the ground truth of a failure as it happens.”
When asked what it would take to trust AI SREs, the survey’s respondents coalesced unanimously around live runtime visibility. Fifty-eight percent said they need the ability to provide “evidence traces” of variables at the point of failure, and 42% cited the ability to verify a suggested fix before it actually deploys. No respondents selected the ability to ingest multiple log sources or provide better natural language explanations — suggesting that engineering leaders do not want AI that talks better, but AI that can see better.
The question is no longer whether to use AI for coding — it’s whether anyone can trust what it produces
The survey was administered by Global Surveyz Research, an independent firm, and drew responses from Directors, VPs, and C-level executives in SRE and DevOps roles at enterprises with 1,500 or more employees across the finance, technology, and information technology sectors. Responses were collected during January and February 2026, with questions randomized to prevent order bias.
Lightrun, which is backed by $110 million in funding from Accel and Insight Partners and counts AT&T, Citi, Microsoft, Salesforce, and UnitedHealth Group among its enterprise clients, has a clear commercial interest in the problem the report describes: the company sells a runtime observability platform designed to give AI agents and human engineers real-time visibility into live code execution. Its AI SRE product uses a Model Context Protocol connection to generate live diagnostic evidence at the point of failure without requiring redeployment. That commercial interest does not diminish the survey’s findings, which align closely with independent research from Google DORA and the real-world evidence of the Amazon outages.
Taken together, they describe an industry confronting an uncomfortable paradox. AI has solved the slowest part of building software — writing the code — only to reveal that writing was never the hard part. The hard part was always knowing whether it works. And on that question, the engineers closest to the problem are not optimistic.
“If the live visibility gap is not closed, then teams are really just compounding instability through their adoption of AI,” Maimon said. “Organizations that don’t bridge this gap will find themselves stuck with long redeploy loops, to solve ever more complex challenges. They will lose their competitive speed to the very AI tools that were meant to provide it.”
The machines learned to write the code. Nobody taught them to watch it run.
Motorola’s mid-range flip foldable looks set to follow a familiar formula, with leaked render images of the Razr 70 showing a device that retains much of its predecessor’s design while introducing a new processor, a significant camera upgrade, and three fresh colour options.
The renders, published by Ytechb, show the Razr 70 in Pantone Sporting Green, Pantone Hematite, and Pantone Violet Ice, with the handset expected to launch as the Razr 2026 in the US market, continuing Motorola’s regional naming convention from the current generation.
The Razr 70 leak follows hot on the heels of separately surfaced renders of the Razr 70 Ultra, which point to two striking new material finishes, including a wood-grain option and an Alcantara variant in a new purple-blue tone.
Display and design
The cover display carries over from the Razr 60 at 3.63 inches with a resolution of 1,066 x 1,056 pixels, retaining the wide chin visible at the bottom edge that has characterised the range, while the inner foldable OLED panel measures 6.9 inches with a 2,640 x 1,080 pixel resolution.
Despite the design continuity, the camera configuration does change, with rumours pointing to the 13-megapixel ultra-wide lens being replaced by a 50-megapixel telephoto camera, a meaningful upgrade for a mid-range foldable that has previously lagged behind more premium flip phones on versatility.
Processor and storage
Under the hood, the Razr 70 is expected to arrive with a new eight-core chip clocked at up to 2.75GHz, supported by a choice of 8GB, 12GB, or 16GB of RAM and storage options spanning 256GB, 512GB, and 1TB.
Rounding out the spec sheet are a 32-megapixel front camera and a 4,500mAh battery, the latter a generous capacity for the flip foldable category, where compact chassis dimensions have historically limited battery size.
With the Razr 60 launching in late April 2025, the Razr 70 and Razr 70 Ultra look set to follow a similar schedule, suggesting an official reveal could arrive within the next few weeks.
We’ve put several Motorola phones through their paces over the years, and our broader Motorola mobile phone coverage is worth checking out for anyone looking to get a sense of where the brand stands heading into this next release.
The Samsung R95H Micro RGB TV the company had on display at CES 2026 was a sight to behold, its bright picture and rich color managing to punch through even under the bright lights of Samsung’s First Look exhibit at the Wynn Las Vegas.
It was a solid next step for the company’s new RGB LED display tech, which made its debut in late 2025 with the launch of a 115-inch model priced around $29,999. At 130 inches, the Samsung R95H shown at CES made its predecessor look small in comparison. It also raised the questions of whether RGB LED TVs would be made available in real-world screen sizes and, if so, when.
The answers to those questions are yes, and now. Samsung has announced the availability of its R95H Micro RGB TV lineup in 65-, 75-, and 85-inch screen sizes priced at $3,199.99, $4,499.99, and $6,499.99, respectively. As for the 130-inch model shown at CES, that one is scheduled to arrive later this year at a price that will likely make your head spin. Samsung also previously announced a 115-inch version of the R95H, though pricing and availability of that size are not yet available. For now, the 2025 model 115-inch R95F Micro RGB TV is carrying over into 2026.
In terms of new 2026 models, alongside the R95H series Micro RGB TVs, Samsung also announced the step-down R85H series Micro RGB TVs, which will be available in 55- to 85-inch screen sizes priced from $1,599.99 to $3,999.99.
The Samsung R95H Micro RGB TV features a Glare Free screen and has high brightness for daytime viewing.
Samsung R95H Micro RGB Features & Design
Samsung’s Micro RGB tech uses micro-sized red, green, and blue LEDs in place of the blue or white light modules found in typical mini-LED TVs like Samsung’s own Neo QLED models. The promise here is of greater color accuracy and 100% “full UHD color spectrum” coverage, along with more refined local dimming to eliminate backlight blooming.
Other features found in the new Samsung R95H series TVs include a Glare Free screen similar to the one found in the company’s 2025 flagship mini-LED and OLED models, Wide Viewing Angle, Ultimate Micro Dimming Pro, and Auto HDR Remastering Pro to upscale standard dynamic range programs to HDR. There’s also something new called Micro RGB HDR Pro, along with Real Depth Enhancer, a feature that debuted in the company’s 2025 models which analyzes pictures in real time to better define the foreground and background elements.
Samsung continues to go all in on AI features for its TVs, and the R95H series offers 4K AI Upscaling Pro, AI Motion Enhancer Pro, and Micro RGB AI Engine Pro. There’s also an Adaptive Picture feature that uses AI to optimize images based on program genre and also provides an AI Customization mode that can create a custom picture preset based on your response to an array of displayed images.
Samsung’s updated Tizen smart interface moves tabs from the screen’s left side to the top.
AI also gets top billing in Samsung’s updated Tizen Smart TV interface, which repositions tabs from the side to the top of the screen. The new layout is cleaner and more user-friendly, and it features a Vision AI Companion tab that lets you explore all manner of topics via Copilot or Perplexity using either the TV’s built-in far-field mic or the one located in the TV’s Solar Cell Bluetooth remote control. Other Tizen features include Generative Wallpaper for creating custom screensavers, and access to the subscription-based Samsung Art Store that was previously limited to the company’s The Frame TVs.
Samsung TVs have long been a top option for gaming, and the R95H series continues that tradition with 165Hz support across four HDMI 2.1 ports, FreeSync Premium Pro, and HDR10+ gaming. There’s also cloud-based gaming available on Samsung’s Gaming Hub, which features Xbox, NVIDIA GeForce Now, Luna, Blacknut, Antstream, Boosteroid, and more.
While the 130-inch R95H model Samsung showed at CES 2026 featured a “Timeless Frame” floor mount, the 65- to 85-inch models come with an Infinity Air stand that, combined with the four-side bezel-less screen, gives the display something of a floating effect. A 4.2.2-channel speaker array powered by 70W delivers Dolby Atmos audio, and there’s Object Tracking Sound+, along with a Q-Symphony feature that combines the TV’s speaker output with that of a compatible Samsung soundbar. Additionally, the R95H is Wireless One Connect Ready, giving you the option of a wireless 165Hz connection using Samsung’s optional Wireless One Connect Box.
The Solar Cell remote used to control the Samsung R95H.
Hands-on with the Samsung R95H Micro RGB TV
Samsung invited eCoustics to its New Jersey headquarters in early March to get hands-on experience with a 65-inch R95H, and I was provided with ample time to make a full set of measurements.
As stated above, Samsung claims “full UHD color spectrum coverage” for the R95H, which is another way of saying BT.2020 color space coverage. In the set’s default Filmmaker Mode, P3 color space coverage measured 98.8% and BT.2020 was 92%. That BT.2020 number is obviously lower than what Samsung cites for the TV, but as I learned in a demonstration put on by the company’s engineers at Samsung HQ, they based their specification on the TV’s Dynamic picture mode rather than the more accurate Filmmaker Mode.
By way of comparison, when I measured the Samsung QN90F, the company’s flagship mini-LED TV, P3 color space coverage measured 93.6% and BT.2020 was 76.5%, so the R95H represents a marked improvement in color reproduction.
The R95H’s REC.709 grayscale delta-E averaged out to 6 in Filmmaker Mode, which is a higher-than-average result. (A delta-E lower than 3 is generally considered imperceptible.) This variation would likely be mitigated by a full calibration.
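For readers unfamiliar with the metric, delta-E expresses the distance between a measured color and its reference, and an averaged grayscale result is computed as in the minimal Python sketch below. It uses the simple CIE delta-E 1976 formula (a straight Euclidean distance in CIELAB), whereas calibration suites typically use the stricter delta-E 2000; the readings shown are invented for illustration, not the R95H’s actual measurements.

```python
import math

def delta_e_76(measured_lab, reference_lab):
    """CIE delta-E 1976: Euclidean distance between two CIELAB colors."""
    return math.dist(measured_lab, reference_lab)

# Hypothetical grayscale sweep: (measured Lab, reference Lab) pairs at a
# few stimulus levels. The reference for neutral gray is a* = b* = 0.
readings = [
    ((20.4, 0.8, -1.2), (20.0, 0.0, 0.0)),
    ((40.9, 1.5, -2.4), (40.0, 0.0, 0.0)),
    ((61.2, 2.1, -3.0), (60.0, 0.0, 0.0)),
    ((81.5, 2.6, -3.8), (80.0, 0.0, 0.0)),
]

average = sum(delta_e_76(m, r) for m, r in readings) / len(readings)
print(round(average, 1))  # averages under ~3 are considered imperceptible
```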
The R95H’s peak HDR brightness in Filmmaker Mode measured on a 10% white window pattern was 1,541 nits, and it was 639 nits on a 100% (fullscreen) pattern. In the Standard picture mode, peak HDR brightness measured higher at 2,223 nits on a 10% window, and 654 nits fullscreen. The R95H’s Standard mode results exceed what I measured on the Samsung QN90F, though the QN90F’s peak brightness was higher in Filmmaker Mode.
In a nutshell, the new Samsung R95H Micro RGB TV offers wider color space coverage and higher brightness than last year’s top Samsung mini-LED TV, a model that is carrying over to 2026.
The R95H Micro RGB TV’s BT.2020 color gamut coverage exceeds that of top mini-LED and OLED models.
Alpha is a movie I’ve only seen specific clips from because, as an example of a 4,000 nits HDR transfer, it’s a good test for a display’s HDR tone mapping capability. (It involves prehistoric tribes, and there’s a wolf.) Watching a scene where the boy, Keda, and his wolf companion commune in front of the setting sun, there was a fine level of detail in the bright highlights, indicating that the TV’s Micro RGB HDR Pro feature was properly doing its job.
Two picture quality improvements promised by RGB LED tech are a reduction of backlight blooming artifacts and improved off-axis picture uniformity. A check of the white on black scrolling text that opens Blade Runner confirmed the R95H’s ability to deliver solid, halo-free performance, while the uniformity test pattern from the Spears & Munsil Ultra HD Benchmark showed that its picture could retain solid contrast and color saturation even when viewed from a far off-center seat.
Wrapping things up with the opening title sequence of Baby Driver, the R95H displayed only a limited level of motion judder as the titular character strolls along a city street. If that doesn’t sound like a big deal, I’ve seen the picture on some other TVs get seriously wobbly during this sequence, so the Samsung’s motion handling here was nothing short of impressive.
The Bottom Line
My take on the Samsung R95H Micro RGB TV after doing an initial test is that it provides an appreciable step up in picture quality over Samsung’s also-impressive flagship mini-LED TV, the QN90F. My limited time with the R95H meant I didn’t have an opportunity to do a deep dive into its many AI-related picture enhancements, and I also didn’t have a chance to evaluate its built-in sound. But as the first example of an RGB LED TV I’ve spent hands-on time with, I’m excited for this new category, which is finally creating serious picture quality competition for OLED TVs.