Home security gets a quiet boost when the details are clear regardless of time of day or lighting conditions. The newest Ring Indoor Cam Plus, priced at $35 (was $60), delivers on that promise with its Retinal 2K resolution, allowing you to see a misplaced key on the kitchen counter or a sleeping pet in the corner without having to squint at blurry corners.
Clarity like that goes well beyond simply capturing a great image. You can place one camera in the doorway to record the precise moment a delivery arrives, then zoom in to read the label on the box. You could put another in the living room to see whether the kids finished their homework or the poor old dog tripped over a plant again, and all of that detail appears in bright, vivid color during the day and holds up just as well once evening comes around.
The Low-Light Sight function dramatically improves nighttime monitoring. A hallway lit only by a distant nightlight still displays realistic colors on screen, while in complete darkness the camera delivers clear black-and-white detail that cuts right through the shadows. Homeowners no longer have to wonder whether a noise was the cat or something else, because the feed distinguishes between the two right away.
Having several cameras throughout the house provides a more complete picture of what’s going on. You can place one in the nursery and check on a sleeping infant from the sofa or from the store. Another near the back door lets you keep track of who is coming and going without constantly checking. The flexible mount lets you set the camera on a shelf, on the wall, or even on the ceiling, so you can put it anywhere you need it and move it somewhere else as your needs change.
Real-time alerts appear on your phone as soon as motion crosses a preset zone. You can configure those zones to ignore the hallway fan but still watch the area near the safe or medicine cabinet. Two-way audio is also useful because it delivers clear voice on both ends, letting you remind a teenager to lock up or simply say hello to a neighbor who has stopped by. There are simple privacy controls, too, both in the app and on the camera itself: slide the removable lens cover over to conceal the view and turn off the microphone when a room must remain off-limits. You can also use custom zones to black out sensitive areas, such as your home office desk or a bathroom doorway, so the camera only monitors what truly matters.
Getting everything up and running takes only a minute or two; all you need is a standard power outlet and the Ring app on your phone. Views load quickly, and the system scales as more cameras are added to cover new locations. People who start with one frequently add a second or third, because the convenience simply adds up: each new camera fills another gap without adding any hassle.
Last year, Samsung introduced its first Micro RGB TV, but that model was only available in a massive 115-inch size and had an equally enormous price tag of $27,000. For 2026, it unveiled two ranges — the R85H and R95H — with sizes between 55 and 100 inches, and prices starting at a much more reasonable $1,600.
Why are these called Micro RGB TVs? Firstly, they’re LCD TVs, like most of Samsung’s lineup, but the key difference here is that they use a different backlighting system. Where most LCD TVs use a specialized filter to generate colors, the premium R95 and more affordable R85 models use clusters of red, green and blue micro LEDs to either replace or augment the filters.
Samsung says the R95H hits 100% of the BT.2020 color space, a wider range than most TVs could ever hope to cover. Yet, having seen this TV, and others that make the same claim, I don’t think it’s all that important. At least, not right now.
The competing TVs I’ve seen hit this number have indeed produced a lot of colors, but accuracy has suffered in the process, even according to each company’s own test data. It’s more important that a TV reproduce the color that’s present in the content you’re watching than invent a whole new one. It’s worth adding that TVs typically hit these numbers in Vivid mode, which is notorious for being cartoonishly bright and colorful.
The Samsung R95H is the company’s high-end Micro RGB TV.
A much more compelling feature of these TVs, however, is their use of Samsung Glare Free technology, which is designed to virtually eliminate reflections from windows or overhead lights on the screen. I’ve seen it in person, and it’s one of the best antireflective systems out there.
The TVs will also be capable of displaying Samsung’s own answer to Dolby Vision 2, HDR10+ Advanced, though it remains to be seen whether content will actually support the format.
The TVs are designed for gaming, too, and have dedicated gaming modes. For smooth movement, they include Samsung’s own Motion Xcelerator 165Hz and Motion Xcelerator 144Hz tech, on the R95H and R85H, respectively.
The R95H is compatible with the Wireless One Connect box if you want to keep your sources and TV separate. Both TVs come with the new Slim Fit Wall Mount, which allows access to existing ports with a hinge at the top of the TV.
As with most of the TVs announced at CES, these models have a plethora of AI modes, from the image and audio processors to the onboard chatbots from either Microsoft Copilot or Perplexity. Two of the most notable modes are AI Soccer Mode Pro (which can make the World Cup look like a video game) and AI Sound Controller Pro, which can amplify or cut different parts of the soundtrack, such as sound effects or dialogue.
While a 130-inch R95H was announced at CES, the company has yet to detail any pricing or availability for that model.
At a briefing earlier this year, I got a look at a preproduction version of the R95H, and Samsung representatives told me that the OLED TVs were still the best choice for picture quality, while the Micro RGBs were better for brightness and color. With their antireflective coatings and gaming features, I can see that these TVs will be popular with console gamers, especially.
Samsung isn’t the only manufacturer announcing Micro RGB TVs for 2026; in fact, it’s getting harder to find one that isn’t. While I’m skeptical about the benefits of hitting 100% of BT.2020, it’s a factor I’ll test when I get my hands on compatible models.
Samsung’s first ‘affordable’ Micro RGB TV confirms the technology has a bright future
Unprecedented colour response
Uncompromising Filmmaker Mode
Exceptional backlighting
No Dolby Vision support
Slight motion blur
Expensive for an LCD TV
Key Features
Micro RGB screen
Replaces the traditional LED TV approach of shining white or blue light through colour filters with tiny, separate red, green and blue LEDs
Up to 8 HDMI inputs
You can buy an optional Wireless One Connect box for the set, which adds another four HDMI ports and sends picture and sound to the TV wirelessly
Tizen smart system with AI
Comprehensive collection of streaming apps and lots of AI support for content searching and learning your viewing habits
Introduction
Having dipped a (very big) toe into Micro RGB technology waters towards the end of 2025 with an ultra-expensive 115-inch model, Samsung has now followed that up with the much more affordable (though still premium) R95H TV series.
Does the technology still feel as exciting and cutting edge on smaller, more affordable screens? And is the huge colour gamut it’s capable of delivering really worth worrying about?
Price
The 75R95H costs £4299 in the UK, and $4499 in the US. The 65-inch model that’s also available from the range’s launch goes for £3399 and $3199.
This means Samsung is pitching the R95H range below – albeit only slightly – its flagship S99H OLED TVs. Though the closest screen size to the QE75R95H in the S99H range is two inches bigger.
While this shows that Samsung sees its QD OLED TVs as the absolute pinnacle of its TV performance, the still-premium pricing of the R95H series suggests Samsung believes Micro RGB is capable of doing some pretty special things.
Design
Slender sides and rear
Centrally mounted stand with floating effect
Anti-reflection screen and Art Store create an artwork effect
The R95H has no truck with the wide frame Samsung added to the S99H OLED series. In fact, both the R95H’s screen frame and rear are exceptionally slim by LCD TV standards.
This makes it a great wall-hanging option – though it actually ships with a desktop stand. This stand slots without screws into two grooves near the centre of the bottom edge, meaning the TV can be placed on even quite narrow bits of furniture. The neck of the stand wears a mirrored finish that creates the optical illusion that the screen is somehow just hovering above the base plate.
The TV carries well defined and extensive cable channelling on its rear panel to try and stop dangling cables from spoiling the 75R95H’s minimalist chic. Though actually, in a highly unusual move, it’s possible to connect four sources to the TV without any cabling if you add one of Samsung’s new, optional Wireless One Connect boxes to the R95H.
This lets you attach up to four HDMI sources to it, and then broadcasts their pictures and sound wirelessly to the TV from potentially metres away.
One last unusual design feature is the 75R95H’s combination of Samsung’s store of digital art screensavers and an extremely effective anti-reflection screen. Put these together and you can make the TV look like a painting when you’re not watching it.
Connectivity
Wireless One Connect box option
Four gaming-friendly HDMI 2.1 ports as standard
Wi-Fi, Bluetooth and Airplay 2 support
I’ve already obliquely covered the R95H’s main connection story: its support for the optional Wireless One Connect box. This warrants further attention, though, since as well as enabling cable-free connection of up to four external sources to the TV, it opens up the possibility of the QE75R95H taking in as many as eight HDMI sources at once.
The four HDMI ports built into the R95H’s bodywork and the four on the optional Wireless One Connect box are all fully HDMI 2.1 specified – something I’ll come back to in the Gaming section.
The Wireless One Connect hosts a couple of USB ports too, again doubling the number of those available.
There are also optical digital audio outputs on the TV and Wireless One Connect box, while the TV’s own ‘built in’ wireless capabilities include Wi-Fi, Bluetooth and AirPlay 2.
User Experience
Tizen OS smart system
Voice and Gesture control
Two remote controls
The Tizen OS that provides your main interface with the R95H’s smart features is pretty effective. The appearance of the home screen has been improved by shifting the usual roster of sub-menu links from down the side to along the top of the screen, and Samsung has also added a new AI home menu accessed via a direct button now included on the smart remote control.
This AI menu provides manual access to the third-party Copilot and Perplexity AI systems, as well as a generative AI image creation system that lets you create your own images from a few prompts.
The R95H’s extensive use of AI extends to its support for both the Bixby and Alexa voice recognition systems, and impressively sophisticated tools for coming up with relevant content recommendations based on your viewing habits. This can include the viewing habits of other members of your household, too, thanks to the TV supporting multiple individual user profiles.
Tizen carries a huge array of apps and streaming services, including the individual catch-up apps for the UK’s main terrestrial broadcasters, though there’s no support for the Freely ‘wrapper’ carried by some rival brands these days.
The R95H ships with two remote controls: one traditional, button-heavy affair, and a much more slender one with a stripped-back button count and a solar panel on its rear, meaning you’ll never have to change its batteries. This ‘smart’ remote also carries a built-in mic and Samsung’s new AI button, which the other remote doesn’t, so all in all I’m confident it’s the one most users will stick with.
The R95H can also be controlled to some extent via gestures if you’re wearing a Samsung Galaxy Watch, or you can add the TV to Samsung’s SmartThings app for iOS and Android devices and then control it from your phone via a ‘virtual remote’.
The sophistication of the R95H’s Tizen OS makes it a little intimidating initially, but after exploring it for a little while you start to appreciate its depths. Its biggest flaw, ultimately, is its desire to get you to accept adverts on the UI. You can opt out of these during the initial install, but if you do the basic layout of the UI remains unchanged, leaving areas where ads might have appeared often feeling like a fairly substantial waste of space.
Features
Micro RGB panel
Local dimming
Dedicated Micro RGB AI processor
The R95H is the second TV Samsung has released to use Micro RGB technology. This new tech, which is set to appear in 2026 from other brands too, under different names such as RGB LED, Mini RGB and True RGB, replaces conventional LCD TV lighting systems, which shine white or blue lights through colour filters, with dedicated red, green and blue LEDs.
This is an approach with the potential to greatly increase colour gamuts, colour volumes (colour plus brightness), power efficiency and general brightness.
The Micro RGB lights in the QE75R95H sit behind a VA-type panel, backed up by a potent local dimming system. In the QE75R95H’s case this local dimming system operates across a commanding 1,792 individually controlled LED clusters. On top of this, of course, there’s the extra dimming effect you can get from each red, green and blue LED.
Samsung has created a dedicated Micro RGB AI processing system for its new screen technology to, among other things, drive the backlighting system, deliver high-level upscaling of HD sources, and apply the huge colour gamut the screen can provide to real-world content.
The R95H supports the HDR10, HLG and HDR10+ HDR formats, and will eventually, following a firmware update later this year, support the new HDR10+ Advanced format. This format is designed to take on the recently announced Dolby Vision 2 with improvements to brightness handling, cloud gaming, motion and the approach the TV takes to different content genres. As ever with Samsung TVs, the R95H doesn’t support Dolby Vision in either its original or second-generation form.
Gaming
Up to 165Hz frame rate support
VRR support including AMD FreeSync
Game Hub source screen and dedicated Game menu screen
The R95H leaves no stone unturned on the gaming front. For starters, all four onboard HDMI ports and all four on the optional Wireless One Connect box support frame rates for gaming up to 165Hz. They also all support variable refresh rates across the full frame rate range, with the VRR support encompassing the AMD FreeSync Premium Pro format.
Auto low latency mode support enables the R95H to switch automatically to its Game mode, in which input lag drops to 10.4ms – slightly higher than the S99H OLED, but not by enough for even the most competitive gamer to notice.
Where lag might become an issue, though, is if you’ve connected your console or PC to one of the HDMI inputs on an optional Wireless One Connect Box. The wireless transmission process associated with these boxes adds just under 20ms extra lag time.
The R95H helpfully organises all of your game sources, be they connected by HDMI or streamed via the many cloud gaming services Samsung supports, onto a dedicated Game Hub home screen within the Tizen OS GUI, and also calls up a dedicated gaming menu if you press and hold the play button on the remote while playing a game.
This menu provides detailed information on the incoming gaming feed, and provides a host of cheats – sorry, gaming aids – such as mini map zooming, brightening dark areas so enemies are easier to spot, calling up an onscreen crosshair, and calling in different levels of motion smoothing for those (increasingly rare) occasions where you find yourself playing a low frame rate game.
As a tasty prelude to the video picture quality we’re going to cover in the next section, gaming on the Samsung R95H is mostly a fantastically fun but also seriously immersive experience. The huge colour vibrancy the Micro RGB screen can achieve (I measured almost 150% of the DCI-P3 colour spectrum), together with brightness peaks as high as around 2200 nits, results in colours that explode off the screen, making titles as varied as Crimson Desert, Forza Horizon and Rayman Legends look radiantly engaging to a degree they’ve seldom looked before.
HDR titles are handled well, with the screen doing a good job of optimising game HDR engines to its capabilities without the results looking clipped or unstable, and gaming feels responsive via the TV’s built-in HDMI ports.
My only gripes with gaming are that blooming around stand-out bright objects seems a little more noticeable if you’re sat off to the side of the screen than it does with video, and that fast pans and rapidly moving objects can look a touch soft compared with the S99H. Though they do look equally fluid.
Picture Quality
Remarkable colour range
High brightness
Excellent backlight controls
While we’ve become pretty accustomed now to TVs that push brightness far beyond the levels commonly used by content creators, doing the same thing for colour is for me much more noticeable – and, therefore, trickier to do convincingly.
Samsung’s Micro RGB AI processor, though, does a remarkably good job of it, especially considering it’s dealing with such a new technology.
Starting with just how aggressively the R95H leans into Micro RGB’s wider colour gamut capabilities, measurements taken using Portrait Displays’ Calman Ultimate software, G1 signal generator and C6 HDR5000 light meter reveal that the screen can deliver essentially 150% of the DCI-P3 colour range. An unprecedented figure that at least some of Samsung’s picture presets seek to venture into when showing today’s more constrained HDR images.
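To put figures like "150% of DCI-P3" in context, gamut-coverage percentages of this kind boil down to comparing the areas that colour triangles enclose on a chromaticity diagram. The sketch below is a simplified illustration in CIE 1931 xy space using the standard published primaries, not a reproduction of Calman's own method, which works from the display's measured primaries (and different chromaticity spaces yield different percentages):

```python
# Illustration only: relative gamut area of two colour standards,
# computed from their CIE 1931 xy primaries with the shoelace formula.

def triangle_area(primaries):
    """Shoelace area of a triangle given three (x, y) chromaticity points."""
    (x1, y1), (x2, y2), (x3, y3) = primaries
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

# Standard red, green and blue primaries (CIE 1931 xy coordinates)
DCI_P3 = [(0.680, 0.320), (0.265, 0.690), (0.150, 0.060)]
BT_2020 = [(0.708, 0.292), (0.170, 0.797), (0.131, 0.046)]

ratio = triangle_area(BT_2020) / triangle_area(DCI_P3)
print(f"BT.2020 area is {ratio:.0%} of DCI-P3 in xy space")  # → 139%
```

In other words, a screen whose measured gamut covers roughly 150% of the DCI-P3 area is operating around, and in places beyond, the edges of BT.2020 itself, which is why such numbers were until recently considered out of reach.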
The Dynamic preset really goes for it, and is worth checking out for the fullest evidence of the sort of spectacle Samsung’s TV can deliver. While this mode is surprisingly even-handed in how it expands colours across the spectrum, and how little noise it suffers with compared with some rival similar modes I’ve seen on some early Micro RGB/Mini RGB samples, it still looks forced sometimes, particularly when it comes to skin tones.
The Standard preset, while certainly not measuring accurately, is for the most part a joy to watch. I watched multiple favourite 4K Blu-ray test discs in this default mode (having turned off the interfering Eco and ambient sensor-related modes) and for most of the time was both dazzled by seeing such familiar titles looking like they’d been remastered in some new next-gen HDR format, and amazed at how well this ‘expansion’ of their native images had been achieved.
Especially when it came to avoiding such potentially distracting nasties as exaggerated colour noise, certain tones suddenly jumping out of the picture more than others, and saturations so extreme that the screen is no longer able to express the sort of subtle colour blends required to make objects feel three-dimensional and natural.
Just as important to the Standard mode’s appeal as its spectacular but surprisingly authentic-feeling colours is the prowess of the Samsung R95H’s backlight system. The more than 1,700 local dimming zones in the QE75R95H’s screen, together with the extra light control that comes from using separate red, green and blue LEDs for each lighting ‘unit’, create light control mechanics which, under the watchful eye of Samsung’s dedicated Micro RGB processor, deliver fantastically deep blacks by LCD TV standards as well as impressive stability and freedom from both general clouding and backlight blooming around stand-out bright objects.
What’s more, even when a little blooming can occur around extremely bright, colourful objects, unlike normal LED TVs the blooming actually adopts the chief colour tone of the ‘offending’ object. This makes it much less noticeable than the usual grey blooming effect, as your eye more often than not perceives the bloom as natural colour radiance.
Making the capabilities of the backlight controls even more impressive is the fact that the R95H can deliver its deep black colours and clean local dimming effects despite it also being extremely bright.
Calman Ultimate tests reveal brightness peaks of nearly 2300 nits on 10% and more than 650 nits on full-screen 100% light windows, which contributes to an outstanding HDR sensation in terms of both baseline brightness and the intensity of classic HDR highlights like sunlight reflecting on glass or metal, or bright streetlights against a night sky.
This impressive brightness doesn’t remotely start to overwhelm the screen’s huge colour capabilities. In fact, far from any tones looking faded or pallid in bright areas, the screen delivers huge levels of colour volume that complete the sense that RGB TVs are in a world of their own where colour is concerned.
Exciting though all of this is, many home cinema fans will still want the QE75R95H to be able to handle movies in a much more ‘as the director intended’ fashion as well, at least for serious film nights.
As with its S99H OLED flagships, Samsung has again managed to cater for this need much more successfully than we might have expected given the extravagant capabilities of the R95H’s screen. Right out of the box, the Filmmaker Mode achieves average DeltaE 2000 measurements below the 3.0 level – beyond which deviations from industry standards can become visible to the human eye – in every Calman Ultimate HDR test I tried bar one. And even in that one test it only strays beyond three by half a mark.
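For readers unfamiliar with the metric: DeltaE expresses the distance between a measured colour and its reference in a perceptual colour space, with values under roughly 3.0 generally taken as invisible to the eye. The full CIEDE2000 formula that Calman reports is elaborate, but the much older and simpler CIE76 version sketched below (with made-up Lab values, purely for illustration) conveys the idea:

```python
import math

def delta_e_76(lab1, lab2):
    """CIE76 colour difference: Euclidean distance in CIELAB space.
    (Calman reports the more elaborate CIEDE2000 metric, but both express
    the same notion of perceptual distance between two colours.)"""
    return math.dist(lab1, lab2)

# Hypothetical test patch: the reference colour vs what a meter might read
reference = (50.0, 10.0, -5.0)   # target L*, a*, b*
measured = (52.0, 9.0, -6.0)     # what the screen actually showed

error = delta_e_76(reference, measured)
print(f"DeltaE76 = {error:.2f}")  # → 2.45, under 3.0, so likely invisible
```

A preset whose average errors sit below 3.0 across a test suite, as the R95H’s Filmmaker Mode does, is therefore reproducing colours the eye cannot reliably distinguish from the mastering reference.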
Filmmaker Mode images inevitably look much less bright and much less vibrant than Standard mode, simply because sticking to today’s common mastering standards means using much less of what the screen can do. But this is as it should be – and the demands of switching into accurate settings don’t cause subjective viewing issues such as pale colour tones, heavily reduced backlight controls or poor dynamic peaking of bright light sources.
The R95H delivers SDR content with just as much measurable and subjectively enjoyable precision, while the Standard mode – and a pretty effective SDR-to-HDR conversion option – again manages to drastically open SDR up in terms of brightness and colour without the results looking gaudy.
Inevitably the R95H isn’t perfect. The mostly excellent Standard mode can sometimes push skin tones, especially in dark scenes, so hard that they look too ripe. A slight pinkish tone can sometimes appear over bright shots in Standard mode too, and very occasionally subtle colour differences, especially over misty backgrounds, can become too overt.
Blooming around bright objects, while disguised by its colour component versus regular LED TVs, is present in a way it isn’t with OLED, and becomes slightly more noticeable if you’re watching the TV from an angle. The Standard mode can sometimes exhibit obvious baseline brightness ‘jumps’ during hard cuts between dark and light shots.
Motion looks slightly softer than it does on Samsung’s regular premium LCD TVs, as well as looking too smooth and noisy if you leave the Standard preset’s default Picture Clarity settings in play.
Finally, while for the majority of the time I think most viewers in typical home viewing conditions will love the way the R95H’s anti-glare filter suppresses basically all reflections, it can flatten black levels a little in really extreme bright ambient light.
Upscaling
Well controlled processing side effects
Impressive, 4K-like sharpness and detail
Good noise suppression
The potent, heavily AI-influenced processors in Samsung’s premium TVs over the past few years have consistently delivered some of the best upscaling around – a handy benefit, I suspect, of Samsung having stuck with the 8K TV cause for longer than any other brand.
This trend continues with the Micro RGB AI processor, happily, as the R95H turns HD and even SD into convincing 4K look-a-like territory when it comes to detailing and clarity, without exaggerating source noise or grain, or generating distracting side effects such as colour shift or haloing around hard object edges.
The fact that the upscaling holds up well on a screen as big as 75-inches underlines how effective Samsung’s processing is, too.
Sound Quality
Object Tracking Sound works well
Good power and soundstage creation
Can struggle with sustained deep bass
Most of the time, the R95H sounds good. Despite its slender bodywork, it manages to produce impressive volume levels capable of filling pretty substantial rooms, especially as the sound is projected well beyond the TV’s physical extremities, creating a soundstage larger than even the 75-inch screen.
This large soundstage is exceptionally coherent, too, thanks to the ear-catching efforts of Samsung’s Object Tracking Sound system. Combining clever audio processing with a multi-speaker set up that places speakers all around the screen, OTS does a remarkably accurate job of placing key effects in the right place.
This applies to everything from dialogue to gunfire and engine noise from moving vehicles, and the number of objects that the OTS engine is capable of handling in any given scene is remarkable.
There’s a lovely crisp, clean but also rounded quality to the QE75R95H’s detailing, and shrill trebles sound controlled, even and free of warbling or buzzing distortions.
The R95H doesn’t quite hold it together at the low frequency end of the sound spectrum, though. Short, impactful bass sounds hit fine, but where there are longer, really deep and pressurised rumbles to handle the pair of dedicated low frequency speakers can descend into various distortions, including buzzing noise, crackling, and a general coarsening of the low frequency sound as the speakers try to push beyond what they’re really capable of achieving.
Should you buy it?
It delivers colours like you’ve never seen before
With a measured ability to cover nearly 150% of the DCI-P3 colour spectrum and nearly 95% of the most extreme BT.2020 colour spectrum, and equipped with presets that actually venture into those extremes, the 75R95H can deliver colours of genuinely remarkable intensity.
Content needs to catch up
While the 75R95H delivers an unprecedented LCD colour response, there’s no real content out there that can fully exploit such wide colour. Though Samsung’s processing does a very good job in some picture presets of mapping current pictures to the TV’s capabilities.
Final Thoughts
With the QE75R95H Samsung has not only proven that Micro RGB and similar technology is relevant even in a world where content doesn’t yet exist that could fully unlock its capabilities, it’s delivered a TV that also breaks new ground with its LCD backlight control and AI features.
It also does things in the colour department that even Samsung’s S99H OLED can’t. That’s not to say you should necessarily buy it over the S99H, though: there are areas, inevitably, where the pixel-level light control of OLED remains unmatched.
But the fact that the 75R95H even stands as a credible alternative to a TV as brilliant as the S99H is an outstanding achievement for such a new technology.
How We Test
The R95H was tested over a period of 10 days in both a dark test room and a regular living room environment.
It was fed a wide variety of content, including console games, 4K Blu-rays, streams of various resolutions and HDR formats from all of the main streaming platforms, as well as broadcast tuner footage.
All of this content was watched on the 75R95H in both daylight and dark conditions, and we explored all of the TV’s many picture setting options to find the best set ups for both regular living room environments and blacked out home cinemas.
Finally, the Samsung 75R95H was tested for both SDR and HDR playback in multiple presets using Portrait Displays’ Calman Ultimate software, G1 signal generator and C6 HDR5000 colorimeter.
Tested in dark and bright room settings
Tested with real-world content
Benchmarked with Portrait Displays’ Calman Ultimate software, G1 signal generator and C6 HDR5000 colorimeter
Gaming input lag was measured with a Leo Bodnar signal generator
FAQs
What HDR formats does the Samsung R95H support?
The 75R95H supports HDR10, HLG, and HDR10+ right away, and the new HDR10+ Advanced system will be added via firmware update later in the year.
What panel technology does the Samsung R95H use?
The QE75R95H uses Samsung Display’s second generation Micro RGB display, applied to a VA panel with more than 1700 local dimming zones.
Test Data
Samsung QE75R95H
Input lag (ms)
10.4 ms
Peak brightness (nits) 5%
2190 nits
Peak brightness (nits) 2%
2000 nits
Peak brightness (nits) 100%
654 nits
Set up TV (timed)
360 Seconds
Full Specs
Samsung QE75R95H Review
UK RRP
£4299
USA RRP
$4499
Manufacturer
Samsung
Screen Size
74.5 inches
Size (Dimensions)
1658.8 x 349.1 x 1019.2 MM
Size (Dimensions without stand)
946.2 x 1658.8 x 29.8 MM
Weight
30.1 KG
Operating System
Tizen
Release Date
2026
Resolution
3840 x 2160
HDR
Yes
Types of HDR
HDR10, HLG, HDR10+, HDR10+ Advanced
Refresh Rate TVs
48 – 165 Hz
Ports
Four HDMI 2.1, two USB, Ethernet, RF input, optical digital audio output; (optional wireless One Connect box) – four HDMI 2.1 ports, two USBs, optical audio port, RF and satellite tuner inputs
Lasers are cool and all, but they can be somewhat difficult to control at times. This is especially true when you have hundreds, thousands, or millions of lasers to steer. Fortunately, the MITRE Corporation might have created exactly what’s needed to accomplish this feat. While you might expect this to be done in a similar fashion to a DLP micromirror array, these researchers have created something a bit different.
A ski-slope-like MEMS array is used to contort light as needed. Each slope can be controlled so precisely that entire images can be displayed by the arrays. This is done using a “piezo-opto-mechanical photonic integrated circuit,” or POMPIC. Each slope is constructed from SiO2, Al, AlN, and Si3N4, all deposited in such a way as to allow the specific bending needed for control.
While quantum computing hasn’t hit these slopes yet, that doesn’t mean you can’t look into the other puzzles needed for the quantum revolution. Quantum computing is something that people have been trying for a long time to get right. Big claims come from all the big players. Take Microsoft, for example, with claims of using Majorana zero mode anyons for topological quantum computing.
In 1964, science fiction writer Arthur C. Clarke predicted that computers would overtake human evolution. “Present-day electronic brains are complete morons, but this will not be true in another generation,” he told the BBC. “They will start to think, and eventually, they will completely out-think their makers.”
Daniel Roher opens his new documentary The AI Doc: Or How I Became An Apocaloptimist (2026) with this cheerful prophecy. And in the hundred-some minutes that follow, he tries to make sense of a technology that, by his own admission, he does not understand — and a world that is rapidly being changed by it. Explaining that he conceives of AI as a “magic box floating in space,” he enlists the help of experts to provide him with a crash course in what, exactly, AI is.
Roher’s real concern, however, isn’t so much about the workings of AI — though some of his subjects do attempt to explain them to him — but whether it might displace us, as Clarke’s prediction suggests it will.
While making the film, Roher learns that his wife Caroline is pregnant with their first child. He tracks his wife’s pregnancy and the birth of his son in parallel with the advent of AI. It’s a smart choice that builds on a fear all parents share: What sort of world are we making for our children? And behind that question is another, vibrating in anxious silence: What happens after our offspring replace us? This twinned existential angst drives his efforts to hear from the doomers, the techno-optimists, and the in-between “apocaloptimists” whose ranks he ultimately joins.
The AI Doc, as its sweeping title suggests, wants to shape and lead the narrative around AI. It’s certainly set up to do that — Roher is fresh off an Oscar win for his documentary Navalny, and the film opened in nearly 800 theaters, which counts as a wide release for a nonfiction title. The final product is indicative of the ways that public attitudes around AI are in massive flux. Roher hopes to reach people of my grandmother’s generation who conflate AI with smartphones and spellcheck, as well as people who don’t seem to care whether a video was AI-generated.
But I think that this documentary has come too late to steer the conversation, something the film itself acknowledges. For all its transformative potential, AI isn’t actually unique among emerging technologies yet — it has not been cataclysmic or ushered in a golden age of prosperity — but Roher and many of those he interviews tend to treat it as a radical break with all that has come before. As a result, they tend to fixate on the binary extremes of doom or salvation. It’s an approach that reinforces our own helplessness in the face of AI-driven change, while also muddying our understanding of what we might yet be able to do as we seek to adapt, mitigate harm, and shape the world that AI could otherwise truly start remaking.
Roher, contemplating his child’s future, opts to hear the bad news first. Tristan Harris, the cofounder of the Center for Humane Technology, doesn’t mince words: “I know people who work on AI risk who don’t expect their children to make it to high school.”
Many of the film’s other interviewees are similarly gloomy. Geoffrey Hinton, the “godfather of AI,” for example, argues that as AI becomes smarter, it will become better at manipulating humanity. But no one is more pessimistic than Eliezer Yudkowsky, the well-known AI doomer and co-author of the controversial book If Anyone Builds It, Everyone Dies. As the title suggests, Yudkowsky believes that superintelligent AI would wipe out humanity — a position that he stands by and lays out for Roher.
Turning his back on these storm clouds — and taking the advice of his wife, Caroline, who tells him that he needs to find hope for the future — Roher tunes into the chorus of AI optimists. They tell him, variously, that there are more potential benefits than downsides to AI; that technology has made the world better in every way; that this will be the tool that helps us solve all our greatest problems. Not to mention: AI will bring the best health care on the planet to the poorest people on Earth, extend our healthspan by decades, and enable us to live in a postscarcity utopia free of drudgery. Oh, and: We will become an interplanetary species, all thanks to AI.
These promises initially reassure Roher, perhaps because he seems easily led by whomever he’s spoken to most recently. It is Harris who ultimately convinces him that we can’t separate the promise of AI from the peril it presents. The conclusions that result will be obvious to anyone who’s thought about these issues for more than a moment or two: If AI automates work, for example, how will people make a living?
It doesn’t help that many of the most invested players reflect on these questions superficially, if at all. OpenAI CEO Sam Altman tells Roher that he’s worried about how authoritarian governments will use AI — a claim that is followed in the film by a cut to images of Altman posing with authoritarian leaders. Other tech CEOs fall back on PR pleasantries in response to the filmmaker’s questions, and Roher too often goes easy on them, never diving deeper when they admit that even they aren’t confident that everything will go well. That these are the leaders of AI companies racing against each other to make the technology more and more advanced does little to inspire confidence.
(Some of the techno-pessimistic people interviewed for the documentary have expressed their strong displeasure with the final result.)
“Why can’t we just stop?” Roher asks these tech CEOs. He’s told that a moratorium is a pipe dream: Many groups around the world are building advanced AI, all with different motivations. Legislation lags far behind the rate of technological progress. Even if we could pass laws in the US and EU that would stop or slow things down, says Anthropic CEO Dario Amodei, we’d have to convince the Chinese government to follow suit.
If we don’t create it, the thinking goes, our enemies will. It’s best to get ahead of them.
This is, of course, the logic of nuclear deterrence: If we don’t mitigate the risk of ending the world through mutually assured destruction, there’s nothing stopping someone else from pressing the button first.
An apocalypse in every generation
The atomic comparison is apt, if only because Roher sees the stakes in similarly stark terms. “Will my son live in a utopia, or will we go extinct in 10 years?” he wonders aloud. It’s a question that’s central to the film. But he never really sits with the more likely scenario that AI will neither lead to human extinction nor end all disease and drudgery. Every generation faces the specter of its own annihilation — and yet the ends of days keep accumulating, no matter how close the doomsday clock gets to apocalypse.
The point, then, isn’t that AI won’t be bad for us, but that by framing the question in strictly utopian or dystopian terms, we miss the messy reality that lies between hell on earth and heaven in the stars. Although The AI Doc tries to chart an “apocaloptimist” course between two extremes, it doesn’t grasp the real stakes. AI doesn’t really create new risks as such — it’s a force multiplier for existing ones like the threat of nuclear warfare and the development and use of biological weapons. The chief existential risks of AI are human-made and human-driven. And that means, as Caroline says in the film’s ending narration, “We get to decide how this goes.” She’s right, but her husband never seems to understand how she’s right.
Like too many Big Issue Documentaries, Roher’s film is heavy on problems and light on solutions. It does offer some, calling for international cooperation, transparency, legal liabilities for companies if something goes wrong, testing before release, and adaptive rules to match the speed of progress. But just as this is a strictly introductory course in AI — one that will probably irritate those who’ve already moved on to AI 102 — these recommendations are only a starting point. For Roher, they offer reason to be hopeful. For the rest of us, they’re just the beginning of an opportunity to meaningfully steer the course of our future.
They started by mixing sour plum vodka at home; now they produce 1,800 bottles every month
At every party, people would pull Alexander Cheong aside and ask the same question: can I get a bottle of that?
The “that” was his homemade Sour Plum Vodka. Sick of alcohol brands that felt serious and disconnected from the actual experience of drinking, Alexander started mixing his own at social gatherings, rooted in the Southeast Asian flavours he grew up with.
“Alcohol is a social lubricant,” he said. “It’s about making the worries and stress clear up. We want to embody not taking anything too seriously.”
That philosophy became the foundation of Clumzy—a spirits brand he built with two friends, Kenneth Tan and Daniel Lim, on one simple idea: bottling Southeast Asian flavours with a bit of a kick. Five years later, with just three flavours and no outside investment, the brand has crossed S$1 million in cumulative revenue.
We spoke to the founders to find out how a kitchen experiment became a million-dollar business.
Starting a spirits brand from scratch
Clumzy is known for its signature Sour Plum Vodka./ Image Credit: Clumzy
The origins of Clumzy trace back to 34-year-old Alexander’s natural flair for mixology. Always the life of the party, he rarely enjoyed what he called “cold, hard, and serious” alcohol.
At social gatherings, to save money, he would mix his own cocktails, making his own flavourings from whatever he could imagine. Eventually, Alexander became known for his experimental jungle juices and punches, but one creation in particular stood out: a Sour Plum Vodka that became an instant crowd favourite at every party.
When COVID-19 hit and social gatherings were restricted, demand for Alexander’s creations didn’t disappear—instead, it intensified. Friends and even friends of friends wouldn’t stop asking him to bottle his drinks for family occasions and casual nights in.
So, Alexander roped in his friend Kenneth to launch Clumzy in early 2021, taking orders via Instagram DMs only. Without any real push, word of mouth spread faster than they could keep up with.
Clumzy’s early “medicine bottle” look (left) vs its revamped packaging (right) after Daniel came on board./ Image Credit: Clumzy
But there was one clear problem: branding.
The product started with a simple label and packaging that wasn’t particularly eye-catching. About a month in, they brought in Daniel, an experienced marketer and close friend, as the third co-founder to help turn the product into a proper brand.
Daniel’s entry laid down Clumzy’s branding foundations. He redesigned the labels and packaging—joking that the original bottle looked like a “medicine bottle”—along with new photography, the website, and the e-commerce platform, shaping the brand identity Clumzy is known for today.
From there, while Daniel handled brand and marketing, Alexander and Kenneth focused on the business, trade relationships, and operations.
“If it didn’t work out, we had nothing to fall back on”
Clumzy started out with one sole offering: the Sour Plum Vodka. The product is marketed for its versatility—being great for shots, served on the rocks or as a cocktail. Each bottle retails at S$58 with an alcohol by volume of under 20%.
In the beginning, the spirit was made by the trio in Kenneth’s kitchen, which was stocked with giant Cambro containers and bottles. Supplies were stored in a small rented warehouse at S$200 per month, meaning they had to physically lug stock back and forth to Kenneth’s home for every production run.
At the time, they were making around 180 bottles a month—a capacity they quickly outgrew as demand surged beyond what a home setup could sustain.
Within just a few months, Clumzy became a legitimate side hustle generating real extra income on top of their day jobs. That created a genuine decision point for the co-founders: stay comfortable with some pocket money, or risk everything to grow it into a real business?
Operating from a home kitchen came with clear limitations. They could only sell directly to consumers, with B2B opportunities completely off the table without a licensed commercial facility.
But upgrading wasn’t a small leap. Setting up a proper production space required tens of thousands of dollars—essentially their entire annual revenue at the time.
If it didn’t work out, we had nothing to fall back on.
Daniel Lim, co-founder of Clumzy
Still, they chose to take the risk. The trio secured a liquor licence and set up a dedicated production facility, moving from manual preparation to automated mixing and bottling processes.
They hosted pop-up booths almost every weekend
From the start, the founders have been prudent with their spending, purchasing only what was necessary. They bootstrapped the venture with “a couple of thousand dollars”; the business generated revenue, paid for itself, and broke even within a year.
(Left): Alexander and Daniel at Loky’s and the Crew in 2025; (Right): Clumzy also dispenses slushies of its offerings at some events./ Image Credit: Clumzy
Early on, they also committed to building Clumzy through direct consumer engagement.
They signed up for pop-ups and hosted booths “almost every weekend,” becoming familiar faces at curated events like ArtBox 2023, Boutiques Singapore 2024, and Christmas Atelier 2025, amongst smaller pop-ups at bars and cafes since 2021.
“Demand has been very strong,” Daniel noted. “We sold out on the first day of our first Boutiques Singapore back in 2022, which lasted two weeks. We made in one day what we normally would make over three to four days at other events.”
As Clumzy’s presence at weekend pop-ups grew, restaurants and eateries began taking notice, often after seeing strong customer demand at events or encountering the brand through word of mouth and social media.
At the same time, another key growth driver was how the founders expanded the ways customers could experience the product. They diversified into events, offering on-tap services for weddings and house parties, and experimented with more flexible formats, including slushie versions of their drinks at events.
“Diversifying into slushies made sense for us to create an option to make drinks that are fun for people who may not want to drink alcohol that tastes like alcohol,” Daniel shared.
He added that 65% of their customers are women, reflecting how Clumzy has resonated with a demographic that traditional spirits brands have historically underserved.
Hitting the S$1M milestone
(Left): Clumzy’s booth at Boutiques Singapore 2025; (Right): Clumzy stocks at various partners like The Liquor Store./ Image Credit: Clumzy
Today, Clumzy produces around 1,800 bottles of spirits each month and has expanded to a team of eight. Its offerings are stocked at 11 retail partners, including Pat’s Music Pub and The Liquor Shop.
The business model currently sees revenue split roughly into 70% B2C and 30% B2B. After four years of operation, Clumzy crossed S$1 million in cumulative revenue in 2025, a significant milestone for a bootstrapped local spirits brand.
Clumzy launched with its Sour Plum Vodka, before the Chrysanthemum Lychee Gin and finally its Coconut Pandan Rum./ Image Credit: Clumzy
The brand has also since expanded its product line to three spirits, with customer feedback playing a crucial role in shaping its development.
While the Sour Plum Vodka developed a devoted following, it became clear that the polarising flavour was an acquired taste—some loved it instantly, while others needed time to warm up to it. This insight led to the development of the brand’s second flavour: the Chrysanthemum Lychee Gin, designed for those who find the sour plum variety too intense.
After persistent calls from customers for a third offering, the trio released Coconut Pandan Rum recently, a creation they felt was important to Clumzy’s identity as a business inspired by Southeast Asian flavours.
Protecting what they’ve built
The trio believe strongly in what they’ve built and are looking to grow it organically./ Image Credit: Clumzy
Around the time Clumzy turned one, several parties approached the founders with offers: investment for equity stakes, rebranding under a larger umbrella, or outright acquisition.
At the time, all three founders still had day jobs. These offers initially felt genuinely attractive—a chance to take the brand one step up without carrying all the risk alone.
But Alex had the foresight to see what they’d be giving up: the offers undervalued everything they stood for as a small, founder-led brand. So they turned them down.
“In hindsight, some of those offers really shortchanged us,” Alex said. “I’m glad we trusted our gut in those decisions, and I’m glad we saw it all pay off eventually through our hard work.”
Clumzy at CellarFiesta 2025 and Artbox 2023./ Image Credit: Clumzy
Today, Clumzy is run by Alexander and Daniel, with Kenneth having taken a backseat in 2024.
The next chapter for the brand involves significantly expanded distribution. After months of negotiations with NTUC FairPrice, the brand’s spirits are set to hit its supermarket shelves soon, marking its entry into mainstream retail.
Encouraged by strong demand in Singapore, the founders are also eyeing international expansion. They plan to bring Clumzy to Thailand and Australia, driven by interest from Singaporeans abroad who have discovered the brand and want access in their new home countries.
That said, the team has also observed a broader shift in drinking culture, with more consumers becoming intentional about their alcohol consumption. While nightlife has seen a decline, overall alcohol consumption remains relatively steady, as more people drink at home or during daytime social occasions.
Daniel noted that it’s all about a healthier relationship with socialising.
“People aren’t drinking less. They’re bored with sameness. Alcohol offerings have felt largely unchanged for decades. What people are genuinely hungry for is novelty and a sense of connection to something familiar,” Daniel explained.
That sense of novelty and familiarity is what Clumzy aims to deliver.
The Bremen startup’s platform deploys teams of AI agents that autonomously execute engineering tasks across more than 75 existing tools, without replacing any of them. Revaia led the Series B; Capgemini joined through its ISAI Cap Venture vehicle. All Series A investors returned.
Synera, the Bremen-based agentic AI platform for industrial engineering, has raised $40 million (approximately €35 million) in a Series B round led by Revaia, with participation from Capgemini through ISAI Cap Venture.
All of the company’s existing Series A investors returned, including UVC Partners with a substantial commitment from its growth fund, BMW iVentures, Cherry Ventures, Venture Stars, and Spark Capital.
The round is intended to accelerate Synera’s expansion in the US and internationally, building on existing deployments at NASA, BMW, Airbus, Volvo Trucks, and Hyundai.
Synera was founded in 2018 in Bremen by Dr. Moritz Maier, Sebastian Möller-Lafore, and Daniel Siegel, a team that had been working together since 2006, initially under the name ELISE (Evolutionary Lightweight Structure Engineering), before rebranding in 2022 to reflect the company’s expanded scope.
The platform connects more than 75 existing engineering tools, including software from Altair, Autodesk, Hexagon, PTC, and Siemens, into a unified orchestration layer, allowing AI agents to execute complex engineering tasks autonomously across design, simulation, optimisation, costing, and reporting without requiring companies to replace their existing infrastructure.
The platform is deployed on-premises, keeping engineering intellectual property and sensitive data within customers’ own environments. Synera has also established a US presence in Boston, Massachusetts.
The company describes its approach as deploying a virtual engineering team: agents that don’t merely assist but autonomously execute, running iterative simulations, generating reports, responding to RFQs, and progressing through approval workflows without human intervention at each step.
The platform has been internally described as “JARVIS for engineers.” Quantified outcomes cited by Synera and independently validated by Frost & Sullivan in a 2025 analysis include a 95% reduction in finite element simulation time at engineering consultancy EDAG, and a 30% weight reduction in 3D-printed robot gripper designs at BMW’s Additive Manufacturing Campus.
NASA has deployed multiple Synera agents to transform requirements into validated part designs, completing hundreds of design iterations in an hour.
The investment context is a structural mismatch between AI investment and manufacturing deployment. Gartner’s 2025 CIO survey found that 86% of manufacturing respondents plan to increase generative AI investment in 2026 and 97% expect to have deployed it by 2028, yet only 41% of AI and generative AI prototypes currently reach production, according to Gartner’s 2024 AI Mandates for the Enterprise survey.
Synera’s proposition is that the gap exists because most AI tools treat engineering as a chat interface problem rather than an infrastructure problem: the agents need to connect to the actual tools where the work happens, not sit alongside them.
The company has also been recognised by Frost & Sullivan with its 2025 Global AI Agents for Engineering Transformational Innovation Leadership award.
The Series A, raised in September 2022, was $14.8 million, led by Spark Capital with BMW iVentures, Cherry Ventures, UVC Partners, and Venture Stars participating. The Series B brings total funding to approximately $58 million.
Capgemini’s entry through ISAI Cap Venture is strategically notable: Capgemini is one of the world’s largest IT services firms and a significant engineering services provider to the automotive and aerospace sectors Synera targets, making it both an investor and a potential channel partner.
Others in the seed round include Crucible Capital, Gallery Ventures, and Uber CEO Dara Khosrowshahi. The company has raised $23 million to date.
Pillar, founded in 2023, automates hedging for such businesses. Hedging is when a company places a trade designed to offset or cancel out losses on other trades it has already placed. Geopolitics has not been kind to the commodities market, which has seen much volatility in the past year.
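To make the offsetting mechanics concrete, here is a minimal, hypothetical sketch — not Pillar’s actual model; the scenario, prices, and function names are all invented for illustration — of how a futures position can cancel out spot-price risk for a metals trader:

```python
# A hypothetical recycler has agreed to deliver 100 tonnes of copper in three
# months at a price fixed today, but must buy the metal at whatever the spot
# price is at delivery time.

contract_price = 9_500   # $/tonne, locked in with the buyer today
tonnes = 100

def unhedged_pnl(spot_at_delivery):
    # Profit shrinks (or goes negative) if spot rises above the contract price.
    return (contract_price - spot_at_delivery) * tonnes

def hedged_pnl(spot_at_delivery, futures_entry=9_500):
    # Buying futures at today's price gains when spot rises,
    # offsetting the loss on the physical side.
    futures_gain = (spot_at_delivery - futures_entry) * tonnes
    return unhedged_pnl(spot_at_delivery) + futures_gain

print(unhedged_pnl(10_200))  # -70000: a spot spike wipes out the margin
print(hedged_pnl(10_200))    # 0: the futures gain cancels the loss
```

If spot falls instead, the futures leg loses exactly what the physical side gains, so the company forgoes the windfall but locks in its margin either way — which is the point of a hedge.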
Harsha Ramesh, the company’s co-founder and CEO (founded alongside Chinmay Deshpande, the company’s CTO), said the company uses AI to ingest and parse data from client contracts, cash flows, inventories, ERP software, spreadsheets, and even WhatsApp messages to “continuously analyze exposure across commodities, FX, and freight.”
It can then build and manage a hedge portfolio for its clients, and adjust positions automatically based on “market conditions, volatility, and the client’s risk tolerance,” Ramesh continued. The platform executes trades and continuously monitors risk and exposure, turning hedging from a “static, periodic decision to a continuous, autonomous system,” Ramesh said.
Pillar’s clients include Shibuya Sakura Industries, a trading firm that buys and sells commodities like metals; the recyclable materials company Sigma Recycling; and United Metals Solution Group, which also recycles and trades metals.
Ramesh was once a macro trader, managing large derivative trading books and working with some of the largest companies in the world as they sought to hedge foreign exchange rates and interest rate exposure, he said. “I also spent time at a medium-sized physical business in import-export,” he recalled.
“What stood out was that sophisticated institutions had access to tools, infrastructure, and talent, while the actual producers, importers, and manufacturers driving global trade had little to no access to this,” he said. “Risk management was treated as a luxury, despite being essential.”
Pillar hopes to offer sophisticated, institutional-grade tools to small and medium-sized enterprises. “Our goal is to make hedging as accessible and ubiquitous as payments or accounting software,” he said.
Image Credits: Harsha Ramesh and Chinmay Deshpande.
Others in this business include the legacy desks at big banks and the commodity risk platforms like Topaz and RadarRadar.
Ramesh said humans are still in the loop in some ways at Pillar, handling “approvals, oversight, and strategic decisions.” Humans also help with more “complex situations” — like large transactions, where a human team will mix their judgment with the machine’s execution.
Even though China has gotten very close to its first-ever manned moon landing, NASA remains the only space agency to land people on the moon as of mid-2026. One of those NASA astronauts, Charles M. Duke Jr., left a very special memento behind to mark his landing: A photograph of his beloved family.
Along with Thomas K. Mattingly II and John W. Young, Duke Jr. was part of the Apollo 16 mission, which departed Earth on April 16, 1972. Apollo 16’s goal was the region of Descartes, a crater in the moon’s highlands that previous missions had not visited. Their objectives were to collect rock samples, which would allow scientists to learn more about the moon’s composition, as well as to place instruments and conduct experiments to obtain more detailed readings of solar winds and other forces on the moon’s surface.
Before leaving the moon, Duke Jr. left a small scrap of cloth on its surface marked “64-C,” the designation of the class he’d graduated with as a test pilot, and a commemorative coin marking the 25th anniversary of the United States Air Force’s founding. He didn’t just honor his military family, though; he also left something to commemorate the family waiting for him at home, placing a photograph of himself with his two young sons, Charles and Tom, and his wife Dotty. On the back, according to Air & Space Forces Magazine, was a simple message: “This is the family of Astronaut Duke from Planet Earth. Landed on the Moon, April 1972.”
How Astronaut Duke’s family joined him on the perilous mission
Charles M. Duke Jr.’s photograph of himself with his family was taken to the moon, it seems, for the same reason dads do most things: to score cool points with his children. Speaking to Business Insider in 2015, Duke Jr. recalled asking his kids, “Would y’all like to go to the Moon with me?” to get them interested in the mission. Taking the photograph was his way of allowing them to do just that.
It was for the best, perhaps, that his sons didn’t physically join him and John Young on the Moon’s surface. This is the sort of perilous journey on which so much hinges on luck and timing. Indeed, Apollo 16 came within a hair’s breadth of having to cancel the landing entirely. Speaking to Fox Carolina News in April 2026, Duke Jr explained: “[A] serious problem happened about an hour before we landed on the Moon. [Thomas K] Mattingly reports a problem with the main engine … in the Command Module, which was your ride home.”
This happened on the far side of the moon and could have aborted the landing. Thankfully, Mission Control found a workaround that brought the engine back to life and saved the mission. As for the photograph, it remains there, over half a century later. Though Duke wrapped the photo in plastic, it’s unclear how well it held up to solar radiation in the decades between the Apollo 16 landing and NASA’s Artemis II lunar mission, which took place in early April 2026.
Despite what it sounds like, a concrete calculator is not a mathematical device made of cement. A concrete calculator is actually a digital (or mental) tool for estimating how much concrete a construction or landscaping project will need. Because concrete is typically sold by volume (most often in cubic yards), you should figure out how much you need before you start work on your project. Order too much, and you’ll be overpaying for a ton of excess material spinning around in the cement truck. Don’t order enough, and you’ll have to put the project on pause until you can get another delivery. Not the end of the world, by any means, but still a major inconvenience either way.
There are plenty of online concrete calculators you can use to make sure neither scenario becomes your reality on the job. That way, you get a precise estimate based on your project’s specific dimensions without having to spitball it. Just take your basic project dimensions (length, width, and depth), and the calculator converts those figures into cubic volume. Whether you’re pouring driveways, patios, foundations, or slabs, the calculator ensures you always know your total cost and the materials required. Do the simple math, grab your DIY concrete tools, and get to work.
How a concrete calculator works
Concrete calculators use a pretty straightforward formula: length times width times depth equals volume. For most rectangular areas, just measure these three dimensions, multiply them, and you’re good to go. For circular pours, measure the diameter instead and compute pi times the radius squared times the depth. For irregular shapes, it’s best to divide the total area into smaller sections and do separate calculations. Add it all up at the end, get the grand total, and start working on your construction and concrete jobs.
Once the measurements are entered into the concrete calculator, it will likely report the results in cubic feet. From there, you can convert to cubic yards by dividing by 27, since a cubic yard is 3 x 3 x 3 feet. Plenty of concrete calculators do that step for you, but it still helps to know. Don’t forget to account for real-world variables as well: adding an extra 5% to 10% to the final estimate helps cover any potential spillage, uneven surfaces, bad mixtures, or even slight miscalculations.
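The arithmetic above can be sketched in a few lines of Python. This is a minimal illustration, not any particular online calculator; the function names and the 10% waste default are my own choices:

```python
import math

CUBIC_FEET_PER_CUBIC_YARD = 27  # a cubic yard is 3 ft x 3 ft x 3 ft

def slab_cubic_yards(length_ft, width_ft, depth_in, waste=0.10):
    """Rectangular slab: length x width x depth, padded for waste."""
    depth_ft = depth_in / 12  # slab depth is usually quoted in inches
    cubic_feet = length_ft * width_ft * depth_ft
    return cubic_feet / CUBIC_FEET_PER_CUBIC_YARD * (1 + waste)

def circle_cubic_yards(diameter_ft, depth_in, waste=0.10):
    """Circular pour: pi x radius^2 x depth, padded for waste."""
    depth_ft = depth_in / 12
    cubic_feet = math.pi * (diameter_ft / 2) ** 2 * depth_ft
    return cubic_feet / CUBIC_FEET_PER_CUBIC_YARD * (1 + waste)

# A 20 ft x 10 ft driveway slab, 4 inches deep, with a 10% buffer:
print(round(slab_cubic_yards(20, 10, 4), 2))  # prints 2.72
```

For an irregular pour, you would call these per section and sum the results, which mirrors the divide-and-add approach described above.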
The software industry is racing to write code with artificial intelligence. It is struggling, badly, to make sure that code holds up once it ships.
A survey of 200 senior site-reliability and DevOps leaders at large enterprises across the United States, United Kingdom, and European Union paints a stark picture of the hidden costs embedded in the AI coding boom. According to Lightrun’s 2026 State of AI-Powered Engineering Report, shared exclusively with VentureBeat ahead of its public release, 43% of AI-generated code changes require manual debugging in production environments even after passing quality assurance and staging tests. Not a single respondent said their organization could verify an AI-suggested fix with just one redeploy cycle; 88% reported needing two to three cycles, while 11% required four to six.
The findings land at a moment when AI-generated code is proliferating across global enterprises at a breathtaking pace. Both Microsoft CEO Satya Nadella and Google CEO Sundar Pichai have claimed that around a quarter of their companies’ code is now AI-generated. The AIOps market — the ecosystem of platforms and services designed to manage and monitor these AI-driven operations — stands at $18.95 billion in 2026 and is projected to reach $37.79 billion by 2031.
Yet the report suggests the infrastructure meant to catch AI-generated mistakes is badly lagging behind AI’s capacity to produce them.
“The 0% figure signals that engineering is hitting a trust wall with AI adoption,” said Or Maimon, Lightrun’s chief business officer, referring to the survey’s finding that zero percent of engineering leaders described themselves as “very confident” that AI-generated code will behave correctly once deployed. “While the industry’s emphasis on increased productivity has made AI a necessity, we are seeing a direct negative impact. As AI-generated code enters the system, it doesn’t just increase volume; it slows down the entire deployment pipeline.”
Amazon’s March outages showed what happens when AI-generated code ships without safeguards
The dangers are no longer theoretical. In early March 2026, Amazon suffered a series of high-profile outages that underscored exactly the kind of failure pattern the Lightrun survey describes. On March 2, Amazon.com experienced a disruption lasting nearly six hours, resulting in 120,000 lost orders and 1.6 million website errors. Three days later, on March 5, a more severe outage hit the storefront — lasting six hours and causing a 99% drop in U.S. order volume, with approximately 6.3 million lost orders. Both incidents were traced to AI-assisted code changes deployed to production without proper approval.
The fallout was swift. Amazon launched a 90-day code safety reset across 335 critical systems, and AI-assisted code changes must now be approved by senior engineers before they are deployed.
Maimon pointed directly to the Amazon episodes. “This uncertainty isn’t based on a hypothesis,” he said. “We just need to look back to the start of March, when Amazon.com in North America went down due to an AI-assisted change being implemented without established safeguards.”
The Amazon incidents illustrate the central tension the Lightrun report quantifies in survey data: AI tools can produce code at unprecedented speed, but the systems designed to validate, monitor, and trust that code in live environments have not kept pace. Google’s own 2025 DORA report corroborates this dynamic, finding that AI adoption correlates with an increase in code instability, and that 30% of developers report little or no trust in AI-generated code.
Maimon cited that research directly: “Google’s 2025 DORA report found that AI adoption correlates with an almost 10% increase in code instability. Our validation processes were built for the scale of human engineering, but today, engineers have become auditors for massive volumes of unfamiliar code.”
Developers are losing two days a week to debugging AI-generated code they didn’t write
One of the report’s most striking findings is the scale of human capital being consumed by AI-related verification work. Developers now spend an average of 38% of their work week — roughly two full days — on debugging, verification, and environment-specific troubleshooting, according to the survey. For 88% of the companies polled, this “reliability tax” consumes between 26% and 50% of their developers’ weekly capacity.
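The arithmetic behind the "two full days" figure is easy to verify. As a quick sanity check, here is a toy calculation, assuming a standard 40-hour, five-day work week (the report itself does not specify hours):

```python
# Toy sanity check on the "reliability tax" figure (illustrative only;
# the 38% comes from the survey summary above, the 40-hour week is assumed).
HOURS_PER_WEEK = 40       # assumed standard work week
DEBUG_FRACTION = 0.38     # 38% of the week spent debugging/verifying

debug_hours = HOURS_PER_WEEK * DEBUG_FRACTION  # ≈ 15.2 hours
debug_days = debug_hours / 8                   # ≈ 1.9 eight-hour days

print(f"{debug_hours:.1f} hours/week ≈ {debug_days:.1f} days")
```

At 38% of a 40-hour week, that is roughly 1.9 eight-hour days, which matches the report's "two full days" characterization.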
This is not the productivity dividend that enterprise leaders expected when they invested in AI coding assistants. Instead, the engineering bottleneck has simply migrated. Code gets written faster, but it takes far longer to confirm that it works.
“In some senses, AI has made the debugging problem worse,” Maimon said. “The volume of change is overwhelming human validation, while the generated code itself frequently does not behave as expected when deployed in Production. AI coding agents cannot see how their code behaves in running environments.”
The redeploy problem compounds the time drain. Every surveyed organization requires multiple deployment cycles to verify a single AI-suggested fix — and according to Google’s 2025 DORA report, a single redeploy cycle takes between one day and one week on average. In regulated industries such as healthcare and finance, deployment windows are often narrow, governed by mandated code freezes and strict change-management protocols. Requiring three or more cycles to validate a single AI fix can push resolution timelines from days to weeks.
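Combining the two data points above gives a rough best-case and worst-case verification timeline. This is an illustrative back-of-envelope calculation, not a figure from the report:

```python
# Rough timeline estimate for verifying one AI-suggested fix (illustrative;
# cycle counts from the Lightrun survey, cycle duration from Google's DORA figure).
CYCLES = (2, 3)          # most respondents reported two to three redeploy cycles
DAYS_PER_CYCLE = (1, 7)  # one redeploy cycle takes roughly a day to a week

best = CYCLES[0] * DAYS_PER_CYCLE[0]   # 2 cycles x 1 day  = 2 days
worst = CYCLES[1] * DAYS_PER_CYCLE[1]  # 3 cycles x 7 days = 21 days

print(f"Verifying one fix takes roughly {best} to {worst} days")
```

Even the best case consumes two days per fix; the worst case stretches past three weeks, which is how "days to weeks" resolution timelines arise.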
Maimon rejected the idea that these multiple cycles represent prudent engineering discipline. “This is not discipline, but an expensive bottleneck and a symptom of the fact that AI-generated fixes are often unreliable,” he said. “If we can move from three cycles to one, we reclaim a massive portion of that 38% lost engineering capacity.”
AI monitoring tools can’t see what’s happening inside running applications — and that’s the real problem
If the productivity drain is the most visible cost, the Lightrun report argues the deeper structural problem is what it calls “the runtime visibility gap” — the inability of AI tools and existing monitoring systems to observe what is actually happening inside running applications.
Sixty percent of the survey’s respondents identified a lack of visibility into live system behavior as the primary bottleneck in resolving production incidents. In 44% of cases where AI SRE or application performance monitoring tools attempted to investigate production issues, they failed because the necessary execution-level data — variable states, memory usage, request flow — had never been captured in the first place.
The report paints a picture of AI tools operating essentially blind in the environments that matter most. Ninety-seven percent of engineering leaders said their AI SRE agents operate without significant visibility into what is actually happening in production. Approximately half of all companies (49%) reported their AI agents have only limited visibility into live execution states. Only 1% reported extensive visibility, and not a single respondent claimed full visibility.
This is the gap that turns a minor software bug into a costly outage. When an AI-suggested fix fails in production — as 43% of them do — engineers cannot rely on their AI tools to diagnose the problem, because those tools cannot observe the code’s real-time behavior. Instead, teams fall back on what the report calls “tribal knowledge”: the institutional memory of senior engineers who have seen similar problems before and can intuit the root cause from experience rather than data. The survey found that 54% of resolutions to high-severity incidents rely on tribal knowledge rather than diagnostic evidence from AI SREs or APMs.
In finance, 74% of engineering teams trust human intuition over AI diagnostics during serious incidents
The trust deficit plays out with particular intensity in the finance sector. In an industry where a single application error can cascade into millions of dollars in losses per minute, the survey found that 74% of financial-services engineering teams rely on tribal knowledge over automated diagnostic data during serious incidents — far higher than the 44% figure in the technology sector.
“Finance is a heavily regulated, high-stakes environment where a single application error can cost millions of dollars per minute,” Maimon said. “The data shows that these teams simply do not trust AI not to make a dangerous mistake in their Production environments. This is a rational response to tool failure.”
The distrust extends beyond finance. Perhaps the most telling data point in the entire report is that not a single organization surveyed — across any industry — has moved its AI SRE tools into actual production workflows. Ninety percent remain in experimental or pilot mode. The remaining 10% evaluated AI SRE tools and chose not to adopt them at all. This represents an extraordinary gap between market enthusiasm and operational reality: enterprises are spending aggressively on AI for IT operations, but the tools they are buying remain quarantined from the environments where they would deliver the most value.
Maimon described this as one of the report’s most significant revelations. “Leaders are eager to adopt these new AI tools, but they don’t trust AI to touch live environments,” he said. “The lack of trust is shown in the data; 98% have lower trust in AI operating in production than in coding assistants.”
The observability industry built for human-speed engineering is falling short in the age of AI
The findings raise pointed questions about the current generation of observability tools from major vendors like Datadog, Dynatrace, and Splunk. Seventy-seven percent of the engineering leaders surveyed reported low or no confidence that their current observability stack provides enough information to support autonomous root cause analysis or automated incident remediation.
Maimon did not shy away from naming the structural problem. “Major vendors often build ‘closed-garden’ ecosystems where their AI SREs can only reason over data collected by their own proprietary agents,” he said. “In a modern enterprise, teams typically have a multi-tool stack to provide full coverage. By forcing a team into a single-vendor silo, these tools create an uncomfortable dependency and a strategic liability: if the vendor’s data coverage is missing a specific layer, the AI is effectively blind to the root cause.”
The second issue, Maimon argued, is that current observability-backed AI SRE solutions offer only partial visibility — defined by what engineers thought to log at the time of deployment. Because failures rarely follow predefined paths, autonomous root cause analysis using only these tools will frequently miss the key diagnostic evidence. “To move toward true autonomous remediation,” he said, “the industry must shift toward AI SRE without vendor lock-in; AI SREs must be an active participant that can connect across the entire stack and interrogate live code to capture the ground truth of a failure as it happens.”
When asked what it would take to trust AI SREs, the survey’s respondents coalesced unanimously around live runtime visibility. Fifty-eight percent said they need the ability to provide “evidence traces” of variables at the point of failure, and 42% cited the ability to verify a suggested fix before it actually deploys. No respondents selected the ability to ingest multiple log sources or provide better natural language explanations — suggesting that engineering leaders do not want AI that talks better, but AI that can see better.
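The "evidence trace" idea — a snapshot of variable values at the point of failure — can be illustrated with a minimal sketch. This is a generic Python toy using only the language's built-in exception traceback, not Lightrun's implementation, and the `faulty_handler` function is a hypothetical example:

```python
def evidence_trace(exc: BaseException) -> list[dict]:
    """Walk an exception's traceback and snapshot the local variables in
    each frame -- a toy version of the 'evidence trace' concept."""
    trace = []
    tb = exc.__traceback__
    while tb is not None:
        frame = tb.tb_frame
        trace.append({
            "function": frame.f_code.co_name,
            "line": tb.tb_lineno,
            # repr() the locals so the snapshot survives after frames are freed
            "locals": {k: repr(v) for k, v in frame.f_locals.items()},
        })
        tb = tb.tb_next
    return trace

def faulty_handler(order_total, discount):
    # Hypothetical bug: division fails when order_total is zero
    return discount / order_total

try:
    faulty_handler(order_total=0, discount=5)
except ZeroDivisionError as e:
    for frame in evidence_trace(e):
        print(frame["function"], frame["line"], frame["locals"])
```

The point of the sketch is what the trace contains: the failing function's name and the exact argument values (`order_total=0`) that triggered the failure — diagnostic evidence that ordinary logs miss unless someone thought to log those variables in advance.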
The question is no longer whether to use AI for coding — it’s whether anyone can trust what it produces
The survey was administered by Global Surveyz Research, an independent firm, and drew responses from directors, vice presidents, and C-level executives in SRE and DevOps roles at enterprises with 1,500 or more employees across the finance, technology, and information technology sectors. Responses were collected during January and February 2026, with questions randomized to prevent order bias.
Lightrun, which is backed by $110 million in funding from Accel and Insight Partners and counts AT&T, Citi, Microsoft, Salesforce, and UnitedHealth Group among its enterprise clients, has a clear commercial interest in the problem the report describes: the company sells a runtime observability platform designed to give AI agents and human engineers real-time visibility into live code execution. Its AI SRE product uses a Model Context Protocol connection to generate live diagnostic evidence at the point of failure without requiring redeployment. That commercial interest does not diminish the survey’s findings, which align closely with independent research from Google DORA and the real-world evidence of the Amazon outages.
Taken together, the survey findings, the DORA data, and the Amazon outages describe an industry confronting an uncomfortable paradox. AI has solved the slowest part of building software — writing the code — only to reveal that writing was never the hard part. The hard part was always knowing whether it works. And on that question, the engineers closest to the problem are not optimistic.
“If the live visibility gap is not closed, then teams are really just compounding instability through their adoption of AI,” Maimon said. “Organizations that don’t bridge this gap will find themselves stuck with long redeploy loops, to solve ever more complex challenges. They will lose their competitive speed to the very AI tools that were meant to provide it.”
The machines learned to write the code. Nobody taught them to watch it run.