
Tech

Microsoft is officially killing its Outlook Lite app next month


Microsoft is shutting down Outlook Lite on May 26, the company confirmed to TechCrunch on Monday. Launched in 2022, Outlook Lite is a lightweight version of the regular Outlook app, designed for Android phones with limited storage and regions with slower internet connections. 

The app had already been scheduled for retirement — Microsoft announced last year that the app would be removed from the Google Play Store in October 2025. Now the company has confirmed that the app will lose functionality for existing users next month.

The news was first reported by Neowin.

“To continue enjoying a secure and feature-rich email experience, we recommend switching to Outlook Mobile,” Microsoft says on an Outlook Lite support page.


Outlook Lite users will be able to access their existing email, calendar items, and attachments by signing into Outlook Mobile. Users will also be directed to the Google Play Store to download the standard Outlook app.



Tech

Financial risk management platform Pillar raises $20M seed in round led by a16z


Pillar, a platform that helps commodity-driven businesses (such as metals traders, food producers, and airlines) manage financial risk, announced Tuesday a $20 million seed round led by Andreessen Horowitz.

Others in the seed round include Crucible Capital, Gallery Ventures, and Uber CEO Dara Khosrowshahi. The company has raised $23 million to date.

Pillar, founded in 2023, automates hedging processes for such businesses. Hedging means placing a trade designed to offset or cancel out losses from a company’s other price exposures. Geopolitics has not been kind to the commodities market, which has seen considerable volatility in the past year.
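To make the mechanics concrete, here is a toy illustration of the offset a hedge provides, with invented numbers; it is a sketch of the general concept, not of Pillar’s actual models or trades.

```python
# Toy hedging example with invented numbers -- a sketch of the concept,
# not Pillar's model. A manufacturer must buy 100 tonnes of copper in
# three months and fears prices rising, so it buys copper futures today.

tonnes_needed = 100
price_today = 9_000        # USD per tonne when the hedge is placed
price_at_purchase = 9_600  # USD per tonne when the copper is actually bought

# Unhedged: the price rise makes the physical purchase more expensive.
extra_physical_cost = (price_at_purchase - price_today) * tonnes_needed

# Hedged: futures bought at today's price gain as the market rises,
# and that gain cancels out the extra physical cost.
futures_gain = (price_at_purchase - price_today) * tonnes_needed

print(f"Extra cost unhedged: ${extra_physical_cost:,}")                # $60,000
print(f"Futures gain:        ${futures_gain:,}")                       # $60,000
print(f"Net exposure hedged: ${extra_physical_cost - futures_gain:,}") # $0
```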

Harsha Ramesh, the company’s co-founder and CEO, who founded Pillar alongside CTO Chinmay Deshpande, said the company uses AI to ingest and parse data from client contracts, cash flows, inventories, ERP software, spreadsheets, and even WhatsApp messages to “continuously analyze exposure across commodities, FX, and freight.”


It can then build and manage a hedge portfolio for its clients, and adjust positions automatically based on “market conditions, volatility, and the client’s risk tolerance,” Ramesh continued. The platform executes trades and continuously monitors risk and exposure, turning hedging from a “static, periodic decision to a continuous, autonomous system,” Ramesh said.

Pillar’s clients include Shibuya Sakura Industries, a trading firm that buys and sells commodities like metals; the recyclable materials company Sigma Recycling; and United Metals Solution Group, which also recycles and trades metals. 

Ramesh was once a macro trader, managing large derivative trading books and working with some of the largest companies in the world as they sought to hedge foreign exchange rates and interest rate exposure, he said. “I also spent time at a medium-sized physical business in import-export,” he recalled. 


“What stood out was that sophisticated institutions had access to tools, infrastructure, and talent, while the actual producers, importers, and manufacturers driving global trade had little to no access to this,” he said. “Risk management was treated as a luxury, despite being essential.” 


Pillar hopes to offer sophisticated, institutional-grade tools to small and medium-sized enterprises. “Our goal is to make hedging as accessible and ubiquitous as payments or accounting software,” he said.

Image Credits: Harsha Ramesh and Chinmay Deshpande.

Others in this business include the legacy desks at big banks and commodity risk platforms like Topaz and RadarRadar.

Ramesh said humans are still in the loop in some ways at Pillar, handling “approvals, oversight, and strategic decisions.” Humans also help with more “complex situations” — like large transactions, where a human team will mix their judgment with the machine’s execution. 


Tech

This Heartwarming Family Photo Has Lived On The Moon For Over 50 Years

Even though China has gotten very close to its first-ever manned moon landing, NASA remains the only space agency to have landed people on the moon as of mid-2026. One of those NASA astronauts, Charles M. Duke Jr., left a very special memento behind to mark his landing: a photograph of his beloved family.

Along with Thomas K. Mattingly II and John W. Young, Duke Jr. was part of the Apollo 16 mission, which departed Earth on April 16, 1972. Apollo 16’s destination was the Descartes region of the moon’s highlands, an area previous missions had not visited. Their objectives were to collect rock samples, which would allow scientists to learn more about the moon’s composition, and to place instruments and conduct experiments for more detailed readings of the solar wind and other forces acting on the moon’s surface.

Before leaving the moon, Duke Jr. left a small scrap of cloth on its surface marked “64-C,” the designation of the class with which he’d graduated as a test pilot, and a commemorative coin marking the 25th anniversary of the United States Air Force’s founding. He didn’t just honor his military family, though; he also left something to commemorate the family waiting for him at home, placing a photograph of himself with his two young sons, Charles and Tom, and his wife Dotty. On the back, according to Air & Space Forces Magazine, was a simple message: “This is the family of Astronaut Duke from Planet Earth. Landed on the Moon, April 1972.”


How Astronaut Duke’s family joined him on the perilous mission

Charles M. Duke Jr.’s photograph of himself with his family was taken to the moon, it seems, largely for the same reason dads do most things: to score cool points with his children. Speaking to Business Insider in 2015, Duke Jr. recalled asking his kids, “Would y’all like to go to the Moon with me?” to get them interested in the mission. Taking the photograph was his way of allowing them to do just that.


It was for the best, perhaps, that his sons didn’t physically join him and John Young on the Moon’s surface. This was the sort of perilous journey in which so much hinges on luck and timing. Indeed, Apollo 16 came within a hair’s breadth of having to cancel the landing entirely. Speaking to Fox Carolina News in April 2026, Duke Jr. explained: “[A] serious problem happened about an hour before we landed on the Moon. [Thomas K] Mattingly reports a problem with the main engine … in the Command Module, which was your ride home.”

This happened on the far side of the moon and could have aborted the landing. Thankfully, Mission Control found a workaround that brought the engine back to life and saved the mission. As for the photograph, it remains there, over half a century later. Though Duke wrapped the photo in plastic, it’s unclear how well it held up to solar radiation in the decades between the Apollo 16 landing and NASA’s Artemis II lunar mission, which took place in early April 2026.


Tech

What Is A Concrete Calculator And How Does It Work?

Despite what it sounds like, a concrete calculator is not a mathematical device made of cement. A concrete calculator is actually a digital (or mental) tool for estimating how much concrete a construction or landscaping project will need. Because concrete is typically sold by volume (most often in cubic yards), you should figure out how much you need before you start work on your project. Order too much, and you’ll be overpaying for a ton of excess material spinning around in the cement truck. Don’t order enough, and you’ll have to put the project on pause until you can get another delivery. Not the end of the world, by any means, but a major inconvenience either way.

There are plenty of online concrete calculators you can use to make sure neither scenario becomes your reality on the job. That way, you get a precise estimate based on your project’s specific dimensions without having to spitball it. Just take your basic project dimensions (length, width, and depth), and the calculator converts those figures into cubic volume. Whether you’re pouring driveways, patios, foundations, or slabs, the calculator ensures you always know your total cost and the materials required. Do the simple math, grab your DIY concrete tools, and get to work.


How a concrete calculator works

Concrete calculators use a pretty straightforward mathematical formula: length times width times depth, which gives you volume. For most rectangular areas, just measure these three dimensions, multiply them, and you’re good to go. For circular pours, measure the diameter instead; the volume is pi times the square of the radius (half the diameter), times the depth. For more irregular shapes, it’s best to divide the total area into smaller sections and do separate calculations. Add it all up in the end, get the grand total, and start working on your construction and concrete jobs.

Once the measurements are entered into the concrete calculator, it will probably give you results in cubic feet. From there, you can convert them into cubic yards by dividing by 27. Plenty of concrete calculators do that step for you, but it still helps to know. Don’t forget to account for real-world variables as well. For example, adding an extra 5% to 10% to the final estimate could help cover any potential spillage, uneven surfaces, bad mixtures, or slight miscalculations.
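Here is a minimal sketch of that arithmetic in Python, assuming a hypothetical 10% waste factor and slab depths given in inches, the way most pours are specified:

```python
import math

CUBIC_FEET_PER_CUBIC_YARD = 27  # 3 ft x 3 ft x 3 ft

def rectangular_slab_yards(length_ft, width_ft, depth_in, waste=0.10):
    """Concrete for a rectangular slab, in cubic yards, with a waste factor
    covering spillage, uneven surfaces, and slight miscalculations."""
    volume_ft3 = length_ft * width_ft * (depth_in / 12)
    return volume_ft3 * (1 + waste) / CUBIC_FEET_PER_CUBIC_YARD

def circular_slab_yards(diameter_ft, depth_in, waste=0.10):
    """Concrete for a circular pour: pi times radius squared times depth."""
    radius_ft = diameter_ft / 2
    volume_ft3 = math.pi * radius_ft**2 * (depth_in / 12)
    return volume_ft3 * (1 + waste) / CUBIC_FEET_PER_CUBIC_YARD

# A 20 ft x 10 ft driveway poured 4 inches deep needs about 2.7 cubic yards.
print(f"{rectangular_slab_yards(20, 10, 4):.2f} cubic yards")
```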


Tech

43% of AI-generated code changes need debugging in production, survey finds


The software industry is racing to write code with artificial intelligence. It is struggling, badly, to make sure that code holds up once it ships.

A survey of 200 senior site-reliability and DevOps leaders at large enterprises across the United States, United Kingdom, and European Union paints a stark picture of the hidden costs embedded in the AI coding boom. According to Lightrun’s 2026 State of AI-Powered Engineering Report, shared exclusively with VentureBeat ahead of its public release, 43% of AI-generated code changes require manual debugging in production environments even after passing quality assurance and staging tests. Not a single respondent said their organization could verify an AI-suggested fix with just one redeploy cycle; 88% reported needing two to three cycles, while 11% required four to six.

The findings land at a moment when AI-generated code is proliferating across global enterprises at a breathtaking pace. Both Microsoft CEO Satya Nadella and Google CEO Sundar Pichai have claimed that around a quarter of their companies’ code is now AI-generated. The AIOps market — the ecosystem of platforms and services designed to manage and monitor these AI-driven operations — stands at $18.95 billion in 2026 and is projected to reach $37.79 billion by 2031.

Yet the report suggests the infrastructure meant to catch AI-generated mistakes is badly lagging behind AI’s capacity to produce them.


“The 0% figure signals that engineering is hitting a trust wall with AI adoption,” said Or Maimon, Lightrun’s chief business officer, referring to the survey’s finding that zero percent of engineering leaders described themselves as “very confident” that AI-generated code will behave correctly once deployed. “While the industry’s emphasis on increased productivity has made AI a necessity, we are seeing a direct negative impact. As AI-generated code enters the system, it doesn’t just increase volume; it slows down the entire deployment pipeline.”

Amazon’s March outages showed what happens when AI-generated code ships without safeguards

The dangers are no longer theoretical. In early March 2026, Amazon suffered a series of high-profile outages that underscored exactly the kind of failure pattern the Lightrun survey describes. On March 2, Amazon.com experienced a disruption lasting nearly six hours, resulting in 120,000 lost orders and 1.6 million website errors. Three days later, on March 5, a more severe outage hit the storefront — lasting six hours and causing a 99% drop in U.S. order volume, with approximately 6.3 million lost orders. Both incidents were traced to AI-assisted code changes deployed to production without proper approval.

The fallout was swift. Amazon launched a 90-day code safety reset across 335 critical systems, and AI-assisted code changes must now be approved by senior engineers before they are deployed.

Maimon pointed directly to the Amazon episodes. “This uncertainty isn’t based on a hypothesis,” he said. “We just need to look back to the start of March, when Amazon.com in North America went down due to an AI-assisted change being implemented without established safeguards.”


The Amazon incidents illustrate the central tension the Lightrun report quantifies in survey data: AI tools can produce code at unprecedented speed, but the systems designed to validate, monitor, and trust that code in live environments have not kept pace. Google’s own 2025 DORA report corroborates this dynamic, finding that AI adoption correlates with an increase in code instability, and that 30% of developers report little or no trust in AI-generated code.

Maimon cited that research directly: “Google’s 2025 DORA report found that AI adoption correlates with an almost 10% increase in code instability. Our validation processes were built for the scale of human engineering, but today, engineers have become auditors for massive volumes of unfamiliar code.”

Developers are losing two days a week to debugging AI-generated code they didn’t write

One of the report’s most striking findings is the scale of human capital being consumed by AI-related verification work. Developers now spend an average of 38% of their work week — roughly two full days — on debugging, verification, and environment-specific troubleshooting, according to the survey. For 88% of the companies polled, this “reliability tax” consumes between 26% and 50% of their developers’ weekly capacity.

This is not the productivity dividend that enterprise leaders expected when they invested in AI coding assistants. Instead, the engineering bottleneck has simply migrated. Code gets written faster, but it takes far longer to confirm that it works.


“In some senses, AI has made the debugging problem worse,” Maimon said. “The volume of change is overwhelming human validation, while the generated code itself frequently does not behave as expected when deployed in Production. AI coding agents cannot see how their code behaves in running environments.”

The redeploy problem compounds the time drain. Every surveyed organization requires multiple deployment cycles to verify a single AI-suggested fix — and according to Google’s 2025 DORA report, a single redeploy cycle takes a day to one week on average. In regulated industries such as healthcare and finance, deployment windows are often narrow, governed by mandated code freezes and strict change-management protocols. Requiring three or more cycles to validate a single AI fix can push resolution timelines from days to weeks.

Maimon rejected the idea that these multiple cycles represent prudent engineering discipline. “This is not discipline, but an expensive bottleneck and a symptom of the fact that AI-generated fixes are often unreliable,” he said. “If we can move from three cycles to one, we reclaim a massive portion of that 38% lost engineering capacity.”

AI monitoring tools can’t see what’s happening inside running applications — and that’s the real problem

If the productivity drain is the most visible cost, the Lightrun report argues the deeper structural problem is what it calls “the runtime visibility gap” — the inability of AI tools and existing monitoring systems to observe what is actually happening inside running applications.


Sixty percent of the survey’s respondents identified a lack of visibility into live system behavior as the primary bottleneck in resolving production incidents. In 44% of cases where AI SRE or application performance monitoring tools attempted to investigate production issues, they failed because the necessary execution-level data — variable states, memory usage, request flow — had never been captured in the first place.

The report paints a picture of AI tools operating essentially blind in the environments that matter most. Ninety-seven percent of engineering leaders said their AI SRE agents operate without significant visibility into what is actually happening in production. Approximately half of all companies (49%) reported their AI agents have only limited visibility into live execution states. Only 1% reported extensive visibility, and not a single respondent claimed full visibility.

This is the gap that turns a minor software bug into a costly outage. When an AI-suggested fix fails in production — as 43% of them do — engineers cannot rely on their AI tools to diagnose the problem, because those tools cannot observe the code’s real-time behavior. Instead, teams fall back on what the report calls “tribal knowledge”: the institutional memory of senior engineers who have seen similar problems before and can intuit the root cause from experience rather than data. The survey found that 54% of resolutions to high-severity incidents rely on tribal knowledge rather than diagnostic evidence from AI SREs or APMs.

In finance, 74% of engineering teams trust human intuition over AI diagnostics during serious incidents

The trust deficit plays out with particular intensity in the finance sector. In an industry where a single application error can cascade into millions of dollars in losses per minute, the survey found that 74% of financial-services engineering teams rely on tribal knowledge over automated diagnostic data during serious incidents — far higher than the 44% figure in the technology sector.


“Finance is a heavily regulated, high-stakes environment where a single application error can cost millions of dollars per minute,” Maimon said. “The data shows that these teams simply do not trust AI not to make a dangerous mistake in their Production environments. This is a rational response to tool failure.”

The distrust extends beyond finance. Perhaps the most telling data point in the entire report is that not a single organization surveyed — across any industry — has moved its AI SRE tools into actual production workflows. Ninety percent remain in experimental or pilot mode. The remaining 10% evaluated AI SRE tools and chose not to adopt them at all. This represents an extraordinary gap between market enthusiasm and operational reality: enterprises are spending aggressively on AI for IT operations, but the tools they are buying remain quarantined from the environments where they would deliver the most value.

Maimon described this as one of the report’s most significant revelations. “Leaders are eager to adopt these new AI tools, but they don’t trust AI to touch live environments,” he said. “The lack of trust is shown in the data; 98% have lower trust in AI operating in production than in coding assistants.”

The observability industry built for human-speed engineering is falling short in the age of AI

The findings raise pointed questions about the current generation of observability tools from major vendors like Datadog, Dynatrace, and Splunk. Seventy-seven percent of the engineering leaders surveyed reported low or no confidence that their current observability stack provides enough information to support autonomous root cause analysis or automated incident remediation.


Maimon did not shy away from naming the structural problem. “Major vendors often build ‘closed-garden’ ecosystems where their AI SREs can only reason over data collected by their own proprietary agents,” he said. “In a modern enterprise, teams typically have a multi-tool stack to provide full coverage. By forcing a team into a single-vendor silo, these tools create an uncomfortable dependency and a strategic liability: if the vendor’s data coverage is missing a specific layer, the AI is effectively blind to the root cause.”

The second issue, Maimon argued, is that current observability-backed AI SRE solutions offer only partial visibility — defined by what engineers thought to log at the time of deployment. Because failures rarely follow predefined paths, autonomous root cause analysis using only these tools will frequently miss the key diagnostic evidence. “To move toward true autonomous remediation,” he said, “the industry must shift toward AI SRE without vendor lock-in; AI SREs must be an active participant that can connect across the entire stack and interrogate live code to capture the ground truth of a failure as it happens.”

When asked what it would take to trust AI SREs, the survey’s respondents coalesced unanimously around live runtime visibility. Fifty-eight percent said they need the ability to provide “evidence traces” of variables at the point of failure, and 42% cited the ability to verify a suggested fix before it actually deploys. No respondents selected the ability to ingest multiple log sources or provide better natural language explanations — suggesting that engineering leaders do not want AI that talks better, but AI that can see better.
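As a rough illustration of what an “evidence trace” could look like in practice, the sketch below uses only Python’s standard exception machinery to snapshot local variables at every frame of a failure. It is a simplified stand-in for the idea, not Lightrun’s implementation or any vendor’s API.

```python
def evidence_trace(exc):
    """Walk an exception's traceback and snapshot the local variables in
    every frame -- a bare-bones version of an 'evidence trace' captured
    at the point of failure."""
    frames = []
    tb = exc.__traceback__
    while tb is not None:
        frame = tb.tb_frame
        frames.append({
            "function": frame.f_code.co_name,
            "line": tb.tb_lineno,
            # repr() so the snapshot survives after the objects are gone.
            "locals": {k: repr(v) for k, v in frame.f_locals.items()},
        })
        tb = tb.tb_next
    return frames

def apply_discount(price, rate):
    return price * (1 - rate / 100)

def checkout(cart_total, promo_rate):
    return apply_discount(cart_total, promo_rate)

try:
    checkout(100.0, None)  # the kind of bad input a generated caller might pass
except TypeError as exc:
    for f in evidence_trace(exc):
        print(f["function"], "line", f["line"], f["locals"])
```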

The question is no longer whether to use AI for coding — it’s whether anyone can trust what it produces

The survey was administered by Global Surveyz Research, an independent firm, and drew responses from directors, VPs, and C-level executives in SRE and DevOps roles at enterprises with 1,500 or more employees across the finance, technology, and information technology sectors. Responses were collected during January and February 2026, with questions randomized to prevent order bias.


Lightrun, which is backed by $110 million in funding from Accel and Insight Partners and counts AT&T, Citi, Microsoft, Salesforce, and UnitedHealth Group among its enterprise clients, has a clear commercial interest in the problem the report describes: the company sells a runtime observability platform designed to give AI agents and human engineers real-time visibility into live code execution. Its AI SRE product uses a Model Context Protocol connection to generate live diagnostic evidence at the point of failure without requiring redeployment. That commercial interest does not diminish the survey’s findings, which align closely with independent research from Google DORA and the real-world evidence of the Amazon outages.

Taken together, they describe an industry confronting an uncomfortable paradox. AI has solved the slowest part of building software — writing the code — only to reveal that writing was never the hard part. The hard part was always knowing whether it works. And on that question, the engineers closest to the problem are not optimistic.

“If the live visibility gap is not closed, then teams are really just compounding instability through their adoption of AI,” Maimon said. “Organizations that don’t bridge this gap will find themselves stuck with long redeploy loops, to solve ever more complex challenges. They will lose their competitive speed to the very AI tools that were meant to provide it.”

The machines learned to write the code. Nobody taught them to watch it run.


Tech

Huge cover display, multiple colours


Motorola’s mid-range flip foldable looks set to follow a familiar formula, with leaked render images of the Razr 70 showing a device that retains much of its predecessor’s design while introducing a new processor, a significant camera upgrade, and three fresh colour options.

The renders, published by Ytechb, show the Razr 70 in Pantone Sporting Green, Pantone Hematite, and Pantone Violet Ice, with the handset expected to launch as the Razr 2026 in the US market, continuing Motorola’s regional naming convention from the current generation.

The Razr 70 leak follows hot on the heels of separately surfaced renders of the Razr 70 Ultra, which point to two striking new material finishes, including a wood-grain option and an Alcantara variant in a new purple-blue tone.

Display and design

The cover display carries over from the Razr 60 at 3.63 inches with a resolution of 1,066 x 1,056 pixels, retaining the wide chin visible at the bottom edge that has characterised the range, while the inner foldable OLED panel measures 6.9 inches with a 2,640 x 1,080 pixel resolution.
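For context, those figures put both panels at roughly the same pixel density. A quick sketch of the standard pixels-per-inch calculation, using the leaked numbers above:

```python
import math

def ppi(width_px, height_px, diagonal_in):
    """Pixel density from a panel's resolution and diagonal size."""
    return math.hypot(width_px, height_px) / diagonal_in

print(f"Cover display: {ppi(1066, 1056, 3.63):.0f} ppi")  # ~413
print(f"Inner display: {ppi(2640, 1080, 6.9):.0f} ppi")   # ~413
```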


Despite the design continuity, the camera configuration does change, with rumours pointing to the 13-megapixel ultra-wide lens being replaced by a 50-megapixel telephoto camera, a meaningful upgrade for a mid-range foldable that has previously lagged behind more premium flip phones on versatility.


Processor and storage

Under the hood, the Razr 70 is expected to arrive with a new eight-core chip clocked at up to 2.75GHz, supported by a choice of 8GB, 12GB, or 16GB of RAM and storage options spanning 256GB, 512GB, and 1TB.

Rounding out the spec sheet are a 32-megapixel front camera and a 4,500mAh battery, the latter a generous capacity for the flip foldable category, where compact chassis dimensions have historically limited battery size.


With the Razr 60 launching in late April 2025, the Razr 70 and Razr 70 Ultra look set to follow a similar schedule, suggesting an official reveal could arrive within the next few weeks.

We’ve put several Motorola phones through their paces over the years, and our broader Motorola mobile phone coverage is worth checking out for anyone looking to get a sense of where the brand stands heading into this next release.


Tech

Samsung R95H Micro RGB Hands-On TV Review: RGB LED gets real


The Samsung R95H Micro RGB TV the company had on display at CES 2026 was a sight to behold, its bright picture and rich color managing to punch through even under the bright lights of Samsung’s First Look exhibit at the Wynn Las Vegas.

It was a solid next step for the company’s new RGB LED display tech, which made its debut in late 2025 with the launch of a 115-inch model priced around $29,999. At 130 inches, the Samsung R95H shown at CES made its predecessor look small in comparison. It also raised the questions of whether RGB LED TVs would be made available in real-world screen sizes and, if so, when.

The answers to those questions are yes, and now. Samsung has announced the availability of its R95H Micro RGB TV lineup in 65-, 75-, and 85-inch screen sizes priced at $3,199.99, $4,499.99, and $6,499.99, respectively. As for the 130-inch model shown at CES, that one is scheduled to arrive later this year at a price that will likely make your head spin. Samsung also previously announced a 115-inch version of the R95H, though pricing and availability for that size are not yet available. For now, the 2025 model 115-inch R95F Micro RGB TV is carrying over into 2026.

In terms of new 2026 models, alongside the R95H series Micro RGB TVs, Samsung also announced the step-down R85H series Micro RGB TVs, which will be available in 55- to 85-inch screen sizes priced from $1,599.99 to $3,999.99.

The Samsung R95H Micro RGB TV features a Glare Free screen and has high brightness for daytime viewing.

Samsung R95H Micro RGB Features & Design

Samsung’s Micro RGB tech uses micro-sized red, green, and blue LEDs in place of the blue or white light modules found in typical mini-LED TVs like Samsung’s own Neo QLED models. The promise here is of greater color accuracy and 100% “full UHD color spectrum” coverage, along with more refined local dimming to eliminate backlight blooming.

Other features found in the new Samsung R95H series TVs include a Glare Free screen similar to the one found in the company’s 2025 flagship mini-LED and OLED models, Wide Viewing Angle, Ultimate Micro Dimming Pro, and Auto HDR Remastering Pro to upscale standard dynamic range programs to HDR. There’s also something new called Micro RGB HDR Pro, along with Real Depth Enhancer, a feature that debuted in the company’s 2025 models which analyzes pictures in real time to better define the foreground and background elements. 

Samsung continues to go all in on AI features for its TVs, and the R95H series offers 4K AI Upscaling Pro, AI Motion Enhancer Pro, and Micro RGB AI Engine Pro. There’s also an Adaptive Picture feature that uses AI to optimize images based on program genre and also provides an AI Customization mode that can create a custom picture preset based on your response to an array of displayed images.

Samsung’s updated Tizen smart interface moves tabs from the screen’s left side to the top.

AI also gets top billing in Samsung’s updated Tizen Smart TV interface, which repositions tabs from the side to the top of the screen. The new layout is cleaner and more user-friendly, and it features a Vision AI Companion tab that lets you explore all manner of topics via Copilot or Perplexity using either the TV’s built-in far-field mic or the one located in the TV’s Solar Cell Bluetooth remote control. Other Tizen features include Generative Wallpaper for creating custom screensavers, and access to the subscription-based Samsung Art Store that was previously limited to the company’s The Frame TVs.


Samsung TVs have long been a top option for gaming, and the R95H series continues that tradition with 165Hz support across four HDMI 2.1 ports, FreeSync Premium Pro, and HDR10+ gaming. There’s also cloud-based gaming available on Samsung’s Gaming Hub, which features Xbox, NVIDIA GeForce Now, Luna, Blacknut, Antstream, Boosteroid, and more.

While the 130-inch R95H model Samsung showed at CES 2026 featured a “Timeless Frame” floor mount, the 65- to 85-inch models come with an Infinity Air stand that, combined with the four-side bezel-less screen, gives the display something of a floating effect. A 4.2.2-channel speaker array powered by 70W delivers Dolby Atmos audio, and there’s Object Tracking Sound+, along with a Q-Symphony feature that combines the TV’s speaker output with that of a compatible Samsung soundbar. Additionally, the R95H is Wireless One Connect Ready, giving you the option of a wireless 165Hz connection using Samsung’s optional Wireless One Connect Box.

The Solar Cell remote used to control the Samsung R95H.

Hands-on with the Samsung R95H Micro RGB TV

Samsung invited eCoustics to its New Jersey headquarters in early March to get hands-on experience with a 65-inch R95H, and I was provided with ample time to make a full set of measurements.


As stated above, Samsung claims “full UHD color spectrum coverage” for the R95H, which is another way of saying BT.2020 color space coverage. In the set’s default Filmmaker Mode, P3 color space coverage measured 98.8% and BT.2020 was 92%. That BT.2020 number is obviously lower than what Samsung cites for the TV, but as I learned in a demonstration put on by the company’s engineers at Samsung HQ, they based their specification on the TV’s Dynamic picture mode rather than the more accurate Filmmaker Mode.

By way of comparison, when I measured the Samsung QN90F, the company’s flagship mini-LED TV, P3 color space coverage measured 93.6% and BT.2020 was 76.5%, so this TV represents a marked improvement in color reproduction.

The R95H’s REC.709 grayscale delta-E averaged out to 6 in Filmmaker Mode, which is a higher-than-average result. (A delta-E lower than 3 is considered imperceptible.) This variation would likely be mitigated by a full calibration.
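For readers unfamiliar with the metric: delta-E expresses how far a measured color sits from its target in a perceptual color space. The sketch below shows the original CIE76 formula, the simplest variant; display reviewers typically use more elaborate formulas such as CIEDE2000, and the exact one used for these measurements isn’t specified, so treat this purely as an illustration with made-up values.

```python
import math

def delta_e_cie76(lab1, lab2):
    """CIE76 color difference: Euclidean distance in CIELAB space.
    Values below ~3 are generally considered imperceptible."""
    return math.dist(lab1, lab2)

# Made-up example: a measured gray with a slight blue-magenta tint
# versus a perfectly neutral target at the same lightness.
target = (50.0, 0.0, 0.0)     # L*, a*, b* of neutral gray
measured = (50.0, 3.5, -4.9)  # tinted gray

print(f"delta-E: {delta_e_cie76(measured, target):.1f}")  # ~6.0, visibly off
```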

The R95H’s peak HDR brightness in Filmmaker Mode measured on a 10% white window pattern was 1,541 nits, and it was 639 nits on a 100% (fullscreen) pattern. In the Standard picture mode, peak HDR brightness measured higher at 2,223 nits on a 10% window, and 654 nits fullscreen. The R95H’s Standard mode results exceed what I measured on the Samsung QN90F, though the QN90F’s peak brightness was higher in Filmmaker Mode.


In a nutshell, the new Samsung R95H Micro RGB TV offers a wider color space coverage and higher brightness than last year’s top Samsung mini-LED TV, a model that is carrying over to 2026.

The R95H Micro RGB TV’s BT.2020 color gamut coverage exceeds that of top mini-LED and OLED models.

Alpha is a movie I’ve only seen specific clips from because, as an example of a 4,000-nit HDR transfer, it’s a good test of a display’s HDR tone mapping capability. (It involves prehistoric tribes, and there’s a wolf.) Watching a scene where the boy, Keda, and his wolf companion commune in front of the setting sun, there was a fine level of detail in the bright highlights, indicating that the TV’s Micro RGB HDR Pro feature was doing its job properly.

Two picture quality improvements promised by RGB LED tech are a reduction of backlight blooming artifacts and improved off-axis picture uniformity. A check of the white on black scrolling text that opens Blade Runner confirmed the R95H’s ability to deliver solid, halo-free performance, while the uniformity test pattern from the Spears & Munsil Ultra HD Benchmark showed that its picture could retain solid contrast and color saturation even when viewed from a far off-center seat.

Wrapping things up with the opening title sequence of Baby Driver, the R95H displayed only a limited level of motion judder as the titular character strolls along a city street. If that doesn’t sound like a big deal, I’ve seen the picture on some other TVs get seriously wobbly during this sequence, so the Samsung’s motion handling here was nothing short of impressive.


The Bottom Line

My take on the Samsung R95H Micro RGB TV after doing an initial test is that it provides an appreciable step up in picture quality over Samsung’s also-impressive flagship mini-LED TV, the QN90F. My limited time with the R95H meant I didn’t have an opportunity to do a deep dive into its many AI-related picture enhancements, and I also didn’t have a chance to evaluate its built-in sound. But as the first example of an RGB LED TV I’ve spent hands-on time with, I’m excited for this new category, which is finally creating serious picture quality competition for OLED TVs.

For more information: Samsung Micro RGB Product Pages

Pros:

  • Wide color gamut coverage
  • High peak HDR brightness
  • Glare Free screen
  • No visible backlight blooming
  • Sleek Infinity Air stand design
  • Excellent HDR tone mapping and motion handling

Cons:

  • Pricey
  • High grayscale delta-E in Filmmaker Mode


Tech

Amazon is acquiring Globalstar, the company that powers satellite features on your iPhone


Amazon has confirmed it’s acquiring Globalstar and plans to integrate the satellite operator’s low Earth orbit (LEO) satellites and spectrum into the Amazon Leo network. With this acquisition, Amazon seeks to accelerate the deployment of direct-to-device (D2D) capabilities, allowing standard smartphones to support calling, texting, and data via satellite. Amazon has already secured big customers for its satellite broadband service, and this deal will allow it to bypass several infrastructure hurdles.

The Apple partnership and 2028 roadmap

A key component of the deal is a new agreement between Amazon and Apple. Amazon Leo will now power satellite services for the iPhone and Apple Watch, including Emergency SOS, Messages, and Find My. Globalstar is Apple’s current partner, and this collaboration ensures Apple users will transition to Amazon’s expanded network as it matures. Amazon’s acquisition of Globalstar is expected to close in 2027, pending regulatory approval.

@amazon is acquiring @Globalstar to integrate their low Earth orbit satellites into our constellation. This combination will rapidly accelerate our plans to add direct-to-device capabilities into our satellite network to support calling, texting, data, and more. pic.twitter.com/qX7tbwyZw7

— Panos Panay (@panos_panay) April 14, 2026

Amazon also plans to deploy its own next-gen D2D satellite system beginning in 2028. Designed for higher spectrum efficiency, the company says this system will lead to faster speeds and better performance than current satellite-to-cell offerings.


Why this matters

The competition between Amazon and SpaceX is no longer just about who can provide the fastest home Wi-Fi. With this move, it’s now about who owns the most comprehensive connectivity ecosystem. We have already seen Amazon challenge Starlink’s dominance in aviation, and this acquisition is expected to expand that reach to every handheld device on the planet.

By integrating Globalstar with its infrastructure, Amazon is building the backbone of global communication. For the average user, this means the safety net of satellite connectivity is about to become a standard feature on mobile devices. And network “dead zones” may finally become a thing of the past.


Tech

Japan finds a way to recover 90% of lithium from old EV batteries

Published

on


Behind the breakthrough is JX Metals Circular Solutions, a subsidiary of one of Japan’s largest non-ferrous metal companies. While it was announced back in April 2025, it really started grabbing headlines this month after some Japanese publications revealed the actual process at the company’s plant in Tsuruga. Tadashi Nakagawa, the…

Tech

OpenAI buys its second startup in a month


OpenAI has acquired Hiro Finance, a startup that offers AI-powered financial planning tools. As first reported by TechCrunch, financial terms of the deal, which was announced on Monday, were not disclosed by OpenAI. However, all signs point to this being an acqui-hire, with Hiro founder Ethan Bloch writing on LinkedIn that the company’s product would stop working on April 20. Users have until May 13 to migrate their data off of Hiro’s servers before everything is deleted.

It’s unclear if OpenAI plans to offer a dedicated financial planning tool in the mold of Hiro. At the start of the year, the company released Prism, a Claude Code-like app for scientific research that built on its acquisition of the startup behind Crixet. At the very least, it sounds like some of the expertise Hiro has built will make its way to OpenAI’s chatbot. “For decades, personalized financial guidance has been too expensive, too generic, or too hard to access. ChatGPT is finally changing that,” Bloch wrote on LinkedIn.

The deal is the second acquisition to be announced by OpenAI in only two weeks. At the start of the month, the company bought Technology Business Programming Network (TBPN), a media company known for its daily tech podcast. For a company that by all indications has a long and tough road ahead to profitability, it sure does seem like OpenAI is spending a lot of time and money on startups that might not end up being central to its core business, which in recent months has seen it target the coding market to edge out Anthropic.


Tech

How to restore deleted or missing contacts on your iPhone


At some point, we all stopped memorizing phone numbers. It happened gradually, and now most of us can barely recall two or three phone numbers off the top of our heads. So when your iPhone contacts vanish, whether after a software update or an accidental delete, it can feel like a minor crisis.

Thankfully, if you act fast, you can easily restore deleted contacts on your iPhone. So, before you start texting people to ask for their numbers again, try these methods to get your contacts back. These methods work on all recent iPhone models.

Recover contacts using iCloud on your iPhone

If your contacts disappear due to a sync issue, the fix is surprisingly simple. Just head to Settings, toggle contacts sync off, then back on. Your contacts should reappear on your iPhone shortly after.

Step 1: Launch the Settings app on your iPhone, tap your name, and open iCloud settings.


Step 2: Tap the See All button and turn off the toggle next to Contacts. Select the Keep on My iPhone option when prompted.

Step 3: Turn the toggle back on and select Merge. Wait a little while, and you should see the deleted contacts back on your iPhone.

Recover contacts using iCloud.com

If you have accidentally deleted a contact, you can restore it from iCloud. However, if you have turned on Advanced Data Protection for iCloud, or updated to iOS 26.4, which enables it automatically, you will first have to enable iCloud data access on the web.

Step 1: Launch the Settings app on your iPhone, tap your name, and open iCloud settings.

Step 2: Scroll to the bottom, tap iCloud.com, and turn on the “Allow Data Access” toggle.

Step 3: Once you have enabled iCloud data access on the web, launch Safari on your iPhone, visit iCloud.com, and log in with your credentials.

Step 4: Tap the menu dots in the top-right corner, scroll to the bottom, and open “Data Recovery.” Scroll down and tap Restore Contacts.

Step 5: Now tap the Restore button next to the date and time when the contacts were deleted to restore them.

More techniques for restoring iPhone contacts

Here are some more things you can try:

  • If the above techniques don’t work, you can also try to restore your phone from an iTunes backup, which will also have contact data.
  • If you still have your old iPhone and haven’t changed many contacts since then, you can port them over or add important ones.
  • You can try third-party iOS recovery tools.

Losing your contacts is stressful, but as you can see, getting them back is usually a quick fix. The key takeaway here is to make sure your contacts are always syncing with iCloud so you are never caught off guard again. And if you haven’t backed up your iPhone in a while, now is a great time to do it. Think of it as insurance you hope you never need, but will be very glad you have.
