The case, led by a special agent in the Commerce Department’s Bureau of Industry and Security, focused on claims that some Meta employees and contractors could access WhatsApp messages despite the app’s use of end-to-end encryption.
Google Photos on Wednesday announced a new AI-powered feature that will soon turn photos of your clothes into a digital closet where you can create new outfit ideas, and even virtually try on your creations. Yes, the idea takes obvious inspiration from Cher’s iconic virtual wardrobe featured in the movie “Clueless,” where she could scroll through her various ensembles while deciding what to wear.
Google says the new feature will leverage AI technology to automatically create a copy of your wardrobe that’s based on the pieces of clothing appearing in your Google Photos library. From the app, you’ll be able to filter items by category — like tops, bottoms, jewelry, and more — then mix and match them to create different outfits.
The idea of a digital closet in “Clueless” was meant to highlight Cher’s life of privilege. Ever since, the fashion industry and various startups have sought to recreate that feeling of effortless outfit creation. Google is betting that AI will make a similar tool accessible to anyone, one that could improve over time as the technology advances.
Image Credits: Google Photos
Those outfit ideas can either be shared with friends or saved to a digital moodboard, where you could save ideas for different occasions, like travel, events, date nights, work, and more.
In addition, another feature will let you virtually try on items to preview the looks.
The feature is not yet live, but Google says it will roll out to Google Photos on Android later this summer, followed by iOS, where it will be found under “Collections.” It will compete with existing apps like Acloset, Combyne, Pureple, Wearing, Alta, and others.
The company didn’t go into detail about how the AI works, but notes that it will recognize the clothing and accessories featured in your library and create individual snapshots of each item. Of course, while the AI may be able to pull images from well-lit, full-body photos, we imagine you would get better results by taking the time to photograph your clothes yourself, much as Cher did.
SenseTime, a Chinese AI company best known for its facial recognition technology, released a new open source model on Tuesday that it claims can both generate and interpret images far faster than top models developed by US competitors. SenseNova U1 could help the company reclaim lost ground after it slipped from its place among the leading players in China’s AI development race.
The model’s secret sauce is its ability to “read” images without translating them to text first, speeding up the process and reducing the amount of computing power required. “The model’s entire reasoning process is no longer limited to text. It can reason with images as well,” Dahua Lin, cofounder and chief scientist at SenseTime, said in an interview with WIRED.
Lin, who is also a professor of information engineering at the Chinese University of Hong Kong, says that models capable of processing images directly will enable robots to better understand the physical world in the future.
SenseTime says that, like DeepSeek’s latest flagship model, U1 can be powered by Chinese-made chips. “Several Chinese domestic chipmakers have finished optimizing compatibility with our new model,” Lin says. On release day, 10 Chinese chip designers, including Cambricon and Biren Technology, announced that their hardware supports U1.
That flexibility matters because US export controls restrict Chinese firms from accessing the world’s most advanced AI chips, particularly those used for training, which at this point are primarily developed by Western companies like Nvidia. “We will continue to push for training on more different chips,” Lin says. But he also acknowledges that SenseTime “may still need to use the best chips to ensure the speed of our iteration.”
SenseTime released U1 for free on Hugging Face and GitHub, another sign of how Chinese companies are becoming some of the most active contributors to open source AI.
SenseTime was founded in 2014 and became a world leader in computer vision, which is used in applications like facial recognition and autonomous driving. But when ChatGPT and other AI systems powered by natural language processing became the hottest thing in the tech industry, SenseTime began struggling to turn a profit and fell behind newer Chinese startups like DeepSeek and MiniMax.
SenseTime says it hopes that releasing SenseNova U1 publicly for anyone to use will help it catch up with both domestic and Western AI players. Lin says the company finally made the decision last year to focus on open source because of the helpful feedback it gets from researchers, which enables the company to iterate faster. “In this day and age, being open source or closed source is not the winning factor; the speed of iteration is,” Lin explains.
Going open source also helps SenseTime continue collaborating with international researchers without the interference of geopolitics. The company has been sanctioned repeatedly by the US government in recent years over allegations that its facial recognition technology helped power surveillance systems used to monitor and detain Uyghurs and other minority groups in China’s Xinjiang region. As a result, US firms are restricted from investing in SenseTime and selling certain technologies to it without a license. (SenseTime has denied the allegations.)
A sample image created using SenseNova U1. Generated using AI
Courtesy of SenseTime
Seeing Clearly
In an accompanying technical report, SenseTime claims that SenseNova U1 generates higher-quality images than any other open source model currently on the market. Its performance is comparable to leading Chinese closed source models like Alibaba’s Qwen and ByteDance’s Seedream, but it still lags behind industry leaders like GPT-Image-2.0, which came out just a week ago.
But the model’s main selling point is its ability to generate images much faster than all of those models. It relies on an innovative technical structure called NEO-Unify that SenseTime previewed earlier this year.
Arm’s shares fell by more than 7pc as TSMC sold off its final tranche of shares in the UK chip design company.
The world’s largest chipmaker, Taiwan’s TSMC, has sold off its final stake in Arm, the UK chip design company, according to a filing today. The filing says the shares sold over the past few days came to a total of around $231m.
TSMC invested some $100m in Arm at around $51 a share during the latter’s IPO in 2023, gradually reduced the position through 2024, and has now fully exited at around $207 a share. According to Reuters, Arm shares fell some 7pc yesterday on the news.
Arm’s recent move into in-house chipmaking, rather than just chip design, has attracted much attention in recent times, and the announcement of a major deal with Meta in March saw its shares soar, so the dip is not likely to cause any major concern for shareholders.
Last month, Meta announced it was partnering with Arm, which is majority owned by Japan’s Softbank, “to develop a new class of CPUs to support growing AI workloads and general purpose computing”.
Here in Ireland, Arm opened a new “state-of-the-art” facility in Galway supported by IDA Ireland, the State’s investment promotion agency, last year.
Since establishing its Irish presence in the county back in 2014, Arm has expanded its staff to 90 locally, while employing more than 4,800 across Europe. The UK company’s presence in Ireland is limited to Galway. The facility at Crown Square in Galway is set to become home to innovative advancements in semiconductor tech, the company said at the time.
Google is adding another AI-powered trick to Translate — this time focused on how you sound, not just what you say.
A new Pronunciation feature, powered by Gemini, is rolling out to help users practise speaking foreign languages more naturally, complete with real-time feedback on delivery.
The update slots neatly into Translate’s existing Practice mode, which launched in late 2025 with tools like Listen and Roleplay. Now, when you translate a phrase and tap Practice, you will see a new “Pronounce” button alongside those options. Tap it, and the app will show a phonetic version of the phrase. Then it will activate your microphone, and ask you to read it aloud.
From there, Gemini steps in. The app evaluates your attempt and offers quick feedback, flagging unclear sounds or suggesting another try, essentially turning Translate into a lightweight pronunciation coach. It’s not overly detailed, but it’s enough to help you tweak your accent and clarity without needing a full language-learning app.
The feature lands as Google Translate marks its 20th anniversary and suggests the app is evolving into a broader language-learning tool. While services like Duolingo have long focused on speaking practice, Google’s approach leans more casual and, usefully, it’s built into a tool millions already use daily.
There are a few limitations for now. Pronunciation is currently Android-only, and it’s rolling out in the US and India, supporting English, Spanish and Hindi at launch. There’s no word yet on when it’ll expand to iOS or more languages. However, given Google’s track record, a wider rollout seems likely.
Translate has always been great at helping you understand other languages. Now it’s taking a step toward helping you actually speak them better too.
A key driver for the rise in medical device cyberattacks, according to RunSafe, is the prominence of legacy tech in healthcare environments.
Cyberattacks on medical devices are becoming more frequent and more disruptive, according to a report released by US cybersecurity company RunSafe Security today (29 April).
The 2026 Medical Device Cybersecurity Index, based on a March 2026 survey of 551 healthcare professionals throughout the US, UK and Germany involved in device purchasing decisions, found that 24pc of surveyed healthcare organisations experienced a cyberattack on a medical device – a rise of two percentage points on last year.
Of those that experienced an attack, 80pc reported moderate or significant patient care impact as a result, with a quarter of the cohort reporting significant impact.
According to the report, the most commonly affected systems included electronic health record systems (cited by 35pc of affected organisations), patient monitoring devices (23pc), laboratory and diagnostic equipment (18pc), networked surgical equipment (10pc) and imaging systems (8pc).
The most common attack methods in these incidents were malware infections requiring device quarantine – responsible for nearly half of incidents (48pc) – and network intrusions requiring device isolation (41pc), with both incident types remaining the most prevalent, as they were in 2025.
However, one incident type that RunSafe noted as emerging particularly in 2026 was remote access exploitation, which was seen in 38pc of incidents. RunSafe stated this signalled that attackers are “adapting to the growing remote access footprint of connected devices”.
“Organisations that have not implemented network segmentation, access controls and runtime protections are exposed,” said the company.
For those organisations that experienced a cyberattack on a medical device, recovery was not so simple.
Nearly half (49pc) of reported incidents caused “extended stays or required manual workarounds”, according to the report, with the most common recovery scenario – experienced by 39pc of impacted organisations – involving five to 12 hours of downtime. Meanwhile, 5pc of affected organisations experienced downtime of more than three days.
Legacy issues
A key driver of the growing medical device cyberthreat, according to RunSafe, is the prominence of legacy devices that cannot be patched or easily replaced.
The report found that three in 10 responding organisations operate medical devices that are past the manufacturer’s end-of-support date. A significant proportion of those devices carry known, unpatched vulnerabilities, according to RunSafe.
The reasons these healthcare organisations continue to operate at-risk legacy devices spanned clinical, financial and structural constraints.
38pc of respondents said there was no “acceptable” replacement available yet for the legacy device in question, while 36pc said they cannot afford a replacement.
34pc cited regulatory or approval constraints as a barrier, while 33pc said replacing the device or system would cause too much disruption. Interestingly, 17pc stated that the risk presented by this legacy tech has been formally accepted by leadership.
“The inability to patch, combined with continued clinical reliance on vulnerable devices, creates a structural security gap that cannot be closed through procurement alone,” said RunSafe in an analysis of the topic of legacy devices.
“This gap is almost certainly a key driver behind the rise in runtime protection adoption seen in 2026. Runtime protection technologies – which defend devices without requiring a patch – act as a compensating control for a problem that buying new devices cannot solve.”
As the report notes, runtime protection technologies are emerging as a critical compensating control, with 82pc of respondents stating that they have widely deployed or are piloting runtime exploit protection.
A vulnerable sector
The rise of medical device cyberattacks highlighted by this report comes as the healthcare industry continues to experience breaches and attacks ranging in severity, as noted by RunSafe founder and CEO Joseph M Saunders.
“The findings land against a backdrop of large-scale healthcare cyber incidents that have disrupted care delivery and revenue flows, underscoring how quickly attacks on device-adjacent systems can translate into patient harm,” he said.
“Medical device cybersecurity is increasing in importance to healthcare buyers as they see it as a patient safety and regulatory imperative.”
Last month, medical equipment manufacturing giant Stryker was hit by a cyberattack that caused a global network disruption. Reports at the time suggested that the company’s Cork plant, which employs more than 4,000, was affected by the attack – which pro-Iranian cyber group Handala claimed responsibility for.
Meanwhile, just a few weeks ago, Dublin recruitment platform Healthdaq – which is used by Northern Ireland’s health trusts – reportedly suffered a cyberattack from the relatively new hacker group XP95, which claimed to have accessed hundreds of thousands of files.
Samsung might finally be ready to shake up the look of its flagship phones, but don’t get too excited just yet.
A new leak suggests the Galaxy S27 could bring a redesigned rear camera setup. However, the details are still early and far from locked in.
According to the report, Samsung is reviewing a potential overhaul of the phone’s camera module, along with broader design changes that could include tweaks to layout, hardware and overall aesthetics. These are exactly the areas where the Galaxy S series has felt a little too familiar in recent years.
That said, this isn’t a done deal. The same source notes that progress on the redesign is moving slowly internally. Cost pressures are reportedly playing a role in delaying decisions. In other words, even if Samsung is exploring a new look, it may not make the final cut in time for the S27.
If anything, this sounds more like early-stage planning than a confirmed direction. Samsung has favored subtle refinements over major visual changes in recent generations, particularly around camera design, so a bigger shift would mark a notable change in approach.
There’s also a small but interesting hint buried in the leak. The tipster claims that another upcoming Samsung device has already adopted a redesigned camera layout similar to what’s being considered for the S27. While that’s vague, it could mean Samsung is testing the waters elsewhere before committing to a redesign on its flagship line.
Still, it’s worth keeping expectations in check. The leak comes from a single source with a mixed track record, and even they admit the information isn’t final. Plans like this can evolve quickly or be scrapped entirely, especially when cost and production timelines come into play.
For now, the idea of a fresh Galaxy S design is more of a “wait and see” than anything concrete. But if Samsung does follow through, the S27 could finally break away from the safe, iterative look the series has stuck with for years.
The Record Store Day (RSD) release of a 3LP set of 1978 archival live recordings by tenor saxophone legend Joe Henderson titled Consonance is yet another excellent discovery from the good folks at Resonance Records (championed by producer Zev Feldman).
As with others in Resonance’s recent Jazz Showcase series which we have reviewed here at eCoustics, the original master tapes seem to have been recorded in mono. However, the sound quality is quite good, capturing a well-balanced performance with all the instruments in enjoyable listening proportion: saxophone and piano appear a little more up front, with the bass tucked in neatly below them, locking in with the clear but not overwhelming drums.
A recording certainly worthy of its pressing on 180 gram black vinyl — something I can’t always say for many archival releases — the lacquers for this release were cut by Matthew Lutheran at The Mastering Lab and the final discs were manufactured at Quebec’s Le Vinylist.
Consonance finds Mr. Henderson backed by Joanne Brackeen on piano, Danny Spencer on drums and a young future bass legend in his own right, Steve Rodby. The latter was part of the Chicago jazz scene at the time and was effectively one of the regular house musicians at the Jazz Showcase before he joined Pat Metheny’s group in the early 1980s.
The album features compelling liner notes, including recollections from Rodby, who offers insight into why it was special to play with Henderson, as well as from Brackeen and Spencer, co-producer John Koenig and Wayne Segal (son of Jazz Showcase founder/owner Joe Segal).
Mr. Henderson was no doubt a very special force on the jazz scene — just check some of your favorite classics by Lee Morgan, Horace Silver, Herbie Hancock, Miroslav Vitouš, Freddie Hubbard, Alice Coltrane and others and you’ll find him on many legendary sessions. However, original pressings of his solo works are elusive and very collectible these days. Fortunately, many of his early albums are being reissued and along with that demand, archival live recordings like Consonance help round out the portrait of this artist’s life work.
Consonance opens with a side-long version of John Coltrane’s “Mr. P.C.” Henderson also pays homage to the legendary Charlie Parker with an expansive reading of “Relaxin’ at Camarillo,” and a 16-minute journey explores Thelonious Monk’s “‘Round Midnight.” You’ll also hear some of Mr. Henderson’s originals such as “Inner Urge” — which takes up another full album side — and the show closer “Isotope.”
Even though Record Store Day is over, I suspect you will be able to find copies of this excellent set online as well as in your favorite stores. In fact, you can get it at Amazon for $75.99 and if you can’t find the LP or simply want a less pricey option, the CD version is available for $23.56.
Mark Smotroff is a deep music enthusiast / collector who has also worked in entertainment oriented marketing communications for decades supporting the likes of DTS, Sega and many others. He reviews vinyl for Analog Planet and has written for Audiophile Review, Sound+Vision, Mix, EQ, etc. You can learn more about him at LinkedIn.
Tony Isaac shares a report from NPR: Federal survey data shows that the amount of math homework assigned to fourth and eighth grade students, in particular, has been steadily declining for the past decade. Some educators and parents say this is a good thing — students shouldn’t spend six or more hours a day at school and still have additional schoolwork to complete at home. But the research on homework is complicated. Some studies show that students who spend more time on homework perform better than their peers. For example, a longitudinal study released in 2021 of more than 6,000 students in Germany, Uruguay and the Netherlands found that lower-performing students who increased the amount of time they spent on math homework performed better in math, even one year later.
Other studies, however, suggest homework has minimal effects on academic performance: A 1998 study of more than 700 U.S. students led by a researcher at Duke University found that more homework assigned in elementary grades had no significant effect on standardized test scores. The researchers did find small positive gains on class grades when they looked at both test scores and the proportion of homework students completed. More homework was also associated with negative attitudes about school for younger children in the study. “The best educators figured out a long time ago that we can control what we can control,” and that’s what happens during the school day, Superintendent Garrett said, not homework. “There has been a shift away from it naturally anyway, and I felt like this made it equitable across our entire school system.” “The best argument for homework is that mathematical procedures require practice, and you don’t want to waste classroom time on practice, so you send that home,” said Tom Loveless, a researcher and former teacher who has studied homework.
Ariel Taylor Smith, senior director of the Center for Policy and Action at the National Parents Union, said: “The thing they point to is that it’s an equity issue, and not all parents have the same availability and ability to support their students. I would make the argument that if a kid is really far behind in school, that’s an equity issue. They need the additional time to practice.” Kids, she said, “need more practice … Sometimes, you do have to practice the boring stuff, like math.”
“The interesting issue for folks to consider is not should there be more homework, but should there be better homework,” said Joyce Epstein, who has studied homework and is the co-director of the Center on School, Family, and Community Partnerships at the Johns Hopkins University School of Education. “Better homework in math might be knowing the fact that kids don’t have to be practicing for hours, 10 to 20 examples,” when they could establish mastery in less time.
Motorola just announced three new clamshell foldables, and confirmed the US availability of the Razr Fold and Moto Buds 2 Plus
All of these phones are coming to the US on May 21, with the Moto Buds 2 Plus landing on April 30
The Motorola Razr Ultra 2026 is arguably the highlight of these announcements, with a 7-inch foldable screen and three 50MP cameras
Motorola is having a busy day, as the company has just launched five devices, including phones and earbuds.
Leading the charge is the Motorola Razr 2026 family, which includes the base Motorola Razr 2026, the Motorola Razr Plus 2026, and the Motorola Razr Ultra 2026, as well as the previously announced Motorola Razr Fold (you can check out our first impressions of the Ultra in our hands-on Motorola Razr Ultra review).
The Motorola Razr 2026 has a 6.9-inch 1080 x 2640 foldable screen, a 3.6-inch 1056 x 1066 cover display, a MediaTek Dimensity 74350X chipset, 8GB of RAM, 128GB of storage, a 4,800mAh battery with 30W charging, a 50MP wide camera, a 50MP ultra-wide, and a 32MP front-facing camera.
The Motorola Razr Plus 2026 has a similar main display and cameras, but its 1272 x 1080 cover screen is bigger, at 4 inches, and it has a superior Snapdragon 8s Gen 3 chipset, 12GB of RAM, 256GB of storage, and a 4,500mAh battery with 45W charging.
As for the Motorola Razr Ultra 2026, that device has a 7-inch 1224 x 2992 foldable screen, a 4-inch 1272 x 1080 cover screen, a Snapdragon 8 Elite chipset, 16GB of RAM, 512GB of storage, a 5,000mAh battery with 68W charging, and a trio of 50MP cameras.
The Motorola Razr 2026(Image credit: Motorola)
The Motorola Razr Plus 2026(Image credit: Motorola)
The Motorola Razr Ultra 2026(Image credit: Motorola)
In the US, all three phones go up for pre-order on May 14 and ship on May 21, with the base Razr 2026 costing $799.99, the Motorola Razr Plus 2026 $1,099.99, and the Razr Ultra 2026 $1,499.99. We’re still waiting for confirmation on the phones’ pricing and availability in the UK and Australia.
The Motorola Razr Fold (Image credit: Motorola)
This isn’t actually entirely new, as it was first shown off at CES 2026, but it now has a US price and release date, with pre-orders starting on May 14 and the phone shipping on May 21, for a price of $1,899.99. Pre-orders in the UK are already open, where you can currently grab the device for £1,579.99 (down from £1,779.99).
The Razr Fold has an 8.1-inch 2484 x 2232 foldable screen, a 6.6-inch 2520 x 1080 cover display, a 6,000mAh battery, a 50MP main camera, a 50MP ultra-wide that can also take macro shots, and a 50MP 3x telephoto camera.
There’s also a 32MP camera on the cover screen and a 20MP one on the foldable display. Additionally, the Motorola Razr Fold supports a stylus, and it has a Snapdragon 8 Gen 5 chipset paired with 16GB of RAM and 512GB of storage.
Beyond phones, Motorola has also launched the Moto Buds 2 Plus in the US, following a general announcement in March. These earbuds promise Dynamic Active Noise Cancelation (ANC), spatial audio, and six microphones.
The Moto Buds 2 Plus (Image credit: Motorola)
They also offer nine hours of playtime on a single charge and up to 40 hours of total battery life with the charging case, with 10 minutes of charging providing up to two hours of playback. The Moto Buds 2 Plus will be available from April 30 in the US, at a price of $149.99. In the UK, they cost £130, with a release date yet to be confirmed.
The television industry is worth a few hundred billion dollars, and it’s expected to smash past $500 billion by 2030. That all sounds very impressive, but a chunk of it comes not from selling people their dream TV, but from selling them things they don’t need. It’s not an accident, either; it’s a business model.
Buying a TV should be simple. You can confidently shop for one online, or you can walk into a store, check out one that looks good, endure the hard sell, and take it home. But between the salesperson’s technical jargon and overinflated claims, you might feel you’ve bought more than you needed once you settle down on the couch to watch that first show — or that you didn’t get the features you actually need. The problem is, many of us don’t have the time or the technical knowledge to push back, so we trust the spec sheet and believe the salesperson, which can result in overspending. Manufacturers and retailers may very well count on exactly that to boost their sales figures.
To arm yourself before you go to the store, we’ve listed five of the most persistent myths in the world of TV buying. They’ve been repeated over and over to the point that they now feel like common sense. But are they? After debunking these myths, we hope you can save a little bit of money, whether you’re on the way to the store or contemplating your next purchase. Here are five TV myths it’s time to stop believing once and for all.
Myth: you need 4K on a small TV
Hollygraphic/Getty Images
Walk into any electronics store with the intention of buying a TV and salespeople will tell you that 4K is the essential viewing experience. They’re not wrong. However, if it’s a small TV you need (we’re talking 44 inches or under), you can save yourself a bit of cash by opting for a 1080p display instead, like that on the Roku Select Series FHD TV. That’s because researchers at the University of Cambridge and Meta Reality Labs say your eyes may not get any of that 4K benefit from a small screen. The explanation for this lies in how the human eye works. “Our brain doesn’t actually have the capacity to sense details in colour very well,” says Professor Rafał Mantiuk, co-author of the study. Our peepers can only process detail up to a certain point. Feed them more resolution than they can handle, and the signals sent to your brain won’t be that different from a lower resolution.
The researchers measured pixels per degree (PPD), which isn’t how many pixels a screen has, but how much of that detail actually reaches your eye from your viewing position. For an average-sized living room with 2.5 meters between couch and screen, a 44-inch 4K TV offers little to no noticeable benefit over a lower-resolution QHD set of the same size. Knowing the point at which you can tell the difference between 4K and 1080p could save you money — and the research team was so keen to assist people with this that they made an online calculator to help. Just enter the necessary details, and it will tell you exactly what resolution is actually beneficial to your eyes.
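If you'd rather check the geometry yourself, the pixels-per-degree idea reduces to a couple of lines of trigonometry. The sketch below is our own simplification for a flat 16:9 screen viewed head-on, not the researchers' actual calculator; the function name and the 44-inch/2.5-metre example figures come from the scenario above.

```python
import math

def pixels_per_degree(diagonal_in, h_pixels, distance_m, aspect=16 / 9):
    """Approximate PPD for a flat screen viewed head-on from distance_m."""
    diag_m = diagonal_in * 0.0254                      # inches -> metres
    width_m = diag_m * aspect / math.hypot(aspect, 1)  # width from the diagonal
    # Horizontal angle the screen subtends at the eye, in degrees
    angle_deg = math.degrees(2 * math.atan(width_m / (2 * distance_m)))
    return h_pixels / angle_deg

# The living-room example: a 44-inch set viewed from 2.5 metres
ppd_4k = pixels_per_degree(44, 3840, 2.5)   # roughly 174 PPD
ppd_fhd = pixels_per_degree(44, 1920, 2.5)  # roughly 87 PPD
```

At that distance the 1080p panel already delivers close to 90 pixels per degree, in the neighbourhood of the acuity limits the study discusses, which is why the 4K set's extra pixels largely go unseen.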
Myth: you need premium HDMI cables
Towfiqu Ahamed/Getty Images
Cable manufacturers will try to convince you that expensive 4K cables are a necessity, but the fact is they’re not. If your current cheap cables do fall short, the solution is simply another cheap cable from a different brand. HDMI is just a digital signal; it either carries the data or it doesn’t. Whatever you’ve read, a pricier cable will not enhance your picture because the signal has no way of carrying any alleged extra quality. Even if you dug out a dusty old cable from the back of a drawer, it would almost certainly deliver the same picture quality as a $50 cable you just pulled off the shelf at Best Buy.
It’s also worth noting that HDMI cable “versions” don’t actually exist. Whether it’s HDMI 2.0 or 2.1, these numbers describe your device’s ports. What actually counts when choosing the right HDMI cable is the speed category. If that dusty old cable is a standard cable, it won’t be able to handle 4K. But the good news is, even the cheapest cables on today’s market are almost always high-speed or premium high-speed, the latter of which can handle just about any 4K content.
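To make the "speed category, not price" point concrete, here's a minimal sketch comparing each certification tier's headline bandwidth against what a signal needs. The tier names follow the official cable certifications, but the helper function and the bandwidth-comparison approach are our own simplification (real links also negotiate signalling modes, compression, and chroma subsampling).

```python
# Nominal maximum throughput of each HDMI cable certification tier, in Gbps.
CABLE_TIERS = {
    "standard": 4.95,            # SD / 720p / 1080i era
    "high_speed": 10.2,          # up to 4K at 30Hz
    "premium_high_speed": 18.0,  # 4K at 60Hz with HDR
    "ultra_high_speed": 48.0,    # 4K at 120Hz and 8K
}

def cable_supports(tier: str, required_gbps: float) -> bool:
    """True if a certified cable of this tier can carry the signal."""
    return CABLE_TIERS[tier] >= required_gbps
```

The takeaway: a $10 premium-high-speed cable and a $100 one sit in the same row of that table, and the row is all the signal cares about.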
Gold-plated connectors and “signal fidelity” claims are unnecessary, too. In fact, buying high-priced cables means you’re just paying for a brand name, gimmicky features, and possibly a fancy box. The one exception is next-gen gaming. If you have hardware capable of pushing 4K at 120fps, treat yourself to an ultra-high-speed cable — but even then, these are often reasonably priced; you don’t need to fork over a fortune.
Myth: you need an extended warranty
bangoland/Shutterstock
The moment you buy a new TV, just wait for the extended warranty hard sell. But did you know that extended warranties are often far more profitable for retailers than the hardware itself? In many cases, they pocket more than half of what you pay for the plan. With the global extended warranty market projected to reach an incredible $286.4 billion by 2032 according to Allied Market Research, this is not an industry built on goodwill — it’s a serious business. But the reality is that modern flat-screen TVs fail at a very low rate; we’re talking single-digit percentage numbers here. And when something does go wrong, the repair cost is usually just marginally higher than what you would have paid for the extended warranty. Consumer Reports put it bluntly when they said, “You shouldn’t have to pay extra to get manufacturers or retailers to stand behind their products.”
The pricing is not arbitrary, either. Companies work out how many TVs in a given model are likely to fail and set their prices accordingly, which ensures they always come out on top. In reality, you're not buying protection for your TV; you're subsidizing their profits. Even if you do make a claim on your extended warranty, the experience is seldom straightforward: repairs drag on and often take more than one attempt. Most major credit cards quietly offer the cardholder a warranty extension as a free perk anyway, as long as you use that card to purchase the TV. The smart move is to keep your money or stash it in a repair fund. On a TV that is statistically very unlikely to need fixing, the odds are firmly in your favor.
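The expected-value math behind this is simple. The failure rate, repair cost, and plan price below are hypothetical illustrations, not figures from any retailer, but they show why the numbers favor the seller.

```python
# Hypothetical figures: a rough expected-value comparison between buying
# an extended warranty and simply self-insuring. All numbers are
# illustrative assumptions, not real retailer data.
failure_rate = 0.05      # single-digit failure rate over the coverage period
avg_repair_cost = 250.0  # typical out-of-pocket repair bill
warranty_price = 150.0   # typical extended-warranty plan price

# What you'd expect to spend on repairs, on average, with no plan at all.
expected_repair_cost = failure_rate * avg_repair_cost  # roughly $12.50

# The gap between the plan's price and its expected payout is margin.
expected_retailer_margin = warranty_price - expected_repair_cost

print(f"Expected repair cost without a plan: ${expected_repair_cost:.2f}")
print(f"Expected retailer margin per plan sold: ${expected_retailer_margin:.2f}")
```

Under these assumed numbers, the plan costs roughly ten times what the average buyer will ever get back from it, which is exactly why it's pitched so hard.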
Myth: TV contrast ratio specs are accurate
TextureVerse Studio/Shutterstock
Contrast ratio measures how deep a TV’s blacks are against how bright its whites can get — and it is one of the most important factors in picture quality. However, if you’ve ever compared the contrast ratios of two TVs, you’ve probably been misled. That’s because the numbers are not directly comparable across brands. Manufacturers are not required to follow any single testing procedure when measuring it, so every brand does it differently — and most measure it in whatever way produces the biggest number.
At the heart of this is the difference between native and dynamic contrast ratio. Every TV has a native contrast ratio — what the screen can physically produce. Many also have dynamic contrast, a feature that adjusts brightness in dark and light scenes to deepen blacks and brighten whites. Because the dynamic figure is often much larger than the native figure, manufacturers sometimes highlight it on packaging — and it cannot be trusted as a reliable guide to what you will actually see. The number on the box is not a standardized measurement; it’s a marketing decision. With no standard benchmark, these numbers are essentially meaningless.
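As a rough sketch of why the two figures diverge so wildly: contrast ratio is just peak white luminance divided by black luminance, both in nits. The panel measurements below are illustrative assumptions, not specs for any particular TV.

```python
# Contrast ratio is peak white luminance divided by black luminance,
# both measured in nits. All luminance values here are illustrative.
def contrast_ratio(white_nits: float, black_nits: float) -> float:
    """How many times brighter the brightest white is than the darkest black."""
    return white_nits / black_nits

# Native contrast: white and black measured in the same frame.
# A hypothetical VA LCD panel: 300 nits white, 0.06 nits black.
native = contrast_ratio(300, 0.06)      # about 5,000:1

# "Dynamic" contrast: the brightest white from one scene against the
# darkest black from another, after the backlight dims. This is how a
# spec sheet can advertise a figure like 1,000,000:1 for the same panel.
dynamic = contrast_ratio(600, 0.0006)   # about 1,000,000:1
```

Same hypothetical panel, two wildly different numbers, and nothing stops the larger one from going on the box.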
Myth: OLED burn-in is still a serious concern
Amtitus/Getty Images
Burn-in, the ghostly remnant of a static image permanently etched on an OLED screen, has long haunted the technology and spooked many buyers over the years. It's probably the main reason many people have opted for LCD TVs instead. But should you be worried about burn-in on OLED TVs? Evidence suggests that fear is largely misplaced. Most people who think their screen has burn-in are actually experiencing image retention, which is temporary and clears up on its own. True burn-in is permanent and was a legitimate concern with older OLEDs, but nowadays it requires extreme conditions. When it does happen, it's because the same static element, like a news channel logo, has been left on the screen at high brightness for days on end.
RTINGS decided to put this one to bed with one of the most comprehensive TV longevity studies ever run: a 3-year accelerated test on over 100 TVs, accumulating more than 10,000 hours of usage. In the end, every single OLED did eventually show burn-in, but the testers made clear this was the result of deliberately extreme conditions that do not represent normal use. In an earlier test, RTINGS ran six OLED TVs for over 9,000 hours showing a mix of general TV content, the same way people actually watch. Not one of them developed significant burn-in. Myth debunked.
Methodology
Yuganov Konstantin/Shutterstock
We searched for the most widely discussed TV myths on the internet; the five listed here are easily the most talked about. We then dug deeper and found expert sources that have firmly debunked each one. Our author also leaned on personal experience as a long-time nonbeliever in some of these myths: a small 1080p TV mounted on a bedroom wall never posed a problem over years of use, and affordable HDMI cables have never given any trouble. The writer is also too frugal to buy extended warranties, and skipping them has never caused an issue. Still, all of this debunking is backed by reputable sources rather than the author's intuition alone.