
Tech

Rockstar Games hit with ransom demand after third-party data breach

The group responsible, ShinyHunters, says it didn’t breach Rockstar or its data-warehouse provider, Snowflake. Instead, it exploited access from Anodot, a SaaS analytics tool Rockstar uses to track cloud costs and performance. The attackers allegedly stole authentication tokens from Anodot’s systems and used them to gain unauthorized access to Rockstar’s…


Sunday Reboot: MacBook Neo upgrades, masses of Mac minis, and iPhone re-entry

In this week’s “Sunday Reboot,” a storage upgrade for the MacBook Neo, an excuse to buy many Mac minis, and the iPhones come back to Earth with a late congratulatory message.

Silhouetted person in shadow peers through a narrow opening toward a brightly lit wall of stacked silver computer mini desktops with ports and indicator lights
Image credits: NASA/Overcast

Sunday Reboot is a weekly column covering some of the lighter stories within the Apple reality distortion field from the past seven days. All to get the next week underway with a good first step.
This week, researchers managed to get around Apple Intelligence security measures using prompt injection techniques, a repairability report panned Apple’s hardware again, and Apple’s lawsuit with Epic Games over the App Store continued to roll on. There was also a bug found to break Mac networking every 49 days, 17 hours, two minutes, and 47 seconds.



Tesla Achieves European Breakthrough as Full Self-Driving Supervised Reaches Dutch Roads

Tesla FSD Europe Netherlands Launch Test Drive
Tesla received approval on April 10 from the Dutch vehicle regulator, the RDW, for its Full Self-Driving Supervised system to be used on European roads, making it the first company to win approval for this advanced technology anywhere in Europe and marking a significant milestone. The system has been cleared to run on public roads in the Netherlands, and the rollout began the next day, April 11, for a limited number of early testers who had been patiently waiting.



Drivers with Hardware 4 (HW4) computers in their vehicles received an update to version 2026.3.6, which included the European-tuned build of FSD 14.2.2.5. Before enabling the system, drivers must complete a short tutorial followed by a brief test in the car’s interface. Once that’s done, they can take their hands off the wheel in appropriate situations, while cabin cameras monitor their eyes for attention. If a driver becomes distracted, the system first displays visual alerts, then escalates to sounds and vibrations, and if the driver still does not respond, the car slows down and comes to a safe stop on its own.



Overall, this was the product of 18 months of testing, which included more than 1.5 million kilometers of driving on European highways, as well as numerous controlled scenarios on closed tracks. Before approving the system, regulators reviewed almost 400 compliance points. The RDW pronounced it a beneficial addition to road safety, but stressed that drivers must remain in the driver’s seat and ready to take over at any time.


The European version of the software is substantially different from the one available in the United States, primarily because European regulators require pre-market checks, as opposed to the self-certification approach of their US counterparts. As a result, the Dutch build is more cautious and limits some of the more aggressive driving behaviors available elsewhere. Automatic turns at junctions and navigation-based lane changes are still available, but several parking-lot summoning capabilities found in the United States are not offered in the Netherlands.



Subscribers will pay 99 euros per month for the system, or 49 euros per month if they already have Enhanced Autopilot; alternatively, they can purchase it outright for 7,500 euros. Tesla says the system leverages billions of kilometers of real-world data collected worldwide, and Elon Musk has said that the RDW review process was particularly rigorous.

For now, the Dutch approval is a one-off, with a provisional validity period of at least 36 months. Other European states can adopt it on their own, and authorities in Germany, France, and Italy are expected to do so within the next 4 to 8 weeks. Tesla’s goal is to have the system more widely approved across the EU by the summer, allowing millions of drivers to use it without the testing procedure being repeated in each nation.


Apple reportedly testing out four different styles for its smart glasses that will rival Meta Ray-Bans

Apple may be late to the smart glasses market, but it could be covering all its bases with up to four potential styles for its upcoming product. According to Bloomberg‘s Mark Gurman, Apple could launch some or all of the four styles it’s currently testing for its smart glasses.

Gurman reported that Apple is testing a large rectangular frame comparable to Ray-Ban Wayfarers, a slimmer rectangular design like the glasses Apple CEO Tim Cook wears, a larger oval or circular frame, and a smaller oval or circular option. Apple is also working on a range of colors, including black, ocean blue and light brown, according to Bloomberg.

Internally code-named N50 for now, Apple’s upcoming smart glasses will compete directly with the second-gen Ray-Ban Meta model. While similar, Apple might be differentiating its design with “vertically oriented oval lenses with surrounding lights,” according to the report. Like Meta’s smart glasses, Apple’s upcoming product will capture photos and videos, but is meant to better sync with an iPhone, allowing users to take advantage of Apple’s ecosystem for editing, sharing, phone calls, notifications, music and even its voice assistant, according to Gurman. The release of Apple’s smart glasses could even coincide with the upcoming improved Siri that should arrive with iOS 27.

Gurman reported that Apple could reveal its smart glasses as soon as the end of 2026 or early 2027, followed by an official release sometime in 2027. As for the competition, Meta released its latest model that’s better suited for prescription lenses and offers a more customizable fit.



The MacBook Neo is moonlighting as a Windows gaming machine, and it’s doing it well

Apple didn’t position its most affordable MacBook as a gaming machine. Even so, the MacBook Neo, a budget-leaning laptop that runs on Apple’s A18 Pro chip (the same chip that powers the iPhone 16 Pro models), has been put through a Windows 11 gaming test by YouTuber ETA Prime.

Turns out, the results are genuinely surprising. Using Parallels Desktop, a paid virtualization app with 3D hardware acceleration, the channel ran Windows 11 on ARM on the Neo’s 8GB of RAM (allocating 5GB to the virtual environment), and it did better than most people would expect.

What games actually ran well?

Dirt 3 held 75 fps at 1200p on high settings, while Portal 2 cleared 100 fps on medium settings. Skyrim, on the other hand, maintained roughly 60 fps at 1200p resolution on medium graphics settings, while Marvel Cosmic Invasion averaged around 60 fps at the maximum resolution.


What helped performance was games running as native Windows-on-ARM applications. GTA V, however, was among the notable stumbles: frame rates through Parallels weren’t playable at all, though according to Notebookcheck, the game runs acceptably via CrossOver.

Why does this matter for everyday MacBook Neo users?

For users who work on their Mac but occasionally enjoy Windows-only games, the MacBook Neo’s ability to run native titles via Parallels comes as good news. The cost? Parallels Desktop’s Standard tier runs $99.99 per year, an added expense for those weekend gaming sessions.

Anyway, the bigger takeaway is that the MacBook Neo, even with 8GB of RAM (highlighted as a constraint in the video), can run low-to-mid-range Windows games. It also challenges the notion that budget Apple hardware is primarily for productivity tasks.

As virtualization tech continues to improve and Apple provides more RAM in future generations of the MacBook Neo, it could redefine what “budget” actually means for Apple buyers, bridging the gap between MacBook and Windows laptops even further. 



How the Budget-Friendly BougeRV 23-Quart 12V Fridge Keeps Food Fresh Through Every Drive

BougeRV 23 Quart 12V Portable Fridge Car
Summer heat makes any travel difficult, especially if you’re transporting groceries or cold drinks. Drivers are often forced to rely on simple coolers with ice that melts faster than a popsicle on a hot day, leaving everything wet by the time they reach their destination. That’s where the BougeRV 23-quart unit, priced at $159.97 (down from $189.99), comes in: a more practical solution that plugs directly into your car’s standard 12V socket and keeps items chilled without any of the fuss.



The unit is 22 inches long and weighs just over 21 pounds, so it can fit into even the smallest trunks or backseats. It also has a built-in handle, making it simple to pull out at a rest stop or carry home after a long shopping trip. Inside, there’s enough space for a couple of days’ worth of food or a full load of drinks and snacks for a family road trip.



It runs off a 12-volt socket, found in practically every modern vehicle, with options for household outlets or even solar power if you’re parked for an extended period. The compressor system kicks in quickly, cooling in about 15 minutes, and maintains a consistent temperature between 8 degrees below zero and 50 degrees Fahrenheit, letting you choose between fridge and freezer mode as needed.

The portable fridge uses very little energy (around 36 watts in its energy-saving mode), and smart compressor cycling keeps daily power consumption under one kilowatt-hour even on the warmest days (36 W running around the clock works out to roughly 0.86 kWh). To be on the safe side, a built-in battery monitor will shut the fridge off before it drains your vehicle’s battery, so you don’t have to worry about that.

People who have used it on road trips note that it works effectively, keeping perishables from spoiling without the need to constantly add ice, and it absorbs bumps in the road well, even operating at a 30-degree tilt. If you’re only running to the store for a quick trip, the fridge will keep running until you return home, even if you get stuck in traffic.



New FCC router rules could trap millions using outdated ISP hardware as supply chain limits stall upgrades and complicate security fixes

  • FCC rules block new foreign routers while old, vulnerable ones stay in homes longer
  • ISP customers cannot upgrade routers even when security risks become widely known
  • Router approvals now depend on waivers that may slow down nationwide replacements

The Federal Communications Commission (FCC) has issued new rules intended to address security risks posed by routers produced outside the United States.

A number of recent incidents have shown foreign-made routers to be vulnerable to cyberattacks, with campaigns like Flax Typhoon, Volt Typhoon, and Salt Typhoon making headlines around the world.



Your developers are already running AI locally: Why on-device inference is the CISO’s new blind spot

For the last 18 months, the CISO playbook for generative AI has been relatively simple: Control the browser.

Security teams tightened cloud access security broker (CASB) policies, blocked or monitored traffic to well-known AI endpoints, and routed usage through sanctioned gateways. The operating model was clear: If sensitive data leaves the network for an external API call, we can observe it, log it, and stop it. But that model is starting to break.

A quiet hardware shift is pushing large language model (LLM) usage off the network and onto the endpoint. Call it Shadow AI 2.0, or the “bring your own model” (BYOM) era: Employees running capable models locally on laptops, offline, with no API calls and no obvious network signature. The governance conversation is still framed as “data exfiltration to the cloud,” but the more immediate enterprise risk is increasingly “unvetted inference inside the device.”

When inference happens locally, traditional data loss prevention (DLP) doesn’t see the interaction. And when security can’t see it, it can’t manage it.


Why local inference is suddenly practical

Two years ago, running a useful LLM on a work laptop was a niche stunt. Today, it’s routine for technical teams.

Three things converged:

  • Consumer-grade accelerators got serious: A MacBook Pro with 64GB unified memory can often run quantized 70B-class models at usable speeds (with practical limits on context length). What once required multi-GPU servers is now feasible on a high-end laptop for many real workflows.

  • Quantization went mainstream: It’s now easy to compress models into smaller, faster formats that fit within laptop memory, often with acceptable quality tradeoffs for many tasks.

  • Distribution is frictionless: Open-weight models are a single command away, and the tooling ecosystem makes “download → run → chat” trivial.

The result: An engineer can pull down a multi‑GB model artifact, turn off Wi‑Fi, and run sensitive workflows locally: source code review, document summarization, drafting customer communications, even exploratory analysis over regulated datasets. No outbound packets, no proxy logs, no cloud audit trail.

From a network-security perspective, that activity can look indistinguishable from “nothing happened”.


The risk isn’t only data leaving the company anymore

If the data isn’t leaving the laptop, why should a CISO care?

Because the dominant risks shift from exfiltration to integrity, provenance, and compliance. In practice, local inference creates three classes of blind spots that most enterprises have not operationalized.

1. Code and decision contamination (integrity risk)

Local models are often adopted because they’re fast, private, and “no approval required.” The downside is that they’re frequently unvetted for the enterprise environment.

A common scenario: A senior developer downloads a community-tuned coding model because it benchmarks well. They paste in internal auth logic, payment flows, or infrastructure scripts to “clean it up.” The model returns output that looks competent, compiles, and passes unit tests, but subtly degrades security posture (weak input validation, unsafe defaults, brittle concurrency changes, dependency choices that aren’t allowed internally). The engineer commits the change.


If that interaction happened offline, you may have no record that AI influenced the code path at all. And when you later do incident response, you’ll be investigating the symptom (a vulnerability) without visibility into a key cause (uncontrolled model usage).

2. Licensing and IP exposure (compliance risk)

Many high-performing models ship with licenses that include restrictions on commercial use, attribution requirements, field-of-use limits, or obligations that can be incompatible with proprietary product development. When employees run models locally, that usage can bypass the organization’s normal procurement and legal review process.

If a team uses a non-commercial model to generate production code, documentation, or product behavior, the company can inherit risk that shows up later during M&A diligence, customer security reviews, or litigation. The hard part is not just the license terms; it’s the lack of inventory and traceability. Without a governed model hub or usage record, you may not be able to prove what was used where.

3. Model supply chain exposure (provenance risk)

Local inference also changes the software supply chain problem. Endpoints begin accumulating large model artifacts and the toolchains around them: downloaders, converters, runtimes, plugins, UI shells, and Python packages.


There is a critical technical nuance here: The file format matters. While newer formats like Safetensors are designed to prevent arbitrary code execution, older Pickle-based PyTorch files can execute malicious payloads simply by being loaded. If your developers are grabbing unvetted checkpoints from Hugging Face or other repositories, they aren’t just downloading data: they could be downloading an exploit.

Security teams have spent decades learning to treat unknown executables as hostile. BYOM requires extending that mindset to model artifacts and the surrounding runtime stack. The biggest organizational gap today is that most companies have no equivalent of a software bill of materials for models: Provenance, hashes, allowed sources, scanning, and lifecycle management.

Mitigating BYOM: treat model weights like software artifacts

You can’t solve local inference by blocking URLs. You need endpoint-aware controls and a developer experience that makes the safe path the easy path.

Here are three practical ways:


1. Move governance down to the endpoint

Network DLP and CASB still matter for cloud usage, but they’re not sufficient for BYOM. Start treating local model usage as an endpoint governance problem by looking for specific signals:

  • Inventory and detection: Scan for high-fidelity indicators like .gguf files larger than 2GB, processes like llama.cpp or Ollama, and local listeners on common default ports such as 11434.

  • Process and runtime awareness: Monitor for repeated high GPU/NPU (neural processing unit) utilization from unapproved runtimes or unknown local inference servers.

  • Device policy: Use mobile device management (MDM) and endpoint detection and response (EDR) policies to control installation of unapproved runtimes and enforce baseline hardening on engineering devices.

The point isn’t to punish experimentation. It’s to regain visibility.
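The first two signals above lend themselves to simple standard-library checks. The sketch below assumes the 2GB threshold and port 11434 mentioned in the article; file locations, thresholds, and runtimes will vary by fleet, so treat this as an illustration rather than a production scanner:

```python
import socket
from pathlib import Path

GGUF_THRESHOLD_BYTES = 2 * 1024**3   # 2 GB, per the indicator above
OLLAMA_DEFAULT_PORT = 11434          # Ollama's default local listener

def find_large_model_artifacts(root, threshold=GGUF_THRESHOLD_BYTES):
    """Return .gguf files at or above the size threshold under `root`."""
    hits = []
    for path in Path(root).rglob("*.gguf"):
        try:
            if path.stat().st_size >= threshold:
                hits.append(str(path))
        except OSError:
            continue  # unreadable file: skip rather than abort the sweep
    return sorted(hits)

def local_listener_open(port=OLLAMA_DEFAULT_PORT):
    """True if something accepts TCP connections on localhost:`port`."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(0.5)
        return sock.connect_ex(("127.0.0.1", port)) == 0
```

In practice an EDR agent would run checks like these on a schedule and feed hits into inventory, rather than a one-off script.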

2. Provide a paved road: An internal, curated model hub

Shadow AI is often an outcome of friction. Approved tools are too restrictive, too generic, or too slow to approve. A better approach is to offer a curated internal catalog that includes:

  • Approved models for common tasks (coding, summarization, classification)

  • Verified licenses and usage guidance

  • Pinned versions with hashes (prioritizing safer formats like Safetensors)

  • Clear documentation for safe local usage, including where sensitive data is and isn’t allowed.

If you want developers to stop scavenging, give them something better.

3. Update policy language: “Cloud services” isn’t enough anymore

Most acceptable use policies talk about SaaS and cloud tools. BYOM requires policy that explicitly covers:

  • Downloading and running model artifacts on corporate endpoints

  • Acceptable sources

  • License compliance requirements

  • Rules for using models with sensitive data

  • Retention and logging expectations for local inference tools

This doesn’t need to be heavy-handed. It needs to be unambiguous.

The perimeter is shifting back to the device

For a decade we moved security controls “up” into the cloud. Local inference is pulling a meaningful slice of AI activity back “down” to the endpoint.

5 signals shadow AI has moved to endpoints:

  • Large model artifacts: Unexplained storage consumption by .gguf or .pt files.

  • Local inference servers: Processes listening on ports like 11434 (Ollama).

  • GPU utilization patterns: Spikes in GPU usage while offline or disconnected from VPN.

  • Lack of model inventory: Inability to map code outputs to specific model versions.

  • License ambiguity: Presence of “non-commercial” model weights in production builds.

Shadow AI 2.0 isn’t a hypothetical future; it’s a predictable consequence of fast hardware, easy distribution, and developer demand. CISOs who focus only on network controls will miss what’s happening on the silicon sitting right on employees’ desks.

The next phase of AI governance is less about blocking websites and more about controlling artifacts, provenance, and policy at the endpoint, without killing productivity.

Jayachander Reddy Kandakatla is a senior MLOps engineer.

Welcome to the VentureBeat community!


Our guest posting program is where technical experts share insights and provide neutral, non-vested deep dives on AI, data infrastructure, cybersecurity and other cutting-edge technologies shaping the future of enterprise.

Read more from our guest post program — and check out our guidelines if you’re interested in contributing an article of your own!



Five signs data drift is already undermining your security models

Data drift happens when the statistical properties of a machine learning (ML) model’s input data change over time, eventually rendering its predictions less accurate. Cybersecurity professionals who rely on ML for tasks like malware detection and network threat analysis find that undetected data drift can create vulnerabilities. A model trained on old attack patterns may fail to see today’s sophisticated threats. Recognizing the early signs of data drift is the first step in maintaining reliable and efficient security systems.

Why data drift compromises security models

ML models are trained on a snapshot of historical data. When live data no longer resembles this snapshot, the model’s performance dwindles, creating a critical cybersecurity risk. A threat detection model may generate more false negatives by missing real breaches or create more false positives, leading to alert fatigue for security teams.

Adversaries actively exploit this weakness. In 2024, attackers used echo-spoofing techniques to bypass email protection services. By exploiting misconfigurations in the system, they sent millions of spoofed emails that evaded the vendor’s ML classifiers. This incident demonstrates how threat actors can manipulate input data to exploit blind spots. When a security model fails to adapt to shifting tactics, it becomes a liability.

5 indicators of data drift

Security professionals can recognize the presence of drift (or its potential) in several ways.


1. A sudden drop in model performance

Accuracy, precision, and recall are often the first casualties. A consistent decline in these key metrics is a red flag that the model is no longer in sync with the current threat landscape.

Consider Klarna’s success: Its AI assistant handled 2.3 million customer service conversations in its first month and performed work equivalent to 700 agents. This efficiency drove a 25% decline in repeat inquiries and reduced resolution times to under two minutes.

Now imagine if those parameters suddenly reversed because of drift. In a security context, a similar drop in performance does not just mean unhappy clients — it also means successful intrusions and potential data exfiltration.

2. Shifts in statistical distributions

Security teams should monitor the core statistical properties of input features, such as the mean, median, and standard deviation. A significant change in these metrics from training data could indicate the underlying data has changed.


Monitoring for such shifts enables teams to catch drift before it causes a breach. For example, a phishing detection model might be trained on emails with an average attachment size of 2MB. If the average attachment size suddenly jumps to 10MB due to a new malware-delivery method, the model may fail to classify these emails correctly.
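A check like this can be sketched with the attachment-size example above. The three-standard-deviation alert threshold below is an illustrative assumption, not a recommendation; real deployments tune it per feature:

```python
import statistics

def mean_shift_alert(train_values, live_values, z_threshold=3.0):
    """Flag drift when the live mean sits more than `z_threshold`
    training standard deviations away from the training mean."""
    mu = statistics.mean(train_values)
    sigma = statistics.stdev(train_values)
    z = abs(statistics.mean(live_values) - mu) / sigma
    return z > z_threshold

# Trained on ~2MB attachments; live traffic suddenly averages ~10MB
train_mb = [1.8, 2.0, 2.2, 1.9, 2.1, 2.0]
drifted = mean_shift_alert(train_mb, [9.8, 10.1, 10.0])   # flags drift
stable = mean_shift_alert(train_mb, [2.05, 1.95, 2.0])    # no alert
```

The same pattern extends to medians and standard deviations; the key design choice is comparing live statistics against a frozen training baseline rather than a moving window.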

3. Changes in prediction behavior

Even if overall accuracy seems stable, distributions of predictions might change, a phenomenon often referred to as prediction drift.

For instance, if a fraud detection model historically flagged 1% of transactions as suspicious but suddenly starts flagging 5% or 0.1%, either something has shifted or the nature of the input data has changed. It might indicate a new type of attack that confuses the model or a change in legitimate user behavior that the model was not trained to identify.

4. An increase in model uncertainty

For models that provide a confidence score or probability with their predictions, a general decrease in confidence can be a subtle sign of drift.


Recent studies highlight the value of uncertainty quantification in detecting adversarial attacks. If the model becomes less sure about its forecasts across the board, it is likely facing data it was not trained on. In a cybersecurity setting, this uncertainty is an early sign of potential model failure, suggesting the model is operating in unfamiliar territory and that its decisions may no longer be reliable.

5. Changes in feature relationships

The correlation between different input features can also change over time. In a network intrusion model, traffic volume and packet size might be highly linked during normal operations. If that correlation disappears, it can signal a change in network behavior that the model may not understand. A sudden feature decoupling could indicate a new tunneling tactic or a stealthy exfiltration attempt.
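A decoupling check of this kind can be sketched by comparing Pearson correlations between a training window and a live window. The 0.5 correlation-drop threshold here is an assumption for illustration:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

def correlation_decoupled(train_x, train_y, live_x, live_y, max_drop=0.5):
    """Flag when a feature pair's correlation falls by more than
    `max_drop` between the training window and the live window."""
    return pearson(train_x, train_y) - pearson(live_x, live_y) > max_drop
```

For a network model, `train_x`/`train_y` might be per-flow traffic volume and packet size; a live window where those series no longer track each other would trip the flag.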

Approaches to detecting and mitigating data drift

Common detection methods include the Kolmogorov-Smirnov (KS) test and the population stability index (PSI). Both compare the distributions of live and training data to identify deviations: the KS test determines whether two samples differ significantly, while the PSI measures how much a variable’s distribution has shifted over time.
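As a concrete sketch, PSI can be computed with only the standard library (the KS test usually comes from a stats library such as scipy.stats.ks_2samp). The bin count and the 0.1/0.25 interpretation thresholds below are common rules of thumb, not fixed standards:

```python
import math

def psi(expected, actual, bins=10, floor=1e-4):
    """Population Stability Index between a training sample (`expected`)
    and a live sample (`actual`), using quantile bins cut on the training
    data. Rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift,
    > 0.25 major shift."""
    xs = sorted(expected)
    # Bin edges at the training data's quantiles
    edges = [xs[len(xs) * i // bins] for i in range(1, bins)]

    def bin_fractions(sample):
        counts = [0] * bins
        for v in sample:
            counts[sum(v > e for e in edges)] += 1
        # Floor empty bins to avoid log(0) and division by zero
        return [max(c / len(sample), floor) for c in counts]

    exp_f, act_f = bin_fractions(expected), bin_fractions(actual)
    return sum((a - e) * math.log(a / e) for e, a in zip(exp_f, act_f))
```

Cutting the bins on the training data (not the live data) is deliberate: the baseline stays fixed, so a rising PSI reflects movement in the live distribution rather than in the yardstick.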

The mitigation method of choice often depends on how the drift manifests. Distribution changes may occur suddenly; for example, customers’ buying behavior may change overnight with the launch of a new product or a promotion. In other cases, drift occurs gradually over a longer period. Security teams must therefore adjust their monitoring cadence to capture both rapid spikes and slow burns. Mitigation typically involves retraining the model on more recent data to restore its effectiveness.


Proactively manage drift for stronger security

Data drift is an inevitable reality, but cybersecurity teams can maintain a strong security posture by treating its detection as a continuous, automated process. Proactive monitoring and model retraining are fundamental practices for ensuring ML systems remain reliable allies against evolving threats.

Zac Amos is the Features Editor at ReHack.




ESPN on Disney Plus Is Expanding to More Countries

More people will be able to watch ESPN programming through Disney Plus with Tuesday’s launch of ESPN on Disney Plus in Europe and select Asia-Pacific markets. 

With expansion into more than 50 countries and territories in those regions, people in 100 markets worldwide can now stream ESPN content through Disney Plus, according to a Disney Plus news release. The offering brings live sporting events and studio shows together with general entertainment and family programming in a single app.

In markets including Japan, Korea, Singapore, Taiwan and Hong Kong, a curated selection of English‑language ESPN sports programming is now available on Disney Plus, according to the release. Disney Plus also said, “the initial [ESPN on Disney Plus] offering will vary by market but will grow to thousands of live events over the next year.” 


Programming includes US coverage of the NBA and NHL starting with the 2026-27 season, college sports and more live events. Disney Plus subscribers can watch ESPN’s 30 for 30 documentary collection and select studio shows.

Pre-existing sports content on Disney Plus in Europe includes the UEFA Women’s Champions League, La Liga in the UK and Ireland and the Copa del Rey, UEFA Europa League, UEFA Conference League and DFB Pokal in the Nordic countries, according to Disney Plus.


People in Europe and select Asia-Pacific markets just need a Disney Plus subscription to watch ESPN content on Disney Plus. In the US, Disney Plus standalone subscribers can access a curated selection of live sports events, studio shows, and ESPN films, but must subscribe to Disney Plus and ESPN Unlimited to watch all available ESPN programming on the platform.


The ESPN on Disney Plus offering is also available to people in Latin America, the Caribbean, Australia and New Zealand.



Amazon’s Fire TVs risk being left in the doldrums by Hisense and TCL’s Mini LEDs

Published

on

I’ve reviewed a few Amazon Fire TV Series models over the last few years, and generally, I’ve found them to be solid enough TVs.

I’ve always had the suspicion that they could offer better picture quality, and certainly cost a little less, but then when Amazon’s sales event comes around, the TVs fall to prices verging on impulse-buy territory if you want a cheap TV.

I don’t think you could say the same about Amazon’s TVs now.

Having reviewed the newest Fire TV 4-Series, I found it underwhelming. The problems were multiple. For one, it didn’t seem to be a big enough upgrade on the previous generation, at least from a performance perspective.


Secondly, the competition has heated up, or to be more exact, it’s got cheaper. Hisense and TCL’s Mini LEDs can now be had for around the same price as, if not less than, Amazon’s Direct LED TVs.


The less expensive Fire TVs are no longer the value-led proposition they were a few years ago. And by undercutting Amazon’s own QLED and Mini LED models, the more expensive Fire TVs could be in trouble too.


An aggressive expansion…

Hisense 65U7Q Pro TV lifestyle
Image Credit (Trusted Reviews)

Hisense’s approach to the UK TV market has been a gradual one, offering value-focused TVs similar to Amazon’s Fire TVs while adding premium-priced TVs over time. It’s not interested in OLED (though it does offer an OLED model) as it sees no point in competing with LG and Samsung when the playing field is heavily weighted in their favour. Instead, it wants to make its mark with Mini LEDs.


TCL entered the UK market later than Hisense and has been playing catch-up, upending the market with aggressive pricing to gain share, and it’s working. From the bits of data I’ve seen here and there, its share of the market is on an upward trend, whereas other, more established players have stagnated or shrunk in the last few years.


Both have made the play for Mini LED, bringing sizeable brightness, wide-ranging colours and more precise backlighting for black levels and contrast down to a price that some other TV manufacturers might baulk at.

Right now you can get a Hisense 55-inch U7Q for £599, and a TCL 55-inch C6KS for £426. The 55-inch Fire TV 4-Series is down to £339, but you can see that there’s less room for manoeuvre with Mini LED prices coming down.

Amazon needs to refocus on performance

Amazon Fire TV 4-Series 2026 Final Reckoning
Image Credit (Trusted Reviews)

I think overall that Amazon’s Fire TVs can be considered a solid proposition, but they do need to offer better performance.


The focus has been on value, but with a TCL Mini LED hitting nearly 1,000 nits of brightness against a budget Fire TV 4-Series that can only manage 350 nits, there’s a chasm, and it’s only going to grow over subsequent years. Amazon needs to pull its finger out.


Amazon was the brand undercutting the likes of Sony, Panasonic and LG, but that’s now changed with the rise of the Chinese brands. Moreover, the best Fire TVs are no longer made by Amazon but by its partners.

Fire TVs made by JVC were the epitome of bang average, while the likes of Toshiba offered an even cheaper alternative, but Panasonic made better-performing Fire TVs. Beyond the pricing threat from TCL and Hisense, there’s a risk that Amazon’s TVs get left behind by other brands. Imagine a world where Amazon’s TVs were neither the best value nor the best performing. Would you buy one if they didn’t fulfil either promise?

I don’t doubt that they’re still selling well at the moment, so this acts as more of a warning: Amazon’s Fire TVs need a revamp, especially from a performance perspective, because right now it feels as if its TVs are retreading old ground rather than moving forward.

The playing field has altered quite significantly in the last few years and as I wrote in my review for the Fire TV 4-Series, if you’re standing still and others are moving past you, then you might as well be going backwards.



Copyright © 2025