Tech

‘The fastest desktop gaming processors Intel has ever built’: new Arrow Lake Refresh CPUs are priced to sell, and AMD should be worried

  • Intel revealed new Arrow Lake Refresh processors
  • They are the Intel Core Ultra 7 270K Plus and Core Ultra 5 250K Plus
  • Both offer core count increases compared to their Arrow Lake predecessors — and a sizeable boost in gaming performance to the tune of 15%

Intel has released a pair of new desktop processors: refreshed models that push forward the firm’s current Arrow Lake range.

Tom’s Hardware reports that these Arrow Lake Refresh chips are the Intel Core Ultra 7 270K Plus and Core Ultra 5 250K Plus. These are pepped-up models of the existing Core Ultra 7 265K and Core Ultra 5 245K CPUs, respectively.

Tech

SQLi flaw in Elementor Ally plugin impacts 250k+ WordPress sites

An SQL injection vulnerability in Ally, a WordPress plugin from Elementor for web accessibility and usability with more than 400,000 installations, could be exploited to steal sensitive data without authentication.

The security issue, tracked as CVE-2026-2313, received a high severity score. It was discovered by Drew Webber (mcdruid), an offensive security engineer at Acquia, a software-as-a-service company that provides an enterprise-level Digital Experience Platform (DXP).

SQL injection flaws have been around for more than 25 years and continue to be a threat today, despite being well understood and technically easy to fix and avoid. This type of security issue occurs when user input is directly inserted into an SQL database query without proper sanitization or parameterization.

This allows an attacker to inject SQL commands that alter the query’s behavior to read, modify, or delete information in the database.
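As an illustration of the mechanism (a Python/sqlite3 sketch, not the plugin's actual PHP code), the difference between concatenating user input into a query and binding it as a parameter looks like this:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pages (url TEXT, secret TEXT)")
conn.execute("INSERT INTO pages VALUES ('/home', 'api-key-123')")

def find_page_unsafe(url: str):
    # VULNERABLE: user input is concatenated straight into the SQL text,
    # so a crafted value can rewrite the query itself.
    query = "SELECT url FROM pages WHERE url = '" + url + "'"
    return conn.execute(query).fetchall()

def find_page_safe(url: str):
    # SAFE: the ? placeholder sends the value separately from the SQL,
    # so quotes in the input cannot change the query's structure.
    return conn.execute("SELECT url FROM pages WHERE url = ?", (url,)).fetchall()

payload = "' UNION SELECT secret FROM pages --"
print(find_page_unsafe(payload))  # leaks: [('api-key-123',)]
print(find_page_safe(payload))    # returns: []
```

The unsafe version lets the injected `UNION` pull the secret column out of the database; the parameterized version treats the whole payload as an ordinary (non-matching) string.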

CVE-2026-2313 affects all Ally versions up to 4.0.3 and lets an unauthenticated attacker inject SQL queries via the URL path, due to improper handling of a user-supplied URL parameter in a critical function.

“This is due to insufficient escaping on the user-supplied URL parameter in the `get_global_remediations()` method, where it is directly concatenated into an SQL JOIN clause without proper sanitization for SQL context,” reads a technical analysis from Wordfence.

“While `esc_url_raw()` is applied for URL safety, it does not prevent SQL metacharacters (single quotes, parentheses) from being injected.

“This makes it possible for unauthenticated attackers to append additional SQL queries into already existing queries that can be used to extract sensitive information from the database via time-based blind SQL injection techniques,” the researchers explain.

Wordfence notes that exploiting the vulnerability is possible only if the plugin is connected to an Elementor account and its Remediation module is active.

The security firm validated the flaw and disclosed it to the vendor on February 13. Elementor fixed the flaw in version 4.1.0 (latest), released on February 23, and an $800 bug bounty was awarded to the researcher.

Data from WordPress.org shows that only about 36% of websites using the Ally plugin have upgraded to version 4.1.0, leaving more than 250,000 sites vulnerable to CVE-2026-2313.

In addition to upgrading Ally to version 4.1.0, site owners and administrators are also advised to install the latest security update for WordPress, released yesterday.

WordPress 6.9.2 addresses 10 vulnerabilities, including cross-site scripting (XSS), authorization bypass, and server-side request forgery (SSRF) flaws. The project recommends installing the new version “immediately.”


Tech

14,000 routers are infected by malware that’s highly resistant to takedowns

Researchers say they have uncovered a takedown-resistant botnet of 14,000 routers and other network devices—primarily made by Asus—that have been conscripted into a proxy network that anonymously carries traffic used for cybercrime.

The malware—dubbed KadNap—takes hold by exploiting vulnerabilities that have gone unpatched by their owners, Chris Formosa, a researcher at security firm Lumen’s Black Lotus Labs, told Ars. The high concentration of Asus routers is likely due to botnet operators acquiring a reliable exploit for vulnerabilities affecting those models. He said it’s unlikely that the attackers are using any zero-days in the operation.

A botnet that stands out among others

The number of infected routers averages about 14,000 per day, up from 10,000 last August, when Black Lotus discovered the botnet. Compromised devices are overwhelmingly located in the US, with smaller populations in Taiwan, Hong Kong, and Russia. One of the most salient features of KadNap is a sophisticated peer-to-peer design based on Kademlia, a network structure that uses distributed hash tables to conceal the IP addresses of command-and-control servers. The design makes the botnet resistant to detection and takedowns through traditional methods.

“The KadNap botnet stands out among others that support anonymous proxies in its use of a peer-to-peer network for decentralized control,” Formosa and fellow Black Lotus researcher Steve Rudd wrote Wednesday. “Their intention is clear: avoid detection and make it difficult for defenders to protect against.”

Distributed hash tables have long been used to create hardened peer-to-peer networks, most notably BitTorrent and the InterPlanetary File System. Rather than having one or more centralized servers that directly control nodes and provide them with the IP addresses of other nodes, DHTs allow any node to poll other nodes for the device or server it’s looking for. The decentralized structure and the substitution of IP addresses with hashes give the network resilience against takedowns or denial-of-service attacks.
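A minimal sketch of the XOR distance metric at the heart of Kademlia (the node names are illustrative, and this is not KadNap's actual code):

```python
import hashlib

def node_id(name: str) -> int:
    # Kademlia identifies every node and key by a fixed-size hash.
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big")

def xor_distance(a: int, b: int) -> int:
    # "Distance" is the bitwise XOR of two IDs: symmetric, zero only for
    # identical IDs, and structured so lookups halve the search space
    # each hop (O(log n) hops to reach any key).
    return a ^ b

# A lookup asks the closest nodes it knows about, each of which returns
# nodes even closer to the key -- no central server ever involved.
nodes = [node_id(f"peer-{i}") for i in range(8)]
key = node_id("some-content-hash")
closest = min(nodes, key=lambda n: xor_distance(n, key))
```

Because every node can route toward any key using only its local view of the ID space, there is no single control server for defenders to seize.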


Tech

TikTok to Let Apple Music Users Stream Full Songs Without Ever Leaving the App

If you’ve ever scrolled TikTok, caught a snippet of a tune, and thought, “I wish I could play this song all the way through,” this is for you. TikTok and Apple Music announced on Wednesday that they have partnered on two new features, Play Full Song and Listening Party. The goal is to offer listeners a seamless music listening experience without ever leaving the social media app.

Apple Music subscribers who discover a song on their TikTok For You Page or on the Sound Detail Page will be able to click Play Full Song to open the Apple Music player and listen to the track in its entirety. From there, subscribers to the music streaming service will be able to save the song as a favorite, add it to a playlist on Apple Music and listen to a customized stream of recommended songs.

When a full-length song is played, artists are paid for the stream through Apple Music.

TikTok and Apple Music’s Play Full Song and Listening Party features will launch this month. (Image: TikTok)

“Tapping into the music you love should feel effortless,” Ole Obermann, co-head of Apple Music, said in a statement. “With Play Full Song, Apple Music subscribers can move easily from discovering a track on TikTok to listening to it in full instantly, without breaking the flow. This integration not only makes it easier for fans to discover, listen to, and engage with the artists they love, but also creates a powerful new pathway for artists — turning moments of discovery into deeper connection and sustained engagement in one simple, seamless experience.”

Listening Party sounds somewhat like Spotify‘s feature of the same name. Fans join a shared, real-time session where they listen to the same tracks together and interact live, with the songs streamed through Apple Music inside TikTok. Musicians can also join and chat with their fans.

“TikTok is where music discovery and culture move at the speed of the community,” Tracy Gardner, global head of music business development at TikTok, said in a statement. “Thanks to Apple Music, Play Full Song gives fans a seamless way to go from discovery to full-length listening, and Listening Party provides a shared place to experience music together in real time. It’s all about bringing artists and fans closer, and turning shared moments into lasting connections.”

Play Full Song and Listening Party will launch globally on TikTok over the next few weeks.


Tech

Meta buying social network for AI bots Moltbook should worry anyone who still hopes social media is for people

Meta buying Moltbook, the developer of a social media platform designed for AI agents to talk to each other, sounds a little like a joke someone might make about how there are too many bots on Facebook and other Meta platforms. But it looks like Meta hopes to use Moltbook to fill the internet with even more digital voices.

Meta has spent two decades building platforms that connect billions of people. Facebook, Instagram, and Threads all promise some version of the same basic idea: a digital place where humans share thoughts, photos, jokes, and complaints about social media.


Tech

How to Watch the 2026 Oscars: Start Time, Apps, and More

It’s easy to watch pretty much whatever you want these days, considering all the various streaming apps available at the touch of a button. But live events, like sports and political debates, are a bit trickier, and that goes for awards shows like the Oscars, as well. This year’s 98th Academy Awards are nearly here, so now’s a good time to make sure you’ve got the right setup to view them in real time.

The Oscars will air on broadcast television as they do every year. However, in the streaming era, watching the ceremony isn’t as simple as turning on the TV like it used to be — at least for cord-cutters. Here’s how to watch the 2026 Oscars and other important details you’ll want to know before Hollywood’s biggest stars hit the red carpet.

How to stream the 2026 Oscars live online

Since 2025, anyone with Hulu can livestream the Oscars. This includes those with a standalone Hulu plan, as well as Hulu bundles that include Disney+ and HBO Max or Disney+ and ESPN+. Hulu is available at multiple price points:

  • Hulu (With Ads) bundle: $12.99/month
  • Hulu (No Ads) bundle: $19.99/month
  • Hulu + Live TV bundle: $89.99/month
  • Hulu Premium + Live TV bundle: $99.99/month

You will also be able to watch the Oscars with YouTube TV, FuboTV, and AT&T TV/DirecTV Stream, which also provide access to live broadcasts.

  • fuboTV: Starting at $73.99/month
  • YouTube TV: $82.99/month

One other app that will stream the Academy Awards is the ABC app, but you’ll need to log in with a cable or satellite subscription to do so. The same goes for watching the Oscars in-browser through ABC.com.

What channel is the 2026 Oscars on?

You’ll be able to watch the show live by tuning in to your local ABC affiliate. It’s included in cable and satellite packages, so if you pay for cable or satellite, you’ll also be able to easily watch the Oscars. You can even do so on your computer, through ABC.com, or by using the ABC app on your TV or mobile device. However, you’ll need to sign in to these services through your TV provider.

What time will the 2026 Oscars start?

Beware the Ides of March — if you don’t want to miss the Academy Awards, that is. The 2026 Oscars will be awarded on March 15. The Oscars technically begin at 7 p.m. Eastern / 4 p.m. Pacific / 11 p.m. GMT, though there will be plenty of pre-show programming — including fan-favorite red carpet interviews. The actual ceremony is expected to last around three and a half hours.

Who will be attending the 2026 Oscars?

Oscar nominees — such as Timothée Chalamet, Elle Fanning, Emma Stone, Michael B. Jordan, and Leonardo DiCaprio — and stars of Best Picture contenders like “Hamnet,” “One Battle After Another,” and “Sinners,” will be in attendance, which is partly why so many people are expected to tune in.

Where will the 2026 Oscars take place?

The Oscars red carpet and awards show are held in the heart of Hollywood at the Dolby Theatre (not to be confused with Dolby Cinema). Dolby Theatre has been the location of many awards events over the years, including the ESPY Awards, AFI Lifetime Achievement Awards, and BET Awards.

Will the Oscars stream on Disney+?

Disney+ is owned by The Walt Disney Company — the same megacorporation that owns ABC and Hulu. However, you won’t be able to stream the Oscars live on Disney+. If you subscribe to one of the Hulu bundles that includes Disney+, you can livestream the Oscars. But you’ll need to open Hulu and watch it on that app — not Disney+.

Can you watch the Oscars with a TV antenna?

You can watch the Oscars on ABC using the over-the-air signal from your local ABC affiliate. Most modern-day smart TVs don’t include antennas internally. That means if you want to receive your local ABC station’s signal over-the-air to watch the Oscars for free, you’ll need to hook up a digital antenna to your TV.

Most smart TVs still include digital tuners and a port to attach an antenna. If you don’t already own one, you can buy an antenna and hook it up, and then use your TV tuner to find ABC. The best indoor TV antennas vary in price and quality, so if you think you’re in an area or your TV is located in a space that might have trouble receiving radio waves, you’ll want to opt for a higher-end digital antenna.

How to watch the 2026 Oscars outside of the U.S.

The Oscars can be watched live in many regions outside of the United States. The Academy Awards will be broadcast on standard TV channels in certain countries, such as ITV1 in the United Kingdom, Crave in Canada, TV2 in Denmark, Seven Network in Australia, or Federalna TV in Bosnia & Herzegovina. 

In much of Latin America, the Oscars will air on TNT. While the ceremony won’t be broadcast on Disney+ in the United States, it will be in many other regions, including Taiwan, Turkey, Austria, New Zealand, Korea, Thailand, and more. The Oscars will stream on JioHotstar in India.

In other countries, the Oscars will stream on apps that aren’t common or available in America. For example, the Oscars will stream on DStv Stream in South Africa, Voyo in Romania, or MBC Shahid in much of the Middle East and North Africa. You can find a complete list of networks and apps showing the 2026 Oscars around the world on the official Academy Awards website.


Tech

Nvidia’s new open weights Nemotron 3 Super combines three different architectures to beat gpt-oss and Qwen in throughput

Multi-agent systems, designed to handle long-horizon tasks like software engineering or cybersecurity triaging, can generate up to 15 times the token volume of standard chats — threatening their cost-effectiveness in handling enterprise tasks.

But today, Nvidia sought to help solve this problem with the release of Nemotron 3 Super, a 120-billion-parameter hybrid model, with weights posted on Hugging Face.

By merging disparate architectural philosophies—state-space models, transformers, and a novel “Latent” mixture-of-experts design—Nvidia is attempting to provide the specialized depth required for agentic workflows without the bloat typical of dense reasoning models, all available for commercial use under mostly open weights.

Triple hybrid architecture

At the core of Nemotron 3 Super is a sophisticated architectural triad that balances memory efficiency with precision reasoning. The model utilizes a Hybrid Mamba-Transformer backbone, which interleaves Mamba-2 layers with strategic Transformer attention layers.

To understand the implications for enterprise production, consider the “needle in a haystack” problem. Mamba-2 layers act like a “fast-travel” highway system, handling the vast majority of sequence processing with linear-time complexity. This allows the model to maintain a massive 1-million-token context window without the memory footprint of the KV cache exploding. However, pure state-space models often struggle with associative recall. 

To fix this, Nvidia strategically inserts Transformer attention layers as “global anchors,” ensuring the model can precisely retrieve specific facts buried deep within a codebase or a stack of financial reports.

Beyond the backbone, the model introduces Latent Mixture-of-Experts (LatentMoE). Traditional Mixture-of-Experts (MoE) designs route tokens to experts in their full hidden dimension, which creates a computational bottleneck as models scale. LatentMoE solves this by projecting tokens into a compressed space before routing them to specialists. 

This “expert compression” allows the model to consult four times as many specialists for the exact same computational cost. This granularity is vital for agents that must switch between Python syntax, SQL logic, and conversational reasoning within a single turn.
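As a rough back-of-the-envelope sketch of why compressing the routing space pays off, consider the parameter count of a standard feed-forward expert (the dimensions below are invented for illustration; Nvidia has not published these exact figures):

```python
# Illustrative parameter-count arithmetic for latent-space expert routing.
# All dimensions are assumptions for the sketch; only the ratio matters.
hidden_dim = 4096       # full token representation (assumed)
latent_dim = 1024       # compressed space used by the experts (assumed 4x smaller)
expert_ffn_mult = 4     # typical FFN expansion factor

def expert_params(dim: int) -> int:
    # A standard two-matrix FFN expert: dim -> mult*dim -> dim
    return 2 * dim * (expert_ffn_mult * dim)

full = expert_params(hidden_dim)
latent = expert_params(latent_dim)

# Expert cost grows with the square of the working dimension, so a 4x
# compression makes each specialist 16x cheaper in this toy accounting.
print(full // latent)  # 16
```

The exact multiplier in Nemotron 3 Super depends on details this sketch ignores (projection overhead, number of active experts), but the quadratic scaling is why a modest compression buys a much larger specialist budget.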

Further accelerating the model is Multi-Token Prediction (MTP). While standard models predict a single next token, MTP predicts several future tokens simultaneously. This serves as a “built-in draft model,” enabling native speculative decoding that can deliver up to 3x wall-clock speedups for structured generation tasks like code or tool calls.

The Blackwell advantage

For enterprises, the most significant technical leap in Nemotron 3 Super is its optimization for the Nvidia Blackwell GPU platform. By pre-training natively in NVFP4 (4-bit floating point), Nvidia has achieved a breakthrough in production efficiency.

On Blackwell, the model delivers 4x faster inference than 8-bit models running on the previous Hopper architecture, with no loss in accuracy.

In practical performance, Nemotron 3 Super is a specialized tool for agentic reasoning.

It currently holds the No. 1 position on the DeepResearch Bench, a benchmark measuring an AI’s ability to conduct thorough, multi-step research across large document sets.

| Benchmark | Nemotron 3 Super | Qwen3.5-122B-A10B | GPT-OSS-120B |
| --- | --- | --- | --- |
| **General Knowledge** | | | |
| MMLU-Pro | 83.73 | 86.70 | 81.00 |
| **Reasoning** | | | |
| AIME25 (no tools) | 90.21 | 90.36 | 92.50 |
| HMMT Feb25 (no tools) | 93.67 | 91.40 | 90.00 |
| HMMT Feb25 (with tools) | 94.73 | 89.55 | — |
| GPQA (no tools) | 79.23 | 86.60 | 80.10 |
| GPQA (with tools) | 82.70 | 80.09 | — |
| LiveCodeBench (v5 2024-07↔2024-12) | 81.19 | 78.93 | 88.00 |
| SciCode (subtask) | 42.05 | 42.00 | 39.00 |
| HLE (no tools) | 18.26 | 25.30 | 14.90 |
| HLE (with tools) | 22.82 | 19.0 | — |
| **Agentic** | | | |
| Terminal Bench (hard subset) | 25.78 | 26.80 | 24.00 |
| Terminal Bench Core 2.0 | 31.00 | 37.50 | 18.70 |
| SWE-Bench (OpenHands) | 60.47 | 66.40 | 41.9 |
| SWE-Bench (OpenCode) | 59.20 | 67.40 | — |
| SWE-Bench (Codex) | 53.73 | 61.20 | — |
| SWE-Bench Multilingual (OpenHands) | 45.78 | 30.80 | — |
| TauBench V2: Airline | 56.25 | 66.0 | 49.2 |
| TauBench V2: Retail | 62.83 | 62.6 | 67.80 |
| TauBench V2: Telecom | 64.36 | 95.00 | 66.00 |
| TauBench V2: Average | 61.15 | 74.53 | 61.0 |
| BrowseComp with Search | 31.28 | 33.89 | — |
| BIRD Bench | 41.80 | 38.25 | — |
| **Chat & Instruction Following** | | | |
| IFBench (prompt) | 72.56 | 73.77 | 68.32 |
| Scale AI Multi-Challenge | 55.23 | 61.50 | 58.29 |
| Arena-Hard-V2 | 73.88 | 75.15 | 90.26 |
| **Long Context** | | | |
| AA-LCR | 58.31 | 66.90 | 51.00 |
| RULER @ 256k | 96.30 | 96.74 | 52.30 |
| RULER @ 512k | 95.67 | 95.95 | 46.70 |
| RULER @ 1M | 91.75 | 91.33 | 22.30 |
| **Multilingual** | | | |
| MMLU-ProX (avg over langs) | 79.36 | 85.06 | 76.59 |
| WMT24++ (en→xx) | 86.67 | 87.84 | 88.89 |

It also demonstrates significant throughput advantages, achieving up to 2.2x higher throughput than gpt-oss-120B and 7.5x higher than Qwen3.5-122B in high-volume settings.

Nvidia Nemotron 3 Super key benchmarks chart. (Image: Nvidia)

Custom ‘open’ license — commercial usage but with important caveats 

The release of Nemotron 3 Super under the Nvidia Open Model License Agreement (updated October 2025) provides a permissive framework for enterprise adoption, though it carries distinct “safeguard” clauses that differentiate it from pure open-source licenses like MIT or Apache 2.0.

Key Provisions for Enterprise Users:

  • Commercial Usability: The license explicitly states that models are “commercially usable” and grants a perpetual, worldwide, royalty-free license to sell and distribute products built on the model.

  • Ownership of Output: Nvidia makes no claim to the outputs generated by the model; the responsibility for those outputs—and the ownership of them—rests entirely with the user.

  • Derivative Works: Enterprises are free to create and own “Derivative Models” (fine-tuned versions), provided they include the required attribution notice: “Licensed by Nvidia Corporation under the Nvidia Open Model License.”

The “Red Lines”:

The license includes two critical termination triggers that production teams must monitor:

  1. Safety Guardrails: The license automatically terminates if a user bypasses or circumvents the model’s “Guardrails” (technical limitations or safety hyperparameters) without implementing a “substantially similar” replacement appropriate for the use case.

  2. Litigation Trigger: If a user institutes copyright or patent litigation against Nvidia alleging that the model infringes on their IP, their license to use the model terminates immediately.

This structure allows Nvidia to foster a commercial ecosystem while protecting itself from “IP trolling” and ensuring that the model isn’t stripped of its safety features for malicious use.

‘The team really cooked’

The release has generated significant buzz within the developer community. Chris Alexiuk, a Senior Product Research Engineer at Nvidia, heralded the launch on X under his handle @llm_wizard as a “SUPER DAY,” emphasizing the model’s speed and transparency. “Model is: FAST. Model is: SMART. Model is: THE MOST OPEN MODEL WE’VE DONE YET,” he posted, highlighting the release of not just weights, but 10 trillion tokens of training data and recipes.

The industry adoption reflects this enthusiasm:

  • Cloud and Hardware: The model is being deployed as an Nvidia NIM microservice, allowing it to run on-premises via the Dell AI Factory or HPE, as well as across Google Cloud, Oracle, and shortly, AWS and Azure.

  • Production Agents: Companies like CodeRabbit (software development) and Greptile are integrating the model to handle large-scale codebase analysis, while industrial leaders like Siemens and Palantir are deploying it to automate complex workflows in manufacturing and cybersecurity.

As Kari Briski, Nvidia VP of AI Software, noted: “As companies move beyond chatbots and into multi-agent applications, they encounter… context explosion.”

Nemotron 3 Super is Nvidia’s answer to that explosion—a model that provides the “brainpower” of a 120B parameter system with the operational efficiency of a much smaller specialist. For the enterprise, the message is clear: the “thinking tax” is finally coming down.


Tech

I guess this wasn’t an Xbox after all

In 2024, Microsoft caused a lot of head-scratching and general bemusement with the launch of its “This is an Xbox” marketing campaign. Now, though, it appears the quandary over what is and isn’t an Xbox has been resolved. Game Developer noticed that the original blog post on Xbox Wire that kicked off the whole affair has been removed. It seems Xbox will be going a new direction with its future promotions.

Maybe, since the new Project Helix hardware it has in the works is a more deliberate attempt to blur console and PC gaming, “This is an Xbox” might have been truly confusing as a tagline. Maybe, with the recent changing of the guard at the company, the top brass decided that it was the right time to start fresh with a less meme-able marketing plan. Whatever the reason, we have enjoyed this opportunity to learn about the existential philosophy behind being an Xbox. And fortunately, although the blog post may be gone, the video trailer still exists whenever we need to remind ourselves of the many things that can be Xbox-ified.


Tech

Cedars-Sinai’s AI beats specialist models at reading heart scans

EchoPrime, published in Nature in February 2026, outperforms both task-specific AI tools and previous foundation models across 23 cardiac benchmarks, and its code, weights, and a demo are publicly available.

An echocardiogram is one of the most common diagnostic tools in cardiology: an ultrasound of the heart that reveals how it moves, how its chambers fill and empty, and whether its structure is compromised. Interpreting one requires training, time, and a specific kind of spatial attention, the ability to look at moving images of a beating heart and translate them into a clinical narrative.

Researchers at Cedars-Sinai Medical Center, working with colleagues from Kaiser Permanente Northern California, Stanford Health Care, Beth Israel Deaconess Medical Center in Boston, and Chang Gung Memorial Hospital in Taiwan, have built an AI system that can do the same thing.

EchoPrime, a video-based vision-language model, analyses echocardiogram footage and generates a written report of cardiac form and function. Its findings were published in Nature (volume 650, pages 970-977) in February 2026, under the title “Comprehensive echocardiogram evaluation with view primed vision language AI.”

The scale of the training is what sets EchoPrime apart. The model was trained on more than 12 million echocardiography videos paired with cardiologists’ written interpretations, drawn from 275,442 studies across 108,913 patients at Cedars-Sinai.

No previous AI model for echocardiography has been trained on data of that volume.

What it can do

Tested across five international health systems, EchoPrime achieved state-of-the-art performance on 23 diverse benchmarks of cardiac structure and function, outperforming both task-specific AI approaches (models trained to do one thing, such as measuring ejection fraction) and previous foundation models that aimed for broader capability.

The model’s outputs are designed to assist clinicians, not replace them: it produces a verbal summary that cardiologists can review and act on, rather than rendering a diagnosis autonomously.

The research team has made the model’s code, weights, and a working demo publicly available, a decision that reflects a broader shift in AI research towards open publication, and that will allow other institutions to test EchoPrime against their own patient populations.

The context around it

EchoPrime arrives in a year when AI misdiagnosis has been named one of the top patient safety threats by ECRI, the healthcare safety organisation. That context does not undermine EchoPrime’s promise so much as it frames the standard it will need to meet.

The goal is not an AI that sometimes reads echocardiograms accurately; it is one that does so consistently enough to reduce the burden on cardiologists without introducing new categories of error.

Cardiology has been a productive area for AI-assisted diagnostics precisely because the data (ultrasound video, electrocardiograms, imaging) is relatively structured and abundant.

The Cedars-Sinai work is arguably the most thorough attempt yet to turn that abundance of data into a generalised tool. Whether EchoPrime moves from published model to clinical deployment at scale depends on factors (regulatory approval, institutional adoption, liability) that the Nature paper does not address.

But as a demonstration of what is now technically possible in cardiac AI, it sets a new mark.


Tech

These Smart Glasses Can Translate Any Language Right Before Your Eyes

It’s one thing to be able to haltingly make an order from a menu in a restaurant in another language, but quite another to be able to engage in fluent conversation with a native speaker. Dedicated study is often required to arrive at this point, but as is so often the case today, AI technology seems to have arrived at a rather brilliant shortcut: language-translating smart glasses. 

Alibaba has grown from a tiny startup in 1999 to the powerhouse behind Alipay, Alibaba.com, and more. It has now expanded into yet more new tech territory, with the Quark AI Glasses. Two varieties, the G1 and S1 models, were shown off at Mobile World Congress 2026 in Barcelona, and their ability to translate languages that those nearby are speaking is fascinating. The glasses have a display called Waveguide, a subtle sort of overlay within the lenses that the user can control via tap, double-tap, and swipe motions on the arm of the glasses. A dedicated translation app will detect if someone nearby is speaking a different language and automatically display translated text. 

The Waveguide’s bright green font, intended to be clearly visible yet unobtrusive, seems well suited to this transcription function, which is powered by Qwen AI models. Familiar privacy concerns arise, and there’s also the concern about the accuracy and speed of AI translation in its various forms, but there’s a lot of potential here. Also, of course, there’s a lot more that the Quark AI Glasses can do. It’s hard to say whether smart glasses are truly a viable alternative to computer monitors, but they certainly have a bag of tricks. 

Some more features and functions of Alibaba’s Quark AI Glasses

The translation feature, as advanced as its real-time capabilities may prove to be, could be quite niche for a lot of potential buyers. The goal for AI assistants is to support and fit in with the user’s everyday tasks, first and foremost, and so it’s important that the Quark AI Glasses have a lot of utility when it comes to just that. Alibaba Group boasts that, being “deeply integrated with Alibaba’s ecosystem,” the new models offer associated features such as Taobao price comparison, Fliggy notifications and updates when traveling, and Amap assistance for finding your way around, and also implement voice and touch controls, along with bone conduction audio features.

The ideal with smart glasses is to achieve a lightweight, natural feel that almost makes you forget you aren’t wearing standard glasses, even though there are some places where you should never wear them. These models, it seems, were created to be subtle and convenient in this way, down to the batteries in the arm that can be quickly swapped out as needed. Lasting for about 24 hours at a time, this innovative new system is unique to smart glasses and, combined with the very reasonable pricing structure, is another feature that could see the Quark glasses really take off in the Chinese market. 

Released in December 2025, both the G1 and the S1 come in three editions. The S1 is the dual-display option and thus the premium version: available from ¥3,799 (approximately $552), it's considerably pricier than the G1, which starts at ¥1,899 (around $276). However, there's no release date for the US market just yet.
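The dollar figures above follow from a simple exchange-rate conversion; here's a minimal sketch, assuming a rate of roughly ¥6.88 per US dollar (the rate itself fluctuates, so treat the results as approximate):

```python
# Convert the Quark AI Glasses launch prices from yuan (CNY) to US dollars.
# The exchange rate is an assumption (~6.88 CNY per USD); the converted
# figures shift as the rate moves.
CNY_PER_USD = 6.88

def cny_to_usd(price_cny: float) -> int:
    """Return the approximate USD price, rounded to the nearest dollar."""
    return round(price_cny / CNY_PER_USD)

print(cny_to_usd(3799))  # S1 starting price: roughly $552
print(cny_to_usd(1899))  # G1 starting price: roughly $276
```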





Tech

T-Mobile Added a New Unlimited Phone Plan, but Is It a Better Value?


If you're looking for a phone plan that includes plenty of perks for three or more people, T-Mobile's new Better Value plan is appealing. The company calls it a limited-time offer but hasn't said when it ends, so now is a good time to check it out. As with all phone plans, be sure to read the fine print.

In our lists of the best cellphone plans, best unlimited data plans and best T-Mobile plans, we rank T-Mobile’s Essentials plan highly. After reviewing the specifics of the Better Value plan, the Experience More plan — the No. 2 unlimited postpaid plan — presents a more interesting comparison. Let’s see how they stack up.

Better Value plan pricing and features compared

For an account with three lines, the monthly cost of the Better Value plan is $140 (with AutoPay active), plus applicable taxes and fees. Experience More similarly costs $140 a month for three lines. The Essentials plan costs $90 per month for three lines, but lacks most of the add-ons that make the other two plans appealing.
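The pricing above is easier to compare on a per-line basis; a quick sketch of the effective monthly cost per line, using the AutoPay prices quoted (taxes and fees excluded):

```python
# Per-line monthly cost for a three-line account, using the AutoPay
# prices from the article (taxes and fees not included).
PLANS = {
    "Better Value": 140,
    "Experience More": 140,
    "Essentials": 90,
}
LINES = 3

for name, total in PLANS.items():
    per_line = total / LINES
    print(f"{name}: ${total}/mo total, ${per_line:.2f}/mo per line")
```

Essentials works out to $30 per line versus roughly $46.67 for the other two, so the question is whether the extra perks justify about $17 more per line each month.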


Both the Experience More and Better Value plans offer unlimited data on T-Mobile’s 5G network, a five-year price guarantee and two-year device upgrades.

However, the Better Value plan includes 250GB of high-speed mobile hotspot data, compared with 60GB on the Experience More plan. Once that allotment is used up, hotspot data remains unlimited but slows to 600Kbps. (T-Mobile's highest tier, Experience Beyond, includes unlimited high-speed hotspot data by comparison.)

Better Value also includes more high-speed data when you're in other countries: 30GB available in Mexico, Canada, and more than 215 other countries and areas worldwide. That beats the Experience More plan, which offers 15GB in Mexico and Canada and 5GB elsewhere.

[Image: Announcement of the T-Satellite launch date on stage at a T-Mobile event. Credit: Jeff Carlson/CNET]

T-Satellite is also included in the Better Value plan, a feature that costs $10 extra for every other T-Mobile plan except for Experience Beyond.

One appeal of these plans, especially in the context of families, is the set of included streaming services. The Better Value plan and Experience More plan both include Netflix Standard with Ads and Hulu, and Apple TV can be added for $3 per month.

Important qualifications

Here’s where the fine print comes in, and it appears that T-Mobile is aiming to inspire and reward loyalty.


If you're switching from a different carrier, the Better Value plan requires three or more lines and at least two eligible ports (numbers transferred in from another provider). Since a family or small business would likely be transferring from another provider rather than keeping its old lines, Better Value reads as an effort to build up group plans and incentivize switching away from other carriers.

If you’re already set up with T-Mobile, the Better Value plan requires that you have been a T-Mobile postpaid customer for at least five years. And if you have that much tenure, you should be aware that your current plan might have taxes and fees included, whereas the Better Value plan doesn’t.

The Better Value plan is available in the T-Life app and on T-Mobile.com. When you enter a retail T-Mobile store, you’ll likely be directed to the app or website by an employee.

And lastly, T-Mobile brands this as a limited-time offer, but I confirmed with a spokesperson that it currently has no end date. 


Read more: I got an in-depth look at T-Mobile’s emergency response programs.

T-Mobile Better Value vs. Experience More plans

| | Better Value plan | Experience More plan |
| --- | --- | --- |
| High-speed data | 5G, unlimited | 5G, unlimited |
| Mobile hotspot | 250GB high-speed, then unlimited at 600Kbps | 60GB high-speed, then unlimited at 600Kbps |
| International calls/data | Unlimited talk and text; 30GB high-speed data in Mexico, Canada, and 215+ countries, then unlimited at 256Kbps | Unlimited talk and text; 15GB high-speed data in Canada/Mexico, 5GB high-speed data in 215+ countries, then unlimited at 256Kbps |
| Extras | Netflix Standard with Ads; Hulu with Ads; Magenta Status; Apple TV for $3 per month | Netflix Standard with Ads; 1 year AAA; Magenta Status; Apple TV for $3 per month |
| Price guarantee | 5 years | 5 years |
| T-Satellite | Included | Optional $10 add-on |
| Cost for 3 lines | $140 | $140 |
| Limited-time offer? | Yes | No |


