Tech

Honor of Kings Is Finally Available in India (Free Skins, Events, and Esports)


Tencent’s TiMi Studio and Level Infinite have officially launched Honor of Kings in India. The mobile MOBA, often described as the world’s most-played game in the genre, is available to download on both Android and iOS starting today. The game follows a “Free to Play, Fair to Win” philosophy, meaning players can progress and compete based on skill rather than paid advantages. With the India launch, Honor of Kings is introducing localized content, community events, and esports opportunities aimed specifically at Indian players.

Classic 5v5 MOBA Gameplay Comes to Indian Players


At its core, Honor of Kings brings traditional 5v5 MOBA gameplay to mobile devices. Matches take place in Hero’s Gorge, a battlefield where teams compete to destroy the enemy base through strategy, teamwork, and hero abilities. Players can choose from a wide roster of heroes with different roles and abilities. The game also features quick matchmaking, ranked battles, and localized voiceovers designed to improve the experience for Indian users.

Interestingly, Dean Huang, producer of Honor of Kings, said the company will invest ₹100 million in India to support creators and the esports ecosystem, which is welcome news for the Indian esports community.

Launch Events Offer Free Heroes and Skins


To celebrate the India launch, Honor of Kings is running several limited-time in-game events that reward players with free items. The “Sign In for a Free Epic Skin” event runs until April 11, allowing players to unlock rewards by logging in regularly. Rewards include heroes such as Mai Shiranui, Ziya, Ukyo Tachibana, Haya, Charlotte, and Augran, along with the Epic Skin Feline Whisperer for Mai Shiranui.

Beyond that, another event, “Game Paltega: Desi Legends United,” runs until March 24. This influencer-led challenge allows players to earn redemption coins that can be exchanged for rewards such as avatar frames, creator voice packs, and stickers featuring popular gaming creators like Scout, Mortal, and Kaash Plays. Players can also participate in the “Draw Daily to Get Gift Cards” event between March 13 and March 21, and the game plans to introduce a new hero inspired by Indian culture as well.

Esports Plans for India

We’ve been to Tencent’s esports events, and they have always been impressive. Keeping that spirit in mind, the company is building an esports pipeline for Indian players. In fact, two Indian teams will get slots in the KWC at EWC26, the global Honor of Kings tournament pathway. Additionally, the KINGS’ Arise: India City Tour will host offline finals in multiple cities:

  • Bengaluru — March 22
  • Mumbai — March 29
  • Delhi — April 5

Each event will feature a ₹100,000 prize pool.



TikTok to Let Apple Music Users Stream Full Songs Without Ever Leaving the App


If you’ve ever scrolled TikTok, caught a snippet of a tune, and thought, “I wish I could play this song all the way through,” this is for you. TikTok and Apple Music announced on Wednesday that they have partnered on two new features, Play Full Song and Listening Party. The goal is to offer listeners a seamless music listening experience without ever leaving the social media app.

Apple Music subscribers who discover a song on their TikTok For You Page or on the Sound Detail Page will be able to click Play Full Song to open the Apple Music player and listen to the track in its entirety. From there, subscribers to the music streaming service will be able to save the song as a favorite, add it to a playlist on Apple Music and listen to a customized stream of recommended songs.

When a full-length song is played, the stream will pay artists through Apple Music. 

TikTok and Apple Music’s Play Full Song and Listening Party features will launch this month. (Image: TikTok)

“Tapping into the music you love should feel effortless,” Ole Obermann, co-head of Apple Music, said in a statement. “With Play Full Song, Apple Music subscribers can move easily from discovering a track on TikTok to listening to it in full instantly, without breaking the flow. This integration not only makes it easier for fans to discover, listen to, and engage with the artists they love, but also creates a powerful new pathway for artists — turning moments of discovery into deeper connection and sustained engagement in one simple, seamless experience.”

Listening Party sounds somewhat like Spotify's feature of the same name. Fans join a shared, real-time session where they listen to the same tracks together and interact live, with the songs streamed through Apple Music inside TikTok. Musicians can also join and chat with their fans.


“TikTok is where music discovery and culture move at the speed of the community,” Tracy Gardner, global head of music business development at TikTok, said in a statement. “Thanks to Apple Music, Play Full Song gives fans a seamless way to go from discovery to full-length listening, and Listening Party provides a shared place to experience music together in real time. It’s all about bringing artists and fans closer, and turning shared moments into lasting connections.”

Play Full Song and Listening Party will launch globally on TikTok over the next few weeks.



Meta buying Moltbook, developer of a social network for AI bots, should worry anyone who still hopes social media is for people


Meta buying Moltbook, the developer of a social media platform designed for AI agents to talk to each other, sounds a little like a joke someone might make about how there are too many bots on Facebook and other Meta platforms. But it looks like Meta hopes to use Moltbook to fill the internet with even more digital voices.

Meta has spent two decades building platforms that connect billions of people. Facebook, Instagram, and Threads all promise some version of the same basic idea: a digital place where humans share thoughts, photos, jokes, and complaints about social media.




Start Time, Apps, And More


It’s easy to watch pretty much whatever you want these days, considering all the various streaming apps available at the touch of a button. But live events, like sports and political debates, are a bit trickier, and that goes for awards shows like the Oscars, as well. This year’s 98th Academy Awards are nearly here, so now’s a good time to make sure you’ve got the right setup to view them in real time.

The Oscars will air on broadcast television as they do every year. However, in the streaming era, watching the ceremony isn’t as simple as turning on the TV like it used to be — at least for cord-cutters. Here’s how to watch the 2026 Oscars and other important details you’ll want to know before Hollywood’s biggest stars hit the red carpet.


How to stream the 2026 Oscars live online

Since 2025, anyone with Hulu can livestream the Oscars. This includes those with a standalone Hulu plan, as well as Hulu bundles that include Disney+ and HBO Max or Disney+ and ESPN+. Hulu is available at multiple price points:

  • Hulu (With Ads) bundle: $12.99/month
  • Hulu (No Ads) bundle: $19.99/month
  • Hulu + Live TV bundle: $89.99/month
  • Hulu Premium + Live TV bundle: $99.99/month

You will also be able to watch the Oscars through live TV streaming services such as YouTube TV, FuboTV, and AT&T TV/DirecTV Stream:

  • FuboTV: Starting at $73.99/month
  • YouTube TV: $82.99/month

One other app that will stream the Academy Awards is the ABC app, but you’ll need to log in with a cable or satellite subscription to do so. The same goes for watching the Oscars in-browser through ABC.com.


What channel is the 2026 Oscars on?

You’ll be able to watch the show live by tuning in to your local ABC affiliate. It’s included in cable and satellite packages, so if you pay for cable or satellite, you’ll also be able to easily watch the Oscars. You can even do so on your computer, through ABC.com, or by using the ABC app on your TV or mobile device. However, you’ll need to sign in to these services through your TV provider.


What time will the 2026 Oscars start?

Beware the Ides of March — if you don’t want to miss the Academy Awards, that is. The 2026 Oscars will be awarded on March 15. The Oscars technically begin at 7 p.m. Eastern / 4 p.m. Pacific / 11 p.m. GMT, though there will be plenty of pre-show programming — including fan-favorite red carpet interviews. The actual ceremony is expected to last around three and a half hours.


Who will be attending the 2026 Oscars?

Oscar nominees — such as Timothée Chalamet, Elle Fanning, Emma Stone, Michael B. Jordan, and Leonardo DiCaprio — and stars of Best Picture contenders like “Hamnet,” “One Battle After Another,” and “Sinners,” will be in attendance, which is partly why so many people are expected to tune in.


Where will the 2026 Oscars take place?

The Oscars red carpet and awards show are held in the heart of Hollywood at the Dolby Theatre (not to be confused with Dolby Cinema). Dolby Theatre has been the location of many awards events over the years, including the ESPY Awards, AFI Lifetime Achievement Awards, and BET Awards.


Will the Oscars stream on Disney+?

Disney+ is owned by The Walt Disney Company — the same megacorporation that owns ABC and Hulu. However, you won’t be able to stream the Oscars live on Disney+. If you subscribe to one of the Hulu bundles that includes Disney+, you can livestream the Oscars. But you’ll need to open Hulu and watch it on that app — not Disney+.


Can you watch the Oscars with a TV antenna?

You can watch the Oscars on ABC using the over-the-air signal from your local ABC affiliate. Most modern-day smart TVs don’t include antennas internally. That means if you want to receive your local ABC station’s signal over-the-air to watch the Oscars for free, you’ll need to hook up a digital antenna to your TV.

Most smart TVs still include digital tuners and a port to attach an antenna. If you don’t already own one, you can buy an antenna and hook it up, and then use your TV tuner to find ABC. The best indoor TV antennas vary in price and quality, so if you think you’re in an area or your TV is located in a space that might have trouble receiving radio waves, you’ll want to opt for a higher-end digital antenna.


How to watch the 2026 Oscars outside of the U.S.

The Oscars can be watched live in many regions outside of the United States. The Academy Awards will be broadcast on standard TV channels in certain countries, such as ITV1 in the United Kingdom, Crave in Canada, TV2 in Denmark, Seven Network in Australia, or Federalna TV in Bosnia & Herzegovina. 


In much of Latin America, the Oscars will air on TNT. While the ceremony won’t be broadcast on Disney+ in the United States, it will be in many other regions, including Taiwan, Turkey, Austria, New Zealand, Korea, Thailand, and more. The Oscars will stream on JioHotstar in India.

In other countries, the Oscars will stream on apps that aren’t common or available in America. For example, the Oscars will stream on DStv Stream in South Africa, Voyo in Romania, or MBC Shahid in much of the Middle East and North Africa. You can find a complete list of networks and apps showing the 2026 Oscars around the world on the official Academy Awards website.



Nvidia’s new open-weights Nemotron 3 Super combines three different architectures to beat gpt-oss and Qwen in throughput


Multi-agent systems, designed to handle long-horizon tasks like software engineering or cybersecurity triaging, can generate up to 15 times the token volume of standard chats — threatening their cost-effectiveness in handling enterprise tasks.

But today, Nvidia sought to help solve this problem with the release of Nemotron 3 Super, a 120-billion-parameter hybrid model, with weights posted on Hugging Face.

By merging disparate architectural philosophies—state-space models, transformers, and a novel “Latent” mixture-of-experts design—Nvidia is attempting to provide the specialized depth required for agentic workflows without the bloat typical of dense reasoning models, and all available for commercial usage under mostly open weights.

Triple hybrid architecture

At the core of Nemotron 3 Super is a sophisticated architectural triad that balances memory efficiency with precision reasoning. The model utilizes a Hybrid Mamba-Transformer backbone, which interleaves Mamba-2 layers with strategic Transformer attention layers.


To understand the implications for enterprise production, consider the “needle in a haystack” problem. Mamba-2 layers act like a “fast-travel” highway system, handling the vast majority of sequence processing with linear-time complexity. This allows the model to maintain a massive 1-million-token context window without the memory footprint of the KV cache exploding. However, pure state-space models often struggle with associative recall. 

To fix this, Nvidia strategically inserts Transformer attention layers as “global anchors,” ensuring the model can precisely retrieve specific facts buried deep within a codebase or a stack of financial reports.
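To make the memory trade-off concrete, here is a back-of-envelope sketch of how KV-cache size scales with the number of attention layers. The layer counts, head dimensions, and 1-in-10 attention ratio below are illustrative assumptions, not Nemotron 3 Super's published configuration; the point is that only attention layers pay a per-token cache cost, while Mamba layers carry fixed-size state.

```python
# Back-of-envelope KV-cache comparison: a pure-Transformer stack vs. a hybrid
# stack that keeps only a few attention layers as "global anchors".
# All dimensions are illustrative assumptions, not the model's real config.

def kv_cache_gib(n_attn_layers, context_len, n_kv_heads=8, head_dim=128,
                 bytes_per_elem=2):
    """KV cache = 2 (K and V) * layers * tokens * kv_heads * head_dim * bytes."""
    total = 2 * n_attn_layers * context_len * n_kv_heads * head_dim * bytes_per_elem
    return total / 1024**3

CONTEXT = 1_000_000  # 1M-token window

dense = kv_cache_gib(n_attn_layers=60, context_len=CONTEXT)   # every layer attends
hybrid = kv_cache_gib(n_attn_layers=6, context_len=CONTEXT)   # ~1 in 10 attends;
                                                              # Mamba layers keep O(1) state
print(f"dense:  {dense:.1f} GiB")
print(f"hybrid: {hybrid:.1f} GiB")
```

With only a tenth of the layers attending, the cache shrinks by the same factor, which is what makes a 1-million-token window practical on real hardware.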

Beyond the backbone, the model introduces Latent Mixture-of-Experts (LatentMoE). Traditional Mixture-of-Experts (MoE) designs route tokens to experts in their full hidden dimension, which creates a computational bottleneck as models scale. LatentMoE solves this by projecting tokens into a compressed space before routing them to specialists. 

This “expert compression” allows the model to consult four times as many specialists for the exact same computational cost. This granularity is vital for agents that must switch between Python syntax, SQL logic, and conversational reasoning within a single turn.
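A toy cost model shows where the "four times as many specialists" figure can come from. Assume, hypothetically, that the latent projection halves the width each expert operates on; since an expert MLP's cost grows with the square of its width, each compressed expert then costs a quarter as much:

```python
# Illustrative cost model for "expert compression": experts that operate on a
# compressed latent of width r instead of the full hidden width d. The 4x
# figure falls out of assuming r = d/2; the real dimensions are not disclosed.

def expert_flops(width, ffn_mult=4):
    """Rough FLOPs for one expert MLP: two matmuls of width x (ffn_mult*width)."""
    return 2 * width * (ffn_mult * width)

d = 4096          # full hidden width (illustrative)
r = d // 2        # latent width after down-projection

full = expert_flops(d)
latent = expert_flops(r)

# Same compute budget, more specialists to route each token to:
experts_at_same_cost = full // latent
print(experts_at_same_cost)
```

Halving the width is an assumption chosen to reproduce the stated 4x figure; the mechanism (quadratic cost in expert width) is the general point.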


Further accelerating the model is Multi-Token Prediction (MTP). While standard models predict a single next token, MTP predicts several future tokens simultaneously. This serves as a “built-in draft model,” enabling native speculative decoding that can deliver up to 3x wall-clock speedups for structured generation tasks like code or tool calls.
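The mechanism can be sketched as a draft-then-verify loop. Everything below is a toy stand-in rather than the real model: the "MTP heads" guess a few tokens ahead, the main head checks them, and whatever prefix agrees is accepted in one step.

```python
# Toy sketch of the speculative decoding that multi-token prediction enables.
# Both "heads" are trivial stand-ins; in a real system verification is a
# single batched forward pass, not a per-token loop.

def draft_k_tokens(prefix, k):
    # Stand-in for the MTP draft heads: accurate nearby, wrong further out.
    guesses = [prefix[-1] + i + 1 for i in range(k)]
    if k >= 3:
        guesses[2] += 5  # simulate a bad guess three tokens ahead
    return guesses

def verify_token(prefix):
    # Stand-in for the main next-token head (treated as ground truth here).
    return prefix[-1] + 1

def speculative_step(prefix, k=4):
    """Accept drafted tokens until the first disagreement, then correct it."""
    draft = draft_k_tokens(prefix, k)
    accepted = []
    for tok in draft:
        true_tok = verify_token(prefix + accepted)
        if tok == true_tok:
            accepted.append(tok)
        else:
            accepted.append(true_tok)  # keep the verifier's token and stop
            break
    return accepted

print(speculative_step([7], k=4))  # three tokens emitted in one step
```

Here three tokens come out of a single step instead of one; batching the verification is where the wall-clock speedup for structured output comes from.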

The Blackwell advantage

For enterprises, the most significant technical leap in Nemotron 3 Super is its optimization for the Nvidia Blackwell GPU platform. By pre-training natively in NVFP4 (4-bit floating point), Nvidia has achieved a breakthrough in production efficiency.

On Blackwell, the model delivers 4x faster inference than 8-bit models running on the previous Hopper architecture, with no loss in accuracy.
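The idea behind block-scaled 4-bit floating point can be illustrated with a small sketch. The grid and scaling rule below are simplified stand-ins in the spirit of NVFP4, not the format's exact hardware encoding: values in a block snap to a small signed e2m1-style grid, sharing one scale chosen from the block's largest value.

```python
# Sketch of block-scaled FP4 quantization. The e2m1-style value grid and the
# max-to-6 scaling rule are simplified assumptions, not the real NVFP4 spec.

E2M1 = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]
GRID = sorted({s * v for v in E2M1 for s in (1, -1)})  # signed 4-bit grid

def quantize_block(xs):
    """Pick a per-block scale so the largest value maps to +/-6, then snap."""
    scale = (max(abs(x) for x in xs) / 6.0) or 1.0  # guard all-zero blocks
    q = [min(GRID, key=lambda g: abs(x / scale - g)) for x in xs]
    return scale, q

def dequantize(scale, q):
    return [scale * v for v in q]

scale, q = quantize_block([0.1, -0.4, 1.2, -3.0])
print(dequantize(scale, q))  # coarse 4-bit reconstruction of the block
```

Training natively in such a format means the matmuls themselves run at 4-bit precision, rather than quantizing a finished higher-precision model afterward.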

In practical performance, Nemotron 3 Super is a specialized tool for agentic reasoning.


It currently holds the No. 1 position on the DeepResearch Bench, a benchmark measuring an AI’s ability to conduct thorough, multi-step research across large document sets.

| Benchmark | Nemotron 3 Super | Qwen3.5-122B-A10B | GPT-OSS-120B |
| --- | --- | --- | --- |
| General Knowledge | | | |
| MMLU-Pro | 83.73 | 86.70 | 81.00 |
| Reasoning | | | |
| AIME25 (no tools) | 90.21 | 90.36 | 92.50 |
| HMMT Feb25 (no tools) | 93.67 | 91.40 | 90.00 |
| HMMT Feb25 (with tools) | 94.73 | 89.55 | n/a |
| GPQA (no tools) | 79.23 | 86.60 | 80.10 |
| GPQA (with tools) | 82.70 | 80.09 | n/a |
| LiveCodeBench (v5 2024-07↔2024-12) | 81.19 | 78.93 | 88.00 |
| SciCode (subtask) | 42.05 | 42.00 | 39.00 |
| HLE (no tools) | 18.26 | 25.30 | 14.90 |
| HLE (with tools) | 22.82 | 19.0 | n/a |
| Agentic | | | |
| Terminal Bench (hard subset) | 25.78 | 26.80 | 24.00 |
| Terminal Bench Core 2.0 | 31.00 | 37.50 | 18.70 |
| SWE-Bench (OpenHands) | 60.47 | 66.40 | 41.9 |
| SWE-Bench (OpenCode) | 59.20 | 67.40 | n/a |
| SWE-Bench (Codex) | 53.73 | 61.20 | n/a |
| SWE-Bench Multilingual (OpenHands) | 45.78 | 30.80 | n/a |
| TauBench V2: Airline | 56.25 | 66.0 | 49.2 |
| TauBench V2: Retail | 62.83 | 62.6 | 67.80 |
| TauBench V2: Telecom | 64.36 | 95.00 | 66.00 |
| TauBench V2: Average | 61.15 | 74.53 | 61.0 |
| BrowseComp with Search | 31.28 | 33.89 | n/a |
| BIRD Bench | 41.80 | 38.25 | n/a |
| Chat & Instruction Following | | | |
| IFBench (prompt) | 72.56 | 73.77 | 68.32 |
| Scale AI Multi-Challenge | 55.23 | 61.50 | 58.29 |
| Arena-Hard-V2 | 73.88 | 75.15 | 90.26 |
| Long Context | | | |
| AA-LCR | 58.31 | 66.90 | 51.00 |
| RULER @ 256k | 96.30 | 96.74 | 52.30 |
| RULER @ 512k | 95.67 | 95.95 | 46.70 |
| RULER @ 1M | 91.75 | 91.33 | 22.30 |
| Multilingual | | | |
| MMLU-ProX (avg over langs) | 79.36 | 85.06 | 76.59 |
| WMT24++ (en→xx) | 86.67 | 87.84 | 88.89 |

It also demonstrates significant throughput advantages, achieving up to 2.2x higher throughput than gpt-oss-120B and 7.5x higher than Qwen3.5-122B in high-volume settings.

Nvidia Nemotron 3 Super key benchmarks chart. (Image: Nvidia)

Custom ‘open’ license — commercial usage but with important caveats 

The release of Nemotron 3 Super under the Nvidia Open Model License Agreement (updated October 2025) provides a permissive framework for enterprise adoption, though it carries distinct “safeguard” clauses that differentiate it from pure open-source licenses like MIT or Apache 2.0.

Key Provisions for Enterprise Users:

  • Commercial Usability: The license explicitly states that models are “commercially usable” and grants a perpetual, worldwide, royalty-free license to sell and distribute products built on the model.

  • Ownership of Output: Nvidia makes no claim to the outputs generated by the model; the responsibility for those outputs—and the ownership of them—rests entirely with the user.

  • Derivative Works: Enterprises are free to create and own “Derivative Models” (fine-tuned versions), provided they include the required attribution notice: “Licensed by Nvidia Corporation under the Nvidia Open Model License.”

The “Red Lines”:

The license includes two critical termination triggers that production teams must monitor:

  1. Safety Guardrails: The license automatically terminates if a user bypasses or circumvents the model’s “Guardrails” (technical limitations or safety hyperparameters) without implementing a “substantially similar” replacement appropriate for the use case.

  2. Litigation Trigger: If a user institutes copyright or patent litigation against Nvidia alleging that the model infringes on their IP, their license to use the model terminates immediately.

This structure allows Nvidia to foster a commercial ecosystem while protecting itself from “IP trolling” and ensuring that the model isn’t stripped of its safety features for malicious use.

‘The team really cooked’

The release has generated significant buzz within the developer community. Chris Alexiuk, a Senior Product Research Engineer at Nvidia, heralded the launch on X under his handle @llm_wizard as a “SUPER DAY,” emphasizing the model’s speed and transparency. “Model is: FAST. Model is: SMART. Model is: THE MOST OPEN MODEL WE’VE DONE YET,” Alexiuk posted, highlighting the release of not just weights, but 10 trillion tokens of training data and recipes.


The industry adoption reflects this enthusiasm:

  • Cloud and Hardware: The model is being deployed as an Nvidia NIM microservice, allowing it to run on-premises via the Dell AI Factory or HPE, as well as across Google Cloud, Oracle, and shortly, AWS and Azure.

  • Production Agents: Companies like CodeRabbit (software development) and Greptile are integrating the model to handle large-scale codebase analysis, while industrial leaders like Siemens and Palantir are deploying it to automate complex workflows in manufacturing and cybersecurity.

As Kari Briski, Nvidia VP of AI Software, noted: “As companies move beyond chatbots and into multi-agent applications, they encounter… context explosion.”

Nemotron 3 Super is Nvidia’s answer to that explosion—a model that provides the “brainpower” of a 120B parameter system with the operational efficiency of a much smaller specialist. For the enterprise, the message is clear: the “thinking tax” is finally coming down.



I guess this wasn’t an Xbox after all


In 2024, Microsoft caused a lot of head-scratching and general bemusement with the launch of its “This is an Xbox” marketing campaign. Now, though, it appears the quandary over what is and isn’t an Xbox has been resolved. Game Developer noticed that the original blog post on Xbox Wire that kicked off the whole affair has been removed. It seems Xbox will be going in a new direction with its future promotions.

Maybe, since the new Project Helix hardware it has in the works is a more deliberate attempt to blur console and PC gaming, “This is an Xbox” would have become truly confusing as a tagline. Maybe, with the recent changing of the guard at the company, the top brass decided it was the right time to start fresh with a less meme-able marketing plan. Whatever the reason, we have enjoyed this opportunity to learn about the existential philosophy behind being an Xbox. And fortunately, although the blog post may be gone, the video trailer still exists whenever we need to remind ourselves of the many things that can be Xbox-ified.



Cedars-Sinai’s AI beats specialist models at reading heart scans


EchoPrime, published in Nature in February 2026, outperforms both task-specific AI tools and previous foundation models across 23 cardiac benchmarks, and its code, weights, and a demo are publicly available.

An echocardiogram is one of the most common diagnostic tools in cardiology: an ultrasound of the heart that reveals how it moves, how its chambers fill and empty, and whether its structure is compromised. Interpreting one requires training, time, and a specific kind of spatial attention, the ability to look at moving images of a beating heart and translate them into a clinical narrative.

Researchers at Cedars-Sinai Medical Center, working with colleagues from Kaiser Permanente Northern California, Stanford Health Care, Beth Israel Deaconess Medical Center in Boston, and Chang Gung Memorial Hospital in Taiwan, have built an AI system that can do the same thing.

EchoPrime, a video-based vision-language model, analyses echocardiogram footage and generates a written report of cardiac form and function. Its findings were published in Nature (volume 650, pages 970-977) in February 2026, under the title “Comprehensive echocardiogram evaluation with view primed vision language AI.”


The scale of the training is what sets EchoPrime apart. The model was trained on more than 12 million echocardiography videos paired with cardiologists’ written interpretations, drawn from 275,442 studies across 108,913 patients at Cedars-Sinai.


No previous AI model for echocardiography has been trained on data of that volume.

What it can do

Tested across five international health systems, EchoPrime achieved state-of-the-art performance on 23 diverse benchmarks of cardiac structure and function, outperforming both task-specific AI approaches (models trained to do one thing, such as measuring ejection fraction) and previous foundation models that aimed for broader capability.

The model’s outputs are designed to assist clinicians, not replace them: it produces a verbal summary that cardiologists can review and act on, rather than rendering a diagnosis autonomously.

The research team has made the model’s code, weights, and a working demo publicly available, a decision that reflects a broader shift in AI research towards open publication, and that will allow other institutions to test EchoPrime against their own patient populations.


The context around it

EchoPrime arrives in a year when AI misdiagnosis has been named one of the top patient safety threats by ECRI, the healthcare safety organisation. That context does not undermine EchoPrime’s promise so much as it frames the standard it will need to meet.

The goal is not an AI that sometimes reads echocardiograms accurately; it is one that does so consistently enough to reduce the burden on cardiologists without introducing new categories of error.

Cardiology has been a productive area for AI-assisted diagnostics precisely because the data (ultrasound video, electrocardiograms, imaging) is relatively structured and abundant.

The Cedars-Sinai work is arguably the most thorough attempt yet to turn that abundance of data into a generalised tool. Whether EchoPrime moves from published model to clinical deployment at scale depends on factors (regulatory approval, institutional adoption, liability) that the Nature paper does not address.


But as a demonstration of what is now technically possible in cardiac AI, it sets a new mark.



These Smart Glasses Can Translate Any Language Right Before Your Eyes


It’s one thing to be able to haltingly make an order from a menu in a restaurant in another language, but quite another to be able to engage in fluent conversation with a native speaker. Dedicated study is often required to arrive at this point, but as is so often the case today, AI technology seems to have arrived at a rather brilliant shortcut: language-translating smart glasses. 

Alibaba has grown from a tiny startup in 1999 to the powerhouse behind Alipay, Alibaba.com, and more. It has now expanded into yet more new tech territory, with the Quark AI Glasses. Two varieties, the G1 and S1 models, were shown off at Mobile World Congress 2026 in Barcelona, and their ability to translate languages that those nearby are speaking is fascinating. The glasses have a display called Waveguide, a subtle sort of overlay within the lenses that the user can control via tap, double-tap, and swipe motions on the arm of the glasses. A dedicated translation app will detect if someone nearby is speaking a different language and automatically display translated text. 


The Waveguide’s bright green font, intended to be clearly visible yet unobtrusive, seems well suited to this transcription function, which is powered by Qwen AI models. Familiar privacy concerns arise, and there’s also the concern about the accuracy and speed of AI translation in its various forms, but there’s a lot of potential here. Also, of course, there’s a lot more that the Quark AI Glasses can do. It’s hard to say whether smart glasses are truly a viable alternative to computer monitors, but they certainly have a bag of tricks. 


Some more features and functions of Alibaba’s Quark AI Glasses

The translation feature, as advanced as its real-time capabilities may prove to be, could be quite niche for a lot of potential buyers. The goal for AI assistants is to support and fit in with the user’s everyday tasks, first and foremost, and so it’s important that the Quark AI Glasses have a lot of utility when it comes to just that. Alibaba Group boasts that, being “deeply integrated with Alibaba’s ecosystem,” the new models offer associated features such as Taobao price comparison, Fliggy notifications and updates when traveling, and Amap assistance for finding your way around, and also implement voice and touch controls, along with bone conduction audio features.

The ideal with smart glasses is to achieve a lightweight, natural feel that almost makes you forget you aren’t wearing standard glasses, even though there are some places where you should never wear them. These models, it seems, were created to be subtle and convenient in this way, down to the batteries in the arms that can be quickly swapped out as needed. Lasting about 24 hours per charge, this swappable-battery system is still rare among smart glasses and, combined with the very reasonable pricing structure, is another feature that could see the Quark glasses really take off in the Chinese market.

The G1 and S1 launched in China in December 2025, each in three different editions. The S1 is the dual-display option, and as such, it’s the premium version: available from ¥3,799 (approximately $552), it’s considerably pricier than the G1 model, which is up for purchase from ¥1,899 (around $276). However, there’s no release date for the U.S. market just yet.



T-Mobile Added a New Unlimited Phone Plan, but Is It a Better Value?


If you’re looking for a phone plan that includes plenty of perks for three or more people, T-Mobile’s new Better Value plan is appealing. The company calls this a limited-time offer but hasn’t said how long it will be available, so now is a good time to check it out. As with all phone plans, be sure to read the fine print.

In our lists of the best cellphone plans, best unlimited data plans and best T-Mobile plans, we rank T-Mobile’s Essentials plan highly. After reviewing the specifics of the Better Value plan, the Experience More plan — the No. 2 unlimited postpaid plan — presents a more interesting comparison. Let’s see how they stack up.

Better Value plan pricing and features compared

For an account with three lines, the monthly cost of the Better Value plan is $140 (with AutoPay active), plus applicable taxes and fees. Experience More similarly costs $140 a month for three lines. The Essentials plan costs $90 per month for three lines, but lacks most of the add-ons that make the other two plans appealing.


Both the Experience More and Better Value plans offer unlimited data on T-Mobile’s 5G network, a five-year price guarantee and two-year device upgrades.

However, the Better Value plan includes 250GB of high-speed mobile hotspot data, compared to 60GB for the Experience More plan. After those amounts have been used up, hotspot data remains unlimited at a reduced 600Kbps. (By comparison, T-Mobile’s highest-tier plan, Experience Beyond, includes unlimited high-speed hotspot data.)

Better Value also includes more high-speed data when you’re in other countries, with 30GB available in Mexico and Canada, as well as in 215 countries and areas worldwide. That’s more than the Experience More plan, which offers 15GB in North America and 5GB elsewhere.

Announcement of the T-Satellite launch date on stage at a T-Mobile event. (Image: Jeff Carlson/CNET)

T-Satellite is also included in the Better Value plan, a feature that costs $10 extra for every other T-Mobile plan except for Experience Beyond.

One appeal of these plans, especially in the context of families, is the set of included streaming services. The Better Value plan and Experience More plan both include Netflix Standard with Ads and Hulu, and Apple TV can be added for $3 per month.

Important qualifications

Here’s where the fine print comes in, and it appears that T-Mobile is aiming to inspire and reward loyalty.


If you’re switching from a different carrier, the Better Value plan requires three or more lines and two eligible ports. Although it’s likely a family or small business would be transferring from another provider and not keeping its other lines, Better Value is an effort to build up group plans and incentivize switching away from other carriers.

If you’re already set up with T-Mobile, the Better Value plan requires that you have been a T-Mobile postpaid customer for at least five years. And if you have that much tenure, you should be aware that your current plan might have taxes and fees included, whereas the Better Value plan doesn’t.

The Better Value plan is available in the T-Life app and on T-Mobile.com. When you enter a retail T-Mobile store, you’ll likely be directed to the app or website by an employee.

And lastly, T-Mobile brands this as a limited-time offer, but I confirmed with a spokesperson that it currently has no end date. 

Read more: I got an in-depth look at T-Mobile’s emergency response programs.

T-Mobile Better Value vs. Experience More plans

Feature | Better Value plan | Experience More plan
High-speed data | 5G, unlimited | 5G, unlimited
Mobile hotspot | 250GB high-speed, then unlimited at 600Kbps | 60GB high-speed, then unlimited at 600Kbps
International calls/data | Unlimited talk and text; 30GB high-speed data in Mexico, Canada, and 215+ countries, then unlimited at 256Kbps | Unlimited talk and text; 15GB high-speed data in Mexico/Canada and 5GB in 215+ countries, then unlimited at 256Kbps
Extras | Netflix Standard with Ads; Hulu with Ads; Magenta Status; Apple TV for $3 per month | Netflix Standard with Ads; 1 year of AAA; Magenta Status; Apple TV for $3 per month
Price guarantee | 5 years | 5 years
T-Satellite | Included | Optional $10 add-on
Cost for 3 lines | $140 | $140
Limited-time offer? | Yes | No

Are We Finally At The Point Where Phones Can Replace Computers?

There was an ideal of convergence, a long time ago, when one device would be all you need, digitally speaking. [ETA Prime] on YouTube seems to think we’ve reached that point, and his recent video about the Samsung S26 Ultra makes a good case for it. Part of that is software: Samsung’s DeX is a huge enabler for this use case. Part of that is hardware: the S26 Ultra, as the upcoming latest-and-greatest flagship phone, has absurd stats and a price tag to match.

First, it’s got 12 GB of that unobtanium once called “RAM”. It’s got an 8-core ARM processor in its Snapdragon Elite SoC, with the two performance cores clocked at 4.74 GHz, which isn’t a world record, but it’s pretty snappy. The other six cores aren’t exactly dawdling either, at 3.62 GHz. Unless you’re among the very youngest of our readers, you probably remember a time when the world’s greatest supercomputers had as much computing power as this phone.

So it should be no surprise that when [ETA Prime] plugs it into a monitor (using USB-C, natch) he’s able to do all the usual computational tasks without trouble. A big part of that is the desktop mode Samsung phones have had for a while now; we’ve seen hackers make use of it in years gone by. It’s still Android, but Android with a desktop-and-windows interface.

What are the hard tasks? Well, there’s photo and video editing, which the hardware can handle, though [ETA Prime] notes that it’s held back a bit because Adobe doesn’t offer its full suite on Android. But what’s really taxing for most of us is gaming. Android gaming? Well, obviously a flagship phone can handle anything in the Play Store.

It’s PC gaming that’s pretty impressive, considering the daisy chain of compatibility needed last time we looked at gaming on ARM. Cyberpunk 2077 gets frame rates near 60, but he needs to drop down to “low” graphics and 720p to do it. You may find that ample, or you may find it unplayable; there’s really no accounting for taste.

We might not always like carrying an everything device with us at all times, but there’s something to be said for not duplicating that functionality on your desk. Give it a couple of years, when these things hit the used market at decent prices, and unless PC parts drop in price, convergence might start to seem like a great idea to those of us who aren’t big gamers and don’t need floppy drives.

A DOGE Bro Allegedly Walked Out Of Social Security With 500 Million Americans’ Records On A Thumb Drive And Expected A Pardon If Caught

from the seems-bads dept

From the very beginning of the DOGE saga, many of us raised alarms about what would happen when a bunch of inexperienced twenty-somethings were handed unfettered access to the most sensitive databases in the federal government with essentially zero oversight and zero adherence to the security protocols that exist for very good reasons. We wrote about it when a 25-year-old was pushing untested code into the Treasury’s $6 trillion payment system. We published a piece about it, originally reported by ProPublica, when DOGE operatives stormed into Social Security headquarters and demanded access to everything while ignoring the career staff who actually understood the systems.

That ProPublica deep dive painted a picture of 21-to-24-year-olds who didn’t understand the systems they were demanding access to, had “pre-ordained answers and weren’t interested in anything other than defending decisions they’d already made,” and were operating with essentially no accountability. The former acting commissioner described the operation as “a bunch of people who didn’t know what they were doing, with ideas of how government should run—thinking it should work like a McDonald’s or a bank—screaming all the time.”

These are the people who were handed the keys to the most sensitive databases the federal government holds.

And now we have what appears to be the entirely predictable consequence of all of that: direct exfiltration of data in a manner known to break the law, but zero concern over that fact, because of the assurances of a Trump pardon if caught.

The Washington Post has a stunning whistleblower report alleging that a former DOGE software engineer, who had been embedded at the Social Security Administration, walked out with databases containing records on more than 500 million living and dead Americans—on a thumb drive—and then allegedly tried to get colleagues at his new private sector job to help him upload the data to company systems.

According to the disclosure, the former DOGE software engineer, who worked at the Social Security Administration last year before starting a job at a government contractor in October, allegedly told several co-workers that he possessed two tightly restricted databases of U.S. citizens’ information, and had at least one on a thumb drive. The databases, called “Numident” and the “Master Death File,” include records for more than 500 million living and dead Americans, including Social Security numbers, places and dates of birth, citizenship, race and ethnicity, and parents’ names. The complaint does not include specific dates of when he is said to have told colleagues this information, but at least one of the alleged events unfolded around early January, according to the complaint. While working at DOGE, the engineer had approved access to Social Security data.

In the past, this was the kind of thing the US government actually did a decent job of protecting and keeping private. Now it has DOGE bros walking out the door with it on thumb drives. Holy shit!

And here’s the detail that really tells you everything about the culture DOGE created inside these agencies:

He told another colleague, who refused to help him upload the data because of legal concerns, that he expected to receive a presidential pardon if his actions were deemed to be illegal, according to the complaint.

According to this complaint, this person allegedly understood that what he was doing might be illegal, did it anyway, and had already calculated that the political environment would protect him from consequences. The Elon Musk DOGE bros clearly believed they ran the show and that anyone associated with DOGE was entirely above the law on anything they did.

Perhaps just as troubling, the complaint also alleges that after leaving government employment, the DOGE bro claimed he still had his agency computer and credentials, which he described as carrying “God-level” security access to Social Security’s systems.

The complaint alleges that after leaving government employment, the former DOGE member told colleagues he had a thumb drive with Social Security data and had kept his agency computer and credentials, which he allegedly said carried largely unrestricted “God-level” security access to the agency’s systems — a level of access no other company employee had been granted in its work with SSA.

The Social Security Administration says he had turned in his laptop and lost his credential privileges when he departed. His lawyer denies all alleged wrongdoing, and both the agency and the company said they investigated the claims and didn’t find evidence to confirm them. The company said it conducted a “thorough” two-day internal investigation.

Two whole days! Investigating themselves. On an issue where ignoring it benefits them.

But the SSA’s inspector general is investigating, and has alerted Congress and the Government Accountability Office, which has its own audit of DOGE’s data access underway.

And this whistleblower complaint, filed back in January, surfaces alongside a separate complaint from the SSA’s former chief data officer, Charles Borges, which alleges that DOGE members improperly uploaded copies of Americans’ Social Security data to a digital cloud.

A separate complaint, made in August by the agency’s former chief data officer, Charles Borges, alleges members of DOGE improperly uploaded copies of Americans’ Social Security data to a digital cloud, putting individuals’ private information at risk. In January, the Trump administration acknowledged DOGE staffers were responsible for separate data breaches at the agency, including sharing data through an unapproved third-party service and that one of the DOGE staffers signed an agreement to share data with an unnamed political group aiming to overturn election results in several states.

We wrote about that other leak at the time, of a DOGE bro sharing data with an election denier group.

All of this confirms what many people expected, and none of it should surprise anyone who was paying attention: Donald Trump allowed Elon Musk and his crew of overconfident know-nothings to treat federal government computer systems as their personal playthings, where they could access and exfiltrate any data they wanted for whatever ideological reason they wanted.

And we’re only hearing about this because a whistleblower came forward and because a former chief data officer had the courage to file a complaint. How many similar incidents happened at other agencies where no one spoke up? DOGE operatives were embedded across the entire federal government, accessing heavily restricted databases and, as the Washington Post puts it, “merging long-siloed repositories.” Every single one of those agencies had the same dynamic: young, inexperienced but overconfident engineers demanding unfettered access, career staff pushing back and being overruled, and essentially no security protocols being followed.

Former chief data officer Borges put it about as well as anyone could:

“This is absolutely the worst-case scenario,” Borges told The Post. “There could be one or a million copies of it, and we will never know now.”

Once it’s out, you can’t put it back. We’re going to be learning about the consequences of DOGE’s ransacking of federal systems for years, maybe decades. And we’re finding out that the waste, fraud, and abuse we were told DOGE was there to find appear to have mostly been in its own actions.

Filed Under: doge, elon musk, entitlement, privacy, security, social security
