
DeepSeek-V4 arrives with near state-of-the-art intelligence at 1/6th the cost of Opus 4.7, GPT-5.5


The whale has resurfaced.

DeepSeek, the Chinese AI startup spun out of quantitative trading firm High-Flyer Capital Management, became a near-overnight global sensation in January 2025 with the release of its open-source R1 model, which matched the performance of proprietary models from U.S. giants.

It’s been an epoch in AI terms since then, and while DeepSeek has released several updates to that model and to its V3 series, the international AI and business community has largely been waiting with bated breath for a follow-up to the R1 moment.

Now it’s arrived with last night’s release of DeepSeek-V4, a 1.6-trillion-parameter Mixture-of-Experts (MoE) model available free under the commercially friendly open-source MIT License, which nears — and on some benchmarks surpasses — the performance of the world’s most advanced closed-source systems at approximately 1/6th the cost via its application programming interface (API).


This release—which DeepSeek AI researcher Deli Chen described on X as a “labor of love” 484 days after the launch of V3—is being hailed as the “second DeepSeek moment”.

As Chen noted in his post, “AGI belongs to everyone”. It’s available now on AI code sharing community Hugging Face and through DeepSeek’s API.
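For developers who want to pull the weights directly from Hugging Face, a download along the following lines should work once the repository is indexed; note that the repository ID shown here is an assumption based on DeepSeek's usual naming convention, not a confirmed identifier.

```python
from huggingface_hub import snapshot_download

# Hypothetical repo ID; check DeepSeek's Hugging Face organization for the exact name.
local_dir = snapshot_download(repo_id="deepseek-ai/DeepSeek-V4")
print(f"Weights downloaded to {local_dir}")
```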

Frontier-class AI gets pushed into a lower price band

The most immediate impact of the DeepSeek-V4 launch is economic. The pricing table below shows that DeepSeek is not pricing its new Pro model at near-zero levels, but it is still pushing high-end model access into a far lower cost tier than the leading U.S. frontier models.

DeepSeek-V4-Pro is priced through its API at $1.74 USD per 1 million input tokens on a cache miss and $3.48 per million output tokens.


DeepSeek V4 API pricing chart. Credit: DeepSeek AI

That puts a simple one-million-input, one-million-output comparison at $5.22. With cached input, the input price drops to $0.145 per million tokens, bringing that same blended comparison down to $3.625.

That is dramatically cheaper than the current premium pricing from OpenAI and Anthropic. GPT-5.5 is priced at $5.00 per million input tokens and $30.00 per million output tokens, for a combined $35.00 in the same simple comparison.

Claude Opus 4.7 is priced at $5.00 input and $25.00 output, for a combined $30.00.
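For readers who want to verify the arithmetic, here is a minimal sketch of the blended-cost comparison used throughout this article, assuming an illustrative workload of one million input tokens and one million output tokens at the cache-miss prices quoted above.

```python
def blended_cost(input_price, output_price, input_mtok=1.0, output_mtok=1.0):
    """Cost in USD for a workload, with prices quoted per 1M tokens."""
    return input_price * input_mtok + output_price * output_mtok

deepseek_pro = blended_cost(1.74, 3.48)          # $5.22
deepseek_pro_cached = blended_cost(0.145, 3.48)  # $3.625 with cached input
gpt_55 = blended_cost(5.00, 30.00)               # $35.00
opus_47 = blended_cost(5.00, 25.00)              # $30.00

print(f"V4-Pro vs GPT-5.5:  {deepseek_pro / gpt_55:.2%} of the cost")   # ~15%, roughly 1/7
print(f"V4-Pro vs Opus 4.7: {deepseek_pro / opus_47:.2%} of the cost")  # ~17%, roughly 1/6
```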

| Model | Input ($/1M tokens) | Output ($/1M tokens) | Total (1M in + 1M out) | Source |
| --- | --- | --- | --- | --- |
| Grok 4.1 Fast | $0.20 | $0.50 | $0.70 | xAI |
| MiniMax M2.7 | $0.30 | $1.20 | $1.50 | MiniMax |
| Gemini 3 Flash | $0.50 | $3.00 | $3.50 | Google |
| Kimi-K2.5 | $0.60 | $3.00 | $3.60 | Moonshot |
| MiMo-V2-Pro (≤256K) | $1.00 | $3.00 | $4.00 | Xiaomi MiMo |
| GLM-5 | $1.00 | $3.20 | $4.20 | Z.ai |
| GLM-5-Turbo | $1.20 | $4.00 | $5.20 | Z.ai |
| DeepSeek-V4-Pro | $1.74 | $3.48 | $5.22 | DeepSeek |
| GLM-5.1 | $1.40 | $4.40 | $5.80 | Z.ai |
| Claude Haiku 4.5 | $1.00 | $5.00 | $6.00 | Anthropic |
| Qwen3-Max | $1.20 | $6.00 | $7.20 | Alibaba Cloud |
| Gemini 3 Pro | $2.00 | $12.00 | $14.00 | Google |
| GPT-5.2 | $1.75 | $14.00 | $15.75 | OpenAI |
| GPT-5.4 | $2.50 | $15.00 | $17.50 | OpenAI |
| Claude Sonnet 4.5 | $3.00 | $15.00 | $18.00 | Anthropic |
| Claude Opus 4.7 | $5.00 | $25.00 | $30.00 | Anthropic |
| GPT-5.5 | $5.00 | $30.00 | $35.00 | OpenAI |
| GPT-5.4 Pro | $30.00 | $180.00 | $210.00 | OpenAI |

On standard, cache-miss pricing, DeepSeek-V4-Pro comes in at roughly one-seventh the cost of GPT-5.5 and about one-sixth (1/6th) the cost of Claude Opus 4.7.


With cached input, the gap widens: DeepSeek-V4-Pro costs about one-tenth as much as GPT-5.5 and about one-eighth as much as Claude Opus 4.7.

The more extreme near-zero story belongs to DeepSeek-V4-Flash, not the Pro model. Flash is priced at $0.14 per million input tokens on a cache miss and $0.28 per million output tokens, for a combined $0.42.

With cached input, that drops to $0.308. In that case, DeepSeek’s cheaper model is more than 98% below GPT-5.5 and Claude Opus 4.7 in a simple input-plus-output comparison, or nearly 1/100th the cost — though the performance dips significantly.

DeepSeek is compressing advanced model economics into a much lower band, forcing developers and enterprises to revisit the cost-benefit calculation around premium closed models.


For companies running large inference workloads, that price gap can change what is worth automating. Tasks that look too expensive on GPT-5.5 or Claude Opus 4.7 may become economically viable on DeepSeek-V4-Pro, and even more so on DeepSeek-V4-Flash. The launch does not make intelligence free, but it does make the market harder for premium providers to defend on performance alone.

Benchmarking the frontier: DeepSeek-V4-Pro gets close, but GPT-5.5 and Opus 4.7 still lead on most shared tests

DeepSeek-V4-Pro-Max is best understood as a major open-weight leap, not a clean across-the-board defeat of the newest closed frontier systems.

The model’s strongest benchmark claims come from DeepSeek’s own comparison tables, where it is shown against GPT-5.4 xHigh, Claude Opus 4.6 Max and Gemini 3.1 Pro High and bests them on several tests, including Codeforces and Apex Shortlist.

But that is not the same as a head-to-head against OpenAI’s newer GPT-5.5 or Anthropic’s newer Claude Opus 4.7.


Looking only at DeepSeek-V4 versus the latest proprietary models, the picture is more restrained.

On this shared set, GPT-5.5 and Claude Opus 4.7 still lead most categories.

DeepSeek-V4-Pro-Max’s best showing is on BrowseComp, the benchmark measuring agentic AI web-browsing prowess, especially at locating hard-to-find information, where it scores 83.4%, narrowly behind GPT-5.5 at 84.4% and ahead of Claude Opus 4.7 at 79.3%.

On Terminal-Bench 2.0, DeepSeek scores 67.9%, close to Claude Opus 4.7’s 69.4%, but far behind GPT-5.5’s 82.7%.

| Benchmark | DeepSeek-V4-Pro-Max | GPT-5.5 | GPT-5.5 Pro (where shown) | Claude Opus 4.7 | Best result among these |
| --- | --- | --- | --- | --- | --- |
| GPQA Diamond | 90.1% | 93.6% | n/a | 94.2% | Claude Opus 4.7 |
| Humanity’s Last Exam, no tools | 37.7% | 41.4% | 43.1% | 46.9% | Claude Opus 4.7 |
| Humanity’s Last Exam, with tools | 48.2% | 52.2% | 57.2% | 54.7% | GPT-5.5 Pro |
| Terminal-Bench 2.0 | 67.9% | 82.7% | n/a | 69.4% | GPT-5.5 |
| SWE-Bench Pro / SWE Pro | 55.4% | 58.6% | n/a | 64.3% | Claude Opus 4.7 |
| BrowseComp | 83.4% | 84.4% | 90.1% | 79.3% | GPT-5.5 Pro |
| MCP Atlas / MCPAtlas Public | 73.6% | 75.3% | n/a | 79.1% | Claude Opus 4.7 |

The shared academic-reasoning results favor the closed models: On GPQA Diamond, DeepSeek-V4-Pro-Max scores 90.1%, while GPT-5.5 reaches 93.6% and Claude Opus 4.7 reaches 94.2%.

On Humanity’s Last Exam without tools, DeepSeek scores 37.7%, behind GPT-5.5 at 41.4%, GPT-5.5 Pro at 43.1% and Claude Opus 4.7 at 46.9%. With tools enabled, DeepSeek rises to 48.2%, but still trails GPT-5.5 at 52.2%, GPT-5.5 Pro at 57.2% and Claude Opus 4.7 at 54.7%.

The agentic and software-engineering results are more mixed, but they still show DeepSeek-V4-Pro-Max trailing GPT-5.5 and Opus 4.7.

On Terminal-Bench 2.0, DeepSeek’s 67.9% is competitive with Claude Opus 4.7’s 69.4%, but GPT-5.5 is much higher at 82.7%.


On SWE-Bench Pro, DeepSeek’s 55.4% trails GPT-5.5 at 58.6% and Claude Opus 4.7 at 64.3%. On MCP Atlas, DeepSeek’s 73.6% is slightly behind GPT-5.5 at 75.3% and Claude Opus 4.7 at 79.1%.

BrowseComp is the standout: DeepSeek’s 83.4% beats Claude Opus 4.7’s 79.3% and nearly matches GPT-5.5’s 84.4%, though GPT-5.5 Pro’s 90.1% remains well ahead.

So ultimately, DeepSeek-V4-Pro-Max does not appear to dethrone GPT-5.5 or Claude Opus 4.7 on the benchmarks that can be directly compared across the companies’ published tables. But it gets close enough on several of them — especially BrowseComp, Terminal-Bench 2.0 and MCP Atlas — that its much lower API pricing becomes the headline.

In practical terms, DeepSeek does not need to win every leaderboard row to matter. If it can deliver near-frontier performance on many enterprise-relevant agent and reasoning tasks at roughly one-sixth to one-seventh the standard API cost of GPT-5.5 or Claude Opus 4.7, it still forces a major rethink of the economics of advanced AI deployment.


DeepSeek-V4-Pro-Max is clearly the strongest open-weight model in the field right now, and it is unusually close to frontier closed systems on several practical benchmarks.

While GPT-5.5 and Claude Opus 4.7 still retain the lead in most direct head-to-head comparisons across the companies’ published benchmark charts, DeepSeek-V4-Pro gets close while being dramatically cheaper and openly available.

A big jump from DeepSeek V3.2

To understand the magnitude of this release, one must look at the performance gains of the base models. DeepSeek-V4-Pro-Base represents a significant advancement over the previous generation, DeepSeek-V3.2-Base. In World Knowledge, V4-Pro-Base achieved 90.1 on MMLU (5-shot) compared to V3.2’s 87.8, and a massive jump on MMLU-Pro from 65.5 to 73.5.

The improvement in high-level reasoning and verified facts is even more pronounced: on SuperGPQA, V4-Pro-Base reached 53.9 compared to V3.2’s 45.0, and on the FACTS Parametric benchmark, it more than doubled its predecessor’s performance, jumping from 27.1 to 62.6. Simple-QA verified scores also saw a dramatic rise from 28.3 to 55.2.


Long-context capabilities have also been refined. On LongBench-V2, V4-Pro-Base scored 51.5, significantly outpacing the 40.2 achieved by V3.2-Base. In Code and Math, V4-Pro-Base reached 76.8 on HumanEval (Pass@1), up from 62.8 on V3.2-Base.

These numbers underscore that DeepSeek has not just optimized for inference cost, but has fundamentally improved the intelligence density of its base architecture. The efficiency story is equally compelling for the Flash variant. DeepSeek-V4-Flash-Base, despite using a substantially smaller number of parameters, outperforms the larger V3.2-Base across a wide range of benchmarks, particularly in long-context scenarios.

A new information ‘traffic controller,’ Manifold-Constrained Hyper-Connections (mHC)

DeepSeek’s ability to offer these prices and performance figures is rooted in radical architectural innovations detailed in its technical report also released today, “Towards Highly Efficient Million-Token Context Intelligence.”

The standout technical achievement of V4 is its native one-million-token context window. Historically, maintaining such a large context required massive memory for the key-value (KV) cache.


DeepSeek solved this by introducing a Hybrid Attention Architecture that combines Compressed Sparse Attention (CSA) to reduce initial token dimensionality and Heavily Compressed Attention (HCA) to aggressively compress the memory footprint for long-range dependencies.

In practice, the V4-Pro model requires only 10% of the KV cache and 27% of the single-token inference FLOPs compared to its predecessor, the DeepSeek-V3.2, even when operating at a 1M token context.
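To see why KV-cache compression is the gating factor at a million-token context, consider a back-of-the-envelope sketch. The layer count, head count, and head dimension below are illustrative assumptions, not DeepSeek's published configuration; the point is the scale of memory that full-resolution attention would require.

```python
def kv_cache_bytes(seq_len, num_layers, num_kv_heads, head_dim, bytes_per_value=2):
    # 2x for keys and values, stored per layer, per KV head, per token (fp16/bf16 = 2 bytes each).
    return 2 * num_layers * num_kv_heads * head_dim * seq_len * bytes_per_value

# Illustrative dense-attention configuration (assumed, not DeepSeek's actual one).
full = kv_cache_bytes(seq_len=1_000_000, num_layers=64, num_kv_heads=16, head_dim=128)
print(f"Uncompressed KV cache at 1M tokens: {full / 1e9:.0f} GB per sequence")
print(f"At the reported 10% footprint:      {full * 0.10 / 1e9:.0f} GB per sequence")
```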

To stabilize a network of 1.6 trillion parameters, DeepSeek moved beyond traditional residual connections. The company’s researchers incorporated Manifold-Constrained Hyper-Connections (mHC) to strengthen signal propagation across layers while preserving the model’s expressivity.

mHC allows an AI to have a much wider flow of information (so it can learn more complex things) without the risk of the model becoming unstable or “breaking” during its training. It’s like giving a city a 10-lane highway but adding a perfect AI traffic controller to ensure no one ever hits the brakes.


This is paired with the Muon optimizer, which allowed the team to achieve faster convergence and greater training stability during the pre-training on more than 32T diverse and high-quality tokens.

This pre-training data was filtered to remove auto-generated content, mitigating the risk of model collapse and prioritizing content with unique academic value. The model’s 1.6T parameters use a Mixture-of-Experts (MoE) design in which only 49B parameters are activated per token, further driving down compute requirements.
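That means only about 3% of the network runs for any given token. The toy sketch below shows the general shape of top-k expert routing in a MoE layer; the expert count, gating scheme, and dimensions are illustrative assumptions and not DeepSeek's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoELayer(nn.Module):
    """Toy Mixture-of-Experts layer: each token is routed to its top-k experts."""
    def __init__(self, d_model=512, num_experts=64, k=4):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(d_model, num_experts)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(), nn.Linear(4 * d_model, d_model))
            for _ in range(num_experts)
        ])

    def forward(self, x):                              # x: [tokens, d_model]
        scores = self.gate(x)                          # [tokens, num_experts]
        topk_scores, topk_idx = scores.topk(self.k, dim=-1)
        weights = F.softmax(topk_scores, dim=-1)       # renormalize over the selected experts
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e in topk_idx[:, slot].unique().tolist():
                mask = topk_idx[:, slot] == e          # tokens whose slot-th choice is expert e
                out[mask] += weights[mask, slot].unsqueeze(-1) * self.experts[e](x[mask])
        return out
```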

Training the mixture-of-experts (MoE) to work as a whole

DeepSeek-V4 was not simply trained; it was “cultivated” through a unique two-stage paradigm.

  1. First, through Independent Expert Cultivation, domain-specific experts were trained through Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL) using the GRPO (Group Relative Policy Optimization) algorithm. This allowed each expert to master specialized skills like mathematical reasoning or codebase analysis.

  2. Second, Unified Model Consolidation integrated these distinct proficiencies into a single model via on-policy distillation, in which the unified model acts as the student, learning to minimize a reverse KL loss against the expert teacher models (a minimal sketch of that objective follows below). This distillation process ensures that the model preserves the specialized capabilities of each expert while operating as a cohesive whole.
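On-policy distillation with a reverse KL objective means the student generates its own outputs and is then penalized for placing probability mass where the teacher would not. A minimal sketch of that per-token loss, assuming matching vocabularies between student and teacher, might look like the following; it illustrates the objective rather than reproducing DeepSeek's training code.

```python
import torch
import torch.nn.functional as F

def reverse_kl_distillation_loss(student_logits, teacher_logits):
    """Per-token reverse KL, D_KL(student || teacher), averaged over positions.

    Both tensors have shape [batch, seq_len, vocab]; in on-policy distillation the
    sequences are sampled from the student and then scored by the frozen teacher.
    """
    student_logp = F.log_softmax(student_logits, dim=-1)
    teacher_logp = F.log_softmax(teacher_logits, dim=-1)
    kl = (student_logp.exp() * (student_logp - teacher_logp)).sum(dim=-1)
    return kl.mean()
```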

The model’s reasoning capabilities are further segmented into three increasing “effort” modes.

  1. The “Non-think” mode provides fast, intuitive responses for routine tasks.

  2. “Think High” applies deliberate logical analysis for complex problem-solving.

  3. Finally, “Think Max” pushes the boundaries of model reasoning, bridging the gap with frontier models on complex reasoning and agentic tasks. This flexibility allows users to match the compute effort to the difficulty of the task, further enhancing cost-efficiency.

Breaking the Nvidia GPU stranglehold with China’s homegrown Huawei Ascend NPUs

While the model weights are the headline, the software stack released alongside them is arguably more important for the future of “Sovereign AI.”

Analyst Rui Ma highlighted a single sentence from the release as the most critical: DeepSeek validated their fine-grained Expert Parallelism (EP) scheme on Huawei Ascend NPUs (neural processing units).

By achieving a 1.50x to 1.73x speedup on non-Nvidia GPU platforms, DeepSeek has provided a blueprint for high-performance AI deployment that is resilient to Western GPU supply chains and export controls.

However, it’s important to note that DeepSeek says it still used officially licensed, legally obtained Nvidia GPUs for DeepSeek-V4’s training, in addition to the Huawei NPUs.


DeepSeek has also open-sourced the MegaMoE mega-kernel as a component of its DeepGEMM library. This CUDA-based implementation delivers up to a 1.96x speedup for latency-sensitive tasks like RL rollouts and high-speed agent serving.

This move ensures that developers can run these massive models with extreme efficiency on existing hardware, further cementing DeepSeek’s role as the primary driver of open-source AI infrastructure.

The technical report emphasizes that these optimizations are crucial for supporting a standard 1M context across all official services.

Licensing and local deployment

DeepSeek-V4 is released under the MIT License, one of the most permissive licenses in the industry. This allows developers to use, copy, modify, and distribute the weights for commercial purposes without royalties—a stark contrast to the “restricted” open-weight licenses favored by other companies.


For local deployment, DeepSeek recommends setting sampling parameters to temperature = 1.0 and top_p = 1.0. For those utilizing the “Think Max” reasoning mode, the team suggests setting the context window to at least 384K tokens to avoid truncating the model’s internal reasoning chains.

The release includes a dedicated encoding folder with Python scripts demonstrating how to encode messages in OpenAI-compatible format and parse the model’s output, including reasoning content.
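Because the request format is OpenAI-compatible, the existing OpenAI Python client can simply be pointed at DeepSeek's endpoint. The sketch below applies the sampling parameters from the deployment guidance; the model identifier is a placeholder, since the exact V4 model names offered through the API may differ.

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.deepseek.com",  # DeepSeek's OpenAI-compatible endpoint
    api_key="YOUR_DEEPSEEK_API_KEY",
)

response = client.chat.completions.create(
    model="deepseek-v4-pro",  # placeholder; check DeepSeek's docs for the real identifier
    messages=[{"role": "user", "content": "Summarize the MIT License in two sentences."}],
    temperature=1.0,  # recommended sampling parameters
    top_p=1.0,
)
print(response.choices[0].message.content)
```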

DeepSeek-V4 is also seamlessly integrated with leading AI agents like Claude Code, OpenClaw, and OpenCode. This native integration underscores its role as a bedrock for developer tools, providing an open-source alternative to the proprietary ecosystems of major cloud providers.

Community reactions and what comes next

The community reaction has been one of shock and validation. Hugging Face officially welcomed the “whale” back, stating that the era of cost-effective 1M context length has arrived.


Industry experts noted that the “second DeepSeek moment” has effectively reset the developmental trajectory of the entire field, placing massive pressure on closed-source providers like OpenAI and Anthropic to justify their premiums.

AI evaluation firm Vals AI noted that DeepSeek-V4 is now the “#1 open-weight model on our Vibe Code Benchmark, and it’s not close”.

DeepSeek is moving quickly to retire its older architectures. The company announced that the legacy deepseek-chat and deepseek-reasoner endpoints will be fully retired on July 24, 2026. All traffic is currently being rerouted to the V4-Flash architecture, signifying a total transition to the million-token standard.

DeepSeek-V4 is more than just a new model; it is a challenge to the status quo. By proving that architectural innovation can substitute for raw compute maximalism, DeepSeek has made the highest levels of AI intelligence accessible to the global developer community at a far lower cost. That could benefit the globe, even at a time when lawmakers and leaders in Washington, D.C. are raising concerns about Chinese labs “distilling” from U.S. proprietary models to train open-source models, and fears that such open-source models, or jailbroken proprietary ones, could be used to create weapons and commit acts of terror.


The truth is, while all of these are potential risks, as they were and have been with prior technologies that broadened information access, like search and the internet itself, the benefits seem to far outweigh them. DeepSeek’s quest to keep frontier AI models open benefits the entire planet of potential AI users, especially enterprises looking to adopt the cutting edge at the lowest possible cost.


Today’s NYT Connections: Sports Edition Hints, Answers for April 25 #579


Looking for the most recent regular Connections answers? Click here for today’s Connections hints, as well as our daily answers and hints for The New York Times Mini Crossword, Wordle and Strands puzzles.


Today’s Connections: Sports Edition could trip you up, unless you know your video games. If you’re struggling with today’s puzzle but still want to solve it, read on for hints and the answers.

Connections: Sports Edition is published by The Athletic, the subscription-based sports journalism site owned by The Times. It doesn’t appear in the NYT Games app, but it does in The Athletic’s own app. Or you can play it for free online.


Read more: NYT Connections: Sports Edition Puzzle Comes Out of Beta

Hints for today’s Connections: Sports Edition groups

Here are four hints for the groupings in today’s Connections: Sports Edition puzzle, ranked from the easiest yellow group to the tough (and sometimes bizarre) purple group.

Yellow group hint: Not on top.


Green group hint: Hoops teams.

Blue group hint: Great netminders.

Purple group hint: Good group for gamers.

Answers for today’s Connections: Sports Edition groups

Yellow group: In the lowest position.


Green group: NBA teams, on scoreboards.

Blue group: Hall of Fame hockey goaltenders.

Purple group: Baseball video games.

Read more: Wordle Cheat Sheet: Here Are the Most Popular Letters Used in English Words


What are today’s Connections: Sports Edition answers?


The completed NYT Connections: Sports Edition puzzle for April 25, 2026.

NYT/Screenshot by CNET

The yellow words in today’s Connections

The theme is in the lowest position. The four answers are bottom, cellar, last and worst.

The green words in today’s Connections

The theme is NBA teams, on scoreboards. The four answers are DEN, OKC, SAC and WAS.


The blue words in today’s Connections

The theme is Hall of Fame hockey goaltenders. The four answers are Brodeur, Fuhr, Parent and Roy.

The purple words in today’s Connections

The theme is baseball video games. The four answers are Backyard, High Heat, Slugfest and The Show.


Why are top university websites serving porn? It comes down to shoddy housekeeping.


With that, they have now hijacked that university’s subdomain. Given the reputations universities have, the hijacked pages then rise to the top of Google’s search results.

Shakhov wrote:

The root cause is simple: organizations create DNS records and never clean them up. There is no expiry date on a CNAME record. Nobody gets an alert when the target stops responding. And most university IT departments don’t maintain a comprehensive inventory of their subdomains and where they point.

This is compounded by how universities operate—they are highly decentralized. Individual departments, labs, research groups, and student organizations can often request subdomains independently. When people leave, there is no decommissioning process for the DNS records they created.

Finding hijacked subdomains is straightforward. People need only enter site:[university].edu “xxx” or site:[university].edu “porn” for an affected institution, and scores of results will appear. In some cases, the subdomains returned no longer lead to porn sites, but as of Friday morning, many still did.


The lesson here is clear: Any organization with a website should compile a running inventory of all subdomains along with the purpose of each one and its corresponding CNAME record. Then staff should regularly audit the list in search of “dangling” records, meaning those that remain even after the official subdomain has gone dark. Any subdomain found to be inactive should have its CNAME removed.
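A first pass at that audit can be automated. The sketch below, which assumes the dnspython library, flags subdomains whose CNAME target no longer resolves; treat it as a starting point only, since a target that still resolves can also be hijackable if the underlying hosting account is unclaimed.

```python
import dns.resolver  # pip install dnspython

def find_dangling_cnames(subdomains):
    """Return (subdomain, target) pairs whose CNAME target no longer resolves."""
    dangling = []
    for name in subdomains:
        try:
            answer = dns.resolver.resolve(name, "CNAME")
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN, dns.resolver.NoNameservers):
            continue  # no CNAME record at this name
        target = str(answer[0].target).rstrip(".")
        try:
            dns.resolver.resolve(target, "A")
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN, dns.resolver.NoNameservers):
            dangling.append((name, target))
    return dangling

# Example with made-up subdomains; feed in the real inventory for an actual audit.
print(find_dangling_cnames(["labs.example.edu", "events.example.edu"]))
```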

Clearly, many universities and other organizations are flouting this common-sense practice. Shakhov said only a handful of the affected universities have expunged dangling CNAME records since he went public with his findings earlier this month. Even then, several of them have failed to get the URLs delisted by Google, so the indexed pages remain visible in search results. Inquiries sent to UC Berkeley, Columbia, and Washington University didn’t receive responses before publication.


Android wants to replace email verification codes with one-tap credentials



Google is working on a more streamlined way for app developers to authenticate users. The company has introduced a new verified email credential issued directly through Android’s Credential Manager API, with the goal of modernizing the authentication process. Users will no longer need to check their inbox for temporary authentication…

RUN Powered by ADP software review: Simple and streamlined for small businesses



ADP is one of the largest providers of payroll, HR, and tax services in the business world, but its products are more often associated with larger enterprises – so RUN powered by ADP is a refreshing change of pace.

It’s a payroll and HR platform specifically designed for smaller businesses with fewer than 50 employees. We’ve reviewed all the best HR software, with this particular service built to make potentially complex functions faster, easier, and more reliable, so the people in charge of small businesses can concentrate on the work they really want to be doing.


Hyundai IONIQ V Debuts at Beijing Auto Show, Sports Futuristic Wedge Design


Hyundai just revealed its IONIQ V sedan at the Beijing Auto Show, and the new vehicle shares the elegant design of the Venus concept that inspired it. The engineers and designers stayed fairly close to the wild lines that made the concept so appealing, releasing an electric machine designed from the ground up with Chinese buyers in mind.



The IONIQ V’s design features a single flowing curve that extends from nose to tail, with no bumps in between. The frameless doors blend seamlessly into the body, and the side mirrors appear to hover above the fenders. The headlights protrude from the front like the two parts of the Hyundai logo, and a corresponding light bar spans the entire width of the rear, just above the sleek tiny tail. The overall style is low and wedge-like, paying homage to vintage 80s designs while remaining undeniably modern.


The IONIQ V measures just under 193 inches long, 74.4 inches wide, and has a wheelbase of 114.2 inches. That all adds up to plenty of space inside for everyone, with front passengers having 42.4 inches of legroom and rear passengers having 40.1 inches, which is among the finest in its class for a car of this size. The IONIQ V is nearly as long as a Sonata, but thanks to its all-electric construction, it rides lower and glides more smoothly.



An 800-volt electrical system underpins the car, enabling speedy charging. CATL is providing the batteries, which together deliver more than 600 kilometers of range on China’s CLTC test cycle, or approximately 373 miles under local conditions. Of course, real-world range would likely be slightly lower, but the layout appears to have been designed with long-distance comfort in mind for China’s congested highways.

Hyundai IONIQ V Interior
Inside the driver’s cockpit, there’s a very clean dashboard with a single large 27-inch screen operating at 4K resolution. The steering wheel contains the only physical controls, while the typical gauges are located on the horizon-style head-up display. The software is powered by Qualcomm’s Snapdragon 8295 chip, and Hyundai’s onboard AI voice assistant is ready to listen for natural voice requests to modify music, navigation, climate, or seat settings. Orange and blue interior tones add a touch of modest color without overwhelming the serene and peaceful cabin.

Hyundai executives characterize their strategy as producing cars in China for Chinese drivers before exporting the best parts to other markets. The IONIQ V is already hinting at several new stylistic elements that could make their way into future Hyundais all over the world, whether in next-generation crossovers or sedans. You can get in on the fun later this year, and all of the cars will come with specialized staff to assist with servicing, as well as a one-price policy aimed at making purchasing much easier.

Want a premium Pixel for less? The Pixel 10 Pro XL is now under $850


The jump from a good smartphone camera to a genuinely great one comes down to how the hardware and software work together, and no manufacturer has pushed that integration further than Google has with the Pixel 10 Pro XL.

The Google Pixel 10 Pro XL is a fantastic phone, and with $250 knocked off the asking price, it’s now available for $849 — a figure that starts to look very reasonable once you understand what the hardware is actually capable of.


Right now you can buy a Google Pixel 10 Pro XL for under $850

The Google Pixel 10 Pro XL is the best premium Pixel phone you can buy right now, and at $849, that case has never been easier to make.


The headline camera number is 100x Pro Res Zoom, powered by a combination of the upgraded telephoto lens and Google’s AI imaging pipeline, which means the kind of reach that used to require a dedicated camera is now sitting in your pocket at a fraction of the cost.


The Pixel 10 Pro XL‘s camera system is built on top of the Google Tensor G5 chip, which Google describes as the biggest chip upgrade in the Pixel lineup yet, with an improved TPU and CPU designed specifically to handle the kind of on-device AI processing that makes features like Pro Res Zoom and real-time video stabilisation possible.

Gemini Live also adds another layer to the camera experience — point the camera at something you’re curious about and have a natural spoken conversation about what you’re seeing, whether that’s an exhibit at a museum or a dish on a menu you can’t read.


The 6.8-inch Super Actua OLED display reaches 3,300 nits of peak brightness, which puts it ahead of the standard Pixel 10‘s Actua panel and makes outdoor visibility a genuine strength rather than a tolerated limitation, even in direct sunlight.


Build quality comes from durable aluminium and Corning Gorilla Glass Victus 2, and the phone is water resistant, so the hardware matches the ambition of everything running on top of it.

Seven years of software and security updates, 256GB of storage, and a 5,200mAh battery rated for 24-plus hours cement the Google Pixel 10 Pro XL as the best premium Pixel phone you can buy right now, and at $849 with $250 off, that case has never been easier to make.


4 easy ways to stay on top of cybersecurity in the workplace


As cyberthreats advance, so too must workforce cyber defences to avoid making what are often preventable and costly mistakes.

Cybersecurity measures in the workplace never grind to a halt: employees and employers must always strive to ensure that their skills and systems are at least as advanced as those wielded by people with malicious intent, if not more so.

A lot of cybersecurity is arguably common sense – don’t click suspicious links, don’t share sensitive information and so on – but it doesn’t hurt to have a refresher course now and then to keep it all fresh in the mind. To that point, here are some of the most helpful tips to follow if you want to improve or maintain your company’s cybersecurity efforts. 

Silo your systems

This one is specifically for anyone who works from home. It goes without saying that we feel comfortable in our own homes and have tried and tested ways of doing things. But there is such a thing as being too comfortable, and employees may forget that their personal systems should never overlap with the organisation’s.


If you are using company software, keep all activity tied to the workplace. That is to say, don’t download anything not approved by the organisation, or anything you are using in a personal capacity. 

Furthermore, if you move around and work between locations – for example at home, a cafe, a work hub – do your due diligence first and ensure that the network you are using is secure. This can be easier said than done, as using public Wi-Fi in general can be risky. With that in mind, shared office spaces and hubs tend to be a more secure option. If you are using what could be a potentially non-secure network in a public place, always use a VPN as an added layer of protection. 

Get AI ready 

Advancements in technology unfortunately bring risk. AI has unlimited potential and it is certainly the way forward for a lot of organisations looking to advance, scale and grow, but as we have seen recently, it also presents significant risk, as threat actors can use it to launch highly sophisticated scams. The companies and employees that are serious about avoiding and navigating threats are the ones that will adopt AI upskilling as a core aspect of the organisation – not just as a once-a-year box-ticking exercise.

Useful skills to consider include an understanding of AI and ML models, data science for cyber defence, AI-specific threats and broad digital literacy. You can’t defend against a threat that you don’t understand and if your organisation has knowledge gaps, then you are automatically in a weak position. So make sure everyone on a system understands the ins and outs of how it works and how to keep it secure. 


Simple simply isn’t good enough

We have all picked a password because it was simple and easy to remember, making our own lives simpler and easier, in theory. But when you choose an obvious password, or take shortcuts online, it can expose you to malicious people who can easily bypass the protections you put in place. 

That doesn’t mean that every password has to be 80 characters long, or so obscure that you yourself can’t recall it without physically writing it down. But it should be something with a diverse set of characters, that someone else couldn’t guess. For example, avoid using easily obtained information like the names of pets, loved ones, birthdays or other significant dates. Implementing multifactor authentication adds another critical layer and biometric verification tools, such as fingerprint or facial recognition software, can also be useful. 
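For anyone who prefers not to invent passwords by hand, a few lines of Python using the standard secrets module will generate strings with a suitably diverse character set; this is an illustration of the principle rather than a recommended policy, and a reputable password manager achieves the same with less effort.

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    # Draw from letters, digits, and punctuation so the result is hard to guess.
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```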

Stay current

It is important to note that all of the above is effectively useless if you are operating on a system that is old or not updated frequently. In the same way that innovators are constantly coming up with new ways to enhance a system, bad actors are constantly coming up with ways to break and exploit weaknesses in systems. If you don’t regularly update your devices, then you are basically holding a door open for threat actors and welcoming them in.

If your approved system or device is due an update or if there is a trustworthy patch to be applied, don’t put it on the long finger. The longer you leave an update the more vulnerable you leave yourself, your co-workers and the organisation. So, don’t leave it on the to-do list for too long. 


When it comes down to it, cybersecurity measures are arguably the most important policies in place at a company. When they are breached or weakened, whether accidentally or on purpose, no one in an organisation escapes the impact. Externally, a breach also places the consumers and partners of a business at risk, especially financially, or where that business deals with complex or sensitive information.

So we all have to do our bit to ensure practical, robust and consistent cybersecurity engagement.


Meta signs multibillion-dollar deal to use Amazon’s Graviton chips for agentic AI


An AWS Graviton chip. (Amazon Photo)

Facebook parent Meta signed a deal to use Amazon’s Graviton chips for agentic AI, the latest indication of growing demand for the tech giant’s silicon business.

Bloomberg reports that the deal is worth billions of dollars over multiple years. It comes one day after Meta said it would lay off roughly 10% of its workforce, or about 8,000 employees, as companies across the industry cut headcount while pouring billions into AI infrastructure.

The deal gives Meta access to tens of millions of Graviton5 processor cores, running in AWS data centers, making Meta one of the largest Graviton customers in the world, the companies said. It builds on Meta’s existing use of Amazon Bedrock, the company’s platform for AI models.

Amazon CEO Andy Jassy said in a LinkedIn post that agentic AI is “becoming almost as big a CPU story as a GPU story.” In other words, while graphics processing units (mostly from Nvidia) have dominated the AI hardware conversation, agentic systems need traditional central processing units to handle the reasoning and coordination that happens between steps.

The deal comes just after Intel reported a big quarter, with data center revenue up 22%, driven in part by surging demand for CPUs for agentic AI workloads — the same trend Amazon is riding with Graviton. Intel stock is up more than 22%; Amazon stock rose 3% after the Meta news.


Meta has taken a broad approach, signing deals with Nvidia and AMD, recently agreeing to use Google’s custom processors, and developing its own in-house silicon with Broadcom.

“As we scale the infrastructure behind Meta’s AI ambitions, diversifying our compute sources is a strategic imperative,” said Santosh Janardhan, head of infrastructure at Meta, in a release.

Amazon is establishing itself as a major chipmaker in its own right. CEO Andy Jassy disclosed in his annual shareholder letter that Amazon’s custom silicon business is generating more than $20 billion a year in revenue, saying it’s “quite possible” Amazon will sell racks of its chips to third parties in the future. That would mean competing more directly with Nvidia.

Its roster of chip customers is growing. Anthropic committed to running its models on Amazon’s Trainium processors as part of a $25 billion expanded partnership announced this week, and OpenAI agreed to use Trainium as part of a $100 billion cloud deal earlier this year.


X-energy stock pops 27% on first day of trading following upsized IPO


X-energy’s stock popped today in its debut on the Nasdaq, opening at $30.11 before closing at $29.20, up 27% over its initial public offering of $23 per share.

Investors can’t get enough nuclear power, apparently. Even the initial share price had been revised upward from the $16 to $19 target floated by the company during its investor roadshow. At close, the company was valued at $11.5 billion.

Just five years ago, such interest in a nuclear startup would have come as a surprise to many. 

Back then, the nuclear industry was haunted by delayed projects and massive cost overruns at recently completed reactors. Two power plants were completed in Georgia — one in the late 2010s and another in the early 2020s. In total, they cost around $30 billion to build.


Nuclear startups in the early 2020s were in their infancy, and at least one frontrunner had run into significant regulatory problems, sparking fears that the industry hadn’t been able to put its past behind it.

Now, investors appear optimistic that X-energy and its peers have figured out a way around the challenges.

Much of the momentum can be traced to the AI-driven data center boom. GPUs need tremendous amounts of electricity, and while solar, wind, batteries, and natural gas have been filling the need today, tech companies have been hoping to diversify. Nuclear power is one of the many options they’ve been exploring, hoping that the compact form factor will be an ideal fit for their sprawling data centers.

Nuclear power has long had more potential to power the U.S. grid than it has been able to deliver. Today, about 18% of electricity in the country comes from nuclear power. But reactor costs have risen in recent decades. Nuclear power might be one of the most reliable sources of electricity in the U.S., but it’s also one of the most expensive.


X-energy’s 80-megawatt reactor design is an order of magnitude smaller than many existing nuclear power plants. The company is betting that modularity can help bring costs down, and data center operators are hoping that a single campus can be powered by a fleet of reactors, providing the sort of redundancy and stability they prize. Amazon has said it will buy up to 5 gigawatts’ worth of capacity from X-energy over the next decade or so, but chemical maker Dow will receive the startup’s first power plant. 

Construction is underway at X-energy’s fuel facility, and while the company has yet to start construction of a power plant, investors appear bullish that the company will be able to break nuclear power free from its decades-long malaise. 


Tech Moves: Expedia names CFO; former Tune CEO leaves Roku; Amazon and Microsoft departures


Derek Andersen. (Expedia Photo)

Expedia Group named Derek Andersen as its new chief financial officer starting May 11. He succeeds Scott Schenkel, who is stepping down after more than two years in the role.

The Seattle-based travel giant hired Andersen from Snap, the company behind Snapchat, where he served as CFO for more than seven years. Before that, he was Amazon’s vice president of finance, overseeing its global suite of video businesses, including Prime Video and Amazon Studios.

Andersen said he’s looking forward to returning to Seattle to join Expedia.

“The company has built strong assets, from its technology and consumer brands to one of the largest B2B businesses in the industry and is well positioned to shape the future of travel,” he said in a statement.

In announcing the news, Expedia CEO Ariane Gorin praised Andersen’s fit for the CFO role and thanked Schenkel, who previously served as CFO and interim CEO at eBay, for his impact.

Peter Hamilton. (LinkedIn Photo)

Peter Hamilton has left Roku, where he served as head of ad innovation for the streaming platform for more than four years. The Seattle-based executive was previously CEO of Tune, a mobile marketing startup, for more than a decade.

“I came to Roku to see what it feels like to turn on innovation at massive scale, and I left better understanding the village of people that make it all possible,” he said in a LinkedIn post.

Hamilton added that he’ll once again step into a CEO role, but did not say at which company.

Ann Johnson. (LinkedIn Photo)

Ann Johnson is leaving Microsoft after more than a decade, most recently serving as corporate vice president and executive security advisor. On May 4, the Seattle-based leader will become executive VP of Security (Identity & Fraud) Services at Mastercard.

In a Q&A posted by Mastercard, Johnson described her work in cybersecurity as “purpose-driven” and said she was eager to join the company. In her new role, she’ll work to help secure commerce and financial transactions — “such an important part of the ecosystem,” she said.

Microsoft’s Annie Pearl has a new role as CVP of the Copilot product for Microsoft AI. Pearl previously led product, engineering and design for Azure Experiences and has been with the company for more than three years. She is based in San Francisco.


— After more than 20 years with Amazon, Vidya Shastri left the company as a director of software development and is taking a career break. Shastri departed last year, but this week shared a lengthy post on Substack reflecting on his decades at the tech giant.

Shastri said on LinkedIn that he had two mantras at Amazon: people first, then product, and nothing matters more than trust. “It’s the ‘virtuous cycle of trust’ that makes teams and organizations great, even in this era of downsizing and AI,” he said.

Judd Lee is now chief financial officer for Safe Software, a Surrey, B.C., data software company. Lee joins from BrightEdge, and was previously CFO at Seattle’s RealNetworks, Parallels and SignalSense. Safe Software also named Vanessa Ribreau as chief people officer.

Jake Oster, Amazon’s former director of energy, environment and sustainability policy, is now VP of sustainability policy and community relations for Oracle.


“At Oracle Cloud Infrastructure, each AI data center is designed with the surrounding community’s future in mind and we are making investments in job creation, water infrastructure, and new sources of energy generation,” he said on LinkedIn, adding that he will remain in Seattle.

Gen. James Rainey, a retired four-star general from the U.S. Army, was welcomed as an advisor to Overland AI. The Seattle-based startup in February raised $100 million to meet demand for its autonomous ground vehicles used by the U.S. military.

— Former Starbucks CEO Howard Schultz is joining the board of directors for Gopuff, a Philadelphia-based delivery app offering snacks and everyday essentials.

Jonathan Bricker, a Fred Hutch Cancer Center public health scientist, was awarded the institution’s Endowed Chair in Cancer Prevention. Bricker has helped develop tools to reduce cancer risk, including the AI-powered QuitBot app.


Pacific Northwest National Laboratory announced leadership changes:

  • William “Bill” Pike is now deputy director for science and technology, previously having served as chief science and technology officer for PNNL’s National Security Directorate.
  • Angela Becker-Dippmann is associate laboratory director for the Energy and Environment Directorate, having previously worked as director of EED’s Program Development Office.
  • Amy Schmidt is executive director and chief HR officer, transitioning from the role of head of talent management.

And in case you missed it, GeekWire reported some big tech moves earlier this week.

Amazon made two notable promotions:

  • AWS infrastructure chief Prasad Kalyanaraman joined the company’s S-team leadership group.
  • Cloud computing and AI services leader Dave Brown was promoted to senior vice president.

LinkedIn announced two promotions as well:

  • Daniel Shapero, the company’s chief operating officer since 2021, is now CEO.
  • Mohak Shroff, LinkedIn’s longtime engineering leader, has taken the new role of president of platforms and digital work.
