The long-rumored iPhone Fold, or maybe the iPhone Ultra, should arrive in fall 2026. Here’s what the rumor mill says about Apple’s first foldable iPhone.
A render of what the iPhone Fold could look like
While the rest of the smartphone industry has embraced foldable smartphones, Apple has so far held back from launching its own. However, the rumor mill certainly believes that one model will eventually come out of Cupertino, and that 2026 could be the year it finally does. With high expectations, the model referred to as the iPhone Fold is anticipated to be a big launch for the company. That launch could be just half a year away.
Kyber, first observed in circulation as early as September, takes its name from the alternate designation of ML-KEM (Module-Lattice-based Key Encapsulation Mechanism). The algorithm, standardized by the National Institute of Standards and Technology, is part of a broader effort to prepare encryption systems for a future in which quantum computers…
A new financially motivated hacking group tracked as BlackFile has been linked to a wave of data theft and extortion attacks against retail and hospitality organizations since February 2026.
The group, also tracked as CL-CRI-1116, UNC6671, and Cordial Spider, is impersonating corporate IT helpdesk staff to steal employee credentials and demand seven-figure ransoms, according to information shared by cybersecurity firm Palo Alto Networks’ Unit 42 with the Retail & Hospitality Information Sharing and Analysis Center (RH-ISAC).
Unit 42 security researchers have also linked BlackFile with moderate confidence to “The Com,” a loose-knit network of English-speaking cybercriminals known for targeting and recruiting young people for extortion, violence, and the production of child sexual exploitation material (CSAM).
In a Thursday report, RH-ISAC said that the group’s attacks begin with phone calls to employees from spoofed numbers, in which the threat actors pose as IT support to lure staff to fake corporate login pages that ask them to enter their credentials and one-time passcodes.
“The attackers behind CL-CRI-1116 use voice-based phishing (vishing) from spoofed Voice over Internet Protocol (VoIP) numbers or fraudulent Caller ID Names (CNAM) as a social engineering technique, typically posing as IT support staff,” RH-ISAC said.
“We can confirm that we are seeing a significant increase in Blackfile matters and that TTPs appear to be very similar to such groups as ShinyHunters and SLSH and similar copycats employing vishing/social engineering data exploit tactics,” CyberSteward founder and CEO Jason S.T. Kotler also told BleepingComputer.
Using stolen credentials, the BlackFile attackers register their own devices to bypass multifactor authentication, then escalate access to executive-level accounts by scraping internal employee directories.
BlackFile steals data from victims’ Salesforce and SharePoint servers using standard API functions, searching specifically for files containing terms such as “confidential” and “SSN.”
The exfiltrated documents are downloaded to attacker-controlled servers and published to the gang’s dark web data leak site before victims are contacted with ransom demands via compromised employee email accounts or randomly generated Gmail addresses.
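The report says the attackers search for files containing terms such as “confidential” and “SSN.” Defenders can turn the same keyword heuristic against them by scanning download logs for bulk access to sensitively named files. The sketch below is purely illustrative: the log format, file names, and term list are assumptions, not anything from Unit 42’s report.

```python
# Illustrative defender-side sketch: flag downloads of files whose names
# match the same sensitive keywords the attackers reportedly search for.
# Log format and file names here are hypothetical.
SENSITIVE_TERMS = ("confidential", "ssn", "payroll", "passport")

def flag_sensitive_downloads(download_log):
    """Return log entries whose file name matches a sensitive keyword."""
    flagged = []
    for entry in download_log:  # each entry: {"user": ..., "file": ...}
        name = entry["file"].lower()
        if any(term in name for term in SENSITIVE_TERMS):
            flagged.append(entry)
    return flagged

log = [
    {"user": "jdoe", "file": "Q3_Confidential_Board_Report.pdf"},
    {"user": "jdoe", "file": "team_lunch_menu.docx"},
    {"user": "svc_sync", "file": "employee_SSN_export.csv"},
]
print(flag_sensitive_downloads(log))  # flags the first and third entries
```

In practice this heuristic would be combined with volume and session-anomaly thresholds, since keyword matches alone generate noise in large organizations.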
BlackFile data leak site (RH-ISAC)
“By leveraging Salesforce API access and standard SharePoint download functions, the attackers move large volumes of data – including CSV datasets of employee phone numbers and confidential business reports – to attacker-controlled infrastructure,” RH-ISAC added.
“This is often done under the guise of legitimate SSO-authenticated sessions to avoid triggering simple user-agent alerts.”
Employees of compromised companies (including senior executives) have also been targets of swatting attempts, in which false emergency reports are made to send armed responders to a victim’s address. Attackers often use this tactic to exert additional pressure on their victims.
Mandiant also told BleepingComputer that they are actively responding to several vishing incidents that led to data theft and extortion, including one that used a BlackFile victim-shaming site that is now offline.
To reduce the success rate of BlackFile’s attacks, RH-ISAC recommends that organizations strengthen their call-handling policies, enforce multifactor identity verification for callers, and conduct simulation-based social engineering training for frontline staff.
Overpriced Mac minis are flooding eBay amid shortages of the sold-out machines, which have become a favored tool for running on-device AI models like OpenClaw.
This week, reports indicated that the $599 M4 Mac mini base model with 16GB RAM and 256GB of storage is sold out on Apple’s retail website, with no options for delivery or in-store pickup. The shortages have since extended to other configurations of the base model, regardless of the amount of memory selected. This is the first time the base model has been sold out, some outlets noted. Meanwhile, models with higher storage (512GB and up) are only available to ship starting in June.
As a result, eBay has become a secondary market for these in-demand computers. On the site, various configurations of the M4 Mac mini are available for sale at higher prices than if buying direct from Apple, which is no longer an option.
Apple’s power-efficient Mac minis have become popular devices for testing and running at-home, on-device AI models, a trend that began with the OpenClaw craze and now extends to OpenClaw alternatives like ZeroClaw, AI tools from Anthropic and OpenAI, Perplexity Computer, and other specialized local models. Unlike some PCs, Mac minis also run quietly and tend to be more reliable for 24/7 use than laptops.
The shortage of the devices also comes alongside an industry-wide memory crunch and plans for a Mac mini refresh, according to Bloomberg. However, refreshes of product lines haven’t led to shortages before.
Apple did not immediately respond to a request for comment.
This perfect storm of supply chain stress and increased demand for AI-friendly machines has inflated the prices of used consumer electronics.
As of Friday morning, M4 base models with the 16GB RAM/256GB SSD configuration were selling at markups like $715-$795 for a new, “open box” model, and as high as $979 for an “excellent” refurbished version. Some “lightly used, pre-owned” Mac minis with this configuration were selling for around $700 — more than $100 more than the price of a new base model.
Image Credits: eBay (screenshot)
There was also a single listing for a $925 brand-new M4 Mac mini with the same 16GB RAM and 256GB storage; the listing warned in bright red text: “Last one.”
Image Credits: eBay (screenshot)
While you still may be able to score a reasonably priced refurb if you keep a close eye out (or if you win an eBay auction where the bid has started at a lower price point), it seems that the demand for the device is going to keep prices up until Apple’s supply chain refreshes.
And now that the Mac mini is unavailable, Apple has begun to see increased demand for the Mac Studio, too. That computer is also now sold out across several configurations.
As Ars Technica pointed out, you can still get a MacBook Pro with 128GB RAM and larger SSDs within a few weeks, and even the new and popular MacBook Neo is still shipping within two to three weeks. This suggests the real issue is consumer demand for the Mac mini itself.
When you purchase through links in our articles, we may earn a small commission. This doesn’t affect our editorial independence.
Erdoğan has 15 days to sign the bill into law. The legislation enters into force six months after publication in the Official Gazette. The main opposition CHP criticised it as a political censorship tool rather than child protection.
Turkey has previously blocked Instagram and Roblox, and restricted platforms during the İmamoğlu protests.
Turkey’s Grand National Assembly passed a law late on Wednesday banning social media for children under 15, making the country the latest, and one of the largest by population, to introduce legislative age restrictions on social media access.
Under the law, social media companies including YouTube, TikTok, Facebook, and Instagram will be required to implement age verification systems, block under-15s from creating accounts, and provide parental control tools to manage the accounts of 15-to-17-year-olds.
President Recep Tayyip Erdoğan has 15 days to sign the bill. If signed, it will enter into force six months after publication in the Official Gazette. Online gaming companies must also appoint a Turkey-based representative to ensure compliance.
The immediate political catalyst is the Kahramanön school shooting on 14 April 2026, in which a 14-year-old boy killed nine students and a teacher at a middle school in Kahramanön in southern Turkey before dying himself. Police subsequently arrested 162 people accused of sharing footage of the attack online. Investigators are examining the perpetrator’s online activity for clues to his motivation.
Erdoğan made the political link explicit in a televised address on Monday: “We are living in a period where some digital sharing applications are corrupting our children’s minds, and social media platforms have, to put it bluntly, become cesspools.”
The parliamentary commission that proposed the law framed it in a report titled “Threats and Risks Awaiting Our Children in Digital Media.”
The law’s operational mechanics carry significant compliance demands for platforms. Companies with more than 10 million daily users in Turkey, a threshold that covers all major platforms, must remove content deemed harmful within one hour of notification in an emergency.
Foreign services with more than 100,000 daily users must maintain a local representative. Enforcement is through Turkey’s communications watchdog, the BTK.
Penalties escalate from advertising bans to access speed restrictions that effectively throttle platform performance, and ultimately to outright access bans. The speed restriction mechanism is the same tool Turkey has used in previous enforcement actions against platforms that declined to comply with content removal orders.
Running parallel to the under-15 law is a second legislative initiative that is editorially more significant for digital rights. The Turkish government has separately reached an agreement with social media companies requiring that all Turkish citizens, not just minors, verify their identity to use social media accounts.
The precise mechanism of this identity verification system has not been disclosed, and how platforms will technically implement it is still unknown. Merve Gürlek, the Turkish official who announced the agreement, made the announcement on 3 April; further details of the legal framework are still being drafted.
The end of social media anonymity for all Turkish users represents a categorically different kind of intervention than the under-15 age restriction and carries obvious implications for political speech.
The opposition Republican People’s Party, the CHP, Turkey’s main secular opposition, voted against the bill, arguing that children should be protected “not with bans but with rights-based policies.”
This is a standard liberal critique of age-based social media bans, but in the Turkish context it carries additional weight given the government’s documented record of using platform restrictions for political purposes.
Online communications were widely restricted during 2025 protests in support of Istanbul’s jailed opposition mayor Ekrem İmamoğlu. Instagram was blocked in 2024 following a dispute over Hamas-related content.
Roblox was banned, with Turkish officials citing inappropriate sexual content and, separately, what an official described as the “promotion of homosexuality.”
The law passed by parliament is not in itself a tool of censorship, but it expands and formalises the regulatory infrastructure through which the government controls what Turks can access online.
Turkey’s law joins a rapidly expanding international landscape of social media age restrictions. Australia’s ban for under-16s came into force in December 2025.
Norway announced on Friday that it plans to legislate a ban for under-16s by the end of 2026. Indonesia has implemented restrictions on under-16 access to platforms that expose minors to pornography, cyberbullying, and addiction. France has age verification requirements for social media.
The UK’s Online Safety Act imposes harm prevention duties on platforms. Turkey’s approach is distinctive in two ways: it pairs the child protection measure with a universal identity verification requirement that no comparable democracy has yet implemented, and it introduces it in a political environment where the infrastructure for platform restriction has already been deployed against political opposition.
Whether the legislation functions primarily as child protection or primarily as a new layer of state control over digital speech will depend largely on how the BTK applies its enforcement powers in the years ahead.
In an effort to simplify the connections on modern TVs, companies like LG and Samsung are currently offering an option on select TV models to move all of the source connections from the TV itself to a separate box located somewhere near the TV (generally, up to about 30 feet away). With this wireless connectivity, the only connection required for the TV itself is a power cord. This makes installation much simpler, whether you’re mounting your TV on a wall or on a stand out in the middle of an open floor plan living room.
LG’s Zero Connect Box allows owners to move all of a TV’s wired connectivity to a separate box up to 30 feet away from the TV itself.
Early implementations of this technology required a line-of-sight connection between the box and the TV, which isn’t always convenient or practical. The high bandwidth required to transmit a full 4K video signal with HDR and lossless immersive multi-channel audio can make the signal susceptible to dropouts in the form of audio and video glitches.
Samsung offers a wireless connection box called the “Wireless One Connect Box” on its Frame Pro TV and select Micro RGB TV models as well as an optional add-on for its S95H OLED TV. LG offers a wireless connection box called the “Zero Connect Box” in its W6 Wallpaper OLED TV as well as its MRGB9M Mini RGB LED/LCD TV.
But while Samsung’s assurance of wireless video performance amounts to “Trust Me, Bro. It’s Fine” (and it may very well be), LG has actually attained third-party certification for its wireless audio and video connectivity.
By moving all of the inputs and outputs to a separate wireless connection box, LG’s W6 Wallpaper OLED TV can get much thinner than a traditional flat panel TV.
This week, LG announced that its wireless Zero Connect Box has attained “True Wireless Lossless Vision” certification from TÜV Rheinland. This certification verifies that the specified models deliver “visually lossless 4K picture quality” at frame rates up to 165 Hz. Certified models include LG’s W6 OLED Wallpaper TV and MRGB9M Mini RGB LED/LCD TV.
According to LG, the certification verifies that LG wireless TVs preserve color accuracy, image detail and HDR tone reproduction during wireless video transmission. For this verification, TÜV Rheinland established a dedicated test standard covering key factors in the delivery of high-quality visual performance, such as input lag, color accuracy and gamma tracking.
TÜV Rheinland is a leading global independent testing, inspection, and certification organization headquartered in Cologne, Germany.
Following the required evaluations, TÜV Rheinland confirmed that LG’s wireless TVs maintain accurate color reproduction, fine image detail and precise HDR tone performance within defined tolerance levels (relative to the input signal), qualifying them as “visually lossless” under international test standards. For consumers, this means cinematic films, live sports and next‑generation gaming can be enjoyed wirelessly with the same confidence in picture quality previously reserved for hard‑wired TV setups.
“True Wireless Lossless Vision certification confirms that our premium TVs can deliver award-winning picture quality and wireless freedom at the same time,” said Park Hyoung‑sei, president of the LG Media Entertainment Solution Company. “This achievement reflects LG’s long-standing commitment to enhancing the viewing experience and to elevating everyday living.”
As of April 2026, the LG W6 OLED TV is available to pre-order in the U.S. The MRGB9M is expected to be available later this year.
The Bottom Line
While wireless connection of source devices to a TV makes installation far less complex, some consumers may worry that doing so will impact the audio and video performance. With third-party certification, buyers of LG’s wireless-enabled TVs can rest assured that eliminating all these extra wires won’t mean sacrificing the picture and sound quality they’re paying for.
If you have ever stared at a chaotic wall of Spotify playlists on your phone and wished you could just organize them into folders, we have good news. Spotify is finally rolling out playlist folders to its iOS and Android apps.
Playlist folders have existed on Spotify’s desktop app since 2010, making this a 16-year wait for mobile users. A Redditor was the first to flag the feature going live on iOS, with different users on the same thread confirming it had shown up on Android too.
To get started, tap the “+” icon inside your Library, and you will now see a new Folder option in the menu. From there, you can name your folder and start dropping playlists into it.
The feature also lets you play everything inside a folder at once, either in order or on shuffle. That last part is genuinely useful for long commutes or gym sessions where you want variety without manually hopping between playlists.
The timing of this rollout makes sense given how heavily Spotify has been promoting its Prompted playlists recently. Those tend to pile up fast and clutter your library. Folders give you a clean way to manage all of that without heading to a desktop.
Are there any limitations to Spotify’s mobile playlist folders?
We haven’t seen the feature on our Spotify apps yet. According to the discussion on Reddit, you can only move playlists into folders, not entire albums. Custom folder cover art is also not supported yet. And since this is a server-side rollout, not everyone will see the option at the same time. If it has not appeared for you yet, patience is the only fix for now.
Google plans to invest up to $40 billion in Anthropic and support the AI firm’s growing computing needs, Bloomberg reports. The Alphabet subsidiary is committing to invest $10 billion now, at a $350 billion valuation for Anthropic, with another $30 billion to follow if Anthropic hits certain performance targets, according to Anthropic.
The promise of investment comes after Anthropic released its latest model, Mythos, to a limited group of partners this month. Anthropic says that Mythos is the company’s most powerful model to date and has significant cybersecurity applications. Due to potential misuse, Anthropic has restricted broader access while it works with select organizations to evaluate and address those risks — though the model has already fallen into unsanctioned hands. It’s also likely expensive to run at scale.
The AI race is increasingly defined by access to the compute needed to train and deploy these systems. OpenAI has moved aggressively to secure that capacity through a web of multi-hundred-billion-dollar deals across cloud providers, chip suppliers, and energy, including an expanded deal with chipmaker Cerebras this month.
Anthropic has been in a scramble of its own. The company has faced widespread complaints about Claude use limits in recent weeks and responded with a bevy of infrastructure deals. Earlier this month, Anthropic struck a deal with cloud computing provider CoreWeave for data center capacity. It also this week secured an additional $5 billion investment from Amazon, part of a broad agreement under which Anthropic is expected to spend up to $100 billion for around 5 gigawatts of compute capacity over time.
While Google is a direct competitor in AI models, it’s also a key infrastructure supplier to Anthropic. Anthropic relies heavily on Google Cloud for chips and infrastructure, including access to Google’s tensor processing units (or TPUs), which are specialized chips designed for AI workloads and considered among the best alternatives to Nvidia’s in-demand processors.
Anthropic’s relationship with Google predates this week’s news. Earlier this month, Anthropic announced a partnership with Google and chipmaker Broadcom, which designs custom AI chips for Google, to access multiple gigawatts of TPU-based computing capacity beginning in 2027; a subsequent Broadcom securities filing put that figure at 3.5 gigawatts.
The new Google investment expands that arrangement, with Google Cloud now providing a fresh 5 gigawatts of capacity over the next five years, with room to scale further.
Anthropic’s valuation stood at $350 billion as recently as February; investors have since been eager to back the company at $800 billion or more, according to Bloomberg. The company is also reportedly considering an IPO as soon as October.
DeepSeek, the Chinese AI startup spun out of quantitative trading firm High-Flyer Capital Management, became a near-overnight sensation globally in January 2025 with the release of its open-source R1 model, which matched proprietary U.S. giants.
It’s been an epoch in AI since then, and while DeepSeek has released several updates to that model and to its V3 series, the international AI and business community has largely been waiting with bated breath for the follow-up to the R1 moment.
Now it’s arrived with last night’s release of DeepSeek-V4, a 1.6-trillion-parameter Mixture-of-Experts (MoE) model available free under the commercially friendly open-source MIT License, which nears (and on some benchmarks surpasses) the performance of the world’s most advanced closed-source systems at roughly one-sixth the cost via its application programming interface (API).
As Chen noted in his post, “AGI belongs to everyone”. It’s available now on AI code sharing community Hugging Face and through DeepSeek’s API.
Frontier-class AI gets pushed into a lower price band
The most immediate impact of the DeepSeek-V4 launch is economic. DeepSeek’s pricing table shows the company is not pricing its new Pro model at near-zero levels, but it is still pushing high-end model access into a far lower cost tier than the leading U.S. frontier models.
DeepSeek-V4-Pro is priced through its API at $1.74 USD per 1 million input tokens on a cache miss and $3.48 per million output tokens.
DeepSeek V4 API pricing chart. Credit: DeepSeek AI
That puts a simple one-million-input, one-million-output comparison at $5.22. With cached input, the input price drops to $0.145 per million tokens, bringing that same blended comparison down to $3.625.
That is dramatically cheaper than the current premium pricing from OpenAI and Anthropic. GPT-5.5 is priced at $5.00 per million input tokens and $30.00 per million output tokens, for a combined $35.00 in the same simple comparison.
Claude Opus 4.7 is priced at $5.00 input and $25.00 output, for a combined $30.00.
On standard, cache-miss pricing, DeepSeek-V4-Pro comes in at roughly one-seventh the cost of GPT-5.5 and about one-sixth the cost of Claude Opus 4.7.
With cached input, the gap widens: DeepSeek-V4-Pro costs about one-tenth as much as GPT-5.5 and about one-eighth as much as Claude Opus 4.7.
The more extreme near-zero story belongs to DeepSeek-V4-Flash, not the Pro model. Flash is priced at $0.14 per million input tokens on a cache miss and $0.28 per million output tokens, for a combined $0.42.
With cached input, that drops to $0.308. In that case, DeepSeek’s cheaper model is more than 98% below GPT-5.5 and Claude Opus 4.7 in a simple input-plus-output comparison, or nearly 1/100th the cost — though the performance dips significantly.
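The per-model comparisons above all reduce to the same arithmetic: input price times input volume plus output price times output volume. A minimal sketch, using only the cache-miss prices quoted in this article:

```python
# Back-of-the-envelope cost comparison from the per-million-token prices
# quoted in this article (USD, cache-miss rates; "blended" = 1M in + 1M out).
PRICES = {  # (input $/M tokens, output $/M tokens)
    "DeepSeek-V4-Pro":   (1.74, 3.48),
    "DeepSeek-V4-Flash": (0.14, 0.28),
    "GPT-5.5":           (5.00, 30.00),
    "Claude Opus 4.7":   (5.00, 25.00),
}

def blended_cost(model, input_m=1.0, output_m=1.0):
    """Cost of a workload with the given millions of input/output tokens."""
    inp, out = PRICES[model]
    return inp * input_m + out * output_m

for model in PRICES:
    print(f"{model}: ${blended_cost(model):.2f}")
```

Plugging in real workload ratios (most production traffic is input-heavy, not 1:1) shifts the exact multiples, but the ordering of the four models stays the same.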
DeepSeek is compressing advanced model economics into a much lower band, forcing developers and enterprises to revisit the cost-benefit calculation around premium closed models.
For companies running large inference workloads, that price gap can change what is worth automating. Tasks that look too expensive on GPT-5.5 or Claude Opus 4.7 may become economically viable on DeepSeek-V4-Pro, and even more so on DeepSeek-V4-Flash. The launch does not make intelligence free, but it does make the market harder for premium providers to defend on performance alone.
Benchmarking the frontier: DeepSeek-V4-Pro gets close, but GPT-5.5 and Opus 4.7 still lead on most shared tests
DeepSeek-V4-Pro-Max is best understood as a major open-weight leap, not a clean across-the-board defeat of the newest closed frontier systems.
The model’s strongest benchmark claims come from DeepSeek’s own comparison tables, where it is shown against GPT-5.4 xHigh, Claude Opus 4.6 Max and Gemini 3.1 Pro High and bests them on several tests, including Codeforces and Apex Shortlist.
But that is not the same as a head-to-head against OpenAI’s newer GPT-5.5 or Anthropic’s newer Claude Opus 4.7.
Looking only at DeepSeek-V4 versus the latest proprietary models, the picture is more restrained.
On this shared set, GPT-5.5 and Claude Opus 4.7 still lead most categories.
DeepSeek-V4-Pro-Max’s best showing is on BrowseComp, the benchmark measuring agentic AI web-browsing prowess (especially at locating hard-to-find information), where it scores 83.4%, narrowly behind GPT-5.5 at 84.4% and ahead of Claude Opus 4.7 at 79.3%.
On Terminal-Bench 2.0, DeepSeek scores 67.9%, close to Claude Opus 4.7’s 69.4%, but far behind GPT-5.5’s 82.7%.
| Benchmark | DeepSeek-V4-Pro-Max | GPT-5.5 | GPT-5.5 Pro (where shown) | Claude Opus 4.7 | Best result among these |
|---|---|---|---|---|---|
| GPQA Diamond | 90.1% | 93.6% | — | 94.2% | Claude Opus 4.7 |
| Humanity’s Last Exam, no tools | 37.7% | 41.4% | 43.1% | 46.9% | Claude Opus 4.7 |
| Humanity’s Last Exam, with tools | 48.2% | 52.2% | 57.2% | 54.7% | GPT-5.5 Pro |
| Terminal-Bench 2.0 | 67.9% | 82.7% | — | 69.4% | GPT-5.5 |
| SWE-Bench Pro / SWE Pro | 55.4% | 58.6% | — | 64.3% | Claude Opus 4.7 |
| BrowseComp | 83.4% | 84.4% | 90.1% | 79.3% | GPT-5.5 Pro |
| MCP Atlas / MCPAtlas Public | 73.6% | 75.3% | — | 79.1% | Claude Opus 4.7 |
The shared academic-reasoning results favor the closed models: On GPQA Diamond, DeepSeek-V4-Pro-Max scores 90.1%, while GPT-5.5 reaches 93.6% and Claude Opus 4.7 reaches 94.2%.
On Humanity’s Last Exam without tools, DeepSeek scores 37.7%, behind GPT-5.5 at 41.4%, GPT-5.5 Pro at 43.1% and Claude Opus 4.7 at 46.9%. With tools enabled, DeepSeek rises to 48.2%, but still trails GPT-5.5 at 52.2%, GPT-5.5 Pro at 57.2% and Claude Opus 4.7 at 54.7%.
The agentic and software-engineering results are more mixed, but they still show DeepSeek-V4-Pro-Max trailing GPT-5.5 and Opus 4.7.
On Terminal-Bench 2.0, DeepSeek’s 67.9% is competitive with Claude Opus 4.7’s 69.4%, but GPT-5.5 is much higher at 82.7%.
On SWE-Bench Pro, DeepSeek’s 55.4% trails GPT-5.5 at 58.6% and Claude Opus 4.7 at 64.3%. On MCP Atlas, DeepSeek’s 73.6% is slightly behind GPT-5.5 at 75.3% and Claude Opus 4.7 at 79.1%.
BrowseComp is the standout: DeepSeek’s 83.4% beats Claude Opus 4.7’s 79.3% and nearly matches GPT-5.5’s 84.4%, though GPT-5.5 Pro’s 90.1% remains well ahead.
So ultimately, DeepSeek-V4-Pro-Max does not appear to dethrone GPT-5.5 or Claude Opus 4.7 on the benchmarks that can be directly compared across the companies’ published tables. But it gets close enough on several of them — especially BrowseComp, Terminal-Bench 2.0 and MCP Atlas — that its much lower API pricing becomes the headline.
In practical terms, DeepSeek does not need to win every leaderboard row to matter. If it can deliver near-frontier performance on many enterprise-relevant agent and reasoning tasks at roughly one-sixth to one-seventh the standard API cost of GPT-5.5 or Claude Opus 4.7, it still forces a major rethink of the economics of advanced AI deployment.
DeepSeek-V4-Pro-Max is clearly the strongest open-weight model in the field right now, and it is unusually close to frontier closed systems on several practical benchmarks.
While GPT-5.5 and Claude Opus 4.7 still retain the lead in most direct head-to-head comparisons across the companies’ benchmark charts, DeepSeek-V4-Pro gets close while being dramatically cheaper and openly available.
A big jump from DeepSeek V3.2
To understand the magnitude of this release, one must look at the performance gains of the base models. DeepSeek-V4-Pro-Base represents a significant advancement over the previous generation, DeepSeek-V3.2-Base. In World Knowledge, V4-Pro-Base achieved 90.1 on MMLU (5-shot) compared to V3.2’s 87.8, and a massive jump on MMLU-Pro from 65.5 to 73.5.
The improvement in high-level reasoning and verified facts is even more pronounced: on SuperGPQA, V4-Pro-Base reached 53.9 compared to V3.2’s 45.0, and on the FACTS Parametric benchmark, it more than doubled its predecessor’s performance, jumping from 27.1 to 62.6. Simple-QA verified scores also saw a dramatic rise from 28.3 to 55.2.
The Long Context capabilities have also been refined. On LongBench-V2, V4-Pro-Base scored 51.5, significantly outpacing the 40.2 achieved by V3.2-Base. In Code and Math, V4-Pro-Base reached 76.8 on HumanEval (Pass@1), up from 62.8 on V3.2-Base.
These numbers underscore that DeepSeek has not just optimized for inference cost, but has fundamentally improved the intelligence density of its base architecture. The efficiency story is equally compelling for the Flash variant. DeepSeek-V4-Flash-Base, despite using substantially fewer parameters, outperforms the larger V3.2-Base across a wide range of benchmarks, particularly in long-context scenarios.
A new information ‘traffic controller,’ Manifold-Constrained Hyper-Connections (mHC)
The standout technical achievement of V4 is its native one-million-token context window. Historically, maintaining such a large context required massive memory for the key-value (KV) cache.
DeepSeek solved this by introducing a Hybrid Attention Architecture that combines Compressed Sparse Attention (CSA) to reduce initial token dimensionality and Heavily Compressed Attention (HCA) to aggressively compress the memory footprint for long-range dependencies.
In practice, the V4-Pro model requires only 10% of the KV cache and 27% of the single-token inference FLOPs compared to its predecessor, the DeepSeek-V3.2, even when operating at a 1M token context.
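To see why a 10x KV-cache reduction matters at a 1M-token context, it helps to run the standard back-of-the-envelope formula. The layer and head counts below are illustrative assumptions, not DeepSeek’s published architecture; the point is the order of magnitude.

```python
# Generic KV-cache memory estimate (illustrative model dimensions, NOT
# DeepSeek's actual architecture). The cache stores one key vector and one
# value vector per layer per token:
#   bytes = 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem
def kv_cache_gib(n_layers, n_kv_heads, head_dim, seq_len, bytes_per_elem=2):
    total = 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem
    return total / 2**30  # bytes -> GiB

# Hypothetical dense model: 60 layers, 8 KV heads of dim 128, fp16 cache.
full = kv_cache_gib(n_layers=60, n_kv_heads=8, head_dim=128, seq_len=1_000_000)
print(f"uncompressed 1M-token cache: {full:.0f} GiB")
print(f"at 10% of that (per the article's figure): {full * 0.10:.1f} GiB")
```

Even for this modest hypothetical configuration, an uncompressed 1M-token cache runs to hundreds of GiB per sequence, which is why aggressive KV compression is a prerequisite for serving such contexts at all.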
To stabilize a network of 1.6 trillion parameters, DeepSeek moved beyond traditional residual connections. The company’s researchers incorporated Manifold-Constrained Hyper-Connections (mHC) to strengthen signal propagation across layers while preserving the model’s expressivity.
mHC allows an AI to have a much wider flow of information (so it can learn more complex things) without the risk of the model becoming unstable or “breaking” during its training. It’s like giving a city a 10-lane highway but adding a perfect AI traffic controller to ensure no one ever hits the brakes.
This is paired with the Muon optimizer, which allowed the team to achieve faster convergence and greater training stability during the pre-training on more than 32T diverse and high-quality tokens.
This pre-training data was refined to remove low-quality auto-generated content, mitigating the risk of model collapse and prioritizing unique academic sources. The model’s 1.6T parameters utilize a Mixture-of-Experts (MoE) design where only 49B parameters are activated per token, further driving down compute requirements.
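Activating 49B of 1.6T parameters means roughly 3% of the network runs per token. The mechanism behind that is top-k expert routing, sketched below in plain Python; the expert count, k, and router scores are illustrative values, not DeepSeek-V4's actual configuration.

```python
# Minimal sketch of Mixture-of-Experts top-k routing: a router scores all
# experts, but only the k best are activated per token, so most parameters
# stay idle. Expert count and k here are toy values, not DeepSeek-V4's.
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def route_top_k(router_logits, k=2):
    """Pick the k highest-scoring experts and renormalise their gate weights."""
    probs = softmax(router_logits)
    top = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    total = sum(probs[i] for i in top)
    return [(i, probs[i] / total) for i in top]

# One token's router scores over 8 toy experts.
logits = [0.1, 2.0, -1.0, 0.5, 1.5, -0.3, 0.0, 0.2]
chosen = route_top_k(logits, k=2)
print(chosen)  # (expert_index, gate_weight) pairs; weights sum to 1
```

Only the chosen experts' feed-forward weights are loaded and executed for that token, which is how a 1.6T-parameter model can cost roughly as much per token as a dense ~49B one.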
Training the mixture-of-experts (MoE) to work as a whole
DeepSeek-V4 was not simply trained; it was “cultivated” through a unique two-stage paradigm.
First, through Independent Expert Cultivation, domain-specific experts were trained through Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL) using the GRPO (Group Relative Policy Optimization) algorithm. This allowed each expert to master specialized skills like mathematical reasoning or codebase analysis.
Second, Unified Model Consolidation integrated these distinct proficiencies into a single model via on-policy distillation, where the unified model acts as the student learning to optimize reverse KL loss with teacher models. This distillation process ensures that the model preserves the specialized capabilities of each expert while operating as a cohesive whole.
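The reverse KL loss mentioned above has a simple closed form: it penalises the student for placing probability mass where the teacher places very little, which keeps the unified model inside the teacher experts' behaviour. A minimal sketch with toy distributions (the actual objective operates over token vocabularies on-policy):

```python
# Sketch of the reverse-KL objective used in on-policy distillation:
# KL(student || teacher) = sum_i p_s[i] * log(p_s[i] / p_t[i]).
# The distributions below are toy values for illustration only.
import math

def reverse_kl(student, teacher):
    """Reverse KL divergence; mode-seeking, unlike forward KL."""
    return sum(p_s * math.log(p_s / p_t)
               for p_s, p_t in zip(student, teacher) if p_s > 0)

teacher = [0.7, 0.2, 0.1]    # specialised expert (teacher) distribution
student = [0.6, 0.25, 0.15]  # unified model (student) distribution

loss = reverse_kl(student, teacher)
print(f"reverse KL: {loss:.4f}")
```

Minimising reverse KL is "mode-seeking": the student concentrates on behaviours the teacher actually exhibits rather than smearing probability over everything the teacher might do, which is why it suits consolidating several specialist teachers into one model.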
The model’s reasoning capabilities are further segmented into three increasing “effort” modes.
The “Non-think” mode provides fast, intuitive responses for routine tasks.
“Think High” provides conscious logical analysis for complex problem-solving.
Finally, “Think Max” pushes the boundaries of model reasoning, bridging the gap with frontier models on complex reasoning and agentic tasks. This flexibility allows users to match the compute effort to the difficulty of the task, further enhancing cost-efficiency.
Breaking the Nvidia GPU stranglehold with local Chinese Huawei Ascend NPUs
While the model weights are the headline, the software stack released alongside them is arguably more important for the future of “Sovereign AI.”
Analyst Rui Ma highlighted a single sentence from the release as the most critical: DeepSeek validated their fine-grained Expert Parallelism (EP) scheme on Huawei Ascend NPUs (neural processing units).
By achieving a 1.50x to 1.73x speedup on non-Nvidia hardware, DeepSeek has provided a blueprint for high-performance AI deployment that is resilient to Western GPU supply chains and export controls.
However, it’s important to note that DeepSeek still claims it used officially licensed, legal Nvidia GPUs for DeepSeek V4’s training, in addition to the Huawei NPUs.
DeepSeek has also open-sourced the MegaMoE mega-kernel as a component of its DeepGEMM library. This CUDA-based implementation delivers up to a 1.96x speedup for latency-sensitive tasks like RL rollouts and high-speed agent serving.
This move ensures that developers can run these massive models with extreme efficiency on existing hardware, further cementing DeepSeek’s role as the primary driver of open-source AI infrastructure.
The technical report emphasizes that these optimizations are crucial for supporting a standard 1M context across all official services.
Licensing and local deployment
DeepSeek-V4 is released under the MIT License, the most permissive framework in the industry. This allows developers to use, copy, modify, and distribute the weights for commercial purposes without royalties—a stark contrast to the “restricted” open-weight licenses favored by other companies.
For local deployment, DeepSeek recommends setting sampling parameters to temperature = 1.0 and top_p = 1.0. For those utilizing the “Think Max” reasoning mode, the team suggests setting the context window to at least 384K tokens to avoid truncating the model’s internal reasoning chains.
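Those recommendations translate into a request payload like the sketch below, assuming an OpenAI-compatible endpoint. The model identifier is an assumption for illustration; consult the official DeepSeek documentation for the exact API shape.

```python
# Hypothetical OpenAI-compatible request payload using DeepSeek's
# recommended sampling defaults. The model name is an ASSUMED placeholder.
payload = {
    "model": "deepseek-v4",  # assumed identifier, check official docs
    "messages": [
        {"role": "user", "content": "Prove that sqrt(2) is irrational."}
    ],
    "temperature": 1.0,  # recommended sampling default
    "top_p": 1.0,        # recommended sampling default
}
# For "Think Max" mode, DeepSeek suggests serving with a context window of
# at least 384K tokens so long internal reasoning chains are not truncated.
print(payload["temperature"], payload["top_p"])
```

The 384K-token floor applies to the serving configuration (the context window), not to the per-request payload itself.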
The release includes a dedicated encoding folder with Python scripts demonstrating how to encode messages in OpenAI-compatible format and parse the model’s output, including reasoning content.
DeepSeek-V4 is also seamlessly integrated with leading AI agents like Claude Code, OpenClaw, and OpenCode. This native integration underscores its role as a bedrock for developer tools, providing an open-source alternative to the proprietary ecosystems of major cloud providers.
Community reactions and what comes next
The community reaction has been one of shock and validation. Hugging Face officially welcomed the “whale” back, stating that the era of cost-effective 1M context length has arrived.
Industry experts noted that the “second DeepSeek moment” has effectively reset the developmental trajectory of the entire field, placing massive pressure on closed-source providers like OpenAI and Anthropic to justify their premiums.
AI evaluation firm Vals AI noted that DeepSeek-V4 is now the “#1 open-weight model on our Vibe Code Benchmark, and it’s not close”.
DeepSeek is moving quickly to retire its older architectures. The company announced that the legacy deepseek-chat and deepseek-reasoner endpoints will be fully retired on July 24, 2026. All traffic is currently being rerouted to the V4-Flash architecture, signifying a total transition to the million-token standard.
DeepSeek-V4 is more than just a new model; it is a challenge to the status quo. By proving that architectural innovation can substitute for raw compute-maximalism, DeepSeek has made the highest levels of AI intelligence accessible to the global developer community at a far lower cost. That could benefit the globe, even at a time when lawmakers and leaders in Washington, D.C. are raising concerns about Chinese labs “distilling” from U.S. proprietary giants to train open source models, and fears that open source or jailbroken proprietary models could be used to create weapons and commit terror.
The truth is, while all of these are potential risks — as they were and have been with prior technologies that broadened information access, like search and the internet itself — the benefits seem to far outweigh them, and DeepSeek’s quest to keep frontier AI models open is of benefit to the entire planet of potential AI users, especially enterprises looking to adopt the cutting edge at the lowest possible cost.
The two new all-electric models include the BMW i7 50 xDrive and BMW i7 60 xDrive, each featuring a dual-motor, all-wheel-drive powertrain. The former is powered by a 449 hp motor producing 487 lb-ft of torque, while the latter features a 536 hp motor delivering 549 lb-ft of torque.
When Tim Cook took over Apple in 2011, the big question was whether anyone could follow in the footsteps of Steve Jobs. For many, Jobs was Apple.
A massive fifteen-year stint later, it’s clear that Cook has delivered – and then some. Not with a single breakthrough product like the Jobs-era iPhone or iPod, but a long list of hits, experiments and the occasional misstep that reshaped what Apple is today.
Here are 15 of our favourite Apple products that defined Cook’s decade-and-a-half legacy, both for better and for worse.
Apple Watch
Image Credit (Trusted Reviews)
The Apple Watch was the first big “post-Jobs” category – and it didn’t receive a particularly warm welcome initially. Early versions leaned awkwardly into fashion, complete with gold editions and luxury marketing, despite early Apple Watches only being supported for a relatively short period of time.
But, slowly but surely, Apple’s wearable found its footing. Today, the Watch is less about style and more about health with features like heart rate monitoring, ECG and fall detection, and has become one of the company’s most important products as a result.
It also helps that it plays so nicely with connected iPhones, offering a level of interoperability that most Android-based wearables still can’t quite match.
AirPods
Image Credit (Trusted Reviews)
Considering how popular AirPods are in 2026, it’s funny to look back at the reactions on social media when they were first revealed in 2016. People generally disregarded the buds, comparing them to electric toothbrush heads, but within a year of launch, they were everywhere.
As with the Apple Watch, Cook’s sprinkling of magic meant the buds worked very well with iPhones, iPads and Macs. They offer great sound and features like seamless handoff between devices, and they’ve vastly improved in the years since, not only in features but also in the overall design with the Pro and Max variants.
iPhone X
Image Credit (Trusted Reviews)
While the original iPhone was a Jobs-era innovation, the iPhone X was the moment that the modern iPhone was born.
It ditched the staples of the iconic iPhone design – the Home button and bezels – for an all-screen design with the now instantly recognisable Face ID notch. It was a controversial change at the time, but it’s a design that Apple still uses on its iPhone lineup today.
Apple Silicon
If there’s one product that feels like a true Cook-era mic drop, it has to be Apple Silicon.
Ditching the dominant force that was Intel to build its own chips was a huge risk – especially considering Mac apps would essentially need to be rebuilt for the platform to fully take advantage of the power on offer. But that risk paid off, almost immediately.
The M1 MacBook Air was absurdly fast, silent and efficient compared to practically anything else around, and it has only improved with newer versions in the years since.
iPad Pro
Image Credit (Trusted Reviews)
The iPad Pro is Apple’s long-running attempt to answer a simple question: Can a tablet replace your laptop?
Even after all these years, the answer is still… it depends. But with the Pencil, keyboard and increasingly powerful M-series desktop chips, it has become the go-to tool for creatives and professionals who favour touchscreen over traditional mouse input.
Apple Music
Image Credit (Trusted Reviews)
Cook didn’t just drive hardware – he also pushed Apple into the increasingly lucrative services business.
With its sights set on the dominant Spotify, Apple Music was one of the company’s first major forays into subscription services, and it was a massive success. It has a vast collection of songs available in Hi-Res format and Dolby Atmos for an immersive listening experience, and it, of course, plays exceptionally well with iOS, macOS and iPadOS.
Apple Pay
The launch of Apple Pay changed the way that we pay for products and services, both online and in the real world. It’s a feature that we don’t even think about these days – we just pull out our phones and pay with a tap – but Apple was one of the first to make that possible back in 2014.
Apple Vision Pro
The Apple Vision Pro is Cook’s “what’s next?” product, a £/$3499 headset that Apple insists isn’t VR but ‘spatial computing’. It’s early tech, expensive and a bit awkward – but also undeniably impressive compared to cheaper headsets from the likes of Meta with its M-series power and high-end graphics.
But whether it becomes the next iPhone or next HomePod remains to be seen – given the waning interest in VR headsets, it’s quite possible it could be the latter.
iPhone SE
Image Credit (Trusted Reviews)
Not every Apple product needs to be cutting-edge, and the iPhone SE is a great example of that.
Cook’s supply chain mastery was on full display here, reusing older components with newer internals to offer the iPhone experience at a much more affordable price. It wasn’t perfect, of course, but it had a special place for those who missed the ‘old school’ iPhone look.
Apple Pencil
Image Credit (Trusted Reviews)
Steve Jobs famously said nobody wanted a stylus – but it turns out that people did when it came to the big screens of iPads. They just didn’t want bad ones.
The Apple Pencil helped transform the iPad into a legitimate creative tool, especially for artists, designers and good ol’ note-takers, with an experience that still isn’t quite matched by Android stylus alternatives.
MagSafe
Image Credit (Trusted Reviews)
MagSafe – the iPhone variant, not the one used in Macs – was a game-changer when it was released with the iPhone 12, so much so that the framework has since been baked into the Qi2 standard for all phones to follow.
It just makes so much sense: using a ring of magnets, not only does the phone snap into place perfectly on wireless chargers, but it also lets you add a bunch of accessories like battery packs, wallets, or even camera grips without messing around with different cases. Just snap it on and pull it off when you’re done.
MacBook Pro
Image Credit (Trusted Reviews)
The MacBook Pro had a few bumps in the road under Cook’s leadership. People loved the old style of MacBook Pro, but Cook’s Apple reinvented it in 2016, removing fan-favourite features like MagSafe charging and SD card slots and introducing an OLED touch bar that quickly became the butt of jokes.
It took until 2021 for the MacBook Pro to reverse course, ditching the gimmicky touch bar and its reliance on USB-C and bringing back MagSafe charging and a plethora of ports, which, combined with Apple’s M-series silicon, now make it one of the best laptops around.
MacBook Neo
Image Credit (Trusted Reviews)
We couldn’t talk about the MacBook Pro without at least mentioning the MacBook Neo, which could be considered Cook’s magnum opus ahead of stepping down.
For years, the MacBook Air was Apple’s entry point into the macOS ecosystem, but it still cost close to a grand, if not more. The problem is that there are plenty of cheaper Windows-based laptops, and those tend to win out for budget-focused buyers.
But then along came the MacBook Neo, and despite sporting an iPhone-level A18 Pro chipset, it excels in the budget market in both general performance and battery longevity, all for just £/$599, which makes pretty much every cheap Windows laptop look underpowered and expensive. A defining moment indeed.
Magic Mouse 2
Image Credit (Trusted Reviews)
The Magic Mouse 2 was a beautifully designed mouse with one tiny problem: you had to charge it from the bottom, which meant you couldn’t use it while it was charging. Yes, the memes were great for this one.
It’s such a small decision, but it perfectly captures the “design over practicality” criticism that followed Apple for years, and for better or worse, will be remembered as a defining Cook-era product.
Polishing cloth
Yes, really.
A £/$19 Apple-branded cloth to clean your screen. It became an instant meme – not because it’s bad, but because it so perfectly represents Apple’s confidence in its brand.
Only Apple could sell that… and have it go out of stock.
Jokes aside, under Cook, Apple stopped being just a computer company and became a part of basically everything we do, from how we pay for coffee to what we wear on our wrists. It wasn’t always a perfect run, but he turned the post-Jobs era into a massive, unstoppable ecosystem that most of us now couldn’t imagine living without.
It’s safe to say that John Ternus is now the one with big shoes to fill.