Over 10,000 Zimbra Collaboration Suite (ZCS) instances exposed online are vulnerable to ongoing attacks exploiting a cross-site scripting (XSS) security flaw, according to nonprofit security organization Shadowserver.
Zimbra is a popular email and collaboration software suite used by hundreds of millions of people worldwide, including hundreds of government agencies and thousands of businesses.
The vulnerability (tracked as CVE-2025-48700) affects ZCS 8.8.15, 9.0, 10.0, and 10.1 and can allow unauthenticated attackers to access sensitive information after executing arbitrary JavaScript within the user’s session.
Synacor released security patches to address the flaw in June 2025, when it warned that CVE-2025-48700 exploits require no user interaction and can be triggered when a user views a maliciously crafted email message in the Zimbra Classic UI.
The U.S. Cybersecurity and Infrastructure Security Agency (CISA) also ordered Federal Civilian Executive Branch (FCEB) agencies to secure their Zimbra servers within three days, by April 23.
On Friday, Internet security watchdog Shadowserver also warned that over 10,500 Zimbra servers exposed online remain unpatched, most of them in Asia (3,794) and Europe (3,793).
This phishing campaign (codenamed Operation GhostMail by security researchers at Seqrite Labs) also targeted the Ukrainian State Hydrology Agency (a critical infrastructure entity under the Ministry of Infrastructure that provides navigational, maritime, and hydrographic support) and delivered an obfuscated JavaScript payload when recipients opened the malicious emails in vulnerable Zimbra webmail sessions.
“The phishing email has no malicious attachments, no suspicious links, no macros. The entire attack chain lives inside the HTML body of a single email, there are no malicious attachments,” Seqrite Labs said at the time.
Zimbra flaws are frequently exploited in attacks and have been used to breach thousands of vulnerable email servers in recent years.
For instance, Russian Winter Vivern cyberespies used another reflected XSS exploit to breach Zimbra webmail portals in February 2023 and steal emails sent and received by NATO-aligned organizations and individuals, including military personnel, government officials, and diplomats.
More recently, in October 2024, U.S. and U.K. cyber agencies warned that APT29 (a.k.a. Cozy Bear, Midnight Blizzard) hackers linked to Russia’s Foreign Intelligence Service (SVR) were targeting vulnerable Zimbra servers “at a mass scale,” exploiting a security issue that had been previously abused to steal email account credentials.
The long-rumored iPhone Fold, or maybe the iPhone Ultra, should arrive in fall 2026. Here’s what the rumor mill says about Apple’s first foldable iPhone.
A render of what the iPhone Fold could look like
While the rest of the smartphone industry has embraced foldable smartphones, Apple has so far held back from launching its own. However, the rumor mill certainly believes that one model will eventually come out of Cupertino, and that 2026 could be the year it finally does. With high expectations, the model referred to as the iPhone Fold is anticipated to be a big launch for the company. That launch could be just half a year away.
Google plans to invest up to $40 billion in Anthropic and support the AI firm’s growing computing needs, Bloomberg reports. The Alphabet subsidiary is committing to invest $10 billion now, at a $350 billion valuation for Anthropic, with another $30 billion to follow if Anthropic hits certain performance targets, according to Anthropic.
The promise of investment comes after Anthropic released its latest model, Mythos, to a limited group of partners this month. Anthropic says that Mythos is the company’s most powerful model to date and has significant cybersecurity applications. Due to potential misuse, Anthropic has restricted broader access while it works with select organizations to evaluate and address those risks — though the model has already fallen into unsanctioned hands. It’s also likely expensive to run at scale.
The AI race is increasingly defined by access to the compute needed to train and deploy these systems. OpenAI has moved aggressively to secure that capacity through a web of multi-hundred-billion deals across cloud providers, chip suppliers, and energy, including an expanded deal with chipmaker Cerebras this month.
Anthropic has been in a scramble of its own. The company has faced widespread complaints about Claude use limits in recent weeks and responded with a bevy of infrastructure deals. Earlier this month, Anthropic struck a deal with cloud computing provider CoreWeave for data center capacity. It also this week secured an additional $5 billion investment from Amazon, part of a broad agreement under which Anthropic is expected to spend up to $100 billion for around 5 gigawatts of compute capacity over time.
While Google is a direct competitor in AI models, it’s also a key infrastructure supplier to Anthropic. Anthropic relies heavily on Google Cloud for chips and infrastructure, including access to Google’s tensor processing units (or TPUs), which are specialized chips designed for AI workloads and considered among the best alternatives to Nvidia’s in-demand processors.
Anthropic’s relationship with Google predates this week’s news. Earlier this month, Anthropic announced a partnership with Google and chipmaker Broadcom, which designs custom AI chips for Google, to access multiple gigawatts of TPU-based computing capacity beginning in 2027; a subsequent Broadcom securities filing put that figure at 3.5 gigawatts.
The new Google investment expands that arrangement, with Google Cloud now providing a fresh 5 gigawatts of capacity over the next five years, with room to scale further.
Anthropic’s valuation stood at $350 billion as recently as February; investors have since been eager to back the company at $800 billion or more, according to Bloomberg. The company is also reportedly considering an IPO as soon as October.
DeepSeek, the Chinese AI startup spun out of the quantitative trading firm High-Flyer Capital Management, became a near-overnight sensation globally in January 2025 with the release of its open-source R1 model that matched proprietary U.S. giants.
It has felt like an epoch in AI since then, and while DeepSeek has released several updates to that model and its other V3 series, the international AI and business community has largely been waiting with bated breath for the follow-up to the R1 moment.
Now it has arrived with last night’s release of DeepSeek-V4, a 1.6-trillion-parameter Mixture-of-Experts (MoE) model available free under the commercially friendly open-source MIT License, which nears, and on some benchmarks surpasses, the performance of the world’s most advanced closed-source systems at approximately 1/6th the cost over the application programming interface (API).
As Chen noted in his post, “AGI belongs to everyone”. It’s available now on AI code sharing community Hugging Face and through DeepSeek’s API.
Frontier-class AI gets pushed into a lower price band
The most immediate impact of the DeepSeek-V4 launch is economic. The corrected pricing table shows DeepSeek is not pricing its new Pro model at near-zero levels, but it is still pushing high-end model access into a far lower cost tier than the leading U.S. frontier models.
DeepSeek-V4-Pro is priced through its API at $1.74 USD per 1 million input tokens on a cache miss and $3.48 per million output tokens.
DeepSeek V4 API pricing chart. Credit: DeepSeek AI
That puts a simple one-million-input, one-million-output comparison at $5.22. With cached input, the input price drops to $0.145 per million tokens, bringing that same blended comparison down to $3.625.
That is dramatically cheaper than the current premium pricing from OpenAI and Anthropic. GPT-5.5 is priced at $5.00 per million input tokens and $30.00 per million output tokens, for a combined $35.00 in the same simple comparison.
Claude Opus 4.7 is priced at $5.00 input and $25.00 output, for a combined $30.00.
On standard, cache-miss pricing, DeepSeek-V4-Pro comes in at roughly one-seventh the cost of GPT-5.5 and about one-sixth (1/6th) the cost of Claude Opus 4.7.
With cached input, the gap widens: DeepSeek-V4-Pro costs about one-tenth as much as GPT-5.5 and about one-eighth as much as Claude Opus 4.7.
The more extreme near-zero story belongs to DeepSeek-V4-Flash, not the Pro model. Flash is priced at $0.14 per million input tokens on a cache miss and $0.28 per million output tokens, for a combined $0.42.
With cached input, that drops to $0.308. In that case, DeepSeek’s cheaper model is more than 98% below GPT-5.5 and Claude Opus 4.7 in a simple input-plus-output comparison, or nearly 1/100th the cost — though the performance dips significantly.
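For readers who want to check the arithmetic, here is a minimal Python sketch that reproduces the blended input-plus-output figures quoted above from the published per-million-token prices. The dictionary structure and helper function are ours, and the Flash cache-hit input price is implied by the $0.308 blended figure rather than stated directly.

```python
# Per-million-token prices (USD) as quoted in this article; the structure is ours.
PRICES = {
    "DeepSeek-V4-Pro":   {"input": 1.74, "cached_input": 0.145, "output": 3.48},
    # Flash cache-hit input (0.028) is implied by the $0.308 blended figure.
    "DeepSeek-V4-Flash": {"input": 0.14, "cached_input": 0.028, "output": 0.28},
    "GPT-5.5":           {"input": 5.00, "cached_input": None,  "output": 30.00},
    "Claude Opus 4.7":   {"input": 5.00, "cached_input": None,  "output": 25.00},
}

def blended_cost(model: str, cached: bool = False) -> float:
    """Cost in USD of 1M input tokens plus 1M output tokens."""
    p = PRICES[model]
    input_price = p["cached_input"] if (cached and p["cached_input"] is not None) else p["input"]
    return input_price + p["output"]

for name in PRICES:
    print(f"{name}: ${blended_cost(name):.3f} cache miss, ${blended_cost(name, cached=True):.3f} cached")

# Ratios quoted in the article (cache miss): roughly 1/7th and 1/6th the cost.
print(blended_cost("GPT-5.5") / blended_cost("DeepSeek-V4-Pro"))          # ~6.7
print(blended_cost("Claude Opus 4.7") / blended_cost("DeepSeek-V4-Pro"))  # ~5.7
```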
DeepSeek is compressing advanced model economics into a much lower band, forcing developers and enterprises to revisit the cost-benefit calculation around premium closed models.
For companies running large inference workloads, that price gap can change what is worth automating. Tasks that look too expensive on GPT-5.5 or Claude Opus 4.7 may become economically viable on DeepSeek-V4-Pro, and even more so on DeepSeek-V4-Flash. The launch does not make intelligence free, but it does make the market harder for premium providers to defend on performance alone.
Benchmarking the frontier: DeepSeek-V4-Pro gets close, but GPT-5.5 and Opus 4.7 still lead on most shared tests
DeepSeek-V4-Pro-Max is best understood as a major open-weight leap, not a clean across-the-board defeat of the newest closed frontier systems.
The model’s strongest benchmark claims come from DeepSeek’s own comparison tables, where it is shown against GPT-5.4 xHigh, Claude Opus 4.6 Max and Gemini 3.1 Pro High and bests them on several tests, including Codeforces and Apex Shortlist.
But that is not the same as a head-to-head against OpenAI’s newer GPT-5.5 or Anthropic’s newer Claude Opus 4.7.
Looking only at DeepSeek-V4 versus the latest proprietary models, the picture is more restrained.
On this shared set, GPT-5.5 and Claude Opus 4.7 still lead most categories.
DeepSeek-V4-Pro-Max’s best showing is on BrowseComp, the benchmark measuring agentic AI web-browsing prowess (especially at surfacing hard-to-find information), where it scores 83.4%, narrowly behind GPT-5.5 at 84.4% and ahead of Claude Opus 4.7 at 79.3%.
On Terminal-Bench 2.0, DeepSeek scores 67.9%, close to Claude Opus 4.7’s 69.4%, but far behind GPT-5.5’s 82.7%.
| Benchmark | DeepSeek-V4-Pro-Max | GPT-5.5 | GPT-5.5 Pro (where shown) | Claude Opus 4.7 | Best result among these |
| --- | --- | --- | --- | --- | --- |
| GPQA Diamond | 90.1% | 93.6% | — | 94.2% | Claude Opus 4.7 |
| Humanity’s Last Exam, no tools | 37.7% | 41.4% | 43.1% | 46.9% | Claude Opus 4.7 |
| Humanity’s Last Exam, with tools | 48.2% | 52.2% | 57.2% | 54.7% | GPT-5.5 Pro |
| Terminal-Bench 2.0 | 67.9% | 82.7% | — | 69.4% | GPT-5.5 |
| SWE-Bench Pro / SWE Pro | 55.4% | 58.6% | — | 64.3% | Claude Opus 4.7 |
| BrowseComp | 83.4% | 84.4% | 90.1% | 79.3% | GPT-5.5 Pro |
| MCP Atlas / MCPAtlas Public | 73.6% | 75.3% | — | 79.1% | Claude Opus 4.7 |
The shared academic-reasoning results favor the closed models: On GPQA Diamond, DeepSeek-V4-Pro-Max scores 90.1%, while GPT-5.5 reaches 93.6% and Claude Opus 4.7 reaches 94.2%.
On Humanity’s Last Exam without tools, DeepSeek scores 37.7%, behind GPT-5.5 at 41.4%, GPT-5.5 Pro at 43.1% and Claude Opus 4.7 at 46.9%. With tools enabled, DeepSeek rises to 48.2%, but still trails GPT-5.5 at 52.2%, GPT-5.5 Pro at 57.2% and Claude Opus 4.7 at 54.7%.
The agentic and software-engineering results are more mixed, but they still show DeepSeek-V4-Pro-Max trailing GPT-5.5 and Opus 4.7.
On Terminal-Bench 2.0, DeepSeek’s 67.9% is competitive with Claude Opus 4.7’s 69.4%, but GPT-5.5 is much higher at 82.7%.
On SWE-Bench Pro, DeepSeek’s 55.4% trails GPT-5.5 at 58.6% and Claude Opus 4.7 at 64.3%. On MCP Atlas, DeepSeek’s 73.6% is slightly behind GPT-5.5 at 75.3% and Claude Opus 4.7 at 79.1%.
BrowseComp is the standout: DeepSeek’s 83.4% beats Claude Opus 4.7’s 79.3% and nearly matches GPT-5.5’s 84.4%, though GPT-5.5 Pro’s 90.1% remains well ahead.
So ultimately, DeepSeek-V4-Pro-Max does not appear to dethrone GPT-5.5 or Claude Opus 4.7 on the benchmarks that can be directly compared across the companies’ published tables. But it gets close enough on several of them — especially BrowseComp, Terminal-Bench 2.0 and MCP Atlas — that its much lower API pricing becomes the headline.
In practical terms, DeepSeek does not need to win every leaderboard row to matter. If it can deliver near-frontier performance on many enterprise-relevant agent and reasoning tasks at roughly one-sixth to one-seventh the standard API cost of GPT-5.5 or Claude Opus 4.7, it still forces a major rethink of the economics of advanced AI deployment.
DeepSeek-V4-Pro-Max is clearly the strongest open-weight model in the field right now, and it is unusually close to frontier closed systems on several practical benchmarks.
While GPT-5.5 and Claude Opus 4.7 still retain the lead in most direct head-to-head comparisons across the companies’ benchmark charts, DeepSeek-V4-Pro gets close while being dramatically cheaper and openly available.
A big jump from DeepSeek V3.2
To understand the magnitude of this release, one must look at the performance gains of the base models. DeepSeek-V4-Pro-Base represents a significant advancement over the previous generation, DeepSeek-V3.2-Base. In World Knowledge, V4-Pro-Base achieved 90.1 on MMLU (5-shot) compared to V3.2’s 87.8, and a massive jump on MMLU-Pro from 65.5 to 73.5.
The improvement in high-level reasoning and verified facts is even more pronounced: on SuperGPQA, V4-Pro-Base reached 53.9 compared to V3.2’s 45.0, and on the FACTS Parametric benchmark, it more than doubled its predecessor’s performance, jumping from 27.1 to 62.6. Simple-QA verified scores also saw a dramatic rise from 28.3 to 55.2.
The Long Context capabilities have also been refined. On LongBench-V2, V4-Pro-Base scored 51.5, significantly outpacing the 40.2 achieved by V3.2-Base. In Code and Math, V4-Pro-Base reached 76.8 on HumanEval (Pass@1), up from 62.8 on V3.2-Base.
These numbers underscore that DeepSeek has not just optimized for inference cost, but has fundamentally improved the intelligence density of its base architecture. The efficiency story is equally compelling for the Flash variant. DeepSeek-V4-Flash-Base, despite using a substantially smaller number of parameters, outperforms the larger V3.2-Base across a wide range of benchmarks, particularly in long-context scenarios.
A new information ‘traffic controller,’ Manifold-Constrained Hyper-Connections (mHC)
The standout technical achievement of V4 is its native one-million-token context window. Historically, maintaining such a large context required massive memory (the key-value, or KV, cache).
DeepSeek solved this by introducing a Hybrid Attention Architecture that combines Compressed Sparse Attention (CSA) to reduce initial token dimensionality and Heavily Compressed Attention (HCA) to aggressively compress the memory footprint for long-range dependencies.
In practice, the V4-Pro model requires only 10% of the KV cache and 27% of the single-token inference FLOPs compared to its predecessor, the DeepSeek-V3.2, even when operating at a 1M token context.
To stabilize a network of 1.6 trillion parameters, DeepSeek moved beyond traditional residual connections. The company’s researchers incorporated Manifold-Constrained Hyper-Connections (mHC) to strengthen signal propagation across layers while preserving the model’s expressivity.
mHC allows an AI to have a much wider flow of information (so it can learn more complex things) without the risk of the model becoming unstable or “breaking” during its training. It’s like giving a city a 10-lane highway but adding a perfect AI traffic controller to ensure no one ever hits the brakes.
This is paired with the Muon optimizer, which allowed the team to achieve faster convergence and greater training stability during the pre-training on more than 32T diverse and high-quality tokens.
This pre-training data was refined to remove low-quality auto-generated content, mitigating the risk of model collapse and prioritizing unique, high-value academic content. The model’s 1.6T parameters utilize a Mixture-of-Experts (MoE) design where only 49B parameters are activated per token, further driving down compute requirements.
Training the mixture-of-experts (MoE) to work as a whole
DeepSeek-V4 was not simply trained; it was “cultivated” through a unique two-stage paradigm.
First, through Independent Expert Cultivation, domain-specific experts were trained through Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL) using the GRPO (Group Relative Policy Optimization) algorithm. This allowed each expert to master specialized skills like mathematical reasoning or codebase analysis.
Second, Unified Model Consolidation integrated these distinct proficiencies into a single model via on-policy distillation, in which the unified model acts as the student and learns by minimizing a reverse KL divergence loss against the teacher models. This distillation process ensures that the model preserves the specialized capabilities of each expert while operating as a cohesive whole.
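To make the distillation objective concrete, here is a minimal PyTorch-style sketch of a reverse KL loss between student and teacher output distributions. This is a generic textbook formulation written for illustration, not DeepSeek’s actual training code, and the tensor shapes and names are assumptions.

```python
import torch
import torch.nn.functional as F

def reverse_kl_loss(student_logits: torch.Tensor, teacher_logits: torch.Tensor) -> torch.Tensor:
    """Reverse KL divergence KL(student || teacher), averaged over positions.

    In on-policy distillation, sequences are sampled from the student and the
    student is then nudged toward the teacher's distribution on those tokens.
    """
    log_p_student = F.log_softmax(student_logits, dim=-1)
    log_p_teacher = F.log_softmax(teacher_logits, dim=-1)
    p_student = log_p_student.exp()
    # KL(p_s || p_t) = sum_v p_s(v) * (log p_s(v) - log p_t(v))
    kl = (p_student * (log_p_student - log_p_teacher)).sum(dim=-1)
    return kl.mean()

# Toy usage: random logits standing in for model outputs (batch, seq_len, vocab).
student_logits = torch.randn(2, 16, 32000, requires_grad=True)
teacher_logits = torch.randn(2, 16, 32000)
loss = reverse_kl_loss(student_logits, teacher_logits.detach())
loss.backward()
```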
The model’s reasoning capabilities are further segmented into three increasing “effort” modes.
The “Non-think” mode provides fast, intuitive responses for routine tasks.
“Think High” provides conscious logical analysis for complex problem-solving.
Finally, “Think Max” pushes the boundaries of model reasoning, bridging the gap with frontier models on complex reasoning and agentic tasks. This flexibility allows users to match the compute effort to the difficulty of the task, further enhancing cost-efficiency.
Breaking the Nvidia GPU stranglehold with local Chinese Huawei Ascend NPUs
While the model weights are the headline, the software stack released alongside them is arguably more important for the future of “Sovereign AI.”
Analyst Rui Ma highlighted a single sentence from the release as the most critical: DeepSeek validated their fine-grained Expert Parallelism (EP) scheme on Huawei Ascend NPUs (neural processing units).
By achieving a 1.50x to 1.73x speedup on non-Nvidia hardware, DeepSeek has provided a blueprint for high-performance AI deployment that is resilient to Western GPU supply chains and export controls.
However, it’s important to note that DeepSeek still claims it used officially licensed, legal Nvidia GPUs for DeepSeek V4’s training, in addition to the Huawei NPUs.
DeepSeek has also open-sourced the MegaMoE mega-kernel as a component of its DeepGEMM library. This CUDA-based implementation delivers up to a 1.96x speedup for latency-sensitive tasks like RL rollouts and high-speed agent serving.
This move ensures that developers can run these massive models with extreme efficiency on existing hardware, further cementing DeepSeek’s role as the primary driver of open-source AI infrastructure.
The technical report emphasizes that these optimizations are crucial for supporting a standard 1M context across all official services.
Licensing and local deployment
DeepSeek-V4 is released under the MIT License, the most permissive framework in the industry. This allows developers to use, copy, modify, and distribute the weights for commercial purposes without royalties—a stark contrast to the “restricted” open-weight licenses favored by other companies.
For local deployment, DeepSeek recommends setting sampling parameters to temperature = 1.0 and top_p = 1.0. For those utilizing the “Think Max” reasoning mode, the team suggests setting the context window to at least 384K tokens to avoid truncating the model’s internal reasoning chains.
The release includes a dedicated encoding folder with Python scripts demonstrating how to encode messages in OpenAI-compatible format and parse the model’s output, including reasoning content.
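As a rough sketch of what a call with those recommended sampling settings could look like through an OpenAI-compatible client, here is a short Python example; the base URL and model identifier are placeholders rather than confirmed values, so check DeepSeek’s own documentation before using them.

```python
from openai import OpenAI

# Placeholder endpoint and key; DeepSeek's API is described as OpenAI-compatible.
client = OpenAI(
    base_url="https://api.deepseek.com",  # assumed endpoint, verify against the docs
    api_key="YOUR_DEEPSEEK_API_KEY",
)

response = client.chat.completions.create(
    model="deepseek-v4",  # hypothetical model identifier
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the trade-offs of Mixture-of-Experts models."},
    ],
    temperature=1.0,  # sampling values recommended for local deployment in the release
    top_p=1.0,
)

print(response.choices[0].message.content)
```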
DeepSeek-V4 is also seamlessly integrated with leading AI agents like Claude Code, OpenClaw, and OpenCode. This native integration underscores its role as a bedrock for developer tools, providing an open-source alternative to the proprietary ecosystems of major cloud providers.
Community reactions and what comes next
The community reaction has been one of shock and validation. Hugging Face officially welcomed the “whale” back, stating that the era of cost-effective 1M context length has arrived.
Industry experts noted that the “second DeepSeek moment” has effectively reset the developmental trajectory of the entire field, placing massive pressure on closed-source providers like OpenAI and Anthropic to justify their premiums.
AI evaluation firm Vals AI noted that DeepSeek-V4 is now the “#1 open-weight model on our Vibe Code Benchmark, and it’s not close”.
DeepSeek is moving quickly to retire its older architectures. The company announced that the legacy deepseek-chat and deepseek-reasoner endpoints will be fully retired on July 24, 2026. All traffic is currently being rerouted to the V4-Flash architecture, signifying a total transition to the million-token standard.
DeepSeek-V4 is more than just a new model; it is a challenge to the status quo. By proving that architectural innovation can substitute for raw compute maximalism, DeepSeek has made the highest levels of AI intelligence accessible to the global developer community at a far lower cost, something that could benefit the globe even at a time when lawmakers and leaders in Washington, D.C. are raising concerns about Chinese labs “distilling” from U.S. proprietary giants to train open-source models, and voicing fears that those open-source models, or jailbroken proprietary ones, could be used to create weapons and commit acts of terror.
The truth is, while all of these are potential risks (as they were and have been with prior technologies that broadened information access, like search and the internet itself), the benefits seem to far outweigh them, and DeepSeek’s quest to keep frontier AI models open is of benefit to the entire planet of potential AI users, especially enterprises looking to adopt the cutting edge at the lowest possible cost.
The two new all-electric models include the BMW i7 50 xDrive and BMW i7 60 xDrive, each featuring a dual-motor, all-wheel-drive powertrain. The former is powered by a 449 hp motor producing 487 lb-ft of torque, while the latter features a 536 hp motor delivering 549 lb-ft of torque.
When Tim Cook took over Apple in 2011, the big question was whether anyone could follow in the footsteps of Steve Jobs. For many, Jobs was Apple.
A massive fifteen-year stint later, it’s clear that Cook has delivered – and then some. Not with a single breakthrough product like the Jobs-era iPhone or iPod, but a long list of hits, experiments and the occasional misstep that reshaped what Apple is today.
Here are 15 of our favourite Apple products that defined Cook’s decade-and-a-half legacy, both for better and for worse.
Apple Watch
The Apple Watch was the first big “post-Jobs” category – and it didn’t receive a particularly warm welcome initially. Early versions leaned awkwardly into fashion, complete with gold editions and luxury marketing, even though those early Apple Watches were only supported for a relatively short period of time.
But, slowly but surely, Apple’s wearable found its footing. Today, the Watch is less about style and more about health with features like heart rate monitoring, ECG and fall detection, and has become one of the company’s most important products as a result.
It also helps that it plays so nicely with connected iPhones, offering a level of interoperability that most Android-based wearables still can’t quite match.
AirPods
Considering how popular AirPods are in 2026, it’s funny to look back at the reactions on social media when they were first revealed in 2016. People generally disregarded the buds, comparing them to electric toothbrush heads, but within a year of launch, they were everywhere.
As with the Apple Watch, Cook’s sprinkling of magic meant the buds worked very well with iPhones, iPads and Macs. They offer great sound and features like seamless handoff between devices, and they’ve vastly improved in the years since, not only in features but also in the overall design with the Pro and Max variants.
iPhone X
While the original iPhone was a Jobs-era innovation, the iPhone X was the moment that the modern iPhone was born.
It ditched the staples of the iconic iPhone design – the Home button and bezels – for an all-screen design with the now instantly recognisable Face ID notch. It was a controversial change at the time, but it’s a design that Apple still uses on its iPhone lineup today.
Apple Silicon
If there’s one product that feels like a true Cook-era mic drop, it has to be Apple Silicon.
Ditching the dominant force that was Intel to build its own chips was a huge risk – especially considering Mac apps would essentially need to be rebuilt for the platform to fully take advantage of the power on offer. But that risk paid off, almost immediately.
The M1 MacBook Air was absurdly fast, silent and efficient compared to practically anything else around, and it has only improved with newer versions in the years since.
iPad Pro
The iPad Pro is Apple’s long-running attempt to answer a simple question: Can a tablet replace your laptop?
Even after all these years, the answer is still… it depends. But with the Pencil, keyboard and increasingly powerful M-series desktop chips, it has become the go-to tool for creatives and professionals who favour touchscreen over traditional mouse input.
Apple Music
Cook didn’t just drive hardware – he also pushed Apple into the increasingly lucrative services business.
With its sights set on the dominant Spotify, Apple Music was the company’s first foray into services, and it was a massive success. It has a vast collection of songs available in Hi-Res format and Dolby Atmos for an immersive listening experience, and it, of course, plays exceptionally well with iOS, macOS and iPadOS.
Apple Pay
The launch of Apple Pay changed the way that we pay for products and services, both online and in the real world. It’s a feature that we don’t even think about these days – we just pull out our phones and pay with a tap – but Apple was one of the first to make that possible back in 2014.
Apple Vision Pro
The Apple Vision Pro is Cook’s “what’s next?” product, a £/$3499 headset that Apple insists isn’t VR but ‘spatial computing’. It’s early tech, expensive and a bit awkward – but also undeniably impressive compared to cheaper headsets from the likes of Meta with its M-series power and high-end graphics.
But whether it becomes the next iPhone or next HomePod remains to be seen – given the waning interest in VR headsets, it’s quite possible it could be the latter.
iPhone SE
Not every Apple product needs to be cutting-edge, and the iPhone SE is a great example of that.
Cook’s supply chain mastery was on full display here, reusing older components with newer internals to offer the iPhone experience at a much more affordable price. It wasn’t perfect, of course, but it had a special place for those who missed the ‘old school’ iPhone look.
Apple Pencil
Steve Jobs famously said nobody wanted a stylus – but it turns out that people did when it came to the big screens of iPads. They just didn’t want bad ones.
The Apple Pencil helped transform the iPad into a legitimate creative tool, especially for artists, designers and good ol’ note-takers, with an experience that still isn’t quite matched by Android stylus alternatives.
MagSafe
MagSafe – the iPhone variant, not that used in Macs – was a game-changer when it was released with the iPhone 12, so much so that the framework has since been baked into the Qi2 standard for all phones to follow.
It just makes so much sense: using a ring of magnets, not only does the phone snap into place perfectly on wireless chargers, but it also lets you add a bunch of accessories like battery packs, wallets, or even camera grips without messing around with different cases. Just snap it on and pull it off when you’re done.
MacBook Pro
The MacBook Pro had a few bumps in the road under Cook’s leadership. People loved the old style of MacBook Pro, but Cook’s Apple reinvented it in 2016, removing fan-favourite features like MagSafe charging and SD card slots and introducing an OLED Touch Bar that quickly became the butt of jokes.
It took until 2021 for the MacBook Pro to reverse course, ditching the gimmicky touch bar and its reliance on USB-C and bringing back MagSafe charging and a plethora of ports, which, combined with Apple’s M-series silicon, now make it one of the best laptops around.
MacBook Neo
We couldn’t talk about the MacBook Pro without at least mentioning the MacBook Neo, which could be considered Cook’s magnum opus ahead of stepping down.
For years, the MacBook Air was Apple’s entry point into the macOS ecosystem, but it still cost close to a grand, if not more. The problem is that there are plenty of cheaper Windows-based laptops, and those tend to win out for budget-focused buyers.
But then along came the MacBook Neo, and despite sporting an iPhone-level A18 Pro chipset, it excels in the budget market in both general performance and battery longevity, all for just £/$599, which makes pretty much every cheap Windows laptop look underpowered and expensive. A defining moment indeed.
Magic Mouse 2
The Magic Mouse 2 was a beautifully designed mouse with one tiny problem: you have to charge it from the bottom. Which means you can’t use it while it’s charging. Yes, the memes were great for this one.
It’s such a small decision, but it perfectly captures the “design over practicality” criticism that followed Apple for years, and for better or worse, will be remembered as a defining Cook-era product.
Polishing cloth
Yes, really.
A £/$19 Apple-branded cloth to clean your screen. It became an instant meme – not because it’s bad, but because it so perfectly represents Apple’s confidence in its brand.
Only Apple could sell that… and have it go out of stock.
Jokes aside, under Cook, Apple stopped being just a computer company and became a part of basically everything we do, from how we pay for coffee to what we wear on our wrists. It wasn’t always a perfect run, but he turned the post-Jobs era into a massive, unstoppable ecosystem that most of us now couldn’t imagine living without.
It’s safe to say that John Ternus is now the one with big shoes to fill.
A lack of skilled workers has been a problem throughout many different industries in the United States over the past several years. Even the military is not immune to the problem, as the U.S. Navy can’t find workers to build warships. Believe it or not, there’s a shortage of pilots as well. However, the U.S. Air Force is currently working to alleviate that problem to the tune of $50,000 per pilot.
This incentive program is designed to keep active-duty pilots in service with bonuses, thus helping to fill the shortage gap. Those bonuses are paid in exchange for longer commitments, and apply to eligible pilots, remotely piloted aircraft operators, air battle managers, and combat systems officers. The 2026 fiscal year aviation bonus can go as high as $50,000 per year, depending on the role and overall experience. There’s also a structure in place for higher payouts in exchange for shorter agreements for fighter, bomber, and U-2 pilots (who wear space suits when they fly).
This incentive program isn’t new, and was recently used in 2025. It targeted pilots with one or two years left on their Undergraduate Pilot Training commitment and included a bonus of up to $50,000 annually, with an option of up to $200,000 up front. That option gave pilots the ability to select their preferred assignments. The 2025 program had separate bonus tiers in place for combat systems officers and navigators, ranging from $15,000 to $30,000 per year, with longer commitments reaching as high as $360,000.
Air Force pilot retention programs have existed for years
The U.S. Air Force’s modern push to retain pilots kicked into high gear back in 2017 with the Aviation Bonus Program. This was a tiered structure, and went beyond the retention pay program previously offered years prior. The program paid eligible fighter and drone pilots up to $35,000 per year. Bomber and special operations pilots received up to $30,000, while surveillance and rescue pilots got up to $28,000. Other roles, including combat systems officers, were paid anywhere from $10,000 to $20,000 per year.
The 2017 Aviation Bonus Program was authorized by Congress that year to address concerns over a lack of active-duty pilots. The new program was part of the National Defense Authorization Act (NDAA), which included input from senior Air Force leadership. The NDAA is congressional legislation that has been passed every year since 1961. It grants the Department of Defense (which very recently made a deal with OpenAI) the authority to operate, set personnel policies, and prioritize funding.
The Air Force has not publicly disclosed if the Aviation Bonus Program has successfully helped to retain pilots as intended. However, there have been indications from some pilots that money is only one part of the decision-making process. There are broader factors at play, including quality of service, mission experience, and long-term career fulfillment. All of these elements could help determine whether pilots choose to remain in service, or return to civilian life once their time is up.
The 2026 versions of the ASUS Zenbook and VivoBook laptops have recently been released in the Indian market. The latest range comprises high-end and budget-friendly models that boast enhanced processing capabilities, light weight, and advanced AI features. Users can purchase these laptops through various online and offline mediums.
Design, Performance, and AI Features
All the laptops in this range are powered by next-gen Intel Core Ultra Series 3 and Snapdragon X2 processors. With AI capabilities built into the chips, users get improved speed and performance, especially for modern tasks.
This results in faster switching between apps, better support for creative work, and longer battery life. Overall, the experience feels more responsive and reliable for daily use, whether for office work, learning, or streaming.
Another standout feature of this line of ASUS laptops is the design. The casing is made of Ceraluminum, a material that makes the laptops very durable yet lightweight. In other words, they look luxurious while being highly portable.
Complete Lineup and Pricing
The newly launched portfolio includes a wide range of models:
Zenbook S14 (UX5406AA) – starting at INR 1,79,990
Zenbook DUO (UX8407AA) – starting at INR 2,99,990
Zenbook A14 (UX3407NA) – starting at INR 1,85,990
Vivobook 14 (X1407AA) – starting at INR 98,990
Vivobook 16 (X1607AA) – starting at INR 1,01,990
Vivobook S14 (S3407AA) – starting at INR 1,28,990
Vivobook S16 (S3607AA) – starting at INR 1,31,990
Zenbook Series Highlights
The Zenbook series focuses on premium design and performance, with each model built for different needs. The Zenbook S14 is a lightweight laptop at around 1.2 kg, featuring a slim design and a 14-inch 3K OLED touch display, along with up to 27 hours of battery life, making it ideal for users who need portability.
Moreover, the Zenbook DUO is equipped with two 14-inch 3K OLED touchscreen displays that offer an improved multitasking environment, as well as a battery life of up to 32 hours. Conversely, the Zenbook A14 focuses more on its portability, as it weighs less than 1 kg and uses the Snapdragon X2 chipset, which highlights its AI performance and battery life.
Vivobook Series Highlights
The main theme of the Vivobook laptops is to provide efficient performance at an affordable price. The Vivobook 14 and 16 models are best suited for general use, and their functionality supports productivity and provides additional security. On the other hand, the Vivobook S14 and S16 models are aimed at more demanding users, offering better performance and longer battery life.
Availability Details
The new laptops are widely available throughout India, via both online and offline channels. They are available in ASUS Exclusive Stores, Hybrid Stores, the ASUS E-shop, Flipkart, and Amazon. They are also sold through authorized retail partners across the country. The Zenbook A16 is expected to be available from June onwards.
It certainly won’t be a slow start for Ternus, who currently oversees all of Apple’s hardware engineering. But he also will be tasked with navigating new AI and manufacturing challenges, as I explore in this week’s One More Thing episode, embedded below.
Watch this: The Biggest Battles Ahead for Apple’s Next CEO, John Ternus
But Cook isn’t completely leaving Apple. His influence continues as he takes the role of executive chairman on Apple’s board of directors. You might see him continue to play the role of a Washington whisperer, as the company said Cook will be “engaging with policymakers around the world.”
That leaves Ternus free to focus his energy on new product launches. His first mission? Make sure that enhanced personalized Siri really works well on those fun new gadgets this fall. Because if that flops, it’s going to be a rough first year.
Watch this: What’s Next for Apple Without Tim Cook at the Helm
For more One More Thing, subscribe to our YouTube page to catch Bridget Carey breaking down the latest Apple news and issues every Friday.
If you have a desktop 3D printer, you probably want something to hang filament spools on. [LVTRC] has a spool roller that fits the bill. It also incorporates a scale and a round touch screen. (Google Translate)
We’ve seen those round screens before, and now we wonder why we didn’t think of this. The GC9A01 display shows a progress ring, and its touch interface lets you save settings and calibrations to EEPROM. An Arduino Nano provides the brain, and the load cell connects to an HX711. The project is made to fit a specific printer, but it should be little trouble to adapt it to a different printer or to fit it to a standalone mount.
One of the calibration steps, of course, is to program the weight of an empty spool to subtract from the total weight. The device can store up to five specific profiles.
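The underlying calculation is just tare subtraction against a stored profile. Here is a tiny Python sketch of that logic for illustration only; the real project runs as Arduino firmware reading the HX711, and the profile weights below are made up.

```python
# Empty-spool (tare) weights in grams; values are invented for illustration.
SPOOL_PROFILES_G = {
    "cardboard_750g": 140.0,
    "plastic_1kg": 230.0,
}

def filament_remaining(gross_g: float, profile: str) -> float:
    """Net filament weight: the scale reading minus the stored empty-spool weight."""
    return max(gross_g - SPOOL_PROFILES_G[profile], 0.0)

def progress_fraction(gross_g: float, profile: str, full_filament_g: float) -> float:
    """Fraction of filament left, e.g. to drive the progress ring on the display."""
    return filament_remaining(gross_g, profile) / full_filament_g

print(filament_remaining(812.0, "plastic_1kg"))        # 582.0 g of filament left
print(progress_fraction(812.0, "plastic_1kg", 1000.0)) # ~0.58 of a 1 kg spool
```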
Not the biggest spool holder we’ve seen. We keep thinking that we don’t know why we want a circular screen, and then someone always drops in to show us another thing we didn’t think about.
Russian telecom operators ask to delay the introduction of VPN traffic fees
Companies cite technical hurdles
VPN traffic fees are part of a wider plan to reduce VPN usage in the country
Russian telecom operators have called on the Ministry of Digital Development to postpone the introduction of new fees on VPN traffic.
According to the Moscow-based business daily Vedomosti, providers claim technical limitations mean their systems will not be ready for the scheduled May 1 rollout.
In late March, Digital Development Minister Maksut Shadaev instructed operators to levy extra charges on users exceeding 15GB of international data per month.
The move is part of a broader strategy to reduce VPN usage as more residents adopt the technology to bypass blocks on platforms like Telegram.
VPNs function by rerouting traffic through encrypted international servers. This masks a user’s IP address and allows them to bypass domestic censorship to access blocked websites.
Maxim Katz, a prominent Russian opposition figure who tracks VPN connectivity in the region, says these efforts signal how Roskomnadzor — Russia’s censorship agency — lacks the technical abilities to prevent residents from using VPNs to bypass government-imposed restrictions.
“They cannot do it technically, and now they want the businesses to help them. But the businesses don’t want to help them,” Katz told TechRadar. He also suggested that companies will likely obey the orders, but that, in practice, “actually nothing would change.”