Publicly released exploit code for an effectively unpatched vulnerability that gives root access to virtually all releases of Linux is setting off alarm bells as defenders scramble to ward off severe compromises inside data centers and on personal devices.
The vulnerability and the code that exploits it were released Wednesday evening by researchers from security firm Theori, five weeks after they privately disclosed the flaw to the Linux kernel security team. The team patched the vulnerability in versions 7.0, 6.19.12, 6.18.12, 6.12.85, 6.6.137, 6.1.170, 5.15.204, and 5.10.254, but few Linux distributions had incorporated those fixes at the time the exploit was released.
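As a practical aside, the patched releases listed above can be compared against a running kernel with a short script. This is an illustrative sketch based on the version numbers in the disclosure, not Theori's exploit or an official detection tool; the `is_patched` helper and its branch table are assumptions for demonstration.

```python
# Hypothetical helper: check whether the running kernel is at or above
# the first patched release for its stable branch. Version numbers come
# from the disclosure; branches not listed are treated as unpatched.
import platform
import re

# First patched release for each affected stable branch
PATCHED = {
    (6, 19): (6, 19, 12),
    (6, 18): (6, 18, 12),
    (6, 12): (6, 12, 85),
    (6, 6): (6, 6, 137),
    (6, 1): (6, 1, 170),
    (5, 15): (5, 15, 204),
    (5, 10): (5, 10, 254),
}

def parse_release(release: str):
    """Extract the numeric kernel version from a string like '6.12.84-generic'."""
    m = re.match(r"(\d+)\.(\d+)(?:\.(\d+))?", release)
    if not m:
        return None
    major, minor, patch = m.groups()
    return (int(major), int(minor), int(patch or 0))

def is_patched(release: str) -> bool:
    ver = parse_release(release)
    if ver is None:
        return False  # unparseable: treat as unknown/unpatched
    if ver >= (7, 0, 0):
        return True   # 7.0 and later ship the fix
    fixed = PATCHED.get(ver[:2])
    if fixed is None:
        return False  # branch not in the patch list: assume vulnerable
    return ver >= fixed

if __name__ == "__main__":
    release = platform.release()
    status = "patched" if is_patched(release) else "potentially vulnerable"
    print(f"Kernel {release}: {status}")
```

A version check like this is only a first pass; distributions frequently backport fixes without bumping the upstream version string, so the authoritative answer comes from each distro's security advisories.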
A single script hacks all distros
The critical flaw, tracked as CVE-2026-31431 and named CopyFail, is a local privilege escalation, a vulnerability class that allows unprivileged users to elevate themselves to administrators. CopyFail is particularly severe because it can be exploited with a single piece of exploit code—released in Wednesday’s disclosure—that works across all vulnerable distributions with no modification. With that, an attacker can, among other things, hack multi-tenant systems, break out of containers based on Kubernetes or other frameworks, and create malicious pull requests that pipe the exploit code through CI/CD workflows.
“‘Local privilege escalation’ sounds dry, so let me unpack it,” researcher Jorijn Schrijvershof wrote Thursday. “It means: an attacker who already has some way to run code on the machine, even as the most boring unprivileged user, can promote themselves to root. From there they can read every file, install backdoors, watch every process, and pivot to other systems.”
Schrijvershof added that the same Python script Theori released works reliably on Ubuntu 22.04, Amazon Linux 2023, SUSE 15.6, and Debian 12.
The gap between a laptop and a tablet has never been smaller. In 2026, Apple is out here selling a laptop with a smartphone processor in it — the MacBook Neo — while also selling the iPad Pro, a tablet with a PC chip at its heart. The A18 Pro chip at the core of the Neo, the same chip that powered the iPhone 16 Pro, is more powerful than an Intel Core i7 processor from a decade ago. But while the iPad Pro may run laps around many laptops in sheer performance, what if you don’t want to be locked to the Apple ecosystem? Can a Samsung Galaxy Tab tablet replace your laptop?
No beating around the bush: a high-end Samsung tablet can easily replace a laptop for a large number of people. For the past two years, I’ve replaced my Windows laptop with a Samsung tablet. When I first embarked on what was, at the time, a bold experiment in my personal computing habits, I wrote that I was shocked at how capable Samsung tablets were but that there were still a few quirks. In 2026, I wouldn’t say all the kinks have been ironed out, but there are fewer of them with easier workarounds.
Crucially, I’m using a Galaxy Tab S10 Ultra, a beast of a tablet with a 14.5″ display and a MediaTek Dimensity 9300+ processor. I would strongly caution against trying to replace your laptop with one of Samsung’s budget tablets, as it simply won’t have the necessary power or display real estate. So, if you’re wondering whether your next computer should be a Samsung tablet, here are the pros and cons I’ve found after making the switch.
Samsung’s tablets are more laptop than ever
With the right accessories, Samsung’s most powerful tablets are easy laptop replacements. My current setup uses a Galaxy Tab S10 Ultra — a 14.5″ AMOLED display makes it bigger than some laptops, and much nicer to look at, too — paired with the official Book Cover Keyboard. A trackpad and backlit keyboard snap magnetically into place and fold up like a laptop when not in use. Other times, I’ll opt for a mouse and a low-profile mechanical keyboard, bringing it closer to a desktop experience.
DeX, the built-in desktop mode, is Samsung’s secret sauce on the software side. Sure, you can use the Tab S10 Ultra in normal tablet mode, but with multi-monitor support and virtual desktops added in Samsung’s One UI 8.0 Android skin, DeX is now closer than ever to mimicking the functionality of a laptop without detracting from the Galaxy Tab’s strengths as a tablet.
Moreover, Samsung’s tablets are in a Goldilocks zone. No other device I own is so well-suited to both productivity and leisure. I can write articles like these with Google Docs, a web browser, and Slack open onscreen at once, and I can easily move from the desk to my bed if I want to kick back and enjoy an episode of “The Boys” using the tablet’s lavish display and shockingly good quad-firing speakers with Dolby Atmos support.
I’ve even had a blast gaming on my Galaxy Tab. Local titles like “Destiny: Rising” are a treat on the large screen, and I played through most of “Cyberpunk 2077” in the cloud through GeForce Now. It’s not the same as playing on my Windows gaming rig, but it’s remarkable that a device as thin as its own USB-C port can deliver these laptop-grade experiences.
Galaxy tablets still have some pain points compared to PCs
Not everyone can replace their laptop with a Samsung Galaxy Tab. If you’re a visual creative who often edits video, you’ll need to make do with mobile editors like LumaFusion or the endlessly buggy CapCut. That’s a non-starter for most who need pro-grade creative tools like Adobe Premiere Pro. Avid gamers are in a rough spot, too. Running mobile games on a 14.5″ tablet is ludicrous fun, but for AAA titles, you’ll have to rely on cloud streaming through a service like GeForce Now, which requires a constant Internet connection.
Even run-of-the-mill office work can be a slog if you’re not willing to adjust to Android’s limitations when figuring out how to turn your Android tablet into a laptop. If you use Microsoft Office 365 software like Word or Excel, you’ll find the Android apps are extremely limited. You’ll either need to run the web app versions in a browser or opt for something like Google Workspace, which works natively. As a freelance writer, I switched from Word to Google Docs, and I sync everything through Google Drive. That works perfectly for now, but if I want to switch to open-source solutions like LibreOffice, I’ll be back in a tough spot.
Still, these are all software limitations. The hardware itself is remarkably capable. In fact, I recently converted a decade-old laptop running an Intel Core i7 Kaby Lake processor to Fedora Linux and have been using it alongside my Galaxy Tab S10 Ultra. The laptop is no slouch, but the tablet still puts it to shame. As time goes on, you can only expect tablets to become even more performant. In the very near future, it may finally be time to redefine the boundaries between mobile and desktop hardware.
Apple CEO Tim Cook said on the company’s earnings call on Thursday that it could take “several months” to meet skyrocketing demand for the Mac Mini, the company’s compact but mighty, screen-free desktop computer. Cook’s remarks come after coders determined in recent months that the Mac Mini was the perfect machine for agentic AI tasks.
“On the Mac Mini and Mac Studio, both of these are amazing platforms for AI and agentic tools,” Cook said on the earnings call, in response to analyst questions. “And customer adoption of that is happening faster than we expected.”
The news comes amid another record-setting quarter for the company. iPhone sales came in short of expectations, even as demand for the iPhone 17 remained strong, and Apple’s subscription services business has continued to grow.
Apple faced supply constraints on both the iPhone and the Mac product line this quarter. iPhone shortages are being driven mostly by a limited supply of the advanced chips that power the phones. But as Cook made clear, at least two different factors are driving shortages in Apple’s Mac business: The rapid adoption of generative AI and unexpected demand for the company’s new, colorful, and more affordable MacBook Neo laptop.
Mac sales are typically a fraction of what iPhone sales are—$8.4 billion this quarter, compared to nearly $57 billion in sales of the iPhone—and the Mac Mini, specifically, is a fraction of that. But with the launch of OpenClaw earlier this year, an open-source AI tool, Mac Minis began flying off the shelves because they offer both enough power and a dedicated computing environment for agentic AI tasks.
Some eager customers have already been waiting months for their Mac Minis. MacRumors reported in early March that Apple had stopped selling a configuration of the computer that included 512 GB of memory. As of last week, the base model of Mac Mini was entirely sold out.
Cook, and his soon-to-be-successor John Ternus, also addressed Cook’s transition out of the CEO role later this year. Cook said on the earnings call that it’s the “right moment” to step into the executive chairman role for a “number of reasons,” including that Apple is well-positioned financially and that its upcoming product road map is “incredible.” He called Ternus a “person of remarkable character and a born leader.”
Ternus then joined the call for a minute to vouch for Cook as a business leader and to assure investors he’d take a similarly deliberate and thoughtful approach in leading the company. He, too, mentioned the company’s road map.
Both men offered scant details about this supposedly very exciting product road map, but hopefully, it includes more … road Macs.
This April, Hong Kong-based DAIMON Robotics released Daimon-Infinity, which it describes as the largest omni-modal robotic dataset for physical AI, featuring high-resolution tactile sensing and spanning a wide range of tasks, from folding laundry at home to manufacturing on factory assembly lines. The project is supported by partners across China and the globe, including Google DeepMind, Northwestern University, and the National University of Singapore.
The move signals a key strategic initiative for DAIMON, a two-and-a-half-year-old company known for its advanced tactile sensor hardware, most notably a monochromatic, vision-based tactile sensor that packs over 110,000 effective sensing units into a fingertip-sized module. Drawing on its high-resolution tactile sensing technology and a distributed out-of-lab collection network capable of generating millions of hours of data annually, DAIMON is building large-scale robot manipulation datasets that include vast amounts of tactile sensing data. To accelerate the real-world deployment of embodied AI, the company has also open-sourced 10,000 hours of its data.
Prof. Michael Yu Wang, co-founder and chief scientist at DAIMON Robotics, has pioneered the Vision-Tactile-Language-Action (VTLA) architecture, elevating touch to a modality on par with vision. DAIMON Robotics
Behind the strategy is Prof. Michael Yu Wang, DAIMON’s co-founder and chief scientist. Prof. Wang earned his PhD at Carnegie Mellon — studying manipulation under Matt Mason — and went on to found the Robotics Institute at the Hong Kong University of Science and Technology. An IEEE Fellow and former Editor-in-Chief of IEEE Transactions on Automation Science and Engineering, he has spent roughly four decades in the field. His objective is to address the tactile blind spot of robot manipulation, which today relies largely on the dominant Vision-Language-Action (VLA) model. He and his team have pioneered the Vision-Tactile-Language-Action (VTLA) architecture, elevating touch to a modality on par with vision.
We spoke with Prof. Wang about how tactile feedback aims to change dexterous manipulation, how the dataset initiative is expected to improve our understanding of robotic hands in natural environments, and where — from hotels to convenience stores in China — he sees touch-enabled robots making their first real-world inroads.
Daimon-Infinity is the world’s largest omni-modal dataset for Physical AI, featuring million-hour-scale multimodal data, ultra-high-resolution tactile feedback, data from 80+ real scenarios and 2,000+ human skills, and more. DAIMON Robotics
The Dataset Initiative
This month, DAIMON Robotics released the largest and most comprehensive robotic manipulation dataset with multiple leading academic institutions and enterprises. Why release the dataset now, rather than continuing to focus on product development? What impact will this have on the embodied intelligence industry?
DAIMON Robotics has been around for almost two and a half years. We have been committed to developing high-resolution, multimodal tactile sensing devices to perceive the interaction between a robot’s hand (particularly its fingertips) and objects. Our devices have become quite robust. They are now accepted and used by a large segment of users, including academic and research institutes as well as leading humanoid robotics companies.
As embodied AI continues to advance, the critical role of data has become clearer. Data scarcity remains a primary bottleneck in robot learning, particularly the lack of physical interaction data, which is essential for robots to operate effectively in the real world. Consequently, data quality, reliability, and cost have become major concerns in both research and commercial development.
This is exactly where DAIMON excels. Our vision-based tactile technology captures high-quality, multimodal tactile data. Beyond basic contact forces, it records deformation, slip and friction, material properties, and surface textures — enabling a comprehensive reconstruction of physical interactions. Building on our expertise in multimodal fusion, we have developed a robust data processing pipeline that seamlessly integrates tactile feedback with vision, motion trajectories, and natural language, transforming raw inputs into training-ready datasets for machine learning models.
Recognizing the industry-wide data gap, we view large-scale data collection not only as our unique competitive advantage, but as a responsibility to the broader community.
By building and open-sourcing the dataset, we aim to provide the high-quality “fuel” needed to power embodied AI, ultimately accelerating the real-world deployment of general-purpose robotic foundation models.
The robotics industry is highly competitive, and many teams have chosen to focus on data. DAIMON is releasing a large and highly comprehensive cross-embodiment, vision-based tactile multimodal robotic manipulation dataset. How were you able to achieve this?
We have a dedicated in-house team focused on expanding our capabilities, including building hardware devices and developing our own large-scale model. Although we are a relatively small company, our core tactile sensing technology and innovative data collection paradigm enable us to build large-scale datasets.
Our approach is to broaden our offering. We have built the world’s largest distributed out-of-lab data collection network. Rather than relying on centralized data factories, this lightweight and scalable system allows data to be gathered across diverse real-world environments, enabling us to generate millions of hours of data per year.
“To drive the advancement of the entire embodied AI field, we have open-sourced 10,000 hours of the dataset for the broader community.” —Prof. Michael Yu Wang, DAIMON Robotics
This dataset is being jointly developed with several institutions worldwide. What roles did they play in its development, and how will the dataset benefit their research and products?
Besides China-based teams, our partners include leading research groups from universities, such as Northwestern University and the National University of Singapore, as well as top global enterprises like Google DeepMind and China Mobile. Their decision to partner with DAIMON is a strong testament to the value of our tactile-rich dataset.
Among the companies involved are some that have already built their own models but are now incorporating tactile information. By deploying our data collection devices across research, manufacturing, and other real-world scenarios, they help us gather highly practical, application-driven data. In turn, our partners leverage the data to train models tailored to their specific use cases. Furthermore, to drive the advancement of the entire embodied AI field, we have open-sourced 10,000 hours of the dataset for the broader community.
Equipped with DAIMON’s visuotactile sensor, the gripper delicately senses contact and precisely controls force to pick up a fragile eggshell. DAIMON Robotics
From VLA to VTLA: Why Tactile Sensing Changes the Equation
The mainstream paradigm in robotics is currently the Vision-Language-Action (VLA) model, but your team has proposed a Vision-Tactile-Language-Action (VTLA) model. Why is it necessary to incorporate tactile sensing? What does it enable robots to achieve, and which tasks are likely to fail without tactile feedback?
Over these years of working to make generalist robots capable of performing manipulation tasks, especially dexterous manipulation (not just power grasping or holding an object, but manipulating objects and using tools to impart forces and motion onto parts), we have come to see these robots being used in household as well as industrial assembly settings.
It is well established that tactile information is essential for providing feedback about contact states so that robots can guide their hands and fingers to perform reliable manipulation. Without tactile sensing, robots are severely limited. They struggle to locate objects in dark environments, and without slip detection, they can easily drop fragile items like glass. Furthermore, the inability to precisely control force often leads to failed manipulation tasks or, in severe cases, physical damage. Naturally, the VLA approach needs to be enhanced to incorporate tactile information. We expanded the VLA framework to incorporate tactile data, creating the VTLA model.
An additional benefit of our tactile sensor is that it is vision-based: We capture visual images of the deformation on the fingertip surface. We capture multiple images in a time sequence that encodes contact information, from which we can infer forces and other contact states. This aligns well with the visual framework that VLA is based upon. Having tactile information in a visual image format makes it naturally suitable for integration into the VLA framework, transforming it into a VTLA system. That is the key advantage: Vision-based tactile sensors provide very high resolution at the pixel level, and this data can be incorporated into the framework, whether it is an end-to-end model or another type of architecture.
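The point above — that vision-based tactile frames arrive as ordinary images and can therefore slot into a visual policy framework — can be sketched in a few lines. The shapes, function name, and channel layout below are illustrative assumptions, not DAIMON's actual pipeline:

```python
# Minimal sketch: stack monochrome tactile frames alongside RGB camera
# frames as extra input channels, so a VLA-style policy can treat touch
# as one more visual modality (the VTLA idea described above).
import numpy as np

def build_vtla_observation(camera_frames, tactile_frames):
    """Concatenate camera and tactile image sequences along the channel axis.

    camera_frames:  (T, H, W, 3) uint8 time sequence from a scene/wrist camera
    tactile_frames: (T, H, W, 1) uint8 time sequence of fingertip deformation images
    returns:        (T, H, W, 4) float32 observation for the policy
    """
    cam = camera_frames.astype(np.float32) / 255.0
    tac = tactile_frames.astype(np.float32) / 255.0
    return np.concatenate([cam, tac], axis=-1)

# Toy example: 8 timesteps of 64x64 observations
cam = np.zeros((8, 64, 64, 3), dtype=np.uint8)
tac = np.zeros((8, 64, 64, 1), dtype=np.uint8)
obs = build_vtla_observation(cam, tac)
print(obs.shape)  # (8, 64, 64, 4)
```

Because the time sequence of deformation images encodes contact states such as slip and force, no separate sensor-fusion stage is strictly required before the data enters the visual backbone; that is the integration advantage Prof. Wang describes.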
DAIMON has been known for its vision-based tactile sensors, which pack over 110,000 effective sensing units into a fingertip-sized module. DAIMON Robotics
The Technology: Monochromatic Vision-based Tactile Sensing
You and your team have spent many years deeply engaged in vision-based tactile sensing and have developed the world’s first monochromatic vision-based tactile sensing technology. Why did you choose this technical path?
Once we started investigating tactile sensors, we understood our needs. We wanted sensors that closely mimic what we have under our fingertip skin. Physiological studies have well documented the capabilities humans have at their fingertips — knowing what we touch, what kind of material it is, how forces are distributed, and whether it is moving into the right position as our brain controls our hands. We knew that replicating these capabilities on a robot hand’s fingertips would help considerably.
When we surveyed existing technologies, we found many types, including vision-based tactile sensors with tri-color optics and other simpler designs. We decided to integrate the best of these into an engineering-robust solution that works well without being overly complicated, keeping cost, reliability, and sensitivity within a satisfactory range, thus ultimately developing a monochromatic vision-based tactile sensing technique. This is fundamentally an engineering approach rather than a purely scientific one, since a great deal of foundational research already existed. With the growing realization of the necessity of tactile data, all of this will advance hand in hand.
Last year, DAIMON launched a multi-dimensional, high-resolution, high-frequency vision-based tactile sensor. Compared with traditional tactile sensors, where does its core advantage lie? Which industries could it potentially transform?
The key features of our sensors are the density of distributed force measurement and the deformation we can capture over the area of a fingertip. I believe we have the highest density in terms of sensing units. That is one very important metric. The other is dynamics: the frequency and bandwidth — how quickly we can detect force changes, transmit signals, and process them in real time. Other important aspects are largely engineering-related, such as reliability, drift, durability of the soft surface, and resistance to interference from magnetic, optical, or environmental factors.
A growing number of researchers and companies are recognizing the importance of tactile sensing and adopting our technology. I believe the advances in tactile sensing will elevate the entire community and industry to a higher level. One of our potential customers is deploying humanoid robots in a small convenience store, with densely packed shelves where shelf space is at a premium. The robot needs to reach into very tight spaces — tighter than books on a shelf — to pick out an object. Current two-jaw parallel grippers cannot fit into most of these spaces. Observing how humans pick up objects, you clearly need at least three slim fingers to touch and roll the object toward you and secure it. Thus, we are starting to see very specific needs where tactile sensing capabilities are essential.
From Academia to Startup
After 40 years in academia — founding the HKUST Robotics Institute, earning prestigious honors including IEEE Fellow, and serving as Editor-in-Chief of IEEE TASE — what motivated you to found DAIMON Robotics?
I have come a long way. I started learning robotics during my PhD at Carnegie Mellon, where there were truly remarkable groups working on locomotion under Marc Raibert, who founded Boston Dynamics, and on manipulation under my advisor, Matt Mason, a leader in the field. We have been working on dexterous manipulation, not only at Carnegie Mellon, but globally for many years.
However, progress has been limited for a long time, especially in building dexterous hands and making them work. Only recently have locomotion robots truly taken off, and only in the last few years have we begun to see major advancements in robot hands. There is clearly room for advancing manipulation capabilities, which would enable robots to do work like humans. While at the Hong Kong University of Science and Technology, I saw more and more people entering this area as students and postdoctoral researchers. We wanted to jumpstart our effort by leveraging the available capital and talent resources.
Fortunately, one of my postdocs, Dr. Duan Jianghua, has a strong sense for commercial opportunities. Recognizing the rapid growth of the robotics market and the unique value that our vision-based tactile sensing technology could bring, together we started DAIMON Robotics, and it has progressed well. The community has grown tremendously in China, Japan, Korea, the U.S., and Europe.
Robots equipped with DAIMON technology have been deployed in factory settings. The company aims to enable robots to achieve “embodied intelligence” and close the gap between what they can see and what they can feel. DAIMON Robotics
Business Model and Commercial Strategy
What is DAIMON’s current business model and strategic focus? What role does the dataset release play in your commercial strategy?
We started as a device company focused on making highly capable tactile sensors, especially for robot hands. But as the technology and the business developed, everyone realized it is not just about one component, but rather the entire technology chain: devices, data of adequate quality and quantity, and finally the right framework to build, train, and deploy models on robots in real application environments.
Our business strategy is best described as “3D”: Devices, Data, and Deployment. We build devices for data collection, build our own data ecosystem, and deploy those devices in our partners’ potential application domains. This enables the collection of real-world, tactile-rich data and complete closed-loop validation, which will become an integral part of the 3D business model. Most startups in this space are following a similar path, until eventually some may become more specialized or more tightly integrated with other companies. For now, it is mostly vertical integration.
Embodied Skills and the Convergence Moment
You’ve introduced the concept of “embodied skills” as essential for humanoid robots to move beyond having just an advanced AI “brain.” What prompted this insight? What new capabilities could embodied skills enable? After the rapid evolution of models and hardware over the past two years, has your definition or roadmap for embodied skills evolved?
We have come a long way and now see a convergence point: electrical, electronic, and mechatronic hardware technologies have advanced tremendously over the last two decades. Robots are now fully electric and no longer require hydraulics because hardware has evolved so rapidly. Modern electronics provide tremendous bandwidth with high torques. If we can build intelligence into these systems, we can create truly humanoid robots with the ability to operate in unstructured environments, make decisions, and take actions autonomously.
“Our vision is for robots to achieve robust manipulation capabilities and evolve into reliable partners for humans.” —Prof. Michael Yu Wang, DAIMON Robotics
AI has arrived at exactly the right time. Enormous resources have been invested in AI development, especially large language models, which are now being generalized into world models that enable physical AI capabilities. We would like to see these manifested in real-world systems.
While both AI and core hardware technologies continue to evolve, the focus is much clearer now. For example, human-sized robots are preferred in a home environment. This is an exciting domain with a promise of great societal benefit if we can eventually achieve safe, reliable, and cost-effective robots.
The Road to Real-World Deployment
Today, many robots can deliver impressive demos, yet there remains a gap before they truly enter real-world applications. What could be a potential trigger for real-world deployment? Which scenarios are most likely to achieve large-scale deployment first?
I think the road toward large-scale deployment of generalist robots is still long, but we are starting to see signs of feasibility within specific domains. It is very similar to autonomous vehicles, where we have yet to see full deployment of robo-taxis, while mobile robots and smaller vehicles are already widely deployed in the hospitality industry. Virtually every major hotel in China now has a delivery robot — no arms, just a vehicle that picks up items from the hotel lobby (e.g., food deliveries). The delivery person just loads the food and selects the room number. It is up to the robot thereafter to navigate and reach the guest’s room, which includes using the elevator, to deliver the food. This is already nearly 100 percent deployed in major Chinese hotels.
Hotel and restaurant robots are viewed as a model for deploying humanoid robots in specific domains like overnight drugstores and convenience stores. I expect complete deployment in such settings within a short timeframe, followed by other applications. Overall, we can expect autonomous robots, including humanoids, to progressively penetrate specific sectors, delivering value in each and expanding into others.
Ultimately, our vision is for robots to achieve robust manipulation capabilities and evolve into reliable partners for humans. By seamlessly integrating into our homes and daily lives, they will genuinely benefit and serve humanity.
This interview has been edited for length and clarity.
Apple CEO Tim Cook made it clear that the company will reinvest any tariff refund it gets into new U.S. manufacturing initiatives, further funding domestic production.
Almost as an afterthought at the end of the earnings conference call, Cook made a big announcement. Beyond just going through the recently announced motions and filing for that tariff refund, Apple has a plan.
While there were no specifics, and no one followed up on the statement, Apple will invest what it gets back into U.S. manufacturing.
Tariffs and tariff-related costs continue to pressure results, though Apple hasn’t framed them as a dominant constraint in the March quarter. Prior disclosures show those costs remain significant, and performance indicates Apple is absorbing much of the impact instead of raising prices.
Apple is making a deliberate tradeoff to protect pricing stability and demand. Scale is helping hold volume steady even as rising costs limit margin expansion.
Tariffs are now a recurring cost line
Apple has previously disclosed tariff and tariff-related costs ranging from about $800 million in a single quarter to more than $1.4 billion as rates and volumes shifted during and after the U.S.-China trade war. Those figures include more than direct import duties, accounting for added costs tied to logistics and supply chain adjustments.
So far, Apple has committed $600 billion to domestic manufacturing. While the roughly $3 billion it expects to get back from tariffs is a small slice of that, Cook promised new projects will be funded with those refunds.
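To put the figures above in proportion, a quick back-of-envelope calculation (all values are the article's, rounded; the quarterly range is from Apple's prior disclosures):

```python
# Rough scale comparison of the tariff refund against Apple's other numbers.
refund_b = 3.0              # estimated tariff refund, $B
commitment_b = 600.0        # announced U.S. manufacturing commitment, $B
quarterly_cost_low_b = 0.8  # low end of disclosed quarterly tariff costs, $B
quarterly_cost_high_b = 1.4 # high end of disclosed quarterly tariff costs, $B

share = refund_b / commitment_b
print(f"Refund is {share:.2%} of the $600B commitment")

# The refund would offset roughly 2 to 4 quarters of tariff costs at recent run rates
quarters_low = refund_b / quarterly_cost_high_b
quarters_high = refund_b / quarterly_cost_low_b
print(f"Covers about {quarters_low:.1f} to {quarters_high:.1f} quarters of tariff costs")
```

So the refund is half a percent of the manufacturing commitment but a meaningful multiple of any single quarter's tariff bill, which is why Cook can credibly frame it as seed money for new projects.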
Tariffs have moved from a policy shock to a more predictable cost structure. Apple now treats them as an ongoing expense alongside currency shifts and component pricing.
Apple has largely absorbed those costs so far and kept pricing stable across most of its hardware while posting strong financial results. That restraint suggests the company is testing how far it can hold prices as demand for premium devices remains strong but not unlimited.
Supply chain shifts reduce risk but don’t remove pressure
Supply chain changes remain one of Apple’s main tools for managing tariff exposure, and the strategy has clear limits. Apple has expanded manufacturing outside China and increased iPhone production in India while shifting more assembly of other products to Vietnam.
These moves reduce reliance on any single region for U.S.-bound devices but don’t remove the underlying cost pressure. The shifts improve resilience but cannot match China’s scale, efficiency, and supplier concentration.
China still plays a central role in Apple’s global manufacturing footprint, particularly for high-volume and high-end production. Moving capacity at scale takes years, which constrains how quickly Apple can rebalance its supply chain even as diversification continues.
The company is turning tariffs from a one-time financial shock into a manageable ongoing cost. Apple is relying on its scale, supply chain adjustments, and financial flexibility to keep growing.
Electric vehicles are now common on the road, but charging remains one of the biggest friction points. Even when you find a fast charger, stopping can easily add 30 minutes or more to a trip, which makes long-distance travel feel less convenient compared to refueling a gas car.
At BYD’s charging facility in Beijing, the company is already demonstrating a system that aims to remove that delay. Vehicles are pulling in, plugging in, and charging using BYD’s second generation Blade Battery and flash charging setup, giving a clearer picture of how the technology works outside a controlled prototype environment.
Charging speeds are being pushed far beyond current standards
BYD’s pitch is centered on how quickly usable range can be added rather than how fast a battery can reach full charge. The company describes the experience in terms of a short stop, suggesting that a vehicle could gain a significant amount of range in the time it takes to grab a coffee.
The charging setup reflects that approach. The cable is suspended from an overhead rail instead of resting on the ground, which makes it easier to handle and allows it to move freely based on the position of the vehicle. It also supports connections from either side, which reduces the need to reposition the car in a busy charging area.
The battery is where most of the change is happening
While the charger itself draws attention, BYD is positioning the second generation Blade Battery as the core of the system. The company says the battery has been redesigned to handle higher charging speeds while addressing common bottlenecks such as heat buildup and performance in low temperatures.
According to BYD, the system can charge from 10 percent to 97 percent in around 12 minutes even at temperatures as low as minus 30 degrees Celsius. The company also states that the battery passes simultaneous nail penetration and charging tests, which are intended to simulate severe failure conditions.
How it compares to current fast charging
Most widely available fast chargers today operate at around 350 kilowatts, while some newer vehicles can reach closer to 500 kilowatts under peak conditions. Even in those cases, charging from 10 percent to 80 percent typically takes between 20 and 30 minutes.
BYD says its flash charging system can deliver up to 1,500 kilowatts through a single connector, which would place it well beyond current charging infrastructure. Under those conditions, the company claims the system can move from 10 percent to 70 percent in about five minutes and up to 97 percent in roughly nine minutes.
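A rough back-of-envelope comparison makes the gap concrete. The charge windows and durations below come from the claims above; the 100 kWh pack size and the constant-power simplification are assumptions for illustration, not BYD specifications:

```python
# Back-of-envelope: average charging power implied by the claims above,
# assuming a hypothetical 100 kWh battery pack (the pack size is an
# assumption for illustration, not a BYD specification).

PACK_KWH = 100.0  # assumed pack size

def avg_power_kw(pack_kwh: float, start_pct: float, end_pct: float,
                 minutes: float) -> float:
    """Average power needed to move between two states of charge."""
    energy_kwh = pack_kwh * (end_pct - start_pct) / 100.0
    return energy_kwh / (minutes / 60.0)

# Today's typical fast charge: 10% -> 80% in ~25 minutes
typical = avg_power_kw(PACK_KWH, 10, 80, 25)

# BYD's claim: 10% -> 70% in ~5 minutes
flash = avg_power_kw(PACK_KWH, 10, 70, 5)

print(f"typical session: ~{typical:.0f} kW average")
print(f"BYD claim:       ~{flash:.0f} kW average")
```

Even on a modest 100 kWh pack, BYD’s claimed session implies sustaining roughly four times the average power of today’s typical fast charge, while still sitting well under the 1,500 kW connector peak, leaving headroom for larger packs and the taper that real sessions exhibit.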
This is already in use, with plans to scale quickly
The system at BYD’s Beijing site is not being presented as a prototype: vehicles are already using the charging stations on site, which gives a more practical indication of how the technology performs outside a controlled demonstration environment.
BYD positions this as an early stage of deployment and says it plans to build up to 20,000 of these charging stations by the end of 2026, with the network expected to expand beyond China as part of a broader global rollout. That scale will ultimately determine whether the system remains limited to specific locations or becomes part of everyday charging infrastructure.
A new NYT Strands puzzle appears at midnight each day for your time zone – which means that some people are always playing ‘today’s game’ while others are playing ‘yesterday’s’. If you’re looking for Thursday’s puzzle instead then click here: NYT Strands hints and answers for Thursday, April 30 (game #788).
Strands is the NYT’s latest word game after the likes of Wordle, Spelling Bee and Connections – and it’s great fun. It can be difficult, though, so read on for my Strands hints.
Want more word-based fun? Then check out my NYT Connections today and Quordle today pages for hints and answers for those games, and Marc’s Wordle today page for the original viral word game.
SPOILER WARNING: Information about NYT Strands today is below, so don’t read on if you don’t want to know the answers.
NYT Strands today (game #789) – hint #1 – today’s theme
What is the theme of today’s NYT Strands?
• Today’s NYT Strands theme is… I ❤️ Hawaii
NYT Strands today (game #789) – hint #2 – clue words
Play any of these words to unlock the in-game hints system.
PUKE
PEEL
DAME
PINE
LAME
PLEAD
NYT Strands today (game #789) – hint #3 – spangram letters
How many letters are in today’s spangram?
• Spangram has 11 letters
NYT Strands today (game #789) – hint #4 – spangram position
What are two sides of the board that today’s spangram touches?
First side: left, 8th row
Last side: right, 3rd row
Right, the answers are below, so DO NOT SCROLL ANY FURTHER IF YOU DON’T WANT TO SEE THEM.
NYT Strands today (game #789) – the answers
The answers to today’s Strands, game #789, are…
POKE
HULA
LUAU
UKULELE
PINEAPPLE
MACADAMIA
SPANGRAM: ALOHASPIRIT
My rating: Easy
My score: Perfect
Happy Lei Day, Hawaiians.
Having never been to Hawaii — the closest I’ve got is a shirt I owned in the late 1990s — I feared I would struggle my way around today’s board, but the reality was that it could not have been easier.
It’s a testament to how much Hawaiian culture has permeated around the globe that I was familiar with all of today’s words, with the exception of MACADAMIA — which I did not know had a strong link to the islands (I thought they were Australian). It was also a very tricky word to piece together and took me a couple of attempts to get in the right order.
Yesterday’s NYT Strands answers (Thursday, April 30, game #788)
DRIZZLE
MIST
STEAM
VAPOR
HUMIDITY
AEROSOL
SPANGRAM: CONDENSATION
What is NYT Strands?
Strands is the NYT’s not-so-new-any-more word game, following Wordle and Connections. It’s now a fully fledged member of the NYT’s games stable, has been running for a year, and can be played on the NYT Games site on desktop or mobile.
I’ve got a full guide to how to play NYT Strands, complete with tips for solving it, so check that out if you’re struggling to beat it each day.
Capcom’s long-awaited sci-fi action game is finally landing on PC in 2026, and with several storefronts competing for your purchase, choosing where to buy it can make a meaningful difference to both what you pay and what you get alongside your copy.
Pragmata
Pragmata places players in a near-future lunar setting where the rules of physics have broken down, casting them as an armoured Cosmonaut tasked with protecting a mysterious girl named Diana who holds the key to humanity’s survival across a world of collapsing environments and digitised threats.
With that premise in mind, the first question most buyers will ask is whether to go official or shop around, and Steam answers the former convincingly, offering direct launcher integration with mod support, cloud saves, and community forums, though its pricing holds firmly at retail outside of seasonal sale windows.
Where to buy Pragmata on release
G2A.COM stands out as the ultimate destination for smart buyers who want the best balance of price and features. Unlike traditional stores that stick to a single retail price, G2A is a global marketplace where multiple independent sellers compete, often resulting in much better deals. It is widely considered one of the best places to buy digital games because it offers incredible flexibility, including various regional options and digital delivery that is both fast and secure. What truly sets G2A.COM apart is its role as a comprehensive content hub. Beyond being a place for a simple purchase, it provides users with editorials, guides, and lore pieces that help you get the most out of your game. You can track Pragmata through your wishlist and receive instant alerts when prices drop. Furthermore, G2A Plus subscribers enjoy extra discounts across a massive catalog. With a transparent seller review system and a user-friendly interface, it is easily the most feature-rich option for anyone looking for where to buy PC games online.
Epic Games Store takes a similarly official route and goes a step further by carrying both the Standard and Deluxe editions of Pragmata, with its Epic Rewards cashback programme offering a modest return on spend, though fixed publisher pricing means meaningful savings are rare without a site-wide coupon event.
For buyers who want that official retailer assurance without committing to a specific launcher ecosystem, Green Man Gaming is a reasonable middle ground, working directly with publishers to supply licensed keys and offering occasional XP-based loyalty discounts, though its prices tend to shadow Steam’s retail levels fairly closely.
Eneba operates on a marketplace model rather than a traditional storefront, which opens up more competitive pricing through independent sellers, and its refund policy for unverified keys adds a layer of reassurance for anyone cautious about third-party key sites.
Loaded takes a leaner approach still, stripping the experience back to instant key delivery with minimal friction, which suits buyers who want a fast transaction above all else, though the absence of seller feedback tools or community features makes it a bare-bones option by comparison.
Apple’s Tim Cook, left, and Meta’s Mark Zuckerberg. (Apple, Meta Photos)
Seattle Seahawks fans envisioning another tech billionaire as the new owner of the NFL team have a couple of Silicon Valley-based names to consider. Or not.
Meta founder Mark Zuckerberg and Apple CEO Tim Cook have been mentioned as potential suitors for the franchise, which was put up for sale in February by the estate of the late Microsoft co-founder Paul Allen.
Front Office Sports reported Thursday that Zuckerberg and Cook are among at least four interested parties considering making offers for the team. They are the first serious names to emerge in a sale that could fetch upwards of $7 billion.
In a post on X, Dylan Byers of Puck said the report was not true and sources close to Cook and Zuckerberg said neither was interested in bidding for the Seahawks.
Reps for all of those involved declined to comment or could not be reached, including the Paul G. Allen estate and the bank handling the sale process, Front Office Sports said.
According to Forbes, Zuckerberg is worth $222 billion and Cook $2.8 billion.
Cook announced last week that he is stepping down as Apple CEO and will become executive chairman on Sept. 1.
The plan to sell the 50-year-old franchise is part of the long process of divesting many of the assets and investments that Allen made during his lifetime and directing all proceeds to philanthropy. Since his death, Allen’s estate has steadily moved to sell major assets, including real estate holdings and, more recently, advancing the sale process for the NBA’s Portland Trail Blazers.
Allen co-founded Microsoft with childhood friend Bill Gates, and the billionaire philanthropist bought the Seahawks in 1997 for approximately $200 million from previous owner Ken Behring. The purchase secured the team’s home in Seattle after Behring threatened a move to California. Allen ran the team until his death in 2018 at the age of 65 after he was diagnosed with a recurrence of non-Hodgkin’s lymphoma.
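To put the reported figures in perspective, a quick sketch of the compound annual growth rate implied by the ~$200 million purchase and the ~$7 billion upper-end estimate. Both are round numbers and the holding period is approximate, so this is a ballpark only:

```python
# Rough compound annual growth rate (CAGR) of the franchise's value.
# Uses the reported 1997 purchase price and the upper-end sale estimate;
# both are round figures, so treat the result as a ballpark.

def cagr(start_value: float, end_value: float, years: float) -> float:
    """Compound annual growth rate as a fraction (0.10 == 10% per year)."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

purchase_1997 = 200e6   # ~$200 million
estimate_sale = 7e9     # "upwards of $7 billion"
years_held = 2026 - 1997

rate = cagr(purchase_1997, estimate_sale, years_held)
print(f"~{estimate_sale / purchase_1997:.0f}x over {years_held} years, "
      f"roughly {rate * 100:.0f}% per year")
```

A 35x return over roughly three decades works out to appreciation in the low teens per year, compounded, which helps explain why NFL franchises keep attracting billionaire bidders.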
Gates has previously expressed no interest in owning the team, but the list of those who could includes many who made their fortunes in tech, from former Microsoft CEO Steve Ballmer to Amazon founder Jeff Bezos.
A new study from Harvard Medical School and Beth Israel Deaconess found that an OpenAI reasoning model outperformed experienced ER doctors at diagnosing and managing patient cases using messy, real-world emergency department records. Researchers say the results don’t support replacing doctors, but they do suggest AI could meaningfully reshape clinical workflows if tested carefully in prospective trials. NPR reports: The researchers ran a series of experiments on the AI model to test its clinical acumen — including actual cases like the lupus patient who’d been previously treated at the emergency department at Beth Israel in Boston. The team graded how well the AI model could provide an accurate diagnosis at three moments in time, from the triage stage in the ER, up to being admitted into the hospital. Overall, AI outperformed two experienced physicians — and did so with only the electronic health records and the limited information that had been available to the physicians at the time. “This is the big conclusion for me — it works with the messy real-world data of the emergency department,” said Dr. Adam Rodman, a clinical researcher at Beth Israel and one of the study authors. “It works for making diagnoses in the real world.”
Other parts of the study focused on case reports published in the New England Journal of Medicine and clinical vignettes to suss out whether the AI model could meet well-established “benchmarks” and game out thorny diagnostic questions. “The model outperformed our very large physician baseline,” said Raj Manrai, assistant professor of Biomedical Informatics at Harvard Medical School who was also part of the study. The authors emphasize the AI relied on text alone, while in real life, clinicians need to attend to many other inputs like images, sounds and nonverbal cues when diagnosing and treating a patient. The findings were published Thursday in the journal Science.
This brings Legora’s valuation just a tad closer to Harvey’s, which reached $11 billion last month when Sequoia tripled down on its investment. Andreessen Horowitz, Coatue, Conviction Partners, Elad Gil, Matt Miller’s Evantic, and Kleiner Perkins also participated in that round.
Legora, too, is backed by high-profile VCs, but it puts even more emphasis on the big names it secured as clients, such as Bird & Bird, Cleary Gottlieb, and Linklaters. According to the company, the platform it launched only 18 months ago is now used by more than 1,000 law firms and in-house legal teams across 50 markets.
Harvey has game in that area too. It claims 100,000 lawyers across 1,300 organizations as customers, ranging from global law firms like Hengeler Mueller and Latham & Watkins to corporate legal teams at companies like T-Mobile and Bridgewater.
With global leadership as the end goal, Harvey and Legora intend to play out their rivalry on each other’s home turf. Legora has opened multiple offices around the world, with the U.S. a key focus for its expansion. Conversely, Harvey is pushing into Europe.
With plenty of capital to spend on both sides, that battle has moved to mindshare. Not long after Winston Weinberg’s company Harvey signed a brand partnership with actor Gabriel Macht, who plays a high-powered lawyer in the TV series “Suits,” Legora launched an advertising campaign featuring movie star Jude Law under the slogan “Law just got more attractive.”
Both companies may be right to bet heavily on marketing. Rivalry aside, they are built on top of large language models made by AI giants that could well become their competitors. When Anthropic launched a legal plug-in for Claude not long ago, several publicly listed legal software companies saw their stocks drop.
Legora CEO Max Junestrand says he isn’t concerned.
“Foundation models are improving quickly, but the real value is in how they’re applied,” he wrote in a statement. The statement also shows how the startup instills FOMO among its target users, asserting that “the legal teams that embed AI effectively today will shape how the industry evolves.”
NVentures’ investment is also a signal that Legora might have enough of a moat to protect it from the model makers, and from its bigger rival.