Tech

Tariff-driven costs boost Apple’s domestic manufacturing

Apple CEO Tim Cook made it clear that the company will reinvest any tariff refund it receives into new U.S. manufacturing initiatives, further funding domestic production.

Almost as an afterthought at the end of the earnings conference call, Cook made a big announcement. Beyond simply filing for the recently announced tariff refund, Apple has a plan for the money.

While there were no specifics, and no analyst followed up on the statement, Apple will invest whatever it gets back into U.S. manufacturing.

Tariffs and tariff-related costs continue to pressure results, though Apple hasn’t framed them as a dominant constraint in the March quarter. Prior disclosures show those costs remain significant, and performance indicates Apple is absorbing much of the impact instead of raising prices.

Apple is making a deliberate tradeoff to protect pricing stability and demand. Scale is helping hold volume steady even as rising costs limit margin expansion.

Tariffs are now a recurring cost line

Apple has previously disclosed tariff and tariff-related costs ranging from about $800 million in a single quarter to more than $1.4 billion as rates and volumes shifted during and after the U.S.-China trade war. The figures include more than direct import duties; they also account for added costs tied to logistics and supply chain adjustments.

So far, Apple has committed $600 billion to domestic manufacturing. While the roughly $3 billion it will get back from tariffs is a small slice of that, Cook promised that new projects will be funded with those refunds.

Tariffs have moved from a policy shock to a more predictable cost structure. Apple now treats them as an ongoing expense alongside currency shifts and component pricing.

Apple has largely absorbed those costs so far and kept pricing stable across most of its hardware while posting strong financial results. That restraint suggests the company is testing how far it can hold prices as demand for premium devices remains strong but not unlimited.

Supply chain shifts reduce risk but don’t remove pressure

Supply chain changes remain one of Apple’s main tools for managing tariff exposure, though the strategy has clear limits. Apple has expanded manufacturing outside China and increased iPhone production in India while shifting more assembly of other products to Vietnam.

These moves reduce reliance on any single region for U.S.-bound devices but don’t remove the underlying cost pressure. The shifts improve resilience, yet the new locations cannot match China’s scale, efficiency, and supplier concentration.

China still plays a central role in Apple’s global manufacturing footprint, particularly for high-volume and high-end production. Moving capacity at scale takes years, which constrains how quickly Apple can rebalance its supply chain even as diversification continues.

The company is turning tariffs from a one-time financial shock into a manageable ongoing cost. Apple is relying on its scale, supply chain adjustments, and financial flexibility to keep growing.

How the Kindle Paperwhite Makes Every Page Turn Feel Natural

Kindle Paperwhite Gen 12
Carrying a Kindle Paperwhite, priced at $134.99 (was $159.99), every day is a constant reminder of what makes this device a must-have for bookworms. The e-reader weighs only 7 oz, so light it practically floats, and it slips into a pocket or bag almost unnoticed.



To kick things off, the display recreates the feel of a printed page remarkably well; side by side with a regular book, the difference is night and day. The new model has 300 pixels per inch, outperforming its predecessor and rendering text crystal clear even in direct sunlight, a lifesaver for readers like myself who love to read outside. Finally, the device’s backlight works well: it automatically adjusts to the lighting in the room, and a night-time feature shifts the screen to a warmer tone for late reading.

Pages turn 25% faster than on the previous version, and menus appear instantly. Overall responsiveness is up about 20% as well, which is fairly astonishing; avid readers won’t be bored or frustrated waiting for a page to turn. Touch response is exceptionally quick too, so it never pulls you out of the story. The battery delivers up to 12 weeks of reading at 30 minutes per day with Wi-Fi and the light off, and a full recharge takes just 2.5 hours.


Among other advantages, IPX8 waterproofing means you can use it in the rain or in the bath without damage. It can withstand immersion in 2 meters of fresh water for an hour, so feel free to take it to the pool or beach. The Paperwhite also offers plenty of storage: 16 GB of built-in memory holds thousands of books. A Wi-Fi connection gets you into your library in minutes, and if you want to listen to an audiobook, simply turn on Bluetooth.

MAGA Is Confused About ‘Animal Farm’

If you read George Orwell’s classic political satire Animal Farm in seventh grade, you probably remember the basic contours of the plot: fed up with human rule, a group of well-intentioned barnyard animals set up their own egalitarian society, with disastrous results. Published in 1945, Animal Farm has a timeless (and, certainly, contemporarily relevant) message: It’s about how the impulse to retain power will always come at the expense of our basic morality.

That message, however, seems to have been lost on most MAGA influencers assigned the book in middle school (if they even read it at all). After their failure to cancel Barbie or the Wicked movies, conservatives have moved on to a new film adaptation of Animal Farm. (The animated film, which is directed by Lord of the Rings star Andy Serkis, opens May 1).

The problem, however, is that they’ve failed to reach a consensus on what the actual message of Animal Farm is.

The right-wing outrage cycle over a movie featuring Seth Rogen making fart jokes appears to have been sparked by influencers like Emily Saves America and Riley Gaines, who recently posted the trailer for the film. In an April 28 X post, Gaines tweeted that the film was “incredibly well done. They do a perfect job of reminding viewers that Marxism always has and always will fail.” She hashtagged her tweet #AnimalFarmPartner, leading people to assume the post had been the result of a paid partnership between herself and Angel Studios, the Utah-based entertainment company distributing the film, which was also behind the faith-based blockbusters Sound of Freedom and The King of Kings.

Many on both the left and the right found Gaines’ tweet bizarre, in part because while Animal Farm is certainly a critique of Stalinism, it’s also very clearly not a full-throated endorsement of capitalist ideals. The human owner of the farm is a capitalist, and after he is overthrown, the power-hungry pigs mimic his behaviors, adopting human clothes and profiting off the labor of the other farm animals. The book is ultimately less a condemnation of specific systems of governance than a critique of mankind’s lust for power and blind adherence to ideology.

In the latest adaptation, Serkis also tweaked the plot by adding a greedy human character (voiced by Glenn Close) who wants to buy the farm, characterizing the film in USA Today as “about authoritarianism and power corrupting and our response to that”—a message that, in theory at least, would certainly resonate with 2026 audiences.

It clearly did not, however, resonate with many of Gaines’ ideological bedfellows, who pounced on her for being a Marxist shill. “Promoting communism is the new gay for pay,” right-wing podcaster Tim Pool tweeted. Earlier this month, he posted that he had turned down an offer from Angel Studios to promote the film due to it being “pro communism and anti-capitalism.” The influencer Peachy Keenan also excoriated the film, calling it “retarded socialist propaganda.”

The inability to reach a consensus on the actual message of the new Animal Farm movie may very well be a reflection of its artistic merits, or lack thereof. (Indeed, the film currently has a 23 percent rating on Rotten Tomatoes.) But it’s also just generally a reflection of how little media literacy exists in our current information landscape—an issue that, in fairness, is far from specific to the right. Unless the moral messaging of a work of fiction is clearly and consistently telegraphed throughout, there seems to be a complete inability to accept ambiguity or contradiction, or to acknowledge that multiple ideas can be good or bad at the same time.

Though middle schoolers might be able to immediately grasp the takeaways from Animal Farm, it says something that high-profile political commentators can’t. In fairness, Orwell himself, who has been claimed by both the right and the left during his lifetime and beyond, probably would have appreciated the confusion his novel has wrought—even if he may not have appreciated Seth Rogen’s fart jokes.

Can A Samsung Tablet Replace A Laptop?

The gap between a laptop and a tablet has never been smaller. In 2026, Apple is out here selling a laptop with a smartphone processor in it — the MacBook Neo — while also selling the iPad Pro, a tablet with a PC chip at its heart. The A18 Pro chip at the core of the Neo, the same chip which powered the iPhone 16 Pro, is more powerful than an Intel Core i7 processor from a decade ago. But while the iPad Pro may run laps around many laptops in sheer performance, what if you don’t want to be locked to the Apple ecosystem? Can a Samsung Galaxy Tab tablet replace your laptop?

No beating around the bush: a high-end Samsung tablet can easily replace a laptop for a large number of people. For the past two years, I’ve replaced my Windows laptop with a Samsung tablet. When I first embarked on what was, at the time, a bold experiment in my personal computing habits, I wrote that I was shocked at how capable Samsung tablets were but that there were still a few quirks. In 2026, I wouldn’t say all the kinks have been ironed out, but there are fewer of them with easier workarounds.

Crucially, I’m using a Galaxy Tab S10 Ultra, a beast of a tablet with a 14.5″ display and a MediaTek Dimensity 9300+ processor. I would strongly caution against trying to replace your laptop with one of Samsung’s budget tablets, as it simply won’t have the necessary power or display real estate. So, if you’re wondering whether your next computer should be a Samsung tablet, here are the pros and cons I’ve found after making the switch.

Samsung’s tablets are more laptop than ever

With the right accessories, Samsung’s most powerful tablets are easy laptop replacements. My current setup uses a Galaxy Tab S10 Ultra, whose 14.5″ AMOLED display makes it bigger than some laptops, and much nicer to look at, too, paired with the official Book Cover Keyboard. The trackpad and backlit keyboard snap magnetically into place and fold up like a laptop when not in use. Other times, I’ll opt for a mouse and a low-profile mechanical keyboard, bringing it closer to a desktop experience.

DeX, the built-in desktop mode, is Samsung’s secret sauce on the software side. Sure, you can use the Tab S10 Ultra in normal tablet mode, but with multi-monitor support and virtual desktops added in Samsung’s One UI 8.0 Android skin, DeX is now closer than ever to mimicking the functionality of a laptop without detracting from the Galaxy Tab’s strengths as a tablet.

Moreover, Samsung’s tablets are in a Goldilocks zone. No other device I own is so well-suited to both productivity and leisure. I can write articles like these with Google Docs, a web browser, and Slack open onscreen at once, and I can easily move from the desk to my bed if I want to kick back and enjoy an episode of “The Boys” using the tablet’s lavish display and shockingly good quad-firing speakers with Dolby Atmos support.

I’ve even had a blast gaming on my Galaxy Tab. Local titles like “Destiny: Rising” are a treat on the large screen, and I played through most of “Cyberpunk 2077” in the cloud through GeForce Now. It’s not the same as playing on my Windows gaming rig, but it’s remarkable that a device as thin as its own USB-C port can deliver these laptop-grade experiences.

Galaxy tablets still have some pain points compared to PCs

Not everyone can replace their laptop with a Samsung Galaxy Tab. If you’re a visual creative who often edits video, you’ll need to make do with mobile editors like LumaFusion or the endlessly buggy CapCut. That’s a non-starter for most who need pro-grade creative tools like Adobe Premiere Pro. Avid gamers are in a rough spot, too. Running mobile games on a 14.5″ tablet is ludicrous fun, but for AAA titles, you’ll have to rely on cloud streaming through a service like GeForce Now, which requires a constant Internet connection.

Even run-of-the-mill office work can be a slog if you’re not willing to adjust to Android’s limitations when turning your Android tablet into a laptop. If you use Microsoft Office 365 software like Word or Excel, you’ll find the Android apps are extremely limited. You’ll either need to run the web app versions in a browser or opt for something like Google Workspace, which works natively. As a freelance writer, I switched from Word to Google Docs, and I sync everything through Google Drive. That works perfectly for now, but if I ever want to switch to open-source solutions like LibreOffice, I’ll be back in a tough spot.

Still, these are all software limitations. The hardware itself is remarkably capable. In fact, I recently converted a decade-old laptop running an Intel Core i7 Kaby Lake processor to Fedora Linux and have been using it alongside my Galaxy Tab S10 Ultra. The laptop is no slouch, but the tablet still puts it to shame. As time goes on, you can only expect tablets to become even more performant. In the very near future, it may finally be time to redefine the boundaries between mobile and desktop hardware.

Good Luck Getting a Mac Mini for the Next ‘Several Months’

Apple CEO Tim Cook said on the company’s earnings call on Thursday that it could take “several months” to meet skyrocketing demand for the Mac Mini, the company’s compact but mighty, screen-free desktop computer. Cook’s remarks come after coders determined in recent months that the Mac Mini was the perfect machine for agentic AI tasks.

“On the Mac Mini and Mac Studio, both of these are amazing platforms for AI and agentic tools,” Cook said on the earnings call, in response to analyst questions. “And customer adoption of that is happening faster than we expected.”

The news comes amid another record-setting quarter for the company. iPhone sales came up short of expectations, though demand for the iPhone 17 has been super high, and Apple’s subscription services business has continued to grow.

Apple faced supply constraints on both the iPhone and the Mac product line this quarter. iPhone shortages are being driven mostly by a limited supply of the advanced chips that power the phones. But as Cook made clear, at least two different factors are driving shortages in Apple’s Mac business: The rapid adoption of generative AI and unexpected demand for the company’s new, colorful, and more affordable MacBook Neo laptop.

Mac sales are typically a fraction of what iPhone sales are—$8.4 billion this quarter, compared to nearly $57 billion in sales of the iPhone—and the Mac Mini, specifically, is a fraction of that. But with the launch of OpenClaw earlier this year, an open-source AI tool, Mac Minis began flying off the shelves because they offer both enough power and a dedicated computing environment for agentic AI tasks.

Some eager customers have already been waiting months for their Mac Minis. MacRumors reported in early March that Apple had stopped selling a configuration of the computer that included 512 GB of storage. As of last week, the base model of the Mac Mini was entirely sold out.

Cook and his soon-to-be successor, John Ternus, also addressed Cook’s transition out of the CEO role later this year. Cook said on the earnings call that it’s the “right moment” to step into the executive chairman role for a “number of reasons,” including that Apple is well-positioned financially and that its upcoming product road map is “incredible.” He called Ternus a “person of remarkable character and a born leader.”

Ternus then joined the call for a minute to vouch for Cook as a business leader and to assure investors he’d take a similarly deliberate and thoughtful approach in leading the company. He, too, mentioned the company’s road map.

Both men were scant on details around this supposedly very exciting product road map, but hopefully, it includes more … road Macs.

DAIMON Robotics Wants to Give Robot Hands a Sense of Touch

This article is brought to you by DAIMON Robotics.

This April, Hong Kong-based DAIMON Robotics released Daimon-Infinity, which it describes as the largest omni-modal robotic dataset for physical AI, featuring high-resolution tactile sensing and spanning a wide range of tasks, from folding laundry at home to manufacturing on factory assembly lines. The project is supported by the collaborative efforts of partners across China and the globe, including Google DeepMind, Northwestern University, and the National University of Singapore.

The move signals a key strategic initiative for DAIMON, a two-and-a-half-year-old company known for its advanced tactile sensor hardware, most notably a monochromatic, vision-based tactile sensor that packs over 110,000 effective sensing units into a fingertip-sized module. Drawing on its high-resolution tactile sensing technology and a distributed out-of-lab collection network capable of generating millions of hours of data annually, DAIMON is building large-scale robot manipulation datasets that include vast amounts of tactile sensing data. To accelerate the real-world deployment of embodied AI, the company has also open-sourced 10,000 hours of its data.

Prof. Michael Yu Wang, co-founder and chief scientist at DAIMON Robotics, has pioneered the Vision-Tactile-Language-Action (VTLA) architecture, elevating touch to a modality on par with vision. DAIMON Robotics

Behind the strategy is Prof. Michael Yu Wang, DAIMON’s co-founder and chief scientist. Prof. Wang earned his PhD at Carnegie Mellon — studying manipulation under Matt Mason — and went on to found the Robotics Institute at the Hong Kong University of Science and Technology. An IEEE Fellow and former Editor-in-Chief of IEEE Transactions on Automation Science and Engineering, he has spent roughly four decades in the field. His objective is to address the missing sense of touch in robot manipulation, which today relies on the dominant Vision-Language-Action (VLA) model. He and his team have pioneered the Vision-Tactile-Language-Action (VTLA) architecture, elevating touch to a modality on par with vision.

We spoke with Prof. Wang about how tactile feedback aims to change dexterous manipulation, how the dataset initiative is foreseen to improve our understanding of robotic hands in natural environments, and where — from hotels to convenience stores in China — he sees touch-enabled robots making their first real-world inroads.

Daimon-Infinity is the world’s largest omni-modal dataset for physical AI, featuring million-hour-scale multimodal data, ultra-high-resolution tactile feedback, data from 80+ real scenarios and 2,000+ human skills, and more. DAIMON Robotics

The Dataset Initiative

This month, DAIMON Robotics released the largest and most comprehensive robotic manipulation dataset together with multiple leading academic institutions and enterprises. Why release the dataset now, rather than continue to focus on product development? What impact will this have on the embodied intelligence industry?

DAIMON Robotics has been around for almost two and a half years. We have been committed to developing high-resolution, multimodal tactile sensing devices to perceive the interaction between a robot’s hand (particularly its fingertips) and objects. Our devices have become quite robust. They are now accepted and used by a large segment of users, including academic and research institutes as well as leading humanoid robotics companies.

As embodied AI continues to advance, the critical role of data has become clearer. Data scarcity remains a primary bottleneck in robot learning, particularly the lack of physical interaction data, which is essential for robots to operate effectively in the real world. Consequently, data quality, reliability, and cost have become major concerns in both research and commercial development.

This is exactly where DAIMON excels. Our vision-based tactile technology captures high-quality, multimodal tactile data. Beyond basic contact forces, it records deformation, slip and friction, material properties, and surface textures — enabling a comprehensive reconstruction of physical interactions. Building on our expertise in multimodal fusion, we have developed a robust data processing pipeline that seamlessly integrates tactile feedback with vision, motion trajectories, and natural language, transforming raw inputs into training-ready datasets for machine learning models.

Recognizing the industry-wide data gap, we view large-scale data collection not only as our unique competitive advantage, but as a responsibility to the broader community.

By building and open-sourcing the dataset, we aim to provide the high-quality “fuel” needed to power embodied AI, ultimately accelerating the real-world deployment of general-purpose robotic foundation models.

The robotics industry is highly competitive, and many teams have chosen to focus on data. DAIMON is releasing a large and highly comprehensive cross-embodiment, vision-based tactile multimodal robotic manipulation dataset. How were you able to achieve this?

We have a dedicated in-house team focused on expanding our capabilities, including building hardware devices and developing our own large-scale model. Although we are a relatively small company, our core tactile sensing technology and innovative data collection paradigm enable us to build large-scale datasets.

Our approach is to broaden our offering. We have built the world’s largest distributed out-of-lab data collection network. Rather than relying on centralized data factories, this lightweight and scalable system allows data to be gathered across diverse real-world environments, enabling us to generate millions of hours of data per year.

“To drive the advancement of the entire embodied AI field, we have open-sourced 10,000 hours of the dataset for the broader community.” —Prof. Michael Yu Wang, DAIMON Robotics

This dataset is being jointly developed with several institutions worldwide. What roles did they play in its development, and how will the dataset benefit their research and products?

Besides China based teams, our partners include leading research groups from universities, such as Northwestern University and the National University of Singapore, as well as top global enterprises like Google DeepMind and China Mobile. Their decision to partner with DAIMON is a strong testament to the value of our tactile-rich dataset.

Some of the companies involved have already built their own models and are now incorporating tactile information. By deploying our data collection devices across research, manufacturing, and other real-world scenarios, they help us gather highly practical, application-driven data. In turn, our partners leverage the data to train models tailored to their specific use cases. Furthermore, to drive the advancement of the entire embodied AI field, we have open-sourced 10,000 hours of the dataset for the broader community.

Equipped with DAIMON’s visuotactile sensor, the gripper delicately senses contact and precisely controls force to pick up a fragile eggshell. DAIMON Robotics

From VLA to VTLA: Why Tactile Sensing Changes the Equation

The mainstream paradigm in robotics is currently the Vision-Language-Action (VLA) model, but your team has proposed a Vision-Tactile-Language-Action (VTLA) model. Why is it necessary to incorporate tactile sensing? What does it enable robots to achieve, and which tasks are likely to fail without tactile feedback?

Over these years of working to make generalist robots capable of performing manipulation tasks, especially dexterous manipulation — not just power grasping or holding an object, but manipulating objects and using tools to impart forces and motion onto parts — we see these robots being used in household as well as industrial assembly settings.

It is well established that tactile information is essential for providing feedback about contact states so that robots can guide their hands and fingers to perform reliable manipulation. Without tactile sensing, robots are severely limited. They struggle to locate objects in dark environments, and without slip detection, they can easily drop fragile items like glass. Furthermore, the inability to precisely control force often leads to failed manipulation tasks or, in severe cases, physical damage. Naturally, the VLA approach needs to be enhanced to incorporate tactile information. We expanded the VLA framework to incorporate tactile data, creating the VTLA model.

An additional benefit of our tactile sensor is that it is vision-based: We capture visual images of the deformation on the fingertip surface. We capture multiple images in a time sequence that encodes contact information, from which we can infer forces and other contact states. This aligns well with the visual framework that VLA is based upon. Having tactile information in a visual image format makes it naturally suitable for integration into the VLA framework, transforming it into a VTLA system. That is the key advantage: Vision-based tactile sensors provide very high resolution at the pixel level, and this data can be incorporated into the framework, whether it is an end-to-end model or another type of architecture.

DAIMON has been known for its vision-based tactile sensors, which can pack over 110,000 effective sensing units into a fingertip-sized module. DAIMON Robotics

The Technology: Monochromatic Vision-based Tactile Sensing

You and your team have spent many years deeply engaged in vision-based tactile sensing and have developed the world’s first monochromatic vision-based tactile sensing technology. Why did you choose this technical path?

Once we started investigating tactile sensors, we understood our needs. We wanted sensors that closely mimic what we have under our fingertip skin. Physiological studies have well documented the capabilities humans have at their fingertips — knowing what we touch, what kind of material it is, how forces are distributed, and whether it is moving into the right position as our brain controls our hands. We knew that replicating these capabilities on a robot hand’s fingertips would help considerably.

When we surveyed existing technologies, we found many types, including vision-based tactile sensors with tri-color optics and other simpler designs. We decided to integrate the best of these into an engineering-robust solution that works well without being overly complicated, keeping cost, reliability, and sensitivity within a satisfactory range, thus ultimately developing a monochromatic vision-based tactile sensing technique. This is fundamentally an engineering approach rather than a purely scientific one, since a great deal of foundational research already existed. With the growing realization of the necessity of tactile data, all of this will advance hand in hand.

The DAIMON vision-based tactile sensor captures high-quality, multimodal tactile data, including force, geometry, material, and contact information. DAIMON Robotics

Last year, DAIMON launched a multi-dimensional, high-resolution, high-frequency vision-based tactile sensor. Compared with traditional tactile sensors, where does its core advantage lie? Which industries could it potentially transform?

The key features of our sensors are the density of distributed force measurement and the deformation we can capture over the area of a fingertip. I believe we have the highest density in terms of sensing units. That is one very important metric. The other is dynamics: the frequency and bandwidth — how quickly we can detect force changes, transmit signals, and process them in real time. Other important aspects are largely engineering-related, such as reliability, drift, durability of the soft surface, and resistance to interference from magnetic, optical, or environmental factors.

A growing number of researchers and companies are recognizing the importance of tactile sensing and adopting our technology. I believe the advances in tactile sensing will elevate the entire community and industry to a higher level. One of our potential customers is deploying humanoid robots in a small convenience store, with densely packed shelves where shelf space is at a premium. The robot needs to reach into very tight spaces — tighter than books on a shelf — to pick out an object. Current two-jaw parallel grippers cannot fit into most of these spaces. Observing how humans pick up objects, you clearly need at least three slim fingers to touch and roll the object toward you and secure it. Thus, we are starting to see very specific needs where tactile sensing capabilities are essential.

From Academia to Startup

After 40 years in academia — founding the HKUST Robotics Institute, earning prestigious honors including IEEE Fellow, and serving as Editor-in-Chief of IEEE TASE — what motivated you to found DAIMON Robotics?

I have come a long way. I started learning robotics during my PhD at Carnegie Mellon, where there were truly remarkable groups working on locomotion under Marc Raibert, who founded Boston Dynamics, and on manipulation under my advisor, Matt Mason, a leader in the field. We have been working on dexterous manipulation, not only at Carnegie Mellon, but globally for many years.

However, progress has been limited for a long time, especially in building dexterous hands and making them work. Only recently have locomotion robots truly taken off, and only in the last few years have we begun to see major advancements in robot hands. There is clearly room for advancing manipulation capabilities, which would enable robots to do work like humans. While at the Hong Kong University of Science and Technology, I saw more and more people entering this area as students and postdoctoral researchers. We wanted to jumpstart our effort by leveraging the available capital and talent resources.

Fortunately, one of my postdocs, Dr. Duan Jianghua, has a strong sense for commercial opportunities. Recognizing the rapid growth of the robotics market and the unique value that our vision-based tactile sensing technology could bring, together we started DAIMON Robotics, and it has progressed well. The community has grown tremendously in China, Japan, Korea, the U.S., and Europe.

Humanoid robots assembling electronics on an automated factory production line. Robots equipped with DAIMON technology have been deployed in factory settings. The company aims to enable robots to achieve “embodied intelligence” and close the gap between what they can see and what they can feel. Credit: DAIMON Robotics

Business Model and Commercial Strategy

What is DAIMON’s current business model and strategic focus? What role does the dataset release play in your commercial strategy?

We started as a device company focused on making highly capable tactile sensors, especially for robot hands. But as the technology and business developed, everyone realized it is not just about one component but about the entire technology chain: devices, data of adequate quality and quantity, and finally the right framework to build, train, and deploy models on robots in real application environments.

Our business strategy is best described as “3D”: Devices, Data, and Deployment. We build devices for data collection, both for our own ecosystem and for deployment in our partners’ potential application domains. This enables the collection of real-world tactile-rich data and complete closed-loop validation, and it will become an integral part of the 3D business model. Most startups in this space are following a similar path until eventually some may become more specialized or more tightly integrated with other companies. For now, it is mostly vertical integration.

Embodied Skills and the Convergence Moment

You’ve introduced the concept of “embodied skills” as essential for humanoid robots to move beyond having just an advanced AI “brain.” What prompted this insight? What new capabilities could embodied skills enable? After the rapid evolution of models and hardware over the past two years, has your definition or roadmap for embodied skills evolved?

We have come a long way, and we now see a convergence point: electrical, electronic, and mechatronic hardware technologies have advanced tremendously over the last two decades. Robots are now fully electric and no longer require hydraulics because the hardware has evolved so rapidly. Modern electronics provide tremendous bandwidth along with high torque. If we can build intelligence into these systems, we can create truly humanoid robots with the ability to operate in unstructured environments, make decisions, and take actions autonomously.

“Our vision is for robots to achieve robust manipulation capabilities and evolve into reliable partners for humans.” —Prof. Michael Yu Wang, DAIMON Robotics

AI has arrived at exactly the right time. Enormous resources have been invested in AI development, especially large language models, which are now being generalized into world models that enable physical AI capabilities. We would like to see these manifested in real-world systems.

While both AI and core hardware technologies continue to evolve, the focus is much clearer now. For example, human-sized robots are preferred in a home environment. This is an exciting domain with a promise of great societal benefit if we can eventually achieve safe, reliable, and cost-effective robots.

The Road to Real-World Deployment

Today, many robots can deliver impressive demos, yet there remains a gap before they truly enter real-world applications. What could be a potential trigger for real-world deployment? Which scenarios are most likely to achieve large-scale deployment first?

I think the road toward large-scale deployment of generalist robots is still long, but we are starting to see signs of feasibility within specific domains. It is very similar to autonomous vehicles, where we are yet to see full deployment of robo-taxis, while we have already started to find mobile robots and smaller vehicles widely deployed in the hospitality industry. Virtually every major hotel in China now has a delivery robot — no arms, just a vehicle that picks up items from the hotel lobby (e.g., food deliveries). The delivery person just loads the food and selects the room number. It is up to the robot thereafter to navigate and reach the guest’s room, which includes using the elevator, to deliver the food. This is already nearly 100 percent deployed in major Chinese hotels.

Hotel and restaurant robots are viewed as a model for deploying humanoid robots in specific domains like overnight drugstores and convenience stores. I expect complete deployment in such settings within a short timeframe, followed by other applications. Overall, we can expect autonomous robots, including humanoids, to progressively penetrate specific sectors, delivering value in each and expanding into others.

Ultimately, our vision is for robots to achieve robust manipulation capabilities and evolve into reliable partners for humans. By seamlessly integrating into our homes and daily lives, they will genuinely benefit and serve humanity.

This interview has been edited for length and clarity.

BYD might have just solved the worst part of owning an EV

Electric vehicles are now common on the road, but charging remains one of the biggest friction points. Even when you find a fast charger, stopping can easily add 30 minutes or more to a trip, which makes long-distance travel feel less convenient than refueling a gas car.

At BYD’s charging facility in Beijing, the company is already demonstrating a system that aims to remove that delay. Vehicles are pulling in, plugging in, and charging using BYD’s second-generation Blade Battery and flash charging setup, giving a clearer picture of how the technology works outside a controlled prototype environment.

Charging speeds are being pushed far beyond current standards

BYD’s pitch is centered on how quickly usable range can be added rather than how fast a battery can reach full charge. The company describes the experience in terms of a short stop, suggesting that a vehicle could gain a significant amount of range in the time it takes to grab a coffee.

The charging setup reflects that approach. The cable is suspended from an overhead rail instead of resting on the ground, which makes it easier to handle and allows it to move freely based on the position of the vehicle. It also supports connections from either side, which reduces the need to reposition the car in a busy charging area.

The battery is where most of the change is happening

While the charger itself draws attention, BYD is positioning the second-generation Blade Battery as the core of the system. The company says the battery has been redesigned to handle higher charging speeds while addressing common bottlenecks such as heat buildup and performance in low temperatures.

According to BYD, the system can charge from 10 percent to 97 percent in around 12 minutes even at temperatures as low as minus 30 degrees Celsius. The company also states that the battery passes simultaneous nail penetration and charging tests, which are intended to simulate severe failure conditions.

How it compares to current fast charging

Most widely available fast chargers today operate at around 350 kilowatts, while some newer vehicles can reach closer to 500 kilowatts under peak conditions. Even in those cases, charging from 10 percent to 80 percent typically takes between 20 and 30 minutes.

BYD says its flash charging system can deliver up to 1,500 kilowatts through a single connector, which would place it well beyond current charging infrastructure. Under those conditions, the company claims the system can move from 10 percent to 70 percent in about five minutes and up to 97 percent in roughly nine minutes.
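As a rough sanity check on those figures, the implied average charging power can be worked out from the claimed percentages and times. The sketch below assumes an 80 kWh pack, a figure the article does not give, purely for illustration.

```python
# Back-of-the-envelope check of BYD's flash-charging claims.
# PACK_KWH is an assumption for illustration; BYD has not published
# an exact capacity figure in the claims described above.

def energy_added_kwh(pack_kwh: float, start_pct: float, end_pct: float) -> float:
    """Energy (kWh) needed to move a pack from start_pct to end_pct."""
    return pack_kwh * (end_pct - start_pct) / 100.0

def avg_power_kw(energy_kwh: float, minutes: float) -> float:
    """Average charging power (kW) implied by delivering energy_kwh in `minutes`."""
    return energy_kwh / (minutes / 60.0)

PACK_KWH = 80.0  # assumed mid-size EV pack

# Claim: 10 percent -> 70 percent in about five minutes.
e = energy_added_kwh(PACK_KWH, 10, 70)  # 48.0 kWh
p = avg_power_kw(e, 5)                  # 576.0 kW average
print(f"{e:.0f} kWh added, {p:.0f} kW average")  # → 48 kWh added, 576 kW average
```

Under this assumed pack size, the five-minute claim implies an average of roughly 576 kW, well below the 1,500 kW headline peak; a sustained 1,500 kW would deliver the same 48 kWh in under two minutes, suggesting the power curve tapers or the pack is larger than assumed here.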

This is already in use, with plans to scale quickly

The system at BYD’s Beijing site is not being presented as a prototype, as vehicles are already using the charging stations on site, which provides a more practical indication of how the technology performs outside a controlled demonstration environment.

BYD positions this as an early stage of deployment and says it plans to build up to 20,000 of these charging stations by the end of 2026. The network is expected to expand beyond China as part of a broader global rollout, and that scale will ultimately determine whether the system remains limited to specific locations or becomes part of everyday charging infrastructure.

NYT Strands hints and answers for Friday, May 1 (game #789)

Looking for a different day?

A new NYT Strands puzzle appears at midnight each day for your time zone – which means that some people are always playing ‘today’s game’ while others are playing ‘yesterday’s’. If you’re looking for Thursday’s puzzle instead then click here: NYT Strands hints and answers for Thursday, April 30 (game #788).

Strands is the NYT’s latest word game after the likes of Wordle, Spelling Bee and Connections – and it’s great fun. It can be difficult, though, so read on for my Strands hints.

Where to buy Pragmata: The PC storefront options explained

Capcom’s long-awaited sci-fi action game is finally landing on PC in 2026, and with several storefronts competing for your purchase, choosing where to buy it can make a meaningful difference to both what you pay and what you get alongside your copy.

Pragmata

Pragmata places players in a near-future lunar setting where the rules of physics have broken down, casting them as an armoured Cosmonaut tasked with protecting a mysterious girl named Diana who holds the key to humanity’s survival across a world of collapsing environments and digitised threats.

With that premise in mind, the first question most buyers will ask is whether to go official or shop around, and Steam answers the former convincingly, offering direct launcher integration with mod support, cloud saves, and community forums, though its pricing holds firmly at retail outside of seasonal sale windows.

G2A image showing a scene from the game Pragmata

Where to buy Pragmata on release

G2A.COM stands out as a destination for buyers who want a balance of price and features. Unlike traditional stores that stick to a single retail price, G2A is a global marketplace where multiple independent sellers compete, often resulting in better deals. It offers flexibility, including various regional options and digital delivery that is both fast and secure. G2A.COM also functions as a content hub: beyond being a place for a simple purchase, it provides users with editorials, guides, and lore pieces. You can track Pragmata through your wishlist and receive alerts when prices drop. G2A Plus subscribers get extra discounts across a large catalog. With a seller review system and a user-friendly interface, it is a feature-rich option for anyone looking for where to buy PC games online.

Epic Games Store takes a similarly official route and goes a step further by carrying both the Standard and Deluxe editions of Pragmata, with its Epic Rewards cashback programme offering a modest return on spend, though fixed publisher pricing means meaningful savings are rare without a site-wide coupon event.

For buyers who want that official retailer assurance without committing to a specific launcher ecosystem, Green Man Gaming is a reasonable middle ground, working directly with publishers to supply licensed keys and offering occasional XP-based loyalty discounts, though its prices tend to shadow Steam’s retail levels fairly closely.

Eneba operates on a marketplace model rather than a traditional storefront, which opens up more competitive pricing through independent sellers, and its refund policy for unverified keys adds a layer of reassurance for anyone cautious about third-party key sites.

Loaded takes a leaner approach still, stripping the experience back to instant key delivery with minimal friction, which suits buyers who want a fast transaction above all else, though the absence of seller feedback tools or community features makes it a bare-bones option by comparison.

| Platform | Type of Seller | Purchase Process | Post-Purchase Features |
| --- | --- | --- | --- |
| G2A.COM | Marketplace | Independent sellers, fast delivery | Deal alerts, verified reviews, editorial content, G2A Plus |
| Eneba | Marketplace | Instant digital delivery | Refund policy, loyalty program |
| Steam | Official Platform | Direct from Steam | Mod support, forums, cloud saves |
| Epic Games Store | Official Platform | Structured checkout | Epic Rewards, version bonuses |
| Green Man Gaming | Authorized Retailer | Licensed keys | XP-based discounts, official support |
| Loaded | Retail Key Seller | Minimal interface | Basic support, 1% cashback |

Mark Zuckerberg and Tim Cook, Seahawks owners? Tech moguls’ reported interest quickly spiked

Apple’s Tim Cook, left, and Meta’s Mark Zuckerberg. (Apple, Meta Photos)

Seattle Seahawks fans envisioning another tech billionaire as the new owner of the NFL team have a couple of Silicon Valley-based names to consider. Or not.

Meta founder Mark Zuckerberg and Apple CEO Tim Cook have been mentioned as potential suitors for the franchise, which was put up for sale in February by the estate of the late Microsoft co-founder Paul Allen.

Front Office Sports reported Thursday that Zuckerberg and Cook are among at least four interested parties considering making offers for the team. They are the first serious names to emerge in a sale that could fetch upwards of $7 billion.

In a post on X, Dylan Byers of Puck said the report was not true and sources close to Cook and Zuckerberg said neither was interested in bidding for the Seahawks.

Reps for all of those involved declined to comment or could not be reached, including the Paul G. Allen estate and the bank handling the sale process, Front Office Sports said.

According to Forbes, Zuckerberg is worth $222 billion and Cook $2.8 billion.

Cook announced last week that he is stepping down as Apple CEO and will become executive chairman on Sept. 1.

The plan to sell the 50-year-old franchise is part of the long process of divesting many of the assets and investments that Allen made during his lifetime and directing all proceeds to philanthropy. Since his death, Allen’s estate has steadily moved to sell major assets, including real estate holdings and, more recently, advancing the sale process for the NBA’s Portland Trail Blazers.

Allen co-founded Microsoft with childhood friend Bill Gates, and the billionaire philanthropist bought the Seahawks in 1997 for approximately $200 million from previous owner Ken Behring. The purchase secured the team’s home in Seattle after Behring threatened a move to California. Allen ran the team until his death in 2018 at the age of 65 after he was diagnosed with a recurrence of non-Hodgkin’s lymphoma.

Gates has previously expressed no interest in owning the team, but the list of those who could includes many who made their fortunes in tech, from former Microsoft CEO Steve Ballmer to Amazon founder Jeff Bezos.

In Real-World Test, an AI Model Did Better Than ER Doctors At Diagnosing Patients

A new study from Harvard Medical School and Beth Israel Deaconess found that an OpenAI reasoning model outperformed experienced ER doctors at diagnosing and managing patient cases using messy, real-world emergency department records. Researchers say the results don’t support replacing doctors, but they do suggest AI could meaningfully reshape clinical workflows if tested carefully in prospective trials. NPR reports: The researchers ran a series of experiments on the AI model to test its clinical acumen — including actual cases like the lupus patient who’d been previously treated at the emergency department at Beth Israel in Boston. The team graded how well the AI model could provide an accurate diagnosis at three moments in time, from the triage stage in the ER up to admission into the hospital. Overall, the AI outperformed two experienced physicians — and did so with only the electronic health records and the limited information that had been available to the physicians at the time. “This is the big conclusion for me — it works with the messy real-world data of the emergency department,” said Dr. Adam Rodman, a clinical researcher at Beth Israel and one of the study authors. “It works for making diagnoses in the real world.”

Other parts of the study focused on case reports published in the New England Journal of Medicine and clinical vignettes to suss out whether the AI model could meet well-established “benchmarks” and game out thorny diagnostic questions. “The model outperformed our very large physician baseline,” said Raj Manrai, assistant professor of Biomedical Informatics at Harvard Medical School who was also part of the study. The authors emphasize the AI relied on text alone, while in real life, clinicians need to attend to many other inputs like images, sounds and nonverbal cues when diagnosing and treating a patient. The findings have been published Thursday in the journal Science.

Copyright © 2025