ChatGPT sucks at being a real robot

This story was originally published in The Highlight, Vox’s member-exclusive magazine.

There’s something sad about seeing a humanoid robot lying on the floor. Without any electricity, these bipedal machines can’t stand up, so if they’re powered down and not hanging from a winch, they’re sprawled out on the floor, staring up at you, helpless.

That’s how I met Atlas a couple of months ago. I’d seen the robot on YouTube a hundred times, running obstacle courses and doing backflips. Then I saw it on the floor of a lab at MIT. It was just lying there. The contrast is jarring, if only because humanoid robots have become so much more capable and ubiquitous since Atlas got famous on YouTube.

Across town at Boston Dynamics, the company that makes Atlas, a newer version of the humanoid robot had learned not only to walk but also to drop things and pick them back up instinctively, thanks to a single artificial intelligence model that controls its movement. Some of these next-generation Atlas robots will soon be working on factory floors — and may venture further. Thanks in part to AI, general-purpose humanoids of all types seem inevitable.

“In Shenzhen, you can already see them walking down the street every once in a while,” Russ Tedrake told me back at MIT. “You’ll start seeing them in your life in places that are probably dull, dirty, and dangerous.”

Tedrake runs the Robot Locomotion Group at the MIT Computer Science and Artificial Intelligence Lab, also known as CSAIL, and he co-led the project that produced the latest AI-powered Atlas. Walking was once the hard thing for robots to learn, but not anymore. Tedrake’s group has shifted focus from teaching robots how to move to helping them understand and interact with the world through software, namely AI. They’re not the only ones.

In the United States, venture capital investment in robotics startups grew from $42.6 million in 2020 to nearly $2.8 billion in 2025. Morgan Stanley predicts the cumulative global sales of humanoids will reach 900,000 in 2030 and explode to more than 1 billion by 2050, the vast majority of which will be for industrial and commercial purposes. Some believe these robots will ultimately replace human labor, ushering in a new global economic order. After all, we designed the world for humans, so humanoids should be able to navigate it with ease and do what we do.

An illustration of a nervous person and three robots carrying brown boxes in a line. (Janik Söllner for Vox)

They won’t all be factory workers, if certain startups get their way. A company called 1X Technologies has started taking preorders for its $20,000 home robot, Neo, which wears clothes, does dishes, and fetches snacks from the fridge. Figure AI introduced its Figure 03 humanoid robot, which also does chores. Sunday Robotics said it would have fully autonomous robots making coffee in beta testers’ homes next year.

So far, we’ve seen a lot of demos of these AI-powered home robots and promises from the industrial humanoid makers, but not much in the way of a new global economic order. Demos of home robots, like the 1X Neo, have relied on human operators, making these automatons, in practice, more like puppets. Reports suggest that Figure AI and Apptronik have only one or two robots on manufacturing floors at any given time, usually doing menial tasks. That’s a proof of concept, not a threat to the human workforce.

“In order to make them better, we have to make AI better.”

You can think of all these robots as the physical embodiment of AI, or just embodied AI. This is what happens when you put AI into a physical system, enabling it to interact with the real world. Whether that’s in the form of a humanoid robot or an autonomous car, it’s the next frontier for hardware and, arguably, technological progress writ large.

Embodied AI is already transforming how farming works, how we move goods around the world, and what’s possible in surgical theaters. We might be just one or two breakthroughs away from walking, talking, thinking machines that can work alongside us, unlocking a whole new realm of possibilities. “Might” is the key word there.

“If we’re looking for robots that will work side by side with us in the next couple of years, I don’t think it will be humanoids,” Daniela Rus, director of CSAIL, told me not long after I left Tedrake’s lab. “Humanoids are really complicated, and we have to make them better. And in order to make them better, we have to make AI better.”

So to understand the gap between the hype around humanoids and the technology’s real promise, you have to know what AI can and can’t do for robots. You also, unfortunately, have to try to understand what Elon Musk has been up to at Tesla for the past five years.

It’s still embarrassing to watch the part of the Tesla AI Day presentation in 2021 when a person in a robot costume appears on stage dancing to dubstep music. Musk eventually stops the dance and announces that Tesla, “a robotics company,” will have a prototype of a general-purpose humanoid robot, now known as Optimus, the following year. Not many people believed him, and now, years later, Tesla still has not delivered a fully functional Optimus. Never afraid to make a prediction, Musk told audiences at Davos in January 2026 that Tesla’s robot will go on sale next year.

“People took him seriously because he had a great track record,” said Ken Goldberg, a roboticist at the University of California, Berkeley, and co-founder of Ambi Robotics. “I think people were inspired by that.”

You can imagine why people got excited, though. With the Optimus robot, Elon Musk promised to eliminate poverty and offer shareholders “infinite” profits. He said engineers could effectively translate Tesla’s self-driving car technology into software that could power autonomous robots that could work in factories or help around the house. It’s a version of the same vision humanoid robotics startups are chasing today, albeit colored by several years of Musk’s unfulfilled promises.

We now know that Optimus struggles with a lot of the same problems as other attempts at general-purpose humanoids. It often requires humans to remotely operate it, and it struggles with dexterity and precision. The 1X Neo, likewise, needed a human’s help to open a refrigerator door and collapsed onto the floor in a demo for a New York Times journalist last year. The hardware seems capable enough. Optimus can dance, and Neo can fold clothes, albeit a bit clumsily. But they don’t yet understand physics. They don’t know how to plan or to improvise. They certainly can’t think.

“People in general get too excited by the idea of the robot and not the reality.”

“People in general get too excited by the idea of the robot and not the reality,” said Rodney Brooks, co-founder of iRobot, makers of the Roomba robot vacuum. Brooks, a former CSAIL director, has written extensively and skeptically about humanoid robots.

Clearly, there’s a gap between what’s happening in research labs and what’s being deployed in the real world. Some of the optimism around humanoids is based on good science, though. In 2023, Tedrake coauthored a landmark paper with Tony Zhao, co-founder and CEO of Sunday Robotics, that outlined a novel method for training robots to move like humans: a person performs a task while wearing sensor-laden gloves, and the data those gloves record feeds an AI model that lets a robot figure out how to do the task itself. This complemented work Tedrake was doing at the Toyota Research Institute, which applied the same methods AI models use to generate images to generating robot behavior. You’ve heard of large language models, or LLMs. Tedrake calls these large behavior models, or LBMs.

It makes sense. By watching humans do things over and over, these AI models collect enough data to generate new behaviors that can adapt to changing environments. Folding laundry is a popular example of a task that requires nimble hands and better brains. If a robot picks up a shirt and the fabric flops down in an unexpected way, it needs to figure out how to handle that uncertainty. You can’t simply program it to know what to do when there are so many variables. You can, however, teach it to learn.
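
To make that concrete, here is a minimal sketch of learning from demonstrations. This is not Tedrake and Zhao’s actual method, just the simplest possible “watch and imitate” baseline in Python, with randomly generated stand-in data in place of real glove recordings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical demonstration data: sensor readings paired with the motions
# a gloved human performed (both arrays are stand-ins for real recordings).
observations = rng.normal(size=(500, 6))                   # e.g., joint angles and forces
actions = observations @ (rng.normal(size=(6, 3)) * 0.3)   # e.g., commanded hand motions

# Fit a linear policy (action = observation @ W) by least squares. A real
# large behavior model swaps this one-line fit for a deep network, but the
# principle -- map what was sensed to what the human did -- is the same.
W, *_ = np.linalg.lstsq(observations, actions, rcond=None)

new_obs = rng.normal(size=(1, 6))  # a situation the robot has never seen
print(new_obs @ W)                 # the learned policy's predicted action
```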

That’s what makes the lemonade demo so impressive. Some of Rus’s students at CSAIL have been teaching a humanoid robot named Ruby to make lemonade — something that you might want a robot butler to do one day — by wearing sensors that measure not only the movements but the forces involved. It’s a combination of delicate movements, like pouring sugar, and strong ones, like lifting a jug of water. I watched Ruby do this without spilling a drop. It hadn’t been programmed to make lemonade. It had learned.

The real challenge is getting this method to scale. One way is simply to brute-force it: Employ thousands of humans to perform basic tasks, like folding laundry, to build foundation models for the physical world. Foundation models are massive AI models, trained on enormous datasets, that can be adapted to specific tasks like generating text, images, or, in this case, robot behavior. You can also get humans to teleoperate countless robots in order to train these models. These so-called arm farms already exist in warehouses in Eastern Europe, and they’re about as dystopian as they sound.

Another option is YouTube. There are a lot of how-to videos on YouTube, and some researchers think that feeding them all into an AI model will provide enough data to give robots a better understanding of how the world works. These two-dimensional videos are obviously limited, if only because they can’t tell us anything about the physics of the objects in the frame. The same goes for synthetic data, which involves a computer rapidly and repeatedly carrying out a task in a simulation. The upside here, of course, is more data, more quickly. The downside is that the data isn’t as good, especially when it comes to physical forces like friction and torque, which also happen to be the most important for robot dexterity.

“Physics is a tough task to master,” Brooks said. “And if you have a robot, which is not good with physics, in the presence of people, it doesn’t end well.”

An illustration of a robot butler tripping up some stairs, food and drinks flying everywhere. (Janik Söllner for Vox)

That’s not even taking into account the many other bottlenecks facing robotics right now. While components have gotten cheaper — you can buy a humanoid robot right now for less than $6,000, compared to the $75,000 it cost to buy Boston Dynamics’ small, four-legged robot Spot five years ago — batteries represent a major bottleneck for robotics, limiting the run time of most humanoids to two to four hours.

Then you have the problem with processing power. The AI models that can make humanoids more human require massive amounts of compute. If that’s done in the cloud, you’ve got latency issues, preventing the robot from reacting in real time. And inevitably, to tie a lot of other constraints into a tidy bundle, the AI is just not good enough.

If you trace the history of AI and the history of robotics back to their origins, you’ll see a braided line. The two technologies have intersected time and again since the birth of the term “artificial intelligence” at a Dartmouth summer research workshop in 1956. Then, half a century later, things started heating up on the AI front, when advances in machine learning and powerful processors called GPUs — the things that have now made Nvidia a $5 trillion company — ushered in the era of deep learning. I’m about to throw a few technical terms at you, so bear with me.

Machine learning is a type of AI. It’s when algorithms look for patterns in data and make decisions without being explicitly programmed to do so. Deep learning takes it to another level with the help of a machine learning model called a neural network. You can think of a neural network, a concept that’s even older than AI, as a system loosely modeled on the human brain, made up of lots of artificial neurons that do math problems. Deep learning uses multilayered neural networks to learn from huge datasets and to make decisions and predictions. Among other accomplishments, neural networks have revolutionized computer vision, improving perception in robots.
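
As a rough illustration, here is what those “math problems” look like in code. This is a generic two-layer network with random, untrained weights, not any particular robotics model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two layers of "artificial neurons": each neuron computes a weighted sum
# of its inputs and passes the result through a simple nonlinearity.
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)  # layer 1: 4 inputs -> 8 neurons
W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)  # layer 2: 8 neurons -> 3 outputs

def forward(x):
    h = np.maximum(0.0, W1 @ x + b1)  # ReLU activation: the neuron fires, or it doesn't
    return W2 @ h + b2

print(forward(np.array([0.5, -1.0, 2.0, 0.1])))
```

Training nudges those weights until the outputs match examples in the data; stack enough layers and you have deep learning.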

There are different architectures for neural networks that can do different things, like recognize images or generate text. One is called a transformer. The “GPT” in ChatGPT stands for “generative pre-trained transformer,” which is a type of large language model, or LLM, that powers many generative AI chatbots. While you’d think LLMs would be good at making robots think, they really aren’t. Then there are diffusion models, which are often used for image generation and, more recently, making robots appear to think. The framework that Tedrake and his coauthors described in their 2023 research into using generative AI to train robots is based on diffusion.
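
For the curious, here is diffusion stripped to its skeleton: start from pure noise and repeatedly denoise it into a sample. This is a generic DDPM-style loop of my own, not the specific framework from the 2023 paper, and the trained noise-prediction network is replaced with a trivial stub. In a robot policy, the final sample would be a short sequence of motor commands rather than an image:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 50
betas = np.linspace(1e-4, 0.05, T)  # noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def predict_noise(x, t):
    # Stand-in for a trained network that predicts the noise present at step t.
    return 0.1 * x

x = rng.normal(size=7)  # begin with pure noise
for t in reversed(range(T)):
    eps = predict_noise(x, t)
    # Standard denoising update: subtract the predicted noise...
    x = (x - (betas[t] / np.sqrt(1.0 - alpha_bars[t])) * eps) / np.sqrt(alphas[t])
    if t > 0:
        x += np.sqrt(betas[t]) * rng.normal(size=x.shape)  # ...then re-inject a little

print(x)  # the denoised sample, e.g. a short trajectory of motor commands
```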

“Under the hood, what’s actually going on should be something much more like our own brains.”

Three things stand out in this very limited explanation of how AI and robots get along. The first is that deep learning requires a massive amount of processing power and, as a result, a huge amount of energy. The second is that the latest AI models work with the help of stacks of neural networks whose millions or even billions of artificial neurons do their magic in mysterious and usually inefficient ways. The third is that, while LLMs are good at language and diffusion models are good at images, we don’t have any models that are good enough at physics to send a 200-pound robot marching into a crowd to shake hands and make friends.

As Josh Tenenbaum, a computational cognitive scientist at MIT, explained to me recently, an LLM can make it easier to talk to a robot, but it’s hardly capable of being the robot’s brains. “You could imagine a system where there’s a language model, there’s a chatbot, you want to talk to your robot,” Tenenbaum said. “Under the hood, what’s actually going on should be something much more like our own brains and minds or other animals, not just humans in terms of how it’s embodied and deals with the world.”

So we need better AI for robots, if not in general. Scientists at CSAIL have been working on a couple of physics-inspired and brain-like technologies they’re calling liquid neural networks and linear optical networks. They both fall into the category of state-space models, which are emerging as an alternative or rival to transformer-based models. Whereas transformer-based models look at all available data to identify what’s important, state-space models are much more efficient, as they maintain a summary of the world that gets updated as new data comes in. It’s closer to how the human brain works.
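
Here is a minimal sketch of that idea, with made-up numbers: the model carries only a small state vector and updates it as each new measurement streams in, doing constant work per step instead of re-reading the entire history the way a transformer does:

```python
import numpy as np

# x_next = A @ x + B * u: a linear state-space update. The state x is the
# model's running "summary of the world"; u is each new incoming input.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, 0.0]])  # readout: map the hidden state to an output

x = np.zeros((2, 1))
for u in [1.0, 0.0, -0.5, 2.0]:  # a stream of measurements
    x = A @ x + B * u            # constant work per step, no matter how long the history
    print((C @ x).ravel())       # the model's output after seeing this input
```

Liquid neural networks are far more sophisticated than this linear toy, but the update-a-compact-state structure is the shared idea.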

To be perfectly honest, I’d never heard of state-space models until Rus, the CSAIL director, told me about them when we chatted in her office a few weeks ago. She pulled up a video to illustrate the difference between a liquid neural network and a traditional model used for self-driving cars. In it, you can see how the traditional model focuses its attention on everything but the road, while the newer state-space model only looks at the road. If I’m riding in that car, by the way, I want the AI that’s watching the road.

“And instead of a hundred thousand neurons,” Rus says, referring to the traditional neural network, “I have only 19.” And here’s where it gets really compelling. She added, “And because I have only 19, I can actually figure out how these neurons fire and what the correlation is between these neurons and the action of the car.”

You may have already heard that we don’t really know how AI works. If newer approaches bring us a little bit closer to comprehension, it certainly seems worth taking them seriously, especially if we’re talking about the kinds of brains we’ll put in humanoid robots.

When a humanoid robot loses power, when electricity stops flowing to the motors that keep it upright, it collapses into a heap of heavy metal parts. This can happen for any number of reasons. Maybe it’s a bug in the code or a lost wifi connection. And when they’re on, humanoids are full of energy as their joints fight gravity or stand ready to bend. If you imagine being on the wrong side of that incredible mechanical power, it’s easy to doubt this technology.

Some companies that make humanoid robots also admit that they’re not very useful yet. They’re too unreliable to help out around the house, and they’re not efficient enough to be helpful in factories. Furthermore, most of the money spent developing these robots goes toward making them safe around people. When it comes to deploying robots that can contribute to productivity, that can participate in the economy, it makes a lot more sense to make them highly specialized and not human-shaped.

“Let’s not do open-heart surgery right away with these things.”

The embodied AI that will transform the world in the near future is what’s already out there. In fact, it’s what’s been out there for years. Early self-driving cars date back to the 1980s, when Ernst Dickmanns put a vision-guided Mercedes van on the streets of Munich. Researchers from Carnegie Mellon University got a minivan to drive itself across the United States in 1995. Now, decades later, Waymo is operating its robotaxi service in a half-dozen American cities, and the company says its AI-powered cars actually make the roads safer for everyone.

Then there are the Roombas of the world, the robots that are designed to do one thing and keep getting better at it. You can include the vast array of increasingly intelligent manufacturing and warehouse robots in this camp too. By 2027, the year Elon Musk is on track to miss his deadline to start selling Optimus humanoids to the public, Amazon will reportedly replace more than 600,000 jobs with robots. These are mostly boring robots, but they’re safe and effective.

Science fiction promised us humanoids, however. Pick an era in human history, in fact, and someone was dreaming about an automaton that could move like us, talk like us, and do all our dirty work. Replicants, androids, the Mechanical Turk — all these humanoid fantasies imagined an intelligent synthetic self.

Reality gave us package-toting platforms on wheels roving around Amazon warehouses and sensor-heavy self-driving cars clogging San Francisco streets. Still, even the skeptics think that humanoids will be possible in time. Probably not in five years, but maybe in 50, we’ll get artificially intelligent companions who can walk alongside us. They’ll take baby steps.

“Good robots are going to be clumsy at first, and you have to find applications where it’s okay for the robot to make mistakes and then recover,” Tedrake said. “Let’s not do open-heart surgery right away with these things. This is more like folding laundry.”

Game development diary: TestFlight, trial by fire, and a trophy

The in-development word game “Character Limit” faced testers over the last two months, and as the TestFlight beta got underway, an unexpected game convention opportunity went especially well.

A tale of two tests: TestFlight and a gaming convention.

Back in early February, Character Limit had reached a good stopping point for testing with real players. A lot of the groundwork was finished, so it was time for bug fixing, polishing, and real feedback.
This previously came in the form of visits to meet other game developers in Cardiff for brief sessions. But you can only go so far in terms of feedback from a kind audience.

Home Depot Spring Black Friday (2026): Best Tool and Grill Deals

Words have no meaning when Black Friday falls in April and lasts two weeks. Originally coined to denote the pandemonium and chaos when holiday shopping met football games after Thanksgiving, Black Friday has come to blankly mean “discounts whenever.”

And so when The Home Depot says they’ve got a “Spring Black Friday” sale going, what they seem to be trying to say is that springtime might as well be Christmas for the DIY and backyard set. It’s when you buy stuff. Except probably for yourself.

Anyway, most of this sale is not a barn-burner. But Home Depot loves a BOGO tool sale on the Milwaukee tools used and recommended by WIRED tester Scott Gilbertson. And Weber grills are $50 to $100 off, including a couple of WIRED’s favorite grills on earth.

Here are the deals WIRED is tracking in the Home Depot Spring Black Friday sale, which ends April 22. Or just browse all of the Home Depot deals below.

$50 off the Best Gas Grill for Most Families

Weber Spirit E-210 Gas Grill

For years, we’ve been recommending Weber’s straightforward 200-series Spirit grills as some of the best grills at the intersection of value and performance. The build quality is good, the cook is even, and the heat on the propane burners is easy to adjust. Like all Webers, you can build your grill’s workspace out with accessories and snap-on options until it’s tong heaven. Spirit already starts out pretty affordable, with a 10-year warranty and porcelain-coated cast iron grill grates that make for easy clean-up and clean cooks. An extra $50 off is a nice cherry on top.

But note that while a Spirit is likely all the grill you’ll ever need for a large family, grill cooks who throw a lot of parties might upgrade to the Genesis E-325 ($849) for the larger searing area and higher BTUs, added storage and prep, and the option of a top grill. That’s also on sale in April, for $100 off list price.

BOGO Deals on Milwaukee, Dewalt, and Ryobi Tools

The other thing The Home Depot likes to do is offer BOGOs on tools—in this case packaging a $200 tool with a free $200 power pack. This is, needless to say, a nice deal.

In the Milwaukee tool ecosystem used by WIRED reviewer and inveterate DIYer Scott Gilbertson, favored for its mix of value, durability, and pure power, an assortment of tools comes with a free power pack.

But these BOGOs can be a bit maddening to sort out on Home Depot’s website. So I’ve done a little legwork for you. Here are the links to the BOGO deals for Milwaukee, Ryobi, and DeWalt. You’re welcome.

Steep Discounts on Ryobi Yard Tools

Longtime WIRED reviewer Parker Hall has long held the belief that Ryobi yard tools are the most slept-on tool ecosystem for home gardeners and landscapers, from mowers to chain saws to trimmers.

Part of the reason is service: At least in our region (the Pacific Northwest), Ryobi doesn’t make you send in tools to be serviced somewhere else. They instead keep a repairman on retainer, and he comes to you and fixes your mower. This is a wonderful thing. In any case, Hall says that he’s rarely had cause to call on his repairman. He just likes to know he’s there.

A man allegedly threw a Molotov cocktail at Sam Altman’s house

A 20-year-old man was arrested by the San Francisco Police Department after allegedly throwing a Molotov cocktail at OpenAI CEO Sam Altman’s house, The New York Times reports.

In a statement shared on X, SFPD wrote that it responded to a request for a fire investigation in the North Beach neighborhood of San Francisco around 7:12 AM ET / 4:12 AM PT. “At the scene, officers learned that an unknown male subject threw an incendiary destructive device at a home, causing a fire at an exterior gate.” After the man fled on foot, police found and arrested him around an hour later while responding to a business’ complaint about an “unknown male subject threatening to burn down the building.” That business turned out to be OpenAI’s headquarters and the subject happened to be the same man who threw the Molotov at Altman’s house.

“Early this morning, someone threw a Molotov cocktail at Sam Altman’s home and also made threats at our San Francisco headquarters. Thankfully, no one was hurt,” an OpenAI spokesperson confirmed in a statement to Wired. “We deeply appreciate how quickly SFPD responded and the support from the city in helping keep our employees safe. The individual is in custody, and we’re assisting law enforcement with their investigation.”

As it’s become more commonplace, artificial intelligence has also become more divisive. While more and more people continue to use AI tools, public reaction to the encroachment of the technology, whether in gaming or customer service, is increasingly negative. Altman’s warnings of AI’s impact on employment, and a recent New Yorker investigation digging into his allegedly manipulative leadership style at OpenAI, have also raised questions about the CEO’s prominent role as a steward of the technology.

Microsoft Begins Removing Copilot Branding From Windows 11 Apps

Microsoft has started stripping Copilot branding out of Notepad in Windows 11, replacing the old Copilot menu with a more generic “writing tools” label. The AI features themselves aren’t going away, but Microsoft seems to be backing off the heavy-handed Copilot branding and extra entry points. Windows Central reports: As promised, Microsoft is now beginning its effort to reduce and remove Copilot branding across Windows 11, with the latest Notepad update for Insiders outright removing the Copilot icon and phrasing. The AI menu is now simply called “writing tools” and maintains the same functionality as before. Microsoft has also removed references to AI in Notepad’s Settings area; the toggle for these AI-powered writing tools is now listed under “Advanced features.”

This change is present in the latest preview build of Notepad, which is now rolling out to all Windows Insiders. The app version is 11.2512.28.0, and you’ll know you have it if you see the Copilot icon replaced with a pen icon instead. […] For Notepad, it appears Microsoft has opted to replace the Copilot menu with something more generic. It’s functionally the same, but it’s no longer leaning on the tainted Copilot brand. Of course, you can still easily turn off all AI features in Notepad if you don’t want them. The Verge reports that the “unnecessary Copilot buttons” are also disappearing from the Snipping Tool, Photos, and Widgets.

SiFive raises $400m Series G at $3.65bn valuation in final round before IPO

In short: SiFive, the RISC-V chip IP firm founded by the Berkeley engineers who created the open-source instruction set architecture, raised $400 million in an oversubscribed Series G on April 9, 2026, at a valuation of $3.65 billion. The round was led by Atreides Management and backed by Nvidia, Apollo Global Management, D1 Capital Partners, Point72 Turion, T. Rowe Price Investment Management, Capital Group, Prosperity7 Ventures, and Sutter Hill Ventures. CEO Patrick Little described it as the company’s final private funding round before an initial public offering.

Open source, closed competition

RISC-V (pronounced “risk five”) is an open-source instruction set architecture, the foundational specification governing how a processor interprets and executes instructions, developed at the University of California, Berkeley, from 2010 onwards. Unlike the proprietary architectures maintained by Arm Holdings and Intel, RISC-V is free to implement, extend, and commercialise without per-unit royalties or usage restrictions. SiFive was founded in 2015 by three of the project’s principal architects: Krste Asanović, Andrew Waterman, and Yunsup Lee, working alongside David Patterson, a Turing Award winner and co-author of the standard text on computer architecture. The company’s business model is structurally similar to Arm’s: it designs CPU intellectual property and licences that IP to customers who integrate it into their own silicon, rather than fabricating chips itself. The critical difference is that SiFive’s designs sit on an architecture that no single company controls.

That independence became more commercially valuable in March 2026, when Arm launched its AGI CPU, its first in-house silicon product in its 35-year history, with Meta and OpenAI as debut customers. The move repositioned Arm from a neutral IP licensor into a company with direct hardware ambitions, creating the kind of vertical conflict that has historically pushed technology buyers toward open-standard alternatives, and generating fresh urgency for a competitor that owes no allegiance to any proprietary architecture owner. Intel attempted a different route into the space: in 2021 the chipmaker offered more than $2 billion to acquire SiFive outright, a deal that collapsed over valuation disagreements. Intel has since joined Elon Musk’s Terafab as a foundry partner in April 2026, committing its 18A process node to a $25 billion AI compute facility backed by Tesla, SpaceX, and xAI, a strategic reorientation that leaves the RISC-V IP licensing position without Intel as a would-be acquirer or rival.

The Series G: who invested, and why

The $400 million Series G was led by Atreides Management, a Boston-based investment firm managed by Gavin Baker, who built his reputation running Fidelity’s OTC Portfolio before founding Atreides in 2019. New participants include Nvidia, Apollo Global Management, D1 Capital Partners, Point72 Turion, and T. Rowe Price Investment Management. Existing shareholders Prosperity7 Ventures, Capital Group, and Sutter Hill Ventures also participated. The round closed oversubscribed and lifts SiFive’s total valuation to $3.65 billion, up from the $2.5 billion set at the Series F in March 2022. Nvidia’s presence on the cap table is a technical statement as well as a financial one: in January 2026 SiFive announced it is integrating NVLink Fusion into its high-performance data centre platform, enabling RISC-V-based CPUs to connect directly to Nvidia GPUs via a coherent, high-bandwidth interconnect that reduces latency and improves system utilisation for large-scale AI inference. That compatibility positions SiFive’s CPU IP to work alongside the Vera Rubin platform Nvidia announced at GTC 2026, the company’s next-generation GPU architecture targeting agentic AI workloads.

The broader investment context is one of accelerating hyperscale demand for custom silicon. Amazon committed $50 billion to its Trainium chip programme in its April 2026 shareholder letter, positioning in-house AI silicon as a strategic infrastructure necessity rather than an optional enhancement. The deal between Google, Anthropic, and Broadcom for custom AI compute represents a parallel approach, using purpose-built ASICs to reduce dependence on commodity processors across hyperscale inference workloads. SiFive’s pitch is that it offers hyperscale customers a third path: RISC-V CPU IP that is fully customisable, architecturally independent, and built on an open standard that no single acquirer can lock down. “Hyperscale customers have made it very clear that it is time to accelerate the availability of open standard alternatives for the data centre,” said CEO Patrick Little. “Their consistent ask is for customisable CPU solutions in IP form, that will enable them to meaningfully differentiate their data centre compute solutions.

What the capital will build

SiFive has outlined three areas of deployment for the Series G capital. Advanced research and development takes the largest share, focused on expanding the roadmap of high-performance scalar, vector, and matrix RISC-V CPU IP, accelerator cores, and system IP targeting data centre deployments. A second allocation covers software ecosystem development, including existing efforts to port CUDA, Red Hat Enterprise Linux, and Ubuntu to RISC-V, work that is critical to making the architecture practically deployable in production data centres where software compatibility is as important as raw performance. The third allocation supports customer enablement: the direct engineering collaboration that helps hyperscale clients and system vendors integrate SiFive IP into their own silicon programmes. Little framed the company’s open-standard positioning as a structural advantage that compounds over time: “RISC-V was created by our founders to be similar to other open standards, driven and continually improved by collaboration and cross-pollination across a broad community of innovators. This ensures choice and flexibility for customers, and ultimately benefits consumers.” He argued that the market is becoming more receptive to open-standard alternatives precisely as Arm moves further into selling its own branded hardware.

Ten billion cores and the IPO signal

SiFive reported record growth in 2025, with its IP featured in more than 500 semiconductor designs and more than 10 billion RISC-V cores shipped to date across consumer electronics, automotive systems, and data centre processors. The company has framed the data centre segment as a potential $100 billion-plus addressable market, driven by the agentic AI infrastructure buildout that has prompted every major hyperscaler to commit tens of billions of dollars annually to compute expansion. Patrick Little told Reuters that the April 2026 fundraise is the company’s final private round before an IPO, though no exchange or pricing timeline has been confirmed. The signal carries weight: a valuation of $3.65 billion and a roster of investors that includes a major GPU manufacturer, a bulge-bracket alternative asset manager, and two prominent long-only asset managers suggests SiFive is preparing for the kind of institutional scrutiny that accompanies a public filing. As AI chip investment reached record levels in 2025, with capital flowing to custom silicon programmes at every major cloud provider, SiFive’s timing places it squarely at the centre of a market transition it has been building toward for a decade.

Encrypted Emails Are Now Available for Some Gmail Phone App Enterprise Customers

We all love encryption. If you use Gmail in an enterprise setting, especially if your work includes sensitive information, you probably love it even more. Certain Gmail app users on iOS and Android phones can now send and receive encrypted emails within the app itself — no add-ons necessary.

Previously, Gmail users could only send emails via end-to-end encryption (E2EE) on their desktops. Google’s announcement said there is “no need to download extra apps or use mail portals.” Customers can simply compose and read encrypted emails on the Gmail app itself on their iOS and Android phones.

An example of an encrypted email in the Gmail app. (Google)

But not all Gmail consumers will be able to use the new feature. It’s only available for Enterprise Plus subscribers with the Assured Controls or Assured Controls Plus add-on. Enterprise Plus is a subscription plan, one of several within Google Workspace. Plus is intended for large businesses and other organizations and offers higher data security and client-side encryption, which the less expensive Enterprise Standard lacks.

Assured Controls and Assured Controls Plus are designed to increase digital sovereignty, data residency and compliance.

Google said the feature is designed to allow users to “engage with your organization’s most sensitive data from anywhere on their mobile devices while ensuring data remains compliant.”

With the new feature, Gmail app users can send encrypted emails to anyone, even if they aren’t using Gmail. If the recipient is using the Gmail app, the encrypted email will appear like any other email in their inbox. If the recipient is not using the Gmail app, they can still read the encrypted email and reply to it on their own browser — with the entire conversation remaining encrypted.

An example of an email from a Gmail app consumer sent to a recipient without the Gmail app. (Google)

For example, say a Gmail app customer sends an encrypted message to someone using an iPhone with the native iPhone email app. That person using the iPhone will still be able to read the encrypted email and then answer back with an encrypted message.

Enterprise Plus customers can use the new feature now, whether they are on Rapid Release or Scheduled Release domains. To encrypt an email, click the lock icon and select additional encryption. Then create your message.

Business and organization administrators must enable the Android and iOS clients in the CSE admin interface in the Admin Console to grant access to their Gmail users.

Proton is an alternative for businesses and consumers

Proton Workspace, an enterprise solution that launched last month, also has end-to-end email encryption but with the added benefit of being based in Europe (Switzerland), which does not have to comply with the US CLOUD Act and, thus, hand over data to the US government.

For the everyday consumer, Proton Mail has end-to-end email encryption and is available for free or in paid plans, some of which include bundled privacy and security apps, like a VPN and a password manager.

Mems Photonics Chip Shrinks Quantum Computer Control Limits

By many estimates, quantum computers will need millions of qubits to realize their potential applications in cybersecurity, drug development, and other industries. The problem is, anyone who has wanted to simultaneously control millions of a certain kind of qubit has run into the problem of trying to control millions of laser beams.

That’s exactly the challenge that was faced by scientists working on the MITRE Quantum Moonshot project, which brought together scientists from MITRE, MIT, the University of Colorado at Boulder, and Sandia National Laboratories. The solution they developed came in the form of an image projection technology that they realized could also be the fix for a host of other challenges in augmented reality, biomedical imaging, and elsewhere. The device is a one-square-millimeter photonic chip capable of projecting the Mona Lisa onto an area smaller than the size of two human egg cells.

“When we started, we certainly never would have anticipated that we would be making a technology that might revolutionize imaging,” says Matt Eichenfield, one of the leaders of the Quantum Moonshot project, a collaborative research effort focused on developing a scalable diamond-based quantum computer, and a professor of quantum engineering at the University of Colorado at Boulder. Each second, their chip is capable of projecting 68.6 million individual spots of light—called scannable pixels to differentiate them from physical pixels. That’s more than fifty times the capability of previous technology, such as micro-electromechanical systems (MEMS) micromirror arrays.

“We have now made a scannable pixel that is at the absolute limit of what diffraction allows,” says Henry Wen, a visiting researcher at MIT and a photonics engineer at QuEra Computing.

The chip’s distinguishing feature is an array of tiny micro-scale cantilevers, which curve away from the plane of the chip in response to voltage and act as miniature “ski-jumps” for light. Light is channeled along the length of each cantilever via a waveguide and exits at its tip. The cantilevers contain a thin layer of aluminum nitride, a piezoelectric material that expands or contracts under voltage, thus moving the micromachine up and down and enabling the array to scan beams of light over a two-dimensional area.

Despite the magnitude of the team’s achievement, Eichenfield says that the process of engineering the cantilevers was “pretty smooth.” Each cantilever is composed of a stack of several submicrometer layers of material and curls approximately 90 degrees out of the plane at rest. To achieve such a high curvature, the team took advantage of differences in the contraction and expansion of individual layers caused by physical stresses in the material resulting from the fabrication process. The materials are first deposited flat onto the chip. Then, a layer in the chip below the cantilever is removed, allowing the material stresses to take effect, releasing the cantilever from the chip and allowing it to curl out. The top layer of each cantilever also features a series of silicon dioxide bars running perpendicular to the waveguide, which keep the cantilever from curling along its width while also improving its length-wise curvature.

A micro-cantilever wiggles and waggles to project light in the right place. (Matt Saha, Y. Henry Wen, et al.)

What was more of a challenge than engineering the chip itself was figuring out the details of actually making the chip project images and videos. Working out the process of synchronizing and timing the cantilevers’ motion and light beams to generate the right colors at the right time was a substantial effort, according to Andy Greenspon, a researcher at MITRE who also worked on the project. Now, the team has successfully projected a variety of videos from a single cantilever, including clips from the movie A Charlie Brown Christmas.

A warped projection of the Mona Lisa: the chip projected a roughly 125-micrometer image. (Matt Saha, Y. Henry Wen, et al.)

Because the chip can project so many more spots in any given time interval than any previous beam scanner, it could also be used to control many more qubits in quantum computers. The Quantum Moonshot program’s mission is to build a quantum computer that can be scaled to millions of qubits, so it clearly needs a scalable way of controlling each one, explains Wen. Instead of using one laser per qubit, the team realized that not every qubit needed to be controlled at every given moment. The chip’s ability to move light beams over a two-dimensional area would allow them to control all of the qubits with many fewer lasers.

Another process that Wen thinks the chip could improve is scanning objects for 3D printing. Today, that typically involves using a single laser to scan over the entire surface of an object. The new chip, however, could potentially employ thousands of laser beams. “I think now you can take a process that would have taken hours and maybe bring it down to minutes,” says Wen.

Wen is also excited to explore the potential of different cantilever shapes. By changing the orientations of the bars perpendicular to the waveguide, the team has been able to make the cantilevers curl into helixes. Wen says that such unusual shapes could be useful in making a lab-on-a-chip for cell biology or drug development. “A lot of this stuff is imaging, scanning a laser across something, either to image it or to stimulate some response. And so we could have one of these ski jumps curl not just up, but actually curl back around, and then move around and scan over a sample,” Wen explains. “If you can imagine a structure that will be useful for you, we should try it.”

Nearly 4,000 US industrial devices exposed to Iranian cyberattacks

The attack surface targeted by Iranian-linked hackers in cyberattacks against U.S. critical infrastructure networks includes thousands of Internet-exposed programmable logic controllers (PLCs) manufactured by Rockwell Automation.

According to a joint advisory issued by multiple U.S. federal agencies on Tuesday, Iranian state-backed hacking groups have been targeting Rockwell Automation/Allen-Bradley PLC devices since March 2026, causing operational disruptions and financial losses.

“Iranian-affiliated APT targeting campaigns against U.S. organizations have recently escalated, likely in response to hostilities between Iran, and the United States and Israel,” the authoring agencies warned.

“The FBI identified that this activity resulted in the extraction of the device’s project file and data manipulation on HMI and SCADA displays.”

As cybersecurity firm Censys reported one day later, three-quarters of more than 5,200 such industrial control systems found exposed online globally are from the United States.

“Censys data identifies 5,219 internet-exposed hosts globally responding to EtherNet/IP (EIP) and self-identifying as Rockwell Automation/Allen-Bradley devices,” Censys said.

“The United States accounts for 74.6% of global exposure (3,891 hosts), with a disproportionate share on cellular carrier ASNs indicative of field-deployed devices on cellular modems.”

Internet-exposed Rockwell Automation/Allen-Bradley PLCs (Censys)

To defend against these ongoing attacks, network defenders are advised to secure PLCs using a firewall or disconnect them from the Internet, scan logs for signs of malicious activity, and check for suspicious traffic on OT ports (especially when it originates from overseas hosting providers).

Admins should also enforce multifactor authentication (MFA) for access to OT networks, keep all PLC devices up to date, and disable unused services and authentication methods.
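
One quick self-check defenders can run from an outside network is whether a device still answers on TCP port 44818, the standard EtherNet/IP explicit-messaging port. Here is a minimal sketch in Python; the address below is a documentation placeholder, not a real device:

```python
import socket

def enip_port_reachable(host: str, port: int = 44818, timeout: float = 3.0) -> bool:
    """Return True if the host accepts TCP connections on the EtherNet/IP port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Run this from OUTSIDE the plant network; a True result means the PLC is
# Internet-exposed and should be firewalled or disconnected.
print(enip_port_reachable("192.0.2.10"))  # 192.0.2.x is a reserved example range
```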

This ongoing campaign follows similar attacks from nearly three years ago, when a threat group affiliated with the Iranian Government’s Islamic Revolutionary Guard Corps (IRGC) and tracked as CyberAv3ngers targeted vulnerabilities in U.S.-based Unitronics operational technology (OT) systems.

CyberAv3ngers hackers compromised at least 75 Unitronics PLC devices in multiple waves of cyberattacks between November 2023 and January 2024, with half of those in Water and Wastewater Systems critical infrastructure networks across the United States.

More recently, the Handala hacktivist group (linked to Iran’s Ministry of Intelligence and Security) wiped approximately 80,000 devices from the network of U.S. medical giant Stryker, including employees’ mobile devices and company-managed personal computers.

Weakest Engineer In the Room: Turn Fear Into Fuel

This article is crossposted from IEEE Spectrum’s careers newsletter. Sign up now to get insider tips, expert advice, and practical strategies, written in partnership with tech career development company Parsity and delivered to your inbox for free!

The Worst Engineer in the Room

My salary doubled. My confidence tanked.

That’s what happened when I joined a five-person startup in San Francisco in my third year as a software engineer. Two of the founders had been recognized in Forbes 30 Under 30. The team was exceptional by any measure.

On my first day, someone made a joke about Dijkstra’s algorithm. Everyone laughed. I smiled along, then looked it up afterward so I could understand why it was funny. Dijkstra’s algorithm finds the shortest path between two points—the math underlying GPS navigation. It’s a foundational concept in virtually every formal computer science curriculum. I had never encountered it.
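
(For anyone else who had to look it up: the whole algorithm fits in a few lines. This is a textbook priority-queue implementation in Python, not anything from that startup’s codebase.)

```python
import heapq

def dijkstra(graph, source):
    """Shortest distance from source to every reachable node.

    graph maps each node to a list of (neighbor, edge_weight) pairs.
    """
    dist = {source: 0}
    heap = [(0, source)]  # (best known distance, node)
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale entry; a shorter route to u was already found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

roads = {"A": [("B", 4), ("C", 1)], "C": [("B", 2)], "B": []}
print(dijkstra(roads, "A"))  # {'A': 0, 'B': 3, 'C': 1} -- via C, not the direct edge
```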

That moment reflected a broader pattern. Conversations about system design and tradeoffs often felt just out of reach. I could follow parts of them, but not enough to contribute meaningfully.

I was mostly self-taught. Wide coverage, shallow roots. The engineers around me had roots. You could feel it in how they reasoned through problems, how they talked about tradeoffs, how they debugged with patience instead of pure panic.

The Advice That Sounds Good Until You’re Living It

You’ve heard the phrase: “If you’re the smartest person in the room, you’re in the wrong room.”

It sounds aspirational. What nobody tells you is what it actually feels like to be in that room. It feels like barely following system design conversations. Like nodding along to discussions you can only partially decode. Like shipping solutions through trial and error and hoping nobody looks too closely.

Being the weakest engineer in the room is genuinely uncomfortable. It surfaces every gap. And if you’re not careful, it pushes you in exactly the wrong direction.

My instinct was to make myself smaller. On a team of five, every voice mattered. I stopped offering mine. I rushed toward working solutions without real understanding, hoping velocity would compensate for depth.

I was working harder and, at the same time, I was not improving.

The turning point came when one of the most senior engineers left. Before departing, he told me it was difficult to work with me because I lacked foundational programming knowledge, listing out the concepts he saw me struggle with.

For the first time, what had felt like vague inadequacy became something specific.

What the Cliché Misses

Proximity to stronger engineers is not sufficient on its own. You won’t absorb their skill through osmosis. The engineers who thrive when they’re outmatched are not the ones who wait for confidence to arrive. They treat the discomfort as diagnostic information.

What can they answer that I can’t? What do they see in a system that I’m missing?

I defined a clear picture of the engineer I wanted to become and compared it to where I was. I wrote down what I did not know. I identified how I would close each gap with books, tutorials and small projects. I asked for recommendations from the same engineer who gave me the hard feedback.

I figured out the gaps. Then the bridges. Then I worked through each of them.

Over time, conversations became clearer. Debugging became more systematic. I started contributing meaningfully rather than just executing tasks.

The Other Room Nobody Warns You About

There’s a less-obvious version of this same problem: when you’re the strongest engineer in the room.

It can feel rewarding. Less friction, more validation. But there’s also less growth. When you’re at the ceiling, there’s no external pressure to raise your own floor. The feedback loops that sharpen judgment go quiet. Some engineers spend years there without noticing. They’re good. They’re comfortable. They stop getting better.

Both rooms carry risk. One threatens your confidence. The other threatens your trajectory.

Being the weakest engineer in a strong room is an advantage, but only if you treat it like one. It gives you a clear benchmark. But the room doesn’t do the work for you. You have to name the gaps, build a plan, and follow through.

And if you ever find yourself in the other room, where you’re clearly the strongest, pay attention to how long you’ve been there.

Both rooms are trying to tell you something.

—Brian

Not every engineer has a doctorate, but Ph.D. engineers are an essential part of the workforce, researching and designing tomorrow’s high-tech products and systems. In the United States, early signs are emerging that Ph.D. programs in electrical engineering and related fields may be shrinking. Political and economic uncertainty mean some universities are now seeing smaller applicant pools and graduate cohorts.

Read more here.

Last November, three professors at Auburn University in Ala. hosted a gathering at a coffee shop to confront students’ concerns about AI. The event, which they call an “AI Café,” was meant to create an environment “where scholars engage their communities in genuine dialogue about AI. Not to lecture about technical capabilities, but to listen, learn, and co-create a vision for AI that serves the public interest.” In a guest article, they share what they learned at the event and tips for starting your own AI Café.

Read more here.

Inference, the process of running a trained AI model on new data, is increasingly becoming a focus in the world of AI engineering. The growth of open LLMs means that more engineers can now tweak the models to perform better at inference. Given this trend, a recent issue of the Substack “The Pragmatic Engineer” does a deep dive on inference engineering—what it is, when it’s needed, and how to do it.

Read more here.

The future of insurance is AI, so why the hesitation?

InsTech.ie’s CEO Gary Leyden explores the opportunities that AI presents to the Irish insurance industry.

The Irish insurance industry is in the midst of a shift from talking about innovation to proving it in practice. Ireland hosts the European operations of many of the world’s largest insurers and technology multinationals, alongside a strong base of academic research and a regulatory environment that is internationally respected.

The challenge is not ambition or capability. It is execution. Ireland has the capital, the regulatory credibility and the operational footprint, yet too much innovation still stalls at pilot stage. Ideas are tested, discussed, extended and deferred, rather than deployed at scale. In a market facing rising climate volatility and cost pressure, delay is no longer neutral. It compounds risk.

The cost of that delay is already being felt. Climate-related losses are rising sharply, with global insured losses from natural catastrophes reaching approximately $137bn in 2024, according to Swiss Re research. In Ireland, repeated flooding events are accelerating risk faster than traditional planning, pricing and product development cycles can adapt. Delayed innovation is not simply cautious, it is expensive. It increases exposure to loss by leaving insurers reliant on slower processes, older models and systems that cannot be tested early or adapt quickly, driving higher volatility, slower response and greater downstream costs when shocks materialise.

This is where Ireland’s Digital Sandbox for Insurance comes into focus. Similar models exist internationally, including regulatory sandboxes operated by authorities such as the UK Financial Conduct Authority. Ireland’s Digital Sandbox, however, is not a regulatory sandbox. It is a market-led testing environment that allows firms to trial new systems safely before full-scale implementation – complementing, not replacing, regulatory supervision.

It removes the main excuse insurers use for not testing new technology: operational risk. It narrows the gap between technical promise and commercial adoption.

The friction lies between innovators and incumbents – start-ups lack access to real insurance environments, while insurers hesitate to test early-stage technology safely. The result has been extended discussion with limited deployment, a pattern that a structured testing environment is designed to change. This is often referred to as ‘proof-of-concept purgatory’, where innovation ambition hits a wall of big corporate conservatism.

Proof of delivery is already visible across Ireland’s insurtech ecosystem. Dimply is modernising customer engagement and distribution. Inaza is enhancing underwriting performance through advanced data and automation. Blink Parametric has deployed real-time parametric solutions across global markets. Docosoft is streamlining regulatory and operational workflows for major insurers. These firms show that Irish innovation can scale. A sandbox accelerates that trajectory, helping earlier-stage companies prove readiness faster and enabling insurers to move from evaluation to adoption with confidence.

Reaching ‘tier one’ insurers

This is not simply about adopting new tools, but about changing decision velocity by shortening procurement cycles and moving from evaluation to deployment. Without that shift, infrastructure alone will not move the dial. Insurance, however, carries additional structural friction, from legacy systems and data constraints to complex procurement and compliance obligations, which can slow adoption even when technology performs well.

Breaking into ‘tier one’ insurers remains the single biggest barrier to scaling Irish insurtech – not because the technology fails, but because procurement cycles, risk aversion and internal complexity slow decision-making to a crawl. Building infrastructure that unlocks that access would be transformative for the more than 100 Irish insurtechs in the national cluster, while also signalling internationally that Ireland is serious about competing in an industry undergoing profound change.

A tier-one insurer recently used the secure sandbox to test AI-powered fraud detection in conditions that mirrored real investigations but without exposing customer data. The company’s fraud investigations were distressingly slow, manual and resource-intensive, barely keeping up with rising claim volumes. Why does this matter? Because global insurance fraud costs exceed $25bn annually. Insurers know AI could help detect fraud better and faster.

The uncomfortable truth is that the problem has never been about the technology. The real issue has been organisations’ willingness to open their minds to the future and accept that the old ways simply cannot keep up. By using the sandbox environment, the insurer reduced its proof-of-concept phase from 12-18 months to just eight weeks. That’s the difference between dial-up internet and fibre broadband. If AI testing can happen in two months, executives should be asking why they’d want to still move at the pace of another era.

Ireland likes to describe itself as a global hub for innovation, but a hub is only a hub if decisions actually happen there. The reality is that Ireland already runs major operations for global insurers, handles complex regulatory engagement and holds deep expertise, yet it is still too often treated as the place where strategy is executed rather than where it is shaped.

If Ireland is trusted to run the systems, manage the risk and operate the infrastructure, then it should also be trusted to run the experimentation. The Digital Sandbox removes the usual excuses. The question now is whether all stakeholders – insurers, policymakers and industry – will act like leaders and move with the times.

 

By Gary Leyden

Gary Leyden is CEO of InsTech.ie, where he leads the development of Ireland’s national insurtech ecosystem, working across industry and start-ups to embed innovation in regulated sectors. He focuses on building the conditions for innovation to scale within insurance, leading initiatives such as Ireland’s Digital Sandbox for Insurance, supporting a nationwide network of over 120 insurtech companies developing AI-driven solutions.
