
Tech

ChatGPT sucks at being a real robot


This story was originally published in The Highlight, Vox’s member-exclusive magazine. To get access to member-exclusive stories every month, join the Vox Membership program today.

There’s something sad about seeing a humanoid robot lying on the floor. Without any electricity, these bipedal machines can’t stand up, so if they’re powered down and not hanging from a winch, they’re sprawled out on the floor, staring up at you, helpless.

That’s how I met Atlas a couple of months ago. I’d seen the robot on YouTube a hundred times, running obstacle courses and doing backflips. Then I saw it on the floor of a lab at MIT. It was just lying there. The contrast is jarring, if only because humanoid robots have become so much more capable and ubiquitous since Atlas got famous on YouTube.

Across town at Boston Dynamics, the company that makes Atlas, a newer version of the humanoid robot had learned not only to walk but also to drop things and pick them back up instinctively, thanks to a single artificial intelligence model that controls its movement. Some of these next-generation Atlas robots will soon be working on factory floors — and may venture further. Thanks in part to AI, general-purpose humanoids of all types seem inevitable.

“In Shenzhen, you can already see them walking down the street every once in a while,” Russ Tedrake told me back at MIT. “You’ll start seeing them in your life in places that are probably dull, dirty, and dangerous.”

Tedrake runs the Robot Locomotion Group at the MIT Computer Science and Artificial Intelligence Lab, also known as CSAIL, and he co-led the project that produced the latest AI-powered Atlas. Walking was once the hard thing for robots to learn, but not anymore. Tedrake’s group has shifted focus from teaching robots how to move to helping them understand and interact with the world through software, namely AI. They’re not the only ones.

In the United States, venture capital investment in robotics startups grew from $42.6 million in 2020 to nearly $2.8 billion in 2025. Morgan Stanley predicts the cumulative global sales of humanoids will reach 900,000 in 2030 and explode to more than 1 billion by 2050, the vast majority of which will be for industrial and commercial purposes. Some believe these robots will ultimately replace human labor, ushering in a new global economic order. After all, we designed the world for humans, so humanoids should be able to navigate it with ease and do what we do.

[Illustration: one nervous person and three robots transporting brown boxes in a line. Janik Söllner for Vox]

They won’t all be factory workers, if certain startups get their way. A company called 1X Technologies has started taking preorders for its $20,000 home robot, Neo, which wears clothes, does dishes, and fetches snacks from the fridge. Figure AI introduced its Figure 03 humanoid robot, which also does chores. Sunday Robotics said it would have fully autonomous robots making coffee in beta testers’ homes next year.

So far, we’ve seen a lot of demos of these AI-powered home robots and promises from the industrial humanoid makers, but not much in the way of a new global economic order. Demos of home robots, like the 1X Neo, have relied on human operators, making these automatons, in practice, more like puppets. Reports suggest that Figure AI and Apptronik have only one or two robots on manufacturing floors at any given time, usually doing menial tasks. That’s a proof of concept, not a threat to the human workforce.

You can think of all these robots as the physical embodiment of AI, or just embodied AI. This is what happens when you put AI into a physical system, enabling it to interact with the real world. Whether that’s in the form of a humanoid robot or an autonomous car, it’s the next frontier for hardware and, arguably, technological progress writ large.

Embodied AI is already transforming how farming works, how we move goods around the world, and what’s possible in surgical theaters. We might be just one or two breakthroughs away from walking, talking, thinking machines that can work alongside us, unlocking a whole new realm of possibilities. “Might” is the key word there.

“If we’re looking for robots that will work side by side with us in the next couple of years, I don’t think it will be humanoids,” Daniela Rus, director of CSAIL, told me not long after I left Tedrake’s lab. “Humanoids are really complicated, and we have to make them better. And in order to make them better, we have to make AI better.”

So to understand the gap between the hype around humanoids and the technology’s real promise, you have to know what AI can and can’t do for robots. You also, unfortunately, have to try to understand what Elon Musk has been up to at Tesla for the past five years.

It’s still embarrassing to watch the part of the Tesla AI Day presentation in 2021 when a person dressed in a robot costume appears on stage dancing to dubstep music. Musk eventually stops the dance and announces that Tesla, “a robotics company,” will have a prototype of a general-purpose humanoid robot, now known as Optimus, the following year. Not many people believed him, and now, years later, Tesla still has not delivered a fully functional Optimus. Never afraid to make a prediction, Musk told audiences at Davos in January 2026 that Tesla’s robot will go on sale next year.

“People took him seriously because he had a great track record,” said Ken Goldberg, a roboticist at the University of California, Berkeley, and co-founder of Ambi Robotics. “I think people were inspired by that.”

You can imagine why people got excited, though. With the Optimus robot, Elon Musk promised to eliminate poverty and offer shareholders “infinite” profits. He said engineers could effectively translate Tesla’s self-driving car technology into software to power autonomous robots that work in factories or help around the house. It’s a version of the same vision humanoid robotics startups are chasing today, albeit colored by several years of Musk’s unfulfilled promises.

We now know that Optimus struggles with a lot of the same problems as other attempts at general-purpose humanoids. It often requires humans to remotely operate it, and it struggles with dexterity and precision. The 1X Neo, likewise, needed a human’s help to open a refrigerator door and collapsed onto the floor in a demo for a New York Times journalist last year. The hardware seems capable enough. Optimus can dance, and Neo can fold clothes, albeit a bit clumsily. But they don’t yet understand physics. They don’t know how to plan or to improvise. They certainly can’t think.

“People in general get too excited by the idea of the robot and not the reality,” said Rodney Brooks, co-founder of iRobot, makers of the Roomba robot vacuum. Brooks, a former CSAIL director, has written extensively and skeptically about humanoid robots.

Clearly, there’s a gap between what’s happening in research labs and what’s being deployed in the real world. Some of the optimism around humanoids is based on good science, though. In 2023, Tedrake coauthored a landmark paper with Tony Zhao, co-founder and CEO of Sunday Robotics, that outlined a novel method for training robots to move like humans: people perform a task while wearing sensor-laden gloves, and the data those sensors collect trains an AI model that lets the robot figure out how to do the task itself. This complemented work Tedrake was doing at the Toyota Research Institute, which applied the same kinds of methods AI models use to generate images to generating robot behavior instead. You’ve heard of large language models, or LLMs. Tedrake calls these large behavior models, or LBMs.

It makes sense. By watching humans do things over and over, these AI models collect enough data to generate new behaviors that can adapt to changing environments. Folding laundry is a popular example of a task that requires nimble hands and a better brain. If a robot picks up a shirt and the fabric flops down in an unexpected way, it needs to figure out how to handle that uncertainty. You can’t simply program it to know what to do when there are so many variables. You can, however, teach it to learn.

That’s what makes the lemonade demo so impressive. Some of Rus’s students at CSAIL have been teaching a humanoid robot named Ruby to make lemonade — something that you might want a robot butler to do one day — by wearing sensors that measure not only the movements but the forces involved. It’s a combination of delicate movements, like pouring sugar, and strong ones, like lifting a jug of water. I watched Ruby do this without spilling a drop. It hadn’t been programmed to make lemonade. It had learned.

The real challenge is getting this method to scale. One way is simply to brute-force it: Employ thousands of humans to perform basic tasks, like folding laundry, to build foundation models for the physical world. Foundation models are massive AI models trained on huge datasets that can be adapted to specific tasks like generating text, images, or, in this case, robot behavior. You can also get humans to teleoperate countless robots in order to train these models. These so-called arm farms already exist in warehouses in Eastern Europe, and they’re about as dystopian as they sound.

Another option is YouTube. There are a lot of how-to videos on YouTube, and some researchers think that feeding them all into an AI model will provide enough data to give robots a better understanding of how the world works. These two-dimensional videos are obviously limited, if only because they can’t tell us anything about the physics of the objects in the frame. The same goes for synthetic data, which involves a computer rapidly and repeatedly carrying out a task in a simulation. The upside here, of course, is more data, more quickly. The downside is that the data isn’t as good, especially when it comes to physical forces like friction and torque, which also happen to be the most important for robot dexterity.

“Physics is a tough task to master,” Brooks said. “And if you have a robot, which is not good with physics, in the presence of people, it doesn’t end well.”

[Illustration: a robot butler tripping up some stairs, food and drinks flying everywhere. Janik Söllner for Vox]

That’s not even taking into account the many other bottlenecks facing robotics right now. Components have gotten cheaper — you can buy a humanoid robot right now for less than $6,000, compared to the $75,000 it cost to buy Boston Dynamics’ small, four-legged robot Spot five years ago — but batteries remain a major constraint, limiting the run time of most humanoids to two to four hours.

Then you have the problem of processing power. The AI models that can make humanoids more human require massive amounts of compute. If that’s done in the cloud, you’ve got latency issues, preventing the robot from reacting in real time. And, to tie all the other constraints into a tidy bundle: the AI is just not good enough yet.

If you trace the history of AI and the history of robotics back to their origins, you’ll see a braided line. The two technologies have intersected time and again since the birth of the term “artificial intelligence” at a Dartmouth research workshop in the summer of 1956. Then, half a century later, things started heating up on the AI front, when advances in machine learning and powerful processors called GPUs — the things that have now made Nvidia a $5 trillion company — ushered in the era of deep learning. I’m about to throw a few technical terms at you, so bear with me.

Machine learning is a type of AI. It’s when algorithms look for patterns in data and make decisions without being explicitly programmed to do so. Deep learning takes it to another level with the help of a machine learning model called a neural network. You can think of a neural network, a concept that’s even older than AI, as a system loosely modeled on the human brain that’s made up of lots of artificial neurons that do math problems. Deep learning uses multilayered neural networks to learn from huge datasets and to make decisions and predictions. Among other accomplishments, neural networks have revolutionized computer vision, improving perception in robots.
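To make “artificial neurons that do math problems” concrete, here’s a minimal sketch in Python. It is illustrative only — the weights are random, not a trained model — but it shows the basic mechanics: each neuron computes a weighted sum of its inputs and passes the result through a simple nonlinearity, and stacking layers of neurons gives you a deep network.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # A common nonlinearity: negative values become zero
    return np.maximum(0, x)

# Random (untrained) layer weights: 4 inputs -> 8 hidden neurons -> 2 outputs
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 2))

def forward(x):
    hidden = relu(x @ W1)  # first layer of artificial neurons
    return hidden @ W2     # second layer produces the prediction

x = rng.normal(size=4)     # e.g., four pixel or sensor values
print(forward(x))          # two output scores
```

Training is the process of nudging those weight matrices until the outputs become useful; with millions of neurons instead of ten, this is the machinery behind modern computer vision.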

There are different architectures for neural networks that can do different things, like recognize images or generate text. One is called a transformer. The “GPT” in ChatGPT stands for “generative pre-trained transformer,” which is a type of large language model, or LLM, that powers many generative AI chatbots. While you’d think LLMs would be good at making robots think, they really aren’t. Then there are diffusion models, which are often used for image generation and, more recently, making robots appear to think. The framework that Tedrake and his coauthors described in their 2023 research into using generative AI to train robots is based on diffusion.

Three things stand out in this very limited explanation of how AI and robots get along. The first is that deep learning requires a massive amount of processing power and, as a result, a huge amount of energy. The second is that the latest AI models work with the help of stacks of neural networks whose millions or even billions of artificial neurons do their magic in mysterious and usually inefficient ways. The third is that, while LLMs are good at language, and diffusion models are good at images, we don’t have any models that are good enough at physics to send a 200-pound robot marching into a crowd to shake hands and make friends.

As Josh Tenenbaum, a computational cognitive scientist at MIT, explained to me recently, an LLM can make it easier to talk to a robot, but it’s hardly capable of being the robot’s brains. “You could imagine a system where there’s a language model, there’s a chatbot, you want to talk to your robot,” Tenenbaum said. “Under the hood, what’s actually going on should be something much more like our own brains and minds or other animals, not just humans in terms of how it’s embodied and deals with the world.”

So we need better AI for robots, if not in general. Scientists at CSAIL have been working on a couple of physics-inspired and brain-like technologies they’re calling liquid neural networks and linear optical networks. They both fall into the category of state-space models, which are emerging as an alternative or rival to transformer-based models. Whereas transformer-based models look at all available data to identify what’s important, state-space models are much more efficient, as they maintain a summary of the world that gets updated as new data comes in. It’s closer to how the human brain works.
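A rough sketch of that difference, with made-up numbers (the matrices below are hypothetical, not anything from CSAIL’s actual models): a state-space model carries a small state vector forward and does a constant amount of work per new input, rather than re-reading the entire history the way a transformer’s attention does.

```python
import numpy as np

rng = np.random.default_rng(1)
A = 0.1 * rng.normal(size=(3, 3))  # how the state evolves on its own
B = 0.1 * rng.normal(size=(3, 5))  # how each new observation nudges the state

state = np.zeros(3)                # the compact "summary of the world"
for _ in range(100):               # a stream of 100 incoming observations
    x = rng.normal(size=5)
    state = A @ state + B @ x      # constant work per step, no matter how
                                   # long the history has grown
print(state.shape)                 # the summary stays tiny
```

The design choice is the point: a transformer’s cost grows with the length of the history it attends over, while this update touches only a fixed-size state, which is why state-space models are attractive for robots that must react in real time.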

To be perfectly honest, I’d never heard of state-space models until Rus, the CSAIL director, told me about them when we chatted in her office a few weeks ago. She pulled up a video to illustrate the difference between a liquid neural network and a traditional model used for self-driving cars. In it, you can see how the traditional model focuses its attention on everything but the road, while the newer state-space model only looks at the road. If I’m riding in that car, by the way, I want the AI that’s watching the road.

“And instead of a hundred thousand neurons,” Rus says, referring to the traditional neural network, “I have only 19.” And here’s where it gets really compelling. She added, “And because I have only 19, I can actually figure out how these neurons fire and what the correlation is between these neurons and the action of the car.”

You may have already heard that we don’t really know how AI works. If newer approaches bring us a little bit closer to comprehension, it certainly seems worth taking them seriously, especially if we’re talking about the kinds of brains we’ll put in humanoid robots.

When a humanoid robot loses power, when electricity stops flowing to the motors that keep it upright, it collapses into a heap of heavy metal parts. This can happen for any number of reasons. Maybe it’s a bug in the code or a lost Wi-Fi connection. And when they’re on, humanoids are full of energy as their joints fight gravity or stand ready to bend. If you imagine being on the wrong side of that incredible mechanical power, it’s easy to doubt this technology.

Some companies that make humanoid robots also admit that they’re not very useful yet. They’re too unreliable to help out around the house, and they’re not efficient enough to be helpful in factories. Furthermore, much of the money spent developing robots goes toward making them safe around people. When it comes to deploying robots that can contribute to productivity, that can participate in the economy, it makes a lot more sense to make them highly specialized and not human-shaped.

The embodied AI that will transform the world in the near future is what’s already out there. In fact, it’s what’s been out there for years. Early self-driving cars date back to the 1980s, when Ernst Dickmanns put a vision-guided Mercedes van on the streets of Munich. Researchers from Carnegie Mellon University got a minivan to drive itself across the United States in 1995. Now, decades later, Waymo is operating its robotaxi service in a half-dozen American cities, and the company says its AI-powered cars actually make the roads safer for everyone.

Then there are the Roombas of the world, the robots that are designed to do one thing and keep getting better at it. You can include the vast array of increasingly intelligent manufacturing and warehouse robots in this camp too. By 2027, the year Elon Musk is on track to miss his deadline to start selling Optimus humanoids to the public, Amazon will reportedly replace more than 600,000 jobs with robots. These would probably be boring robots, but they’re safe and effective.

Science fiction promised us humanoids, however. Pick an era in human history, in fact, and someone was dreaming about an automaton that could move like us, talk like us, and do all our dirty work. Replicants, androids, the Mechanical Turk — all these humanoid fantasies imagined an intelligent synthetic self.

Reality gave us package-toting platforms on wheels roving around Amazon warehouses and sensor-heavy self-driving cars clogging San Francisco streets. Even the skeptics think that humanoids will be possible in time. Probably not in five years, but maybe in 50, we’ll get artificially intelligent companions who can walk alongside us. They’ll take baby steps.

“Good robots are going to be clumsy at first, and you have to find applications where it’s okay for the robot to make mistakes and then recover,” Tedrake said. “Let’s not do open-heart surgery right away with these things. This is more like folding laundry.”


How BYD Got EV Chargers to Work Almost as Fast as Gas Pumps


Somehow, the whole thing got even faster. Earlier this month, Chinese automaker BYD announced that its Flash Chargers, first rolled out a year ago, can now charge some electric vehicle batteries from around 10 to 70 percent in five minutes, and from 10 to full in about nine. That’s more than 600 miles of range in the time it takes to order a cappuccino and leave a nice tip.

The new BYD chargers can add miles super quickly because they deliver up to 1,500 kilowatts (kW) per charge. Compare that to the 350 kW “hyper-fast” chargers seen more typically in the US, which can top up 80 percent of a battery in 15 to 25 minutes, and the full thing in closer to 40.
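Some back-of-the-envelope math on those figures. The sketch below uses ideal peak power; real charging sessions taper off as the battery fills, so the energy actually delivered is lower.

```python
def kwh_delivered(power_kw, minutes):
    """Energy delivered at a constant power for a given time."""
    return power_kw * minutes / 60

# BYD Flash Charger at its 1,500 kW peak, held for five minutes:
print(kwh_delivered(1500, 5))   # 125.0 kWh
# A typical 350 kW US "hyper-fast" charger for twenty minutes:
print(kwh_delivered(350, 20))   # about 116.7 kWh
```

In other words, at peak rates a five-minute Flash Charge moves roughly as much energy as twenty minutes on a 350 kW unit, which is the entire pitch.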

BYD’s move brings the charging experience closer to the auto industry’s holy grail: comparable to what drivers expect when they fill up their gas tanks. Survey after survey finds that potential EV buyers are worried about range and charging; speeding things up might go some way toward alleviating fears and getting more drivers seriously thinking about the plug. BYD, which doesn’t sell in the US because of high tariffs and national security concerns, has built more than 4,000 of the chargers in China so far, with plans to construct some 16,000 more by the end of the year, plus 2,000 in Europe.

There is, naturally, a catch—plus a few reasons to believe that a super fast charger won’t solve all of the world’s charging issues.

Right now, only one car will be able to take advantage of the Flash Chargers’ hyperspeed in Europe: BYD’s Denza Z9GT, due to make its Paris debut next month. That’s because the EV comes with the newest generation of BYD’s Blade battery. Making its own cars, its own chargers, and its own batteries gives BYD a significant leg-up in charging speeds over most global competitors, as the tech works together. (Tesla has also vertically integrated the charging experience.) To charge at such high speeds, the vehicles’ software and wiring need to be built to handle that much electric current.

BYD didn’t respond to WIRED’s questions, but according to Chinese-language media, the newest Blade battery uses a lithium manganese iron phosphate (LMFP) chemistry to increase energy density. (The last version used lithium iron phosphate, or LFP, which trades some energy density for durability and fast-charging capability.) BYD says it has redesigned all of its battery elements, including the electrodes that store and release energy, the electrolytes that allow for ion transfer between electrodes during charging and discharging cycles, and the separators that keep the electrodes apart while still allowing ions to flow.

This all ups the battery’s energy density by 5 percent compared to what it touted as the latest and greatest last year. BYD says the Denza Z9GT can hit more than 620 miles per charge. (Real-life ranges tend to be a bit lower than claims by auto companies.)

The charger itself, a slick, teal T-shaped system that evokes — you guessed it — a gas station pump, belies its complexity. Dishing out more than a megawatt from the electric grid is no small feat, in both the hardware and the construction involved. BYD says it will make the rollout of the new chargers a little easier by incorporating them into existing BYD charging banks, so the infrastructure isn’t starting from scratch. Beyond that, BYD says it will use storage batteries at the charging sites to supplement the electrical grid, so the grid isn’t overloaded.

The Limits

Despite these impressive speeds, don’t expect BYD’s new system to change the game for EVs. “It’s a good, marginal improvement in technology,” says Gil Tal, who directs the EV Research Center at UC Davis’ Institute of Transportation Studies. “It’s not something that changes most people’s daily life.”

The first reason is practical. Today, most US EV owners have access to at-home charging and only use public fast-chargers on the occasional trip that stretches their 250-mile range. For those people, the difference between charging in 20 minutes and in 5 minutes might be close to negligible.


Today’s NYT Wordle Hints, Answer and Help for March 21 #1736


Looking for the most recent Wordle answer? Click here for today’s Wordle hints, as well as our daily answers and hints for The New York Times Mini Crossword, Connections, Connections: Sports Edition and Strands puzzles.


Today’s Wordle puzzle has mostly common letters, so you might get it right away. If you need a new starter word, check out our list of which letters show up the most in English words. If you need hints and the answer, read on.

Read more: New Study Reveals Wordle’s Top 10 Toughest Words of 2025

Today’s Wordle hints

Before we show you today’s Wordle answer, we’ll give you some hints. If you don’t want a spoiler, look away now.

Wordle hint No. 1: Repeats

Today’s Wordle answer has no repeated letters.

Wordle hint No. 2: Vowels

Today’s Wordle answer has one vowel.

Wordle hint No. 3: First letter

Today’s Wordle answer begins with S.

Wordle hint No. 4: Last letter

Today’s Wordle answer ends with K.

Wordle hint No. 5: Meaning

Today’s Wordle answer can refer to something that is smooth and glossy.

TODAY’S WORDLE ANSWER

Today’s Wordle answer is SLICK.

Yesterday’s Wordle answer

Yesterday’s Wordle answer, March 20, No. 1735, was OASIS.

Recent Wordle answers

March 16, No. 1731: DRAMA

March 17, No. 1732: CLASP

March 18, No. 1733: AMPLY

March 19, No. 1734: REHAB

What’s the best Wordle starting word?

Don’t be afraid to use our tip sheet ranking all the letters in the alphabet by frequency of use. In short, you want starter words that lean heavily on E, A and R, and don’t contain Z, J and Q.

Some solid starter words to try:

ADIEU

TRAIN

CLOSE

STARE

NOISE


Network 4K UHD Review: Mad as Hell and Still Watching


Some movies age gracefully. Others age into prophecy. Network did the latter and then some. When Sidney Lumet released this ferocious satire in 1976 from a venomously brilliant script by Paddy Chayefsky, audiences didn’t laugh it off as some cute exaggeration about television news. They squirmed. The film landed like a brick through the newsroom window: biting, unnerving, and uncomfortably close to the truth even then. Nearly fifty years later it feels less like satire and more like a documentary with better lighting. Cable news shouting matches. Personality-driven commentary replacing journalism. A nonstop outrage cycle designed to keep viewers emotionally hooked. Chayefsky didn’t just understand television. He understood America’s appetite for spectacle long before the algorithms figured it out.

The story kicks off when aging news anchor Howard Beale, played with electrifying intensity by Peter Finch, learns he’s about to be fired because the ratings stink. Instead of fading quietly into retirement, Beale cracks on live television and promises to kill himself on the air during the next broadcast. Not exactly the sort of programming decision that wins industry awards. But something strange happens. Viewers tune in. Ratings spike. Suddenly the breakdown is good television. Enter Diana Christensen, played with ice-cold ambition by Faye Dunaway, a programming executive who sees Beale not as a problem but as a product. Soon he isn’t a journalist anymore. He’s a spectacle. A televised rage prophet urging viewers to open their windows and shout, “I’m mad as hell and I’m not going to take this anymore!” America listens. The ratings explode. The network cashes in. If this all feels familiar, it should; we’ve been living inside that feedback loop for decades.

The emotional backbone of the film belongs to William Holden as Max Schumacher, a veteran newsman clinging to the dying belief that journalism should still mean something. Poor Max. He’s the last adult in a room full of ratings addicts. One of the film’s most devastating scenes arrives when Max confesses his affair with Christensen to his wife, played by Beatrice Straight. Straight detonates with decades of frustration and heartbreak in a performance so raw it feels almost invasive to watch. The scene lasts only a few minutes but it anchors the film’s wild satire in something painfully real. Straight won an Academy Award for it, and rightly so.

For a moment the movie stops being about television and becomes about the collateral damage people leave behind while chasing ambition; the spouses ignored, the families sacrificed, the human wreckage left behind while the ratings climb. We’ve seen the modern version enough times: star anchors imploding, cable personalities flaming out on air, influencers chasing the next outrage clip while the cameras keep rolling. Careers burn, reputations collapse, and the audience moves on before the next commercial break. Lumet and Chayefsky knew the truth the media machine still pretends not to see or care about: behind every viral moment there’s usually someone paying the bill while the network or platform counts the clicks.

Then comes the speech that still rattles around in your skull long after the credits roll. Corporate executive Arthur Jensen, played with thunderous authority by Ned Beatty, summons Beale to a dimly lit boardroom and calmly explains how the world actually works. Nations are illusions. Democracy is window dressing. The real power belongs to multinational corporations. In 1976 Jensen name-checked IBM, Exxon, and AT&T. Today you could easily swap those out for Apple, Amazon, Google, Microsoft, and Meta and the speech would land even harder. Chayefsky understood that television news wasn’t simply reporting events anymore, it was becoming part of the corporate machine that shaped them.

And that’s where Network starts feeling downright uncomfortable in 2026. The film predicted the outrage economy decades before anyone put a label on it. Turn on the television today and it’s emotional theater twenty-four hours a day. Panels yelling. Personalities performing. Headlines engineered to keep viewers angry enough to stay glued to the screen. The business model is simple: outrage drives engagement and engagement drives revenue. Diana Christensen figured that out in about thirty seconds. Calm reporting doesn’t trend. Anger does. Journalism slowly mutated into entertainment, and entertainment eventually became politics.

Watching Network today is like opening a time capsule that contains tomorrow’s headlines. It remains wickedly funny, brutally intelligent, and powered by one of the sharpest scripts ever written about American media culture. But what really hits is how little of it feels exaggerated anymore. Chayefsky saw the trajectory clearly: once outrage becomes profitable, it becomes irresistible. The cameras keep rolling. The ratings still rule everything. And somewhere in the digital noise of modern media, Howard Beale is still shouting into the void, mad as hell, begging the rest of us to wake up before the show consumes everything.

Criterion gives Network the kind of restoration treatment the film has long deserved. The new 4K digital restoration presents the movie in Dolby Vision HDR on a dedicated 4K UHD disc, with the film’s original uncompressed monaural soundtrack preserved intact. Lumet never intended this to be a sonic spectacle. This is a film powered by dialogue, and the restored mono track keeps Paddy Chayefsky’s machine-gun script front and center where it belongs.

The restoration comes from a new 4K scan of the original 35mm camera negative and is presented in the film’s original 1.85:1 aspect ratio. Dolby Vision improves contrast and shadow detail, but the image still looks like film from the mid-1970s should look. Grain is intact. The newsroom lighting remains harsh and clinical. The endless televisions scattered around the sets finally reveal more texture and depth than older transfers ever managed.

Audio stays faithful to the original theatrical presentation. The uncompressed mono track is clean and focused, which matters because this movie lives and dies by the rhythm of Chayefsky’s dialogue. From Howard Beale’s televised sermons to Arthur Jensen’s thunderous boardroom lecture, every word lands with the bite Lumet intended. Criterion did not try to reinvent Network. They cleaned it up, respected the source, and delivered the sharpest home video presentation this film has ever had.

Criterion also includes a strong slate of supplemental material. Director Sidney Lumet provides a feature-length audio commentary offering insight into the film’s production, the performances, and the controlled chaos of Chayefsky’s dialogue-heavy script. The set also includes Paddy Chayefsky: Collector of Words (2025), a feature-length documentary by Matthew Miele that explores the legendary screenwriter’s life and influence. For those who want deeper historical context, The Making of Network (2006), a six-part documentary by Laurent Bouzereau, takes viewers inside the writing, casting, and cultural impact of the film.

Movie Details

  • STUDIO: United Artists
  • FORMAT: Ultra HD 4K Blu-ray (February 24, 2026)
  • THEATRICAL RELEASE YEAR: 1976
  • ASPECT RATIO: 1.85:1
  • HDR FORMATS: Dolby Vision HDR
  • AUDIO FORMAT: LPCM Mono (48kHz, 24-bit)
  • LENGTH: 121 mins.
  • MPAA RATING: R
  • DIRECTOR: Sidney Lumet
  • STARRING: William Holden, Faye Dunaway, Peter Finch, Robert Duvall, Wesley Addy, Ned Beatty, Beatrice Straight

Our Ratings

★★★★★★★★★★ Picture

★★★★★★★★★★ Sound

★★★★★★★★★★ Extras

PicoPal Is the Transparent Game Boy Color Remake Nobody Knew They Needed

PicoPal Game Boy Color Handheld Mod Console
Gamers who remember sliding cartridges into their old Game Boy Color will feel right at home when they pick up the PicoPal. Its clear plastic shell shows off all of the internal components while keeping the classic shape and button layout. Small LEDs illuminate the directional pad and action buttons with adjustable brightness, ideal for late-night sessions when all you want to do is keep playing. And a 2.6-inch screen front and center renders crisp colors on games that looked tiny on vintage Game Boy displays.



Hold the PicoPal and you’ll be surprised at how light it is and how easily it slips into a pocket without bulging. The buttons feel just right, with the firm tactile response many players remember from the originals. The speakers are angled forward for clear sound, or you can plug in headphones for private listening. A simple USB-C port on the side handles charging and firmware updates.

At the center of it all is a Raspberry Pi Pico 2 microcontroller, which its creators have managed to overclock to 300 megahertz, enough to run Game Boy and Game Boy Color titles without lag. A spare ESP32 chip sits on the board, reserved for future wireless features. Games load directly from a microSD card, which can hold up to two terabytes if properly formatted, and the emulation software builds on existing open-source projects, with a few tweaks to keep a wide range of titles running smoothly.

It’s simple to navigate the menu and pick a game, or jump straight back into the last one you played. You can save your progress at any time and resume where you left off, even after powering the device off and on again. A deep-sleep option keeps your last position ready to go with little to no battery drain. Press one button when you turn it on and it even works as a full-fledged MP3 player, playing music straight from the same card with pleasing audio quality.
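
Conceptually, that resume-anywhere trick boils down to serializing the emulator’s state before power-off and reading it back at boot. A minimal Python sketch of the idea — the state fields here are hypothetical, not the PicoPal’s actual format:

```python
# Toy save-state round trip: persist the emulator's state so play can
# resume after the device powers off. The fields are illustrative only;
# the PicoPal's real on-card format is not documented here.
import json
import os
import tempfile

state = {"rom": "puzzle.gbc", "frame": 12345, "wram": [0] * 8}

save_path = os.path.join(tempfile.gettempdir(), "picopal_state.json")
with open(save_path, "w") as f:
    json.dump(state, f)            # written before entering deep sleep

with open(save_path) as f:
    restored = json.load(f)        # read back on the next power-on

assert restored == state           # play resumes exactly where it stopped
```

On a microcontroller the same round trip would target the microSD card rather than a temp directory, but the serialize-then-restore shape is identical.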

Battery life varies, lasting anywhere from two to seventeen hours depending on screen brightness, volume, and whether the button lights are on. Most users appear to get approximately nine hours with the settings dialed down slightly. A solid DAC-and-amplifier combo produces clean sound with no hiss or muddy bass. There’s even an IMU on board that can measure motion, possibly for future games or simply to show your G-forces during a car ride.

Other nice touches include saving screenshots as small files on the card and a fast-forward tool for sections that get repetitive. You can also choose from thirteen color palettes or stick with plain greyscale. A quick button combination brings up the on-screen menu so you can change brightness and other settings on the fly. The cartridge slot is dormant for now, but there’s plenty of room for future additions; you never know what they may come up with next.

For the truly dedicated makers, there’s even more on offer: full open-source schematics, firmware, and a comprehensive bill of materials, so you can study the design, tweak the code, or even build your own unit. Future updates are expected to bring the ESP32 to life for wireless connectivity and the like, and real-time clock support keeps the time accurate even after long stretches powered off.

iOS 26.4 brings mood-based Music widgets to your iPhone’s home screen

If you’ve ever unlocked your iPhone at midnight, half asleep and hunting for a sleep playlist, Apple’s iOS 26.4 can make life easier for you.

The iOS 26.4 release candidate is here, and among several additions, it introduces something called Ambient Music widgets. These are mood-based playlists that you can play with a single tap on your home screen (on the widget). 

What moods can you choose from?

So, from now on, you don’t have to open the app, search for the required playlist, and go through the three-step journey through Apple Music’s menus. The widgets cover four broad mood categories: Chill, Productivity, Sleep, and Wellbeing. 

You also get two widget sizes to pick from: the smaller widget features just one playlist (of your choice), while the larger version gives you one-tap access to all four moods at once. Both widgets are built on the Ambient Music feature, which first appeared in the Control Center. 

However, now it rests front and center on your home screen, where it’s hard to miss. 

Can you customize what plays?

Yes, and Apple has made the process quite seamless. Apple includes built-in playlist presets for each mood. Sleep, for instance, offers options like Sleep Sound, Bedtime Beats, Sound Bath, and Piano Sleep.

However, if the curated options aren’t your thing, you can set your own custom playlists by long-pressing the widget and tapping “Edit Widget.” And before you even ask, the Ambient Music widget only works with Apple Music; it won’t benefit Spotify users. 

The Ambient Music widgets are just a tiny part of the new iOS 26.4 update. The release candidate also brings a Playlist Playground feature, eight new emojis, urgent reminder flagging in the Reminders app, and a Purchase Sharing update for family users. 

Microsoft is cutting down Copilot “bloat” in Windows 11

Microsoft is starting to rethink how much AI it really needs inside Windows 11, and that rethink includes dialing back Copilot. As part of its broader push to improve Windows quality, the company is reducing the number of Copilot entry points across the OS and its apps.

According to Microsoft, this rollback will begin with apps like Photos, Notepad, Widgets, and the Snipping Tool, where Copilot integrations had started to feel excessive. The change is part of a wider shift in Microsoft’s strategy of moving away from aggressively embedding AI everywhere and toward integrating it only where it actually makes sense.

Why is Microsoft pulling back on Copilot?

Let’s be honest, most users weren’t exactly thrilled with Copilot integrations. Over the past year, Microsoft has pushed Copilot into almost every corner of Windows, from the taskbar to system apps and even experimental features like notifications. But that approach hasn’t landed well with everyone.

Critics have pointed out that Copilot often felt forced, difficult to remove, and not always useful, especially when it showed up in places users didn’t ask for. Even internally, Microsoft seems to be acknowledging the feedback. The new statement suggests the company is now aiming to be more “intentional” about where Copilot appears, focusing on genuinely helpful experiences instead of everywhere by default.

What exactly is changing in Windows 11?

The biggest shift is simple: less AI clutter. Microsoft is reducing Copilot integrations across multiple apps and has already scrapped or scaled back some planned features, including deeper system-level integrations in areas like Settings, notifications, and File Explorer.

Advertisement

This doesn’t mean Copilot is going away. Instead, the company wants it to feel more like a useful assistant rather than a constant presence. In practical terms, that could mean fewer pop-ups, fewer forced integrations, and more optional AI features. Recent updates also show Microsoft stepping back from automatically pushing Copilot into places like the Start menu or system notifications, signaling a broader course correction.

4 Useful Apps Designed To Help Improve Your Health And Wellness

Whether you recently started a workout plan or you’re looking for ways to unwind after a stressful week at work, there are tons of apps out there that can help. We’re all aware of the usual fitness-tracking apps that come bundled with the best smartwatches and budget fitness trackers. However, these apps are quite generic and can be overwhelming for those who simply want assistance rather than random numbers and stats all over the screen. This is exactly why we went down the rabbit hole to find useful, interesting apps designed to help you manage your health and wellness.

These apps not only aid in improving your physical health but also prioritize your mental health. After all, both aspects are equally important. Moreover, the apps I’ve chosen make the journey fun rather than boring with attractive visuals, games, or even communities where users can interact with one another. I’ve used these apps personally for over a month to see if they had an impact on my sense of well-being. Instead of the usual bunch of apps like Strava and MyFitnessPal, I’ve included lesser-known apps with interesting and effective features. Moreover, all the apps mentioned on this list are platform-agnostic, so you can use them whether you’re on team Android or inside Apple’s walled garden.

Impulse

When it comes to overall wellness, we often sideline our mental health. That’s exactly where an app like Impulse (available on both Android and iOS) steps in. It is a brain-training platform designed to sharpen cognitive skills such as memory, attention, problem-solving, logic, and speed. But don’t fret, it isn’t rocket science or grueling academic work. Instead, Impulse replaces tedious study with a series of short, highly entertaining puzzle games. For instance, there are games where you arrange numbers in ascending order, memory tests asking you to recall if a particular tile had a ghost image, and various time-based challenges. Who wouldn’t like improving their brain health under the guise of fun?

The clean, user-friendly interface makes it the perfect game to play while commuting on the subway or just killing time waiting in a queue. I sometimes catch myself mindlessly scrolling on my phone, either watching TikTok or Instagram Reels. I started using Impulse to break this habit, and I can confidently say I am now much more mindful of my screen time. In an era dominated by doomscrolling and brain rot, replacing even a few minutes of that mindless screen time with something that actually keeps your mind sharp feels incredibly valuable.

While the app lets you play a few games for free, you’ll have to pay for the premium tier to get the full experience. The paid plan is where Impulse really shines. It completely removes ads, grants access to the entire library of games, and unlocks detailed progress-tracking so you can visualize your cognitive growth over time.

Hevy

While most smartwatches are good at tracking runs and other activities like cycling and swimming, they can’t log the specific weight you lifted or the number of times you repeated a certain exercise. Hevy solves that exact problem. It’s a clean, intuitive workout tracker that lets you log sets, reps, and weights with just a few taps. It even features automatic rest timers and plate calculators to take the mental math out of lifting.

I used to catch myself zoning out between sets, sometimes mindlessly refreshing my feed and losing track of time. Having Hevy open on my phone helps me focus on my workout and stops me from taking unnecessarily long breaks because I got distracted by my phone. Hevy offers a clean graphical chart of your workout, focus areas, weight lifted, and reps that you can share with your trainer or workout buddies.

The app offers a generous free tier, letting you log unlimited workouts and create a few staple routines. Most people will be happy using this, so you don’t really have to shell out any extra bucks. Hevy also has a smartwatch version, so you can use it straight from your wrist if you have an Apple Watch or a WearOS smartwatch. Among all the apps for weightlifters, Hevy stands out for its intuitive and straightforward interface. From bench presses to push-ups, this is my go-to app for logs. It’s among the best apps for health and fitness — as proven by excellent ratings on both the Google Play Store and Apple App Store.

Headspace

During your daily hustle, finding a quiet moment and making the most of it can be challenging. That’s where Headspace comes into the picture. It’s a beautifully designed mindfulness platform that gets rid of the intimidating, mystical elements of meditation and makes it approachable to the masses. Whether you are looking for a quick breathing exercise to improve concentration or a guided course on managing anxiety episodes, the app breaks everything down into easy-to-follow sessions.

Another issue with increasing screen time and workload is poor sleep quality. I’ve found that using Headspace’s “Sleepcasts” — which are basically soothing ambient stories — works wonders to quiet a racing mind. It acts as a much-needed buffer between staring at a screen and actually getting restful sleep.

The biggest catch with both the Android and iOS versions of Headspace, however, is the cost. While you can try out a handful of introductory basics for free, the app locks its best content behind a paywall. Upgrading gives you the keys to their massive library of multi-week mindfulness courses, sleep aids, and curated focus music. If you struggle to switch off your brain at the end of the day, it’s a highly polished tool that delivers.

Pausa

While long-term meditation is great, sometimes you just need immediate relief when you experience unexpected stress spikes. Pausa is built for exactly those moments. It is a no-nonsense breathwork app designed to help you regulate your nervous system with the help of conscious breathing patterns. Pausa uses science-backed respiratory patterns — like box breathing — to actively lower your heart rate when things get overwhelming.
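
Box breathing itself is just a timed loop — inhale, hold, exhale, hold, each for the same count. A tiny Python sketch of the pattern (the four-second count is the textbook default, not necessarily what Pausa uses):

```python
# Generate a box-breathing schedule: four equal phases per cycle.
# The 4-second count is the common default; apps may vary it.

def box_breathing(cycles: int, seconds: int = 4):
    """Yield (phase, seconds) pairs for the requested number of cycles."""
    phases = ["inhale", "hold", "exhale", "hold"]
    for _ in range(cycles):
        for phase in phases:
            yield (phase, seconds)

schedule = list(box_breathing(cycles=2))
print(len(schedule), sum(s for _, s in schedule))  # 8 phases, 32 seconds total
```

A real app would pace a visual guide against these timings; the equal-length phases are the whole point of the technique.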

The interface is minimalistic, and the instructions are easy to follow, which is exactly what you need when you are feeling anxious. I’ve noticed that during a chaotic day, especially when work notifications are piling up and I’ve reached the end of every social media feed, taking just 2 minutes to follow Pausa’s visual breathing guide has helped me feel a lot calmer. It even has an SOS button for sudden moments of panic.

Like the others, Pausa operates on a freemium model. The free tier on Android and iOS gives you access to basic breathing exercises that are perfectly fine for occasional use. However, to unlock the app’s full potential, you need the premium plan. The paid version opens up specialized breathing techniques, a mood tracker that recommends specific exercises based on exactly how you are feeling, and advanced statistics to monitor your daily stress levels over time.

How we picked these apps

Our aim was to recommend apps that are unique and not widely known. Most people are aware of the usual fitness tracking apps that can track how many calories you burn in a day or how many steps you take, but the apps mentioned on this list aren’t as popular, yet they address more than basic physical issues. I’ve also included apps available on the Apple Watch and WearOS smartwatches, so that those of you who like to leave your phones behind can also take advantage of these services. Notably, all the apps have an average rating of 4.2 or higher on their respective marketplaces, with most of them having hundreds of thousands of reviews.

Widely used Trivy scanner compromised in ongoing supply-chain attack

Hackers have compromised virtually all versions of Aqua Security’s widely used Trivy vulnerability scanner in an ongoing supply-chain attack that could have wide-ranging consequences for developers and the organizations that rely on the tool.

Trivy maintainer Itay Shakury confirmed the compromise on Friday, following rumors and a thread, since deleted by the attackers, discussing the incident. The attack began in the early hours of Thursday. When it was done, the threat actor had used stolen credentials to force-push all but one of the trivy-action tags and seven setup-trivy tags to use malicious dependencies.

Assume your pipelines are compromised

A forced push is a git command that overrides a default safety mechanism protecting against overwriting existing commits. Trivy is a scanner that developers use to detect vulnerabilities and inadvertently hardcoded authentication secrets in the pipelines they use to build and deploy software updates. The scanner has 33,200 stars on GitHub, a figure that indicates wide adoption.
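
The reason a force-push is so dangerous here is that a git tag is a mutable pointer to a commit, while a commit hash is content-addressed and fixed. A toy Python model of the distinction — this greatly simplifies git’s real object model, but it is why pinning CI dependencies to a full commit SHA rather than a tag defeats this class of attack:

```python
# Toy model of git references: a tag maps a name to a commit hash and can
# be rewritten by a force-push; the hash itself is derived from the content
# and cannot change. Greatly simplified from git's actual object model.
import hashlib

def commit(content: str) -> str:
    """Content-addressed ID, like a git commit hash."""
    return hashlib.sha1(content.encode()).hexdigest()

refs = {"v0.33": commit("legitimate release")}
pinned = refs["v0.33"]                       # a SHA-pinned pipeline records this

refs["v0.33"] = commit("malicious payload")  # the force-push moves the tag

print(refs["v0.33"] == pinned)  # False: the tag now points at different code
```

A pipeline that resolved `trivy-action@0.33` by name picks up the new target; one pinned to the original 40-character SHA does not.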

“If you suspect you were running a compromised version, treat all pipeline secrets as compromised and rotate immediately,” Shakury wrote.

Security firms Socket and Wiz said that the malware, planted in 75 compromised trivy-action tags, thoroughly scours development pipelines, including developer machines, for GitHub tokens, cloud credentials, SSH keys, Kubernetes tokens, and whatever other secrets may live there. Once found, the malware encrypts the data and sends it to an attacker-controlled server.

The end result, Socket said, is that any CI/CD pipeline using software that references compromised version tags executes code as soon as the Trivy scan is run. Spoofed version tags include the widely used @0.34.2, @0.33, and @0.18.0. Version @0.35.0 appears to be the only one unaffected.

Why Don’t The Prices Rise At The Same Rate?

When the cost of oil goes up, immediate reactions among drivers in the U.S. range from annoyed head-shaking to full-blown panic as people rush to the pumps before the price climbs. But while it’s easy to get wrapped up in the chaos, the question of why fuel prices don’t immediately rise along with oil can be tricky to answer. In fact, the truth is nuanced.

The country’s existing gas supply provides a cushion against instant price hikes. Gasoline already in stock can delay increases, so businesses don’t mark up their gas at the first sign of rising oil prices. Additionally, as long as oil refineries are running normally without disruptions, there’s no immediate pressure to raise prices. However, as supplies run thin and need restocking from one location to the next, you can expect a difference at the pump. This is also part of the reason why gas stations sometimes have different prices.

Other factors play a part in the price gap between oil and gasoline as well, including demand. That’s why you sometimes see gas prices increase with warm weather as more people hit the road. Seasonal variations matter too: summer-blend gas is more expensive to produce because of its contents, which also pushes up the price. Refinery production costs can fluctuate as well, since different facilities use different technology. All of these factors go into what your gasoline will cost you.

Understanding gas prices beyond oil cost

Gasoline prices in the U.S. can vary by location, regardless of the underlying cost of oil. For example, prices tend to be higher in states and areas farther from oil refineries, ports, or pipelines, mostly because of transportation costs. There are also specific environmental requirements, like those in California, that mandate a gasoline formulation different from the rest of the country’s. This affects the cost of production, storage, and distribution, resulting in higher prices at the pump.

But if a retailer increases their gas prices without a justifiable reason in the U.S., they could be subject to civil or criminal fines, depending on their location. Many U.S. states and territories have anti-price gouging laws in place, designed to prevent such premature markups. 

In fact, aside from taxes and regulation, the U.S. government only gets involved during major supply disruptions. It does so through the Strategic Petroleum Reserve, the country’s emergency oil supply. The decision to release oil from the reserve is made by the president under federal law. When this happens, the oil is sold into the market to help keep supply stable. So while the government can intervene when things get tough, it doesn’t do so on a regular basis.



City of Seattle awards $455k in ‘Technology Matching Fund’ grants to support digital equity efforts

The TMF program is a partnership between the City of Seattle and community organizations improving digital literacy and skills for underserved communities. (City of Seattle Photo)

The City of Seattle is awarding $455,000 in Technology Matching Fund grants to help support 11 community organizations and their projects aimed at overcoming barriers to accessing and using technology.

The TMF grant program was started in 1997 to support community and nonprofit groups and improve digital equity. The Seattle Information Technology Dept. announced the list of winners on Thursday; their projects are expected to serve 20 different language groups by providing digital literacy training, devices, technical support, digital navigator services, and internet connectivity.

“Too many of our neighbors have been left behind by the digital divide, creating challenges for them to get an education, a higher-paying job, or find communities and express themselves,” Seattle Mayor Katie Wilson said in a statement.

To receive funding, applicants must match at least 25% of their request with cash, volunteer time, or other contributions. The community match for this year’s projects totals $168,136.90. 

The program attracted 53 applications for grants this year. Comcast and T-Mobile are corporate contributors to this year’s grants.

2026 award recipients: 

  • Creation Culture, Youth Graphic Design Career Pipeline Program, $8,935 
  • Ada Developers Academy, Ada Build Live: Community, $45,000 
  • Horn of Africa Services, Digital Access and Navigation for East African Immigrants and Refugees, $45,000 
  • Chinese Information and Service Center, CISC’s Touch Screen Pilot Project, $44,928 
  • Per Scholas Seattle, Expanding Access to Technology Career Training in Seattle, $45,000 
  • Friends of Little Saigon, Little Saigon Small Business Digital Literacy Project, $44,979 
  • The IF Project, WE THRIVE Digital Access Initiative, $45,000 
  • Villa Communitaria, Familias Digitales en Acción, $45,000 
  • Asian Counseling and Referral Service, Digital Literacy for the Community at ACRS, $45,000 
  • Renaissance 21, Project She/Her/HEALTH by STGA, $45,000  
  • Solid Ground Washington, Internet Access for Residents Exiting Homelessness, $41,266  
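
As a sanity check on the figures above, the listed awards can be summed and each applicant’s 25% minimum match computed directly (illustrative arithmetic only; the organization names and amounts are taken from the list):

```python
# Sum the 2026 TMF awards and compute the minimum 25% community match
# each applicant must bring, using the amounts from the article.
awards = {
    "Creation Culture": 8_935,
    "Ada Developers Academy": 45_000,
    "Horn of Africa Services": 45_000,
    "Chinese Information and Service Center": 44_928,
    "Per Scholas Seattle": 45_000,
    "Friends of Little Saigon": 44_979,
    "The IF Project": 45_000,
    "Villa Communitaria": 45_000,
    "Asian Counseling and Referral Service": 45_000,
    "Renaissance 21": 45_000,
    "Solid Ground Washington": 41_266,
}

total = sum(awards.values())
min_match = {org: amount * 0.25 for org, amount in awards.items()}

print(total)                               # 455108 — the $455k headline rounds this
print(round(min_match["The IF Project"]))  # 11250 minimum match on a $45,000 grant
```

The individual grants actually total $455,108, slightly above the rounded headline figure, and the reported $168,136.90 community match comfortably clears the 25% floor in aggregate.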
