When it comes to programming, you may think about use case first and environmental impact second, but you can prioritise both.
Programming is a great skill to have in your back pocket, and in 2026 a number of factors can affect which language you choose to learn: how easy it is to read and navigate, how difficult it is to pick up, the resources available to you and its overall usefulness for your career.
But in the modern era, with mounting concerns around ethics, the planet and sustainability goals, it is vital that we be climate-conscious where we can, and a great place to start is with how you code. But what exactly is green code and why should it become the norm for programmers in 2026?
Conscientious code
As it runs, software consumes energy, and the more complex a system, the more processing time and resources it will require. This often leads to increased carbon emissions, as a device essentially ‘works overtime’ to meet high output demands, consuming energy in large quantities. Sometimes, too, an organisation’s overly complicated infrastructure will waste more energy than a simpler, equally viable system would.
Green coding is an environmentally sustainable computing practice that aims to minimise the energy and resources consumed in processing code. Some organisations are turning to it as a means of meeting greenhouse emission reduction goals, as well as contributing to wider CSR and ESG targets. But with that in mind, how can programmers write sustainably?
Core concepts
The main differentiator between what could be considered standard programming and green code is simple: the amount of energy needed to process the lines of code. Lower energy consumption can be achieved by applying less energy-intensive principles to your work throughout the day, until it just becomes how you personally do things. It becomes the norm.
Multinational technology company IBM’s research supports the lean coding method, which places core emphasis on using the minimal amount of processing necessary for an end result or final application. It suggests that developers make an effort to reduce file size and eliminate unnecessarily long or slow code that tends to burn through resources.
For example, open-source code is typically designed for a range of applications and can contain functionality that is surplus to a user’s specific requirements. In such cases, a developer may have pulled in many files that won’t be part of their final output, but this redundant code still uses additional processing power, leading to excess carbon emissions.
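To make the lean idea concrete, here is a small, hedged Python sketch (my own illustration, not drawn from IBM’s research): both functions build the same report, but the second avoids the repeated full-string copies that make the first do quadratic work as the input grows, so it burns fewer CPU cycles and less energy on large inputs.

```python
def build_report_wasteful(lines):
    # Each += can copy the entire string built so far, so the
    # total work grows quadratically with the number of lines.
    report = ""
    for line in lines:
        report += line + "\n"
    return report

def build_report_lean(lines):
    # str.join makes a single pass and a single allocation:
    # same output, linear work.
    return "".join(line + "\n" for line in lines)
```

The outputs are identical; only the amount of processing differs, which is exactly the kind of saving lean coding targets.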
By becoming more aware of the impact you have and applying conscientious policies to your work, the need to act sustainably can become ingrained in everyday operations. It just takes a little focus and commitment at the beginning to develop what will hopefully become a natural and standard practice.
Programmes aplenty
So, where do you start? What can you learn now to help you develop those (hopefully) life-long sustainable behaviours?
Well, to start, take a look at Rust. This newer language is typically used for low-level systems programming by developers conscious of memory safety and performance. It can be tricky to learn, say some; however, where there is a will there is a way, and the cherry on top is that Rust is considered to be among the most energy-efficient programming languages, thanks to a combination of close-to-the-metal performance and a minimal runtime.
Designed with security in mind, Ada is also considered a green coding language, and it has the added benefit of being named after an inspirational woman in the STEM space: Ada Lovelace, the mathematician often referred to as the world’s first computer programmer. Similar to Rust, this classical, stack-based, general-purpose language is often found to require less energy and time in the execution of solutions.
C, the foundation of the wider C family of languages, is another ideal language to learn for those aiming to be more responsible in their job and in their personal use of coding technologies. Considered efficient and hardy, it is a popular and often powerful language for professionals and programming enthusiasts alike.
As a hardware-independent language, it can be ported with ease. This, coupled with simple data structures and the fact that it is compiled, results in far more efficiency and a less wasteful process overall.
Other languages to consider include Pascal, which offers clarity in writing, speed and efficient use of computer resources; the ever-evolving and popular Java, which is moving along nicely with the times; and Lisp, a language that enables the use of highly adaptable and extensible programs, facilitating maintenance and evolution of software over time.
Evolution is the real catalyst here. Arguably, you could say that any language can be made green if those creating the technologies and implementing them want it to be so. It’s really about modernising our systems so that they reflect the world we want to live in and the tech we will need to make it a sustainable reality.
Enterprise AI programs rarely fail because of bad ideas. More often, they get stuck in ungoverned pilot mode and never reach production. At a recent VentureBeat event, technology leaders from MassMutual and Mass General Brigham explained how they avoided that trap — and what the results look like when discipline replaces sprawl.
At MassMutual, the results are concrete: 30% developer productivity gains, IT help desk resolution times reduced from 11 minutes to one, and customer service calls cut from 15 minutes to just one or two.
“We’re always starting with why do we care about this problem?” Sears Merritt, MassMutual’s head of enterprise technology and experience, said at the event. “If we solve the problem, how are we gonna know we solved it? And, how much value is associated with doing that?”
MassMutual, a 175-year-old company serving millions of policy owners and customers, has pushed AI into production across the business — customer support, IT, customer acquisition, underwriting, servicing, claims, and other areas.
Merritt said his team follows the scientific method, beginning with a hypothesis and testing whether it has an outcome that will tangibly drive the business forward. Some ideas are great, but they may be “intractable in the business” due to factors like a lack of data or access, or regulatory constraints.
“We won’t go any further with an idea until we get crystal clear on how we’re going to measure, and how we’re going to define success.”
Ultimately, it’s up to different departments and leaders to define what quality means: Choose a metric and define the minimum level of quality before a tool is placed into the hands of teams and partners.
That starting point creates a quick feedback loop. “The things that we find slow us down is where there isn’t shared clarity on what outcome we’re trying to achieve,” which can lead to confusion and constant re-adjusting, said Merritt. “We don’t go to production until there is a business partner that says, ‘Yes, that works.’”
His team is strategic about evaluating emerging tools, and “extremely rigorous” when testing and measuring what “good” means. For instance, they perform trust scoring to lower hallucination rates, establish thresholds and evaluation criteria, and monitor for feature and output drift.
Merritt also operates with a no-commitment policy — meaning the company doesn’t lock itself into using a particular model. It has what he calls an “incredibly heterogeneous” technology environment combining best of breed models alongside mainframes running on COBOL. That flexibility isn’t accidental. His team built common service layers, microservices and APIs that sit between the AI layer and everything underneath — so when a better model comes along, swapping it in doesn’t mean starting over.
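The event write-up doesn’t include MassMutual’s actual code, but the pattern Merritt describes can be sketched in a few lines of Python (the `ModelRouter` name and the vendor stubs are my own illustration): callers talk to one stable interface, and the concrete model behind it can be swapped without rewriting anything upstream.

```python
from typing import Protocol

class ChatModel(Protocol):
    """Anything that can complete a prompt; vendors differ, the interface doesn't."""
    def complete(self, prompt: str) -> str: ...

class ModelRouter:
    """A thin service layer between callers and whichever model is
    'best of breed' right now. Swapping vendors is a registry change,
    not a rewrite of every call site."""
    def __init__(self):
        self._models = {}
        self._active = None

    def register(self, name, model):
        self._models[name] = model
        if self._active is None:
            self._active = name  # first registered model becomes the default

    def switch(self, name):
        if name not in self._models:
            raise KeyError(f"unknown model: {name}")
        self._active = name

    def complete(self, prompt):
        return self._models[self._active].complete(prompt)

# Hypothetical stand-ins for real vendor clients:
class VendorA:
    def complete(self, prompt):
        return f"[vendor-a] {prompt}"

class VendorB:
    def complete(self, prompt):
        return f"[vendor-b] {prompt}"

router = ModelRouter()
router.register("vendor-a", VendorA())
router.register("vendor-b", VendorB())
answer_a = router.complete("hello")  # served by vendor-a
router.switch("vendor-b")            # a better model came along
answer_b = router.complete("hello")  # same call site, new model
```

The point of the indirection is exactly Merritt’s: when today’s best of breed becomes tomorrow’s worst, only the registry changes.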
Because, Merritt explained, “the best of breed today might be the worst of breed tomorrow, and we don’t want to set ourselves up to fall behind.”
Weeding instead of letting a thousand flowers bloom
Mass General Brigham (MGB), for its part, took more of a spray and pray approach — at first.
Around 15,000 researchers in the not-for-profit health system have been using AI, ML, and deep learning for the last 10 to 15 years, CTO Nallan “Sri” Sriraman said at the same VB event.
But last year, he made a bold choice: His team shut down a sprawl of non-governed AI pilots. Initially, “we did follow the thousand flowers bloom [methodology], but we didn’t have a thousand flowers, we had probably a few tens of flowers trying to bloom,” he said.
Like Merritt’s team at MassMutual, MGB pivoted to a more holistic view, examining why they were developing certain tools for specific departments or workflows. They questioned what capabilities they wanted and needed, and what investment those required.
Sriraman’s team also spoke with their primary platform providers — Epic, Workday, ServiceNow, Microsoft — about their roadmaps. This was a “pivotal moment,” he noted, as they realized they were building in-house tools that vendors were already providing (or were planning to roll out).
As Sriraman put it: “Why are we building it ourselves? We are already on the platform. It is going to be in the workflow. Leverage it.”
That said, the marketplace is still nascent, which can make for difficult decisions. “The analogy I will give is when you ask six blind men to touch an elephant and say, what does this elephant look like?” Sriraman said. “You’re gonna get six different answers.”
There’s nothing wrong with that, he noted; it’s just that everybody is discovering and experimenting as the landscape keeps shifting.
Instead of a wild West environment, Sriraman’s team distributes Microsoft Copilot to users across the business, and uses a “small landing zone” where they can safely test more sophisticated products and control token use.
They also began “consciously embedding AI champions” across business groups. “This is kind of a reverse of letting a thousand flowers bloom, carefully planting and nourishing,” Sriraman said.
Observability is another big consideration; he describes real-time dashboards that manage model drift and safety and allow IT teams to govern AI “a little more pragmatically.” Health monitoring is critical with AI systems, he noted, and his team has established principles and policies around AI use, not to mention least access privileges.
In clinical settings, the guardrails are absolute: AI systems never issue the final decision. “There’s always going to be a doctor or a physician assistant in the loop to close the decision,” Sriraman said. He cited radiology report generation as one area where AI is used heavily, but where a radiologist always signs off.
Sriraman was clear: “Thou shall not do this: Don’t show PHI [protected health information] in Perplexity. As simple as that, right?”
And, importantly, there must be safety mechanisms in place. “We need a big red button, kill it,” Sriraman emphasized. “We don’t put anything in the operational setting without that.”
Ultimately, while agentic AI is a transformative technology, the enterprise approach to it doesn’t have to be dramatically different. “There is nothing new about this,” Sriraman said. “You can replace the word BPM [business process management] from the ’90s and 2000s with AI. The same concepts apply.”
Three YouTube channels have banded together and filed a class action lawsuit against Apple, as first spotted by MacRumors. According to the lawsuit, the creators behind h3h3 Productions, MrShortGameGolf and Golfholics have accused Apple of violating the Digital Millennium Copyright Act by scraping copyrighted videos on YouTube to train its AI models.
While the YouTubers’ videos are available to watch on the platform, the lawsuit alleged that Apple illegally circumvented the “controlled streaming architecture” that regular users are limited to. The creators claimed that Apple’s video scraping was used to train its generative AI products, adding that the tech giant’s “massive financial success would not have been possible without the video content created” by the YouTubers. MacRumors noted that these YouTube channels have also filed similar lawsuits against other tech companies, including Meta, Nvidia, ByteDance and Snap.
It’s not the first time a company’s alleged AI training methods have landed it in legal trouble. OpenAI and Microsoft were both accused of using copyrighted articles from The New York Times to train their AI chatbots. Similarly, Perplexity was recently sued by Reddit and Encyclopedia Britannica for alleged copyright and trademark infringements. Last year, Apple was also named in a separate class action lawsuit from two neuroscience professors who claimed their copyrighted works were used without permission. We reached out to Apple for comment and will update the story when we hear back.
The latest Pew Research Center survey, conducted Jan. 20-26, 2026, finds that most White evangelicals (69%) approve of the way Trump is handling his job as president. And a majority (58%) say they support all or most of his plans and policies.
Let that sink in for a bit. The operative term here is probably “white,” but Trump has been embraced by the evangelical community, despite his being about as far removed from the ideals of Christianity as their arch-nemesis, the Devil (or, to hear them tell it, trans people). (And let’s not forget I’m talking about the ideals, which are often preached but rarely practiced.)
Here’s how Trump handled Easter morning, one of the holiest (no pun intended) holidays observed by the people most likely to support him no matter what. In his own words, at 5:03 am on Easter Sunday:

Tuesday will be Power Plant Day, and Bridge Day, all wrapped up in one, in Iran. There will be nothing like it!!! Open the Fuckin’ Strait, you crazy bastards, or you’ll be living in Hell – JUST WATCH! Praise be to Allah. President DONALD J. TRUMP
Now, I have to admit that when I first read this, I thought Trump was announcing some new celebration of US infrastructure before derailing his own train of thought. But it’s definitely not that.
Both sides have threatened and hit civilian targets like oil fields and desalination plants critical for drinking water. Iran’s U.N. mission on social media called Trump’s threat “clear evidence of intent to commit war crime.”
Iran’s military joint command warned of stepped-up retaliatory attacks on regional oil and civilian infrastructure if the U.S. and Israel attack such targets there, according to state television.
The laws of armed conflict allow attacks on civilian infrastructure only if the military advantage outweighs the civilian harm, legal scholars say. It’s considered a high bar to clear, and causing excessive suffering to civilians can constitute a war crime.
While it looks like both sides in this war are willing to strike civilian infrastructure, the United States should be trying to take the high road (the one without war crimes). And if it can’t be bothered to do that, the administration should — at the very least — try to keep the president from publicly saying we’re going to commit war crimes.
But, alas, there’s no one willing to stop him. Pete Hegseth is definitely relishing his unearned role as the Secretary of Defense (“Back to the Stone Age.”) And he appears to be firing anyone who disagrees with things like drone-killing people in international waters and, you know, engaging in war crimes.
Shamefully, they won’t see a drop in support despite Trump threatening war crimes, dropping an F-bomb, and promising to send people halfway around the world to hell, as if he were a god himself. And that’s a damning indictment of an entire segment of Americans who choose to treat their religion as a weapon and want the world to be remade in their own image — something they often accuse Muslims of doing. The irony is lost on them, along with the man they’ve chosen to treat as God’s appointed leader.
We’ve had a lot of low points as a nation, but usually we’ve at least tried to improve. That’s no longer the case. We’re under the rule of people who debase and abuse the nation they claim to love. Happy Fuckin’ Easter, you crazy bastards. Welcome to Hell.
Apple released the first public beta of iOS 26.5 on Monday, about two weeks after the company released the massive iOS 26.4 update, which included new emoji, video podcasts and more. The iOS 26.5 beta brings a few smaller — but significant — changes to the iPhones of developers and beta testers, including one feature that will be familiar to anyone who has kept up with past iOS betas.
Because this is a beta, I recommend downloading it only on something other than your primary device. This isn’t the final version of iOS 26.5, so the update might be buggy and battery life might suffer; it’s best to keep those troubles on a secondary device.
Also, since this isn’t the final version, more features could land on your iPhone when iOS 26.5 is released.
Here are some features developers and beta testers can try now, and what could land on your iPhone when Apple releases iOS 26.5.
End-to-end encrypted RCS messaging returns
The iOS 26.5 beta brings back an option to enable end-to-end encrypted RCS messaging on your device. When Apple brought RCS messaging to iPhones with iOS 18, one feature the messaging protocol was missing was end-to-end encryption, and iOS 26.5 could finally bring this privacy protection to your iPhone.
To find this setting, go to Settings > Apps > Messages > RCS Messaging and tap the slider next to End-to-End Encryption (Beta).
Apple writes in the feature’s description that it’s still in beta and it works only on certain carriers and devices. Apple also writes that these encrypted messages will be labeled as such, so you should know when your messages do and don’t have this level of protection.
Apple included end-to-end encrypted RCS messaging in beta versions of iOS 26.4, but the tech giant didn’t include the feature in the final release.
Suggested Places in Maps
The iOS 26.5 beta also brings a new section called Suggested Places to your Maps app. Once in the app, tap your Search bar like you’re going to look up a nearby cafe or restaurant, and the section Suggested Places will appear below Recents.
Those are a few of the new features developers and public beta testers can try now with the first public beta of iOS 26.5. There will likely be more betas before the OS is released to the public, so there’s plenty of time for Apple to change these features and add others. Apple has not said when it will release iOS 26.5 to the general public.
SRM stands for Smyrna Ready Mix. SRM Concrete, which lays claim to being the “largest privately-owned ready-mix concrete manufacturer in the country,” is owned by the Hollingshead family of Smyrna, Tennessee. The company’s founders, Mike and Melissa Hollingshead, got into the ready-mix concrete business as a way to improve the supply of concrete to Hollingshead Concrete, the concrete finishing business Mike started early in his career and the Hollingsheads’ first company, although recent iterations of that business are known as Hollingshead Cement.
In 1999, frustrated with the poor customer service he received from local concrete suppliers, Mike and Melissa bought their own ready-mix concrete plant, assembled it in their backyard, and acquired five used concrete trucks at an auction to start SRM Concrete. Even that first backyard operation likely exceeded the capacity of mixing multiple bags of concrete in a Harbor Freight cement mixer.
The Hollingsheads launched SRM Concrete on a tight budget and immediately had obstacles to overcome. While assembling SRM’s first ready-mix plant in their backyard was a sizable commitment to the project, Mike had little knowledge of operating a ready-mix plant or of the formula for making a quality mix. To make matters worse, two of the five used concrete trucks bought at auction to deliver SRM’s product suffered engine failure before making it back to the SRM Concrete plant.
Where is SRM Concrete today?
What began in Mike and Melissa Hollingshead’s backyard in 1999 has expanded dramatically over the past quarter-century. It took only six months for word to spread that SRM Concrete was open for business, and what started as a way for Mike to get the concrete he needed for his concrete finishing business quickly expanded to serving other concrete finishers in the area and across Middle Tennessee.
Today, SRM Concrete and Hollingshead Cement operate in 24 states across the U.S. with 563 concrete plants, 33 quarries, and 12 cement terminals. The company’s rapid growth is the result of a mixture of expansion and acquisition. SRM Concrete boasts the opening of 21 new facilities in 2025 alone, with three more announced in the first quarter of 2026.
Like many family-owned businesses in the building trade, Mike and Melissa’s sons have grown up with the business and become part of the leadership team at SRM. Jeff took on the role of Chief Executive Officer in 2014, and Ryan is the President of the company’s materials division. Mike Hollingshead is still involved in the business. He’s currently serving as the company Chairman while still making deals with suppliers, overseeing the Smyrna quarry, and driving the occasional concrete truck.
Building the next generation of robots for successful integration into our homes, offices, and factories involves more than just solving the hardware and software problems – we also need to understand how the robots will be perceived and how they can work effectively with people in those spaces.
In summer 2025, RAI Institute set up a free popup robot experience in the CambridgeSide mall, designed to let people experience state-of-the-art robotics first hand. While news stories about robots and AI are common, with some being overly critical and some overly optimistic, most people have not encountered robots in the flesh (or metal) as it were. With no direct experience, their opinions are largely shaped by pop culture and social media, both of which are more focused on sensational stories instead of accurate information about how the robots might be used effectively and where the technology still falls short. Our goal with the popup was two-fold: first, to give people an opportunity to see robots that they would otherwise not have a chance to experience and second, to better understand how the public feels about interacting with these robots.
Designing a Robot Experience for the General Public
Some earlier versions of legged robots, built by the RAI Institute’s Executive Director, Marc Raibert. RAI Institute
The ANYmal by ANYbotics (left) and a previous model of the RAI Institute’s UMV (right). RAI Institute
The pop-up space had two areas: a museum area where people could see historical and modern robots, including some RAI Institute builds like the UMV, and an interactive experience called “Drive-a-Spot”. This was a driving arena where anyone who came by could take the controls of a Spot quadruped, one of the more recognizable commercially available robots today.
The guest robot drivers used a custom controller built on an adaptive video game controller that was designed so that anyone of any age could use it. It featured basic controls: move forward, back, left, right, adjust height, sit, stand, and tilt. The buttons were large so that tiny or elderly hands could use the controller and the people who drove Spot ranged in age from two to over 90.
The guest robot drivers used a custom controller built on an adaptive video game controller that was designed so that anyone of any age could use it. RAI Institute
The demo area was designed to be a bit challenging for the Spot robot to maneuver in – it contained tight passages, low obstacles to step over, a barrier to crouch under, and taller objects the robot had to avoid. Much to the surprise of many of our guests, Spot is able to autonomously adjust itself to traverse or avoid those obstacles even while being driven with the joystick.
The driving arena’s theme rotated every few weeks across four scenarios: a factory, a home, a hospital, and an outdoor/disaster environment. These were chosen to contrast settings where robots are broadly accepted (industrial, emergency response) with settings where public ambivalence is well-documented (domestic, healthcare).
The visitors who chose to drive the Spot robot could also participate in a short survey before and after their driving experience. The survey focused on two core dimensions:
Comfort: how comfortable would you feel if you encountered a robot in a factory, home, hospital, office, or outdoor/disaster scenario?
Suitability: how well would this robot work in each of those contexts?
The survey also recorded emotional reactions immediately after driving, likelihood to recommend the experience, and open-ended responses about what participants found memorable or surprising. The researchers were careful to separate the environment participants drove through from the scenarios they were asked to evaluate in the survey. This distinction is important for interpreting the results given below.
Did Interacting with the Robot Change People’s Feelings about Robots?
Out of approximately 10,000 guests who visited the Robot Lab, 10 percent drove the Spot and opted in to our surveys. Of those surveyed, more than 65 percent had seen images or videos of Spot robots online, but most had never seen one of the robots in person.
Increased Comfort Through Experience
Across all five contexts presented in the survey (factory, home, hospital, office, and outdoor/disaster scenarios), comfort scores increased significantly after the driving session. The effects were small to moderate in magnitude, but they were consistent and statistically robust after correcting for multiple comparisons across all participants spanning children to older adults.
The largest gain appeared in the outdoor/disaster context, which started with low comfort despite high perceived suitability. People already thought Spot would be useful in search-and-rescue scenarios; they just weren’t comfortable with it performing in that scenario. This discomfort may stem from media portrayals of quadruped robots in military contexts. A few minutes of hands-on control appears to partially dissolve that apprehension.
Participants who drove through the factory-themed arena showed no significant increase in comfort, but this scenario already had the highest baseline rating of any context, leaving little room for improvement.
No matter their previous experience, most people were neutral about having a Spot robot in their home before their driving experience. However, after the experience of controlling the Spot robot, people had a statistically significant increase in their comfort at having a Spot in their home and also felt that a Spot robot was more suitable for work in any environment, not just the one they had driven it in.
Better Understanding of Where Robots Can Fit into Daily Life
Perceived suitability for Spot to operate in each context also increased. However, the pattern in the data is different. The largest gains weren’t in the high-baseline industrial and outdoor contexts. They were in home, office, and hospital – the very environments where people started out most skeptical.
Participants who drove the Spot robot in a home-themed environment didn’t just consider homes more suitable for robots; they also rated hospitals and offices as more suitable. This result suggests that hands-on control alters something more fundamental than just context-specific familiarity. It may change a person’s underlying understanding of a robot’s capabilities and, consequently, where they believe robots are appropriate.
Results by Demographic
The hands-on experience seems to be similarly effective across genders, although it does not completely eliminate existing disparities. For example, men reported higher baseline comfort than women across all five contexts. However, all genders improved at similar rates after interaction. The gap didn’t significantly widen or close in most contexts, though it did narrow in factory and office settings.
Age effects were more context dependent. Children (aged 8–17) rated factory environments as less comfortable and less suitable before the study. However, this could be because most children do not have experience with factory settings or industrial environments. After interaction, this gap largely persisted. By contrast, children showed stronger gains in office comfort than older adults and entered the study rating home contexts more favorably than adults did.
Participants ranged from age 8 to over age 75. RAI Institute
Participants who had previously driven Spot (mainly robotics professionals) began with higher comfort across the board. But after the hands-on session, people with no prior exposure caught up to experienced drivers. This level of familiarity would be difficult to replicate with images and videos alone.
Post-Interaction Results
Post-interaction emotional data was overwhelmingly positive. “Excitement” was reported by 74% of participants, “happiness” by 50%, and only 12% reported “nervousness.” Over 55% rated the experience as “brilliant” and 62% said they were very likely to recommend it to a friend.
The open-ended responses added a lot more color. The most commonly mentioned moments were locomotion and terrain adaptation (22%) – the way Spot navigated steps, tight spaces, and uneven ground – and expressive tilt movements (22%), which people found surprisingly dog-like or dance-like. A smaller set of responses (3%) described anthropomorphic reactions: worrying about “hurting” the robot or finding its behavior “silly” in a way that prompted genuine emotional response.
When asked what tasks they’d want a robot to perform, responses shifted meaningfully. Before driving, answers clustered around domestic assistance and heavy or hazardous labor. After driving, domestic help remained prominent, but entertainment and play jumped from 7.5% to 19.4%. Companionship also appeared at 5%. References to hazardous or industrial tasks declined as people who had operated the robot began imagining it as a companion and playmate, not just a labor-replacement tool.
Key Takeaways from The Robot Lab
In the not-so-distant future, robots will become more common in public and private spaces. But whether that integration into daily life will be accepted by the general public remains to be seen. The standard approach to building acceptance has been passive exposure such as videos, exhibits, and articles. This study suggests giving people agency and letting them actually operate a robot is a qualitatively different intervention.
Short, well-designed, hands-on encounters can raise comfort in precisely the social domains where ambivalence is highest and where future robotics deployment will likely take place. This kind of hands-on experience shouldn’t be limited to tech conferences and museums; it may be worth far more than mere entertainment.
Fun for all ages! RAI Institute
We consider the popup a success, but as with all experiments, we also learned a lot along the way. Beyond the increased comfort with robots, we found that guests to our space really enjoyed talking to the robotics experts who staffed the location. For many people, the opportunity to talk to a roboticist was as unique as the opportunity to drive a robot, and in the future we are excited to continue sharing our technical work, along with the experiences of our humans as well as our humanoids.
Does building a space where folks can experience robots firsthand have the potential to create meaningful, long-term attitude shifts? That remains an open question. But the effect’s direction and consistency across different situations, ages, and genders are hard to ignore.
For the fifth and final time, The Boys are coming back. Season 5 of Amazon’s hit series, which was created by Eric Kripke and inspired by the comic book run by Garth Ennis and Darick Robertson, returns for its victory lap. That’s not to say that this is really the end — The Boys’ cinematic universe has grown with the spinoff Gen V, the short-lived animated series Diabolical, and another spinoff currently in development.
But by all accounts, this is the last run of Homelander and Billy Butcher’s respective crews. Their five-season-long story arc, and the enduring battle between Butcher’s resistance fighters and Vought’s 7, will reach its final crescendo. Needless to say, the internet is about to be lit up with Supe reviews, rumors and gossip.
All the familiar faces are back, including Antony Starr, Karl Urban, Jack Quaid, Erin Moriarty, Chace Crawford, Jessie T. Usher, Laz Alonso and Tomer Capone. Jensen Ackles makes his return as Soldier Boy; his Supernatural co-stars Jared Padalecki and Misha Collins are also on board for the new season.
Scroll on to learn when and where to watch The Boys season 5 and more streaming details.
The Boys will have a two-episode premiere for its fifth season, kicking off on Wednesday, April 8, at 12 a.m. PT (3 a.m. ET) on Prime Video. Each Wednesday after, a new episode will drop until the finale, which is scheduled to hit streaming on May 20.
Here’s the complete episode schedule:
Episode 1: Fifteen Inches of Sheer Dynamite — April 8
Episode 2: Teenage Kix — April 8
Episode 3: Every One of You Sons of Bitches — April 15
Episode 4: Though the Heavens Fall — April 22
Episode 5: One-Shots — April 29
Episode 6: King of Hell — May 6
Episode 7: The Frenchman, the Female and the Man Called Mother’s Milk — May 13
Episode 8: Blood and Bone — May 20
Prime Video is one of the membership perks of Amazon Prime, which costs $15 a month or $139 a year. If you’d rather not pay for a full Amazon Prime membership, you can subscribe to Prime Video on its own for $9 a month.
Prime Video’s standard service comes with ad breaks for viewers in the US. If you want to go ad-free, there’s an additional $3 monthly fee. This option is available to both Amazon Prime subscribers and those who pay for a standalone Prime Video membership. For more information about the streamer, check out our review.
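To make the plan math above concrete, here is a quick back-of-the-envelope comparison of what each option costs over a year. The figures are the US prices quoted in this article; the script itself is just an illustrative sketch, not anything Amazon publishes.

```python
# Illustrative annual-cost comparison of the Prime Video options described
# in the article. All dollar figures come from the article's quoted US prices.

def annual_cost(monthly: float, months: int = 12) -> float:
    """Total cost of a monthly plan over a year."""
    return monthly * months

# Full Amazon Prime: $15/month or $139/year
prime_monthly = annual_cost(15)   # paying month to month
prime_annual = 139.0              # paying annually up front
# Standalone Prime Video membership: $9/month
standalone = annual_cost(9)
# Optional ad-free add-on: $3/month on top of either plan
ad_free = annual_cost(3)

print(f"Prime paid monthly, with ads:     ${prime_monthly:.0f}/yr")
print(f"Prime paid annually, with ads:    ${prime_annual:.0f}/yr")
print(f"Standalone Prime Video, with ads: ${standalone:.0f}/yr")
print(f"Standalone Prime Video, ad-free:  ${standalone + ad_free:.0f}/yr")
```

In other words, the annual Prime plan saves $41 a year over paying for Prime monthly, and the standalone Prime Video plan with the ad-free add-on still comes in at $144 a year.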
How to watch The Boys season 5 with a VPN
If you’re traveling abroad and want to watch The Boys while away from home, a VPN can help enhance your privacy and security when streaming.
It encrypts your traffic and prevents your internet service provider from throttling your speeds. Additionally, it can be helpful when connecting to public Wi-Fi networks while traveling, providing an extra layer of protection for your devices and logins. VPNs are legal in many countries, including the US and Canada, and can be used for legitimate purposes such as improving online privacy and security.
However, some streaming services may have policies restricting VPN use to access region-specific content. If you’re considering a VPN for streaming, check the platform’s terms of service to ensure compliance.
If you choose to use a VPN, follow the provider’s installation instructions to ensure you’re connected securely and in compliance with applicable laws and service agreements. Some streaming platforms may block access when a VPN is detected, so verifying if your streaming subscription allows VPN use is crucial.
ExpressVPN is our current best VPN pick for people who want a reliable and safe VPN, and it works on a variety of devices. Prices start at $3.49 a month on a two-year plan for the service’s Basic tier.
Note that ExpressVPN offers a 30-day money-back guarantee.
While browsing our website a few weeks ago, I stumbled upon “How and When the Memory Chip Shortage Will End” by Senior Editor Samuel K. Moore. His analysis focuses on the current DRAM shortage caused by AI hyperscalers’ ravenous appetite for memory, a major constraint on the speed at which large language models run. Moore provides a clear explanation of the shortage, particularly for high bandwidth memory (HBM).
As we and the rest of the tech media have documented, AI is a resource hog. AI electricity consumption could account for up to 12 percent of all U.S. power by 2028. Generative AI queries consumed 15 terawatt-hours in 2025 and are projected to consume 347 TWh by 2030. Water consumption for cooling AI data centers is predicted to double or even quadruple by 2028 compared to 2023.
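The jump from 15 TWh in 2025 to a projected 347 TWh by 2030 is easier to appreciate as a growth rate. A minimal sketch, using the article's figures and assuming simple compound annual growth (our own simplification, not the methodology behind the projection):

```python
# Implied compound annual growth rate (CAGR) behind the projection that
# generative AI queries go from 15 TWh (2025) to 347 TWh (2030).
# Figures are from the article; the compounding model is an assumption.

start_twh, end_twh = 15.0, 347.0
years = 2030 - 2025

# CAGR formula: (end / start) ** (1 / years) - 1
cagr = (end_twh / start_twh) ** (1 / years) - 1
print(f"Implied annual growth: {cagr:.0%}")  # roughly 87% per year
```

That is, the projection implies electricity use from generative AI queries nearly doubling every year for five years straight.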
But Moore’s reporting shines a light on an obscure corner of the AI boom. HBM is a particular type of memory product tailor-made to serve AI processors. Makers of those processors, notably Nvidia and AMD, are demanding more and more memory for each of their chips, driven by the needs and wants of firms like Google, Microsoft, OpenAI, and Anthropic, which are underwriting an unprecedented buildout of data centers. And some of these facilities are colossal: You can read about the engineering challenges of building Meta’s mind-boggling 5-gigawatt Hyperion site in Louisiana, in “What Will It Take to Build the World’s Largest Data Center?”
We realized that Moore’s HBM story was both important and unique, and so we decided to include it in this issue, with some updates since the original was published on 10 February. We paired it with a recent story by Contributing Editor Matthew S. Smith exploring how the memory-chip shortage is driving up the price of low-cost computers like the Raspberry Pi. The result is “AI Is a Memory Hog.”
The big question now is, When will the shortage end? Price pressure caused by AI hyperscaler demand on all kinds of consumer electronics is being masked by stubborn inflation combined with a perpetually shifting tariff regime, at least here in the United States. So I asked Moore what indicators he’s looking for that would signal an easing of the memory shortage.
“On the supply side, I’d say that if any of the big three HBM companies—Micron, Samsung, and SK Hynix—say that they are adjusting the schedule of the arrival of new production, that’d be an important signal,” Moore told me. “On the demand side, it will be interesting to see how tech companies adapt up and down the supply chain. Data centers might steer toward hardware that sacrifices some performance for less memory. Startups developing all sorts of products might pivot toward creative redesigns that use less memory. Constraints like shortages can lead to interesting technology solutions, so I’m looking forward to covering those.”
The news is full of reports from the moon-bound Integrity, otherwise known as Artemis II. Mostly, the news is good, but there has been one “Houston, we have a problem…” moment. The space toilet, otherwise known as the Universal Waste Management System, or UWMS, is making a burning smell while in use. While we would love to be astronauts, we really don’t want to go ten days without using the can, and it made us wonder how, exactly, the astronauts answered the call of nature.
The Old Days
Back in the Apollo era, going to the bathroom was a messy business. The capsule wasn’t that big, and there were no women on board. So you simply strapped an adhesive-rimmed bag or tube to yourself and answered nature’s call with your two closest coworkers right there.
Space Shuttle facilities (by [Svobodat] CC BY-SA 3.0)
To add insult to injury, the “#2 bags” needed a germicide packet kneaded in to keep the contents from going bad in the bag before it could return to Earth for — no kidding — scientific study.
The system was far from perfect. Apollo 8 and Apollo 10 both had to do some housekeeping due to leaky bags.
Astronaut Ken Mattingly reportedly said, “Man, one of the feats of my existence the other day was, in 42 minutes, I strapped on a bag, went out of both ends, and ate lunch…. I used to want to be the first man to Mars. This has convinced me that, if we got to go on Apollo, I ain’t interested.”
Still, it was better than the first Mercury launch, where Alan Shepard famously relieved himself in his spacesuit while sitting on the pad for over eight hours. Later missions used hoses.
Things got slightly better with Skylab, where there was more room. The Shuttle also had a toilet. You got a curtain for privacy, but you couldn’t go #1 and #2 at the same time. The contraptions were also, apparently, not easy for female astronauts to use.
Modern Times
This UWMS went to the ISS (NASA)
The early International Space Station used a system similar to the Shuttle’s. In 2020, however, the UWMS debuted. It is easier for the female anatomy to use, and it has a door. This is essentially the same bathroom crammed into Integrity. Given the size of the capsule, we doubt the door is more than symbolic, but still.
Rather than explain the UWMS operation, you can watch the video below. Note that everyone has their own funnel. There are some things you just don’t want to share.
What’s That Smell?
We don’t know what the burning smell is on Integrity, but we are sure we are going to find out. One other thing we never quite see addressed is how you clean up afterward. We aren’t sure we want to know.
Perhaps it is ironic that the first Artemis mission with a crew is having bathroom problems. After all, the Artemis slogan is “Let’s Go!” You’ll have to finish that joke on your own.
Microsoft says that Storm-1175, a China-based financially motivated cybercriminal group known for deploying Medusa ransomware payloads, has been deploying n-day and zero-day exploits in high-velocity attacks.
This cybercrime gang quickly shifts to targeting new security vulnerabilities to gain access to its victims’ networks, weaponizing some of them within a day and, in some cases, exploiting them a week before patches are released.
“Storm-1175 rapidly moves from initial access to data exfiltration and deployment of Medusa ransomware, often within a few days and, in some cases, within 24 hours,” Microsoft said.
“The threat actor’s high operational tempo and proficiency in identifying exposed perimeter assets have proven successful, with recent intrusions heavily impacting healthcare organizations, as well as those in the education, professional services, and finance sectors in Australia, United Kingdom, and United States.”
Microsoft has also observed Storm-1175 operators chaining multiple exploits to gain persistence on compromised systems by creating new user accounts, deploying remote monitoring and management software, stealing credentials, and disabling security software before dropping ransomware payloads.
Storm-1175 attack chain (Microsoft)
In October, Microsoft reported that Storm-1175 had been exploiting a maximum-severity GoAnywhere MFT vulnerability (CVE-2025-10035) in Medusa ransomware attacks for over one week before it was patched.
“While these more recent attacks demonstrate an evolved development capability or new access to resources like exploit brokers for Storm-1175, it is worth noting that GoAnywhere MFT has previously been targeted by ransomware attackers, and that the SmarterMail vulnerability was reportedly similar to a previously disclosed flaw,” Microsoft added.
“These factors may have helped to facilitate subsequent zero-day exploitation activity by Storm-1175, who still primarily leverages N-day vulnerabilities.”
CISA issued a joint advisory with the FBI and the Multi-State Information Sharing and Analysis Center (MS-ISAC) in March 2025, warning that the Medusa ransomware gang’s attacks had impacted over 300 critical infrastructure organizations across the United States.
In July 2024, Microsoft also linked the Storm-1175 threat group, along with three other cybercrime gangs, to Black Basta and Akira ransomware attacks that exploited a VMware ESXi authentication-bypass flaw.