Tech
What is vibe coding? AI coding with Claude, Codex, and Gemini, explained
Just over a year ago, OpenAI co-founder Andrej Karpathy coined the term “vibe coding” and it’s exactly what it sounds like. In a post on X, he wrote that it’s where “you fully give in to the vibes, embrace exponentials, and forget that the code even exists.”
Since then, coders from all backgrounds — and folks with zero experience — have tapped into their vibes to make apps and websites. Vibe coding platforms, powered by AI models like Claude, Codex, and Gemini, have gained traction as a way to give normies a toolset to code whatever they want, without writing a single line of code.
Tech behemoths like Amazon and bustling Silicon Valley startups even have their coders using it. It’s doing the grunt work for now, but they say it’s opening up a whole new world of possibilities. One possibility: It takes their jobs. But it’s a trade-off that some of them are willing to make.
Clive Thompson wrote a book about this and spent time with over 70 vibe coders to understand how the technology is upending the industry and whether this is the end of computer programming as we know it. On Today, Explained, co-host Sean Rameswaram dug into these questions and even vibe coded a simple website while doing it.
Below is an excerpt of the conversation, edited for length and clarity. There’s much more in the full podcast, so listen to Today, Explained wherever you get podcasts, including Apple Podcasts, Pandora, and Spotify.
You spent a lot of time hanging out with coders who were vibe coding. And from what I could tell from reading your piece in the New York Times Magazine, they’re not vibe coding the same way that I was vibe coding.
No, they’re doing something that’s a lot more aggressive and ambitious. What they’re doing is they are using multiple agents, kind of swarms of agents at the same time. If they’re using Claude Code or Codex or Gemini they will have it wired into their laptops. Those agents can create files, destroy files. They can take code that’s been written, they can push it live into production in the world.
And they will also work in little teams. So when they want to create a piece of software, sometimes they’ll write, like, a spec, like a page saying, “Here’s what I want to do.” Or sometimes they’ll just talk to the agent. But they’ll be kind of talking to the lead agent that’s going to be the head of the team and they’ll talk to it and say, “Here’s what I want you to do. What do you think? Give me your ideas.” And they’ll sort of go back and forth generating a plan. And when they’re confident that this top agent understands what is to be done, they’ll say, “All right. Go do it.”
And that one will spawn off several subagents. It will have one agent that’s writing code, another one that is testing the code. It’s quite wild to watch them do this. And sometimes if it does something wrong, they’ll have to yell at it. They’ll be like, “This is unacceptable.” Or they’ll say things like, you know, “This is embarrassing. You’re humiliating me.”
And I said to one of them, “What’s up with that? Does that language improve the sort of output of these agents?” And he was like, “I couldn’t prove it. But generally we find that when we sort of reprimand them a little bit, they become a little more reliable.”
Can you help us understand just how much time, money, human labor is being saved by vibe coding at the level that you observed?
Yeah, it can be really significant. They’re most significant when someone is building something new from scratch. The startup founders, one- or two-person, three-person shops, they’re like, “I need to get to market fast. There might be 10 other people with this idea. I got to beat them.” It’s dizzying. Some of those people were telling me that they were working 20 times faster than they would on their own. Stuff that would normally have taken them a day now takes half an hour.
But at a very large and mature company like Amazon or Google, you’ve got billions of lines of existing code and if one little part of it stops working, that could cascade through everything. So those folks are definitely using the agents, but they are less likely to be pushing stuff rapidly out. They’re more likely to be looking carefully at it and putting it through what’s known as code review, where multiple humans look at it and go, “Oh, okay, does that work?” So for them, basically it’s like a 10 percent improvement in terms of the velocity of productivity of the engineers, how fast they go from having an idea to making it happen.
And what’s really interesting, and you may have discovered this too, in your vibe coding: a lot of engineers told me that it was even less about speed than about the ability to experiment with a bunch of ideas and see which one might really work.
In the before times, you’d have an idea for a feature. Are you really going to spend six weeks developing it just to discover that it’s not really what you thought it was going to be?
Now, well, let’s just do 10 different versions of that over the next week and let’s look at all of them and then we can pick the one we want. You might not necessarily have gone faster, but the feature that you’ve got is exactly the one you wanted and you know because you held it in your hands.
A lot of tech layoffs in the past few years, and now we’re talking about how vibe coding has dramatically overturned the norms in engineering. How are developers feeling about that?
Well, here’s the thing. There is definitely a civil war, though the majority of the people I spoke to are on board. And I reached out to a very wide array; I talked to 75 developers.
And I actively wanted to talk to ones that didn’t like AI because I wanted to know their feelings. It’s a minority of people that are really hotly opposed, but they’re very, very strongly opposed. They don’t like the fact that these are trained on stolen materials. They don’t like the fact that it uses tons of energy. They don’t like the fact that they think it’s going to de-skill [people].
Why do you think they’re not the majority, when this is so clearly going to replace so many of them and bypass all of their ethical, moral concerns and objections?
I think it’s because for a lot of developers it’s just such a delightful experience in the short term of going from everything being a slow slog to it being like, “Oh my God, all these ideas and things I wanted to do, I can now try them and do them.”
Because it’s fun, basically.
It’s enormously fun. The pleasure of coding used to be that there were a lot of these little wins when you got something working. Those little wins have gone away because you’re not doing that bug fixing, you’re not doing that line writing.
So the big wins are just coming in avalanches and it’s very intoxicating. Also, there are ones who essentially don’t think that those bad labor things are going to obtain. They think there’s a potential that more [jobs] will get created in areas that they have previously been unable to be created.
Give it five years for us. Does this herald the end of computer programming as we know it?
No, I would not go so far as to say that it ends in five years. I do think it becomes something very different potentially. I still think — everyone told me, and I believe — that you still need some understanding of the way a code base works to do the complicated things.
Weirdly, what you might see is something a little different, which is the explosion of code in areas where there is currently none. There’s a bazillion people out there that are code-adjacent. You work in accounting, you’re a wizard at Excel, you can import data, and now you’re given the ability to say to an agent, “Okay, could you bring more data in?”
There is going to be this really weird world where there’s a lot of customized software for an audience of two, three people. We have thought of software historically as something that only exists if 10,000 people or a million people want it because it costs a lot of money to make it.
But if you can now start making it for next to nothing, you can start using it the way that we use Post-it notes. Put it all over the place. I need to jot this idea down. I’m going to make this happen. And maybe this software solves one problem for this afternoon and we never use it again. Software starts becoming almost disposable.
Build This Open-Source Graphics Calculator
Graphics calculators are one of those strange technological cul-de-sacs. They rely on outdated technology and should not be nearly as expensive as they are, but market effects somehow keep prices well over $100 to this day. Given that fact, you might like to check out an open-source solution instead.
NumOS comes to us from [El-EnderJ]. It’s a scientific and graphics calculator system built to run on the ESP32-S3 with an ILI9341 screen. It’s intended to rival calculators like the Casio fx-991EX ClassWiz and the TI-84 Plus CE in terms of functionality. To that end, it has a full computer algebra system and a custom math engine to do all the heavy lifting a graphics calculator is expected to do, like symbolic differentiation and integration. It also has a Natural-V.P.A.M.-like display—if you’re unfamiliar with Casio’s terminology, it basically means things like fractions and integrals are rendered as you’d write them on paper rather than in uglier simplified symbology.
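For a sense of what a computer algebra system actually does, here’s a minimal sketch using Python’s SymPy library. This is purely illustrative; NumOS uses its own custom math engine, not SymPy.

```python
import sympy as sp

x = sp.symbols("x")
f = x**2 * sp.sin(x)

# Symbolic differentiation: d/dx [x^2 sin(x)] = 2x sin(x) + x^2 cos(x)
df = sp.diff(f, x)
print(df)

# Symbolic (indefinite) integration of the derivative recovers f,
# up to a constant of integration
g = sp.integrate(df, x)
print(sp.simplify(g - f))  # 0
```

Exact, closed-form results like these, rather than numerical approximations, are what separate a CAS-equipped calculator from a plain scientific one.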
If you’ve ever wanted a graphics calculator that you could really tinker with down to the nuts and bolts, this is probably a great place to start. With that said, don’t expect your local school or university to let you take this thing into an exam hall. They’re pretty strict on that kind of thing these days.
We’ve seen some neat hacks on graphics calculators before, like this TI-83 running CircuitPython. If you’re doing your own magic with these mathematical machines, don’t hesitate to notify the tips line.
Elon Musk Unwraps $25 Billion Terafab Chip-Building Project
Elon Musk took the stage over the weekend to announce a new partnership between Tesla, SpaceX and xAI to build a $25 billion chip-making factory in Austin, Texas, called Terafab.
Acknowledging Samsung, TSMC and other chipmakers, Musk said the Terafab project needs to get off the ground because existing semiconductor partners aren’t making chips fast enough. If built, Terafab would be the largest semiconductor manufacturing plant in the world.
Bringing more semiconductor facilities to the US isn’t new. The CHIPS Act of 2022 saw a dramatic rise in announcements for further investments in such facilities on American soil. Nvidia began manufacturing chips in its Arizona factory last year, and the motivation wasn’t only due to tariffs.
The CHIPS Act has paid out for several chip-making projects, including Intel’s massive $8 billion factory, though the introduction of additional semiconductor fabs in the US has been slow. Terafab would be a significant addition to the infrastructure for onshore chip making in the US, and by far the most expensive. There’s no word yet on whether Terafab would receive funding under the CHIPS Act.
Powering all of your electronic devices are chips that serve as their brains. They vary from the likes of Apple’s M series to Nvidia’s Vera Rubin platform and beyond. The Terafab project aims to ease the current shortage of chips powering devices that will bring AI robotics and more to life. (The AI boom has also brought about a massive RAM shortage, with no relief expected until 2028, affecting prices on electronics like smartphones and laptops.)
Musk gave details on two of the chips he plans to build, the AI5 and AI6, which would power the likes of existing earthly ventures, such as Tesla’s Optimus robots and self-driving cars. Also detailed was the D3 chip, which he said would be made for orbital satellites in space. This type of ambition isn’t just coming from Musk, either. Nvidia announced similar goals to build orbital AI data centers during its GTC conference last week.
The project aims to have every piece of the manufacturing process take place at the facility to churn out chips by the billions, targeting the 2-nanometer process. Musk believes the project will help propel us into becoming a “galactic civilization.”
It sounds like an ambitious project, though not everyone is buying it. Musk has historically announced wild projects, like the “million-mile” battery that never quite got off the ground. Whether the Terafab facility actually becomes a reality is a waiting game for now.
Today’s NYT Mini Crossword Answers for March 24
Looking for the most recent Mini Crossword answer? Click here for today’s Mini Crossword hints, as well as our daily answers and hints for The New York Times Wordle, Strands, Connections and Connections: Sports Edition puzzles.
Need some help with today’s Mini Crossword? It helps to know a little about birds. Read on for all the answers. And if you could use some hints and guidance for daily solving, check out our Mini Crossword tips.
If you’re looking for today’s Wordle, Connections, Connections: Sports Edition and Strands answers, you can visit CNET’s NYT puzzle hints page.
Read more: Tips and Tricks for Solving The New York Times Mini Crossword
Let’s get to those Mini Crossword clues and answers.
The completed NYT Mini Crossword puzzle for March 24, 2026.
Mini across clues and answers
1A clue: Apple computers
Answer: MACS
5A clue: Colorful parrot with a long tail
Answer: MACAW
6A clue: Enticing scent
Answer: AROMA
7A clue: __ song (dangerous lure)
Answer: SIREN
8A clue: “Barbie” character whose job is “beach”
Answer: KEN
Mini down clues and answers
1D clue: Singing sister in the Osmond family
Answer: MARIE
2D clue: Future oak tree, perhaps
Answer: ACORN
3D clue: Attended
Answer: CAME
4D clue: ___ song (farewell performance)
Answer: SWAN
5D clue: Conceal
Answer: MASK
Historic Navy Shipwreck Breaks Through the Surface After 400 Years Under the Sea
After 400 years underwater, a Swedish Navy vessel in the Baltic Sea off Stockholm has become visible. Sunk on purpose back in the 17th century, the ship has resurfaced after the waters reached their lowest level in the past 100 years. Marine archeologist Jim Hansson from Stockholm’s Vrak Museum of Wrecks explained to AFP, as reported by CBS, the conditions that led to its reemergence: “There has been a really long period of high pressure here around our area in the Nordics. So the water from the Baltic has been pushed out to the North Sea and the Atlantic.”
The unidentified ship was sunk around 1640 so it could be used to form the foundation of a bridge connecting to the Kastellholmen island. There are currently five sunken ships in the area. The Swedish Navy is looking into reusing their oak hulls rather than using new wood. Researchers are currently attempting to identify these sunken ships as part of a research program called “The Lost Navy.”
How did the shipwreck survive underwater for 400 years?
It might seem baffling that a wooden ship could survive in the ocean for 400 years, but the Baltic Sea had the right conditions to keep the Swedish Navy vessel largely intact. According to Hansson, that part of the ocean doesn’t have shipworms, meaning the sunken ship wasn’t eaten. Shipworms, which can grow up to two meters long, are sea creatures that use bacteria in their gut to break down wood and consume it. They’re so proficient at it that they can sink a boat.
Rather than rotting the wood away as you’d possibly expect, the water actually keeps the boat intact — especially at deep levels — creating a time capsule of sorts. In fact, most boats can remain undisturbed deep under the water indefinitely — but bringing the shipwreck to the surface can cause the wood to break down since it was only being held together by water between its cells.
This has been a big issue with preserving the Vasa, another vessel in Sweden, which sank back in 1628. Its wood is being ruined by iron and metal pieces that have started to acidify now that it’s out of the water. Scientists discovered that alkaline earth hydroxides can neutralize the acid, stopping the chemical reaction that destroys the wood, but it’s still a challenge to preserve uncovered shipwrecks. This means the low water levels in the Baltic Sea could pose a problem for the newly uncovered warship.
Microsoft hires former Ai2 CEO Ali Farhadi and key researchers for Suleyman’s AI team

Microsoft is hiring a group of top AI researchers from the Seattle-based Allen Institute for AI and the University of Washington, including former Ai2 CEO Ali Farhadi, GeekWire has learned.
Farhadi, Hanna Hajishirzi, and Ranjay Krishna are expected to join Mustafa Suleyman’s organization at Microsoft while retaining their faculty positions at the UW’s Allen School of Computer Science and Engineering. Also joining is Sophie Lebrecht, the former Ai2 chief operating officer.
The move follows Farhadi’s departure from Ai2, announced March 12. Farhadi had led the Seattle-based nonprofit research institute for more than two and a half years.
Suleyman, the CEO of Microsoft AI, narrowed his focus last week from overseeing consumer-oriented Copilot products to leading Microsoft’s Superintelligence team.
The hires come as Microsoft works to reduce its dependence on OpenAI for frontier AI models, competing against Amazon, Google, and others. Suleyman’s Superintelligence team, formed in November, is part of a broader push to further develop advanced foundation models.
Microsoft has already hired researchers from Google DeepMind, Meta, OpenAI, and Anthropic, and the addition of the Ai2 and UW group would bring deep expertise in open-source model development and training efficiency — where Ai2 has punched well above its weight.
Backing from NSF and Nvidia
The exits represent a notable collective loss for Ai2, which was founded in 2014 by the late Microsoft co-founder Paul Allen. Hajishirzi is a co-lead of the OLMo open-source language model project and a co-principal investigator on a $152 million, five-year initiative backed by the National Science Foundation and Nvidia to build open AI models for scientific research.
She represented Ai2 in multiple sessions last week at Nvidia’s GTC conference in San Jose, including a panel on the future of open models alongside Nvidia CEO Jensen Huang.
Krishna has led the development of Ai2’s Molmo multimodal models, among other projects. He also presented at the Nvidia conference last week on behalf of the institute.
Farhadi, a computer vision specialist, co-founded Ai2 spinout Xnor.ai, which Apple acquired in 2020 for an estimated $200 million. He led machine learning efforts at Apple before returning to lead Ai2 as CEO in July 2023.
Ai2 interim CEO Peter Clark acknowledged the departures in a statement, saying the institute remains committed to its mission and its partnerships with the NSF and Nvidia, including the OMAI initiative.
“These initiatives are backed by a broad, experienced team with the expertise and continuity needed to carry this work forward,” Clark said. “We’re confident in our ability to build on the strong foundation already in place and to expand the impact of these efforts in the months ahead.”
He added that the institute is “grateful for the leadership and contributions of Ali, Hanna, Ranjay, and others” in advancing Ai2’s work, and wished them well.
In a post about the hires on LinkedIn, Suleyman praised Farhadi for leading Ai2 in releasing more than 100 models in a single year and called Hajishirzi “one of the most cited researchers of natural language processing in the world, full stop.”
Suleyman described Lebrecht as having scaled Ai2’s operations and open-source efforts, noting that she also co-founded the AI company Neon Labs and holds a PhD in cognitive neuroscience from Brown University.
He said they will help pursue Microsoft’s mission of “humanist superintelligence: safer, controllable, more capable AI systems in service of humanity and our toughest problems.”
When news broke earlier this month that Farhadi was leaving, Ai2 board chair Bill Hilf told GeekWire that Farhadi wanted to pursue research at the extreme frontier of AI, where for-profit companies are spending billions on training the most advanced models.
At the time, Hilf said the board had to weigh whether a nonprofit’s philanthropic dollars were best spent trying to keep pace, acknowledging that competing against tech giants at the largest scale of model development had become extraordinarily difficult.
Changes in Ai2’s funding realities
Behind the scenes, the changing nature of Ai2’s funding environment has also been playing a role in the exits, according to people with knowledge of the situation.
Ai2 was originally funded by Allen’s Vulcan Inc. and later by his estate. Its primary backer is now the Fund for Science and Technology, a $3.1 billion foundation created under Allen’s instructions and publicly launched in August, with a focus on applying science and technology to problems in areas aligned with Allen’s passions, including AI, bioscience, and the environment.
FFST, led by CEO Dr. Lynda Stuart, a physician-scientist who previously led the Institute for Protein Design at the UW, favors applied uses of AI over the costly work of frontier models.
In addition, while all Ai2 programs for 2026 are fully funded, these people said, FFST is moving from providing Ai2 with overall annual funding to a proposal-based process, with future support expected to favor real-world applications of AI over building open-source foundation models. The shift helps explain the departures of researchers focused on model development.
A spokesperson for the Fund for Science and Technology said Ai2’s “work and mission remain the same” and that FFST’s broader program strategies are still under development.
Farhadi, Hajishirzi, and Krishna are researchers whose work centers on building and advancing AI models. Microsoft’s Superintelligence team, backed by billions in compute investment, offers the resources and mandate to pursue that work at a much larger scale.
Low Self-Discharge, High-Voltage Supercapacitors Using Porous Carbon
Supercapacitors rely mostly on double-layer capacitance to bridge the divide between chemical batteries and traditional capacitors, but they come with a number of weaknesses. Paramount among these are their relatively low voltage of around 2.7 V, above which their electrolyte begins to decompose, as well as their relatively high rates of self-discharge. A new design using lignin-derived porous carbon electrodes and a fluorinated diluent, demonstrated by [Shichao Zhang] et al. and published in Carbon Research, seems to address these issues.
Most notable are the relatively high voltage of 4 V, an energy density of 77 Wh/kg, and a self-discharge rate that’s much slower than that of conventional supercapacitors. Compared with those conventional parts, the demonstrated cells are also superior in terms of recharge cycles, retaining 90% of capacity after 10,000 cycles, which together with the much higher energy density should prove quite useful.
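The jump from 2.7 V to 4 V matters more than it might look, because a capacitor’s stored energy grows with the square of its voltage (E = ½CV²). A quick sketch makes the point; the 100 F cell capacitance here is an assumed illustrative figure, not a value from the paper:

```python
# Energy stored in a capacitor: E = 1/2 * C * V^2.
# The 100 F capacitance is a hypothetical example value, not from the paper.
def energy_wh(capacitance_f: float, voltage_v: float) -> float:
    """Stored energy in watt-hours for a capacitance (farads) at a voltage (volts)."""
    joules = 0.5 * capacitance_f * voltage_v**2
    return joules / 3600.0  # 1 Wh = 3600 J

e_conventional = energy_wh(100, 2.7)  # typical electrolyte decomposition limit
e_new = energy_wh(100, 4.0)           # the 4 V cell reported here

# Energy scales with V^2, so the higher rating alone buys ~2.2x the stored energy
print(round(e_new / e_conventional, 2))  # 2.19
```

The rest of the reported gain in energy density would have to come from the electrodes and electrolyte themselves, not just the voltage window.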
This feat is accomplished by using lignin as the base for the carbon electrodes to make a highly porous surface, along with a new electrolyte formulation consisting of a lithium salt (LiBF4) dissolved in sulfolane with TTE as a non-solvating diluent. The idea of using lignin-derived carbon for such a purpose has previously been pitched by [Jia Liu] et al. in 2022 and [Zhihao Ding] in 2025, and this appears to be one of its first major applications.
Although the path towards commercialization from a lab-assembled prototype is a rough one, we may be seeing some of these improvements come to supercapacitors near you sooner rather than later.
Use AI to Find Your Next Rental Car With Turo’s ChatGPT App
ChatGPT can now help you find and book a rental car thanks to a new Turo integration that launched Monday. The Turo app for ChatGPT allows you to just tell the AI chatbot what you’re looking for — from pickup location and dates to number of seats, EV preference and more — using natural language, and be presented with real Turo rental cars, advice and links directly to the Turo website to book.
Turo is a peer-to-peer marketplace that lets private owners rent out their personal vehicles to travelers and locals. I like to think of it as the “Airbnb of cars” or drive-it-yourself Uber. Unlike traditional rental agencies that own and maintain large fleets of cars, Turo merely provides the tech, insurance and support to connect vehicle hosts with guest drivers. Turo has proven to be a popular alternative to the airport rental counter thanks to its more varied selection of unique car models (including luxury or high-tech vehicles), competitive pricing, and the convenience of having certain vehicles delivered.
And now, it has a ChatGPT integration. You can access the new Turo app within ChatGPT by first searching for and then adding Turo to the list of available agents in ChatGPT’s Apps menu. Once connected, adding “@Turo” to any chat with the AI bot will trigger the new functionality.
I fired up ChatGPT after setting it up for myself and typed the prompt: “@Turo, I’m going to be landing in Atlanta on Friday and would like to rent an EV for the weekend with enough range to make it to Augusta. What’s available?”
I used natural language to find available cars on the Turo service using ChatGPT.
The app replied with listings of vehicles currently available to rent near the airport with enough range to make the approximately 300-mile round trip with as little as one quick top-up. Each listing featured photos, price estimates (including tax and fees), star ratings and the number of times each car had been rented.
Clicking on a listing took me straight to the Turo website (or app on mobile) to complete the booking. I also tried asking for “an EV near my home that seats six people” and “a hybrid that would be useful for moving,” and found the results to be adequate.
In addition to the listings, ChatGPT and Turo provided details (like range) about each car as well as pros and cons, such as Tesla’s plentiful Superchargers between Atlanta and Augusta or the Kia EV6’s very fast charging speed. Overall, the new functionality looks like a fairly convenient and decent starting point for someone who knows nothing about cars to choose a rental.
Turo’s app for ChatGPT is the latest example of AI’s rapid advance into every aspect of the automotive industry, from natural language AI assistants in the dashboard to AI-powered inspection of rental car returns.
(Disclosure: Ziff Davis, CNET’s parent company, filed a lawsuit against OpenAI last year, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
Sony confirms AI frame generation is coming to PlayStation, just not this year
Speaking to Digital Foundry about Project Amethyst, Mark Cerny confirmed that Sony is developing AI-powered frame generation for PlayStation, but noted that no new releases are planned this year. He also declined to reveal whether the feature will be rolled out to the PS5 and PS5 Pro or be exclusive…
OpenAI rolls out ChatGPT Library to store your personal files
OpenAI is rolling out a new feature called ‘Library’ for ChatGPT, which allows you to store your personal files or images on OpenAI’s cloud storage.
OpenAI says ChatGPT Library requires a Plus, Pro, or Business plan. It’s rolling out to customers across the world, except in the European Economic Area, Switzerland, and the United Kingdom.
I refreshed the ChatGPT web app, and the Library automatically showed up in the sidebar.
To my surprise, it’s actually not empty: ChatGPT has already saved some of the files I uploaded in the last two weeks.
Turns out this is expected behavior. By default, ChatGPT saves your uploaded files in a dedicated, secure location, and they can be used for reference in future chats.
“ChatGPT automatically saves uploaded and created files, including files uploaded in chats (for example: documents, spreadsheets, presentations, and images) in a dedicated, secure location so they can be easily accessed later,” OpenAI noted in a document.
On the other hand, if you use ChatGPT to generate AI images, they will continue to appear in the Images tab.
The Library section only has files you uploaded, and you can upload files by following these steps:
- Open the composer menu (the attachment/add button).
- Select Add from library.
- Choose the file you want to use.
This means files are saved to your account until you delete them manually; deleting a chat that contains a file does not delete that file from your Library.
To delete a file:
- Select the file in the “Library” tab
- Click Delete, or click the trash icon next to the file.
OpenAI will remove files from its servers within 30 days of deletion.
It’s unclear why it takes nearly a month to purge files, but it is likely due to legal reasons.
What is the release date for The Pitt season 2 episode 12 on HBO Max?
Slowly but surely, the ante is starting to be upped in The Pitt season 2. Last week, a woman detained by two male ICE agents was brought in to see Dr. Robbie (Noah Wyle) and Cassie (Fiona Dourif).
Blood covered her arms as her handcuffs cut into her wrists, with the distressed woman clearly too scared to say anything. When Cassie asks her if there’s anybody she’d like to call, one ICE agent immediately responds that there are “no calls allowed.”
What time can I watch The Pitt season 2 episode 12 on HBO Max?
For US viewers, The Pitt season 2 episode 12 will drop on Thursday, March 26 at 6pm PT / 9pm ET. As always, it’ll come out on HBO Max, too.
Internationally, you’re looking out for these timings:
- US – 6pm PT / 9pm ET
- Canada – 6pm PT / 9pm ET
- India – Friday, March 27 at 7:30am IST
- Singapore – Friday, March 27 at 10am SGT
- Australia – Friday, March 27 at 1pm AEDT
- New Zealand – Friday, March 27 at 3pm NZDT
You’ll notice that I’ve not included the UK here. That’s because March 26 is the same day that HBO Max UK actually launches.
Currently, we know that The Pitt season 1 will be available, but there’s been no confirmation about season 2.
When do new episodes of The Pitt season 2 come out?
New episodes of The Pitt will make landfall every Thursday in the US and on Fridays everywhere else. Here are the all-important dates you need to know about:
- Episode 1: out now
- Episode 2: out now
- Episode 3: out now
- Episode 4: out now
- Episode 5: out now
- Episode 6: out now
- Episode 7: out now
- Episode 8: out now
- Episode 9: out now
- Episode 10: out now
- Episode 11: out now
- Episode 12: March 26/27
- Episode 13: April 2/3
- Episode 14: April 9/10
- Episode 15: April 16/17