Before heading on a trip to Tahoe last weekend, GM offered me the use of the company’s 9,000-pound monument to excess – the new 2026 electric Escalade IQL (starting at $130,405) – for a week to test-drive. Before you continue, note that I’m not a professional car reviewer. TechCrunch has excellent transportation writers; I am not one of them. I do, however, drive an electric car.
I was immediately game. I’d first glimpsed one last summer at a car show, where some regional car dealers had stationed themselves at the end of a long field dotted with exquisite vintage automobiles. My immediate reaction was “Jesus, that’s enormous,” followed by a surprising admiration for its design, which, despite its scale, shows restraint. For lack of a better word, I’m going to say it’s “strapping.” Its proportions just work.
My excitement waned pretty quickly when the car was dropped off at my house a day before our departure time. This thing is a monstrosity — at 228.5 inches long and 94.1 inches wide, it made our own cars look like toys. My first apartment in San Francisco was smaller. Trying to drive it up my driveway was a little harrowing, too; it’s so big, and its hood is so high, that if you’re ascending a road at a certain slope – we live midway down a hill; our mailbox is at the top of it – you can’t see whatever is directly in front of the car.
I thought about just leaving it in the driveway for the duration of the trip. The other alternative was doing what I could to grow more comfortable with the prospect of driving it 200 miles to Tahoe City, so I tooled around in it that night and the next day, picking up dinner, heading to an exercise class — just basic stuff around town. When I ran into a friend on the street, I volunteered as quickly as possible that this was not my new car, that I was going to possibly review it, and wasn’t its size ridiculous? It felt like a tank. I thought: other than hotels that use SUVs like the Escalade to ferry guests around, what kind of monster chooses a car like this?
Five days later, it turns out that I am that kind of monster.
Image Credits: Connie Loizos
Look, I don’t know how or when I fell for this car. If I’d written this review after two days, it would read very differently. Even now, I’m not so blind that I don’t see its shortcomings.
It was the Escalade’s performance in a terrible snowstorm that really won my heart, but let me walk you through the steps between “Ugh, this car is a tank” and “Yes! This car is a tank.”
Just getting into it requires a little more exertion than would seem to make sense. I’m fairly athletic and I still found myself wondering if this thing shouldn’t come with an automated step stool.
Inside is where digital maximalism does its work. The dashboard is anchored by a 55-inch curved LED screen with 8K resolution that reads less like a car display and more like a situation room. Front passengers get their own screens. Second-row passengers also get 12.6-inch personal screens along with stowable tray tables, dual wireless chargers, and — with the most lavish version of the car — massage seats that will make them forget they’re in a vehicle at all. Google Maps handles navigation. And the polarized screen technology deserves its own praise: while one of my kids binge-watched Hulu in the front seat, not a frame of it leaked into my sightline from behind the wheel.
The cabin itself is built around the premise that no one inside should feel crowded, and it delivers. Front legroom stretches to 45.2 inches; the second row offers 41.3; even the third row manages 32.3 inches. Seven adults could share this machine for a long while without fraying each other’s nerves. Heated and ventilated leather seats with 14-way power adjustment come standard in the first two rows, and the whole operation runs on 5G Wi-Fi.
The car also comes standard with Super Cruise, GM’s hands-free driving system, which I’m not sure I quite figured out. True car reviewers seem to love it; when I tried it, the car felt like it was drifting to an alarming degree between the outer boundaries of the highway lane, and when that happens, it unleashes an escalating sequence of warnings. First, a red steering wheel icon materializes on-screen. Then your seat pulses haptic warnings against your rump. Ignore those and a chime — both reminder and reproach — fills the cabin. GM calls this impolite series a “driver takeover request.”
Did I mention the 38-speaker AKG Studio sound system? So good.
As for the exterior — this is a handsome giant, but it takes some getting used to. At first, I found the grille, which is just for show, almost comically imposing. This is definitely a car for people who are the boss, or want to be the boss, or want to look like the boss while privately dealing with existential crises. Pulling up to a glass-lined restaurant one night, I’m pretty sure I blinded half the patrons as I swung into a parking spot perpendicular to the building, the Escalade’s headlights flooding through the windows.
Then there is the light show the car launches whenever it detects you approaching via the key or the MyCadillac app. It’s as if it’s saying, “Hey, chief, where we headed?” before you’ve so much as touched a door handle. (In the vernacular of Cadillac, this is thanks to its “advanced, all-LED exterior lighting system,” highlighted by a “crystal shield” illuminated grille and crest, along with vertical LED headlamps and “choreography-capable tail lamps.”)
It is, objectively, a bit much. I loved it immediately.
Image Credits: Connie Loizos
Despite its size, the Escalade IQL is unexpectedly nimble. Not “sports car darting through traffic” nimble, but “I can’t quite believe something this colossal doesn’t handle like a battleship” nimble.
Now we arrive at the frustrations. The front trunk — or “frunk” in the lexicon of EV devotees — operates in mysterious and frustrating ways. Opening requires holding the button until completion. Release prematurely and it halts mid-ascent, frozen in automotive purgatory, forcing you to restart the entire sequence. Closing demands the same sustained pressure. The rear trunk, conversely, requires two distinct taps followed by immediate button abandonment. Hold too long and nothing happens.
Relatedly, the vehicle twice refused to power down after I’d finished driving. The car simply sat there, running, even after I’d shifted into park and opened the door (which is supposed to tell the car to turn off). One clunky workaround: open the frunk, close the frunk, shift into drive, then park, then exit entirely.
As for the software, it’s absolutely fine unless you’ve owned a Tesla, in which case, prepare for disappointment. This seems to be true across the board — everyone I know who owns both a Tesla and another EV, no matter how high end, says the same thing. Once you’ve internalized how effortlessly Tesla’s software dissolves barriers between intention and execution, every other automaker’s software feels like a compromise.
Which brings us to the nadir of the trip: charging in Tahoe during winter. For all its virtues, the Escalade IQL is, by any measure, a thirsty machine. The battery is a 205 kWh pack — enormous, and it needs to be, because the car burns through roughly 45 kWh per 100 miles, which is considerably more than comparable electric SUVs. Cadillac estimates 460 miles of range on a full charge, and in ideal conditions that holds up. Tahoe in winter, however, is not ideal conditions. We’d also arrived with less charge than we should have. A series of side trips on the way up, including an emergency detour to find shirts for a family member who had packed none, had eaten into the battery more than expected. By the time we needed to charge, we genuinely needed to charge.
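For what it’s worth, Cadillac’s numbers are internally consistent; a quick back-of-the-envelope check (in Python, with a winter-range penalty that is my own rough assumption, not a GM figure):

```python
# Back-of-the-envelope range check using the figures quoted above.
PACK_KWH = 205               # battery capacity, per Cadillac
CONSUMPTION_KWH_100MI = 45   # rough observed consumption per 100 miles

ideal_range_mi = PACK_KWH / CONSUMPTION_KWH_100MI * 100
print(f"Ideal-condition range: {ideal_range_mi:.0f} miles")  # ~456, close to the claimed 460

# Cold weather can cut EV range substantially; a ~30% winter penalty
# (an illustrative assumption) shows why Tahoe in a snowstorm hurt.
winter_range_mi = ideal_range_mi * 0.7
print(f"Rough winter range: {winter_range_mi:.0f} miles")
```

At that winter figure, arriving with a depleted battery leaves very little margin for side trips.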
We approached a Tesla Supercharger in Tahoe City that appeared on the MyCadillac app, but when we plugged in to the designated stall, nothing happened. We searched for answers, discovering that even Tesla stations that accept non-Tesla vehicles throttle charging to as little as 6 kilowatts anyway, but it was a frustrating experience. A nearby EVGo had shuttered a month prior. ChargePoint’s two units at the Tahoe City Public Utility lot were, respectively, broken and willing to connect but not to actually charge anything. We briefly contemplated a 35-mile drive to Incline Village, did the math on what stranded would actually look like, and decided against it. Then I discovered an Electrify America station 12 miles away. We drove through gathering snow, arrived shortly before 11 p.m., and it worked. We sat there for an hour fighting exhaustion before driving home.
The following morning revealed another issue via an app alert: tire pressure had dropped to 53 and 56 PSI in the front (recommended: 61) and 62 PSI in the rear (recommended: 68). I have no idea whether the car had been delivered that way or whether something else was going on — either way, it meant someone standing at a gas station filling tires while being pelted directly in the face with ice. (That someone was my husband.) The tires held steady after that, even as the week kept doing its worst. For a family trip, it was going great.
At this point, in fact, I would have told you that the Escalade IQL is unquestionably luxurious and ideal for families of four or more who value space and technology. I would tell you it came burdened by real tradeoffs: forward visibility obstructed by its commanding hood, parking challenges inherent to its dimensions, limited charging infrastructure for a machine this ravenous, and tires tasked with supporting 9,000 pounds. It’s a beautiful car, I would have said, but it’s not for me.
But the snow that had started to fall kept falling. Within two days, eight feet had accumulated, making it impossible to ski — the entire point of the trip — and terrifying to drive. Except I found that I wasn’t terrified because we had the Escalade, which, because of its weight, felt like driving a tank through the snow. What could have been harrowing felt serene. It was quiet, it was strong, it was taking charge in a bad situation.
I also adjusted to the size. By the end of this past week I had stopped mouthing “I’m sorry” to whoever was waiting for me to figure out where to park it. I had stopped caring what it said about me that I was driving a car whose entire design philosophy is: the owner of this vehicle is not waiting in line. Eight feet of snow had fallen, we needed groceries, and I was the one with the tank, suckers! I could sense my husband falling for the car, too.
Image Credits: Connie Loizos
Then, as tends to happen in Tahoe, the snow stopped all at once and the sun came out, and the Escalade was just a very dirty car sitting in the driveway (sorry, GM!). It was in this moment that I realized: I still like it, and it’s not because of the emergency alone. I love riding high, with the speaker system flooding the car with a favorite soundtrack. That light show still gets me. The car’s long, curved LED screen is a marvel, and it’s far from the only one.
The frunk is still unhinged. I won’t soon forget the panic of not being able to charge the car where I thought I could. Parking this thing is truly an exercise in patience. I have strong opinions about unnecessary consumption. None of that has changed.
I just also, somehow, want this car, so when the GM middleman comes to collect it, I may hide it under a tarp — a very large tarp — and tell him he has the wrong address.
Nvidia on Monday unveiled a deskside supercomputer powerful enough to run AI models with up to one trillion parameters — roughly the scale of GPT-4 — without touching the cloud. The machine, called the DGX Station, packs 748 gigabytes of coherent memory and 20 petaflops of compute into a box that sits next to a monitor, and it may be the most significant personal computing product since the original Mac Pro convinced creative professionals to abandon workstations.
The announcement, made at the company’s annual GTC conference in San Jose, lands at a moment when the AI industry is grappling with a fundamental tension: the most powerful models in the world require enormous data center infrastructure, but the developers and enterprises building on those models increasingly want to keep their data, their agents, and their intellectual property local. The DGX Station is Nvidia’s answer — a six-figure machine that collapses the distance between AI’s frontier and a single engineer’s desk.
What 20 petaflops on your desktop actually means
The DGX Station is built around the new GB300 Grace Blackwell Ultra Desktop Superchip, which fuses a 72-core Grace CPU and a Blackwell Ultra GPU through Nvidia’s NVLink-C2C interconnect. That link provides 1.8 terabytes per second of coherent bandwidth between the two processors — seven times the speed of PCIe Gen 6 — which means the CPU and GPU share a single, seamless pool of memory without the bottlenecks that typically cripple desktop AI work.
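The “seven times” multiplier checks out if you assume the baseline is a PCIe Gen 6 x16 link at roughly 256 GB/s of bidirectional bandwidth (my assumption; Nvidia doesn’t state the comparison point):

```python
# Sanity-check the "7x the speed of PCIe Gen 6" bandwidth claim.
NVLINK_C2C_GBS = 1800      # 1.8 TB/s coherent bandwidth, per Nvidia
PCIE_GEN6_X16_GBS = 256    # assumed x16 bidirectional baseline (~128 GB/s each way)

ratio = NVLINK_C2C_GBS / PCIE_GEN6_X16_GBS
print(f"NVLink-C2C vs PCIe Gen 6 x16: {ratio:.1f}x")  # ~7x
```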
Twenty petaflops — 20 quadrillion operations per second — would have ranked this machine among the world’s top supercomputers less than a decade ago. The Summit system at Oak Ridge National Laboratory, which held the global No. 1 spot in 2018, delivered roughly ten times that performance but occupied a room the size of two basketball courts. Nvidia is packaging a meaningful fraction of that capability into something that plugs into a wall outlet.
The 748 GB of unified memory is arguably the more important number. Trillion-parameter models are enormous neural networks that must be loaded entirely into memory to run. Without sufficient memory, no amount of processing speed matters — the model simply won’t fit. The DGX Station clears that bar, and it does so with a coherent architecture that eliminates the latency penalties of shuttling data between CPU and GPU memory pools.
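A rough sizing sketch shows why: a trillion parameters only fit in 748 GB once quantized to 4-bit precision (standard bytes-per-parameter figures; activation and KV-cache overhead ignored for simplicity):

```python
# Rough memory footprint of a 1-trillion-parameter model at common precisions.
PARAMS = 1_000_000_000_000
MEMORY_GB = 748

for name, bytes_per_param in [("FP16", 2), ("FP8", 1), ("FP4/INT4", 0.5)]:
    size_gb = PARAMS * bytes_per_param / 1e9
    verdict = "fits" if size_gb <= MEMORY_GB else "does not fit"
    print(f"{name}: {size_gb:.0f} GB -> {verdict} in {MEMORY_GB} GB")
```

Only the 4-bit variant (500 GB) clears the bar, which is typical of how trillion-parameter open models are run on single machines today.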
Always-on agents need always-on hardware
Nvidia designed the DGX Station explicitly for what it sees as the next phase of AI: autonomous agents that reason, plan, write code, and execute tasks continuously — not just systems that respond to prompts. Every major announcement at GTC 2026 reinforced this “agentic AI” thesis, and the DGX Station is where those agents are meant to be built and run.
The key pairing is NemoClaw, a new open-source stack that Nvidia also announced Monday. NemoClaw bundles Nvidia’s Nemotron open models with OpenShell, a secure runtime that enforces policy-based security, network, and privacy guardrails for autonomous agents. A single command installs the entire stack. Jensen Huang, Nvidia’s founder and CEO, framed the combination in unmistakable terms, calling OpenClaw — the broader agent platform NemoClaw supports — “the operating system for personal AI” and comparing it directly to Mac and Windows.
The argument is straightforward: cloud instances spin up and down on demand, but always-on agents need persistent compute, persistent memory, and persistent state. A machine under your desk, running 24/7 with local data and local models inside a security sandbox, is architecturally better suited to that workload than a rented GPU in someone else’s data center. The DGX Station can operate as a personal supercomputer for a solo developer or as a shared compute node for teams, and it supports air-gapped configurations for classified or regulated environments where data can never leave the building.
From desk prototype to data center production in zero rewrites
One of the cleverest aspects of the DGX Station’s design is what Nvidia calls architectural continuity. Applications built on the machine migrate seamlessly to the company’s GB300 NVL72 data center systems — 72-GPU racks designed for hyperscale AI factories — without rearchitecting a single line of code. Nvidia is selling a vertically integrated pipeline: prototype at your desk, then scale to the cloud when you’re ready.
This matters because the biggest hidden cost in AI development today isn’t compute — it’s the engineering time lost to rewriting code for different hardware configurations. A model fine-tuned on a local GPU cluster often requires substantial rework to deploy on cloud infrastructure with different memory architectures, networking stacks, and software dependencies. The DGX Station eliminates that friction by running the same NVIDIA AI software stack that powers every tier of Nvidia’s infrastructure, from the DGX Spark to the Vera Rubin NVL72.
Nvidia also expanded the DGX Spark, the Station’s smaller sibling, with new clustering support. Up to four Spark units can now operate as a unified system with near-linear performance scaling — a “desktop data center” that fits on a conference table without rack infrastructure or an IT ticket. For teams that need to fine-tune mid-size models or develop smaller-scale agents, clustered Sparks offer a credible departmental AI platform at a fraction of the Station’s cost.
The early buyers reveal where the market is heading
The initial customer roster for DGX Station maps the industries where AI is transitioning fastest from experiment to daily operating tool. Snowflake is using the system to locally test its open-source Arctic training framework. EPRI, the Electric Power Research Institute, is advancing AI-powered weather forecasting to strengthen electrical grid reliability. Medivis is integrating vision language models into surgical workflows. Microsoft Research and Cornell have deployed the systems for hands-on AI training at scale.
Systems are available to order now and will ship in the coming months from ASUS, Dell Technologies, GIGABYTE, MSI, and Supermicro, with HP joining later in the year. Nvidia hasn’t disclosed pricing, but the GB300 components and the company’s historical DGX pricing suggest a six-figure investment — expensive by workstation standards, but remarkably cheap compared to the cloud GPU costs of running trillion-parameter inference at scale.
The list of supported models underscores how open the AI ecosystem has become: developers can run and fine-tune OpenAI’s gpt-oss-120b, Google Gemma 3, Qwen3, Mistral Large 3, DeepSeek V3.2, and Nvidia’s own Nemotron models, among others. The DGX Station is model-agnostic by design — a hardware Switzerland in an industry where model allegiances shift quarterly.
Nvidia’s real strategy: own every layer of the AI stack, from orbit to office
The DGX Station didn’t arrive in a vacuum. It was one piece of a sweeping set of GTC 2026 announcements that collectively map Nvidia’s ambition to supply AI compute at literally every physical scale.
At the top, Nvidia unveiled the Vera Rubin platform — seven new chips in full production — anchored by the Vera Rubin NVL72 rack, which integrates 72 next-generation Rubin GPUs and claims up to 10x higher inference throughput per watt compared to the current Blackwell generation. The Vera CPU, with 88 custom Olympus cores, targets the orchestration layer that agentic workloads increasingly demand. At the far frontier, Nvidia announced the Vera Rubin Space Module for orbital data centers, delivering 25x more AI compute for space-based inference than the H100.
Between orbit and office, Nvidia revealed partnerships spanning Adobe for creative AI, automakers like BYD and Nissan for Level 4 autonomous vehicles, a coalition with Mistral AI and seven other labs to build open frontier models, and Dynamo 1.0, an open-source inference operating system already adopted by AWS, Azure, Google Cloud, and a roster of AI-native companies including Cursor and Perplexity.
The pattern is unmistakable: Nvidia wants to be the computing platform — hardware, software, and models — for every AI workload, everywhere. The DGX Station is the piece that fills the gap between the cloud and the individual.
The cloud isn’t dead, but its monopoly on serious AI work is ending
For the past several years, the default assumption in AI has been that serious work requires cloud GPU instances — renting Nvidia hardware from AWS, Azure, or Google Cloud. That model works, but it carries real costs: data egress fees, latency, security exposure from sending proprietary data to third-party infrastructure, and the fundamental loss of control inherent in renting someone else’s computer.
The DGX Station doesn’t kill the cloud — Nvidia’s data center business dwarfs its desktop revenue and is accelerating. But it creates a credible local alternative for an important and growing category of workloads. Training a frontier model from scratch still demands thousands of GPUs in a warehouse. Fine-tuning a trillion-parameter open model on proprietary data? Running inference for an internal agent that processes sensitive documents? Prototyping before committing to cloud spend? A machine under your desk starts to look like the rational choice.
This is the strategic elegance of the product: it expands Nvidia’s addressable market into personal AI infrastructure while reinforcing the cloud business, because everything built locally is designed to scale up to Nvidia’s data center platforms. It’s not cloud versus desk. It’s cloud and desk, and Nvidia supplies both.
A supercomputer on every desk — and an agent that never sleeps on top of it
The PC revolution’s defining slogan was “a computer on every desk and in every home.” Four decades later, Nvidia is updating the premise with an uncomfortable escalation. The DGX Station puts genuine supercomputing power — the kind that ran national laboratories — beside a keyboard, and NemoClaw puts an autonomous AI agent on top of it that runs around the clock, writing code, calling tools, and completing tasks while its owner sleeps.
Whether that future is exhilarating or unsettling depends on your vantage point. But one thing is no longer debatable: the infrastructure required to build, run, and own frontier AI just moved from the server room to the desk drawer. And the company that sells nearly every serious AI chip on the planet just made sure it sells the desk drawer, too.
GTC 2026 brought some news that caught a lot of people off guard. Three major automakers have signed on to work with NVIDIA to bring autonomous driving to their vehicles in defined conditions, and sooner than most would have expected. BYD, Nissan, and Isuzu are all on board, each bringing their own strengths to the table as the technology edges closer to becoming an everyday reality on public roads.
BYD is no stranger to pushing technology forward, and they plan to roll the system out across their next generation of models, the ones already turning heads on the road. Nissan is taking a broader approach, bringing it to their entire passenger vehicle lineup, while Isuzu is focused on the commercial side of things, teaming up with TIER IV to keep their buses running smoothly with minimal need for human supervision.
Drive Hyperion is a full system that includes sensors, processing units, and software that are ready to use right out of the box. That means automakers don’t have to start from scratch; instead, they can take the parts that work and adapt them to their own vehicles. It all adds up to L4 autonomy, in which the car does all of the driving in particular scenarios such as highways or mapped urban areas, eliminating the need for someone to be on high alert at all times.
Fourteen high-definition cameras provide a continuous 360-degree picture of everything around the automobile, while nine radar units monitor distances and speeds even in bad weather. One LiDAR scanner creates precise 3D images of the environment around the car, while twelve ultrasonic detectors handle short-range tasks such as parking and merging. At the center of it all are two computers powered by the latest NVIDIA chips, capable of handling over 2 trillion operations per second. And if one of them fails, there is a backup system in place to keep everything going smoothly.
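The failover behavior described above reduces, in highly simplified form, to logic like this (purely illustrative Python, not NVIDIA’s implementation):

```python
# Highly simplified illustration of dual-computer redundancy:
# the backup takes over if the primary stops reporting healthy.
class Computer:
    def __init__(self, name):
        self.name = name
        self.healthy = True

def active_computer(primary, backup):
    """Return whichever compute unit should be driving right now."""
    if primary.healthy:
        return primary
    if backup.healthy:
        return backup
    # With both units down, an L4 system must hand control back safely.
    raise RuntimeError("both compute units failed; request driver takeover")

primary, backup = Computer("primary"), Computer("backup")
assert active_computer(primary, backup).name == "primary"
primary.healthy = False                  # simulate a primary fault
assert active_computer(primary, backup).name == "backup"
```

The real system adds health monitoring, state handover, and timing guarantees, but the principle is the same: no single computer failure should take the vehicle offline.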
Raw sensor data is fed directly into the computers, where software develops an understanding of the vehicle’s location and surroundings. Separate parts of the system then weigh the options by looking at what the cameras show, what the vehicle has done before, the planned route, and even what the navigation system says. NVIDIA also offers an open model called Alpamayo that traces out every step of this decision-making process, making it easier for developers to refine the system and verify it’s behaving correctly.
Before the system ever touches a real car, engineers can test it in a digital environment, using real-world data to reproduce difficult or unusual circumstances, which helps them catch issues that might otherwise only surface years later on the road. Safety is one of the most important aspects, and to that end NVIDIA has created an operating system called Halos that wraps several layers of protection around the entire stack. It is designed to meet the strictest automotive standards and incorporates active monitoring, a constant safety net to prevent anything from going wrong. Early adopters have already begun to put the platform into action: ride-sharing services are preparing to debut fleets of robotaxis and delivery vehicles in dozens of locations beginning in 2027.
Apple launched the $599 MacBook Neo on March 11, a budget Mac powered by the A18 Pro chip from the iPhone 16 Pro, 8GB of unified memory, and a 13-inch screen. Though it offers decent specifications for the price, there’s a catch: the storage tops out at 512GB.
However, a Chinese repair technician, DirectorFeng, has swapped the default NAND chip for a 1TB chip, effectively unlocking the MacBook Neo’s storage. The technician has posted the entire video on a YouTube channel.
How did DirectorFeng pull this off?
DirectorFeng replaced the NAND flash chip soldered to the MacBook’s logic board and then reflashed macOS so it would recognize the third-party drive and its storage. The process involved removing the original chip, cleaning the solder pads, and installing a higher-capacity replacement using professional repair tools.
This wasn’t a screwdriver-and-YouTube-tutorial situation; this is microsurgery on a logic board, the kind that makes most people’s palms sweat. However, once reassembled, macOS recognized the larger-capacity NAND drive without firmware issues, and storage performance appeared normal as well.
The storage, as seen in the video, goes up from 256GB to 994.61GB (marketed as 1TB). Once the process was complete, the replaced drive offered read and write speeds of 1,551 MB/s and 1,506MB/s, respectively.
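The gap between the marketed 1TB and the reported 994.61GB is the usual decimal-gigabyte accounting plus a sliver of formatting and firmware overhead; a quick check of the math:

```python
# Marketed vs. reported capacity for the swapped-in NAND chip.
MARKETED_TB = 1
DECIMAL_GB = MARKETED_TB * 1000   # drive makers count 1 TB as 1,000 decimal GB
REPORTED_GB = 994.61              # capacity shown in the video after the swap

overhead_pct = (DECIMAL_GB - REPORTED_GB) / DECIMAL_GB * 100
print(f"Formatting/firmware overhead: {overhead_pct:.2f}%")  # ~0.54%
```

In other words, the swap delivered essentially the chip’s full rated capacity.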
Should you try upgrading your MacBook Neo’s storage?
It’s worth noting that Apple uses soldered NAND rather than a removable SSD, which means any capacity change requires microsoldering and would almost certainly void the manufacturer’s warranty. That said, the successful storage upgrade indicates that the Neo is easier to work on than other MacBooks.
Is this a consumer-friendly upgrade? No. Should you try upgrading your MacBook Neo’s storage yourself? Certainly not. The only key takeaway here is that the device works with third-party storage without any firmware issues. So, a storage upgrade, at least in theory, is possible.
We’ve discussed at length how Trump’s “fix” for TikTok’s problems basically involved forcing the sale of the platform to his greedy billionaire buddies (with the help of pathetic Democrats). The deal fixed none of the real issues Trumpland pretended to be concerned about (national security, privacy, propaganda), and China still maintains a significant ownership stake.
As the Wall Street Journal notes (paywalled), the “Trump administration” is set to receive a $10 billion fee from investors for facilitating the deal. The new owners, which include Trump’s friend Larry Ellison, private equity giant Silver Lake, and MGX (controlled by the UAE) are funneling the payments, which will total $10 billion, to the “Treasury Department”:
“They and other backers paid the Treasury Department about $2.5 billion when the deal closed in January and are set to make several additional payments until hitting the $10 billion total, the people said.”
We, of course, don’t actually know where that money is going or what it will actually be used for. You can confidently assume it will eventually wind its way into Trump’s pocket somehow, since the entirety of U.S. democratic oversight has been wholly corrupted by these whiny zealots, who are busy stripping the country for parts and selling it for scrap off the back loading dock.
Rupert Murdoch’s Wall Street Journal goes to comical lengths to normalize this bribe, though they do at least try to express how “unprecedented” this sort of thing is by citing an unnamed, ambiguous historian:
“The $10 billion payment would be nearly unprecedented for a government helping arrange a transaction, historians have said. Vice President JD Vance previously said the new TikTok entity running the U.S. operations is valued at about $14 billion in the deal, which some tech analysts have said dramatically undervalues the company.”
The outlet goes on to note that the $10 billion fee absolutely towers over any remotely comparable historical precedent:
“Investment bankers advising on a typical deal receive fees of less than 1% of the transaction value, and the percentage generally gets smaller as the deal size increases. Bank of America is in line to make some $130 million for advising railroad operator Norfolk Southern on its $71.5 billion sale to Union Pacific, one of the largest fees on record for a single bank on a deal.
Administration officials have said the fee is justified given Trump’s role in saving TikTok in the U.S. and navigating negotiations with China to get the deal done while addressing the security concerns of lawmakers.”
The Wall Street Journal can’t be bothered to note that the deal fixed absolutely none of the purported concerns raised about TikTok. China still has a major ownership stake, and the new owners seem every bit as hostile to democracy and free expression as the worst Chinese autocrat (they’re just not honest enough with themselves or you to admit it yet).
All of these owners are equally just as likely to engage in privacy and surveillance violations as the Chinese (which again, despite a lot of pretense, did not have full direct control over the app). In fact, you could even argue that the previous TikTok was likely to be better on all of these subjects because they were at least trying to adhere to ethical standards to remain operating in the country.
TikTok’s new American owners are very up front about their plans to demolish the entirety of regulatory autonomy, corporate oversight, and consumer protection, leaving them with absolute freedom to pursue whatever unethical bullshit they can dream up. I suspect they’ll try to leave things alone for a year (to avoid a mass exodus of young people) before their goals become… unsubtle.
Again, Trump, with Democratic help, managed to steal the world’s most popular short form video app and offload it to his radical billionaire friends under the pretense he was protecting national security and U.S. consumer privacy. Even before you get to this $10 billion bribe, it’s easily one of the ugliest examples of corruption and U.S. tech policy dysfunction we’ve ever seen.
I like to convince myself history will not be kind.
Every ham radio shack needs a clock; ideally one with operator-friendly features like multiple time zones. [cburns42] found that most solutions relied too much on an internet connection for his liking, so in true hacker fashion he decided to make his own: the operator-oriented Ham Clock CYD.
A tabbed interface goes well with the touchscreen LCD.
The Ham Clock CYD is so named for being based on the Cheap Yellow Display (CYD), an economical ESP32-based color touchscreen LCD which provides most of the core functionality. The only extra hardware is a BME280 temperature and humidity sensor, and a battery-backed DS3231 RTC module, ensuring that accurate time is kept even when the device is otherwise powered off.
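The DS3231 stores time in binary-coded decimal (BCD) registers, so any firmware reading it over I2C needs a small decode step. A minimal sketch of that conversion (Python for illustration; the actual project is ESP32 firmware):

```python
def bcd_to_dec(b):
    """Decode one BCD byte as read from a DS3231 time register.

    Each register packs two decimal digits: the high nibble is the
    tens digit, the low nibble is the ones digit.
    """
    return (b >> 4) * 10 + (b & 0x0F)

def dec_to_bcd(d):
    """Encode a decimal value (0-99) for writing back to the RTC."""
    return ((d // 10) << 4) | (d % 10)

# Example: a seconds register holding 0x42 means 42 seconds.
assert bcd_to_dec(0x42) == 42
assert dec_to_bcd(59) == 0x59
```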
It displays a load of useful operator-oriented data on the touchscreen LCD, and even has a web-based configuration page for ease of use. While the Ham Clock is a standalone device that does not depend on internet access in order to function, it does have the ability to make the most of it if available. When it has internet access over the built-in WiFi, the display incorporates specialized amateur radio data including N0NBH solar forecasts and calculated VHF/HF band conditions alongside standard meteorological data.
The CYD, sensor, and RTC are all very affordable pieces of hardware, making this clock an extremely economical build. Check out the GitHub repository for everything you’ll need to make your own, and maybe even put your own spin on it with a custom enclosure. On the other hand, if you prefer your radio-themed clocks more on the minimalist side, this Morse code clock might be right up your alley.
It’s hard to believe that Scrubs season 10 will hit its halfway point with its next episode. By all accounts, the hospital-set sitcom is performing pretty well, which raises the question of why more entries weren’t greenlit.
But that’s a debate for another day. Right now, you’re here to find out when season 10’s fifth episode, titled ‘My Angel’, will premiere on some of the world’s best streaming services. Don’t delay, then — read on for more details!
What is the launch time for Scrubs season 10 episode 5?
Like its four predecessors, Scrubs‘ next chapter will debut on US TV network ABC before it heads to a streaming platform near you. If you plan on catching it on ABC, then you’ll need to tune in on Wednesday, March 18, at 5pm PT / 8pm ET.
Following its premiere on that network, ‘My Angel’ will drop on Hulu (US) and Disney+ (internationally). This week’s installment won’t be available on either service until a day later, on Thursday, March 19, so don’t load up either one before then.
As for the time it’ll be released, here’s what you need to know:
US — Thursday, March 19 at 12am PT / 3am ET
Canada — Thursday, March 19 at 12am PT / 3am ET
UK — Thursday, March 19 at 7am GMT
India — Thursday, March 19 at 12:30pm IST
Singapore — Thursday, March 19 at 3pm SGT
Australia — Thursday, March 19 at 6pm AEDT
New Zealand — Thursday, March 19 at 8pm NZDT
When can I watch new episodes of Scrubs season 10?
“Thanks for scrubbing in, see you next week 🫶” (via X, March 12, 2026)
Unless there’s a sudden change to this season’s full release schedule, new episodes will continue to air on Wednesdays (ABC) and Thursdays (Hulu and Disney+). That’ll remain the case until its finale drops on April 15/16, too.
For a full breakdown of when each new entry of Scrubs‘ 10th season will premiere, check out the list below:
Scrubs season 10 episode 1 — out now
Scrubs season 10 episode 2 — out now
Scrubs season 10 episode 3 — out now
Scrubs season 10 episode 4 — out now
Scrubs season 10 episode 5 — Wednesday, March 18 / Thursday, March 19
Scrubs season 10 episode 6 — Wednesday, March 25 / Thursday, March 26
Scrubs season 10 episode 7 — Wednesday, April 1 / Thursday, April 2
Scrubs season 10 episode 8 — Wednesday, April 8 / Thursday, April 9
Scrubs season 10 episode 9 — Wednesday, April 15 / Thursday, April 16
At its GTC 2026 event, NVIDIA officially announced DLSS 5, a new version of its Deep Learning Super Sampling technology. The next generation of its AI-powered graphics technology introduces neural rendering techniques designed to create more realistic lighting and materials in games. The feature is expected to launch later this year.
DLSS has long been used to upscale lower-resolution frames into higher-resolution images using AI, boosting performance while maintaining visual quality, with DLSS 4.5 being the most recent update. The new version takes that concept further by using neural networks to assist with parts of the rendering pipeline itself, rather than simply reconstructing pixels.
What’s new in DLSS 5?
The biggest shift with DLSS 5 is the introduction of neural rendering, a technique where AI helps generate elements of a scene, such as lighting, materials, and surface detail, rather than relying entirely on traditional rendering methods. The system can produce photorealistic lighting effects and more accurate material reflections, potentially improving realism in ray-traced environments while maintaining high frame rates.
The technology builds on earlier DLSS features like Super Resolution, Ray Reconstruction, and Frame Generation, but moves further toward an AI-assisted graphics pipeline where neural networks play a bigger role in how scenes are constructed.
Which hardware will support DLSS 5?
NVIDIA hasn’t officially confirmed which GPU architectures will support DLSS 5 yet, but the company has said the technology will arrive alongside RTX 50-series GPUs later this year. According to Digital Foundry, NVIDIA described the lighting improvements shown in its demo as “transformational,” with the feature expected to roll out around Fall 2026.
Interestingly, the demo setup used to showcase DLSS 5 wasn’t running on a typical gaming PC. Digital Foundry reports that NVIDIA used two GeForce RTX 5090 GPUs: one dedicated to running the game itself, the other handling the DLSS 5 neural-rendering workload. This setup is currently required because the technology still needs significant optimization, particularly in terms of performance efficiency and VRAM usage.
That said, NVIDIA says DLSS 5 is ultimately designed to run on a single GPU, and that’s how it’s expected to ship when the technology launches publicly later this year.
OnePlus could soon launch a budget phone that seriously threatens the feature-packed mid-rangers from other brands. The company has already kicked off the handset’s teaser campaign in India, dropping cryptic visuals of a silhouetted smartphone alongside a tagline that reads “Entering the Nord era soon.”
The handset, purported to be Nord 6, could put other mid-rangers to shame, or at least that is what the leaked specifications suggest. Before we talk about the hardware upgrades, it’s important to note that the Nord 6 is believed to be a rebranded version of the OnePlus Turbo 6, which is available only in China.
So, what’s actually inside the upcoming OnePlus midranger?
With that said, let’s go through the Nord 6’s leaked specifications one by one. First, the upcoming smartphone could sport a 9,000mAh battery that supports 80W wired charging.
Currently, the OnePlus 15R holds the crown for the biggest battery in a OnePlus phone, but it might not hold that position for much longer.
The Nord 6’s battery could reportedly deliver 12 to 14 hours of screen-on time between charges, making it a two-day phone for most users.
Upgraded specs could result in a serious price jump
On the performance front, the handset could offer a serious jump, thanks to the Snapdragon 8s Gen 4 (4nm) chipset, whose GPU is powerful enough to support high-frame-rate gaming.
For capturing pictures, the smartphone could come with a 50MP primary camera with optical image stabilization and a 16MP selfie shooter. Finally, the OnePlus Nord 6 could also feature a 6.78-inch 1.5K AMOLED screen that supports a refresh rate of up to 165Hz.
However, all the upgraded specifications could result in a serious bump in the phone’s price. The Nord 6’s price tag could be around $500 in India, where it is confirmed to launch in early April. A United States launch is still in question, though.
Workhuman’s Ciara Walsh discusses career development and her advice to others looking to take a similar professional route.
“Growing up, I was always interested in science and engineering, so I knew I would end up in some kind of STEM-related field, but I had quite a difficult time figuring out which direction to go in when approaching my career initially,” said senior software engineer at Workhuman, Ciara Walsh.
Encouraged to build computing skills from a young age, she joined a local CoderDojo, which is a community-based coding club, where she helped the younger children with basic computer skills and later taught her own classes. From there, she realised that she could have a future in software.
“The connection that this could be my career eventually came through my late grandmother, who suggested it one afternoon while I was struggling with my CAO application. That conversation was a lightbulb moment for me and my whole career journey has followed from it.”
What do you enjoy most about your job?
I really enjoy problem solving and having to really think about how to approach solutions. Software engineering is essentially problem solving as a career in many ways, whether that’s figuring out how to build a new feature for users or triaging why a test is failing. At its core, what I do every day involves figuring out a way forward on some combination of puzzle and problem. For me, that’s really satisfying, and I love getting to the ‘aha’ moment at the end where it all works.
What’s the most exciting development you’ve witnessed in your sector?
I remember a meeting very early in my career which was centred on the ‘internet of things’ and how connected devices were going to change everything about daily life within the next 10 to 15 years. The conversation at that time was around how ambitious an idea it was, and how many technologies and tools would need to be invented to achieve even a quarter of the concepts being laid out at that stage. It’s been fascinating to be part of the industry since then and see many of the ideas that were being discussed in that meeting come to life in the real world.
The sheer number of technologies that we use daily now which simply didn’t exist when I started my career is amazing. It’s exciting to be part of a sector that moves this quickly, and I’m looking forward to seeing what the next 10 to 15 years brings us.
What’s been the hardest thing you’ve had to face in your career and how was it overcome?
The hardest thing I’ve had to face so far was the decision to step away from my career for a year, without knowing what came next.
In 2024, I decided to return to college and study for a master’s degree in electronic engineering. At that stage, the industry had slowed down quite a lot in terms of companies hiring, so stepping away from a job where I had a reasonable level of security was a big risk. However, I also felt that I needed to take that step back and spend time growing my knowledge and skills to be successful moving forward, especially given the direction that the industry has moved in, with AI and machine learning, so I took the risk.
During the course, I tried to ensure that I kept a balance between new topics I wanted to learn and those that I had some knowledge of but in which I could develop further depth, and this was of huge benefit to me because I managed to avoid losing my existing skills in the process of gaining new ones.
Having said that, the imposter syndrome and stress associated with that journey – particularly during the later stages, when my course had finished and I was trying to restart my career – wasn’t something I anticipated. I found it significantly more challenging than I expected and even after joining my current role it took some time to have full confidence in myself again. Looking back on it now though, I think the risk paid off, as I have a more solid understanding of some key concepts and – maybe more importantly – a stronger set of research skills, which will be useful going forward in my career.
If you had the power to change anything within the STEM sector, what would that be?
STEM is a very broad sector, so it’s hard to outline any specific things that I’d change across it all, but I think something I’d like to see celebrated and emphasised more is how creative many of the fields under the STEM umbrella are.
We tend to focus a lot on being data-driven and efficient, but the reality is that the majority of the work we do in STEM involves some kind of inventing and/or creative thinking. I think sometimes we lose sight of that amongst the deadlines and client requests, and we don’t leave ourselves enough space to be innovative and to really explore the crafts hidden behind the science and technology of it all. If I could change anything, it would be that we gave ourselves more space and time to be purely creative, rather than always doing the most efficient thing.
Hackathons are a great example of this, where time is given to just experiment and explore with the tools of the trade. I’ve been involved with multiple hackathon projects that ended up being deployed as full products after some polishing. Those only exist because the team members were given the space to think and explore outside the structure of the usual day-to-day.
How do you make connections with others in the STEM community?
I have been incredibly fortunate in my career so far when it comes to mentors and mentoring in general. I was a recipient of a women in technology scholarship during my undergraduate degree, which provided me with some amazing mentors from the very beginning. Their advice and guidance have stood the test of time at this stage, and I genuinely think I’m a better engineer because of all the people who’ve worked with me along my career path so far.
I’ve continued to benefit from mentoring of many different forms throughout my career, and have had the opportunity to mentor some people myself, which I think was equally beneficial to me. Mentoring others gives you so many opportunities to really explore your own growth, and for me it has often resulted in my own development in parallel with my mentees’.
What advice would you give to someone thinking about a career in your area?
I think the best advice I could give someone looking to go into software engineering as a career is to just start coding and experimenting with building simple programs. Start with something like Scratch so you get to learn the basic logic patterns, and then experiment with other languages and tools as you get comfortable. There are lots of free resources and tutorials online, and you can actually learn all the technical skills you need to know to do this job using them. I still use some of them when I need to learn something new for my role.
The other advice I would give someone considering this career is that software is always changing, and there are always new frameworks and tools to learn. To be a successful software engineer, you need to be willing to learn new things across your whole career. This can be challenging at times, but once you learn the general basics, it’s a lot easier than you might expect to transfer skills.
Looking for a Bluetooth speaker that’s portable, affordable, and waterproof? You can pick up the Tribit Stormbox Mini+ for just $33 from Amazon, a $7 discount from its usual price. It’s super portable, totally waterproof, and has big, bold sound that’s great for traveling, backyard parties, or bike rides, making it an easy pick for our favorite portable Bluetooth speaker.
While there are plenty of Bluetooth speakers that are table-worthy, there are far fewer competent speakers at this more pocketable footprint. At 4.69 inches tall and 3.58 inches in diameter, it’s just about the same size as the Ultimate Ears Wonderboom 4, but at less than half the price. It sports similar features too, with 360-degree speakers for complete sound no matter where you’re standing, albeit without quite as much clarity. You can also pair up multiple speakers for a proper stereo setup that will cover your whole backyard, a trick not a lot of speakers at this price point have up their sleeve.
Despite its compact size and affordable pricing, the Stormbox Mini+ is surprisingly sturdy. It has an IPX7 rating, which means you can submerge it fully underwater for up to 30 minutes, although it doesn’t float the right way up, so it’s probably better for surviving a quick rainstorm than powering a pool party. You can crank the volume without much distortion, although it will affect your battery life, which is typically just under 10 hours with the lights on and volume up. It also has a speakerphone mode, in case you want to give that one friend running late to the party some FOMO.
While the Stormbox Mini+ is marked down by $7 in all three colors, the black version has a lower starting price of $40, while the blue and green models started at $42 and are now marked down to $35. There’s also a coupon on the page for an extra 10% off, applied at checkout, but your mileage may vary. Make sure you check out our full roundup if you’re curious which other Bluetooth speakers we like, or if you’re willing to spend a bit more.