What started off more than five years ago as one-off bans in individual classrooms grew into statewide efforts to curb student cellphone use during school. Now, the idea of limiting children’s tech use has arrived at the Capitol steps in Washington, D.C., where bipartisan efforts are reaching even further by considering plans to ultimately ban kids under 13 from using social media at all.
The proposed legislation comes at a time when technology is being pushed harder than ever, both by tech companies and by the White House.
And it raises questions about whether and how lawmakers, educators and parents should draw distinctions about the various ways children use screens — for learning, for socializing and for entertainment.
The latest is a joint effort by Republican Sen. Ted Cruz of Texas and Democratic Sen. Brian Schatz of Hawaii. In a Jan. 15 listening session dubbed “Plugged Out: Examining the Impact of Technology on America’s Youth,” a panel of four experts spoke on the potential harms of screen time. The hearing covered considerable ground, from AI-enabled toys to the shuttered E-Rate program and the ideal minimum age to keep young kids off social media.
“It’s incredibly hard to be a kid right now; all the parents I know, myself included, are deeply concerned about all the time kids spend glued to screens, watching and reading insidious content that puts their minds and their bodies at risk,” Cruz said in the hearing. “Parents are fighting a constant battle to keep their children safe in a rapidly evolving digital world.”
“I think we were aware we had to monitor cellphone usage, but because of the pandemic, everyone was pushing kids in front of these tech vehicles and now we don’t know how to take them away,” says Annette Anderson, deputy director of the Johns Hopkins’ Center for Safe and Healthy Schools. “There has probably been a role to consider the federal piece of it for a while.”
The mid-January hearing called by Cruz through the Senate Committee on Commerce, Science, and Transportation did not have any legal weight, but it dovetailed with the proposed Kids Off Social Media Act from Cruz and Schatz. If passed, the act would ban children under 13 from social media sites and prohibit social media sites from recommending algorithm-based content to children under 17. And it would require schools to work “in good faith” to limit access to social media sites on their own networks. The bill has advanced and awaits a Senate vote; it was last taken up in 2024.
“It’s a real struggle to keep your kid offline when you’re told that, ‘All my friends are on Instagram or TikTok,’” Cruz said. “It’s incredibly hard to be the one parent who won’t let your kid have a phone or social media accounts. So, [this bill] says, ‘We’re going to hold Big Tech accountable to their terms of service.’”
The push comes as the executive branch presses for more artificial intelligence in the classroom, with President Trump’s “Advancing Artificial Intelligence Education for American Youth” executive order.
The order called for the creation of a task force, which was charged with establishing public-private partnerships to develop online resources that would help teachers and students with AI literacy and usage.
Brian Jacob, the co-director of University of Michigan’s Youth Policy Lab, believes the two initiatives can coexist, as they address two separate ideas. One expresses enthusiasm for applying AI to educational purposes, while the other centers on fears about screen time spent on non-educational uses, like watching social media videos.
“There is a bit of an odd nature of these things happening at the same time,” he says. “I think you could want students to be off devices more, but when they’re on them, [to be] utilizing AI or having AI be part of intelligent tutoring systems that would better assist students. I think in practice you could try and incorporate AI more into the education space while still limiting, having less, online time.”
School-focused organizations leaned into that nuance. In response to Cruz’s listening session, 17 organizations, not all of them tech-related — including the American Federation of Teachers, American Library Association and the National Education Association — pushed back on rhetoric about the dangers of technology. They appealed for continued federal support for educational technology and funding, and pointed toward edtech’s helpful role in the classroom.
“Because technology is now integral to the environments in which students live and learn, a school’s focus must be on intentional implementation rather than assumptions about ‘more’ or ‘less’ technology,” the organizations wrote in an open letter to Cruz prior to the hearing. “Effective learning depends on selecting the right tools to support specific instructional goals. Fragmented or inconsistent implementation — not technology itself — is what overwhelms teachers and families.”
The organizations argued that “‘screen time’ is not a single category and should not be evaluated as such,” adding that technology used in the classroom, that is “aligned to curriculum, guided by educators, and governed by locally developed school district privacy and security policies,” is “fundamentally different” than students using devices for entertainment purposes.
“It is essential to distinguish between largely unsupervised, entertainment-driven technology use at home and the intentional, monitored, and carefully curated use of technology in schools — where digital tools are employed to support learning and prepare students for future academic and workforce demands,” the letter says.
State Efforts Set the Foundation
The federal efforts, while new, build on legislation over the last few years from several states. As of last fall, more than half of the nation’s states have adopted a phone ban in schools, with most mandating that phones cannot be used during instructional time.
The efforts began school by school, such as the 2019 rule at California’s San Mateo High School requiring all 1,700 students to place their phones in pouches. The first statewide effort came in Florida in 2023, with a law that initially allowed students to use their phones during passing periods and at lunch but banned them in the classroom unless explicitly allowed for a lesson. It also banned social media apps on school computers and Wi-Fi networks.
Many early-adopter schools turned to phone pouches to curb cellphone use in the classroom. Source: Shutterstock/ChameleonsEye
As of late January, only five states do not have any statewide policies, with the majority having some sort of ban or restriction of cellphones in the classroom.
Phone bans represent a rare flashpoint of bipartisan agreement.
“Children’s safety online is not a partisan issue,” Sen. Maria Cantwell, Democrat of Washington, said during the Senate hearing. “Every parent, teacher, lawmaker, wants the same things. We want kids who are safe, healthy and able to thrive … But in the absence of federal legislation, states and governments have stepped up.”
States have begun ramping up restrictions, with some eyeing “bell-to-bell” laws that ban phones from the start of the school day through the end, including passing periods and lunchtime. Florida amended its 2023 law in 2025 to adopt the bell-to-bell language. Several other states, including Indiana and Kansas, are considering beefing up their restrictions.
But some dissenters suggest that it should be a school-by-school issue.
“States and legislators really are concerned, but I think it’s a challenge when you’re making state legislation [to weigh] how much do you want to mandate decisions,” says Jacob at the University of Michigan’s Youth Policy Lab. “Do you want to make every district do the exact same policy? I can see arguments for leaving it up to local leaders.”
The open letter from the school associations also pushed for local rather than federal control.
“Decisions about education devices, classroom technology, and local screen-use practices should remain in the hands of local educators and their families who best understand their own students’ needs,” the letter stated.
Lack of Consistency in Schools
Some states’ restrictions are stricter than others. According to the newly released “Phone-Free Schools State Report Card,” 17 states received a “B” for their bell-to-bell policies, with lower marks going to states that allow cellphones in accessible places or do not explicitly state where phones should be stored.
Both Jacob and Anderson of Johns Hopkins are concerned about the lack of explicit, consistent guidelines in schools.
“Everyone sees a need for some kind of limitation; what’s kind of crazy, and it’s the same with the artificial intelligence push, is it doesn’t look the same,” Anderson says. “It’s different from school to school, classroom to classroom, district to district. The lack of consistency makes it difficult to show the effect these bans have.”
Jacob worries the guidelines will place the burden on teachers.
“I fear a lot of schools will ban them but say ‘Kids have to keep them in their pockets and teachers have to police that,’ and that approach will be really tough to implement in any way,” he says, adding it is best to mandate keeping them in lockers or a centralized location.
Many at the federal level believe full phone bans in schools are key to fixing excessive screen time. However, Anderson — who testified before the D.C. State Board of Education about phones’ effects on children — believes officials should be looking at the bigger picture.
“I feel like we’re putting a Band-Aid on the ocean,” she says. “I think people in schools feel they can control the hours of 8:30 to 2:30, but there also needs to be more conversations on what can happen outside of school — and managing that.”
The SEC is working on a proposal to allow public companies to release earnings reports twice a year instead of quarterly, per the WSJ.
Chatter about making the 50-plus-year-old quarterly requirement optional has picked up steam in the past year, as companies lament the cost and burden of preparing for quarterly earnings. The requirement is also thought to be one reason why some companies choose to stay private longer.
Those in favor of change hope that a semiannual requirement will encourage more companies to go public by making it easier to maintain public company status. SEC Chairman Paul Atkins and President Trump have both voiced support for the idea. The Journal reports that the SEC has already begun discussions with exchanges about potential next steps, though any change is still a long way away.
If the SEC releases its proposal — which could come within the next few weeks — it will be subject to a public comment period and then a vote. There is precedent for this rule, notes the Journal. Both the European Union and the U.K. eliminated mandatory quarterly reporting roughly a decade ago in favor of semiannual disclosures, though many companies in both markets still report quarterly by choice.
In the age of ubiquitous artificial intelligence, the technology often appears earmarked chiefly for optimizing the mechanics of work. Buzzwords like speed, automation, efficiency and productivity dominate the conversations shaping the digital era. AI now streamlines operations at unprecedented scale, yet even technology of this magnitude does not satisfy a fundamental human need: connection.
“At a time when loneliness is consistently increasing, there is very little AI can do today to strengthen human connection,” says Freddy del Barrio, founder of Companion AI, who believes the next chapter of artificial intelligence holds the potential to address that gap.
Freddy, who is building systems designed to support emotional well-being and long-term human relationships, believes the next wave of AI isn’t just smarter; it’s more human. That conviction guides his work at Companion AI, which he frames as an attempt to restore a fundamental dimension to digital innovation.
Loneliness continues to be recognized as a public health crisis across the US, with studies linking social isolation to increased risks of depression, anxiety, cognitive decline, and even cardiovascular disease. Freddy believes that seniors living alone, military veterans transitioning back into civilian life, and younger adults navigating digital-first social environments all face rising levels of disconnection.
As he argues that technology has yet to build an emotional infrastructure to support those needs, Companion AI attempts to fill that void through systems designed around empathy, continuity, and memory.
The platform uses advanced AI models but layers them with proprietary infrastructure that tracks emotional patterns and remembers conversations across time. That architecture allows interactions to develop into ongoing relationships rather than isolated exchanges.
“We designed it around memory and long-term understanding,” Freddy says. “It remembers conversations, understands emotional patterns over time, and helps people feel seen rather than processed by software.” That distinction, he believes, shapes the user experience.
In practical terms, the platform can check in with users, recall previous discussions, and respond with awareness of personal history. The aim is to create a sense of continuity that mirrors human interaction. Within that vision, artificial intelligence can evolve into a support system for mental and emotional health.
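Companion AI has not published its architecture, so the sketch below is purely a generic, hypothetical illustration of what the continuity described here can mean mechanically: conversation turns persist per user between sessions, and a later session rebuilds context from that store before responding. Every name and detail in the code is invented.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryStore:
    """Hypothetical per-user conversation memory; not Companion AI's design."""
    turns: dict = field(default_factory=dict)  # user_id -> [(role, text), ...]

    def remember(self, user_id: str, role: str, text: str) -> None:
        self.turns.setdefault(user_id, []).append((role, text))

    def recall(self, user_id: str, last_n: int = 5):
        """Fetch recent turns so a new session can resume with context."""
        return self.turns.get(user_id, [])[-last_n:]

store = MemoryStore()
# Session one: the user shares something personal.
store.remember("u1", "user", "My dog Rex was sick last week.")
store.remember("u1", "assistant", "I'm sorry to hear that. How is Rex doing?")

# Session two, days later: context is rebuilt from memory before replying,
# which is what lets a check-in reference personal history.
context = store.recall("u1")
print(len(context))  # 2
```

A production system would likely retrieve over embeddings and summarize rather than replay raw turns, but the continuity property is the same: state persists between sessions instead of resetting.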
Freddy notes that early pilots are already exploring how that infrastructure can operate in real-world environments. Companion AI recently announced a free pilot program for US veterans, a community that frequently experiences higher rates of social isolation and mental health challenges after service. Freddy highlights that the company is also preparing a deployment with a large US-based organization, which, to him, offers a signal that interest in emotionally aware AI systems is extending beyond experimental use.
“Building for sensitive human interactions requires careful technical decisions,” he says. Companion AI integrates large language models while maintaining its own technology stack and data infrastructure to maintain tighter control over privacy, security, and future products. Freddy explains, “That ensures user data stays secure with us and gives us the flexibility to plug in new features as the technology evolves.”
Initial deployments of the platform focus on senior living communities and assisted living facilities, where loneliness may be particularly acute. The company sees those environments as the starting point, with long-term plans aimed at making emotionally intelligent AI accessible across demographics and income levels.
Expanding on that mission, Companion AI is also exploring pathways for integration with public healthcare frameworks such as Medicare and Medicaid in the United States to further democratize the technology. “We’re a people-first company using AI,” he says. “Human well-being comes first, and the technology supports that mission.”
Artificial intelligence has already transformed productivity, reshaped industries, and accelerated innovation across sectors. The next stage, in Freddy’s view, may prove just as transformative in a different dimension. Systems capable of remembering, responding with empathy, and supporting emotional well-being could redefine how humans experience technology in everyday life.
As he continues strengthening Companion AI’s framework, Freddy del Barrio believes that evolution is already underway.
Apple released the Apple Watch Ultra 2 a while ago, and despite fresh options entering the market since then, it remains a popular value-oriented choice, priced at $499 (was $799). Battery life is one of its standout features: in day-to-day use you can get a solid 36 hours before needing to recharge, enough for a full day of fitness tracking, notifications and sleep monitoring without topping up.
The case is made of durable titanium and features a beautiful Always-On Retina LTPO2 OLED screen covered in sapphire glass, so it will shrug off the bumps, scrapes and scratches of everyday wear. It’s also water resistant to 100 meters, and with 3,000 nits of peak brightness the screen stays readable even in direct sunlight. Apple has also included dedicated controls, such as an Action button that gives you rapid access to custom shortcuts, useful whether you’re on a run or diving, and built-in GPS provides accurate route mapping quickly.
WHY APPLE WATCH ULTRA 2 — Meet the ultimate sports and adventure watch. Advanced features for runners, cyclists, swimmers, hikers, divers, and more…
EXTREMELY RUGGED, INCREDIBLY CAPABLE — 49mm corrosion-resistant titanium case. Sapphire front crystal. Large Digital Crown and customizable Action…
THE FREEDOM OF CELLULAR — With a cellular service plan you can call and text without your iPhone nearby.* Stream your favorite music and podcasts…
For the majority of consumers, the Ultra 2 still includes everything they need for workouts, daily tracking and easy access to messages or directions. In regular use, the difference between the Ultra 2 and Ultra 3 is negligible, and the savings from skipping the latest model make it an easy decision. Software updates continue to roll out to older models alongside new ones, so you stay current with the latest health insights and training modes. One of its most notable qualities is consistency: users who have worn the Ultra 2 for years report no slowdowns or missing functionality; it simply keeps going, which makes it feel like a reliable companion rather than something you have to replace all the time.
OpenCFO co-founders: On the left, Sankalp Singayapally, chief operating officer, and CEO Prudhvi Rao Shedimbi. (OpenCFO Photo)
OpenCFO, a Seattle-based startup tackling the ‘fragmented and manual’ nature of modern finance, has raised $2 million to automate financial functions for mid-sized companies.
“CFOs are being asked to operate with greater speed and accuracy than ever, yet the underlying finance stack remains fragmented and manual,” said Prudhvi Rao Shedimbi, co-founder and CEO of OpenCFO, in a statement.
To address that challenge, OpenCFO unifies accounts payable, accounts receivable and treasury into a single system — connecting directly to banking, payment infrastructure and enterprise resource planning (ERP) platforms. Human oversight is built in through approvals and audit trails.
“By combining agentic AI with modern treasury infrastructure, we’re building a unified platform that automates financial operations while giving finance teams enhanced visibility and control,” said Sankalp Singayapally, co-founder and chief operating officer.
The startup launched in December, now has a team of 15, and is hiring for roles in engineering, customer success and sales.
Before co-founding OpenCFO, Shedimbi served as an engineering manager at StarTree, an analytics tech company, and previously held engineering roles at Confluent and CrowdStrike.
Singayapally was an engagement manager at Keystone AI prior to OpenCFO, and served as an intern at Endiya Partners while earning an MBA at Harvard Business School. He worked as an R&D engineer at Bloomberg earlier in his career.
The funding round was led by Endiya and included participation from angel investors in the U.S. and India.
Startups are hustling to apply agentic AI to the back office to autonomously perform complicated financial operations. Investors have been backing platforms serving big enterprises, while a newer crop of startups like OpenCFO is bringing similar automation to mid-sized companies.
StreamOS, Cartwheel and Fazeshift are the startup’s fintech competitors, according to PitchBook, while the analytics site Bayelsa Watch pointed to Fyorin and Kolleno as comparable early-stage rivals.
When an AI agent loses context mid-task because traditional storage can’t keep pace with inference, it is not a model problem — it is a storage problem. At GTC 2026, Nvidia announced BlueField-4 STX, a modular reference architecture that inserts a dedicated context memory layer between GPUs and traditional storage, claiming 5x the token throughput, 4x the energy efficiency and 2x the data ingestion speed of conventional CPU-based storage.
The bottleneck STX targets is key-value cache data. KV cache is the stored record of what a model has already processed — the intermediate calculations an LLM saves so it does not have to recompute attention across the entire context on every inference step. It is what allows an agent to maintain coherent working memory across sessions, tool calls and reasoning steps. As context windows grow and agents take more steps, that cache grows with them. When it has to traverse a traditional storage path to get back to the GPU, inference slows and GPU utilization drops.
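The mechanics described here can be sketched in a few lines. The snippet below is a toy illustration of KV caching, not Nvidia's or any production inference stack: at each decoding step only the new token is projected into a key and a value, which are appended to a growing cache, and attention for that step reuses every previously cached row instead of recomputing projections for the whole context. All dimensions and weights are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                                  # per-head model dimension (toy size)
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

def attend(q, K, V):
    """Attention for one query over all cached keys/values."""
    scores = q @ K.T / np.sqrt(d)      # similarity to every past position
    w = np.exp(scores - scores.max())  # numerically stable softmax
    w /= w.sum()
    return w @ V

K_cache = np.empty((0, d))             # one row per previously seen token
V_cache = np.empty((0, d))

for step in range(16):                 # each x stands in for a new token
    x = rng.normal(size=d)
    # Only the new token is projected; all past rows are reused from the
    # cache rather than recomputed across the entire context.
    K_cache = np.vstack([K_cache, x @ Wk])
    V_cache = np.vstack([V_cache, x @ Wv])
    out = attend(x @ Wq, K_cache, V_cache)

print(K_cache.shape)                   # (16, 8): grows with context length
```

Because the cache grows linearly with context length and every decoding step must read all of it back, long-context agentic workloads turn KV retrieval into a bandwidth problem, which is the path STX targets.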
STX is not a product Nvidia sells directly. It is a reference architecture the company is distributing to its storage partner ecosystem so vendors can build AI-native infrastructure around it.
STX puts a context memory layer between GPU and disk
The architecture is built around a new storage-optimized BlueField-4 processor that combines Nvidia’s Vera CPU with the ConnectX-9 SuperNIC. It runs on Spectrum-X Ethernet networking and is programmable through Nvidia’s DOCA software platform.
The first rack-scale implementation is the Nvidia CMX context memory storage platform. CMX extends GPU memory with a high-performance context layer designed specifically for storing and retrieving KV cache data generated by large language models during inference. Keeping that cache accessible without forcing a round trip through general-purpose storage is what CMX is designed to do.
“Traditional data centers provide high-capacity, general-purpose storage, but generally lack the responsiveness required for interaction with AI agents that need to work across many steps, tools and different sessions,” Ian Buck, Nvidia’s vice president of hyperscale and high-performance computing, said in a briefing with press and analysts.
In response to a question from VentureBeat, Buck confirmed that STX also ships with a software reference platform alongside the hardware architecture. Nvidia is expanding DOCA to include a new component referred to in the briefing as DOCA Memo.
“Our storage providers can leverage the programmability of the BlueField-4 processor to optimize storage for the agentic AI factory,” Buck said. “In addition to having a reference rack architecture, we’re also providing a reference software platform for them to deliver those innovations and optimizations for their customers.”
Storage partners building on STX get both a hardware reference design and a software reference platform — a programmable foundation for context-optimized storage.
Nvidia’s partner list spans storage incumbents and AI-native cloud providers
Storage providers co-designing STX-based infrastructure include Cloudian, DDN, Dell Technologies, Everpure, Hitachi Vantara, HPE, IBM, MinIO, NetApp, Nutanix, VAST Data and WEKA. Manufacturing partners building STX-based systems include AIC, Supermicro and Quanta Cloud Technology.
On the cloud and AI side, CoreWeave, Crusoe, IREN, Lambda, Mistral AI, Nebius, Oracle Cloud Infrastructure and Vultr have all committed to STX for context memory storage.
That combination of enterprise storage incumbents and AI-native cloud providers is the signal worth watching. Nvidia is not positioning STX as a specialty product for hyperscalers. It is positioning it as the reference standard for anyone building storage infrastructure that has to serve agentic AI workloads — which, within the next two to three years, is likely to include most enterprise AI deployments running multi-step inference at scale.
STX-based platforms will be available from partners in the second half of 2026.
IBM shows what the data layer problem looks like in production
IBM sits on both sides of the STX announcement. It is listed as a storage provider co-designing STX-based infrastructure, and Nvidia separately confirmed that it has selected IBM Storage Scale System 6000 — certified and validated on Nvidia DGX platforms — as the high-performance storage foundation for its own GPU-native analytics infrastructure.
IBM also announced a broader expanded collaboration with Nvidia at GTC, including GPU-accelerated integration between IBM’s watsonx.data Presto SQL engine and Nvidia’s cuDF library. A production proof of concept with Nestlé put numbers on what that acceleration looks like: a data refresh cycle across the company’s Order-to-Cash data mart, covering 186 countries and 44 tables, dropped from 15 minutes to three minutes. IBM reported 83% cost savings and a 30x price-performance improvement.
The Nestlé result is a structured analytics workload. It does not directly demonstrate agentic inference performance. But it makes IBM and Nvidia’s shared argument concrete: the data layer is where enterprise AI performance is currently constrained, and GPU-accelerating it produces material results in production.
Why the storage layer is becoming a first-class infrastructure decision
STX is a signal that the storage layer is becoming a first-class concern in enterprise AI infrastructure planning, not an afterthought to GPU procurement.
General-purpose NAS and object storage were not designed to serve KV cache data at inference latency requirements. STX-based systems from partners including Dell, HPE, NetApp and VAST Data are what Nvidia is putting forward as the practical alternative, with the DOCA software platform providing the programmability layer to tune storage behavior for specific agentic workloads.
The performance claims — 5x token throughput, 4x energy efficiency, 2x data ingestion — are measured against traditional CPU-based storage architectures. Nvidia has not specified the exact baseline configuration for those comparisons. Before those numbers drive infrastructure decisions, the baseline is worth pinning down.
Platforms are expected from partners in the second half of 2026. Given that most major storage vendors are already co-designing on STX, enterprises evaluating storage refreshes for AI infrastructure in the next 12 months should expect STX-based options to be available from their existing vendor relationships.
Uri Gal of the University of Sydney discusses the factors impacting the working landscape and tech-based jobs.
In the past few months, a wave of tech corporations have announced significant staff cuts and attributed them to efficiency gains driven by artificial intelligence (AI).
Companies such as Atlassian, Block and Amazon have announced they would lay off thousands of employees due to increased reliance on AI.
The narrative these companies offer is consistent: AI is making human labour replaceable, and responsible management demands adjustment.
The evidence, however, tells a more nuanced story.
The automation story is partly true
Genuine disruption is visible in specific corners of the labour market, though the scale of that disruption is commonly overstated. Research from Anthropic published earlier this month shows that although many work tasks are susceptible to automation, the vast majority are still performed primarily by humans rather than AI tools.
Moreover, some occupations are more exposed to displacement than others: computer programmers sit at the top of the list, followed by customer service representatives and data entry workers. Yet even within the most exposed occupations, AI use is still limited.
The aggregate economic data reflects this reality. A 2025 Goldman Sachs report estimated that if AI were used across the economy for all the things it could currently do, roughly 2.5pc of US employment would be at risk of job loss.
That’s not a trivial number. However, the report notes that workers in AI-exposed occupations are currently no more likely to lose their jobs, face reduced hours or earn lower wages than anyone else.
The report does note early signs of strain in specific industries. Goldman Sachs identifies sectors where employment growth has slowed that align with AI-related efficiency gains. Examples include marketing consulting, graphic design, office administration and call centres.
In the tech sector, US workers in their 20s in AI-exposed occupations saw unemployment rise by almost 3pc in the first half of 2025. Anthropic’s research also found that job-finding rates (the chance of an unemployed person finding a job in a one-month period) for workers aged 22 to 25 entering AI-exposed occupations have fallen by around 14pc since the launch of ChatGPT in 2022. This is a tentative but telling signal about where the pressure is being felt first.
These are meaningful signals, but they are sector-specific and concentrated – not the evidence of sweeping displacement that corporate announcements often imply. That gap between the evidence and the rhetoric raises an obvious question: what else might be driving these decisions?
What is the motive?
The timing and framing of the layoffs attributed to AI warrant closer examination. Corporate restructuring, over-hiring during the post-pandemic boom as demand for online services soared, and investor pressure to demonstrate improved profit margins are all forces operating at the same time as genuine advances in AI.
While these are not mutually exclusive explanations, they are rarely acknowledged alongside one another in corporate communications.
There is a powerful financial incentive for companies to be seen to be embracing AI aggressively. Since the launch of ChatGPT, AI-related stocks have accounted for about 75pc of S&P 500 returns.
A workforce reduction framed around AI adoption sends a signal to investors that a straightforward cost-cutting announcement does not. A company making AI-related innovations looks a lot better than one sacking staff due to declining revenues or poor strategic decisions.
It is also worth distinguishing between two kinds of workforce reduction. In the first, AI genuinely increases productivity to the point where fewer workers are needed to produce the same output. In the second, staff reductions are not a consequence of AI, but a way to fund it.
Meta illustrates this distinction. The social media giant is reportedly planning to lay off as much as 20pc of its workforce, while simultaneously committing $600bn to build data centres and recruit top AI researchers.
In this case, the workers being let go are not being replaced by AI today; they are subsidising the AI bet their employer is making on the future.
The more plausible future
The big picture is likely one of transformation rather than elimination. According to a recent PwC report, employment is still growing in most industries exposed to AI, although growth tends to be slower than in less exposed sectors.
At the same time, wages in AI-exposed industries are rising roughly twice as fast as in those least touched by the technology. Workers with AI skills command an average wage premium of about 56pc across the industries analysed.
Together, the data points toward a flattening of the traditional workplace pyramid rather than mass displacement. Firms require fewer junior employees for routine analytical and administrative work, while experienced professionals who deploy AI tools effectively become more productive and command greater value.
AI is a consequential technology and will have a significant impact in the long term. What is in doubt is whether the dramatic, AI-attributed workforce reductions announced by individual companies accurately reflect that trajectory, or whether they conflate genuine technological change with decisions that would have been made regardless.
Making this distinction is not merely an academic exercise. It shapes how policymakers, educators and workers themselves understand the nature of the disruption they are navigating.
Uri Gal is a professor of business information systems at the University of Sydney Business School. His research focuses on the organisational and ethical aspects of digital technologies. He is particularly interested in the relationships between people and technology, and in the changes in the nature of work associated with the introduction of algorithmic technologies.
When a water-treatment plant outside Denver discovered an algae problem in its pipes, it did not call an engineering firm. It called the students.
The aquatic robotics team at the Innovation Center at St. Vrain Valley Schools in Longmont, Colorado, sent underwater robots into the facility, collected data, identified the algae species and helped eradicate it. The plant now contracts with the student team for quarterly checkups. Neighboring towns have started calling, too.
This is not a simulation or a classroom exercise conjured up to look like real work. It is real work, and it reflects a broader shift underway in districts. Increasingly, schools are building career learning pathways that connect students directly with professional challenges, industry mentors and, in some cases, a paycheck.
The Case for Real Work
The urgency behind these efforts is hard to ignore. A 2023 review from the American Institutes for Research, drawing on two decades of studies, found that career and technical education participation has statistically significant positive impacts on academic achievement, high school completion, employability skills and college readiness.
The question districts are now wrestling with is not whether to offer career pathways, but whether those pathways lead anywhere real.
Policy leaders are paying attention. The Education Commission of the States has identified building aligned career pathways and removing barriers to economic opportunity as one of its top priorities through 2027.
At St. Vrain, Assistant Superintendent of Innovation Joe McBreen has spent years trying to answer that question through a program known as project teams.
After school each day, roughly 264 students log in at the district’s Innovation Center and begin work as paid district employees, billing hours against accounts for actual clients. Students can join a drone show team, a cybersecurity unit, an AI development group or a dozen other teams, rotating among them as their interests evolve.
“It’s low threat, high reward,” says McBreen. “Students get paid, grow their network, develop soft skills and test drive careers. And if they get into a team and realize it’s not for them, there’s real value in that, too.”
The model relies heavily on industry mentors who bring in real work rather than invented classroom projects. Damon Brown, a senior cybersecurity adviser for the U.S. Department of State focused on Ecuador, mentored seven St. Vrain students on a complex assignment.
He asked them to design the architecture for a cyber intelligence fusion center using open-source tools — work that could have cost hundreds of thousands of dollars if contracted from a professional firm.
“The students knocked it out of the park,” says Brown.
They built the system architecture, wrote user manuals, recommended equipment and conducted a threat analysis of countries surrounding Ecuador. Brown was so impressed he is now hiring six St. Vrain interns.
“This experience binds people together,” he says.
The program also has a way of growing in unexpected directions. After one student’s grandparent was victimized by a cybercrime, the cybersecurity team created an awareness curriculum for senior citizens. They taught five classes to 24 seniors in the first year; the second session was standing room only. Senior facilities now pay the students to come in and teach.
Meanwhile, the drone team flies commercial shows for companies across the country on Friday afternoons, billing clients at rates few drone pilots in the country can match. One former member is now studying aerospace engineering and using money from drone flying to help pay for college.
Taking the Model Out West
St. Vrain’s work has drawn attention from educators around the country, some of whom are adapting pieces of the model to fit their own communities.
Kris Hagel, chief information officer of Peninsula School District in Washington state, visited the Innovation Center and came away convinced he could build something similar.
Two years ago, Peninsula launched a paid drone internship program, starting with seven students and gradually expanding. Students work alongside industry partners while learning how to navigate FAA regulations, program autonomous flight paths and repair drones.
“When you’re willing to look at what’s cutting edge and think innovatively without being constrained by traditional systems, you can create opportunities for kids that transcend what we think of as traditional education,” says Hagel. “This program has become so much more than I thought was possible.”
The district partnered with Firefly Drone Systems, one of the few American drone manufacturers, to train students and help them operate drone shows.
The program also includes multiple roles beyond piloting, including marketing, animation design and equipment maintenance. Hagel envisions a future where students studying business management hire other students to operate the program.
A skilled drone operator who leaves high school with the capital to purchase equipment can enter a six-figure career almost immediately, says Hagel.
Finding the Problem First
Not every district is building toward robotics contracts or drone shows. For Michele Davis, CTE department chair at Metropolitan School District of Steuben County in Indiana, the real-world pathway is entrepreneurship.
Working with the StartED Up Foundation, Davis guides students through a three-year sequence: identifying an actual problem, developing a solution, building out the business model and presenting it to real audiences.
Students take “opportunity walks” around the school, documenting everyday frustrations and brainstorming solutions. They learn how to market their ideas professionally by practicing elevator pitches, presenting case studies to various audiences and explaining their ideas to elementary school students.
“Opportunities are everywhere,” says Davis.
The ideas that emerge can be surprisingly practical. One student designed a reversible outfit to solve a quick-change problem in theater productions. Another class developed a mobile trailer concept that could help unhoused people access hygiene services.
Beyond the business concepts themselves, Davis says the program focuses heavily on communication skills and confidence. “We get students comfortable doing things that are normally uncomfortable,” she says.
A Credential, Not Just a Class
At Suffern Central School District in Rockland County, New York, Superintendent P. Erik Gundersen has taken yet another approach.
Through a partnership with the League of Innovative Schools and curriculum provider Paradigm, the district launched a three-year cybersecurity certification pathway embedded directly into the high school. About 60 students are currently enrolled.
The program was designed to reach students who might not otherwise see themselves in a cybersecurity career. The district actively recruited students from immigrant communities and others who are new to the U.S.
Students work in a “sandbox” environment that simulates real cyber incidents, allowing them to practice identifying threats and responding to attacks.
“The means to send a kid to college is not as great as it was, and a lot of what we’re reading questions the importance of a college education,” says Gundersen.
Those economic realities, he says, are pushing districts to rethink how they prepare students for the workforce.
Career credentials embedded within traditional high schools can open doors for students who may not otherwise have clear pathways into high-skill industries.
Education That Looks Like Life
Across these programs, the details vary widely, but the philosophy is the same: Authentic experience is not a supplement to education. It is education.
As McBreen says, “I encourage districts to expand their vision. Anyone can do this. Start small.”
To help attract talent to high-demand technology fields, the Labor Department last month introduced incentives for apprenticeships, including a US $145 million “pay for performance” grant program. The funding aims to develop registered apprenticeships in fields including artificial intelligence and information technology.
“As a software engineer, this impending shortage concerns me because I believe that the U.S. AI and cybersecurity skills gap would show up first in the early-career pipeline,” Tibrewala says. “Students will be entering the U.S. workforce without enough hands-on experience building secure AI-enabled enterprise and cloud systems, and this gap will persist without practical, mentor-led training before graduation.”
“I was able to establish a partnership with NJIT, recruit speakers, design the event’s agenda, and promote the event to ensure it was aligned with the strategy outlined in the workforce report,” he says. “This effort aligns with broader U.S. workforce development priorities focused on industry-driven skills training in critical technology areas.”
The IEEE Buildathon event was held on 1 November at NJIT’s Newark campus. More than 30 students and early-career engineers heard from 11 speakers. Through interactive workshops, live demonstrations, and networking opportunities, they left with practical, employer-aligned skills and clearer career pathways for the AI era.
Tibrewala chaired the event and also serves as chair of the IEEE Buildathon program.
Session takeaways
Region 1 Director Bala S. Prasanna, a life senior member, gave the keynote address. He emphasized the need for universities, industry practitioners, and IEEE volunteer leaders to collaborate on programs to enhance technical skills.
IEEE Member Kalyani Matey, cochair of the IEEE North Jersey Section’s Young Professionals, conducted a workshop on how to build one’s personal brand and a responsive network. Participants received valuable insights about résumé building, effective communication strategies, and enhancing their visibility and employability.
Tibrewala led the Unlocking AI’s Potential: Solving Big Challenges With Smart Data and IEEE DataPort session. The web-based DataPort platform allows researchers to store, share, access, and manage their research datasets in a single, trusted location. He discussed needed skills including AI literacy, strong data handling and dataset stewardship, and turning data into actionable insights.
Chaitali Ladikkar, a senior software engineer, delivered the insightful Brains Behind the Game seminar. Ladikkar, an IEEE member, highlighted the transformative impact AI is having on gaming and game engine technologies. She explained how AI is reshaping game development. She also covered how machine learning is being used for animation, faster content generation and testing of new titles. Her seminar received enthusiastic feedback from participants.
The Building Better Business Relationships DiSC workshop provided insights into enhancing professional relationships and communication within an engineering workforce. DiSC is a behavioral self-assessment used to understand an individual’s communication style and to adapt to others.
Participant experience and testimonials
The event received high praise from participants for its practical and industry-relevant content, according to Tibrewala.
“This training significantly enhanced my understanding and readiness for industry roles, filling gaps my regular academic coursework did not fully address,” said Humna Sultan, an IEEE student member who is a senior studying computer science at Stevens Institute of Technology, in Hoboken, N.J.
“The Buildathon was structured around real engineering challenge scenarios that deepened my understanding of AI and cloud technologies,” said Carlos Figueredo, an IEEE graduate student member who is studying data science at the University of Michigan, in Ann Arbor. “It boosted my confidence and practical skills essential for the industry.”
Bavani Karthikeyan Janaki said “it was incredible to see how technology and sustainability came together to drive real-world impact, thanks to the dedicated efforts of the organizers including Tibrewala, Matey, and the IEEE North Jersey Young Professionals.” Janaki is pursuing a master’s degree in computer and information science at Long Island University, in New York.
Funding and collaborative efforts
The Buildathon was made possible through grants from the IEEE Young Professionals group and funding from the IEEE North Jersey Section and IEEE Member and Geographic Activities. Their support shows how IEEE’s professional organizations can collaborate to address workforce needs by supporting the delivery of technical sessions that strengthen early-career pipelines.
Future plans and a call to action
Building on the event’s success, Tibrewala and Matey plan to make the IEEE Buildathon an ongoing initiative. They are exploring ways to expand it to additional university campuses and IEEE communities.
Tibrewala says they plan to refine the format based on participant feedback and lessons learned. To support consistent quality, he and Matey say, they are working on a playbook for organizers that will include a repeatable agenda, a workshop template, speaker guidelines, and post-event feedback forms.
The approach depends on continued coordination among host universities, local IEEE sections, and Young Professional volunteers, Tibrewala says.
“Enabling other groups to run similar events,” he says, “can help more students and early-career engineers gain practical exposure to AI, data, cloud, cybersecurity, and other key emerging technologies in a structured setting.
“Efforts like this help translate national workforce priorities into real training that students and early-career engineers can apply immediately to their projects. This also helps close the gap between classroom learning and the realities of building secure, reliable systems in production environments. Over time, this kind of structured, employer-aligned training will help increase confidence, employability, and technical readiness across the country.
“With sustained support, programs like the IEEE Buildathon can become a practical bridge from education to industry in the AI era.”
In the fictional nation of Beryllia, the 2026 World Chalice Games were set to begin as the country faced an unrelenting heat wave. The grid, already strained by the heat, was dealt a further blow when an adversary, Crimsonia, crippled its physical infrastructure with a coordinated campaign of vandalism, drone strikes, and ballistic attacks.
This scenario, inspired by the upcoming 2026 World Cup and the 2028 Olympic Games in Los Angeles, was an exercise in studying how utilities can prevent and mitigate, among other dangers, physical attacks on power grids. Called GridEx, the exercise was hosted by the Electricity Information Sharing and Analysis Center (E-ISAC) from 18 to 20 November 2025. GridEx has been held every two years since 2011.
“We know that threat actors look to exploit certain circumstances,” says Michael Ball, CEO of E-ISAC, which is a program of the North American Electric Reliability Corporation (NERC), about designing the Beryllia scenario. “The Chalice Games became a good example of how we could build a scenario around a threat actor.”
Physical attacks on the grid are rising in the U.S., and GridEx attendance was up in November as utilities grapple with how to prevent and mitigate them. Participation in the exercise was at its highest level since 2019, according to a report released on 2 March. Given the number of organizations present, GridEx estimates that more than 28,000 individual players participated, including utility workers and government partners, an all-time high for the exercise.
Rising Physical Threats to Power Grids
The U.S. and Canadian grids face growing security issues from physical threats, including vandalism, assaults on utility workers, property intrusions, and theft of components such as copper wiring. NERC’s 2025 E-ISAC end-of-year report cites more than 3,500 physical security breaches that calendar year, about 3 percent of which disrupted electricity. That’s up from 2,800 events cited in the 2023 report (3 percent of those also resulted in electricity disruptions). And while a number of recent high-profile attacks have occurred in the U.S., physical attacks on the grid are happening worldwide.
“They’re not uniquely a U.S. thing,” says Danielle Russo, executive director of the Center for Grid Security at Securing America’s Future Energy, a nonpartisan organization focused on advancing national energy security. Russo says that while attacks are common in places like Ukraine, they’re not limited to wartime scenarios. “Other countries that are not experiencing direct conflict are experiencing increasing amounts of physical attacks on their energy infrastructure,” she says. Take Germany, for example: On 3 January, an arson attack by left-wing activists in Berlin caused a five-day blackout affecting 45,000 households. That comes after a suspected arson attack on two pylons in September 2025 left 50,000 Berlin households without power. Some German officials cite domestic extremism and fears of Russian sabotage in recent years as reasons for heightened security concerns over critical infrastructure.
The uptick in attacks on the U.S. grid has been anchored by a number of incidents in recent years. In December 2025, an engineer in San Jose, California, was sentenced to 10 years in prison for bombing electric transformers in 2022 and 2023. A Tennessee man was arrested in November 2024 for attempting to attack a Nashville substation using a drone armed with explosives. And in 2023, a neo-Nazi leader was one of two people arrested in a plot to attack five substations around Baltimore with firearms, part of a broader trend of white supremacist groups planning attacks on the U.S. energy sector.
“Since [E-ISAC] started publishing data back in 2016, we’ve seen a large and consistent increase in the number of reported physical security incidents per year,” says Michael Coe, the vice president of physical and cyber security programs at the American Public Power Association, a trade group that works with E-ISAC to plan GridEx. While not all data is publicly available, Coe says there’s been a “tenfold” increase over the past decade in the number of reported physical attacks on the grid.
Drone Attacks: A Growing Security Challenge
During the fictional World Chalice Games scenario, drone attacks destroyed Beryllia’s substation equipment, highlighting a threat that’s gained traction as more drones enter the airspace.
“The question we get all the time is, how do you tell if it’s a bad actor, or if it’s a 12-year-old kid that got the drone for their birthday?” says Erika Willis, the program manager for the substations team at the Electric Power Research Institute (EPRI).
One strategy for tracking potential threats such as drones, and alerting utilities to them, is called sensor fusion. The system includes a pan-tilt-zoom camera capable of 360-degree motion mounted on a tripod or pole alongside four radars. The radars combine with the camera in a dual system that can track drones even when they are obstructed from view, says Willis. For instance, if a nearby drone flies behind a tree, hidden from the camera, the radars will still pick it up. The technology is currently being tested at EPRI’s labs in Charlotte, North Carolina, and Lenox, Massachusetts.
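The radar-plus-camera pairing is a textbook sensor-fusion pattern: each sensor reports detections independently, and a fusion layer treats nearby detections as one target, so a track survives as long as any sensor still sees it. A minimal sketch of the idea follows; the names, coordinates, and distance threshold are illustrative, not details of EPRI's system:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str      # "camera" or "radar"
    x: float         # position estimate (metres, illustrative)
    y: float

def fuse(detections, max_gap=25.0):
    """Group detections from different sensors into fused tracks.

    Two detections are treated as the same target if they fall
    within max_gap metres of each other. A track survives as long
    as at least one sensor reports it, so a drone hidden from the
    camera is still tracked by radar.
    """
    tracks = []
    for d in detections:
        for t in tracks:
            if abs(t["x"] - d.x) < max_gap and abs(t["y"] - d.y) < max_gap:
                t["sensors"].add(d.sensor)
                break
        else:
            tracks.append({"x": d.x, "y": d.y, "sensors": {d.sensor}})
    return tracks

# A drone seen by both sensors, then obscured from the camera:
frame1 = [Detection("camera", 100, 40), Detection("radar", 102, 41)]
frame2 = [Detection("radar", 110, 45)]  # behind a tree; camera drops out

print(fuse(frame1))  # one fused track, reported by both sensors
print(fuse(frame2))  # radar alone keeps the track alive
```

The point of the pattern is that no single sensor is a point of failure: the fused track persists across the camera's blind spots.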
EPRI is also exploring how robotics and AI can improve security systems, Willis says. One approach involves integrating AI analysis into robotic technology already surveilling substation perimeters. Using AI can improve detection of break-ins and damage to fencing around substations, Willis says. “As opposed to a human having to go through 200 images of a fence, you can have the AI overlays do some of those algorithms. … If the robot has done the inspection of the substation 100 times, it can then relay to you that there’s an anomaly,” Willis says.
Prisma Photonics deploys fiber-sensing technology that uses reflected optical signals to detect perturbations from vehicles and other sources near underground fiber cable.
Already, a number of utilities in the U.S. are using AI integrations in their security and monitoring processes. That’s thanks in part to Tel Aviv, Israel-based Prisma Photonics, a software company that launched in 2017 and has since deployed its fiber-sensing technology across thousands of miles of transmission infrastructure in the U.S., Canada, Europe, and Israel. A file-cabinet-sized unit plugs into a substation and sends light pulses down existing fiber optic cables 30 miles in each direction. As the pulses travel, a tiny fraction of the light is reflected back to the substation unit, and an AI model classifies events based on the patterns that perturbations near the cable leave in the optical signal.
“If we identify an event that we don’t have a classification for, and we get a feedback from a customer saying, ‘oh, this was a car crash,’ then we can classify that in the model to say this is actually what happened,” says Tiffany Menhorn, Prisma Photonics’ vice president of North America.
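The classify-then-relabel loop Menhorn describes can be sketched with a toy nearest-prototype classifier: each event is reduced to a few signal features, matched to the closest known class, and customer-confirmed events become new classes. Everything here, the features, values, and class names alike, is illustrative, not Prisma Photonics’ actual model:

```python
import math

# Each event is summarised by two toy features extracted from the
# reflected optical signal: peak amplitude and dominant frequency (Hz).
# Prototypes map a known event class to its typical feature values.
prototypes = {
    "vehicle":   (0.8, 12.0),  # strong, low-frequency rumble
    "digging":   (0.5, 45.0),  # periodic mid-frequency impacts
    "footsteps": (0.2, 3.0),   # faint, slow perturbations
}

def classify(amplitude, frequency):
    """Label a perturbation by its nearest known prototype."""
    def dist(name):
        pa, pf = prototypes[name]
        return math.hypot(amplitude - pa, frequency - pf)
    return min(prototypes, key=dist)

def add_label(name, amplitude, frequency):
    """Feedback loop: a customer confirms an unknown event
    ("oh, this was a car crash"), so it is stored as a new class."""
    prototypes[name] = (amplitude, frequency)

print(classify(0.75, 10.0))       # closest to the vehicle prototype
add_label("car_crash", 0.95, 60.0)
print(classify(0.9, 58.0))        # now recognised as the new class
```

A production system would learn statistical patterns from raw waveforms rather than hand-picked features, but the structure is the same: classify what is known, and fold confirmed corrections back into the model.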
As preparations get underway for the ninth GridEx in 2027, Ball says participation in the exercises alone isn’t enough to bolster grid security. Instead, he wants utilities to take what they learn from the training and apply it in their own operations. “It’s the action of doing it, versus our statistic of saying, ‘here’s what our growth was.’ That growth should relate to the readiness and capability of the industry.”
Nvidia on Monday unveiled a deskside supercomputer powerful enough to run AI models with up to one trillion parameters — roughly the scale of GPT-4 — without touching the cloud. The machine, called the DGX Station, packs 748 gigabytes of coherent memory and 20 petaflops of compute into a box that sits next to a monitor, and it may be the most significant personal computing product since the original Mac Pro convinced creative professionals to abandon workstations.
The announcement, made at the company’s annual GTC conference in San Jose, lands at a moment when the AI industry is grappling with a fundamental tension: the most powerful models in the world require enormous data center infrastructure, but the developers and enterprises building on those models increasingly want to keep their data, their agents, and their intellectual property local. The DGX Station is Nvidia’s answer — a six-figure machine that collapses the distance between AI’s frontier and a single engineer’s desk.
What 20 petaflops on your desktop actually means
The DGX Station is built around the new GB300 Grace Blackwell Ultra Desktop Superchip, which fuses a 72-core Grace CPU and a Blackwell Ultra GPU through Nvidia’s NVLink-C2C interconnect. That link provides 1.8 terabytes per second of coherent bandwidth between the two processors — seven times the speed of PCIe Gen 6 — which means the CPU and GPU share a single, seamless pool of memory without the bottlenecks that typically cripple desktop AI work.
Twenty petaflops — 20 quadrillion operations per second — would have ranked this machine among the world’s top supercomputers less than a decade ago. The Summit system at Oak Ridge National Laboratory, which held the global No. 1 spot in 2018, delivered roughly ten times that performance but occupied a room the size of two basketball courts. Nvidia is packaging a meaningful fraction of that capability into something that plugs into a wall outlet.
The 748 GB of unified memory is arguably the more important number. Trillion-parameter models are enormous neural networks that must be loaded entirely into memory to run. Without sufficient memory, no amount of processing speed matters — the model simply won’t fit. The DGX Station clears that bar, and it does so with a coherent architecture that eliminates the latency penalties of shuttling data between CPU and GPU memory pools.
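The arithmetic behind that bar is straightforward: weight memory is parameter count times bytes per weight. Nvidia has not specified the precision it assumes, so the 4-bit figure below is an assumption, but at 4-bit quantization a trillion-parameter model's weights need about 500 GB, leaving headroom within 748 GB for activations and caches; at 16-bit precision the same weights would need roughly 2 TB and would not fit.

```python
def weight_gigabytes(params, bits_per_weight):
    """Memory needed just to hold a model's weights, in GB."""
    return params * bits_per_weight / 8 / 1e9

params = 1e12  # one trillion parameters

# 4-bit quantized weights fit comfortably within 748 GB...
print(weight_gigabytes(params, 4))   # 500.0

# ...but the same model at 16-bit precision would not.
print(weight_gigabytes(params, 16))  # 2000.0
```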
Always-on agents need always-on hardware
Nvidia designed the DGX Station explicitly for what it sees as the next phase of AI: autonomous agents that reason, plan, write code, and execute tasks continuously — not just systems that respond to prompts. Every major announcement at GTC 2026 reinforced this “agentic AI” thesis, and the DGX Station is where those agents are meant to be built and run.
The key pairing is NemoClaw, a new open-source stack that Nvidia also announced Monday. NemoClaw bundles Nvidia’s Nemotron open models with OpenShell, a secure runtime that enforces policy-based security, network, and privacy guardrails for autonomous agents. A single command installs the entire stack. Jensen Huang, Nvidia’s founder and CEO, framed the combination in unmistakable terms, calling OpenClaw — the broader agent platform NemoClaw supports — “the operating system for personal AI” and comparing it directly to Mac and Windows.
The argument is straightforward: cloud instances spin up and down on demand, but always-on agents need persistent compute, persistent memory, and persistent state. A machine under your desk, running 24/7 with local data and local models inside a security sandbox, is architecturally better suited to that workload than a rented GPU in someone else’s data center. The DGX Station can operate as a personal supercomputer for a solo developer or as a shared compute node for teams, and it supports air-gapped configurations for classified or regulated environments where data can never leave the building.
From desk prototype to data center production with zero rewrites
One of the cleverest aspects of the DGX Station’s design is what Nvidia calls architectural continuity. Applications built on the machine migrate seamlessly to the company’s GB300 NVL72 data center systems — 72-GPU racks designed for hyperscale AI factories — without rearchitecting a single line of code. Nvidia is selling a vertically integrated pipeline: prototype at your desk, then scale to the cloud when you’re ready.
This matters because the biggest hidden cost in AI development today isn’t compute — it’s the engineering time lost to rewriting code for different hardware configurations. A model fine-tuned on a local GPU cluster often requires substantial rework to deploy on cloud infrastructure with different memory architectures, networking stacks, and software dependencies. The DGX Station eliminates that friction by running the same NVIDIA AI software stack that powers every tier of Nvidia’s infrastructure, from the DGX Spark to the Vera Rubin NVL72.
Nvidia also expanded the DGX Spark, the Station’s smaller sibling, with new clustering support. Up to four Spark units can now operate as a unified system with near-linear performance scaling — a “desktop data center” that fits on a conference table without rack infrastructure or an IT ticket. For teams that need to fine-tune mid-size models or develop smaller-scale agents, clustered Sparks offer a credible departmental AI platform at a fraction of the Station’s cost.
The early buyers reveal where the market is heading
The initial customer roster for DGX Station maps the industries where AI is transitioning fastest from experiment to daily operating tool. Snowflake is using the system to locally test its open-source Arctic training framework. EPRI, the Electric Power Research Institute, is advancing AI-powered weather forecasting to strengthen electrical grid reliability. Medivis is integrating vision language models into surgical workflows. Microsoft Research and Cornell have deployed the systems for hands-on AI training at scale.
Systems are available to order now and will ship in the coming months from ASUS, Dell Technologies, GIGABYTE, MSI, and Supermicro, with HP joining later in the year. Nvidia hasn’t disclosed pricing, but the GB300 components and the company’s historical DGX pricing suggest a six-figure investment — expensive by workstation standards, but remarkably cheap compared to the cloud GPU costs of running trillion-parameter inference at scale.
The list of supported models underscores how open the AI ecosystem has become: developers can run and fine-tune OpenAI’s gpt-oss-120b, Google Gemma 3, Qwen3, Mistral Large 3, DeepSeek V3.2, and Nvidia’s own Nemotron models, among others. The DGX Station is model-agnostic by design — a hardware Switzerland in an industry where model allegiances shift quarterly.
Nvidia’s real strategy: own every layer of the AI stack, from orbit to office
The DGX Station didn’t arrive in a vacuum. It was one piece of a sweeping set of GTC 2026 announcements that collectively map Nvidia’s ambition to supply AI compute at literally every physical scale.
At the top, Nvidia unveiled the Vera Rubin platform — seven new chips in full production — anchored by the Vera Rubin NVL72 rack, which integrates 72 next-generation Rubin GPUs and claims up to 10x higher inference throughput per watt compared to the current Blackwell generation. The Vera CPU, with 88 custom Olympus cores, targets the orchestration layer that agentic workloads increasingly demand. At the far frontier, Nvidia announced the Vera Rubin Space Module for orbital data centers, delivering 25x more AI compute for space-based inference than the H100.
Between orbit and office, Nvidia revealed partnerships spanning Adobe for creative AI, automakers like BYD and Nissan for Level 4 autonomous vehicles, a coalition with Mistral AI and seven other labs to build open frontier models, and Dynamo 1.0, an open-source inference operating system already adopted by AWS, Azure, Google Cloud, and a roster of AI-native companies including Cursor and Perplexity.
The pattern is unmistakable: Nvidia wants to be the computing platform — hardware, software, and models — for every AI workload, everywhere. The DGX Station is the piece that fills the gap between the cloud and the individual.
The cloud isn’t dead, but its monopoly on serious AI work is ending
For the past several years, the default assumption in AI has been that serious work requires cloud GPU instances — renting Nvidia hardware from AWS, Azure, or Google Cloud. That model works, but it carries real costs: data egress fees, latency, security exposure from sending proprietary data to third-party infrastructure, and the fundamental loss of control inherent in renting someone else’s computer.
The DGX Station doesn’t kill the cloud — Nvidia’s data center business dwarfs its desktop revenue and is accelerating. But it creates a credible local alternative for an important and growing category of workloads. Training a frontier model from scratch still demands thousands of GPUs in a warehouse. Fine-tuning a trillion-parameter open model on proprietary data? Running inference for an internal agent that processes sensitive documents? Prototyping before committing to cloud spend? A machine under your desk starts to look like the rational choice.
This is the strategic elegance of the product: it expands Nvidia’s addressable market into personal AI infrastructure while reinforcing the cloud business, because everything built locally is designed to scale up to Nvidia’s data center platforms. It’s not cloud versus desk. It’s cloud and desk, and Nvidia supplies both.
A supercomputer on every desk — and an agent that never sleeps on top of it
The PC revolution’s defining slogan was “a computer on every desk and in every home.” Four decades later, Nvidia is updating the premise with an uncomfortable escalation. The DGX Station puts genuine supercomputing power — the kind once reserved for national laboratories — beside a keyboard, and NemoClaw puts an autonomous AI agent on top of it that runs around the clock, writing code, calling tools, and completing tasks while its owner sleeps.
Whether that future is exhilarating or unsettling depends on your vantage point. But one thing is no longer debatable: the infrastructure required to build, run, and own frontier AI just moved from the server room to the desk drawer. And the company that sells nearly every serious AI chip on the planet just made sure it sells the desk drawer, too.