Tech

Nvidia’s DGX Station is a desktop supercomputer that runs trillion-parameter AI models without the cloud

Nvidia on Monday unveiled a deskside supercomputer powerful enough to run AI models with up to one trillion parameters — roughly the scale of GPT-4 — without touching the cloud. The machine, called the DGX Station, packs 748 gigabytes of coherent memory and 20 petaflops of compute into a box that sits next to a monitor, and it may be the most significant personal computing product since the original Mac Pro convinced creative professionals to abandon workstations.

The announcement, made at the company’s annual GTC conference in San Jose, lands at a moment when the AI industry is grappling with a fundamental tension: the most powerful models in the world require enormous data center infrastructure, but the developers and enterprises building on those models increasingly want to keep their data, their agents, and their intellectual property local. The DGX Station is Nvidia’s answer — a six-figure machine that collapses the distance between AI’s frontier and a single engineer’s desk.

What 20 petaflops on your desktop actually means

The DGX Station is built around the new GB300 Grace Blackwell Ultra Desktop Superchip, which fuses a 72-core Grace CPU and a Blackwell Ultra GPU through Nvidia’s NVLink-C2C interconnect. That link provides 1.8 terabytes per second of coherent bandwidth between the two processors — seven times the speed of PCIe Gen 6 — which means the CPU and GPU share a single, seamless pool of memory without the bottlenecks that typically cripple desktop AI work.

Twenty petaflops — 20 quadrillion operations per second — would have ranked this machine among the world’s top supercomputers less than a decade ago. The Summit system at Oak Ridge National Laboratory, which held the global No. 1 spot in 2018, delivered roughly ten times that performance but occupied a room the size of two basketball courts. Nvidia is packaging a meaningful fraction of that capability into something that plugs into a wall outlet.

The 748 GB of unified memory is arguably the more important number. Trillion-parameter models are enormous neural networks that must be loaded entirely into memory to run. Without sufficient memory, no amount of processing speed matters — the model simply won’t fit. The DGX Station clears that bar, and it does so with a coherent architecture that eliminates the latency penalties of shuttling data between CPU and GPU memory pools.
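The arithmetic behind that claim is worth making explicit. The following is a rough back-of-the-envelope sizing of model weights at common inference precisions (illustrative figures of my framing, not Nvidia's published sizing; it ignores the KV cache, activations, and runtime overhead that also consume memory):

```python
# Rough weight-memory footprint of a model at different precisions,
# ignoring KV cache, activations, and runtime overhead (illustrative only).
BYTES_PER_PARAM = {"fp16": 2.0, "fp8": 1.0, "fp4": 0.5}

def weights_gb(num_params: float, precision: str) -> float:
    """Approximate size of the model weights in gigabytes."""
    return num_params * BYTES_PER_PARAM[precision] / 1e9

ONE_TRILLION = 1e12
for precision in ("fp16", "fp8", "fp4"):
    size = weights_gb(ONE_TRILLION, precision)
    verdict = "fits" if size <= 748 else "does not fit"
    print(f"{precision}: {size:,.0f} GB ({verdict} in 748 GB)")
```

By this rough count, a trillion-parameter model squeezes under the 748 GB ceiling only when quantized to around four bits per weight, which leaves some headroom for the cache and activations inference also needs.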

Always-on agents need always-on hardware

Nvidia designed the DGX Station explicitly for what it sees as the next phase of AI: autonomous agents that reason, plan, write code, and execute tasks continuously — not just systems that respond to prompts. Every major announcement at GTC 2026 reinforced this “agentic AI” thesis, and the DGX Station is where those agents are meant to be built and run.

The key pairing is NemoClaw, a new open-source stack that Nvidia also announced Monday. NemoClaw bundles Nvidia’s Nemotron open models with OpenShell, a secure runtime that enforces policy-based security, network, and privacy guardrails for autonomous agents. A single command installs the entire stack. Jensen Huang, Nvidia’s founder and CEO, framed the combination in unmistakable terms, calling OpenClaw — the broader agent platform NemoClaw supports — “the operating system for personal AI” and comparing it directly to Mac and Windows.

The argument is straightforward: cloud instances spin up and down on demand, but always-on agents need persistent compute, persistent memory, and persistent state. A machine under your desk, running 24/7 with local data and local models inside a security sandbox, is architecturally better suited to that workload than a rented GPU in someone else’s data center. The DGX Station can operate as a personal supercomputer for a solo developer or as a shared compute node for teams, and it supports air-gapped configurations for classified or regulated environments where data can never leave the building.

From desk prototype to data center production in zero rewrites

One of the cleverest aspects of the DGX Station’s design is what Nvidia calls architectural continuity. Applications built on the machine migrate seamlessly to the company’s GB300 NVL72 data center systems — 72-GPU racks designed for hyperscale AI factories — without rearchitecting a single line of code. Nvidia is selling a vertically integrated pipeline: prototype at your desk, then scale to the cloud when you’re ready.

This matters because the biggest hidden cost in AI development today isn’t compute — it’s the engineering time lost to rewriting code for different hardware configurations. A model fine-tuned on a local GPU cluster often requires substantial rework to deploy on cloud infrastructure with different memory architectures, networking stacks, and software dependencies. The DGX Station eliminates that friction by running the same NVIDIA AI software stack that powers every tier of Nvidia’s infrastructure, from the DGX Spark to the Vera Rubin NVL72.

Nvidia also expanded the DGX Spark, the Station’s smaller sibling, with new clustering support. Up to four Spark units can now operate as a unified system with near-linear performance scaling — a “desktop data center” that fits on a conference table without rack infrastructure or an IT ticket. For teams that need to fine-tune mid-size models or develop smaller-scale agents, clustered Sparks offer a credible departmental AI platform at a fraction of the Station’s cost.

The early buyers reveal where the market is heading

The initial customer roster for DGX Station maps the industries where AI is transitioning fastest from experiment to daily operating tool. Snowflake is using the system to locally test its open-source Arctic training framework. EPRI, the Electric Power Research Institute, is advancing AI-powered weather forecasting to strengthen electrical grid reliability. Medivis is integrating vision language models into surgical workflows. Microsoft Research and Cornell have deployed the systems for hands-on AI training at scale.

Systems are available to order now and will ship in the coming months from ASUS, Dell Technologies, GIGABYTE, MSI, and Supermicro, with HP joining later in the year. Nvidia hasn’t disclosed pricing, but the GB300 components and the company’s historical DGX pricing suggest a six-figure investment — expensive by workstation standards, but remarkably cheap compared to the cloud GPU costs of running trillion-parameter inference at scale.

The list of supported models underscores how open the AI ecosystem has become: developers can run and fine-tune OpenAI’s gpt-oss-120b, Google Gemma 3, Qwen3, Mistral Large 3, DeepSeek V3.2, and Nvidia’s own Nemotron models, among others. The DGX Station is model-agnostic by design — a hardware Switzerland in an industry where model allegiances shift quarterly.

Nvidia’s real strategy: own every layer of the AI stack, from orbit to office

The DGX Station didn’t arrive in a vacuum. It was one piece of a sweeping set of GTC 2026 announcements that collectively map Nvidia’s ambition to supply AI compute at literally every physical scale.

At the top, Nvidia unveiled the Vera Rubin platform — seven new chips in full production — anchored by the Vera Rubin NVL72 rack, which integrates 72 next-generation Rubin GPUs and claims up to 10x higher inference throughput per watt compared to the current Blackwell generation. The Vera CPU, with 88 custom Olympus cores, targets the orchestration layer that agentic workloads increasingly demand. At the far frontier, Nvidia announced the Vera Rubin Space Module for orbital data centers, delivering 25x more AI compute for space-based inference than the H100.

Between orbit and office, Nvidia revealed partnerships spanning Adobe for creative AI, automakers like BYD and Nissan for Level 4 autonomous vehicles, a coalition with Mistral AI and seven other labs to build open frontier models, and Dynamo 1.0, an open-source inference operating system already adopted by AWS, Azure, Google Cloud, and a roster of AI-native companies including Cursor and Perplexity.

The pattern is unmistakable: Nvidia wants to be the computing platform — hardware, software, and models — for every AI workload, everywhere. The DGX Station is the piece that fills the gap between the cloud and the individual.

The cloud isn’t dead, but its monopoly on serious AI work is ending

For the past several years, the default assumption in AI has been that serious work requires cloud GPU instances — renting Nvidia hardware from AWS, Azure, or Google Cloud. That model works, but it carries real costs: data egress fees, latency, security exposure from sending proprietary data to third-party infrastructure, and the fundamental loss of control inherent in renting someone else’s computer.

The DGX Station doesn’t kill the cloud — Nvidia’s data center business dwarfs its desktop revenue and is accelerating. But it creates a credible local alternative for an important and growing category of workloads. Training a frontier model from scratch still demands thousands of GPUs in a warehouse. Fine-tuning a trillion-parameter open model on proprietary data? Running inference for an internal agent that processes sensitive documents? Prototyping before committing to cloud spend? A machine under your desk starts to look like the rational choice.

This is the strategic elegance of the product: it expands Nvidia’s addressable market into personal AI infrastructure while reinforcing the cloud business, because everything built locally is designed to scale up to Nvidia’s data center platforms. It’s not cloud versus desk. It’s cloud and desk, and Nvidia supplies both.

A supercomputer on every desk — and an agent that never sleeps on top of it

The PC revolution’s defining slogan was “a computer on every desk and in every home.” Four decades later, Nvidia is updating the premise with an uncomfortable escalation. The DGX Station puts genuine supercomputing power — the kind that ran national laboratories — beside a keyboard, and NemoClaw puts an autonomous AI agent on top of it that runs around the clock, writing code, calling tools, and completing tasks while its owner sleeps.

Whether that future is exhilarating or unsettling depends on your vantage point. But one thing is no longer debatable: the infrastructure required to build, run, and own frontier AI just moved from the server room to the desk drawer. And the company that sells nearly every serious AI chip on the planet just made sure it sells the desk drawer, too.

Nvidia BlueField-4 STX adds a context memory layer to storage to close the agentic AI throughput gap

When an AI agent loses context mid-task because traditional storage can’t keep pace with inference, it is not a model problem — it is a storage problem. At GTC 2026, Nvidia announced BlueField-4 STX, a modular reference architecture that inserts a dedicated context memory layer between GPUs and traditional storage, claiming 5x the token throughput, 4x the energy efficiency and 2x the data ingestion speed of conventional CPU-based storage.

The bottleneck STX targets is key-value cache data. KV cache is the stored record of what a model has already processed — the intermediate calculations an LLM saves so it does not have to recompute attention across the entire context on every inference step. It is what allows an agent to maintain coherent working memory across sessions, tool calls and reasoning steps. As context windows grow and agents take more steps, that cache grows with them. When it has to traverse a traditional storage path to get back to the GPU, inference slows and GPU utilization drops.
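The mechanism is easy to sketch. Below is a toy single-head KV cache in NumPy (hypothetical shapes of my choosing; production inference engines shard the cache across layers, heads and devices, which is precisely the data STX is meant to move):

```python
import numpy as np

# Toy single-head KV cache: store each step's key/value so earlier
# tokens are never re-encoded on later decode steps.
class KVCache:
    def __init__(self, head_dim: int):
        self.keys = np.empty((0, head_dim))
        self.values = np.empty((0, head_dim))

    def append(self, k: np.ndarray, v: np.ndarray) -> None:
        # Cache this step's key and value vectors.
        self.keys = np.vstack([self.keys, k])
        self.values = np.vstack([self.values, v])

    def attend(self, q: np.ndarray) -> np.ndarray:
        # Attention over the full cached history: O(cached_length) work
        # per step instead of reprocessing the entire context.
        scores = self.keys @ q / np.sqrt(q.shape[0])
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        return weights @ self.values

cache = KVCache(head_dim=4)
rng = np.random.default_rng(0)
for _ in range(3):  # three decode steps
    k, v, q = rng.normal(size=(3, 4))
    cache.append(k, v)
    out = cache.attend(q)
print(cache.keys.shape)  # (3, 4): the cache gains a row per generated token
```

Each decode step reads the entire accumulated cache, and the cache grows with every token, tool call and session; that steadily growing read traffic is what a conventional storage path struggles to serve at inference latency.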

STX is not a product Nvidia sells directly. It is a reference architecture the company is distributing to its storage partner ecosystem so vendors can build AI-native infrastructure around it.

STX puts a context memory layer between GPU and disk

The architecture is built around a new storage-optimized BlueField-4 processor that combines Nvidia’s Vera CPU with the ConnectX-9 SuperNIC. It runs on Spectrum-X Ethernet networking and is programmable through Nvidia’s DOCA software platform.

The first rack-scale implementation is the Nvidia CMX context memory storage platform. CMX extends GPU memory with a high-performance context layer designed specifically for storing and retrieving KV cache data generated by large language models during inference. Keeping that cache accessible without forcing a round trip through general-purpose storage is what CMX is designed to do.

“Traditional data centers provide high-capacity, general-purpose storage, but generally lack the responsiveness required for interaction with AI agents that need to work across many steps, tools and different sessions,” said Ian Buck, Nvidia’s vice president of hyperscale and high-performance computing, in a briefing with press and analysts.

In response to a question from VentureBeat, Buck confirmed that STX also ships with a software reference platform alongside the hardware architecture. Nvidia is expanding DOCA to include a new component referred to in the briefing as DOCA Memo. 

“Our storage providers can leverage the programmability of the BlueField-4 processor to optimize storage for the agentic AI factory,” Buck said. “In addition to having a reference rack architecture, we’re also providing a reference software platform for them to deliver those innovations and optimizations for their customers.”

Storage partners building on STX get both a hardware reference design and a software reference platform — a programmable foundation for context-optimized storage.

Nvidia’s partner list spans storage incumbents and AI-native cloud providers

Storage providers co-designing STX-based infrastructure include Cloudian, DDN, Dell Technologies, Everpure, Hitachi Vantara, HPE, IBM, MinIO, NetApp, Nutanix, VAST Data and WEKA. Manufacturing partners building STX-based systems include AIC, Supermicro and Quanta Cloud Technology.

On the cloud and AI side, CoreWeave, Crusoe, IREN, Lambda, Mistral AI, Nebius, Oracle Cloud Infrastructure and Vultr have all committed to STX for context memory storage.

That combination of enterprise storage incumbents and AI-native cloud providers is the signal worth watching. Nvidia is not positioning STX as a specialty product for hyperscalers. It is positioning it as the reference standard for anyone building storage infrastructure that has to serve agentic AI workloads — which, within the next two to three years, is likely to include most enterprise AI deployments running multi-step inference at scale.

STX-based platforms will be available from partners in the second half of 2026.

IBM shows what the data layer problem looks like in production

IBM sits on both sides of the STX announcement. It is listed as a storage provider co-designing STX-based infrastructure, and Nvidia separately confirmed that it has selected IBM Storage Scale System 6000 — certified and validated on Nvidia DGX platforms — as the high-performance storage foundation for its own GPU-native analytics infrastructure.

IBM also announced a broader expanded collaboration with Nvidia at GTC, including GPU-accelerated integration between IBM’s watsonx.data Presto SQL engine and Nvidia’s cuDF library. A production proof of concept with Nestlé put numbers on what that acceleration looks like: a data refresh cycle across the company’s Order-to-Cash data mart, covering 186 countries and 44 tables, dropped from 15 minutes to three minutes. IBM reported 83% cost savings and a 30x price-performance improvement.

The Nestlé result is a structured analytics workload. It does not directly demonstrate agentic inference performance. But it makes IBM and Nvidia’s shared argument concrete: the data layer is where enterprise AI performance is currently constrained, and GPU-accelerating it produces material results in production.

Why the storage layer is becoming a first-class infrastructure decision

STX is a signal that the storage layer is becoming a first-class concern in enterprise AI infrastructure planning, not an afterthought to GPU procurement.

General-purpose NAS and object storage were not designed to serve KV cache data at inference latency requirements. STX-based systems from partners including Dell, HPE, NetApp and VAST Data are what Nvidia is putting forward as the practical alternative, with the DOCA software platform providing the programmability layer to tune storage behavior for specific agentic workloads.

The performance claims — 5x token throughput, 4x energy efficiency, 2x data ingestion — are measured against traditional CPU-based storage architectures. Nvidia has not specified the exact baseline configuration for those comparisons. Before those numbers drive infrastructure decisions, the baseline is worth pinning down.

Platforms are expected from partners in the second half of 2026. Given that most major storage vendors are already co-designing on STX, enterprises evaluating storage refreshes for AI infrastructure in the next 12 months should expect STX-based options to be available from their existing vendor relationships.

Tech companies are blaming layoffs on AI, but what’s really going on?

Uri Gal of the University of Sydney discusses the factors impacting the working landscape and tech-based jobs.

In the past few months, a wave of tech corporations have announced significant staff cuts and attributed them to efficiency gains driven by artificial intelligence (AI).

Companies such as Atlassian, Block and Amazon have announced they would lay off thousands of employees due to increased reliance on AI.

The narrative these companies offer is consistent: AI is making human labour replaceable, and responsible management demands adjustment.

The evidence, however, tells a more nuanced story.

The automation story is partly true

Genuine disruption is visible in specific corners of the labour market, though the scale of that disruption is commonly overstated. Research from Anthropic published earlier this month shows that although many work tasks are susceptible to automation, the vast majority are still performed primarily by humans rather than AI tools.

Moreover, some occupations are more exposed to displacement than others: computer programmers sit at the top of the list, followed by customer service representatives and data entry workers. Yet even within the most exposed occupations, AI use is still limited.

The aggregate economic data reflects this reality. A 2025 Goldman Sachs report estimated that if AI were used across the economy for all the things it could currently do, roughly 2.5pc of US employment would be at risk of job loss.

That’s not a trivial number. However, the report notes that workers in AI-exposed occupations are currently no more likely to lose their jobs, face reduced hours or earn lower wages than anyone else.

The report does note early signs of strain in specific industries. Goldman Sachs identifies sectors where employment growth has slowed that align with AI-related efficiency gains. Examples include marketing consulting, graphic design, office administration and call centres.

In the tech sector, US workers in their 20s in AI-exposed occupations saw unemployment rise by almost 3pc in the first half of 2025. Anthropic’s research also found that job-finding rates (the chance of an unemployed person finding a job in a one-month period) for workers aged 22 to 25 entering AI-exposed occupations have fallen by around 14pc since the launch of ChatGPT in 2022. This is a tentative but telling signal about where the pressure is being felt first.

These are meaningful signals, but they are sector-specific and concentrated – not the evidence of sweeping displacement that corporate announcements often imply. That gap between the evidence and the rhetoric raises an obvious question: what else might be driving these decisions?

What is the motive?

The timing and framing of the layoffs attributed to AI warrants closer examination. Corporate restructuring, over-hiring during the post-pandemic boom as demand for online services soared, and pressure from investors to demonstrate improved profit margins are all forces operating at the same time as genuine advances in AI.

While these are not mutually exclusive explanations, they are rarely acknowledged alongside one another in corporate communications.

There is a powerful financial incentive for companies to be seen to be embracing AI aggressively. Since the launch of ChatGPT, AI-related stocks have accounted for about 75pc of S&P 500 returns.

A workforce reduction framed around AI adoption sends a signal to investors that a straightforward cost-cutting announcement does not. A company making AI-related innovations looks a lot better than one sacking staff due to declining revenues or poor strategic decisions.

It is also worth distinguishing between two kinds of workforce reduction. In the first, AI genuinely increases productivity to the point where fewer workers are needed to produce the same output. In the second, staff reductions are not a consequence of AI, but a way to fund it.

Meta illustrates this distinction. The social media giant is reportedly planning to lay off as much as 20pc of its workforce, while simultaneously committing $600bn to build data centres and recruit top AI researchers.

In this case, the workers being let go are not being replaced by AI today; they are subsidising the AI bet their employer is making on the future.

The more plausible future

The big picture is likely one of transformation rather than elimination. According to a recent PwC report, employment is still growing in most industries exposed to AI, although growth tends to be slower than in less exposed sectors.

At the same time, wages in AI-exposed industries are rising roughly twice as fast as in those least touched by the technology. Workers with AI skills command an average wage premium of about 56pc across the industries analysed.

Together, the data points toward a flattening of the traditional workplace pyramid rather than mass displacement. Firms require fewer junior employees for routine analytical and administrative work, while experienced professionals who deploy AI tools effectively become more productive and command greater value.

AI is a consequential technology and will have a significant impact in the long term. What is in doubt is whether the dramatic, AI-attributed workforce reductions announced by individual companies accurately reflect that trajectory, or whether they conflate genuine technological change with decisions that would have been made regardless.

Making this distinction is not merely an academic exercise. It shapes how policymakers, educators and workers themselves understand the nature of the disruption they are navigating.

The Conversation

By Uri Gal

Uri Gal is a professor of business information systems at the University of Sydney Business School. His research focuses on the organisational and ethical aspects of digital technologies. He is particularly interested in the relationships between people and technology, and in the changes in the nature of work associated with the introduction of algorithmic technologies.

Beyond the Classroom: How School Districts Are Building Real-World Career Pathways

When a water-treatment plant outside Denver discovered an algae problem in its pipes, it did not call an engineering firm. It called the students.

The aquatic robotics team at the Innovation Center at St. Vrain Valley Schools in Longmont, Colorado, sent underwater robots into the facility, collected data, identified the algae species and helped eradicate it. The plant now contracts with the student team for quarterly checkups. Neighboring towns have started calling, too.

This is not a simulation or a classroom exercise conjured up to look like real work. It is real work, and it reflects a broader shift underway in districts. Increasingly, schools are building career learning pathways that connect students directly with professional challenges, industry mentors and, in some cases, a paycheck.

The Case for Real Work

The urgency behind these efforts is hard to ignore. A 2023 review from the American Institutes for Research, drawing on two decades of studies, found that career and technical education participation has statistically significant positive impacts on academic achievement, high school completion, employability skills and college readiness.

The question districts are now wrestling with is not whether to offer career pathways, but whether those pathways lead anywhere real.

Policy leaders are paying attention. The Education Commission of the States has identified building aligned career pathways and removing barriers to economic opportunity as one of its top priorities through 2027.

At St. Vrain, Assistant Superintendent of Innovation Joe McBreen has spent years trying to answer that question through a program known as project teams.

After school each day, 264 students log in at the district’s Innovation Center and begin work as paid district employees, billing hours against accounts for actual clients. Students can join a drone show team, a cybersecurity unit, an AI development group or a dozen other teams, rotating among them as their interests evolve.

“It’s low threat, high reward,” says McBreen. “Students get paid, grow their network, develop soft skills and test drive careers. And if they get into a team and realize it’s not for them, there’s real value in that, too.”

The model relies heavily on industry mentors who bring in real work rather than invented classroom projects. Damon Brown, a senior cybersecurity adviser for the U.S. Department of State focused on Ecuador, mentored seven St. Vrain students on a complex assignment.

He asked them to design the architecture for a cyber intelligence fusion center using open-source tools — work that could have cost hundreds of thousands of dollars if contracted from a professional firm.

“The students knocked it out of the park,” says Brown.

They built the system architecture, wrote user manuals, recommended equipment and conducted a threat analysis of countries surrounding Ecuador. Brown was so impressed he is now hiring six St. Vrain interns.

“This experience binds people together,” he says.

The program also has a way of growing in unexpected directions. After one student’s grandparent was victimized by a cybercrime, the cybersecurity team created an awareness curriculum for senior citizens. They taught five classes to 24 senior citizens in the first year; the second session was standing room only. Senior facilities now pay the students to come in and teach.

Meanwhile, the drone team flies commercial shows for companies across the country on Friday afternoons, billing clients at rates few drone pilots in the country can match. One former member is now studying aerospace engineering and using money from drone flying to help pay for college.

Taking the Model Out West

St. Vrain’s work has drawn attention from educators around the country, some of whom are adapting pieces of the model to fit their own communities.

Kris Hagel, chief information officer of Peninsula School District in Washington state, visited the Innovation Center and came away convinced he could build something similar.

Two years ago, Peninsula launched a paid drone internship program, starting with seven students and gradually expanding. Students work alongside industry partners while learning how to navigate FAA regulations, program autonomous flight paths and repair drones.

“When you’re willing to look at what’s cutting edge and think innovatively without being constrained by traditional systems, you can create opportunities for kids that transcend what we think of as traditional education,” says Hagel. “This program has become so much more than I thought was possible.”

The district partnered with Firefly Drone Systems, one of the few American drone manufacturers, to train students and help them operate drone shows.

The program also includes multiple roles beyond piloting, including marketing, animation design and equipment maintenance. Hagel envisions a future where students studying business management hire other students to operate the program.

A skilled drone operator who leaves high school with the capital to purchase equipment can enter a six-figure career almost immediately, says Hagel.

Finding the Problem First

Not every district is building toward robotics contracts or drone shows. For Michele Davis, CTE department chair at Metropolitan School District of Steuben County in Indiana, the real-world pathway is entrepreneurship.

Working with the StartED Up Foundation, Davis guides students through a three-year sequence: identifying an actual problem, developing a solution, building out the business model and presenting it to real audiences.

Students take “opportunity walks” around the school, documenting everyday frustrations and brainstorming solutions. They learn how to market their ideas professionally by practicing elevator pitches, presenting case studies to various audiences and explaining their ideas to elementary school students.

“Opportunities are everywhere,” says Davis.

The ideas that emerge can be surprisingly practical. One student designed a reversible outfit to solve a quick-change problem in theater productions. Another class developed a mobile trailer concept that could help unhoused people access hygiene services.

Beyond the business concepts themselves, Davis says the program focuses heavily on communication skills and confidence. “We get students comfortable doing things that are normally uncomfortable,” she says.

A Credential, Not Just a Class

At Suffern Central School District in Rockland County, New York, Superintendent P. Erik Gundersen has taken yet another approach.

Through a partnership with the League of Innovative Schools and curriculum provider Paradigm, the district launched a three-year cybersecurity certification pathway embedded directly into the high school. About 60 students are currently enrolled.

The program was designed to reach students who might not otherwise see themselves in a cybersecurity career. The district actively recruited students from immigrant communities and others who are new to the U.S.

Students work in a “sandbox” environment that simulates real cyber incidents, allowing them to practice identifying threats and responding to attacks.

“The means to send a kid to college is not as great as it was, and a lot of what we’re reading questions the importance of a college education,” says Gundersen.

Those economic realities, he says, are pushing districts to rethink how they prepare students for the workforce.

Career credentials embedded within traditional high school programs can open doors for students who may not otherwise have clear pathways into high-skill industries.

Education That Looks Like Life

Across these programs, the details vary widely, but the philosophy is the same: Authentic experience is not a supplement to education. It is education.

As McBreen says, “I encourage districts to expand their vision. Anyone can do this. Start small.”

IEEE Young Professionals Tackle Skills Gap in Tech

The America’s Talent Strategy: Building the Workforce for the Golden Age report, published last year by the U.S. Departments of Commerce, Education, and Labor, identified a significant engineering and skills gap. The 27-page report concluded that the shortage of talent in essential areas—including advanced manufacturing, artificial intelligence, cloud computing, and cybersecurity—poses significant risks to U.S. economic and technological leadership.

To help attract talent in those fields, the Labor Department last month introduced incentives for apprenticeships, including a US $145 million “pay for performance” grant program. The funding aims to develop registered apprenticeships in high-demand fields including artificial intelligence and information technology.

Reacting to the urgent national need for targeted workforce development were members of IEEE Young Professionals, led by Alok Tibrewala, an IEEE senior member. He is a cochair of the IEEE North Jersey Section’s Young Professionals group.

“As a software engineer, this impending shortage concerns me because I believe that the U.S. AI and cybersecurity skills gap would show up first in the early-career pipeline,” Tibrewala says. “Students will be entering the U.S. workforce without enough hands-on experience building secure AI-enabled enterprise and cloud systems, and this gap will persist without practical, mentor-led training before graduation.”

Tibrewala led a strategic planning session with representatives from the New Jersey Institute of Technology, IEEE Member and Geographic Activities, and IEEE Young Professionals to discuss holding an event that would provide practical, industry-relevant training by experts and IEEE leaders.

“I was able to establish a partnership with NJIT, recruit speakers, design the event’s agenda, and promote the event to ensure it was aligned with the strategy outlined in the workforce report,” he says. “This effort aligns with broader U.S. workforce development priorities focused on industry-driven skills training in critical technology areas.”

The IEEE Buildathon event was held on 1 November at NJIT’s Newark campus. More than 30 students and early-career engineers heard from 11 speakers. Through interactive workshops, live demonstrations, and networking opportunities, they left with practical, employer-aligned skills and clearer career pathways for AI-era skills-building.

Tibrewala chaired the event and also serves as chair of the IEEE Buildathon program.

Session takeaways

Region 1 Director Bala S. Prasanna, a life senior member, gave the keynote address. He emphasized the need for universities, industry practitioners, and IEEE volunteer leaders to collaborate on programs to enhance technical skills.

IEEE Member Kalyani Matey, cochair of the IEEE North Jersey Section’s Young Professionals, conducted a workshop on how to build one’s personal brand and a responsive network. Participants received valuable insights about résumé building, effective communication strategies, and enhancing their visibility and employability.

“Over time, this kind of structured, employer-aligned training will help increase confidence, employability, and technical readiness across the country. With sustained support, programs like the IEEE Buildathon can become a practical bridge from education to industry in the AI era.” —Alok Tibrewala

Tibrewala led the Unlocking AI’s Potential: Solving Big Challenges With Smart Data and IEEE DataPort session. The web-based DataPort platform allows researchers to store, share, access, and manage their research datasets in a single, trusted location. He discussed needed skills including AI literacy, strong data handling and dataset stewardship, and turning data into actionable insights.

Chaitali Ladikkar, an IEEE member and senior software engineer, delivered the Brains Behind the Game seminar, highlighting the transformative impact AI is having on gaming and game-engine technologies. She explained how AI is reshaping game development and how machine learning is being used for animation, faster content generation, and testing of new titles. The seminar received enthusiastic feedback from participants.

The Building Better Business Relationships DiSC workshop provided insights into enhancing professional relationships and communication within an engineering workforce. DiSC is a behavioral self-assessment used to understand an individual’s communication style and to adapt to others.

Participant experience and testimonials

The event received high praise from participants for its practical and industry-relevant content, according to Tibrewala.

“This training significantly enhanced my understanding and readiness for industry roles, filling gaps my regular academic coursework did not fully address,” said Humna Sultan, an IEEE student member who is a senior studying computer science at Stevens Institute of Technology, in Hoboken, N.J.

“The Buildathon was structured around real engineering challenge scenarios that deepened my understanding of AI and cloud technologies,” said Carlos Figueredo, an IEEE graduate student member who is studying data science at the University of Michigan, in Ann Arbor. “It boosted my confidence and practical skills essential for the industry.”

Bavani Karthikeyan Janaki said “it was incredible to see how technology and sustainability came together to drive real-world impact, thanks to the dedicated efforts of the organizers including Tibrewala, Matey, and the IEEE North Jersey Young Professionals.” Janaki is pursuing a master’s degree in computer and information science at Long Island University, in New York.

Funding and collaborative efforts

The Buildathon was made possible through grants from the IEEE Young Professionals group and funding from the IEEE North Jersey Section and IEEE Member and Geographic Activities. Their support shows how IEEE’s professional organizations can collaborate to address workforce needs by supporting the delivery of technical sessions that strengthen early-career pipelines.

Future plans and a call to action

Building on the event’s success, Tibrewala and Matey plan to make the IEEE Buildathon an ongoing initiative. They are exploring ways to expand it to additional university campuses and IEEE communities.

Tibrewala says they plan to refine the format based on participant feedback and lessons learned. To support consistent quality, he and Matey say, they are working on a playbook for organizers that will include a repeatable agenda, a workshop template, speaker guidelines, and post-event feedback forms.

The approach depends on continued coordination among host universities, local IEEE sections, and Young Professional volunteers, Tibrewala says.

“Enabling other groups to run similar events,” he says, “can help more students and early-career engineers gain practical exposure to AI, data, cloud, cybersecurity, and other key emerging technologies in a structured setting.

“Efforts like this help translate national workforce priorities into real training that students and early-career engineers can apply immediately to their projects. This also helps close the gap between classroom learning and the realities of building secure, reliable systems in production environments. Over time, this kind of structured, employer-aligned training will help increase confidence, employability, and technical readiness across the country.

“With sustained support, programs like the IEEE Buildathon can become a practical bridge from education to industry in the AI era.”

GridEx Highlights Drone Risks to Power Grids

In the fictional nation of Beryllia, the 2026 World Chalice Games were set to begin as the country faced an unrelenting heat wave. The grid, already strained by the heat, was dealt a further blow when a coordinated campaign of vandalism, drone, and ballistic attacks by an adversary, Crimsonia, crippled its physical infrastructure.

This scenario, inspired by the upcoming 2026 World Cup and the 2028 Olympic Games in Los Angeles, was an exercise in studying how utilities can prevent and mitigate, among other dangers, physical attacks on power grids. Called GridEx, the exercise was hosted by the Electricity Information Sharing and Analysis Center (E-ISAC) from 18 to 20 November 2025. GridEx has been held every two years since 2011.

“We know that threat actors look to exploit certain circumstances,” says Michael Ball, CEO of E-ISAC, which is a program of the North American Electric Reliability Corporation (NERC), about designing the Beryllia scenario. “The Chalice Games became a good example of how we could build a scenario around a threat actor.”

Physical attacks on the grid are rising in the U.S., and GridEx attendance was up in November as utilities grapple with how to prevent and mitigate attacks. Participation in the exercise was at its highest level since 2019, according to a report released on 2 March. Based on the number of organizations present, organizers estimate that more than 28,000 individual players participated, including utility workers and government partners, the most since the exercise began.

Rising Physical Threats to Power Grids

The U.S. and Canadian grids face growing security issues from physical threats, including vandalism, assault of utility workers, property intrusion, and theft of components such as copper wiring. NERC’s 2025 E-ISAC end-of-year report cites more than 3,500 physical security breaches that calendar year, about 3 percent of which disrupted electricity. That’s up from 2,800 events cited in the 2023 report (3 percent of those also resulted in electricity disruptions). And while the U.S. has seen a number of recent high-profile attacks, physical attacks on the grid are happening worldwide.

“They’re not uniquely a U.S. thing,” says Danielle Russo, executive director of the Center for Grid Security at Securing America’s Future Energy, a nonpartisan organization focused on advancing national energy security. Russo says that while attacks are common in places like Ukraine, they’re not limited to wartime scenarios. “Other countries that are not experiencing direct conflict are experiencing increasing amounts of physical attacks on their energy infrastructure,” she says. Take Germany for example: On 3 January, an arson attack by left-wing activists in Berlin caused a five-day blackout impacting 45,000 households. That comes after a suspected arson attack on two pylons in September 2025 left 50,000 Berlin households without power. Some German officials cite domestic extremism and fears of Russian sabotage in recent years as reasons for heightened security concerns over critical infrastructure.

The uptick in attacks on the U.S. grid has been anchored by a number of incidents in recent years. In December 2025, an engineer in San Jose, California, was sentenced to 10 years in prison for bombing electric transformers in 2022 and 2023. A Tennessee man was arrested in November 2024 for attempting to attack a Nashville substation using a drone armed with explosives. And in 2023, a neo-Nazi leader was among two arrested in a plot to attack five substations around Baltimore with firearms, part of an increasing trend of white supremacist groups planning to attack the U.S. energy sector.

“Since [E-ISAC] started publishing data back in 2016, we’ve seen a large and consistent increase in the number of reported physical security incidents per year,” says Michael Coe, the vice president of physical and cyber security programs at the American Public Power Association, a trade group that works with E-ISAC to plan GridEx. While not all data is publicly available, Coe says there’s been a “tenfold” increase over the past decade in the number of reported physical attacks on the grid.

Advertisement

Drone Attacks: A Growing Security Challenge

During the fictional World Chalice Games scenario, drone attacks destroyed Beryllia’s substation equipment, highlighting a threat that’s gained traction as more drones enter the airspace.

“The question we get all the time is, how do you tell if it’s a bad actor, or if it’s a 12-year-old kid that got the drone for their birthday?” says Erika Willis, the program manager for the substations team at the Electric Power Research Institute (EPRI).

One strategy to track and alert utilities to potential threats such as drones is called sensor fusion. The system includes a pan-tilt-zoom camera capable of 360-degree motion, mounted on a tripod or pole alongside four radars. The radars combine with the camera for a dual system that can track drones even if they’re obstructed from view, says Willis. For instance, if a nearby drone flies behind a tree, hidden from the camera, the radars will still pick up on it. The technology is currently being tested at EPRI’s labs in Charlotte, North Carolina, and Lenox, Massachusetts.
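A minimal sketch of that fallback logic, in illustrative Python with hypothetical names (a toy, not EPRI's implementation): the camera fix is preferred when present, and radar keeps the track alive when the view is blocked.

```python
# Toy sketch of camera/radar sensor fusion for drone tracking.
# A track stays alive on radar returns even when the camera is blocked.

from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Detection:
    position: Tuple[float, float]   # (x, y) in meters
    source: str                     # "camera" or "radar"

def fuse(camera: Optional[Detection], radar: Optional[Detection]) -> Optional[Detection]:
    """Prefer the camera fix when available; fall back to radar."""
    if camera is not None:
        return camera   # visual confirmation available
    return radar        # target obstructed: radar keeps the track

# Drone flies behind a tree: the camera drops out, the radar still reports.
track = fuse(None, Detection((120.0, 45.0), "radar"))
print(track.source)   # radar
```

The real system would fuse both measurements (e.g., with a Kalman filter) rather than simply preferring one; this sketch only shows why the redundant sensors matter.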

EPRI is also exploring how robotics and AI can improve security systems, Willis says. One approach involves integrating AI analysis into robotic technology already surveilling substation perimeters. Using AI can improve detection of break-ins and damage to fencing around substations, Willis says. “As opposed to a human having to go through 200 images of a fence, you can have the AI overlays do some of those algorithms…If the robot has done the inspection of the substation 100 times, it can then relay to you that there’s an anomaly,” Willis says.
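The baseline comparison Willis describes, in which a robot that has inspected a substation many times flags deviations, can be sketched with simple statistics (an illustrative toy, not EPRI's actual pipeline):

```python
# Toy sketch: flag an anomaly when a new fence inspection deviates
# sharply from the statistics of prior routine inspections.

from statistics import mean, pstdev

def is_anomaly(history, new_value, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the
    historical mean (e.g., a fence-integrity score per patrol)."""
    mu = mean(history)
    sigma = pstdev(history) or 1e-9   # avoid division by zero
    return abs(new_value - mu) / sigma > threshold

# 100 routine patrols scored the fence around 0.95; a cut drops it sharply.
baseline = [0.95 + 0.001 * (i % 5) for i in range(100)]
print(is_anomaly(baseline, 0.40))    # True: likely damaged fencing
print(is_anomaly(baseline, 0.952))   # False: normal variation
```

An AI overlay would score images with a vision model instead of a single number, but the "compare against 100 prior inspections" logic is the same.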

A fiber sensing technology unit, roughly the size and shape of a filing cabinet. Prisma Photonics deploys fiber sensing technology that uses reflected optical signals to detect perturbations from vehicles and other sources near underground fiber cable. (Photo: Prisma Photonics)

Already, a number of utilities in the U.S. are using AI integrations in their security and monitoring processes. That’s thanks in part to the Tel Aviv, Israel-based Prisma Photonics, a software company that launched in 2017 and has since deployed its fiber sensing technology across thousands of miles of transmission infrastructure in the U.S., Canada, Europe, and Israel. A file-cabinet-sized unit plugs into a substation and sends light pulses down existing fiber optic cables 30 miles in each direction. As the pulses travel down the cables, a tiny fraction of the light is reflected back to the substation unit. An AI model processes the results and can classify events based on patterns in the optical signal as a result of perturbations happening around the fiber cable.
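The classification workflow described above, matching signal patterns against known event types and adding new labels as operators provide feedback, might be sketched as a nearest-pattern lookup (hypothetical and greatly simplified; Prisma's actual system is a trained AI model):

```python
# Toy sketch of feedback-driven event classification: unknown signal
# patterns get labeled by operator feedback and added to the model.
# Illustrative only; not Prisma Photonics' code.

import math

class EventClassifier:
    def __init__(self):
        self.patterns = {}   # label -> reference signal features

    def add_label(self, label, signal):
        """Operator feedback: attach a label to an observed pattern."""
        self.patterns[label] = signal

    def classify(self, signal, max_distance=1.0):
        """Return the closest known label, or "unknown" if nothing is near."""
        best, best_d = None, float("inf")
        for label, ref in self.patterns.items():
            d = math.dist(ref, signal)   # Euclidean distance between features
            if d < best_d:
                best, best_d = label, d
        return best if best_d <= max_distance else "unknown"

clf = EventClassifier()
clf.add_label("vehicle", [0.2, 0.8, 0.3])
sig = [0.9, 0.1, 0.9]
print(clf.classify(sig))            # "unknown": no close match yet
clf.add_label("car crash", sig)     # customer feedback labels the event
print(clf.classify(sig))            # "car crash"
```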

“If we identify an event that we don’t have a classification for, and we get a feedback from a customer saying, ‘oh, this was a car crash,’ then we can classify that in the model to say this is actually what happened,” says Tiffany Menhorn, Prisma Photonics’ vice president of North America.

As preparations get underway for the ninth GridEx in 2027, Ball says participation in the exercises alone isn’t enough to bolster grid security. Instead, he wants utilities to take what they learn from the training and apply it in their own operations. “It’s the action of doing it, versus our statistic of saying, ‘here’s what our growth was.’ That growth should relate to the readiness and capability of the industry.”

NVIDIA’s Drive Hyperion System Gains Three New Allies in the Push for Cars That Handle Themselves

GTC 2026 brought some news that caught a lot of people off guard. Three major automakers have signed on to work with NVIDIA to bring autonomous driving to their vehicles in defined conditions, and sooner than most would have expected. BYD, Nissan, and Isuzu are all on board, each bringing their own strengths to the table as the technology edges closer to becoming an everyday reality on public roads.



BYD is no stranger to pushing technology forward, and they plan to roll the system out across their next generation of models, the ones already turning heads on the road. Nissan is taking a broader approach, bringing it to their entire passenger vehicle lineup, while Isuzu is focused on the commercial side of things, teaming up with TIER IV to keep their buses running smoothly with minimal need for human supervision.

Drive Hyperion is a full system that includes sensors, processing units, and software that are ready to use right out of the box. That means automakers don’t have to start from scratch; instead, they can take the parts that work and adapt them to their own vehicles. It all adds up to L4 autonomy, in which the car does all of the driving in particular scenarios, such as highways or mapped urban areas, eliminating the need for someone to be on high alert at all times.


Fourteen high-definition cameras provide a continuous 360-degree picture of everything around the automobile, while nine radar units monitor distances and speeds even in bad weather. One LiDAR scanner creates precise 3D images of the environment around the car, while twelve ultrasonic detectors handle short-range tasks such as parking and merging. At the center of it all are two computers powered by the latest NVIDIA chips, capable of handling over 2 trillion operations per second. And if one of them fails, there is a backup system in place to keep everything going smoothly.


Raw sensor data is fed directly into the computers, where software develops an understanding of the vehicle’s location and surroundings. Separate parts of the system then weigh the options by looking at what the cameras show, what the vehicle has done before, the planned route, and even what the navigation system says. An open model called Alpamayo shows how all of this works, tracing out every step of the decision-making process and making it easier for developers to refine the system and verify it’s doing the right thing.
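The step-by-step traceability described for Alpamayo can be illustrated with a toy pipeline that records each stage's input and output so developers can audit every decision (a hypothetical structure, not the actual model):

```python
# Toy sketch of a traceable decision pipeline: each stage logs what it
# saw and what it decided, so the full reasoning chain can be inspected.

def traced_pipeline(stages, observation):
    """Run named stages in order, recording each step for later audit."""
    trace = []
    value = observation
    for name, fn in stages:
        result = fn(value)
        trace.append({"stage": name, "input": value, "output": result})
        value = result
    return value, trace

# Hypothetical two-stage driving decision: perceive, then plan.
stages = [
    ("perceive", lambda obs: {"obstacle_ahead": obs["camera"] == "pedestrian"}),
    ("plan",     lambda s: "brake" if s["obstacle_ahead"] else "continue"),
]

action, trace = traced_pipeline(stages, {"camera": "pedestrian"})
print(action)   # brake
for step in trace:
    print(step["stage"], "->", step["output"])
```

The point of the structure is that nothing is decided invisibly: every intermediate conclusion is written down, which is what makes refinement and debugging tractable.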

Before the system ever reaches a real car, engineers can test it in a digital environment. They use real-world data to reproduce difficult or unique circumstances, which helps them catch issues that might otherwise surface only years later. One of the most important aspects is ensuring that the system is safe, and to that end, NVIDIA has created an operating system called Halos that puts several layers of safety around the entire thing. It is designed to meet the strictest automotive standards and incorporates active monitoring, which acts as a constant safety net to prevent anything from going wrong. Users have already begun to put the platform into action: ride-sharing services are preparing to debut fleets of robotaxis and delivery cars in dozens of locations beginning in 2027.

Someone gave the MacBook Neo the 1TB storage upgrade it never got from Apple

Apple launched the $599 MacBook Neo on March 11, a budget Mac powered by the A18 Pro chip from the iPhone 16 Pro, 8GB of unified memory, and a 13-inch screen. Though it offers decent specifications for the price, there’s a catch: the storage tops out at 512GB. 

However, a Chinese repair technician, DirectorFeng, has swapped the default NAND chip for a 1TB chip, pushing the MacBook Neo’s storage past its official ceiling. The technician has posted a video of the entire process on a YouTube channel.

How did DirectorFeng pull this off?

DirectorFeng replaced the NAND flash chip soldered to the MacBook’s logic board and then reflashed macOS so it recognizes the third-party drive and its full capacity. The process involved removing the original chip, cleaning the solder pads, and installing a higher-capacity replacement using professional repair tools.

This wasn’t a screwdriver-and-YouTube-tutorial situation; this was microsurgery on a logic board, the kind that makes most people’s palms sweat. However, once reassembled, macOS recognized the larger-capacity NAND drive without firmware issues, and storage performance appeared normal as well.

The storage, as seen in the video, goes up from 256GB to 994.61GB (marketed as 1TB). Once the process was complete, the replaced drive offered read and write speeds of 1,551 MB/s and 1,506 MB/s, respectively.

Should you try upgrading your MacBook Neo’s storage?

It’s worth noting that Apple uses soldered NAND rather than a removable SSD, which means any capacity change requires microsoldering and would almost certainly void the manufacturer’s warranty. However, the successful storage upgrade indicates that the Neo is easier to work on than other MacBooks.

Is this a consumer-friendly upgrade? No. Should you try upgrading your MacBook Neo’s storage yourself? Certainly not. The only key takeaway here is that the device works with third-party storage without any firmware issues. So, a storage upgrade, at least in theory, is possible. 

Trump Gets $10 Billion Kickback To The Treasury For Offloading TikTok To His Billionaire Buddies

from the bribes-for-the-king dept

We’ve discussed at length how Trump’s “fix” for TikTok’s problems basically involved forcing the sale of the platform to his greedy billionaire buddies (with the help of pathetic Democrats). The deal fixed none of the real issues Trumpland pretended to be concerned about (national security, privacy, propaganda), and China still maintains a significant ownership stake.

It was one of the more embarrassing examples of U.S. cronyism and corruption in recent memory.

But wait, as they say, there’s more!

As the Wall Street Journal notes (paywalled), the “Trump administration” is set to receive a $10 billion fee from investors for facilitating the deal. The new owners, which include Trump’s friend Larry Ellison, private equity giant Silver Lake, and MGX (controlled by the UAE) are funneling the payments, which will total $10 billion, to the “Treasury Department”:

“They and other backers paid the Treasury Department about $2.5 billion when the deal closed in January and are set to make several additional payments until hitting the $10 billion total, the people said.”

We, of course, don’t actually know where that money is going or what it will actually be used for. You can confidently assume it will eventually wind its way into Trump’s pocket somehow, since the entirety of U.S. democratic oversight has been wholly corrupted by these whiny zealots, who are busy stripping the country for parts and selling it for scrap off the back loading dock.

Rupert Murdoch’s Wall Street Journal goes to comical lengths to normalize this bribe, though they do at least try to express how “unprecedented” this sort of thing is by citing an unnamed, ambiguous historian:

“The $10 billion payment would be nearly unprecedented for a government helping arrange a transaction, historians have said. Vice President JD Vance previously said the new TikTok entity running the U.S. operations is valued at about $14 billion in the deal, which some tech analysts have said dramatically undervalues the company.”

The outlet goes on to note that the $10 billion fee absolutely towers over any remotely comparable historical precedent:

“Investment bankers advising on a typical deal receive fees of less than 1% of the transaction value, and the percentage generally gets smaller as the deal size increases. Bank of America is in line to make some $130 million for advising railroad operator Norfolk Southern on its $71.5 billion sale to Union Pacific, one of the largest fees on record for a single bank on a deal.

Administration officials have said the fee is justified given Trump’s role in saving TikTok in the U.S. and navigating negotiations with China to get the deal done while addressing the security concerns of lawmakers.”

The Wall Street Journal can’t be bothered to note that the deal fixed absolutely none of the purported concerns raised about TikTok. China still has a major ownership stake, and the new owners seem every bit as hostile to democracy and free expression as the worst Chinese autocrat (they’re just not honest enough with themselves or you to admit it yet).

All of these owners are just as likely to engage in privacy and surveillance violations as the Chinese (who, again, despite a lot of pretense, did not have full direct control over the app). In fact, you could even argue that the previous TikTok was likely to be better on all of these subjects, because its owners were at least trying to adhere to ethical standards to remain operating in the country.

TikTok’s new American owners are very up front about their plans to demolish the entirety of regulatory autonomy, corporate oversight, and consumer protection, leaving them with absolute freedom to pursue whatever unethical bullshit they can dream up. I suspect they’ll try to leave things alone for a year (to avoid a mass exodus of young people) before their goals become… unsubtle.

Again, Trump, with Democratic help, managed to steal the world’s most popular short form video app and offload it to his radical billionaire friends under the pretense he was protecting national security and U.S. consumer privacy. Even before you get to this $10 billion bribe, it’s easily one of the ugliest examples of corruption and U.S. tech policy dysfunction we’ve ever seen.

I like to convince myself history will not be kind.

Filed Under: autocrats, billionaires, corruption, donald trump, larry ellison, national security, privacy, propaganda, social media, video

Companies: bytedance, mgx, oracle, silver lake, tiktok

Every Ham Shack Needs A Ham Clock

Every ham radio shack needs a clock, ideally one with operator-friendly features like multiple time zones. [cburns42] found that most solutions relied too much on an internet connection for his liking, so in true hacker fashion he decided to make his own: the operator-oriented Ham Clock CYD.

A tabbed interface goes well with the touchscreen LCD.

The Ham Clock CYD is so named for being based on the Cheap Yellow Display (CYD), an economical ESP32-based color touchscreen LCD which provides most of the core functionality. The only extra hardware is a BME280 temperature and humidity sensor, and a battery-backed DS3231 RTC module, ensuring that accurate time is kept even when the device is otherwise powered off.

It displays a load of useful operator-oriented data on the touchscreen LCD, and even has a web-based configuration page for ease of use. While the Ham Clock is a standalone device that does not depend on internet access to function, it will take advantage of a connection when one is available. When it has internet access over the built-in WiFi, the display incorporates specialized amateur radio data, including N0NBH solar forecasts and calculated VHF/HF band conditions, alongside standard meteorological data.
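That offline-first behavior, always showing time from the battery-backed RTC and layering on internet-only data when WiFi is up, can be sketched like this (illustrative Python with hypothetical names; the actual device runs ESP32 firmware):

```python
# Toy sketch of the Ham Clock's offline-first design: the RTC always
# supplies the time, and network-only extras (solar forecasts, band
# conditions) are added only when a connection is available.

def build_display(rtc_time, online=False, fetch_solar=None):
    """Assemble the display data; network data is strictly optional."""
    display = {"time": rtc_time}          # always available from the RTC
    if online and fetch_solar is not None:
        display["solar"] = fetch_solar()  # extras only with internet
    return display

# Offline: accurate time is still shown thanks to the battery-backed RTC.
print(build_display("14:32:05 UTC"))
# Online: solar data (e.g., an N0NBH-style forecast) is layered on.
print(build_display("14:32:05 UTC", online=True,
                    fetch_solar=lambda: {"sfi": 140, "k_index": 2}))
```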

The CYD, sensor, and RTC are very affordable pieces of hardware which makes this clock an extremely economical build. Check out the GitHub repository for everything you’ll need to make your own, and maybe even put your own spin on it with a custom enclosure. On the other hand, if you prefer your radio-themed clocks more on the minimalist side, this Morse code clock might be right up your alley.

What is the release date for Scrubs season 10 episode 5 on Hulu and Disney+?

It’s hard to believe that Scrubs season 10 will hit its halfway point with its next episode. By all accounts, the hospital-set sitcom is performing pretty well, which raises the question of why more episodes weren’t greenlit.

But that’s a debate for another day. Right now, you’re here to find out when season 10’s fifth episode, titled ‘My Angel’, will premiere on some of the world’s best streaming services. Don’t delay, then — read on for more details!
