Tech
Who gets to warn us the world is ending?
Not everyone wants to rule the world, but it does seem lately as if everyone wants to warn that the world might be ending.
On Tuesday, the Bulletin of the Atomic Scientists unveiled its annual resetting of the Doomsday Clock, which is meant to visually represent how close the organization’s experts believe the world is to ending. Reflecting a cavalcade of existential risks ranging from worsening nuclear tensions to climate change to the rise of autocracy, the hands were set to 85 seconds to midnight, four seconds closer than in 2025 and the closest the clock has ever been to striking 12.
The day before, Anthropic CEO Dario Amodei — who may as well be the field of artificial intelligence’s philosopher-king — published a 19,000-word essay entitled “The Adolescence of Technology.” His takeaway: “Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political and technological systems possess the maturity to wield it.”
Should we fail this “serious civilizational challenge,” as Amodei put it, the world might well be headed for the pitch black of midnight. (Disclosure: Future Perfect is funded in part by the BEMC Foundation, whose major funder was also an early investor in Anthropic; they don’t have any editorial input into our content.)
As I’ve said before, it’s boom times for doom times. But examining these two very different attempts at communicating existential risk — one very much a product of the mid-20th century, the other of our own uncertain moment — presents a question. Who should we listen to? The prophets shouting outside the gates? Or the high priest who also runs the temple?
The Doomsday Clock has been with us so long — it was created in 1947, just two years after the first nuclear weapon incinerated Hiroshima — that it’s easy to forget how radical it was. Not just the Clock itself, which may be one of the most iconic and effective symbols of the 20th century, but the people who made it.
The Bulletin of the Atomic Scientists was founded immediately after the war by scientists like J. Robert Oppenheimer — the very men and women who had created the bomb they now feared. That lent an unparalleled moral clarity to their warnings. At a moment of uniquely high levels of institutional trust, here were people who knew more about the workings of the bomb than anyone else, desperately telling the public that we were on a path to nuclear annihilation.
The Bulletin scientists had the benefit of reality on their side. No one, after Hiroshima and Nagasaki, could doubt the awful power of these bombs. As my colleague Josh Keating wrote earlier this week, by the late 1950s there were dozens of nuclear tests being conducted around the world each year. That nuclear weapons, especially at that moment, presented a clear and unprecedented existential risk was essentially inarguable, even by the politicians and generals building up those arsenals.
But the very thing that gave the Bulletin scientists their moral credibility — their willingness to break with the government they once served — cost them the one thing needed to end those risks: power.
As striking as the Doomsday Clock remains as a symbol, it is essentially a communication device wielded by people who have no say over the things they’re measuring. It’s prophetic speech without executive authority. When the Bulletin, as it did on Tuesday, warns that the New START treaty is expiring or that nuclear powers are modernizing their arsenals, it can’t actually do anything about it except hope policymakers — and the public — listen.
And the more diffuse those warnings become, the harder it is to be heard.
Since the end of the Cold War took nuclear war off the agenda — temporarily, at least — the calculations behind the Doomsday Clock have grown to encompass climate change, biosecurity, the degradation of US public health infrastructure, new technological risks like “mirror life,” artificial intelligence, and autocracy. All of these challenges are real, and each in their own way threatens to make life on this planet worse. But mixed together, they muddy the terrifying precision that the Clock promised. What once seemed like clockwork is revealed as guesswork, just one more warning among countless others.
Even more than most AI leaders, Amodei has frequently been compared to Oppenheimer.
Like Oppenheimer, Amodei was a scientist first. He did important work on the “scaling laws” that helped unlock powerful artificial intelligence, just as Oppenheimer did critical research that helped blaze the trail to the bomb. And like Oppenheimer, whose real talent lay in the organizational abilities required to run the Manhattan Project, Amodei has proven highly capable as a corporate leader.
And like Oppenheimer — after the war at least — Amodei hasn’t been shy about using his public position to warn in no uncertain terms about the technology he helped create. Had Oppenheimer had access to modern blogging tools, I guarantee you he would have produced something like “The Adolescence of Technology,” albeit with a bit more Sanskrit.
The difference between these figures is one of control. Oppenheimer and his fellow scientists lost control of their creation to the government and the military almost immediately, and by 1954 Oppenheimer himself had lost his security clearance. From then on, he and his colleagues would largely be voices on the outside.
Amodei, by contrast, speaks as the CEO of Anthropic, the AI company that at the moment is perhaps doing more than any other to push AI to its limits. When he spins transformative visions of AI as potentially “a country of geniuses in a datacenter,” or runs through scenarios of catastrophe ranging from AI-created bioweapons to technologically enabled mass unemployment and wealth concentration, he is speaking from within the temple of power.
It’s almost as if the strategists setting nuclear war plans were also fiddling with the hands on the Doomsday Clock. (I say “almost” because of a key distinction — while nuclear weapons promised only destruction, AI promises great benefits and terrible risks alike. Which is perhaps why you need 19,000 words to work out your thoughts about it.)
All of which leaves the question of whether the fact that Amodei has such power to influence the direction of AI gives his warnings more credibility than those on the outside, like the Bulletin scientists — or less.
The Bulletin’s model has integrity to spare, but increasingly limited relevance, especially to AI. The atomic scientists lost control of nuclear weapons the moment they worked. Amodei hasn’t lost control of AI — his company’s release decisions still matter enormously. That makes the Bulletin’s outsider position less applicable. You can’t effectively warn about AI risks from a position of pure independence because the people with the best technical insight are largely inside the companies building it.
But Amodei’s model has its own problem: The conflict of interest is structural and inescapable.
Every warning he issues comes packaged with “but we should definitely keep building.” His essay explicitly argues that stopping or substantially slowing AI development is “fundamentally untenable” — that if Anthropic doesn’t build powerful AI, someone worse will. That may be true. It may even be the best argument for why safety-conscious companies should stay in the race. But it’s also, conveniently, the argument that lets him keep doing what he’s doing, with all the immense benefits that may bring.
This is the trap Amodei himself describes: “There is so much money to be made with AI — literally trillions of dollars per year — that even the simplest measures are finding it difficult to overcome the political economy inherent in AI.”
The Doomsday Clock was designed for a world where scientists could step outside the institutions that created existential threats and speak with independent authority. We may no longer live in that world. The question is what we build to replace it — and how much time we have left to do so.
Tech
Death by Tariffs: Volvo Discontinuing Entry-Level EX30 EV in the US
Volvo is pulling the plug on its smallest and least expensive EV this week. The automaker is winding down US-bound production and imports of the EX30 and EX30 Cross Country over the coming weeks, with the last examples wrapping up the 2026 model year at the end of this summer due to financial and market considerations. In other words, tariffs are up, and sales are down.
It’s a tough time to be selling EVs in the US right now. Volvo joins a growing list of automakers reassessing or outright canceling their electric car ambitions in the US due to market and political conditions over the last year. Earlier this year, Chevrolet announced that it would be ending production of its highly anticipated Chevrolet Bolt revival after just one model year. Last week, Honda announced the cancellation of its upcoming 0-Series of US-built electric cars before even reaching production, and that’s just the tip of the iceberg.
The EX30’s arrival and short stay in the US has been fraught with challenges. The small SUV was first announced in 2023, billed as an affordable electric option starting below $35,000. I was impressed by the EX30 during my first drive review, calling it my most anticipated affordable EV of 2024. Volvo initially planned to keep manufacturing costs down by building the EX30 in China, but Biden administration tariffs forced the automaker to move production of US-bound examples to its plant in Ghent, Belgium.
By the time the EX30 arrived in the US, it was thousands of dollars more than initially predicted.
Preproduction software issues further delayed the EV’s limited arrival to late 2024 with sales ramping up in early 2025 — just in time to get hit by the Trump administration’s unpredictable new tariffs. Today, the EX30 starts at $40,344 in the US, then climbs to just shy of $50,000 for the dual-motor model with the best tech — a tough sell for a subcompact SUV even at the best of times. In 2025, Volvo reported only 5,409 EX30s sold in the US and a 60.5% decrease in overall electrified vehicle sales versus 2024.
When reached for comment, a representative from Volvo confirmed that, “Volvo Car USA has decided to end sales of the EX30 and EX30 Cross Country in the US market after the 2026 model year.”
The automaker tells me that the EX30 will remain available in global markets and will continue to be imported and sold in Mexico and Canada. Recently, Volvo’s flagship EX90 — which is built at Volvo’s South Carolina factory — ceased 2026 model year exports to Canada, a victim of retaliatory tariffs aimed at the US. When asked how this shakeup will affect its roadmap, a Volvo representative told CNET that the company’s goal of a fully electrified global lineup by 2030 remains unchanged.
“Volvo Cars’ commitment to electrification and our customers remains unchanged,” the representative told CNET, “and we look forward to continuing to bring exciting new electrified options to our customers in the US, including the all-new EX60 and upgraded EX90.”
In January, at the debut of the upcoming Volvo EX60, I argued that the new mid-range model would be a make-or-break moment for the brand’s US ambitions after the tumultuous rollouts of its first two dedicated EV models. With the EX30 soon gone, and in an increasingly unforgiving market where only the strongest models survive, Volvo finds itself in an even more perilous position.
Tech
The most radical act in an age of outrage is to play
We are not divided by accident; we are distracted on purpose. The antidote to that manipulation is to reconnect with what makes us human, often through something as simple as play. Spend five minutes scrolling, and you can feel the machinery of social media at work: the pulse of outrage, the invitation to pick a side, the subtle suggestion that if you are not angry, you are not paying attention. Families fracture over headlines, friendships dissolve over algorithms, and disagreement begins to feel like disownment. All the while, the crises never seem to stop.
This emotional volatility is conditioned. News cycles are engineered to provoke because fear keeps us engaged, and engagement keeps us predictable. According to an analysis from the Pew Research Center, nearly 60% of Americans express low confidence in journalists to act in the public’s best interests. Yet even with that distrust, most of us remain immersed in the stream. We doubt it, but we cannot seem to look away because the system is designed to make disengagement feel unsafe.
A society kept in a perpetual state of alarm is easier to manage than one that thinks for itself. Benjamin Franklin warned that those who would give up essential liberty to purchase temporary safety deserve neither. His words echo loudly today. Fear narrows our thinking. It contracts our field of vision. When we are anxious, we trade autonomy for the illusion of protection.
Technology intensifies this pattern. Artificial intelligence drafts our emails, GPS replaces our internal maps, and our phones remember every number we no longer bother to memorize. The issue is not the tools themselves. After all, tools can be magnificent. The danger lies in dependence. When a tool meant to sharpen our minds begins to substitute for them, something subtle shifts. I notice it in myself. I can still recite phone numbers from childhood, numbers I dialed repeatedly. Today, if I lose my phone, I lose access not just to contacts but to competence.
That small panic reveals a deeper truth: unused faculties atrophy. And when faculties atrophy, systems built on compliance thrive. They reward predictability. Anger and fear make us predictable. Creativity, curiosity, and divergent thinking make us harder to steer. Emotional manipulation becomes simpler when imagination shrinks.
So where does sovereignty begin? Not in Washington or Silicon Valley. It begins with self-regulation. I cannot control the global news cycle, but I can control my nervous system. I can decide whether I will outsource my emotional state to the latest headline or cultivate internal stability. For me, that cultivation happens through play.
Play is autotelic, an activity whose reward is the activity itself. When I juggle, the act is the payoff. There is no external validation required. The rhythm draws my attention into the present. What does that mean? My breathing steadies, my body settles, and my mind clears. That shift is neurological, and research supports this.
One study examined the neurobiology of stress resilience and found that positive affect, novelty, and exploratory behaviors, the core elements of play, strengthen neural circuits that protect against chronic stress. In other words, play expands our adaptive capacity. Fear contracts it. Through that lens, play becomes a neurological rebellion against conditioning that thrives on anxiety.
Children demonstrate this instinctively. When they meet on a playground, they don’t need shared beliefs or background; they simply ask, “Do you want to play?” Ideology is irrelevant at that moment. The invitation to move together dissolves barriers that words often inflame. I have watched this happen in real time, tension softening in a park the moment a small footbag (also known as Hacky Sack™) circle forms. Strangers who arrived as bystanders became collaborators within seconds, drawn in by the shared rhythm of the activity. Laughter shifts the emotional frequency of the space, and as the atmosphere lifts, connection becomes easier. Elevated states foster openness, and that openness makes division harder to sustain.
Yet many children today are being funneled into narrow reward loops, where stimulation is constant but growth is limited. Screens deliver rapid bursts of dopamine that feel exciting in the moment but limit genuine exploration. At the same time, helicopter oversight reduces opportunities for real-world challenges, and when those challenges diminish, resilience inevitably weakens. A brain conditioned to expect only curated digital rewards can struggle with ambiguity, frustration, and disagreement, skills that develop only through challenging lived experiences.
We must reclaim our agency. Real-world play, such as tossing a ball, learning to juggle, or building something with friends, reintroduces novelty, problem-solving, and collaboration. It broadens capacity in ways no algorithm can replicate. In the process, it trains adaptability, the very trait children need to navigate a world that will never be perfectly curated for them.
Some may argue that play is trivial in the face of serious global problems. I understand the impulse. Wars, economic uncertainty, and technological disruption are not games. However, a population locked in chronic stress does not solve complex problems well. Chronic fear impairs executive function and creativity. If we want wiser civic engagement, we need citizens who can regulate their own nervous systems.
Play does that. It builds resilience, flexibility, and social connection. It restores a sense of agency because the reward is internal. You are not waiting for a notification to feel validated. You are generating joy through participation. Every time I juggle in public, I signal possibility. Adulthood does not require the abandonment of joy. A playful mind is less susceptible to manipulation because it is not starved for stimulation. It does not need outrage to feel alive.
Once you understand that play is foundational, the next step becomes surprisingly simple: weave small acts of playfulness back into daily life. Laugh daily, move your body, and learn a skill that engages both hands and mind. Turn off the noise long enough to hear your own thoughts. Invite someone to play, even if it feels awkward at first. Protect your autonomy the way previous generations protected their liberties.
We may not control the macro forces swirling around us, but we can control our state. In a culture addicted to outrage, choosing play is an act of defiance. It is how we reclaim clarity, how we reconnect, and how we remember that beneath the noise, we are still human. In today’s climate, the most radical thing you can do is play.
About the Author
Alexander “Zander” Phelps, also known as zPlayCoach, is a play advocate, speaker, and the founder of HACKiDO, the Path of Play. For more than three decades, he has explored movement-based play as a pathway to cognitive vitality, emotional resilience, and human connection. Drawing from lived experience, neuroscience research, and work with schools, rehabilitation programs, corporations, and community groups, Zander teaches accessible practices such as juggling and laughter exercises to help individuals enter the PlayFlowState to regulate stress and rediscover intrinsic joy. Through workshops, public speaking, and community engagement, he continues to champion play as a lifelong practice that strengthens both brain and community.
Tech
SEC eyes shift to twice-yearly earnings reports
The SEC is working on a proposal to allow public companies to release earnings reports twice a year instead of quarterly, per the WSJ.
Chatter about making the 50-plus-year-old quarterly requirement optional has picked up steam in the past year, as companies lament the cost and burden of preparing for quarterly earnings. The requirement is also thought to be one reason why some companies choose to stay private longer.
Those in favor of change hope that a semiannual requirement will encourage more companies to go public by making it easier to maintain public company status. SEC Chairman Paul Atkins and President Trump have both voiced support for the idea. The Journal reports that the SEC has already begun discussions with exchanges about potential next steps, though any change is still a long way away.
If the SEC releases its proposal — which could come within the next few weeks — it will be subject to a public comment period and then a vote. There is precedent for this rule, notes the Journal. Both the European Union and the U.K. eliminated mandatory quarterly reporting roughly a decade ago in favor of semiannual disclosures, though many companies in both markets still report quarterly by choice.
Tech
The next wave of AI may focus on human connection
In the age of ubiquitous artificial intelligence, the technology can seem largely earmarked for optimizing the mechanics of work. Buzzwords like speed, automation, efficiency, and productivity dominate the conversations shaping the digital era. AI’s capability has expanded into streamlining operations at unprecedented scale, yet even technology of this magnitude does not address a fundamental human need: connection.
“At a time when loneliness is consistently increasing, there is very little AI can do today to strengthen human connection,” says Freddy del Barrio, founder of Companion AI, who believes the next chapter of artificial intelligence holds the potential to address that gap.
Freddy, who is currently building systems designed to support emotional well-being and long-term human relationships, believes the next wave of AI isn’t just smarter; it’s more human. That insight guides his work at Companion AI, which he frames as an attempt to restore a fundamental dimension to digital innovation.
“My story with Companion AI is about putting heart back into technology,” Freddy says, highlighting the importance of human emotion and empathy in a time where people experiencing loneliness often turn to AI models for emotional support.
Loneliness continues to be recognized as a public health crisis across the US, with studies linking social isolation to increased risks of depression, anxiety, cognitive decline, and even cardiovascular disease. Freddy believes that seniors living alone, military veterans transitioning back into civilian life, and younger adults navigating digital-first social environments all face rising levels of disconnection.
As he argues that technology has yet to build an emotional infrastructure to support those needs, Companion AI attempts to fill that void through systems designed around empathy, continuity, and memory.
The platform uses advanced AI models but layers them with proprietary infrastructure that tracks emotional patterns and remembers conversations across time. That architecture allows interactions to develop into ongoing relationships rather than isolated exchanges.
“We designed it around memory and long-term understanding,” Freddy says. “It remembers conversations, understands emotional patterns over time, and helps people feel seen rather than processed by software.” That distinction, he believes, shapes the user experience.
In practical terms, the platform can check in with users, recall previous discussions, and respond with awareness of personal history. The aim is to create a sense of continuity that mirrors human interaction. Within that vision, artificial intelligence can evolve into a support system for mental and emotional health.
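The pattern described here, continuity layered on top of a stateless model, can be sketched in a few lines. This is a purely hypothetical illustration of the general technique (the class, function names, and logic below are invented for this sketch, not Companion AI’s actual system): each exchange is stored, and relevant history is replayed into every new prompt so the model appears to remember.

```python
# Hypothetical sketch of a "memory layer over an LLM": the model itself is
# stateless, so continuity comes from storing each exchange and replaying
# recent history with every new prompt. Invented for illustration only.

class MemoryLayer:
    def __init__(self):
        self.history = []  # a real system would persist this across sessions

    def remember(self, user_msg, reply):
        self.history.append((user_msg, reply))

    def build_prompt(self, user_msg, window=3):
        """Prepend the last few exchanges so the model can 'recall' them."""
        context = "\n".join(f"User: {u}\nAssistant: {a}"
                            for u, a in self.history[-window:])
        return f"{context}\nUser: {user_msg}\nAssistant:"

def fake_llm(prompt):
    # Stand-in for a real model call: it can only "remember" what the
    # memory layer replays into the prompt.
    if "garden" in prompt:
        return "Earlier you mentioned your garden."
    return "Tell me more."

mem = MemoryLayer()
mem.remember("I started a garden today.", "That sounds lovely.")

# A later session: the new message never mentions the garden, but the
# replayed history does, so the reply shows continuity.
print(fake_llm(mem.build_prompt("How was my week?")))
```

The design point is that “memory” lives outside the model, which is why a company controlling that layer controls the sense of continuity users experience.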
Freddy notes that early pilots are already exploring how that infrastructure can operate in real-world environments. Companion AI recently announced a free pilot program for US veterans, a community that frequently experiences higher rates of social isolation and mental health challenges after service. Freddy highlights that the company is also preparing a deployment with a large US-based organization, which, to him, offers a signal that interest in emotionally aware AI systems is extending beyond experimental use.
“Building for sensitive human interactions requires careful technical decisions,” he says. Companion AI integrates large language models while maintaining its own technology stack and data infrastructure to maintain tighter control over privacy, security, and future products. Freddy explains, “That ensures user data stays secure with us and gives us the flexibility to plug in new features as the technology evolves.”
Initial deployments of the platform focus on senior living communities and assisted living facilities, where loneliness may be particularly acute. The company sees those environments as the starting point, with long-term plans aimed at making emotionally intelligent AI accessible across demographics and income levels.
Expanding on that mission, Companion AI is also exploring pathways for integration with public healthcare frameworks such as Medicare and Medicaid in the United States to further democratize the technology. “We’re a people-first company using AI,” he says. “Human well-being comes first, and the technology supports that mission.”
Artificial intelligence has already transformed productivity, reshaped industries, and accelerated innovation across sectors. The next stage, in Freddy’s view, may prove just as transformative in a different dimension. Systems capable of remembering, responding with empathy, and supporting emotional well-being could redefine how humans experience technology in everyday life.
As he continues strengthening Companion AI’s framework, Freddy del Barrio believes that evolution is already underway.
Tech
Apple Watch Ultra 2 Holds Its Ground as the Top High-End Value Smartwatch in 2026

Apple released the Apple Watch Ultra 2 in September 2023, and despite fresh options entering the market since then, it’s still a popular value-oriented choice, priced at $499 (down from $799). Battery life is one of its standout features: in day-to-day use, you can get a solid 36 hours before needing to recharge, enough for a full day of fitness tracking, notifications, and sleep monitoring on a single charge.
The case is made of durable titanium and features a sapphire-crystal-covered, Always-On Retina LTPO2 OLED screen, so it will endure the bumps, scrapes, and scratches that come with everyday wear. It’s also water resistant to 100 meters, and with 3,000 nits of peak brightness, the display is legible even in direct sunlight. Apple also included some purpose-built controls, such as an Action button that lets you rapidly trigger custom shortcuts, useful whether you’re on a run or diving, and built-in GPS provides accurate route mapping quickly.
For the majority of consumers, the Ultra 2 still includes everything they need for workouts, daily tracking, and easy access to messages or directions. In regular use, the difference between the Ultra 2 and Ultra 3 is negligible, and the savings from skipping the latest model make it an easy decision. Apple continues to release software updates for older models alongside new ones, so you can stay current with the most recent health insights and training modes. Consistency is one of its most notable traits: owners who have worn the Ultra 2 for years report few slowdowns or missing features. It feels like a reliable companion rather than something you have to replace all the time.
Tech
Former CrowdStrike and Bloomberg engineers raise $2M for Seattle fintech startup OpenCFO

OpenCFO, a Seattle-based startup tackling the ‘fragmented and manual’ nature of modern finance, has raised $2 million to automate financial functions for mid-sized companies.
“CFOs are being asked to operate with greater speed and accuracy than ever, yet the underlying finance stack remains fragmented and manual,” said Prudhvi Rao Shedimbi, co-founder and CEO of OpenCFO, in a statement.
To address that challenge, OpenCFO unifies accounts payable, accounts receivable and treasury into a single system — connecting directly to banking, payment infrastructure and enterprise resource planning (ERP) platforms. Human oversight is built in through approvals and audit trails.
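The oversight pattern described here, automated proposals gated by human approval with every action logged, is a common agentic-finance design. The sketch below is a hypothetical illustration of that general pattern (all names and logic are invented, not OpenCFO’s actual API):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch of "human oversight via approvals and audit trails":
# an agent may draft a payment, but nothing executes without approval, and
# every step lands in an append-only audit trail. Invented for illustration.

@dataclass
class Payment:
    payee: str
    amount_cents: int
    approved: bool = False

@dataclass
class FinanceSystem:
    audit_trail: list = field(default_factory=list)
    queue: list = field(default_factory=list)

    def _log(self, event, detail):
        self.audit_trail.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "event": event,
            "detail": detail,
        })

    def propose(self, payee, amount_cents):
        """An AI agent drafts a payment; it only enters the review queue."""
        payment = Payment(payee, amount_cents)
        self.queue.append(payment)
        self._log("proposed", f"{payee} {amount_cents}")
        return payment

    def approve_and_pay(self, payment):
        """The human approval gate must fire before funds move."""
        payment.approved = True
        self._log("approved", payment.payee)
        self._log("paid", f"{payment.payee} {payment.amount_cents}")

fin = FinanceSystem()
p = fin.propose("Acme Supplies", 125_000)
fin.approve_and_pay(p)
print([e["event"] for e in fin.audit_trail])  # ['proposed', 'approved', 'paid']
```

The append-only trail is the key design choice: because the log records proposals as well as approvals, auditors can reconstruct not just what was paid but what the automation attempted.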
“By combining agentic AI with modern treasury infrastructure, we’re building a unified platform that automates financial operations while giving finance teams enhanced visibility and control,” said Sankalp Singayapally, co-founder and chief operating officer.
The startup launched in December, now has a team of 15, and is hiring for roles in engineering, customer success and sales.
Before co-founding OpenCFO, Shedimbi served as an engineering manager at StarTree, an analytics tech company, and previously held engineering roles at Confluent and CrowdStrike.
Singayapally was an engagement manager at Keystone AI prior to OpenCFO, and served as an intern at Endiya Partners while earning an MBA at Harvard Business School. He worked as an R&D engineer at Bloomberg earlier in his career.
The funding round was led by Endiya and included participation from angel investors in the U.S. and India.
Startups are hustling to apply agentic AI to the back office to autonomously perform complicated financial operations. Investors have been backing platforms serving big enterprises, while a newer crop of startups like OpenCFO is bringing similar automation to mid-sized companies.
StreamOS, Cartwheel and Fazeshift are the startup’s fintech competitors, according to PitchBook, while the analytics site Bayelsa Watch pointed to Fyorin and Kolleno as comparable early-stage rivals.
Tech
Nvidia BlueField-4 STX adds a context memory layer to storage to close the agentic AI throughput gap
When an AI agent loses context mid-task because traditional storage can’t keep pace with inference, it is not a model problem — it is a storage problem. At GTC 2026, Nvidia announced BlueField-4 STX, a modular reference architecture that inserts a dedicated context memory layer between GPUs and traditional storage, claiming 5x the token throughput, 4x the energy efficiency and 2x the data ingestion speed of conventional CPU-based storage.
The bottleneck STX targets is key-value cache data. KV cache is the stored record of what a model has already processed — the intermediate calculations an LLM saves so it does not have to recompute attention across the entire context on every inference step. It is what allows an agent to maintain coherent working memory across sessions, tool calls and reasoning steps. As context windows grow and agents take more steps, that cache grows with them. When it has to traverse a traditional storage path to get back to the GPU, inference slows and GPU utilization drops.
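The mechanics are simple to sketch. In toy form, a KV cache just appends each new token’s key and value vectors and reuses everything already stored, so each step computes attention only for the newest token instead of reprocessing the whole context. This is a pure-Python illustration of the general technique, not Nvidia’s implementation:

```python
import math

# Toy single-head attention with a KV cache. Illustrative only; real
# inference engines keep these tensors in GPU memory or, in designs like
# Nvidia's CMX, in a dedicated context memory tier.

def attend(query, keys, values):
    """Softmax attention of one query over all cached key/value pairs."""
    dim = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(dim)
              for key in keys]
    m = max(scores)
    weights = [math.exp(s - m) for s in scores]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Weighted sum of the cached value vectors.
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

class KVCache:
    """Accumulates key/value vectors so each new token only appends,
    rather than recomputing attention inputs for the entire history."""
    def __init__(self):
        self.keys, self.values = [], []

    def step(self, query, key, value):
        self.keys.append(key)      # one append per token...
        self.values.append(value)  # ...instead of reprocessing the context
        return attend(query, self.keys, self.values)

cache = KVCache()
out1 = cache.step([1.0, 0.0], [1.0, 0.0], [2.0, 0.0])
out2 = cache.step([1.0, 0.0], [0.0, 1.0], [0.0, 2.0])
print(len(cache.keys))  # the cache grows with context: 2 entries after 2 steps
```

The growth on the last line is the whole story: the cache scales with context length and agent steps, which is why moving it off the fast path to the GPU becomes the bottleneck STX targets.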
STX is not a product Nvidia sells directly. It is a reference architecture the company is distributing to its storage partner ecosystem so vendors can build AI-native infrastructure around it.
STX puts a context memory layer between GPU and disk
The architecture is built around a new storage-optimized BlueField-4 processor that combines Nvidia’s Vera CPU with the ConnectX-9 SuperNIC. It runs on Spectrum-X Ethernet networking and is programmable through Nvidia’s DOCA software platform.
The first rack-scale implementation is the Nvidia CMX context memory storage platform. CMX extends GPU memory with a high-performance context layer designed specifically for storing and retrieving KV cache data generated by large language models during inference. Keeping that cache accessible without forcing a round trip through general-purpose storage is what CMX is designed to do.
“Traditional data centers provide high-capacity, general-purpose storage, but generally lack the responsiveness required for interaction with AI agents that need to work across many steps, tools and different sessions,” Ian Buck, Nvidia’s vice president of hyperscale and high-performance computing, said in a briefing with press and analysts.
In response to a question from VentureBeat, Buck confirmed that STX also ships with a software reference platform alongside the hardware architecture. Nvidia is expanding DOCA to include a new component referred to in the briefing as DOCA Memo.
“Our storage providers can leverage the programmability of the BlueField-4 processor to optimize storage for the agentic AI factory,” Buck said. “In addition to having a reference rack architecture, we’re also providing a reference software platform for them to deliver those innovations and optimizations for their customers.”
Storage partners building on STX get both a hardware reference design and a software reference platform — a programmable foundation for context-optimized storage.
Nvidia’s partner list spans storage incumbents and AI-native cloud providers
Storage providers co-designing STX-based infrastructure include Cloudian, DDN, Dell Technologies, Everpure, Hitachi Vantara, HPE, IBM, MinIO, NetApp, Nutanix, VAST Data and WEKA. Manufacturing partners building STX-based systems include AIC, Supermicro and Quanta Cloud Technology.
On the cloud and AI side, CoreWeave, Crusoe, IREN, Lambda, Mistral AI, Nebius, Oracle Cloud Infrastructure and Vultr have all committed to STX for context memory storage.
That combination of enterprise storage incumbents and AI-native cloud providers is the signal worth watching. Nvidia is not positioning STX as a specialty product for hyperscalers. It is positioning it as the reference standard for anyone building storage infrastructure that has to serve agentic AI workloads — which, within the next two to three years, is likely to include most enterprise AI deployments running multi-step inference at scale.
STX-based platforms will be available from partners in the second half of 2026.
IBM shows what the data layer problem looks like in production
IBM sits on both sides of the STX announcement. It is listed as a storage provider co-designing STX-based infrastructure, and Nvidia separately confirmed that it has selected IBM Storage Scale System 6000 — certified and validated on Nvidia DGX platforms — as the high-performance storage foundation for its own GPU-native analytics infrastructure.
IBM also announced a broader expanded collaboration with Nvidia at GTC, including GPU-accelerated integration between IBM’s watsonx.data Presto SQL engine and Nvidia’s cuDF library. A production proof of concept with Nestlé put numbers on what that acceleration looks like: a data refresh cycle across the company’s Order-to-Cash data mart, covering 186 countries and 44 tables, dropped from 15 minutes to three minutes. IBM reported 83% cost savings and a 30x price-performance improvement.
The Nestlé result is a structured analytics workload. It does not directly demonstrate agentic inference performance. But it makes IBM and Nvidia’s shared argument concrete: the data layer is where enterprise AI performance is currently constrained, and GPU-accelerating it produces material results in production.
Why the storage layer is becoming a first-class infrastructure decision
STX is a signal that the storage layer is becoming a first-class concern in enterprise AI infrastructure planning, not an afterthought to GPU procurement.
General-purpose NAS and object storage were not designed to serve KV cache data at inference latency requirements. STX-based systems from partners including Dell, HPE, NetApp and VAST Data are what Nvidia is putting forward as the practical alternative, with the DOCA software platform providing the programmability layer to tune storage behavior for specific agentic workloads.
The performance claims — 5x token throughput, 4x energy efficiency, 2x data ingestion — are measured against traditional CPU-based storage architectures. Nvidia has not specified the exact baseline configuration for those comparisons. Before those numbers drive infrastructure decisions, the baseline is worth pinning down.
Platforms are expected from partners in the second half of 2026. Given that most major storage vendors are already co-designing on STX, enterprises evaluating storage refreshes for AI infrastructure in the next 12 months should expect STX-based options to be available from their existing vendor relationships.
Tech
Tech companies are blaming layoffs on AI, but what’s really going on?
Uri Gal of the University of Sydney discusses the factors impacting the working landscape and tech-based jobs.
In the past few months, a wave of tech corporations has announced significant staff cuts and attributed them to efficiency gains driven by artificial intelligence (AI).
Companies such as Atlassian, Block and Amazon have announced they would lay off thousands of employees due to increased reliance on AI.
The narrative these companies offer is consistent: AI is making human labour replaceable, and responsible management demands adjustment.
The evidence, however, tells a more nuanced story.
The automation story is partly true
Genuine disruption is visible in specific corners of the labour market, though the scale of that disruption is commonly overstated. Research from Anthropic published earlier this month shows that although many work tasks are susceptible to automation, the vast majority are still performed primarily by humans rather than AI tools.
Moreover, some occupations are more exposed to displacement than others: computer programmers sit at the top of the list, followed by customer service representatives and data entry workers. Yet even within the most exposed occupations, AI use is still limited.
The aggregate economic data reflects this reality. A 2025 Goldman Sachs report estimated that if AI were used across the economy for all the things it could currently do, roughly 2.5pc of US employment would be at risk of job loss.
That’s not a trivial number. However, the report notes that workers in AI-exposed occupations are currently no more likely to lose their jobs, face reduced hours or earn lower wages than anyone else.
The report does note early signs of strain in specific industries. Goldman Sachs identifies sectors where employment growth has slowed that align with AI-related efficiency gains. Examples include marketing consulting, graphic design, office administration and call centres.
In the tech sector, US workers in their 20s in AI-exposed occupations saw unemployment rise by almost 3pc in the first half of 2025. Anthropic’s research also found that job-finding rates (the chance of an unemployed person finding a job in a one-month period) for workers aged 22 to 25 entering AI-exposed occupations have fallen by around 14pc since the launch of ChatGPT in 2022. This is a tentative but telling signal about where the pressure is being felt first.
These are meaningful signals, but they are sector-specific and concentrated – not the evidence of sweeping displacement that corporate announcements often imply. That gap between the evidence and the rhetoric raises an obvious question: what else might be driving these decisions?
What is the motive?
The timing and framing of the layoffs attributed to AI warrant closer examination. Corporate restructuring, over-hiring during the post-pandemic boom as demand for online services soared, and pressure from investors to demonstrate improved profit margins are all forces operating at the same time as genuine advances in AI.
While these are not mutually exclusive explanations, they are rarely acknowledged alongside one another in corporate communications.
There is a powerful financial incentive for companies to be seen to be embracing AI aggressively. Since the launch of ChatGPT, AI-related stocks have accounted for about 75pc of S&P 500 returns.
A workforce reduction framed around AI adoption sends a signal to investors that a straightforward cost-cutting announcement does not. A company making AI-related innovations looks a lot better than one sacking staff due to declining revenues or poor strategic decisions.
It is also worth distinguishing between two kinds of workforce reduction. In the first, AI genuinely increases productivity to the point where fewer workers are needed to produce the same output. In the second, staff reductions are not a consequence of AI, but a way to fund it.
Meta illustrates this distinction. The social media giant is reportedly planning to lay off as much as 20pc of its workforce, while simultaneously committing $600bn to build data centres and recruit top AI researchers.
In this case, the workers being let go are not being replaced by AI today; they are subsidising the AI bet their employer is making on the future.
The more plausible future
The big picture is likely one of transformation rather than elimination. According to a recent PwC report, employment is still growing in most industries exposed to AI, although growth tends to be slower than in less exposed sectors.
At the same time, wages in AI-exposed industries are rising roughly twice as fast as in those least touched by the technology. Workers with AI skills command an average wage premium of about 56pc across the industries analysed.
Together, the data points toward a flattening of the traditional workplace pyramid rather than mass displacement. Firms require fewer junior employees for routine analytical and administrative work, while experienced professionals who deploy AI tools effectively become more productive and command greater value.
AI is a consequential technology and will have a significant impact in the long term. What is in doubt is whether the dramatic, AI-attributed workforce reductions announced by individual companies accurately reflect that trajectory, or whether they conflate genuine technological change with decisions that would have been made regardless.
Making this distinction is not merely an academic exercise. It shapes how policymakers, educators and workers themselves understand the nature of the disruption they are navigating.
By Uri Gal
Uri Gal is a professor of business information systems at the University of Sydney Business School. His research focuses on the organisational and ethical aspects of digital technologies. He is particularly interested in the relationships between people and technology, and in the changes in the nature of work associated with the introduction of algorithmic technologies.
Don’t miss out on the knowledge you need to succeed. Sign up for the Daily Brief, Silicon Republic’s digest of need-to-know sci-tech news.
Tech
Beyond the Classroom: How School Districts Are Building Real-World Career Pathways
When a water-treatment plant outside Denver discovered an algae problem in its pipes, it did not call an engineering firm. It called the students.
The aquatic robotics team at the Innovation Center at St. Vrain Valley Schools in Longmont, Colorado, sent underwater robots into the facility, collected data, identified the algae species and helped eradicate it. The plant now contracts with the student team for quarterly checkups. Neighboring towns have started calling, too.
This is not a simulation or a classroom exercise conjured up to look like real work. It is real work, and it reflects a broader shift underway in districts. Increasingly, schools are building career learning pathways that connect students directly with professional challenges, industry mentors and, in some cases, a paycheck.
The Case for Real Work
The urgency behind these efforts is hard to ignore. A 2023 review from the American Institutes for Research, drawing on two decades of studies, found that career and technical education participation has statistically significant positive impacts on academic achievement, high school completion, employability skills and college readiness.
The question districts are now wrestling with is not whether to offer career pathways, but whether those pathways lead anywhere real.
Policy leaders are paying attention. The Education Commission of the States has identified building aligned career pathways and removing barriers to economic opportunity as one of its top priorities through 2027.
At St. Vrain, Assistant Superintendent of Innovation Joe McBreen has spent years trying to answer that question through a program known as project teams.
After school each day, some 264 students log in at the district’s Innovation Center and begin work as paid district employees, billing hours against accounts for actual clients. Students can join a drone show team, a cybersecurity unit, an AI development group or a dozen other teams, rotating among them as their interests evolve.
“It’s low threat, high reward,” says McBreen. “Students get paid, grow their network, develop soft skills and test drive careers. And if they get into a team and realize it’s not for them, there’s real value in that, too.”
The model relies heavily on industry mentors who bring in real work rather than invented classroom projects. Damon Brown, a senior cybersecurity adviser for the U.S. Department of State focused on Ecuador, mentored seven St. Vrain students on a complex assignment.
He asked them to design the architecture for a cyber intelligence fusion center using open-source tools — work that could have cost hundreds of thousands of dollars if contracted from a professional firm.
“The students knocked it out of the park,” says Brown.
They built the system architecture, wrote user manuals, recommended equipment and conducted a threat analysis of countries surrounding Ecuador. Brown was so impressed he is now hiring six St. Vrain interns.
“This experience binds people together,” he says.
The program also has a way of growing in unexpected directions. After one student’s grandparent was victimized by a cybercrime, the cybersecurity team created an awareness curriculum for senior citizens. They taught five classes to 24 senior citizens in the first year; the second session was standing room only. Senior facilities now pay the students to come in and teach.
Meanwhile, the drone team flies commercial shows for companies across the country on Friday afternoons, billing clients at rates few drone pilots in the country can match. One former member is now studying aerospace engineering and using money from drone flying to help pay for college.
Taking the Model Out West
St. Vrain’s work has drawn attention from educators around the country, some of whom are adapting pieces of the model to fit their own communities.
Kris Hagel, chief information officer of Peninsula School District in Washington state, visited the Innovation Center and came away convinced he could build something similar.
Two years ago, Peninsula launched a paid drone internship program, starting with seven students and gradually expanding. Students work alongside industry partners while learning how to navigate FAA regulations, program autonomous flight paths and repair drones.
“When you’re willing to look at what’s cutting edge and think innovatively without being constrained by traditional systems, you can create opportunities for kids that transcend what we think of as traditional education,” says Hagel. “This program has become so much more than I thought was possible.”
The district partnered with Firefly Drone Systems, one of the few American drone manufacturers, to train students and help them operate drone shows.
The program also includes multiple roles beyond piloting, including marketing, animation design and equipment maintenance. Hagel envisions a future where students studying business management hire other students to operate the program.
A skilled drone operator who leaves high school with the capital to purchase equipment can enter a six-figure career almost immediately, says Hagel.
Finding the Problem First
Not every district is building toward robotics contracts or drone shows. For Michele Davis, CTE department chair at Metropolitan School District of Steuben County in Indiana, the real-world pathway is entrepreneurship.
Working with the StartED Up Foundation, Davis guides students through a three-year sequence: identifying an actual problem, developing a solution, building out the business model and presenting it to real audiences.
Students take “opportunity walks” around the school, documenting everyday frustrations and brainstorming solutions. They learn how to market their ideas professionally by practicing elevator pitches, presenting case studies to various audiences and explaining their ideas to elementary school students.
“Opportunities are everywhere,” says Davis.
The ideas that emerge can be surprisingly practical. One student designed a reversible outfit to solve a quick-change problem in theater productions. Another class developed a mobile trailer concept that could help unhoused people access hygiene services.
Beyond the business concepts themselves, Davis says the program focuses heavily on communication skills and confidence. “We get students comfortable doing things that are normally uncomfortable,” she says.
A Credential, Not Just a Class
At Suffern Central School District in Rockland County, New York, Superintendent P. Erik Gundersen has taken yet another approach.
Through a partnership with the League of Innovative Schools and curriculum provider Paradigm, the district launched a three-year cybersecurity certification pathway embedded directly into the high school. About 60 students are currently enrolled.
The program was designed to reach students who might not otherwise see themselves in a cybersecurity career. The district actively recruited students from immigrant communities and others who are new to the U.S.
Students work in a “sandbox” environment that simulates real cyber incidents, allowing them to practice identifying threats and responding to attacks.
“The means to send a kid to college is not as great as it was, and a lot of what we’re reading questions the importance of a college education,” says Gundersen.
Those economic realities, he says, are pushing districts to rethink how they prepare students for the workforce.
Career credentials embedded within traditional high schools can open doors for students who may not otherwise have clear pathways into high-skill industries.
Education That Looks Like Life
Across these programs, the details vary widely, but the philosophy is the same: Authentic experience is not a supplement to education. It is education.
As McBreen says, “I encourage districts to expand their vision. Anyone can do this. Start small.”
Tech
IEEE Young Professionals Tackle Skills Gap in Tech
America’s Talent Strategy: Building the Workforce for the Golden Age, a report published last year by the U.S. Departments of Commerce, Education, and Labor, identified a significant engineering and skills gap. The 27-page report concluded that the shortage of talent in essential areas—including advanced manufacturing, artificial intelligence, cloud computing, and cybersecurity—poses significant risks to U.S. economic and technological leadership.
To help attract talent in those fields, the Labor Department last month introduced incentives for apprenticeships, including a US $145 million “pay for performance” grant program. The funding aims to develop registered apprenticeships in high-demand fields including artificial intelligence and information technology.
Members of IEEE Young Professionals responded to the urgent national need for targeted workforce development, led by Alok Tibrewala, an IEEE senior member and cochair of the IEEE North Jersey Section’s Young Professionals group.
“As a software engineer, this impending shortage concerns me because I believe that the U.S. AI and cybersecurity skills gap would show up first in the early-career pipeline,” Tibrewala says. “Students will be entering the U.S. workforce without enough hands-on experience building secure AI-enabled enterprise and cloud systems, and this gap will persist without practical, mentor-led training before graduation.”
Tibrewala led a strategic planning session with representatives from the New Jersey Institute of Technology, IEEE Member and Geographic Activities, and IEEE Young Professionals to discuss holding an event that would provide practical, industry-relevant training by experts and IEEE leaders.
“I was able to establish a partnership with NJIT, recruit speakers, design the event’s agenda, and promote the event to ensure it was aligned with the strategy outlined in the workforce report,” he says. “This effort aligns with broader U.S. workforce development priorities focused on industry-driven skills training in critical technology areas.”
The IEEE Buildathon event was held on 1 November at NJIT’s Newark campus. More than 30 students and early-career engineers heard from 11 speakers. Through interactive workshops, live demonstrations, and networking opportunities, they left with practical, employer-aligned skills and clearer career pathways for AI-era skills-building.
Tibrewala chaired the event and also serves as chair of the IEEE Buildathon program.
Session takeaways
Region 1 Director Bala S. Prasanna, a life senior member, gave the keynote address. He emphasized the need for universities, industry practitioners, and IEEE volunteer leaders to collaborate on programs to enhance technical skills.
IEEE Member Kalyani Matey, cochair of the IEEE North Jersey Section’s Young Professionals, conducted a workshop on how to build one’s personal brand and a responsive network. Participants received valuable insights about résumé building, effective communication strategies, and enhancing their visibility and employability.
“Over time, this kind of structured, employer-aligned training will help increase confidence, employability, and technical readiness across the country. With sustained support, programs like the IEEE Buildathon can become a practical bridge from education to industry in the AI era.” —Alok Tibrewala
Tibrewala led the Unlocking AI’s Potential: Solving Big Challenges With Smart Data and IEEE DataPort session. The web-based DataPort platform allows researchers to store, share, access, and manage their research datasets in a single, trusted location. He discussed needed skills including AI literacy, strong data handling and dataset stewardship, and turning data into actionable insights.
Chaitali Ladikkar, a senior software engineer and IEEE member, delivered the Brains Behind the Game seminar, highlighting the transformative impact AI is having on gaming and game engine technologies. She explained how AI is reshaping game development and how machine learning is being used for animation, faster content generation, and the testing of new titles. Her seminar received enthusiastic feedback from participants.
The Building Better Business Relationships DiSC workshop provided insights into enhancing professional relationships and communication within an engineering workforce. DiSC is a behavioral self-assessment used to understand an individual’s communication style and to adapt to others.
Participant experience and testimonials
The event received high praise from participants for its practical and industry-relevant content, according to Tibrewala.
“This training significantly enhanced my understanding and readiness for industry roles, filling gaps my regular academic coursework did not fully address,” said Humna Sultan, an IEEE student member who is a senior studying computer science at Stevens Institute of Technology, in Hoboken, N.J.
“The Buildathon was structured around real engineering challenge scenarios that deepened my understanding of AI and cloud technologies,” said Carlos Figueredo, an IEEE graduate student member who is studying data science at the University of Michigan, in Ann Arbor. “It boosted my confidence and practical skills essential for the industry.”
Bavani Karthikeyan Janaki said “it was incredible to see how technology and sustainability came together to drive real-world impact, thanks to the dedicated efforts of the organizers including Tibrewala, Matey, and the IEEE North Jersey Young Professionals.” Janaki is pursuing a master’s degree in computer and information science at Long Island University, in New York.
Funding and collaborative efforts
The Buildathon was made possible through grants from the IEEE Young Professionals group and funding from the IEEE North Jersey Section and IEEE Member and Geographic Activities. Their support shows how IEEE’s professional organizations can collaborate to address workforce needs by supporting the delivery of technical sessions that strengthen early-career pipelines.
Future plans and a call to action
Building on the event’s success, Tibrewala and Matey plan to make the IEEE Buildathon an ongoing initiative. They are exploring ways to expand it to additional university campuses and IEEE communities.
Tibrewala says they plan to refine the format based on participant feedback and lessons learned. To support consistent quality, he and Matey say, they are working on a playbook for organizers that will include a repeatable agenda, a workshop template, speaker guidelines, and post-event feedback forms.
The approach depends on continued coordination among host universities, local IEEE sections, and Young Professional volunteers, Tibrewala says.
“Enabling other groups to run similar events,” he says, “can help more students and early-career engineers gain practical exposure to AI, data, cloud, cybersecurity, and other key emerging technologies in a structured setting.
“Efforts like this help translate national workforce priorities into real training that students and early-career engineers can apply immediately to their projects. This also helps close the gap between classroom learning and the realities of building secure, reliable systems in production environments. Over time, this kind of structured, employer-aligned training will help increase confidence, employability, and technical readiness across the country.
“With sustained support, programs like the IEEE Buildathon can become a practical bridge from education to industry in the AI era.”