Your project doesn’t necessarily have to be a refined masterpiece to have an impact on the global hacker hivemind. Case in point: this great demo of using a 64-point time-of-flight ranging sensor. [Henrique] took three modules, plugged them into a breadboard, and wrote some very interactive Python code that let him put them all through their paces. The result? I now absolutely want to set up a similar rig and expand on it.
That’s the power of a strong proof of concept, and maybe a nice video presentation of it in action. What in particular makes [Henrique]’s POC work is that he’s written software that gives him a set of sliders, switches, and other interactive controls for tweaking things in real time and exploring the possibilities. This exploratory software not only helped him map out which directions to pursue, but it also works in demo mode, when he’s showing us what he has learned.
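As a rough sketch of what one step of that exploratory software might look like, assume a 64-zone sensor that returns an 8×8 grid of distances (parts like ST's VL53L5CX work this way), with a slider controlling a distance threshold that masks out far-away zones. The frame data and threshold below are simulated, not taken from [Henrique]'s code:

```python
# Hypothetical sketch: a 64-zone time-of-flight sensor returns one
# distance reading (in mm) per zone, as an 8x8 grid. In exploratory
# software, a slider might control a distance threshold used to mask
# out zones farther than some cutoff; this simulates that one step.

def threshold_mask(frame_mm, max_mm):
    """Return an 8x8 grid of booleans: True where a zone is closer than max_mm."""
    return [[d < max_mm for d in row] for row in frame_mm]

# Simulated frame: background at 2000 mm with a closer object at 300 mm
frame = [[2000] * 8 for _ in range(8)]
for r in range(2, 5):
    for c in range(3, 6):
        frame[r][c] = 300

mask = threshold_mask(frame, max_mm=500)
near_zones = sum(1 for row in mask for cell in row if cell)
print(near_zones)  # -> 9 zones within 500 mm
```

In a live rig, re-running this masking step every time the slider moves is what makes the exploration feel interactive: drag the threshold and watch the blob of "near" zones grow and shrink.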
But the other thing that [Henrique]’s video does nicely is to point out the limitations of his current POC. Instantly, the hacker mind goes “I could work that out”. Was it strategic incompleteness? Either way, I’ve been nerd-sniped.
So are those the features of a good POC? It’s the bare minimum to convey the idea, presented in a way that demonstrates a wide range of possibilities, and leaving that last little bit tantalizingly on the table?
Artificial intelligence is no longer a back office enabler or a set of isolated automation software tools. It is becoming a core component of how organizations operate, compete, and deliver value.
As businesses accelerate their adoption of increasingly autonomous systems, often referred to as agentic AI, a significant leadership dilemma is emerging. The workforce is no longer exclusively human.
Digital agents capable of making decisions, initiating actions, and influencing outcomes are now woven into the operational fabric of the company.
Peter Connolly
This shift represents far more than a technological upgrade. It is a structural transformation that puts business leaders in uncharted territory.
The World Economic Forum’s Four Futures framework warns of rising technological fragmentation, declining trust, and widening governance gaps.
In this context, the question for leaders is no longer whether to deploy autonomous AI, but how to govern a hybrid workforce of humans and digital agents without introducing systemic risk.
For many organizations, this is becoming one of the defining leadership challenges of the decade.
Sign up to the TechRadar Pro newsletter to get all the top news, opinion, features and guidance your business needs to succeed!
The Rise of the Non-Human Workforce
Agentic AI systems differ from traditional automation in one critical way: they do not merely execute predefined tasks but interpret data, make decisions, and adapt their behavior to context. In many organizations, these systems are already performing functions once reserved for skilled employees: triaging customer requests, optimizing supply chains, generating code, or even making financial recommendations.
The productivity gains are undeniable, but so is the complexity. When digital agents act with autonomy, they also introduce new forms of organizational risk. Decisions may be opaque, accountability may be unclear, and the potential for unintended consequences increases dramatically.
Leaders must now grapple with a workforce that does not think, behave, or act like humans, and that cannot be governed through traditional management structures. This is where structured identity, access, and behavioral governance become essential.
The Governance Gap: A Growing Leadership Risk
The most significant challenge is not the technology itself, but the governance vacuum surrounding it. Many organizations deploy autonomous systems faster than they establish the controls and guardrails required to manage them. This creates a widening gap between capability and oversight.
Several risks are already becoming visible:
1. Accountability gaps: When an AI agent makes a decision that leads to financial loss, regulatory exposure, or reputational harm, who is responsible? Without clear lines of accountability, organizations face legal and ethical uncertainty.
2. Insider-threat-like behavior: Autonomous systems often operate with high levels of privilege and can access sensitive data, trigger workflows, or interact with customers. If misconfigured or compromised, they can behave like highly privileged insider threats, an issue we frequently encounter when assessing digital identity posture.
3. Fragmentation and drift: As organizations deploy multiple AI agents across different functions, the risk of inconsistent behavior, configuration drift, and misaligned objectives increases. Without centralized governance, autonomous systems can evolve in ways that diverge from organizational intent.
4. Erosion of trust: Employees, customers, and regulators are increasingly concerned about how AI systems make decisions. A lack of transparency and explainability can undermine confidence and impede adoption.
AI adoption alone is no longer sufficient. Governance has become the true leadership mandate.
A Governance-First Mindset: The New Leadership Imperative
To navigate this new landscape, business leaders must adopt a governance-first mindset that aligns with the World Economic Forum’s call for Digital Trust and systemic resilience. This requires treating agentic AI not as a standalone technology, but as a governed member of the workforce.
Several principles should guide this shift:
Establish Clear Accountability Structures
Every AI agent must have an identified human owner responsible for its actions, performance, and outcomes. This includes defining escalation paths, decision boundaries, and audit requirements. Without explicit accountability, organizations risk regulatory exposure and operational ambiguity.
Apply Identity and Access Controls to Digital Agents
Just as employees have identities, permissions, and access levels, so too must AI agents. Leaders should ensure that digital agents are integrated into identity management frameworks with least-privilege access, continuous monitoring, and lifecycle management. This reduces the risk of insider-threat-like behavior and prevents privilege creep; these principles are central to our approach to digital workforce governance.
Implement Behavioral Guardrails
Autonomous systems require constraints that define acceptable behavior. These guardrails may include ethical guidelines, operational limits, safety checks, and real time monitoring. Guardrails ensure that AI agents act within organizational intent and do not drift into unsafe or unintended territory.
Build Oversight and Auditability into the System
Transparency is essential for trust. AI agents must be auditable, explainable, and observable. This includes maintaining logs of decisions, enabling post-incident analysis, and ensuring that humans can intervene when necessary. Oversight is foundational to responsible autonomy.
Foster a Culture of Digital Trust
Governance is more than a technical challenge; it is a cultural one. Leaders must champion a culture that values transparency, accountability, and responsible innovation. This includes educating employees about how AI agents operate, how decisions are made, and how risks are managed. Organizations that succeed here tend to be those that treat governance as a strategic capability, not a compliance burden.
From Liability to Advantage: Building the Hybrid Workforce of the Future
When governed effectively, agentic AI can become a powerful force multiplier. It can enhance productivity, accelerate innovation, and enable organizations to operate with greater agility and precision. But without governance, the same systems can introduce systemic vulnerabilities that undermine resilience.
The role of business leaders is to ensure that autonomy does not outpace oversight. By reframing agentic AI as part of the workforce, subject to the same expectations, controls, and accountability as human employees, leaders can transform a potential liability into a strategic advantage.
The future of work will be hybrid. The organizations that continue to evolve in 2026 will be those that recognize that governing AI is not a technical task delegated to IT, but a core leadership responsibility.
Leaders who embrace this governance-first approach will not only mitigate risk but also build resilient, high-performing organizations that define the future of the workplace and how businesses function.
As with any new technology, there is a scale of AI adoption among businesses, leaving some ahead of the curve and others much further behind as they continue to resist and delay.
But what’s clear is that adoption is happening with or without a formal strategy: nearly two-thirds (65%) of employees now say they intentionally use AI for work.
This shift is impacting expectations on many levels. It changes what organizations expect from their people, and it changes what people expect from their organizations.
Nick Pearson
Polished-sounding, in-depth output can now be generated in minutes, meaning everyone has the ability at their fingertips to produce more in less time.
As managers and organizations increasingly realize that this doesn’t always lead to good work, the differentiator that defines what good really means is becoming less about speed and more about who can work well alongside AI.
That means having the ability to analyze and assess its output and use it to make better human decisions – not replace them.
This marks a turning point for CIOs especially. The role that used to center simply on identifying and providing access to new tools to improve efficiency is now increasingly responsible for shaping an environment in which AI tools truly raise the bar.
AI is resetting the performance baseline
AI has, for some time now, been accelerating routine and repeatable work across every function, from drafting documents and analyzing data to summarizing meetings and generating code. At first, many employees approached these tools with caution. AI made them faster, but they still treated its output as something to sense-check and refine.
Now, as AI becomes more normalized and trusted, that caution can slip. In some cases, speed is no longer paired with scrutiny and teams rely on confident-sounding outputs that may be incomplete, biased or wrong if they haven’t been properly reviewed. So, while managers are getting used to quicker turnaround and coming to expect it, they may also be receiving work that looks finished but hasn’t been validated.
If work is easier to produce across the board, then volume alone becomes a much less reliable indicator of value. What matters more is the ability to work with AI’s output, interpreting and analyzing it in context and feeding it into final outputs and decisions rather than relying on it to do that for you.
Because of this, every role becomes more technical by default. This new expectation means employees need not only to use AI tools but to use them well and understand their outputs. That includes framing prompts effectively, challenging assumptions, identifying bias and translating outputs within the right commercial and organizational context.
Without leaders prioritizing AI and how to use it correctly, this shift can create divergence. Some teams build confidence quickly, while others feel nervous and hesitate or over-rely on automation, which can result in uneven standards and unnecessary risk. The responsibility for avoiding that fragmentation sits with the CIO.
The answer isn’t simply introducing more technology; in fact, in many ways that may complicate things further. What employees need is better ways of working with existing tools that are embedded across the organization.
This starts with being clear about where AI is genuinely helping the business. Rather than experimenting everywhere at once, organizations need to identify the areas where AI can improve outcomes, whether that’s speeding up analysis, reducing manual work or improving decision-making.
Leadership teams play an important role here by setting priorities and making sure AI initiatives stay focused on solving real business challenges rather than chasing the latest trend.
But introducing tools alone isn’t enough. Employees need practical training on how to use AI well and how to check and interpret its outputs. Without that support, AI risks becoming either underused or over-relied on.
In many cases, the most effective approach is building confidence and competence over time through hands-on learning in the flow of work. When employees can experiment, give feedback on what’s working and refine how they use AI in real situations, organizations create a much stronger foundation for long-term progress.
Governance that enables trust and better decisions
If capability enables AI use, governance ensures it is used responsibly and consistently. Without clear guardrails, AI adoption can quickly become fragmented, with employees using different tools, handling data inconsistently or relying on outputs that haven’t been properly checked.
In practice, governance means giving employees clear guidance on how AI should be used across the organization. That could include clearly outlining which AI tools or large language models are approved for work, when enterprise or paid versions must be used and what kinds of data can or cannot be entered into these systems.
It also means making sure teams understand how to handle sensitive information and comply with local regulations. When these boundaries are clear, employees can innovate confidently and leadership can better trust their employees, tools and the outputs that the two together are able to produce. Without governance, the risk is unchecked, low-value outputs that affect results and increase exposure.
The CIO is uniquely placed to align technology, ethics and responsibility: embedding review mechanisms, defining who owns what and making sure human judgment sits firmly at the center of it all.
Conclusion
AI is raising the bar across the workplace. The organizations that approach it in the right way build in clear direction on where it should be applied, practical support that helps people use it well and a governance model that protects the integrity of decisions.
For CIOs, the aim is to create an environment where experimentation is encouraged while standards stay high and accountability is clear. When capability and trust are built in tandem, AI becomes a lever for stronger outcomes over time, not just quicker output in the short term.
Technology may be redefining how work is produced, but it is leadership that determines whether those higher standards translate into long-term advantage.
OpenAI said the purchase will be part of its strategy to further the conversation on the changes brought about by artificial intelligence.
OpenAI, in what is being described as an unusual move, is set to purchase the Technology Business Programming Network (TBPN), a daily, live tech talk show hosted by Jordi Hays and John Coogan that often features high-profile tech leaders and entrepreneurs.
OpenAI’s chief executive officer of applications Fidji Simo said: “As I’ve been thinking about the future of how we communicate at OpenAI, one thing that’s become clear is that the standard communications playbook just doesn’t apply to us. We’re not a typical company.
“We’re driving a really big technological shift. And with our mission to ensure artificial general intelligence benefits all of humanity comes a responsibility to help create a space for a real, constructive conversation about the changes AI creates, with builders and people using the technology at the centre.”
While the full details of the deal have yet to be disclosed, OpenAI said the TBPN team will maintain editorial independence and make decisions on their guests and programming. According to the Wall Street Journal, TBPN stated that it generated $5m in advertising revenue last year and is on track to exceed $30m in revenue in 2026.
However, an OpenAI spokesperson told Bloomberg that the platform is not aiming to make TBPN a money-making enterprise.
In a statement, Hays expressed excitement at the venture, while making note of the importance of a strong partnership where both parties work as a team to communicate change and innovation in the AI and tech spaces.
He said: “While we’ve been critical of the industry at times, after getting to know Sam and the OpenAI team, what stood out most was their openness to feedback and commitment to getting this right. Moving from commentary to real impact in how this technology is distributed and understood globally is incredibly important to us.”
Earlier this week OpenAI closed a larger than expected funding round in which it raised $122bn, exceeding the projected figure of $110bn. Part of that funding is expected to be put towards the scale and growth of the platform’s AI technologies and research, in line with current global demands.
Don’t miss out on the knowledge you need to succeed. Sign up for the Daily Brief, Silicon Republic’s digest of need-to-know sci-tech news.
The best thing about retail warehouse stores is obviously the selection. After all, where else can you buy a new T-shirt, birthday cake, and a set of tires on the same day? But the ability to fill up with gas before leaving the parking lot is a plus as well. That’s why stores like Costco, where you can use these tips to save time at the pump, are so convenient. But now the company is moving forward with standalone gas stations, and its first in California is members-only.
Members will need to insert or scan their membership card to refuel, just as they would at Costco’s attached gas stations. However, non-members may be able to access the pumps using a Costco Shop card, as they currently can at on-site locations. Costco’s new gas station is located in Mission Viejo, California, in a 17,000-square-foot facility operated by company employees. It has 40 pumps covered by a large canopy, and it will run from 5 a.m. to 10 p.m. daily.
The station is expected to open by the end of June 2026. But if you don’t live in California, you may not have to wait long. Costco is planning to build more standalone gas stations, beginning in Honolulu, Hawaii. As of this writing, the company hasn’t publicly addressed this new program. But the belief is that standalone stations can help reduce the heavy traffic flow that currently plagues many on-site locations.
Costco’s gas boom and competitive pricing strategy
Costco’s first standalone gas station, which is also expected to stay strategically cheaper than most, was initially announced in the summer of 2025. The facility is located off Interstate 5 in Mission Viejo, California, at the site where a Bed Bath & Beyond once stood. At the time of the announcement, the company’s gas stations were experiencing a boom in business, thanks mostly to extended operating hours. The decision to move forward with a new test store may have been influenced by this positive reaction.
Costco members get access to gas prices that often beat competitors by anywhere from 10 to 25 cents per gallon. This is possible because of the company’s warehouse approach, which includes buying fuel in large quantities. Costco also works directly with suppliers to get the best cost and then passes those savings on to its members.
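As a back-of-the-envelope illustration of what a 10-to-25-cent-per-gallon discount adds up to over a year, here is a minimal sketch. The mileage and fuel-economy figures are illustrative assumptions, not Costco's numbers:

```python
def annual_fuel_savings(miles_per_year, mpg, discount_per_gallon):
    """Estimated yearly savings in dollars from a per-gallon fuel discount."""
    gallons = miles_per_year / mpg
    return gallons * discount_per_gallon

# Assumed driver: 12,000 miles/year at 25 mpg -> 480 gallons/year
print(annual_fuel_savings(12_000, 25, 0.10))  # low end of the discount range
print(annual_fuel_savings(12_000, 25, 0.25))  # high end of the discount range
```

Under those assumptions the discount is worth somewhere between roughly $48 and $120 a year, which helps explain why the pumps stay busy enough for Costco to test standalone stations.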
Costco’s first gas station opened in 1995, and its fuel business has grown ever since. The company currently has over 700 stations around the world, serving millions of paid members every day. Those members can use the Costco app to check fuel prices in real time, as well as store hours and locations near them.
Getting a lush, green lawn sometimes requires a bit of help. This is where a lawn sprinkler system, be it an energy-saving smart sprinkler system or a more traditional setup, comes into the picture by providing a yard with sufficient moisture for sustained growth. Installing such a system is just the start, though, and it’s also crucial to know how to use it to the fullest. That means knowing the right time of year to power it up, which isn’t necessarily a specific day or month. Instead, it’s a decision that’s largely predicated on environmental factors that make it clear winter has come and gone, and that spring is finally in bloom.
First and foremost is the temperature. It’s recommended that a sprinkler system only be activated in spring once daily temperatures are higher than 40 degrees Fahrenheit for 10 days or longer. This way, you know for certain spring is here and you’re not experiencing a random warmer day within an overall cold period. In a similar vein, the ground itself should be completely thawed and free of frost, further indicating that sprinkler season has arrived. No matter where you live, you should also refer to previous years’ weather patterns to get a rough idea of when the final snowfalls and freezes usually happen. Some news outlets may also offer estimated dates for these, so be sure to check around.
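The rule of thumb above (daily temperatures above 40 degrees Fahrenheit for 10 or more days, plus a fully thawed ground) can be written down as a toy decision function. This is just an encoding of that guideline, not an agronomic standard:

```python
def ready_for_spring_startup(recent_daily_highs_f, ground_thawed,
                             min_temp_f=40, required_days=10):
    """Toy version of the rule above: the last `required_days` daily highs
    must all exceed min_temp_f, and the ground must be free of frost."""
    recent = recent_daily_highs_f[-required_days:]
    return (ground_thawed
            and len(recent) >= required_days
            and all(t > min_temp_f for t in recent))

# Ten consecutive daily highs, all comfortably above 40 F
warm_stretch = [42, 45, 44, 48, 50, 47, 46, 49, 51, 53]
print(ready_for_spring_startup(warm_stretch, ground_thawed=True))   # -> True
print(ready_for_spring_startup([35] * 10, ground_thawed=True))      # -> False
```

Note that a single cold day resets the streak, which is exactly the point of the guideline: one warm afternoon inside a cold spell shouldn't trigger a startup.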
If all else fails and you’re unsure whether it’s a good time to turn on your sprinklers, there’s no shame in playing it safe and waiting until temperatures are consistently warm and the last vestiges of winter are long gone. After all, erring on the side of caution is preferred to activating your system too early and suffering the consequences.
Why lawn sprinkler timing is so important
Turning on your lawn sprinkler system is anything but an arbitrary decision. It needs to happen when the environment is just right, or else there could be serious consequences. For one, it’s no secret that running water through, or leaving water in, unprepared pipes in freezing conditions can lead to damage. This water freezes, expands, and cracks pipes and fittings. If you manage to avoid pipe or sprinkler damage, you’re still at risk of shortening the system’s lifespan by running it when it’s not necessary. The longer you run your system, the more wear and tear it endures, potentially leading to it failing sooner than it should.
The consequences of activating a sprinkler system early go beyond the health of the system itself. Ice and snow melt takes time to soak into the ground, so any excess water from a sprinkler system may lead to sogginess and puddles at best, or leave your grass susceptible to disease at worst. Not to mention, running your sprinklers more than necessary will, of course, lead to a higher water bill. Thus, don’t be afraid to show some restraint, even if it looks like your lawn is in need of watering right out of winter.
Lawn care can very easily go wrong. There are many mistakes everyone makes with lawn mowers, for instance, and homeowners can also turn on their lawn sprinklers at the wrong time of year. That’s why it’s key to keep an eye on the weather and sustained temperatures before officially beginning your spring watering.
People are constantly pushing the boundaries of 3D printing, but shoes have long been the holy grail, or rather the holy nightmare, of the technology. They must be able to bend with each step, provide traction on a variety of surfaces, and withstand regular use without falling apart at the seams. DaveRig Design took on this exact task in a recent project, resulting in a pair of casual shoes that look and feel right at home on the street.
He started with the CityStep casual everyday sneaker design, which you can get at MakerWorld. This design features a slip-on form with a contoured profile that wraps around your foot snugly at the back and sides, while leaving the top of the shoe open and breathable. A dense infill pattern on top gives it a knit fabric look and feel; there are no separate parts or glue jobs necessary, and the best part is that each shoe prints upright in one piece with a tiny heel stand to protect it from tumbling over while printing. Print times are roughly 66 hours per pair on typical machines, stretching to around 76 hours on some due to the fine details and support structures.
The actual game changer was the material he chose: BIQU MorPhlex filament, a flexible option that handles like ordinary TPU off the spool, with a hardness of roughly 90 A, rigid enough to avoid the stringing and jamming common with softer filaments. Once the print is completed and the material has cooled, it transitions to a considerably softer 75 A rubber-like feel that provides cushioning and traction without any additional post-processing gimmicks. He was using a Snapmaker U1 tool changer, a machine designed to automatically swap between four separate extruders. That came in handy for a project requiring over three thousand swaps to blend colors and hardness levels across different parts of the shoe, ensuring that the sole remained grippy, the midsection flexed naturally, and the upper stayed light and airy all at once.
Before sending it to the printer, he spent some time in Blender fine-tuning the model, making subtle changes to get the layer bonding just right so the finished shoes wouldn’t split when stretched over your foot. Supports were made with a combination of flexible filament and conventional PLA to make them easy to remove once the print was completed, and he strengthened them to keep them from shifting around during the long print. To ensure perfect color consistency, he ran both shoes side by side on the same build plate.
When the print was finally completed and the supports were removed without a hitch, the results were a pleasant surprise: nearly factory-polished. The upper has a nice textured surface that smooths over the layer lines so they are scarcely noticeable, and the shoes appear to have come off a production line rather than out of a home workshop. The sole provides just enough traction, the MorPhlex’s post-print softness makes it easy to grip surfaces, and the heel cup keeps everything held in place without slipping around during normal walking.
The US Commodity Futures Trading Commission is suing Illinois, Arizona and Connecticut for attempting to outlaw or regulate prediction markets like Kalshi and Polymarket. The CFTC believes it has sole jurisdiction to regulate these platforms, and that states attempting to classify them as illegal gambling are overstepping their authority.
CFTC defines prediction markets as “designated contract markets” where futures contracts are traded, essentially letting people bet on the outcome of events (for example, who will be the Democratic nominee for president in 2028). And because futures contracts are financial instruments distinct from traditional bets, they arguably fall under the supervision of the CFTC rather than the sports gambling authorities of individual states.
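To make the mechanics concrete: event contracts on these platforms are typically binary, with a "yes" share trading somewhere between $0 and $1 and settling at $1 if the event occurs or $0 if it doesn't. The sketch below works in cents to keep the arithmetic exact; the prices are made up and no platform fees are modeled:

```python
def binary_contract_pnl_cents(entry_cents, contracts, event_occurred):
    """Profit/loss in cents on binary event contracts.

    Each contract settles at 100 cents if the event occurs, 0 cents if not;
    entry_cents is the price paid per contract (between 0 and 100).
    """
    settlement_cents = 100 if event_occurred else 0
    return (settlement_cents - entry_cents) * contracts

# Buy 100 "yes" contracts at 37 cents each (total outlay: $37.00)
print(binary_contract_pnl_cents(37, 100, event_occurred=True))   # -> 6300 ($63.00 profit)
print(binary_contract_pnl_cents(37, 100, event_occurred=False))  # -> -3700 ($37.00 loss)
```

That 37-cent price can also be read as the market's implied probability of the event, roughly 37%, which is why these contracts look like financial instruments to the CFTC and like wagers to state gambling regulators.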
Multiple states, including the three the CFTC is suing, have challenged that interpretation of what prediction markets are and how they operate. Nevada sued Kalshi in February for operating a sports gambling market without proper licenses, a lawsuit made possible because a federal appeals court declined to prevent Nevada from pursuing its case. Arizona’s attorney general filed a lawsuit against Kalshi in March along similar illegal-sports-gambling lines, and because the platform let people bet on Arizona elections, which violates state law. Both Illinois and Connecticut have also sent Kalshi and other prediction markets cease-and-desist letters, ordering them to stop advertising and offering their services in their respective states.
“The CFTC will continue to safeguard its exclusive regulatory authority over these markets and defend market participants against overzealous state regulators,” CFTC Chairman Michael S. Selig said in a statement. “This is not the first time states have tried to impose inconsistent and contrary obligations on market participants, but Congress specifically rejected such a fragmented patchwork of state regulations because it resulted in poorer consumer protection and increased risk of fraud and manipulation.”
Attempts to regulate (or, in this case, stave off regulation of) prediction markets are complicated by the fact that President Donald Trump’s family has ties to the industry. Donald Trump Jr. is a paid advisor for Kalshi and an investor in Polymarket. Major transactions made before recent US military actions in Iran have also suggested that people close to the government might be trading on prediction markets with insider knowledge. Some prediction markets have implemented new rules to prevent insider trading, but given the circumstances, it makes sense that states wouldn’t be satisfied with companies policing themselves.
Ahead of its premiere, Dave Filoni has revealed that the Star Wars animated series Maul: Shadow Lord will return for a second season. The Lucasfilm co-president revealed that season 2 is already in the works, telling Esquire that “at the end of the day, people like that character.”
Filoni didn’t reveal any other details about the plot or release date for season 2. However, the news isn’t a great surprise given Lucasfilm’s history with its animated series: The Clone Wars ran seven seasons, Star Wars Rebels four, Star Wars Resistance two and Star Wars: The Bad Batch three.
Maul: Shadow Lord explores the Zabrak Sith Lord’s story about a year after the time of the Clone Wars. Season 1’s 10 episodes will stream twice a week on Disney+ starting on April 6 and run through May 6. It covers Maul’s plot to rebuild his criminal syndicate “on a planet untouched by the Empire,” according to Lucasfilm. “There, he crosses paths with a disillusioned young Jedi Padawan who may just be the apprentice he is seeking to aid him in his relentless pursuit for revenge.”
Microsoft has been pushing AI on consumers whether they wanted it or not. Given the ferocity with which the company has been pushing AI into its products, you might be surprised to learn that it didn’t use its own AI. It took OpenAI’s technology, wrapped it into Copilot and Teams, and called it a day.
But things are changing. Whether the company noticed the public’s negative reaction to its bloated Windows 11 operating system or saw Linux gaining market share in gaming, Microsoft is finally working to introduce a calmer Windows 11 and focus on developing its own AI models.
As reported by Bloomberg, Mustafa Suleyman, CEO of Microsoft AI, made the ambition clear: “Certainly by 2027, the objective is to really get to state-of-the-art,” covering models that can handle text, images, and audio.
What was stopping Microsoft from doing this sooner?
A contract. Microsoft’s deal with OpenAI previously prevented the company from building its own broadly capable AI models. That clause was removed as part of a renegotiated agreement last year, giving Microsoft the freedom to operate independently.
The company isn’t starting from zero, either. In October, Microsoft began using a cluster of Nvidia GB200 chips to build the computing power needed for frontier-level AI development. Regarding the timeline, “we’re sort of ramping over the next sort of 12 to 18 months to get to frontier-scale compute,” Suleyman said.
What does this mean for you?
The first sign of this push is here. Microsoft has released a speech transcription model that outperforms rival products in 11 of the 25 most widely spoken languages. It’s built to handle noisy environments and will soon be rolling out to Teams and other Microsoft apps.
The bigger picture is that Microsoft wants long-term AI self-sufficiency. CEO Satya Nadella reinforced the message this week, emphasizing the importance of building state-of-the-art models over the next three to five years.
For everyday users, more competition in AI means better, smarter tools built into the apps you use. On the other hand, it also means another big company ramping up its purchases of GPUs and RAM, which will push prices for consumer RAM, GPUs, and SSDs even higher.
The path from successful startup to industry heavyweight is often marked by the ability to solve massive, complex problems at scale — whether those challenges are on a farm, battlefield or in low-Earth orbit.
This GeekWire Award, presented by Baird, takes notice of the next dominant force in Pacific Northwest tech. The Next Tech Titan finalists are: Overland AI, Carbon Robotics, Stoke Space, Chainguard and MotherDuck.
Now in its 18th year, the GeekWire Awards is the premier event recognizing the top leaders, companies and breakthroughs in Pacific Northwest tech, bringing together hundreds of people to celebrate innovation and the entrepreneurial spirit. It takes place May 7 at the Showbox SoDo in Seattle.
Last year’s Next Tech Titan winner was Truveta, a Bellevue, Wash.-based company that aims to aggregate medical records data from partner institutions to link treatments with outcomes and underlying health. Truveta raised $320 million in fresh funding in 2025 to push its valuation above $1 billion.
Continue reading for information on the 2026 Next Tech Titan finalists, who were chosen by a panel of independent judges from community nominations. You can help pick the winner: Cast your ballot here or in the embedded form at the bottom. Voting runs through April 10.
Overland AI develops autonomous vehicle software and hardware designed specifically for complex, off-road environments. The company’s platform allows robotic vehicles to navigate high-speed, unpredictable terrain where GPS and cellular signals are often unavailable. Overland is focused on operational integration with the U.S. Army and Marine Corps, and is a key player in the emerging defense-tech corridor of the Pacific Northwest.
GeekWire first covered Overland AI in 2022, when it was a small, stealthy group of researchers spinning out of the University of Washington’s Robot Learning Laboratory. Since then, the company, No. 12 on the GeekWire 200, has grown to more than 100 employees, raised more than $140 million, and opened a 22,000-square-foot production facility in Seattle.
Ag-tech startup Carbon Robotics builds AI-powered machinery designed to eliminate weeds without the use of chemical herbicides. Its flagship LaserWeeder uses computer vision to identify and zap weeds with lasers, a process powered by the company’s “Large Plant Model.” This AI model, trained on 150 million labeled plants, allows the machines to adapt to new crops and environments in minutes. The company is also expanding into autonomous farm equipment with its Carbon ATK platform and an unrevealed new AI robot.
Founded in 2018 by Isilon Systems co-founder Paul Mikesell, the Seattle-based company has raised $177 million to date and employs about 260 people. Its LaserWeeders are now active on hundreds of farms across 15 countries, helping growers significantly reduce labor and pesticide costs. Carbon is No. 10 on the GeekWire 200.
Stoke Space is developing Nova, a medium-lift rocket designed for 100% reusability and rapid turnaround between flights. Unlike competitors that focus on heavy-lift vehicles, the Kent, Wash.-based company is targeting the medium-lift market with a unique second-stage design featuring an actively cooled heatshield for atmospheric reentry. The goal is to provide a more flexible and cost-effective launch platform that can be reused as seamlessly as an aircraft.
Founded by former Blue Origin and SpaceX engineers, Stoke Space has raised $1.34 billion to date, including a massive $860 million Series D round concluded in early 2026. The company, No. 8 on the GeekWire 200, is currently preparing for its first orbital launch from Cape Canaveral later this year and has already been selected by the U.S. Space Force for national security launches.
Chainguard secures the “software supply chain” by protecting the open-source components and container images used in modern cloud applications. The company’s tools allow developers to use verified, vulnerability-free code, automating the process of keeping foundational software secure. By focusing on the root of software production, Chainguard helps engineering teams eliminate security risks without slowing down development cycles.
Founded in 2021 and based in Kirkland, Wash., the startup has raised $892 million to date, reaching a $3.5 billion valuation. In fiscal year 2025, the company grew its annual recurring revenue sevenfold to $40 million. Now employing more than 500 people and serving over 200 customers — including GitLab and Hewlett Packard Enterprise — Chainguard is No. 2 on the GeekWire 200.
MotherDuck provides a serverless analytics platform built on the open-source DuckDB database engine. Designed for “small data” that doesn’t reach petabyte scale, the technology allows users to run fast SQL queries locally in a browser or in the cloud without the complexity of distributed architectures. By merging local processing speed with cloud scalability, the platform aims to make data analysis more cost-effective and accessible.
Founded in 2022 by former Google BigQuery founding engineer Jordan Tigani, the Seattle startup has raised more than $100 million and is No. 25 on the GeekWire 200.
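MotherDuck’s pitch rests on the embedded, serverless database pattern that DuckDB popularized: the engine runs inside your process, so there is no database server to deploy or manage. As a rough illustration of that pattern using only Python’s standard-library sqlite3 module (a stand-in analogy — this is not DuckDB’s or MotherDuck’s API, and the table and data here are invented for the example):

```python
import sqlite3

# In-process, serverless SQL: no daemon, no network hop — the engine
# lives inside the Python process, the same pattern DuckDB uses.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales(region TEXT, amount REAL)")
con.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("west", 120.0), ("east", 75.5), ("west", 30.0)],
)

# Analytics is just a SQL query run locally, no cluster required.
rows = con.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # per-region totals, aggregated in-process
con.close()
```

The difference in MotherDuck’s model is that the same local engine can transparently hand queries off to the cloud, which is what lets it blend local processing speed with cloud scalability.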
The event will feature a VIP reception, a sit-down dinner and entertainment mixed in. Tickets go fast. A limited number of half-table and full-table sponsorships are available. Contact events@geekwire.com to reserve a spot for your team today.