Tech

43% of AI-generated code changes need debugging in production, survey finds

The software industry is racing to write code with artificial intelligence. It is struggling, badly, to make sure that code holds up once it ships.

A survey of 200 senior site-reliability and DevOps leaders at large enterprises across the United States, United Kingdom, and European Union paints a stark picture of the hidden costs embedded in the AI coding boom. According to Lightrun’s 2026 State of AI-Powered Engineering Report, shared exclusively with VentureBeat ahead of its public release, 43% of AI-generated code changes require manual debugging in production environments even after passing quality assurance and staging tests. Not a single respondent said their organization could verify an AI-suggested fix with just one redeploy cycle; 88% reported needing two to three cycles, while 11% required four to six.

The findings land at a moment when AI-generated code is proliferating across global enterprises at a breathtaking pace. Both Microsoft CEO Satya Nadella and Google CEO Sundar Pichai have claimed that around a quarter of their companies’ code is now AI-generated. The AIOps market — the ecosystem of platforms and services designed to manage and monitor these AI-driven operations — stands at $18.95 billion in 2026 and is projected to reach $37.79 billion by 2031.

Yet the report suggests the infrastructure meant to catch AI-generated mistakes is badly lagging behind AI’s capacity to produce them.

“The 0% figure signals that engineering is hitting a trust wall with AI adoption,” said Or Maimon, Lightrun’s chief business officer, referring to the survey’s finding that zero percent of engineering leaders described themselves as “very confident” that AI-generated code will behave correctly once deployed. “While the industry’s emphasis on increased productivity has made AI a necessity, we are seeing a direct negative impact. As AI-generated code enters the system, it doesn’t just increase volume; it slows down the entire deployment pipeline.”

Amazon’s March outages showed what happens when AI-generated code ships without safeguards

The dangers are no longer theoretical. In early March 2026, Amazon suffered a series of high-profile outages that underscored exactly the kind of failure pattern the Lightrun survey describes. On March 2, Amazon.com experienced a disruption lasting nearly six hours, resulting in 120,000 lost orders and 1.6 million website errors. Three days later, on March 5, a more severe outage hit the storefront — lasting six hours and causing a 99% drop in U.S. order volume, with approximately 6.3 million lost orders. Both incidents were traced to AI-assisted code changes deployed to production without proper approval.

The fallout was swift. Amazon launched a 90-day code safety reset across 335 critical systems, and AI-assisted code changes must now be approved by senior engineers before they are deployed.

Maimon pointed directly to the Amazon episodes. “This uncertainty isn’t based on a hypothesis,” he said. “We just need to look back to the start of March, when Amazon.com in North America went down due to an AI-assisted change being implemented without established safeguards.”

The Amazon incidents illustrate the central tension the Lightrun report quantifies in survey data: AI tools can produce code at unprecedented speed, but the systems designed to validate, monitor, and trust that code in live environments have not kept pace. Google’s own 2025 DORA report corroborates this dynamic, finding that AI adoption correlates with an increase in code instability, and that 30% of developers report little or no trust in AI-generated code.

Maimon cited that research directly: “Google’s 2025 DORA report found that AI adoption correlates with an almost 10% increase in code instability. Our validation processes were built for the scale of human engineering, but today, engineers have become auditors for massive volumes of unfamiliar code.”

Developers are losing two days a week to debugging AI-generated code they didn’t write

One of the report’s most striking findings is the scale of human capital being consumed by AI-related verification work. Developers now spend an average of 38% of their work week — roughly two full days — on debugging, verification, and environment-specific troubleshooting, according to the survey. For 88% of the companies polled, this “reliability tax” consumes between 26% and 50% of their developers’ weekly capacity.

This is not the productivity dividend that enterprise leaders expected when they invested in AI coding assistants. Instead, the engineering bottleneck has simply migrated. Code gets written faster, but it takes far longer to confirm that it works.

“In some senses, AI has made the debugging problem worse,” Maimon said. “The volume of change is overwhelming human validation, while the generated code itself frequently does not behave as expected when deployed in Production. AI coding agents cannot see how their code behaves in running environments.”

The redeploy problem compounds the time drain. Every surveyed organization requires multiple deployment cycles to verify a single AI-suggested fix — and according to Google’s 2025 DORA report, a single redeploy cycle takes a day to one week on average. In regulated industries such as healthcare and finance, deployment windows are often narrow, governed by mandated code freezes and strict change-management protocols. Requiring three or more cycles to validate a single AI fix can push resolution timelines from days to weeks.

Maimon rejected the idea that these multiple cycles represent prudent engineering discipline. “This is not discipline, but an expensive bottleneck and a symptom of the fact that AI-generated fixes are often unreliable,” he said. “If we can move from three cycles to one, we reclaim a massive portion of that 38% lost engineering capacity.”

AI monitoring tools can’t see what’s happening inside running applications — and that’s the real problem

If the productivity drain is the most visible cost, the Lightrun report argues the deeper structural problem is what it calls “the runtime visibility gap” — the inability of AI tools and existing monitoring systems to observe what is actually happening inside running applications.

Sixty percent of the survey’s respondents identified a lack of visibility into live system behavior as the primary bottleneck in resolving production incidents. In 44% of cases where AI SRE or application performance monitoring tools attempted to investigate production issues, they failed because the necessary execution-level data — variable states, memory usage, request flow — had never been captured in the first place.

The report paints a picture of AI tools operating essentially blind in the environments that matter most. Ninety-seven percent of engineering leaders said their AI SRE agents operate without significant visibility into what is actually happening in production. Approximately half of all companies (49%) reported their AI agents have only limited visibility into live execution states. Only 1% reported extensive visibility, and not a single respondent claimed full visibility.

This is the gap that turns a minor software bug into a costly outage. When an AI-suggested fix fails in production — as 43% of them do — engineers cannot rely on their AI tools to diagnose the problem, because those tools cannot observe the code’s real-time behavior. Instead, teams fall back on what the report calls “tribal knowledge”: the institutional memory of senior engineers who have seen similar problems before and can intuit the root cause from experience rather than data. The survey found that 54% of resolutions to high-severity incidents rely on tribal knowledge rather than diagnostic evidence from AI SREs or APMs.
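
To make the idea of “diagnostic evidence” concrete, here is a minimal, hypothetical sketch in plain Python of what an “evidence trace” could look like: when a function fails, the local variables of the failing frame are captured alongside the stack trace, so an engineer (or an AI agent) has data rather than intuition to work from. This is an illustration of the concept only, not Lightrun’s product or any vendor’s API.

```python
# Hypothetical sketch only -- not any vendor's API. Captures an "evidence trace":
# the local variables of the failing frame, serialized alongside the stack trace.
import functools
import json
import sys
import traceback
from datetime import datetime, timezone


def evidence_trace(func):
    """If the wrapped function raises, snapshot the innermost frame's locals."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception:
            _, _, tb = sys.exc_info()
            while tb.tb_next is not None:  # walk to the frame that actually failed
                tb = tb.tb_next
            frame = tb.tb_frame
            snapshot = {
                "captured_at": datetime.now(timezone.utc).isoformat(),
                "function": frame.f_code.co_name,
                "line": tb.tb_lineno,
                "locals": {name: repr(value) for name, value in frame.f_locals.items()},
                "stack": traceback.format_exc(),
            }
            # A real system would ship this to an observability backend;
            # here it is simply printed.
            print(json.dumps(snapshot, indent=2))
            raise
    return wrapper


@evidence_trace
def apply_discount(price: float, rate: float) -> float:
    discounted = price * (1 - rate)
    # The kind of edge case an AI-generated change can miss: rate == 0.
    return discounted / rate


if __name__ == "__main__":
    try:
        apply_discount(100.0, 0.0)
    except ZeroDivisionError:
        pass  # the evidence trace has already been printed
```

In a production setup the snapshot would be shipped to a backend rather than printed, but the point is the same: variable-level evidence exists at the moment of failure, without another redeploy cycle.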

In finance, 74% of engineering teams trust human intuition over AI diagnostics during serious incidents

The trust deficit plays out with particular intensity in the finance sector. In an industry where a single application error can cascade into millions of dollars in losses per minute, the survey found that 74% of financial-services engineering teams rely on tribal knowledge over automated diagnostic data during serious incidents — far higher than the 44% figure in the technology sector.

“Finance is a heavily regulated, high-stakes environment where a single application error can cost millions of dollars per minute,” Maimon said. “The data shows that these teams simply do not trust AI not to make a dangerous mistake in their Production environments. This is a rational response to tool failure.”

The distrust extends beyond finance. Perhaps the most telling data point in the entire report is that not a single organization surveyed — across any industry — has moved its AI SRE tools into actual production workflows. Ninety percent remain in experimental or pilot mode. The remaining 10% evaluated AI SRE tools and chose not to adopt them at all. This represents an extraordinary gap between market enthusiasm and operational reality: enterprises are spending aggressively on AI for IT operations, but the tools they are buying remain quarantined from the environments where they would deliver the most value.

Maimon described this as one of the report’s most significant revelations. “Leaders are eager to adopt these new AI tools, but they don’t trust AI to touch live environments,” he said. “The lack of trust is shown in the data; 98% have lower trust in AI operating in production than in coding assistants.”

The observability industry built for human-speed engineering is falling short in the age of AI

The findings raise pointed questions about the current generation of observability tools from major vendors like Datadog, Dynatrace, and Splunk. Seventy-seven percent of the engineering leaders surveyed reported low or no confidence that their current observability stack provides enough information to support autonomous root cause analysis or automated incident remediation.

Maimon did not shy away from naming the structural problem. “Major vendors often build ‘closed-garden’ ecosystems where their AI SREs can only reason over data collected by their own proprietary agents,” he said. “In a modern enterprise, teams typically have a multi-tool stack to provide full coverage. By forcing a team into a single-vendor silo, these tools create an uncomfortable dependency and a strategic liability: if the vendor’s data coverage is missing a specific layer, the AI is effectively blind to the root cause.”

The second issue, Maimon argued, is that current observability-backed AI SRE solutions offer only partial visibility — defined by what engineers thought to log at the time of deployment. Because failures rarely follow predefined paths, autonomous root cause analysis using only these tools will frequently miss the key diagnostic evidence. “To move toward true autonomous remediation,” he said, “the industry must shift toward AI SRE without vendor lock-in; AI SREs must be an active participant that can connect across the entire stack and interrogate live code to capture the ground truth of a failure as it happens.”

When asked what it would take to trust AI SREs, the survey’s respondents coalesced unanimously around live runtime visibility. Fifty-eight percent said they need the ability to provide “evidence traces” of variables at the point of failure, and 42% cited the ability to verify a suggested fix before it actually deploys. No respondents selected the ability to ingest multiple log sources or provide better natural language explanations — suggesting that engineering leaders do not want AI that talks better, but AI that can see better.

The question is no longer whether to use AI for coding — it’s whether anyone can trust what it produces

The survey was administered by Global Surveyz Research, an independent firm, and drew responses from Directors, VPs, and C-level executives in SRE and DevOps roles at enterprises with 1,500 or more employees across the finance, technology, and information technology sectors. Responses were collected during January and February 2026, with questions randomized to prevent order bias.

Lightrun, which is backed by $110 million in funding from Accel and Insight Partners and counts AT&T, Citi, Microsoft, Salesforce, and UnitedHealth Group among its enterprise clients, has a clear commercial interest in the problem the report describes: the company sells a runtime observability platform designed to give AI agents and human engineers real-time visibility into live code execution. Its AI SRE product uses a Model Context Protocol connection to generate live diagnostic evidence at the point of failure without requiring redeployment. That commercial interest does not diminish the survey’s findings, which align closely with independent research from Google DORA and the real-world evidence of the Amazon outages.
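
The report does not spell out how that Model Context Protocol connection works. MCP tool calls are JSON-RPC messages, though, so the sketch below shows roughly what such an exchange could look like: an agent asks a diagnostics server for a runtime snapshot and gets back variable-level evidence it can reason over without another redeploy. The tool name, arguments, and returned fields are invented for illustration and are not Lightrun’s actual interface.

```python
# Illustrative only: the rough shape of an MCP (Model Context Protocol)
# "tools/call" exchange between an AI agent and a runtime-diagnostics server.
# The tool name and all argument/field names below are hypothetical.
import json

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "capture_runtime_snapshot",   # hypothetical tool name
        "arguments": {                        # hypothetical arguments
            "service": "checkout-api",
            "file": "billing/invoice.py",
            "line": 142,
            "expressions": ["order.total", "retry_count"],
        },
    },
}

# A plausible (hypothetical) response: evidence the agent can reason over
# without triggering another redeploy cycle.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [
            {
                "type": "text",
                "text": json.dumps({
                    "order.total": "0.0",
                    "retry_count": "3",
                    "captured_at": "2026-03-05T14:02:11Z",
                }),
            }
        ]
    },
}

print(json.dumps(request, indent=2))
print(json.dumps(response, indent=2))
```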

Taken together, the survey, the DORA research and the Amazon outages describe an industry confronting an uncomfortable paradox. AI has solved the slowest part of building software — writing the code — only to reveal that writing was never the hard part. The hard part was always knowing whether it works. And on that question, the engineers closest to the problem are not optimistic.

“If the live visibility gap is not closed, then teams are really just compounding instability through their adoption of AI,” Maimon said. “Organizations that don’t bridge this gap will find themselves stuck with long redeploy loops, to solve ever more complex challenges. They will lose their competitive speed to the very AI tools that were meant to provide it.”

The machines learned to write the code. Nobody taught them to watch it run.

Tech

The Batman Part II: Release date, cast, plot, and everything we know so far

The Batman: Part II is now set for an October 1, 2027 release, following multiple delays that pushed the sequel well beyond its original 2025 window. The extended timeline reflects a longer development cycle for director Matt Reeves’ follow-up, with the script only recently completed and production now expected to begin in spring 2026.

The sequel continues Reeves’ grounded take on Gotham, which began with The Batman in 2022. That film earned over $770 million globally and established a more detective-driven version of Bruce Wayne, set within what Reeves has described as an “epic crime saga.” Part II is expected to build directly on that foundation, exploring the aftermath of Gotham’s collapse and Bruce’s evolving role within it.

The delays have been tied to both industry-wide disruptions and Reeves’ deliberate approach to the script. DC Studios co-head James Gunn has confirmed that a completed draft is now in place, allowing the project to move forward after a prolonged development phase. With production finally on the horizon, the sequel is shifting from uncertainty to execution.

Robert Pattinson will return as Bruce Wayne/Batman, and details about the rest of the cast are emerging as well. In a recent appearance on the French television show C à vous, Pattinson said, “The new script is so, so good, I’m very excited about it.” But that is not all there is to know about this much-talked-about film, so here is a complete rumor roundup.

When is The Batman Part II releasing?

Robert Pattinson in The Batman. (Warner Bros. Pictures)

The Batman: Part II has seen several release date changes. The film was initially supposed to premiere on October 3, 2025. Unfortunately, nothing is guaranteed in the world of filmmaking.

The release date was first pushed back a year to October 2, 2026, and then delayed again by almost another year. The Batman: Part II is now scheduled to premiere on October 1, 2027. Oof.

Yes, the world will have to wait even longer to see Pattinson suit up again. The delays stem from a combination of factors: the writers’ and actors’ strikes slowed development across Hollywood, while Reeves took additional time to finalize the script. Given the scale and expectations surrounding the sequel, the extended timeline appears to be a deliberate choice rather than a production setback.

The result is a five-year gap between the first film and its sequel — longer than typical superhero franchise timelines, but not unusual for director-driven projects of this scope.

What’s the plot of The Batman Part II?

The cast of The Batman. (Warner Bros. Pictures)

Plot details remain tightly under wraps, but the sequel is expected to continue directly from the events of The Batman.

The first film ended with Gotham flooded and its institutions exposed as deeply corrupt. Bruce Wayne, having begun his transformation from a symbol of vengeance into a figure of hope, now faces a city in deeper chaos. Crime is likely to rise in the power vacuum left behind, setting the stage for a more complex and unstable Gotham.

My money is on Batman facing Thomas Elliot, a.k.a. Hush. Once a childhood friend of Bruce Wayne, this villain is famous for teaming up with the Riddler in the comics, as well as recruiting multiple other villains to battle and torment Batman with his knowledge of the hero’s true identity. At one point, Batman was even forced to fight a brainwashed Superman because of Hush.

The Batman did have the Riddler reveal that Bruce’s father inadvertently caused the death of a reporter named Edward Elliot. Given his surname, Edward may very well be Hush’s father. Combined with online rumors, it seems likely that the sequel will feature Hush as the main villain, seeking vengeance against Bruce for his father’s role in Edward’s murder.

Reeves also revealed that Bruce Wayne is going to have trouble being the hero Gotham needs.

“This was a time of great turmoil in the city, it’s literally the week after what happened,” he explained to Digital Spy. “Much of the city is in desperation, so police can’t get everywhere, there’s crime everywhere, it’s a very, very dangerous time. [Batman’s] out there trying to grapple with the aftermath of everything that happened, which to some degree he blames himself for.”

In more recent interviews, Reeves has indicated that the sequel will explore that instability, focusing on how both Batman and Bruce Wayne evolve in response to the city’s changing conditions.

Who is in the cast of The Batman Part II?

Robert Pattinson in The Batman. (Warner Bros. Pictures)

When Warner Bros. announced The Batman Part II in April 2022, only Pattinson was confirmed to return as Bruce Wayne/Batman. This seemed pretty obvious, given he plays the franchise’s lead character. Regardless, we’re almost 100% sure these core cast members will return: Jeffrey Wright as James Gordon and Andy Serkis as Alfred Pennyworth.

Reeves confirmed to SFX magazine that Colin Farrell’s character, Oz Cobb/The Penguin, will be part of the movie. Farrell also starred in HBO Max’s spinoff series, The Penguin, which chronicled Cobb’s rise to the top of Gotham’s criminal underworld.

Farrell already shared his expectations for the sequel and what his contract with the franchise entails. “I signed up for three Batman films, but I didn’t know if I’d be in the second film,” he told The Hollywood Reporter. “Matt Reeves is a brilliant writer and an extraordinary filmmaker, and what I’m most excited-slash-nervous about in the second film is not what Oz does – or what predicaments he finds himself in, or what moments of success he gets to experience – but what his voice is.”

“I was told I have five or six scenes. I don’t have any hopes or any expectations. I’m really an open book, and that’s the way I get excited by shit or not,” he continued. “I think sometimes actors, if they have a career that has a certain length of time, they sometimes get to make too many decisions. Which isn’t to say I won’t push back or argue or fight in Oz’s corner – I do believe I know him better than anyone now.”

Tech

Malware campaign lures users with fake Windows Update website

Malwarebytes recently uncovered a new malicious campaign targeting the Windows Update service. Focused on French-speaking users, the campaign uses layered obfuscation techniques to deliver multiple malicious payloads built with legitimate tools. The malware’s primary goal is to steal passwords and other sensitive user data.

Tech

AI-powered hiring startup Humanly acquires Anthill to boost employee engagement

(Image via Humanly)

Bellevue, Wash.-based Humanly, a startup that makes AI-powered interviewing tools for employers, announced it has acquired Anthill, a platform that uses AI to help companies connect with and support frontline employees.

It’s the latest acquisition for Humanly, which scooped up three recruiting technology companies last year — Sprockets, Qualifi, and HourWork. 

Humanly said Tuesday that the Anthill acquisition adds “post-hire engagement capabilities” to its offerings, which include helping organizations attract, screen, and interview job candidates.

Humanly will continue operating the Anthill platform as it explores how to integrate its capabilities into the broader Humanly platform. Terms of the deal were not disclosed.

Founded in 2018, Humanly is led by CEO Prem Kumar. The startup, ranked No. 152 on the GeekWire 200, just announced a $25 million Series B funding round last week, and has raised $52 million to date.

Founded in 2020 by Muriel Clauson Closs, Young-Jae Kim, and Laura Silvester, Chicago-based Anthill built technology designed to help frontline managers and distributed teams stay connected through messaging, feedback, and operational support. Anthill raised approximately $10 million in funding.

Tech

Nvidia unveils open-source quantum AI model Ising

Ising models are designed to help perform quantum error correction and calibration.

Nvidia has announced a new family of open-source quantum AI models on World Quantum Day (14 April).

‘Ising’, billed as the “world’s first” family of open models for building quantum processors, joins a growing list of Nvidia open-source models including ‘Alpamayo’ for autonomous vehicles, ‘Nemotron’ for agentic systems and ‘Cosmos’ for physical AI.

Ising models are designed to help researchers and enterprises perform quantum error correction and calibration.

The family includes Ising Calibration, a vision language model that can interpret and react to measurements from quantum processors, and Ising Decoding, two variants of a 3D convolutional neural network model that can perform real-time decoding for quantum error correction.

Ising Decoding can deliver up to two-and-a-half times faster performance and three times higher accuracy than current open-source industry standards, Nvidia said. The models are available for download on GitHub, Hugging Face and Nvidia’s own platform.
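
Nvidia describes Ising Decoding only at a high level, as a 3D convolutional neural network that decodes error syndromes in real time. The toy PyTorch sketch below illustrates that general architecture class, a small 3D CNN mapping a spacetime volume of syndrome measurements to a logical-error prediction; the layer sizes, input shape and single-output head are assumptions for illustration, not Nvidia’s released model.

```python
# Toy sketch of the architecture class described for Ising Decoding: a small
# 3D convolutional network mapping a spacetime volume of error syndromes to a
# logical-error prediction. NOT Nvidia's model; all sizes are illustrative.
import torch
import torch.nn as nn


class ToySyndromeDecoder(nn.Module):
    def __init__(self, channels: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            # Input: (batch, 1, rounds, height, width) syndrome measurements.
            nn.Conv3d(1, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),   # collapse the spacetime volume
            nn.Flatten(),
            nn.Linear(channels, 1),    # probability of a logical error
        )

    def forward(self, syndromes: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.net(syndromes))


if __name__ == "__main__":
    decoder = ToySyndromeDecoder()
    # A batch of 4 syndrome volumes: 8 measurement rounds on a 5x5 grid.
    batch = torch.randint(0, 2, (4, 1, 8, 5, 5)).float()
    print(decoder(batch).shape)  # torch.Size([4, 1])
```

Nvidia’s actual models, available on GitHub and Hugging Face, will differ substantially in scale, training and output format.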

The Ising models are already in use at the Harvard John A Paulson School of Engineering and Applied Sciences, IQM Quantum Computers, Lawrence Berkeley National Laboratory’s Advanced Quantum Testbed, the UK National Physical Laboratory and the University of California San Diego, as well as other prominent organisations named by the company.

“AI is essential to making quantum computing practical,” said Jensen Huang, the founder and CEO of Nvidia. “With Ising, AI becomes the control plane – the operating system of quantum machines – transforming fragile qubits to scalable and reliable quantum GPU systems.”

Ising joins other Nvidia quantum-specific products, including the CUDA-Q quantum software platform, and the NVQ Link that connects GPU computing with quantum processors.

With major funding rounds and a strong focus on research and development, the quantum sector is expected to grow to more than $11bn in value by 2030.

In Ireland, home-grown start-up Equal1, which announced a $60m round in January, is working towards bringing its rack-mounted quantum processing units to the enterprise market.

Tech

What to expect from Google I/O 2026

We’re sliding into developer conference season and one of the biggest events on the upcoming calendar is Google I/O. This year’s edition is taking place on May 19 and 20. As usual, the in-person element will happen in Mountain View, California, though many of the keynotes and sessions will be livestreamed. Google will surely make its biggest announcements during the opening keynote, which will start at 1PM ET on May 19. A developer keynote will take place later the same day.

As ever, the rumor mill will pick up speed in the leadup to Google I/O. We do have some ideas about what Google will discuss at the event. So let’s take a look at what to expect at Google I/O 2026 (we’ll update this story as we hear more credible rumors).

What’s officially on deck

Google I/O logo (Google)

When it confirmed the dates for this year’s I/O, Google revealed a little bit about what it has in store for us. As you might imagine, AI will be a major focus of the event. Google plans to share its “AI breakthroughs and updates in products across the company, from Gemini to Android, Chrome, Cloud and more,” it wrote in a blog post in February.

There will be news on Gemini model updates as well as agentic coding. Google will have some product demos too.

The company has released its initial schedule of keynotes and sessions, but it doesn’t provide us with a lot of specifics as yet. It has lined up discussions on what’s new in the likes of Google Play, Firebase (a mobile and web app development platform), the Gemma open model family and the open-source app development framework Flutter. Interestingly, there isn’t a dedicated session for Android XR on the schedule just yet.

What to expect

Leaked image of Google’s Aluminium OS (9to5Google)

There haven’t been many credible leaks ahead of Google I/O as yet, but we can make some educated guesses about what to expect from the event. It’s all but certain that we’ll get more details about Android 17 at I/O. Developers need time to tweak their apps ahead of the next major version of the operating system rolling out to everyone if they want to take advantage of new features as soon as possible, and they invariably get a heads up about those at I/O every year. (That said, Google has been moving away from a big annual release approach in favor of juicier Pixel Drops/Android updates, so we may not see some of the new features it unveils at I/O for some time.)

As for other operating systems, Google is planning to meld ChromeOS and Android into a unified platform. This seems to be the project that’s being referred to as Aluminium OS, which we got a first glimpse of earlier this year thanks to some leaks. I/O seems like the perfect venue for Google to start showing that off to the public.

On the AI front, a reveal of Gemini 4 could be on the docket, along with details of the latest Veo text-to-video model. Maybe we’ll hear more about Project Astra, Google’s pitch for a universal AI assistant.

If Google has some consumer hardware to show off at this year’s event, I suspect it’ll be an Android XR device or devices, rather than a Pixel phone or watch. There is a chance that we’ll get a tease of the Google Pixel 11 lineup. But don’t be surprised if we don’t see that or the Pixel Watch 5 until Google’s dedicated hardware event, which has taken place in August or October in recent years (Google will want to stay well away from Apple’s iPhone event, which will likely take place in September as usual).

Here’s hoping for a big surprise or two

A banner image with the Google Beam logo and a person talking to someone who appears to pop slightly out of the screen. (Google)

Sure, Android updates are all well and good. And if Google insists on cramming Gemini and other AI features into all of its products and services, we’ll at least listen to what it has to say about them.

But I have my fingers crossed for some cool surprises. Give us something new from Google X (Alphabet’s moonshot factory, not the thing that was once Twitter), an idea that could be a net benefit for humanity and boost the company’s bottom line at the same time. These events are always more fun when there’s something for us to get genuinely excited about, even if it’s something relatively niche but out there, like the Google Beam 3D video conferencing tech.

Tech

How this master’s programme is building tech leadership talent

Susan Kelly discusses Technology Ireland ICT Skillnet’s tech leadership master’s programme, which is celebrating 20 years in operation.

Last week, Technology Ireland ICT Skillnet announced its plans to award four fully funded places on its MSc in Leadership, Innovation and Technology programme to celebrate 20 years since the programme’s inception.

The funding – called the ‘Big 20 Giveaway’ – is valued at €20,000 per annum per place and will cover all tuition fees of the two-year programme for four candidates.

“The Big 20 Giveaway is a celebration of the programme’s 20-year impact, but also a very practical initiative to support future talent,” says Susan Kelly, network director at Technology Ireland ICT Skillnet.

“What we’re really celebrating is the impact the programme has had with over 300 graduates who have gone on to lead teams, functions and transformation initiatives across Ireland’s technology landscape and beyond.

“For us it is not just about looking back, it’s about investing in what comes next.”

The programme

But what is the course actually about?

The programme, which is delivered at Technological University Dublin, is a part-time, applied master’s designed specifically for experienced professionals working in technology and innovation-led environments.

“Its core objective is to help people move beyond technical expertise and develop the capability to lead, whether that is leading teams, driving innovation or shaping strategy at an organisational level,” says Kelly.

She tells SiliconRepublic.com that the programme focuses on three key areas: leadership capability, innovation and transformation, and business and strategic thinking.

“What really differentiates it is that it is applied, not theoretical,” she says. “Participants work on real challenges from their own organisations, so the learning is immediately relevant and delivers tangible value both to the individual and their employer.”

The programme has been in operation since 2006, and in the 20 years since then, technology has advanced considerably.

Kelly explains that a course such as this is more important than ever today because “the challenge right now isn’t access to technology, it is the ability to lead with it effectively”.

“Organisations are dealing with rapid change driven by AI, digital transformation and global competition,” she says. “The professionals who will stand out are those who can connect technology, strategy and people.”

She adds that the biggest benefit of the programme is that it enables participants to make the shift “from being the person who delivers technology to the person who shapes how and why it’s used”.

“It gives them the language of business and strategy, the confidence to operate at senior levels, and the ability to lead transformation and not just contribute to it.

“For many, it’s the difference between continuing to grow technically and actually stepping into leadership roles with broader organisational impact.”

Who it’s for

With four fully funded places on the programme up for grabs, what constitutes an ideal candidate for the course?

Kelly says the programme is designed for what she calls the “strategic technologist”, which she explains refers to someone who is already established in their career but is ready to take the next step.

“Typically, participants are mid- to senior-level professionals working in roles like software engineering, architecture, product, project management, cybersecurity or IT leadership,” she says. “They are already technically credible but looking to expand into broader leadership or strategic roles.”

She says course participants are often “at a career inflection point”, where they may be leading teams or projects already but “they recognise that technical expertise alone won’t get them to the next level”.

“Many are experiencing a technical ceiling, where they are highly capable but they don’t yet have the strategic, commercial or leadership toolkit to move into senior decision-making roles. This programme is designed specifically to help them break through that barrier.”

For anyone considering applying for one of the funded positions, Kelly says the organisation is looking for motivated, ambitious people who have strong technical or functional expertise and are already operating at a high level in their organisation, and who want to have a greater impact, “not just within their team but across their organisation”.

An important criterion she emphasises is that they’re not looking for people at the start of their careers, or for those seeking purely academic study.

“This is for professionals who are already doing significant work and want to elevate their influence and capability,” she clarifies. “We’re also looking for people who will apply what they learn in real time by bringing challenges from their workplace into the programme and using it as a platform to drive meaningful change.

“Ultimately, the strongest candidates will be those who recognise that they’ve outgrown a purely technical role and are ready to take on the responsibilities and opportunities of leadership.”

Tech

AI is speeding up and improving production, driving millions of creators to invest in better cameras and accessories

  • Smartphone limits drive creators toward microphones, lenses, gimbals, and dedicated cameras
  • Accessory spending rises as creators invest hundreds and thousands into gear upgrades
  • AI-driven production growth exposes capture weaknesses and boosts hardware demand worldwide

Smartphones still dominate video creation, but growing evidence suggests their physical limits are driving a new spending wave on dedicated gear among millions of creators, experts have said.

A new report from Futuresource Consulting estimates the global population of online video creators reached 246 million in 2025 and could grow to 267 million by 2030. That growth is only part of the story, however, as spending patterns and equipment upgrades appear to be the real commercial driver behind the next phase.

Tech

Google app just launched on Windows, and it wants to pull a Spotlight trick from Macs

Google has planted its flag on Windows territory. The Google app for desktop is now globally available in English for Windows users, graduating from its experimental phase in Search Labs.

The browser tab we reflexively open to use Google every five minutes now has a faster, more efficient replacement sitting on the desktop. 

What Does The App Actually Do?

The centerpiece, mind you, is a keyboard shortcut: Alt + Space. It summons a floating search bar over whatever is on the screen, similar to how Cmd + Space summons the Spotlight search on Macs.

Once you summon the search bar, you can search across local computer files, installed apps, Google Drive documents, and the internet in general, all from one place. 

If I were a Windows user (which I was until about three years ago), I would have installed the Google app for the Spotlight-like search experience alone, but my Mac’s Spotlight has been working fine for the same amount of time.

What else can it do?

Quite a bit, actually. Google Lens, the company’s native image-based search tool, is built directly into the new Google app for Windows. It lets users click and search for anything that’s visible on their screen. 

From translating on-screen text to solving a maths problem, you can do such things without copying anything. The app also supports screen sharing within a search session, so users can keep a document or webpage open while asking follow-up questions. 

Of course, the new Google app comes with AI Mode embedded. Answers go beyond blue links: responses are conversational, contextual and connected to the internet for current information, along with appropriate citations.

Google’s global Windows app rollout signals something bigger than convenience; it’s a direct challenge to Microsoft’s dominance over your desktop search experience. Copilot is already embedded in Windows, and now Google is making its presence felt there too. In the future, we might get to see a dedicated Gemini app for Windows.

Tech

Apple is testing four smart glasses designs as it prepares to challenge Meta Ray-Bans

While Meta partnered with Ray-Ban to position its smart glasses as more lifestyle-oriented than experimental, Apple’s design philosophy remains distinctly in-house. In his Power On newsletter, Bloomberg’s Mark Gurman notes that Apple is taking an independent approach, choosing to develop the product internally rather than collaborate with an established eyewear brand. Each…

Tech

GeekWire Awards: The machines of the future, from self-driving earthmovers to space robots

The finalists for Hardware/Robotics/Physical AI of the Year at the 2026 GeekWire Awards. Clockwise from top left: AIM Intelligent Machines; Brinc’s Guardian drone; Starfish Space’s Otter spacecraft; Orbital Robotics; and Augmodo’s Smartbadge. (Company Photos)

An emerging class of startups is pushing the boundaries of what machines can do in the physical world — retrofitting bulldozers to dig on their own, launching drones that beat police cars to 911 calls, outfitting retail workers with spatial computing badges, building robotic arms for spacecraft, and servicing satellites in orbit.

Those are the innovations represented by the finalists for Hardware/Robotics/Physical AI of the Year at the 2026 GeekWire Awards. 

The finalists are: AIM, Augmodo, Brinc, Orbital Robotics, and Starfish Space.

Now in its 18th year, the GeekWire Awards is the premier event recognizing the top leaders, companies and breakthroughs in Pacific Northwest tech, bringing together hundreds of people to celebrate innovation and the entrepreneurial spirit. It takes place May 7 at the Showbox SoDo in Seattle.

Continue reading for information on the Hardware/Robotics/Physical AI of the Year finalists, who were chosen by a panel of independent judges from community nominations.

You can help pick the winner: voting runs through April 16.

AIM Intelligent Machines retrofits heavy earthmoving equipment such as bulldozers and excavators to operate autonomously, using sensors and an edge computing system to build real-time 3D maps of a machine’s surroundings and navigate without a human driver.

Originally focused on mining and construction, the company recently expanded into defense, winning $4.9 million in U.S. Air Force contracts to build and repair military bases and airfields.

The Seattle-area startup announced $50 million in funding in 2025 and was founded in 2021 by engineers with experience at Waymo, SpaceX, Google, Stripe, Tesla and Apple. CEO Adam Sadilek leads the company.

Augmodo makes wearable “Smartbadge” devices for retail store employees that use computer vision and 3D mapping to collect real-time inventory data as workers move through aisles, tracking empty shelves, overstocking and product availability. The approach is designed as a cheaper and more efficient alternative to robot scanners.

The Seattle startup, founded in 2023, raised $37.5 million in a Series A round on top of a previously announced $5.4 million seed round. CEO Ross Finman previously co-founded Escher Reality, which was acquired by Niantic Labs, and spent more than four years at the “Pokémon Go” maker. The company recently hired a new CTO from Microsoft HoloLens and Amazon Alexa and has grown its team nearly fivefold.

Brinc builds drones for police, fire and emergency response agencies, recently unveiling Guardian, the world’s first Starlink-connected drone. Guardian can auto-launch on a 911 call, fly up to eight miles at 60 mph for more than an hour, and deliver payloads such as defibrillators and emergency medication. 

The company’s products are used by more than 900 public safety agencies and more than 20% of SWAT teams in the U.S.

Founded in 2019 by CEO Blake Resnick, the Seattle-based company raised $75 million in a round that included a strategic alliance with Motorola Solutions, bringing total funding to $157.2 million. The company now employs 160 people and is moving to a new 35,000-square-foot headquarters and factory in Seattle’s Queen Anne neighborhood.

Orbital Robotics is developing AI-powered robotic arms for spacecraft, tackling the challenge of manipulating objects in orbit where every movement of an arm causes the spacecraft itself to move in response. 

The Puyallup, Wash.-based startup is also working to assemble a consortium to save NASA’s aging Hubble Space Telescope by building a robotic spacecraft to boost it to a more stable orbit.

Founded in late 2024, the company has raised about $310,000 and is working with a stealthy space venture on an orbital rendezvous project for the U.S. Space Force. Co-founders Aaron Borger, Doug Kohl, Riley Mark and Sohil Pokharna are former Blue Origin engineers.

Starfish Space builds satellite servicing spacecraft designed to autonomously inspect, dock with and reposition satellites in orbit — including satellites that weren’t originally built for on-orbit servicing. Its Otter spacecraft can extend satellite lifespans by boosting them to higher orbits or move them to lower orbits for safe disposal.

The Tukwila, Wash.-based company, founded in 2019 by former Blue Origin engineers Austin Link and Trevor Bennett, recently raised more than $110 million in a Series B round, pushing total funding past $150 million. 

Starfish has completed three demonstration missions in orbit and has Otter missions under contract with the U.S. Space Force, NASA, SES and others, with its first operational mission expected to launch this year.

Astound Business Solutions is the presenting sponsor of the 2026 GeekWire Awards. Thanks also to gold sponsors Amazon Sustainability, Baird, BECU, JLL, First Tech and Wilson Sonsini, and silver sponsor Prime Team Partners.

The event will feature a VIP reception, sit-down dinner and fun entertainment mixed in. Tickets go fast. A limited number of half-table and full-table sponsorships are available. Contact events@geekwire.com to reserve a spot for your team today.
