TL;DR
ZoomInfo beat Q1 earnings but cut full-year revenue guidance by 62 million dollars, announced a 600-job restructuring (20 per cent of headcount), and lost 29 per cent of its stock price as AI-native competitors reprice B2B sales intelligence.
ZoomInfo beat its first-quarter earnings estimates, cut its full-year revenue guidance by 62 million dollars, announced a restructuring that will eliminate 600 jobs, and lost 29 per cent of its stock price in a single trading session. The company reported 310.2 million dollars in revenue, up 1.5 per cent year over year. Adjusted earnings per share came in at 28 cents, beating estimates by nearly nine per cent. None of it mattered. Investors looked at the guidance cut, the 20 per cent headcount reduction, and the 90 per cent net revenue retention rate, and sold.
The stock closed at 4.32 dollars. In November 2021, it traded at 77.35 dollars. ZoomInfo’s market capitalisation has fallen from approximately 25 billion dollars at its peak to under two billion. The company that defined business-to-business sales intelligence is now worth four per cent of what it was three and a half years ago.
First-quarter GAAP revenue was 310.2 million dollars. Adjusted operating income was 109.7 million dollars, a 35 per cent margin. GAAP operating income was 57.9 million dollars, a 19 per cent margin. Cash flow from operations was 114.7 million dollars. Unlevered free cash flow was 119.7 million dollars.
The company closed the quarter with 1,900 customers paying more than 100,000 dollars in annual contract value, up 32 customers year over year but down 21 from the prior quarter. The net revenue retention rate was 90 per cent. That number compresses the entire story into a single metric. A retention rate below 100 per cent means existing customers are spending less than they did a year ago. At 90 per cent, ZoomInfo is losing ten cents of every dollar of existing revenue annually through downgrades and churn.
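The arithmetic behind that retention figure is simple. As a minimal sketch (the dollar amounts below are illustrative, not ZoomInfo's actual cohort data):

```python
def net_revenue_retention(start_arr: float, end_arr_same_cohort: float) -> float:
    """Net revenue retention: what last year's customers pay today,
    divided by what those same customers paid a year ago.
    Expansions, downgrades, and churn are all netted into the numerator."""
    return end_arr_same_cohort / start_arr

# Illustrative cohort: customers paying $1.00 a year ago now pay $0.90
# in aggregate, so ten cents of every existing-revenue dollar is gone.
nrr = net_revenue_retention(start_arr=1.00, end_arr_same_cohort=0.90)
print(f"NRR: {nrr:.0%}")  # → NRR: 90%
```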
Beneath the headline figures, the balance sheet tells a more detailed story. Long-term debt stands at 1.32 billion dollars against 171 million dollars in cash. Unearned revenue, the backlog of contracted but unrecognised revenue, was 479 million dollars. Research and development spending fell 18 per cent year over year to 42.1 million dollars, while cost of service rose 15 per cent to 43.5 million dollars. Interest expense climbed 38 per cent to 13.5 million dollars. Capital expenditure jumped 63 per cent to 24.1 million dollars. Bad debt provisions rose 37 per cent to 5.9 million dollars.
The company recorded 4 million dollars in asset impairments and lease abandonment charges that did not exist a year ago. Restructuring expenses in the income statement were 10 million dollars, nearly double the prior year’s 5.4 million. A litigation settlement cost 3.7 million dollars, up from 900,000 dollars. Goodwill remained unchanged at 1.69 billion dollars, a legacy of the 2019 merger that created ZoomInfo Technologies from DiscoverOrg’s acquisition of the original ZoomInfo.
ZoomInfo repurchased 13.1 million shares at an average price of 6.91 dollars, spending 90.5 million dollars. The buyback consumed more cash than the company’s GAAP operating income for the quarter. When a company spends more on repurchasing its own stock than it earns from operations, it is making a statement about what it believes its shares are worth. Investors, who sent the stock below five dollars, disagreed.
The full-year revenue forecast was cut from a range of 1.247 billion to 1.267 billion dollars to a new range of 1.185 billion to 1.205 billion dollars. At the midpoint, that is a reduction of approximately 62 million dollars, or five per cent. Prior adjusted operating income guidance of 456 to 466 million dollars was lowered to 437 to 447 million dollars. Unlevered free cash flow guidance fell from 435 to 465 million dollars to 400 to 420 million dollars, a 40 million dollar cut at the midpoint.
Adjusted earnings per share guidance was maintained at 1.10 to 1.12 dollars, but only because the share count dropped from 325 million to 315 million through buybacks. The earnings-per-share number held steady because the denominator shrank, not because the numerator improved.
Second-quarter guidance of 300 to 303 million dollars implies a sequential decline from the first quarter and a year-over-year decrease of approximately 1.7 per cent. The pattern is a company whose upmarket business is growing modestly while its downmarket base erodes. The downmarket segment declined 10 per cent for a second consecutive quarter. Management has stated its goal is to reach an 80/20 split between upmarket and downmarket revenue, effectively accepting that the smaller customer segment will continue to shrink.
CEO Henry Schuck framed the strategy around data and AI: “In a world that is increasingly driven by AI and intelligent automation, ZoomInfo data and our go-to-market context is the ultimate competitive advantage.” The argument is that ZoomInfo’s database of more than 100 million companies and 500 million contacts, combined with billions of intent signals, is the durable asset. The market’s response suggests doubt about whether data alone justifies a premium subscription when AI-native alternatives are assembling the same insights at a fraction of the cost.
On 5 May, ZoomInfo’s board approved the 2026 Restructuring Programme. The company will eliminate approximately 600 positions globally, roughly 20 per cent of its ending first-quarter headcount. Approximately one quarter of the impacted roles will be reallocated to other locations, resulting in a net reduction of around 450 positions. Three hundred and forty employees in the United States, India, and the United Kingdom were notified immediately, primarily in go-to-market and general and administrative functions.
The company will close its entire Israel site by the end of 2026, transferring operations to the United States, Canada, Ireland, and India. Pre-tax restructuring charges are estimated at 45 to 60 million dollars, primarily cash-based, with the majority recognised in the second and third quarters. The programme is expected to deliver 60 million dollars in annual run-rate operating expense savings.
Schuck’s internal email to employees described the restructuring as a plan to simplify operations, accelerate the move upmarket, and reduce resources allocated to the downmarket segment. He noted that the industry is moving toward consumption-based pricing and that the company’s largest enterprise customers are asking for a “deeper, forward-deployed engineering motion.” The savings will be redirected toward the platform, product roadmap, and customer-facing engineering capacity. Impacted employees receive cash severance, some equity acceleration, and subsidised medical premiums in the United States.
The scale of the restructuring is a signal. A company that cuts 20 per cent of its workforce is not fine-tuning. It is reorganising around a thesis that its current structure was built for a market that no longer exists. The 60 million dollars in annual savings is nearly equal to the 62 million dollar revenue guidance cut. ZoomInfo is not just reducing costs. It is trading a revenue line it believes is structurally declining for operating leverage it believes will sustain margins through the transition.
ZoomInfo’s competitive landscape has fragmented. Apollo.io offers a database of more than 275 million contacts with built-in sequencing for 49 dollars per user per month. Clay orchestrates data enrichment across more than 100 providers using waterfall logic, pulling the best available information from ZoomInfo, Apollo, and dozens of other sources automatically. The sales technology stack in 2026 increasingly treats contact databases as interchangeable inputs rather than differentiated platforms.
AI-native enterprise spending surged 94 per cent year over year while traditional SaaS growth cooled to eight per cent. Approximately 285 billion dollars in market capitalisation was erased from software-as-a-service companies in a single 48-hour window earlier this year. The repricing is not unique to ZoomInfo, but ZoomInfo is more exposed than most because its core product, a database of business contacts and company information, is the category most directly threatened by AI agents that can assemble the same data on the fly.
Every SaaS company is building AI features, and ZoomInfo is no exception. Its Copilot product, launched in early 2024, reached 250 million dollars in annual contract value within 18 months. Copilot uses AI to recommend next-best actions, generate outreach, and monitor buyer signals. The product has been the company’s most successful launch. But it also raises the question that haunts every legacy SaaS platform building an AI layer: if the AI is the value, what is the database worth on its own?
Palantir’s earnings arrived in the middle of an AI software sell-off that tested whether any enterprise software company could sustain its valuation against the expectation that AI will compress margins across the industry. ZoomInfo’s answer is a company earning 35 per cent adjusted operating margins while its revenue flatlines, its debt exceeds its cash by a factor of eight, and its stock trades at a fraction of its historical value. The margins are real. The growth is not.
SaaStock, the ten-year-old SaaS conference brand, retired its name and relaunched as Shift AI, a rebrand that its founder described as a response to the post-SaaS era. Seventy per cent of enterprises now demand usage-based or outcome-based contracts. Per-seat adoption has dropped from 21 per cent to 15 per cent of SaaS companies in the past twelve months. Schuck’s own internal email acknowledged the shift toward consumption-based pricing. The conference that celebrated the model ZoomInfo was built on has concluded that the model no longer defines the market.
The case that SaaS is not dead rests on the argument that AI features are additive rather than substitutive, that enterprises will pay more for software enhanced by AI rather than replacing the software with AI entirely. ZoomInfo’s Copilot is evidence for this argument. Its 250 million dollar ACV demonstrates that customers will pay for AI capabilities layered on top of a trusted data platform.
The case against ZoomInfo is that the data platform itself is becoming a commodity. When Clay can waterfall across a hundred data providers and an AI agent can research a prospect in seconds by crawling the open web, the value of a proprietary database diminishes with every improvement in the models that can replicate its output. The retention rate of 90 per cent suggests customers are already making this calculation, spending less each year as cheaper alternatives capture the margin.
Schuck built ZoomInfo from a bootstrapped data company into a platform that peaked at 25 billion dollars in market value. The company still generates more than 100 million dollars in quarterly free cash flow. It is not failing. It is restructuring around a bet that its upmarket enterprise customers and its AI layer will sustain the business while the downmarket base, the segment that made ZoomInfo ubiquitous, erodes. The beat did not matter. The guidance did. The 600 jobs did. The stock at 4.32 dollars is the market’s verdict on what a database is worth in the age of AI agents.

Satya Nadella drew a historical parallel to Microsoft’s early PC partnership with IBM as the tech giant prepared to invest $10 billion more in OpenAI in April 2022 — writing in an internal email that he didn’t want Microsoft to become IBM while OpenAI became the next Microsoft.
That email, presented as evidence by Elon Musk’s lead trial attorney Steven Molo, was one of the new details to emerge from the Microsoft CEO’s turn on the stand Monday morning in Musk’s lawsuit against Sam Altman, OpenAI and Microsoft in federal court in Oakland.
Nadella described the decision to invest in OpenAI as a “one-way door,” saying Microsoft couldn’t build two supercomputers — one for itself and one for OpenAI — and had to accept the opportunity cost of diverting scarce computing resources away from its own AI teams.
“We were outsourcing essentially a lot of the core IP development and taking a massive dependency on OpenAI,” Nadella testified, explaining that he wanted to ensure Microsoft had access to the intellectual property generated by the partnership, and continued to build its own knowledge and capabilities at the same time.
Board considerations unredacted: The testimony also provided new information from messages among Microsoft execs and Altman in the days following his brief ouster as OpenAI CEO in 2023. The names of potential candidates from that thread were previously redacted in public court records.
From Nadella’s testimony Monday, it emerged that two potential OpenAI board candidates for whom he voiced his disapproval were Diane Greene, the former Google Cloud CEO, and Bing Gordon, the veteran gaming exec and Kleiner Perkins partner previously on Amazon’s board. Nadella said he objected to both as potential candidates because of their ties to companies that compete directly with Microsoft in AI.
He said the discussions were initiated by Altman and other OpenAI insiders seeking his input, and that the board could have ignored his suggestions. One candidate he suggested, former Gates Foundation CEO Sue Desmond-Hellman, was later appointed to the board.
Musk argues that Microsoft’s efforts to protect its interests in the OpenAI partnership came at the expense of the OpenAI nonprofit’s original mission to develop AI for the benefit of humanity. His lawsuit alleges that Microsoft aided and abetted a breach of the charitable trust that governed OpenAI’s founding, misusing his original investment, estimated at $38 million to $44 million.
Enabling a massive nonprofit: Nadella offered a different view on the stand, describing a collaboration built on mutual benefit in which Microsoft took on enormous risk to support a fledgling AI lab that no one else was willing to fund. He said the partnership had created “one of the largest nonprofits in the world,” enabling products like ChatGPT and Copilot that put AI tools in the hands of millions of people.
Under cross-examination, however, Nadella acknowledged that he was not aware of any full-time employees at the OpenAI nonprofit before March 2026, or of any grants, research, or open-sourced technology it had produced.
One of Microsoft’s attorneys in the case, Jay Jurata of Dechert, also sought to undermine Musk’s standing in the case. He walked Nadella through three major milestones in the Microsoft-OpenAI partnership — the 2019 announcement, a 2020 exclusive license to GPT-3, and the 2023 $10 billion investment — and asked each time whether Musk had reached out to object.
Each time, Nadella said no. He and Musk have each other’s phone numbers, he added.
Microsoft estimates the OpenAI return: Musk’s attorney, on cross-examination, sought to show the benefits Microsoft has received from the partnership. He walked Nadella through a January 2023 memo from Microsoft President Brad Smith to the company’s board, projecting a $92 billion return on Microsoft’s cumulative $13 billion investment in OpenAI.
According to the testimony, a footnote in the memo showed a 20% annual increase kicking in starting in 2025, which could roughly double the return within four years.
Under the restructured deal announced last year, the caps on Microsoft’s returns were removed entirely. Microsoft and OpenAI also recently amended the partnership to make Microsoft’s IP license non-exclusive and open all OpenAI products to any cloud provider.
[Update: The Information reported Monday that revenue-sharing payments from OpenAI to Microsoft under the new deal are capped at $38 billion.]
Asked about the memo on the witness stand, Nadella confirmed the figures but noted that the investment carried real risk, saying the return could just as easily have been zero.
The trial, before U.S. District Judge Yvonne Gonzalez Rogers, is expected to continue through May 21, with OpenAI CEO Sam Altman also expected to take the stand this week.
GeekWire reported on today’s proceedings via the court’s audio livestream. Correction: The name of Microsoft’s outside counsel for Nadella’s testimony has been corrected since publication.
A doctor in a hospital exam room watches as a medical transcription agent updates electronic health records, prompts prescription options, and surfaces patient history in real time. A computer vision agent on a manufacturing line is running quality control at speeds no human inspector can match. Both generate non-human identities that most enterprises cannot inventory, scope, or revoke at machine speed.
That is the structural problem keeping agentic AI stuck in pilots. Not model capability. Not compute. Identity governance.
Cisco President Jeetu Patel told VentureBeat at RSAC 2026 that 85% of enterprises are running agent pilots while only 5% have reached production. That 80-point gap is a trust problem. The first questions any CISO will ask: which agents have production access to sensitive systems, and who is accountable when one acts outside its scope? IANS Research found that most businesses still lack role-based access control mature enough for today’s human identities, and agents will make it significantly harder. The 2026 IBM X-Force Threat Intelligence Index reported a 44% increase in attacks exploiting public-facing applications, driven by missing authentication controls and AI-enabled vulnerability discovery.
Michael Dickman, SVP and GM of Cisco’s Campus Networking business, laid out a trust framework in an exclusive interview with VentureBeat that security and networking leaders rarely hear stated this plainly. Before Cisco, Dickman served as Chief Product Officer at Gigamon and SVP of Product Management at Aruba Networks.
Dickman said that the network sees what other telemetry sources miss: actual system-to-system communications rather than inferred activity. “It’s that difference of knowing versus guessing,” he said. “What the network can see are actual data communications … not, I think this system needs to talk to that system, but which systems are actually talking together.” That raw behavioral data, he added, becomes the foundation for cross-domain correlation, and without it, organizations have no reliable way to enforce agent policy at what he called “machine speed.”
Dickman argues that agentic AI breaks a pattern he says defined every prior technology transition: deploy for productivity first, bolt on security later.
“I don’t think trust is one of those things where the business productivity comes first, and the security is an afterthought,” Dickman told VentureBeat. “Trust actually is one of the key requirements. Just table stakes from the beginning.”
When agents merely observe data and recommend decisions, the consequences stay contained. Execution changes everything. When agents autonomously update patient records, adjust network configurations, or process financial transactions, the blast radius of a compromised identity expands dramatically.
“Now more than ever, it’s that question of who has the right to do what,” Dickman said. “The who is now much more complicated because you have the potential in our reality of these autonomous agents.”
Dickman breaks the trust problem into four conditions. The first is secure delegation, which starts by defining what an agent is permitted to do and maintaining a clear chain of human accountability. The second is cultural readiness; he pointed to alert fatigue as a case study. The traditional fix, Dickman noted, was to aggregate alerts, so analysts see fewer items. With agents capable of evaluating every alert, that logic changes entirely.
“It is now possible for an agent to go through all alerts,” Dickman said. “You can actually start to think about different workflows in a different way. And then how does that affect the culture of the work, which is amazing.”
The third is token economics: every agent action carries a real computational cost. Dickman sees hybrid architectures as the answer, where agentic AI handles reasoning while traditional deterministic tools execute actions. The fourth is human judgment. His team, for example, used an AI tool to draft a product requirements document; the agent produced 60 pages of repetitive filler, and it took extensive human editing to turn the output into something relevant. “There’s no substitute for the human judgment and the talent that’s needed to be dextrous with AI,” he said.
Most enterprise data today is proprietary, internal, and fragmented across observability tools, application platforms, and security stacks. Each domain team builds its own view. None sees the full picture. Network telemetry cuts across those silos, recording which systems are actually talking to each other rather than which ones are assumed to.
That telemetry grows more valuable as IoT and physical AI proliferate. Computer vision agents analyzing shopper behavior and running factory-floor quality control generate highly sensitive data that demands precise access controls.
“All of those things require that trust that we started with, because this is highly sensitive data around like who’s doing what in the shop or what’s happening on the factory floor,” Dickman said.
“It’s not only aggregation, but actually the creation of knowledge from the network,” Dickman said. “There are these new insights you can get when you see the real data communications. And so now it becomes what do we do first versus second versus third?”
That last question reveals where Dickman’s focus lands: the strategic challenge is sequencing, not capability.
“The real power comes from the cross-domain views. The real power comes from correlation,” Dickman said. “Versus just aggregation and deduplication of alerts, which is good, but it’s a little bit basic.”
This is where he sees the most common pitfall. Team A builds Agent A on top of Data A. Team B builds Agent B on top of Data B. Each silo produces incrementally useful automation. The cross-domain insight never materializes.
Independent practitioners validate the pattern. Kayne McGladrey, an IEEE senior member, told VentureBeat that organizations are defaulting to cloning human user profiles for agents, and permission sprawl starts on day one. Carter Rees, VP of AI at Reputation, identified the structural reason. “A significant vulnerability in enterprise AI is broken access control, where the flat authorization plane of an LLM fails to respect user permissions,” Rees told VentureBeat. Etay Maor, VP of Threat Intelligence at Cato Networks, reached the same conclusion from the adversarial side. “We need an HR view of agents,” Maor told VentureBeat at RSAC 2026. “Onboarding, monitoring, offboarding.”
Use this matrix to evaluate any platform or combination of platforms against the five trust gaps Dickman identified. Note that the enforcement approaches in the right column reflect Cisco’s framework.
| Trust gap | Current control failure | What network-layer enforcement changes | Recommended action |
| --- | --- | --- | --- |
| Agent identity governance | IAM built for human users cannot inventory, scope, or revoke agent identities at machine speed | Agentic IAM registers each agent with defined permissions, an accountable human owner, and a policy-governed access scope | Audit every agent identity in production. Assign a human owner. Define permitted actions before expanding the scope |
| Blast radius containment | Host-based agents and perimeter controls can be bypassed; flat segments give compromised agents lateral movement | Microsegmentation enforces least-privileged access at the network layer, limiting blast radius independent of host-level controls | Implement microsegmentation for every agent-accessible system. Start with the highest-sensitivity data (PHI, financial records) |
| Cross-domain visibility | Siloed observability tools create fragmented views; Team A’s agent data never correlates with Team B’s security telemetry | Network telemetry captures actual system-to-system communications, feeding a unified data fabric for cross-domain correlation | Unify network, security, and application telemetry into a shared data fabric before deploying production agents |
| Governance-to-enforcement pipeline | No formal process connecting business intent to agent policy to network enforcement | Policy-to-enforcement pipeline translates governance decisions into machine-speed network rules | Establish a formal pipeline from business-intent definition to automated network policy enforcement |
| Cultural and workflow readiness | Organizations automate existing workflows rather than redesigning for agent-scale processing | Network-generated behavioral data reveals actual usage patterns, informing workflow redesign | Run a 30-day telemetry capture before designing agent workflows. Build around observed data, not assumptions |
Dickman grounded his framework in a scenario from his own life. A family member recently broke an ankle, which put him in a hospital exam room watching a medical transcription agent update the EHR, prompt prescription options, and surface patient history in real time. The doctor approved each decision, but the agent handled tasks that previously required manual entry across multiple systems.
The security implications hit differently when it is a loved one’s records on the screen.
“I would call it do governance slowly. But do the enforcement and implementation rapidly,” he said. “It must be done in machine speed.”
It starts with agentic IAM, where each agent is registered with defined permitted actions and a human accountable for its behavior.
“Here’s my set of agents that I’ve built. Here are the agents. By the way, here’s a human who’s accountable for those agents,” Dickman said. “So if something goes wrong, there’s a person to talk to.”
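That registration requirement can be sketched in a few lines. The class and field names below are illustrative assumptions for this article, not Cisco's API:

```python
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    """One entry in an agentic-IAM registry: every agent carries a scoped
    permission set and a named human owner, so accountability and
    revocation are possible at machine speed."""
    agent_id: str
    owner: str                       # the accountable human
    permitted_actions: set = field(default_factory=set)
    active: bool = True

    def is_allowed(self, action: str) -> bool:
        return self.active and action in self.permitted_actions

    def revoke(self) -> None:        # one switch to cut off a misbehaving agent
        self.active = False

scribe = AgentIdentity("med-transcribe-01", owner="j.doe@example.com",
                       permitted_actions={"read_ehr", "draft_note"})
assert scribe.is_allowed("draft_note")
assert not scribe.is_allowed("write_prescription")  # outside defined scope
scribe.revoke()
assert not scribe.is_allowed("draft_note")
```

The point of the sketch is the shape of the record, not the mechanics: permissions are enumerated up front, and the owner field means there is always "a person to talk to."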
That identity layer feeds microsegmentation — a network-enforced boundary Dickman says enforces least-privileged access and limits blast radius.
“Microsegmentation guarantees that least-privileged access,” Dickman said. “You’re not relying on a bunch of host agents, which can be bypassed or have other issues.”
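At the network layer, microsegmentation reduces to an explicit allow-list of flows with everything else denied by default. A toy sketch, with hypothetical segment names:

```python
# Default-deny segmentation policy: a flow is permitted only if the
# (source segment, destination segment, port) tuple is explicitly allowed.
ALLOWED_FLOWS = {
    ("agent-transcription", "ehr-db", 5432),    # agent may reach the EHR database
    ("agent-transcription", "audit-log", 514),  # and write audit events
}

def flow_permitted(src: str, dst: str, port: int) -> bool:
    return (src, dst, port) in ALLOWED_FLOWS

assert flow_permitted("agent-transcription", "ehr-db", 5432)
# Lateral movement to an unrelated segment is denied by default,
# regardless of what any host-based control says:
assert not flow_permitted("agent-transcription", "finance-db", 5432)
```

Because the check happens in the network path rather than on the host, a compromised agent cannot bypass it by tampering with its own runtime.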
If the governance model works for a medical transcription agent handling patient records in an emergency department, it scales to less sensitive enterprise use cases.
1. Force cross-functional alignment now. Define what the organization expects from agentic AI across line-of-business, IT, and security leadership. Dickman sees the human coordination layer moving more slowly than the technology. That gap is the bottleneck.
2. Get IAM and PAM governance production-ready for agents. Dickman called out identity and access management and privileged access management specifically as not mature enough for agentic workloads today. Solidify the governance before scaling the agents. “That becomes the unlock of trust,” he said. “Because when the technology platform is ready, you then need the right governance and policy on top of that.”
3. Adopt a platform approach to networking infrastructure. A platform strategy enables data sharing across domains in ways fragmented point solutions cannot. That shared foundation is what makes the cross-domain correlation in the trust gap assessment above operationally real.
4. Design hybrid architectures from the start. Agentic AI handles reasoning and planning. Traditional deterministic tools execute the actions. Dickman sees this combination as the answer to token economics: it delivers the intelligence of foundation models with the efficiency and predictability of conventional software. Do not build pure-agent systems when hybrid systems cost less and fail more predictably.
5. Make the first use cases bulletproof on trust. Pick two or three high-value use cases and build them with role-based access control, privileged access management, and microsegmentation from day one. Even modest deployments delivered with best practices intact build the organizational confidence that accelerates everything after.
“You can guarantee that trust to the organization, and that will unleash the speed,” Dickman said.
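The hybrid split in point 4, where the agent reasons but only deterministic tools execute, can be sketched as a plan dispatcher. The tool names here are invented for illustration:

```python
# Hybrid architecture sketch: an AI agent emits a plan, but execution is
# routed through a fixed registry of deterministic tools, so every action
# is auditable and anything outside the registry is refused, not improvised.
def restart_service(name: str) -> str:
    return f"restarted {name}"

def open_ticket(summary: str) -> str:
    return f"ticket opened: {summary}"

TOOL_REGISTRY = {"restart_service": restart_service, "open_ticket": open_ticket}

def execute_plan(plan: list[dict]) -> list[str]:
    results = []
    for step in plan:
        tool = TOOL_REGISTRY.get(step["tool"])
        if tool is None:
            results.append(f"rejected unknown tool: {step['tool']}")
            continue
        results.append(tool(**step["args"]))
    return results

# A plan the reasoning layer might emit, including an out-of-scope step:
plan = [{"tool": "restart_service", "args": {"name": "web-01"}},
        {"tool": "delete_database", "args": {"name": "prod"}}]
print(execute_plan(plan))
# → ['restarted web-01', 'rejected unknown tool: delete_database']
```

The model spends tokens only on planning; the actions themselves run as ordinary, predictable code, which is the cost and reliability argument Dickman makes for the pattern.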
That is the structural insight running through every section of this conversation. The 85% of enterprises stuck in pilot mode are not waiting for better models. They are waiting for the identity governance, the cross-domain visibility, and the policy enforcement infrastructure that makes production deployment defensible. Whether they build on Cisco’s platform or assemble their own, Dickman’s framework holds: identity governance, cross-domain visibility, policy enforcement. None of those prerequisites is optional.
The organizations that satisfy them first will deploy agents at a pace the rest cannot match, because every new agent inherits the trust architecture the first ones required. The ones still debating whether to start will watch that gap widen. Theoretical trust does not ship.
The Musk v. Altman trial entered its third week Monday, with Microsoft CEO Satya Nadella and former OpenAI co-founder and renowned AI researcher Ilya Sutskever taking the stand. Nadella testified that Elon Musk never raised concerns to him that Microsoft’s investments in OpenAI violated any special commitments, and said he viewed the partnership as clearly commercial from the start. He also described OpenAI’s 2023 board crisis as “amateur city.”
Meanwhile, Sutskever testified that he had raised concerns about Sam Altman because he feared OpenAI could be “destroyed.” He expressed concerns about Altman’s behavior to the board, in part because he said he felt “a great deal of ownership” over the startup. “I simply cared for it, and I didn’t want it to be destroyed,” Sutskever said. CNBC reports: Nadella said he was “very proud” that Microsoft took the risk to invest in OpenAI when “no one else was willing” to bet on the fledgling lab. Musk, who testified late last month, said Microsoft’s $10 billion investment was the key tipping point that made him believe OpenAI was violating its nonprofit mission. He testified that the scale of the investment bothered him, and it prompted him to open a legal investigation into OpenAI. “I was concerned they were really trying to steal the charity,” Musk said from the stand.
Nadella said he did not believe Microsoft’s investments in OpenAI were donations, and that there was a clear commercial element to their partnership from the outset. He said during the partnership’s early years, Microsoft gave OpenAI sharp discounts on computing resources, and Microsoft believed it would reap marketing benefits from doing so. During a separate video deposition that was played on Monday morning, Michael Wetter, a corporate development executive at Microsoft, said the company has recognized approximately $9.5 billion in revenue to date through its partnership with OpenAI as of March 2025.
[…] Nadella said he was “pretty surprised” by the board’s decision [to fire Altman in November 2023], and that his priority was to try and figure out how to maintain continuity for Microsoft customers. Immediately after Altman was removed, Nadella said he made an effort to learn more about what happened, adding that he suspected jealousy and poor communication were at play. During conversations with OpenAI board members after the firing, Nadella said he was simply trying to understand the language in OpenAI’s statement about Altman being “not consistently candid” while communicating with the board. That language, Nadella said, “just didn’t sort of suffice, because this is the CEO of a company that we are invested in and we’re deeply partnered with, and so I felt that they could have explained to me what are the incidents or what is the detail behind it.” There must have been specific incidents behind the decision to push out Altman, Nadella reasoned, and he wanted more depth from the board members after the remark about candor, but no such information was available, he said. “It was sort of amateur city, as far as I’m concerned,” Nadella testified.
[…] Musk testified that he is not entirely against OpenAI having a for-profit unit, but he said it became “the tail wagging the dog.” He repeatedly accused Altman and Brockman of enriching themselves from a charity while also reaping the positive associations that come from running a nonprofit. “Microsoft has their own motivations, and that would be different from the motivations of the charity,” Musk said from the stand. “All due respect to Microsoft, do you really want Microsoft controlling digital superintelligence?”
During a videotaped deposition shown in court last week, former OpenAI director Tasha McCauley recalled a discussion with Nadella and her fellow board members after the 2023 decision to dismiss Altman as OpenAI’s CEO. “To the best of my recollection, Satya wanted to restore things to as they had been,” McCauley said. The board members didn’t think that was the right move, she said. But as a court witness on Monday, Nadella said he never demanded that the board reinstate Altman as OpenAI CEO. Recap:
Sam Altman Had a Bad Day In Court (Day Eight)
Sam Altman’s Management Style Comes Under the Microscope At OpenAI Trial (Day Seven)
Brockman Rebuts Musk’s Take On Startup’s History, Recounts Secret Work For Tesla (Day Six)
OpenAI President Discloses His Stake In the Company Is Worth $30 Billion (Day Five)
Musk Concludes Testimony At OpenAI Trial (Day Four)
Elon Musk Says OpenAI Betrayed Him, Clashes With Company’s Attorney (Day Three)
Musk Testifies OpenAI Was Created As Nonprofit To Counter Google (Day Two)
Elon Musk and OpenAI CEO Sam Altman Head To Court (Day One)
The rapid integration of AI into healthcare devices raises ‘fresh ethical questions’, says Eoin O’Cearbhaill.
Dr Eoin O’Cearbhaill is the most recent recipient of the NovaUCD Innovation Award, a recognition given to those with success stories in commercialising research emerging from University College Dublin (UCD).
O’Cearbhaill is an associate professor in biomedical engineering at the UCD School of Mechanical and Materials Engineering, where his research focuses on developing minimally invasive medical technologies for diagnosing and treating disease.
He is also the director of the Centre for Biomedical Engineering, and leads the UCD Medical Device Design Group within the centre, which aims to address clinical needs by developing novel medical devices with real-world applications.
The research group has assisted in the creation of spin-out companies LaNua Medical, Latch Medical and Lia Eyecare, and has filed more than a dozen patents.
O’Cearbhaill has also been a member of multiple Enterprise Ireland Commercialisation Fund projects. He is a funded investigator with Research Ireland Centres Cúram and I-Form, and has consulted with a number of businesses including Boston Scientific, NeoGraft Technologies, Johnson & Johnson and CroíValve.
“A major focus of my work has been translating research from the lab into technologies that can ultimately improve patient care, including through university spin-outs and collaborations with clinicians, researchers and industry partners,” O’Cearbhaill tells SiliconRepublic.com.
I was fortunate to grow up with parents who were always supportive of education and curiosity. Rather than one defining moment, it was a gradual realisation that research offered the opportunity to contribute something new.
Some time spent working in the medical device industry was also influential, as it gave me a practical understanding of how engineering can directly improve patient care. It showed me the importance of developing technologies that are not only innovative, but manufacturable, reliable and capable of making a real clinical impact.
Our research focuses broadly on minimally invasive technologies for the diagnosis and treatment of disease. Ideally, we begin by identifying an unmet clinical need, often in partnership with clinicians, and then develop practical device concepts that could address it.
My own expertise is in mechanical-based design, but coming up with the best solution relies on teamwork. We work closely with experts in electronic engineering, materials science, pharmaceuticals and clinical medicine, both internal and external to our group, to bring the right mix of skills to each challenge.
Research rarely follows a straight line. Sometimes a technology developed for one purpose reveals an unexpected opportunity elsewhere. Being able to adapt and pivot is often essential if we want to maximise the chances of clinical translation and real patient impact.
Biomedical engineering research can improve lives directly through better diagnostics, smarter treatments and less invasive procedures. It also plays an important economic role. Research attracts talented people from around the world, helps build high-value industries, and creates the environment where the next generation of Irish start-ups can emerge.
Ireland has the ingredients to become a global leader in next-generation medtech if we continue to invest in talent, translational research and entrepreneurship.
My team and I have been fortunate to work with talented researchers and entrepreneurs who have already spun technologies out of UCD into companies, with the support of NovaUCD.
These innovations span areas such as microneedle-based drug delivery, tumour treatment technologies and device-based therapies for dry eye disease and include spin-outs such as Latch Medical, LaNua Medical and Lia Eyecare.
Medical devices require robust regulation, quality systems and clinical validation, so the path to commercialisation can be challenging. But when successful, it enables research discoveries to become real products that can benefit patients at scale.
UCD Medical Device Design Group. Image: University College Dublin
One of the biggest challenges is attracting and retaining outstanding talent. PhD researchers and postdoctoral fellows are central to scientific progress, so it is important that stipends and salaries remain competitive with industry, particularly during a cost-of-living crisis.
Another challenge is the rapid integration of AI and smart technologies into healthcare devices. This creates exciting new opportunities, but also raises fresh technical, regulatory and ethical questions.
People sometimes assume that creating a medical device is only about designing the hardware and, increasingly in next-generation smart devices, the software. In reality, one of the most important challenges is understanding how that device interacts with the body over time.
We are particularly interested in the interface between implantable or wearable devices and surrounding tissue. Understanding that relationship is critical if we want devices to function reliably and safely over the long term.
There is enormous opportunity in remote patient monitoring, technologies that enable more outpatient procedures, and devices that shorten recovery times.
These kinds of innovations can improve patient experience while also helping healthcare systems manage growing demand more efficiently.
Bonus coupons are driving M5 MacBook Air prices lower this week, with both 13-inch and 15-inch models marked down, including configs with 24GB of RAM and 1TB of storage.
B&H’s MacBook Air sale delivers bonus discounts in the form of in-cart coupons on top of already reduced prices. With both 13-inch and 15-inch models on sale, now is a great time to pick up a thin-and-light notebook that was just released and is equipped with Apple’s M5 chip.
Save up to $180 on M5 MacBook Airs
The B&H deals above beat Amazon’s prices for the same systems, but you can find even more discounts in our 13-inch MacBook Air M5 Price Guide and 15-inch MacBook Air M5 Price Guide.
If you’d like to learn more about the M5 release, check out our hands-on M5 MacBook Air review.
For years, security experts warned that AI would eventually give hackers a dangerous new edge. That moment has arrived.
Google’s Threat Intelligence Group has published a report confirming that a criminal hacking group used an AI model to discover a zero-day vulnerability and nearly pulled off a mass cyberattack. Google says it caught and stopped the operation before the hackers could deploy the exploit at scale.
The exploit targeted a popular open-source web-based system administration tool, the kind businesses use to remotely manage servers, employee accounts, and security settings.
Had it gone undetected, it would have let hackers bypass two-factor authentication, which is often the last line of defense protecting accounts.

The attackers planned to deploy it in a mass exploitation event targeting multiple organizations at once. Google alerted the tool’s developer in time for a patch to be issued before any damage was done.
The company declined to name the hacking group, the specific software targeted, or which AI model was used, but confirmed it was not Google’s own Gemini.
According to Google, groups linked to China and North Korea have also shown significant interest in using AI tools like OpenClaw for vulnerability discovery.

The Google attack is alarming, but it’s far from isolated. Georgia Tech researchers recently uncovered VillainNet, a hidden backdoor that embeds itself inside a self-driving car’s AI and works 99% of the time when triggered.
Meanwhile, a Korean research team showed that AI models can be reverse-engineered remotely through walls using a small antenna, with no system access needed. Recently, a group of Discord users bypassed access controls to reach Anthropic’s restricted Mythos model through a third-party vendor environment.
On the defense side, a growing discipline called AI pentesting is emerging to stress-test how language models behave when exposed to adversarial inputs, but the field is still in its early stages.
As part of a push to ensure as many Australians as possible are connected to fast, reliable internet, for the last few years NBN Co has been offering homes that currently connect to the fixed-line network via older copper-based technologies – such as fibre to the node (FTTN) and fibre to the curb (FTTC) – the opportunity to upgrade to superior fibre to the premises (FTTP) technology for free.
Up until now, the FTTP upgrade has been entirely optional for eligible premises, with the only caveat being that would-be upgraders need to sign up for one of the fastest NBN plans through a supporting internet service provider (ISP). However, from July 1, 2026 that latter requirement is being scrapped for certain premises – and the upgrade itself will no longer be optional.
If you’ve been holding out on the upgrade, your hand may soon be forced: NBN Co has announced a new Targeted Upgrade program that it says will require 130,000 specific homes and businesses to upgrade from copper-based services to full-fibre technology. The program is currently scheduled to start midway through 2027.
NBN Co says the premises identified to receive the upgrade will start being sent official notifications from July 2027. If your home or business is one of those identified for the program, then there’s no real downside to taking it up – optical fibre NBN connections support massively faster speeds and are generally more reliable than older legacy technologies.
What’s more, in the official press release, NBN Co has made it clear that it plans to eventually disconnect all copper services at the premises the program targets, saying “the first suspensions of legacy copper services where a fibre upgrade order has not been placed are not expected to occur until January 2028.”
In short, if you don’t take up the free upgrade to full fibre, then you’ll eventually be left without a fixed-line internet connection. Reminders will be sent at six months before disconnection, three months before and 30 days before. If you ignore all of them, then your service will ultimately be suspended.
NBN Co adds that safeguards will be in place, however, including “the option to extend or defer before a service is suspended, and case‑managed support, particularly for customers who need additional assistance.”
If you know your home currently connects via FTTN or FTTC technology, then now’s a good time to begin preparing for the upgrade. Even if your home isn’t one of the initial 130,000 selected, it likely will be eventually.
Whether you’re on the list or not, if you connect via a legacy copper tech then there’s a good chance your home is already eligible for an upgrade. If what’s holding you back is uncertainty about what provider to pick, I’ve selected a few of my favourite NBN plans below that make the most of the superior full-fibre technology.
In fact, now’s actually a great time to consider switching your NBN provider, as the yearly price hike is also around the corner, and you can almost always find a better deal if you shop around.
A security researcher has released a proof-of-concept tool named GhostLock that demonstrates how a legitimate Windows file API can be abused in attacks to block access to files stored locally or on SMB network shares.
This technique, created by Kim Dvash of Israel Aerospace Industries, abuses the Windows ‘CreateFileW’ API and file-sharing modes to prevent other users and applications from opening files while handles remain active.
The GhostLock technique abuses the ‘dwShareMode’ parameter in the CreateFileW() function, which specifies the type of access other processes have to a file while it is opened.
When a file is opened with ‘dwShareMode = 0’, Windows grants the process exclusive access to the file, preventing other users or applications from opening it.
For example, the following code will open the finance.xlsx file in exclusive mode, preventing any other process from accessing it.
HANDLE hFile = CreateFileW(
    L"\\\\server\\share\\finance.xlsx",  // lpFileName: target file on an SMB share
    GENERIC_READ,                        // dwDesiredAccess: read-only is enough
    0,                                   // dwShareMode = 0: exclusive, no sharing allowed
    NULL,                                // lpSecurityAttributes: default
    OPEN_EXISTING,                       // dwCreationDisposition: file must already exist
    FILE_ATTRIBUTE_NORMAL,               // dwFlagsAndAttributes
    NULL                                 // hTemplateFile: not used
);
When any other process then attempts to open the file, Windows returns a ‘STATUS_SHARING_VIOLATION’ error instead.

The researcher has published a GhostLock tool on GitHub that automates this attack by recursively opening a large number of files on SMB shares. While these file handles are open, new attempts to access the files will fail with sharing violations.
The tool can be run by “standard” domain users, and does not need any elevated privileges to lock files.
This is further compounded if an attacker launches the attack from multiple compromised devices simultaneously, while continuously reacquiring file handles as previous processes are terminated.
However, once the associated SMB session is terminated, the GhostLock processes are killed, or the affected system is rebooted, Windows automatically closes the handles, and access to the files is restored.
Dvash told BleepingComputer that the technique should be viewed primarily as a disruption attack rather than a destructive one, like ransomware.
“Yes, the impact is disruption-based, not destructive. The parallel to ransomware is the operational downtime window, not data loss,” Dvash told BleepingComputer.
While this attack is more akin to a denial-of-service technique, it could be useful as a decoy during intrusions.
Attackers could use widespread file-access disruptions to overwhelm IT staff while conducting data theft, lateral movement, or other malicious activity elsewhere in the environment.
The researcher says that many security products and behavioral detection systems focus on detecting mass file writes or encryption operations. GhostLock primarily generates large numbers of legitimate file open requests, making it less likely to be detected.
“The only observable that reliably identifies this attack is the per-session open-file count with ShareAccess = 0 at the file server layer — a metric that lives inside storage platform management interfaces, not in Windows event logs, not in EDR telemetry, not in network flow data,” explains Dvash.
The researcher has shared SIEM queries and an NDR detection rule in the GhostLock whitepaper that IT teams and defenders can use as a template for detections.
For years, Google has been accused of harvesting data from Android phones without users’ consent. Following a California lawsuit that was settled for $314 million last year, a new settlement could mean payouts for another 100 million people.
A class action lawsuit alleging “Google caused Android mobile devices to transfer a variety of information to Google without users’ permission, consuming users’ cellular data,” is nearing its end. The two sides in Taylor v. Google LLC (PDF) have agreed to a settlement and have begun resolving it.
Without admitting fault, Google agreed to a preliminary settlement in January, committing to pay $135 million in damages. The settlement website is now live.
The final approval hearing won’t occur until June 23, when the court will hear objections and consider whether Google’s settlement is fair. After that, the court will decide whether to approve the $135 million settlement.
In the meantime, if you qualify and want to be paid as part of the settlement, you can select your preferred payment method on the official website. There, you can find information on speaking at the June 23 court hearing and on how to exclude yourself or write to the court to object by May 29.
As part of the settlement, Google will update its Google Play terms of service to clarify that certain data transfers do occur passively even when you’re not using your Android device, and that cellular data may be relied upon when not connected to Wi-Fi. This can’t always be disabled, but users will be asked to consent to it when setting up their device.
Google will also fully stop collecting data when its “allow background data usage” option is toggled off.
In order to join the Taylor v. Google LLC settlement, you must meet four qualifications:
To set your payment information on the official settlement website, you’ll need a Notice ID and Confirmation Code, which the settlement administrators mailed or emailed to eligible claimants.
The final approval hearing is on June 23, so you can add your payment method until then. The hearing’s date and time may change, and any updates will be posted on the settlement website.
To set your payment method, you’ll need a Notice ID and Confirmation Code from a settlement notification email or letter.
If you choose to do nothing and are eligible, you will still be issued a settlement payment, but not selecting a payment method might increase your risk of not getting paid.
Even if you didn’t receive a notification letter or email, you still might be eligible for a payout from Google. To find out, you can call the toll-free information number at 1-844-655-4255 or email info@FederalCellularClassAction.com. You can also mail a letter requesting more information to: Federal Cellular Class Action, 1650 Arch Street, Suite 2210, Philadelphia, PA 19103.
It’s not currently known exactly how much each settlement class member will receive, but the maximum is $100. Payments will be distributed after final court approval and after the resolution of any appeals.
After all administrative, tax and attorney costs are paid, the settlement administrator will attempt to pay each member an equal amount. If any funds remain after payments are sent, and it’s economically feasible, they will be redistributed to members who were previously and successfully paid. If it’s not economically feasible, the funds will go to an organization approved by the court.
Google has updated Gemini for Home so that it no longer acts like a strict parent when you ask it for cocktail recipes. In the past, you may have encountered a message that says “I cannot provide recipes for alcoholic beverages” when you ask the AI assistant for a margarita recipe on Google smart home devices, such as the Nest Hub. Now, Google has updated its safeguards to prevent adult users from encountering filters meant for younger ones.
Adults will “now experience improved availability for general queries, including recipes for age-gated beverages,” the company said on the Google Home support page. If Gemini still isn’t responding when you ask it for instructions on how to make a cocktail, you may have to check your Parental Control settings and your Gemini for Home response filter settings in the Google Home app.
You’ll now also be able to tell Google more easily whether you’re satisfied with Gemini’s responses. On smart displays, you’ll see thumbs-up and thumbs-down buttons following most voice interactions. The company says your responses will help it figure out what it needs to improve.
In addition, Google has enabled faster and more personalized Gemini responses. For instance, if you tell it that your nanny’s name is “Alice,” it will search for a familiar face in your security cam footage if you ask it if your nanny or Alice has arrived. You’ll also be able to ask it for a quick recap on what happened while you were away by telling Gemini to give you a “Home Brief.” Finally, Gemini now acts faster if you ask it to set alarms for you, reducing wait times and the need to repeat your commands.