
Tech

43% of AI-generated code changes need debugging in production, survey finds


The software industry is racing to write code with artificial intelligence. It is struggling, badly, to make sure that code holds up once it ships.

A survey of 200 senior site-reliability and DevOps leaders at large enterprises across the United States, United Kingdom, and European Union paints a stark picture of the hidden costs embedded in the AI coding boom. According to Lightrun’s 2026 State of AI-Powered Engineering Report, shared exclusively with VentureBeat ahead of its public release, 43% of AI-generated code changes require manual debugging in production environments even after passing quality assurance and staging tests. Not a single respondent said their organization could verify an AI-suggested fix with just one redeploy cycle; 88% reported needing two to three cycles, while 11% required four to six.

The findings land at a moment when AI-generated code is proliferating across global enterprises at a breathtaking pace. Both Microsoft CEO Satya Nadella and Google CEO Sundar Pichai have claimed that around a quarter of their companies’ code is now AI-generated. The AIOps market — the ecosystem of platforms and services designed to manage and monitor these AI-driven operations — stands at $18.95 billion in 2026 and is projected to reach $37.79 billion by 2031.

Yet the report suggests the infrastructure meant to catch AI-generated mistakes is badly lagging behind AI’s capacity to produce them.


“The 0% figure signals that engineering is hitting a trust wall with AI adoption,” said Or Maimon, Lightrun’s chief business officer, referring to the survey’s finding that zero percent of engineering leaders described themselves as “very confident” that AI-generated code will behave correctly once deployed. “While the industry’s emphasis on increased productivity has made AI a necessity, we are seeing a direct negative impact. As AI-generated code enters the system, it doesn’t just increase volume; it slows down the entire deployment pipeline.”

Amazon’s March outages showed what happens when AI-generated code ships without safeguards

The dangers are no longer theoretical. In early March 2026, Amazon suffered a series of high-profile outages that underscored exactly the kind of failure pattern the Lightrun survey describes. On March 2, Amazon.com experienced a disruption lasting nearly six hours, resulting in 120,000 lost orders and 1.6 million website errors. Three days later, on March 5, a more severe outage hit the storefront — lasting six hours and causing a 99% drop in U.S. order volume, with approximately 6.3 million lost orders. Both incidents were traced to AI-assisted code changes deployed to production without proper approval.

The fallout was swift. Amazon launched a 90-day code safety reset across 335 critical systems, and AI-assisted code changes must now be approved by senior engineers before they are deployed.

Maimon pointed directly to the Amazon episodes. “This uncertainty isn’t based on a hypothesis,” he said. “We just need to look back to the start of March, when Amazon.com in North America went down due to an AI-assisted change being implemented without established safeguards.”


The Amazon incidents illustrate the central tension the Lightrun report quantifies in survey data: AI tools can produce code at unprecedented speed, but the systems designed to validate, monitor, and trust that code in live environments have not kept pace. Google’s own 2025 DORA report corroborates this dynamic, finding that AI adoption correlates with an increase in code instability, and that 30% of developers report little or no trust in AI-generated code.

Maimon cited that research directly: “Google’s 2025 DORA report found that AI adoption correlates with an almost 10% increase in code instability. Our validation processes were built for the scale of human engineering, but today, engineers have become auditors for massive volumes of unfamiliar code.”

Developers are losing two days a week to debugging AI-generated code they didn’t write

One of the report’s most striking findings is the scale of human capital being consumed by AI-related verification work. Developers now spend an average of 38% of their work week — roughly two full days — on debugging, verification, and environment-specific troubleshooting, according to the survey. For 88% of the companies polled, this “reliability tax” consumes between 26% and 50% of their developers’ weekly capacity.

This is not the productivity dividend that enterprise leaders expected when they invested in AI coding assistants. Instead, the engineering bottleneck has simply migrated. Code gets written faster, but it takes far longer to confirm that it works.


“In some senses, AI has made the debugging problem worse,” Maimon said. “The volume of change is overwhelming human validation, while the generated code itself frequently does not behave as expected when deployed in Production. AI coding agents cannot see how their code behaves in running environments.”

The redeploy problem compounds the time drain. Every surveyed organization requires multiple deployment cycles to verify a single AI-suggested fix — and according to Google’s 2025 DORA report, a single redeploy cycle takes a day to one week on average. In regulated industries such as healthcare and finance, deployment windows are often narrow, governed by mandated code freezes and strict change-management protocols. Requiring three or more cycles to validate a single AI fix can push resolution timelines from days to weeks.

Maimon rejected the idea that these multiple cycles represent prudent engineering discipline. “This is not discipline, but an expensive bottleneck and a symptom of the fact that AI-generated fixes are often unreliable,” he said. “If we can move from three cycles to one, we reclaim a massive portion of that 38% lost engineering capacity.”

AI monitoring tools can’t see what’s happening inside running applications — and that’s the real problem

If the productivity drain is the most visible cost, the Lightrun report argues the deeper structural problem is what it calls “the runtime visibility gap” — the inability of AI tools and existing monitoring systems to observe what is actually happening inside running applications.


Sixty percent of the survey’s respondents identified a lack of visibility into live system behavior as the primary bottleneck in resolving production incidents. In 44% of cases where AI SRE or application performance monitoring tools attempted to investigate production issues, they failed because the necessary execution-level data — variable states, memory usage, request flow — had never been captured in the first place.

The report paints a picture of AI tools operating essentially blind in the environments that matter most. Ninety-seven percent of engineering leaders said their AI SRE agents operate without significant visibility into what is actually happening in production. Approximately half of all companies (49%) reported their AI agents have only limited visibility into live execution states. Only 1% reported extensive visibility, and not a single respondent claimed full visibility.

This is the gap that turns a minor software bug into a costly outage. When an AI-suggested fix fails in production — as 43% of them do — engineers cannot rely on their AI tools to diagnose the problem, because those tools cannot observe the code’s real-time behavior. Instead, teams fall back on what the report calls “tribal knowledge”: the institutional memory of senior engineers who have seen similar problems before and can intuit the root cause from experience rather than data. The survey found that 54% of resolutions to high-severity incidents rely on tribal knowledge rather than diagnostic evidence from AI SREs or APMs.
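The "evidence trace" respondents ask for later in the report is, at its simplest, a snapshot of variable state at the moment of failure. As a rough illustration of the idea (not Lightrun's implementation — the decorator, function, and failure below are invented), a Python wrapper can capture the failing frame's local variables before re-raising:

```python
import functools
import sys

EVIDENCE = []  # captured failure snapshots


def evidence_trace(func):
    """Record local variable state at the point of failure, then re-raise."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception as exc:
            tb = sys.exc_info()[2]
            while tb.tb_next is not None:  # walk to the innermost (failing) frame
                tb = tb.tb_next
            EVIDENCE.append({
                "function": tb.tb_frame.f_code.co_name,
                "line": tb.tb_lineno,
                "locals": {k: repr(v) for k, v in tb.tb_frame.f_locals.items()},
                "error": repr(exc),
            })
            raise  # preserve the original exception for normal handling
    return wrapper


@evidence_trace
def apply_discount(price, rate):
    discounted = price * (1 - rate)
    return round(price / discounted, 2)  # divides by zero when rate == 1.0


try:
    apply_discount(100.0, 1.0)
except ZeroDivisionError:
    pass

# EVIDENCE[0]["locals"] now records price, rate, and discounted at the failure
```

With evidence like this attached to an incident, a diagnosis can start from data rather than from a senior engineer's memory of a similar failure.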

In finance, 74% of engineering teams trust human intuition over AI diagnostics during serious incidents

The trust deficit plays out with particular intensity in the finance sector. In an industry where a single application error can cascade into millions of dollars in losses per minute, the survey found that 74% of financial-services engineering teams rely on tribal knowledge over automated diagnostic data during serious incidents — far higher than the 44% figure in the technology sector.


“Finance is a heavily regulated, high-stakes environment where a single application error can cost millions of dollars per minute,” Maimon said. “The data shows that these teams simply do not trust AI not to make a dangerous mistake in their Production environments. This is a rational response to tool failure.”

The distrust extends beyond finance. Perhaps the most telling data point in the entire report is that not a single organization surveyed — across any industry — has moved its AI SRE tools into actual production workflows. Ninety percent remain in experimental or pilot mode. The remaining 10% evaluated AI SRE tools and chose not to adopt them at all. This represents an extraordinary gap between market enthusiasm and operational reality: enterprises are spending aggressively on AI for IT operations, but the tools they are buying remain quarantined from the environments where they would deliver the most value.

Maimon described this as one of the report’s most significant revelations. “Leaders are eager to adopt these new AI tools, but they don’t trust AI to touch live environments,” he said. “The lack of trust is shown in the data; 98% have lower trust in AI operating in production than in coding assistants.”

The observability industry built for human-speed engineering is falling short in the age of AI

The findings raise pointed questions about the current generation of observability tools from major vendors like Datadog, Dynatrace, and Splunk. Seventy-seven percent of the engineering leaders surveyed reported low or no confidence that their current observability stack provides enough information to support autonomous root cause analysis or automated incident remediation.


Maimon did not shy away from naming the structural problem. “Major vendors often build ‘closed-garden’ ecosystems where their AI SREs can only reason over data collected by their own proprietary agents,” he said. “In a modern enterprise, teams typically have a multi-tool stack to provide full coverage. By forcing a team into a single-vendor silo, these tools create an uncomfortable dependency and a strategic liability: if the vendor’s data coverage is missing a specific layer, the AI is effectively blind to the root cause.”

The second issue, Maimon argued, is that current observability-backed AI SRE solutions offer only partial visibility — defined by what engineers thought to log at the time of deployment. Because failures rarely follow predefined paths, autonomous root cause analysis using only these tools will frequently miss the key diagnostic evidence. “To move toward true autonomous remediation,” he said, “the industry must shift toward AI SRE without vendor lock-in; AI SREs must be an active participant that can connect across the entire stack and interrogate live code to capture the ground truth of a failure as it happens.”

When asked what it would take to trust AI SREs, the survey’s respondents coalesced unanimously around live runtime visibility. Fifty-eight percent said they need the ability to provide “evidence traces” of variables at the point of failure, and 42% cited the ability to verify a suggested fix before it actually deploys. No respondents selected the ability to ingest multiple log sources or provide better natural language explanations — suggesting that engineering leaders do not want AI that talks better, but AI that can see better.

The question is no longer whether to use AI for coding — it’s whether anyone can trust what it produces

The survey was administered by Global Surveyz Research, an independent firm, and drew responses from Directors, VPs, and C-level executives in SRE and DevOps roles at enterprises with 1,500 or more employees across the finance, technology, and information technology sectors. Responses were collected during January and February 2026, with questions randomized to prevent order bias.


Lightrun, which is backed by $110 million in funding from Accel and Insight Partners and counts AT&T, Citi, Microsoft, Salesforce, and UnitedHealth Group among its enterprise clients, has a clear commercial interest in the problem the report describes: the company sells a runtime observability platform designed to give AI agents and human engineers real-time visibility into live code execution. Its AI SRE product uses a Model Context Protocol connection to generate live diagnostic evidence at the point of failure without requiring redeployment. That commercial interest does not diminish the survey’s findings, which align closely with independent research from Google DORA and the real-world evidence of the Amazon outages.

Taken together, the survey, the independent research, and the Amazon outages describe an industry confronting an uncomfortable paradox. AI has solved the slowest part of building software — writing the code — only to reveal that writing was never the hard part. The hard part was always knowing whether it works. And on that question, the engineers closest to the problem are not optimistic.

“If the live visibility gap is not closed, then teams are really just compounding instability through their adoption of AI,” Maimon said. “Organizations that don’t bridge this gap will find themselves stuck with long redeploy loops, to solve ever more complex challenges. They will lose their competitive speed to the very AI tools that were meant to provide it.”

The machines learned to write the code. Nobody taught them to watch it run.



Tech

Sony is nerfing its Bravia TVs’ program guide


Sony is removing some features from the TV guide and program guide displays for channels received via an over-the-air antenna on select Bravia televisions from 2023 to 2025. Cord Cutters News reported on the changes, which will take effect in late May.

Channel logos and thumbnail images in program descriptions are going away from the built-in TV Guide for antenna TV channels. Only programs from recently watched channels will be shown in the guide, and depending on the channel, program information may not be displayed. Change is also coming for set top box users, with the dedicated Set Top Box TV menu being removed and replaced by a Control menu. This setup will also not show program thumbnail images any longer.

This is an admittedly narrow use case in the age of both streaming and cable TV, but Sony didn’t provide any reason for making the change. And for those people who are impacted, this could be an unpleasant surprise next month that makes the TV guide and program guide much less helpful.


Tech

Sony’s new gaming monitor serves a 720Hz refresh rate atop an OLED panel


Sony just joined the ultra-fast gaming monitor party, and though it was a bit late, it could potentially turn a lot of heads there. On April 14, the company announced the INZONE M10S II, a 27-inch QHD OLED gaming monitor featuring a tandem OLED panel sourced from LG. 

Like other ultra-fast gaming monitors, the Sony gaming monitor pulls double duty between two modes: 540Hz at QHD, and a staggering 720Hz at HD. Developed in collaboration with esports powerhouse Fnatic, the monitor is a successor to the M10S.

Sony has priced the M10S II gaming monitor at $1,099.99. Availability, however, is expected later this year.  

But what does 720Hz actually do for you?

For everyday users, the monitor should offer razor-sharp visuals in QHD resolution at a 540Hz refresh rate with virtually no motion blur and the visual richness of an OLED panel. At this setting, the monitor offers 0.02 ms response time, which is exceptionally good. 

However, the 720Hz HD mode is reserved for hardcore, professional, competitive gamers, who’d rather sacrifice the resolution for pure speed. While I personally don’t know anyone who can make use of such speed, tournament-level FPS gamers, whose fate is determined at the last possible millisecond, could surely put it to good use. 


The monitor also features a new Motion Blur Reduction algorithm that boosts brightness during fast frames. So, instead of going dark, fast on-screen movements remain vivid.

What else did Sony launch with the monitor?

Sony didn’t stop at the INZONE M10S II monitor; the company also launched the INZONE H6 Air, an open-back wired gaming headset priced at $199.99, inspired by the studio-grade MDR-MV1 headphones and weighing just 199 grams.

Rounding out the launch are new Fnatic Edition accessories, which include Mouse-A, Mat-F, and Mat-D, along with a new translucent Glass Purple finish of the INZONE Buds wireless earbuds, all of which are available now.


Tech

OpenAI Engineer Helps Companies Boost Sales


Like many engineers, Sarang Gupta spent his childhood tinkering with everyday items around the house. From a young age he gravitated to projects that could make a difference in someone’s everyday life.

When the family’s microwave plug broke, Gupta and his father figured out how to fix it. When a drawer handle started jiggling annoyingly, the youngster made sure it didn’t do so for long.

Sarang Gupta

Employer: OpenAI in San Francisco
Job: Data science staff member
Member grade: Senior member
Alma maters: The Hong Kong University of Science and Technology; Columbia

By age 11, his interest expanded from nuts and bolts to software. He learned programming languages such as Basic and Logo and designed simple programs including one that helped a local restaurant automate online ordering and billing.


Gupta, an IEEE senior member, brings his mix of curiosity, hands-on problem-solving, and a desire to make things work better to his role as a member of the data science staff at OpenAI in San Francisco. He works with the go-to-market (GTM) team to help businesses adopt ChatGPT and other products. He builds data-driven models and systems that support the sales and marketing divisions.

Gupta says he tries to ensure his work has an impact. When making decisions about his career, he says, he thinks about what AI solutions he can unlock to improve people’s lives.

“If I were to sum up my overall goal in one sentence,” he says, “it’s that I want AI’s benefits to reach as many people as possible.”

Pursuing engineering through a business lens

Gupta’s early interest in tinkering and programming led him to choose physics, chemistry, and math as his higher-level subjects at Chinmaya International Residential School, in Tamil Nadu, India. As part of the high school’s International Baccalaureate chapter, students select three subjects in which to specialize.


“I was interested in engineering, including the theoretical part of it,” Gupta says. “But I was always more interested in the applications: how to sell that technology or how it ties to the real world.”

After graduating in 2012, he moved overseas to attend the Hong Kong University of Science and Technology. The university offered a dual bachelor’s program that allowed him to earn one degree in industrial engineering and another in business management in just four years.

In his spare time, Gupta built a smartphone app that let students upload their class schedules and find classmates to eat lunch with. The app didn’t take off, he says, but he enjoyed developing it. He also launched Pulp Ads, a business that printed advertisements for student groups on tissues and paper napkins, which were distributed in the school’s cafeterias. He made some money, he says, but shuttered the business after about a year.

After graduating from the university in 2016, he decided to work in Hong Kong’s financial hub and joined Goldman Sachs as an analyst in the bank’s operations division.


From finance to process optimization at scale

After two parties agree on securities transactions, the bank’s operations division ensures that the trade details are recorded correctly, the securities and payments are ready to transfer, and the transaction settles accurately and on time.

As an analyst, Gupta’s task was to find bottlenecks in the bank’s workflows and fix them. He identified an opportunity to automate trade reconciliation, in which analysts manually compared data across spreadsheets and systems to make sure a transaction’s details were consistent. The process helped ensure financial transactions were recorded accurately and settled correctly.

Gupta built internal automation tools that pulled trade data from different systems, ran validation checks, and generated reports highlighting any discrepancies.

“Instead of analysts manually checking large datasets, the tools automatically flagged only the cases that required investigation,” he says. “This helped the team spend less time on repetitive verification tasks and more time resolving complex issues. It was also my first real exposure to how software and data systems could dramatically improve operational workflows.”
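As a toy sketch of that kind of reconciliation check — the system names, field names, and trades below are invented for illustration, not Goldman's actual tooling — a few lines of Python can surface only the records that disagree:

```python
def reconcile(system_a, system_b, fields=("quantity", "price", "settle_date")):
    """Compare trade records from two systems and flag only the
    discrepancies that need human investigation."""
    flagged = []
    for trade_id in sorted(set(system_a) | set(system_b)):
        a, b = system_a.get(trade_id), system_b.get(trade_id)
        if a is None or b is None:
            flagged.append((trade_id, "missing from one system"))
            continue
        diffs = [f for f in fields if a.get(f) != b.get(f)]
        if diffs:
            flagged.append((trade_id, f"mismatch in {', '.join(diffs)}"))
    return flagged


# Hypothetical booking and settlement records keyed by trade ID.
booking = {
    "T1": {"quantity": 100, "price": 9.95, "settle_date": "2016-03-04"},
    "T2": {"quantity": 50, "price": 21.00, "settle_date": "2016-03-04"},
}
settlement = {
    "T1": {"quantity": 100, "price": 9.95, "settle_date": "2016-03-04"},
    "T2": {"quantity": 50, "price": 21.10, "settle_date": "2016-03-04"},
    "T3": {"quantity": 10, "price": 5.00, "settle_date": "2016-03-05"},
}

issues = reconcile(booking, settlement)
# T1 matches and is not flagged; T2 has a price mismatch; T3 exists in only one system
```

The point of the design is exactly what the quote describes: matching records pass silently, and analysts only ever see the exceptions.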


“Whether it’s helping a person improve a trait like that or driving efficiencies at a business, AI just has so much potential to help. I’m excited to be a little part of that.”

The experience made him realize he wanted to work more deeply in technology and data-driven systems, he says. He decided to return to school in 2018 to study data science and AI, when the fields were just beginning to surge into broader awareness.

He discovered that Columbia offered a dedicated master’s degree program in data science with a focus on AI. After being accepted in 2019, he moved to New York City.

Throughout the program, he gravitated to the applied side of machine learning, taking courses in applied deep learning and neural networks.


One of his major academic highlights, he says, was a project he did in 2019 with the Brown Institute, a joint research lab between Columbia and Stanford focused on using technology to improve journalism. The team worked with The Philadelphia Inquirer to help the newsroom staff better understand their coverage from a geographic and social standpoint. The project highlighted “news deserts”—underserved communities for which the newspaper was not providing much coverage—so the publication could redirect its reporting resources.

To identify those areas, Gupta and his team built tools that extracted locations such as street names and neighborhoods from news articles and mapped them to visualize where most of the coverage was concentrated. The Inquirer implemented the tool in several ways including a new web page that aggregated stories about COVID-19 by county.
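A crude stand-in for the team's extraction-and-mapping pipeline — the place names and articles below are hypothetical, and a real system would use proper entity extraction rather than string matching — might tally mentions of known places and surface those with no coverage:

```python
from collections import Counter

# A tiny hypothetical gazetteer of neighborhoods to track.
KNOWN_PLACES = ["Fishtown", "Kensington", "Germantown", "Center City"]


def coverage_counts(articles, places=KNOWN_PLACES):
    """Count how many articles mention each known place, so that
    sparsely covered areas ('news deserts') stand out."""
    counts = Counter({p: 0 for p in places})
    for text in articles:
        for place in places:
            if place.lower() in text.lower():
                counts[place] += 1
    return counts


articles = [
    "New cafe opens in Center City near the park.",
    "Center City traffic plan draws criticism.",
    "Fishtown mural project wins grant.",
]

counts = coverage_counts(articles)
deserts = [p for p, n in counts.items() if n == 0]
# In this sample, Kensington and Germantown receive no coverage at all
```

Mapping those zero-coverage places, rather than just listing them, is what turned this kind of tally into a tool the newsroom could act on.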

“Journalism was an interesting problem set for me, because I really like to read the news every day,” Gupta says. “It was an opportunity to work with a real newsroom on a problem that felt really impactful for both the business and the local community.”

The GenAI inflection point

After earning his master’s degree in 2020, Gupta moved to San Francisco to join Asana, the company that developed the work management platform by the same name. He was drawn to the opportunity to work for a relatively small company where he could have end-to-end ownership of projects. He joined the organization as a product data scientist, focusing on A/B testing for new platform features.


Two years later, a new opportunity emerged: He was asked to lead the launch of Asana Intelligence, an internal machine learning team building AI-powered features into the company’s products.

“I felt I didn’t have enough experience to be the founding data scientist,” he says. “But I was also really interested in the space, and spinning up a whole machine learning program was an opportunity I couldn’t turn down.”

The Asana Intelligence team was given six months to build several machine learning–powered features to help customers work more efficiently. They included automatic summaries of project updates, insights about potential risks or delays, and recommendations for next steps.

The team met that goal and launched several other features including Smart Status, an AI tool that analyzes a project’s tasks, deadlines, and activity, then generates a status update.


“When you finally launch the thing you’ve been working on, and you see the usage go up, it’s exhilarating,” he says. “You feel like that’s what you were building toward: users actually seeing and benefiting from what you made.”

Gupta and his team also translated that first wave of work into reusable frameworks and documentation to make it easier to create machine learning features at Asana. He and his colleagues filed several U.S. patents.

Around the time he took on that role, OpenAI launched ChatGPT. The mainstreaming of generative AI and large language models shifted much of his work at Asana from model development to assessing LLMs.

OpenAI captured the attention of people around the world, including Gupta. In September 2025 he left Asana to join OpenAI’s data science team.


The transition has been both energizing and humbling, he says. At OpenAI, he works closely with the marketing team to help guide strategic decisions. His work focuses on developing models to understand the efficiency of different marketing channels, to measure what’s driving impact, and to help the company better reach and serve its customers.

“The pace is very different from my previous work. Things move quickly,” he says. “The industry is extremely competitive, and there’s a strong expectation to deliver fast. It’s been a great learning experience.”

Gupta says he plans to stay in the AI space. With technology evolving so rapidly, he says, he sees enormous potential for task automation across industries. AI has already transformed his core software engineering work, he says, and it’s helped him enhance areas that aren’t natural strengths.

“I’m not a good writer, and AI has been huge in helping me frame my words better and present my work more clearly,” he says. “Whether it’s helping a person improve a trait like that or driving efficiencies at a business, AI just has so much potential to help. I’m excited to be a little part of that.”


Gupta has been an IEEE member since 2024, and he values the organization as both a technical resource and a professional network.

He regularly turns to IEEE publications and the IEEE Xplore Digital Library to read articles that keep him abreast of the evolution of AI, data science, and the engineering profession.

IEEE’s member directory tools are another valuable resource that he uses often, he says.

“It’s been a great way to connect with other engineers in the same or similar fields,” he says. “I love sharing and hearing about what folks are working on. It brings me outside of what I’m doing day to day.


“It inspires me, and it’s something I really enjoy and cherish.”


Tech

John Deere Pays $99 Million To Settle ‘Right To Repair’ Class Action


from the do-not-pass-go,-do-not-collect-$200 dept

A few years ago agricultural equipment giant John Deere found itself on the receiving end of multiple state, federal, and class action lawsuits for its efforts to monopolize tractor repair. The lawsuits noted that the company consistently purchased competing repair centers in order to consolidate the sector and force customers into using the company’s own repair facilities, driving up costs and logistical hurdles dramatically for farmers.

John Deere executives have repeatedly promised to do better, then just ignored those promises. Early last year, the FTC and numerous states filed an antitrust lawsuit against the company for its efforts to monopolize repair. Though, with MAGA corruption purging any remaining antitrust enforcers from its ranks, it’s unclear if the FTC action will ever actually result in anything meaningful.

John Deere did, however, just pay $99 million to settle a different class action lawsuit brought by its customers. Under the settlement John Deere doesn’t admit to any wrongdoing, but will deposit the money into a fund to pay more than 200,000 John Deere owners for expensive dealership repairs since 2018.

In an announcement by the company, John Deere pretends they’re a consumer-focused enterprise:


“As we continue to innovate industry leading equipment and technology solutions supported by our world-class dealer network, we are equally committed to providing customers and other service providers with access to repair resources,” said Denver Caldwell, vice president, Aftermarket & Customer Support. “We’re pleased that this resolution allows us to move forward and remain focused on what matters most – serving our customers.”

Except if John Deere had cared about customer service, they wouldn’t be in this predicament.

In addition to intentionally acquiring repair alternatives to monopolize repair and drive up consumer costs, John Deere also routinely makes repair difficult and costly through software locks, obnoxious DRM restrictions, and “parts pairing” — allowing only the installation of company-certified replacement parts, or mandatory collections of company-blessed components.

More recently, the company had been striking meaningless “memorandums of understanding” with key trade groups, pinky swearing to stop their bad behavior if the groups agree to not support state or federal right to repair legislation. Several such groups backed off their criticism, only to have John Deere continue its monopolistic behavior, the FTC’s complaint notes.

The annoyance at John Deere’s behavior has driven a broad, bipartisan movement that vocally supports state and federal guidelines enshrining “right to repair” protections into law. Unfortunately, while all fifty states have at least flirted with the idea of a state law, only Massachusetts, New York, Texas, Minnesota, Colorado, California, Oregon, and Washington have actually passed laws.


And among those, not one has taken any substantive action to actually enforce the new law, something that needs to change if the movement is to obtain and retain meaningful policy momentum.



Tech

New Display For Old Multimeter


As a company, Fluke has been making electronic test equipment for longer than the bipolar junction transistor has existed. In that time they’ve developed a fairly stellar reputation for quality and consistency, but like any company they don’t support their products indefinitely. [ogdento] owns a Fluke meter that isn’t nearly as old as the BJT but is still well outside the support window, and since its main problem was a broken LCD, they set about building a replacement display for this retro multimeter.

Initially, [ogdento] planned to retrofit this classic multimeter with a modern OLED, but could not find enough space for the display or an easy way to drive it. The next attempt was to mill a custom one-off LCD using a drill press as an end mill, which didn’t work either. But after seeing a Charlieplexed display from [bobricius] as well as this video from EEVblog about designing custom LCDs, [ogdento] was able not only to design a custom PCB and LCD to match the original meter, but also to get a manufacturer in China to build them.

The new displays have a few improvements over the old: they take styling cues from later Fluke models and incorporate some modern improvements to the LCD itself. There were a few issues during prototyping, such as initially ordering the wrong size elastomeric strips, but nothing that was too hard to sort out. For anyone who needs to replace a custom LCD and can’t find replacement parts anymore, this project is a great starting point for figuring out the process from the ground up.


Tech

Google is expanding Personal Intelligence to Gemini users globally and it’s a huge shift

Published

on

If you have been waiting for Gemini to actually feel like it knows you, your wait is almost over. Google’s Personal Intelligence, which launched earlier this year for paid US subscribers, is now rolling out globally.

What is Gemini Personal Intelligence and what can it do?

Personal Intelligence connects Gemini to your Google apps. Think Gmail, Google Photos, YouTube, Search, Maps, Calendar, Drive, and more. It uses your existing data to give smarter, more tailored responses without requiring you to explain everything each time.

The use cases are genuinely impressive. Ask Gemini for shopping recommendations, and it will factor in your recent purchases and style preferences. Stuck troubleshooting a device you do not remember buying? It can pull the exact model from your purchase receipts in Gmail.

If you are planning a trip with a tight layover, Gemini can use Personal Intelligence to check your gates, walking time, and meal preferences all at once. It can even suggest a new hobby based on patterns it notices across your activity.

Google says this is an opt-in feature, so you choose which apps to connect. Importantly, Gemini does not train directly on your Gmail or Photos data. It references them to answer your questions, but keeps the underlying personal content separate from model training.

Who can use Gemini’s Personal Intelligence feature?

Personal Intelligence works across desktop, Android, and iOS with languages supported by Gemini. The global rollout is now live for Google AI Plus, Pro, and Ultra subscribers everywhere except the European Economic Area, Switzerland, and the UK. Free Gemini users globally will get access within the next few weeks.


Why does this matter?

Personal Intelligence is probably the most significant thing Google has done with Gemini so far. Gemini is slowly becoming the kind of AI assistant that actually understands your life, not just the internet.

With access to Gmail, Photos, Maps, and more, Gemini will no longer feel like a generic chatbot; it will behave like a genuine personal assistant. No other AI assistant comes close to having this kind of data advantage baked in from the start.

Apple Intelligence is still finding its feet, and Microsoft’s Copilot lives mostly inside productivity tools. Meanwhile, OpenAI’s ChatGPT has no first-party data ecosystem of its own.

Google, on the other hand, already has your entire digital life across billions of users. In an AI race where rival companies are all building toward personalization, Google, with its unmatched ecosystem, is uniquely positioned to win it.


Tech

Bull and Equal1 to advance next gen of hybrid quantum tech in Europe

Published

on

The partnership will bring together Bull’s supercomputing infrastructure and Equal1’s ‘breakthrough’ silicon-spin quantum computers.

Bull, a Paris-based high-performance computing (HPC), artificial intelligence and quantum technology company, is to partner with Dublin start-up Equal1, a silicon-powered quantum computing technology provider. 

Equal1 and Bull stated that their deal will “advance the next generation of hybrid quantum-classical technologies with European solutions”, at a time when quantum computing is beginning to transition from promise to practical reality.

The pair said the partnership will combine Bull’s supercomputing infrastructure and quantum emulation expertise with Equal1’s breakthrough silicon-spin quantum computers, as agreed in a memorandum of understanding. 
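The “quantum emulation” Bull brings to the table means simulating qubit statevectors on classical supercomputers, which is how hybrid pipelines are tested before jobs run on real hardware. As a toy illustration only (this is not Bull’s Qaptiva API, just the core operation any statevector emulator performs), here is a single-qubit Hadamard gate applied to an amplitude vector:

```python
import math

def apply_hadamard(state, qubit):
    """Apply a Hadamard gate to one qubit of a statevector, given as a
    list of 2**n amplitudes. This pairwise update over amplitude pairs
    is the basic step a classical statevector emulator performs."""
    h = 1 / math.sqrt(2)
    out = state[:]
    for i in range(len(state)):
        if not (i >> qubit) & 1:          # visit each amplitude pair once
            j = i | (1 << qubit)
            a, b = state[i], state[j]
            out[i] = h * (a + b)
            out[j] = h * (a - b)
    return out

# A single qubit starting in |0>: H|0> is an equal superposition.
state = apply_hadamard([1.0, 0.0], qubit=0)
probs = [abs(a) ** 2 for a in state]      # each outcome has probability 0.5
```

The memory cost of this approach doubles with every added qubit, which is exactly why emulation on HPC clusters complements, rather than replaces, physical quantum processors like Equal1’s.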


The collaboration will focus on three core pillars – technical integration, joint research and development to advance innovation, and a focus on sovereign European projects whereby both companies will collaborate on EU-led quantum initiatives amid the global quantum race.

Commenting on the announcement, Bruno Lecointe, the senior vice-president and global head of HPC, AI and quantum at Bull, said: “The convergence of high-performance computing and quantum technologies is redefining how we address the world’s most complex challenges. 

“10 years after launching the first quantum emulator on the market, innovation has always been part of Bull’s DNA and we remain committed to designing hybrid architectures that help translate emerging technologies into operational capability.

“By integrating Equal1’s silicon-spin quantum servers into our Qaptiva ecosystem, we are enabling a seamless bridge between HPC, quantum emulation and quantum execution. This alliance ensures our customers can leverage quantum-centric supercomputing to achieve real-world outcomes with unprecedented efficiency and performance.”


Jason Lynch, the CEO of Equal1, added: “By building quantum processors on standard silicon, we are turning quantum from bespoke laboratory hardware into deployable infrastructure. This collaboration with Bull is a vital step in bridging the gap between breakthrough hardware innovation and industrial workloads. 

“Together, we are positioning our joint solutions as the standard for high-performance computing, enabling seamless integration into existing data centres and driving a more sustainable digital future.”

Earlier this year, Equal1 announced it had raised $60m in a funding round led by Ireland Strategic Investment Fund, with participation from Atlantic Bridge, the European Innovation Council Fund, Matterwave Ventures, Enterprise Ireland, Elkstone and TNO Ventures.

At the time, Equal1 said that the investment would enable deployment to HPC centres – including to the European Space Agency’s Phi-lab in Italy – advance the roadmap towards “millions” of on-chip qubits, scale manufacturing and grow its team.


Last week, the Dublin-based start-up said it would partner with Californian quantum infrastructure software maker Q-Ctrl for the deployment of rack-mounted quantum computers in enterprise data centres.


Tech

Apple Store closures make sense to Apple, but not to the community

Published

on

Apple sometimes closes retail stores. The company always has reasons, both public and private, but the affected communities and workers care little about what those reasons are.

Apple Trumbull – Image Credit: Apple

On April 9, it was revealed that Apple was preparing to close three of its stores in the United States in June. The group consists of Apple North County in Escondido, California, Apple Towson Town Center in Towson, Maryland, and Apple Trumbull in Trumbull, Connecticut.
After the initial shock, people are still expressing their feelings about the closures. However, as usual, nothing is straightforward in the court of public opinion.

Tech

By Our Calculations, You’ll Love The Flapulator

Published

on

Oh sure, you’ve got calculators. There’s the phone app, of course, and the one that comes with your OS, and the TI-86 (and possibly an RPN machine or two) you’ve had since high school.

But what you don’t have is a Flapulator, at least not until you build one. Possibly the be-all, end-all of physical calculating devices, the Flapulator does its calculating live on a split-flap display. It’s kind of slow and the accuracy is questionable, but the tactility is oh, so good.

This baby boasts a 6-digit display, where the decimal point and negative sign each require one digit. Inside is a Raspberry Pi Pico, which can calculate for around four hours on a full charge. But the coolest part (aside from the split-flap display, naturally) has got to be the 24-key, hand-wired mechanical keyboard. There are also a couple of LEDs that light up to keep track of the current mathematical operation.
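Because the minus sign and decimal point each eat a full flap cell, the usable numeric range is tighter than “six digits” suggests. A minimal sketch of that cell-budget logic (the function name is hypothetical, not from [Applepie1928]’s firmware):

```python
def to_flaps(value, width=6):
    """Lay a number out on a split-flap row where the minus sign and the
    decimal point each consume a full character cell."""
    text = f"{value:g}"
    if len(text) > width:
        raise ValueError(f"{text!r} needs more than {width} cells")
    return list(text.rjust(width))

# -3.14 spends two of its six cells on the sign and the decimal point,
# leaving room for only three significant digits plus one blank.
cells = to_flaps(-3.14)   # [' ', '-', '3', '.', '1', '4']
```

So a negative number with a fractional part gets at most four significant digits on this display, which squares with the build’s cheerfully “questionable” accuracy.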


The story behind this one is kind of interesting. [Applepie1928] found out that one of their favorite mathematician-comedian-pi-lovers, who is known for signing calculators, was coming to town. With four weeks to whip something up, this was, amazingly, the result. Check it out in action after the break.

Need something that’s a whole other kind of fancy? Here’s an open-source graphing calculator.


Tech

Intel and Google lock in massive Xeon deal as AI workloads reshape cloud infrastructure across global hyperscale data centers

Published

on


  • Intel and Google signed a multi-year deal to keep Xeon in cloud infrastructure
  • Google Cloud instances C4 and N4 already run on Xeon 6 processors
  • Intel and Google are co-developing custom IPUs for networking and storage

Intel and Google have announced a multi-year collaboration that will keep Intel Xeon processors at the heart of Google Cloud infrastructure for the foreseeable future.

The agreement spans multiple generations of Xeon chips and includes systems used for AI workloads, inference tasks, and general-purpose computing across Google’s global data centers.
