
Tech

This $199 upscaler wants to make your retro consoles look great on modern TVs

Pre-orders for the recently announced Morph 2K analog-to-digital video converter open on June 1, starting at $199. The device is essentially a budget version of Pixel FX’s earlier Morph 4K, dropping support for 4K output.

Tech

The Other Side: Game Dev Tim Cain Isn’t Helping In The AI In Gaming Debate

from the nuance-cuts-both-ways dept

You’re all sick of me saying we need to have more nuance in the discussion about AI use in the gaming industry. I get it. I’m also not going to stop. And I hope you will have noticed that I have called for nuance in both directions. While I’m more optimistic than many in our community that there is a place for this technology in the industry, and that it could actually have some net positive effects therein, I’m also not blind to the potential negative consequences. Concerns about industry jobs are a very real thing. A desire to protect the artistic intent of game makers is a worthy enterprise. Quality of output is paramount.

That’s why I’ve been repeating over and over again that we should be talking about how AI will be used in games, not if. The “if” question has already been answered in the affirmative, at least for some portion of the industry. Now we need to build very real guardrails around the “how.”

And, to be frank, comments such as those from Fallout co-creator Tim Cain are wildly unhelpful in the opposite direction.

Fallout co-creator Tim Cain says a world where AI generates games, TV shows, and even doctor’s appointments is inevitable, and he’s even “looking forward” to that future.

In arguably the veteran game developer’s saddest “fun Friday” video ever, Cain envisions a world in which dead MMOs come back to life with AI-generated players mimicking real-life personalities, where generative AI makes Joey from Friends a lawyer instead of a struggling actor, and where you take vacations in VR. Yes, really.

He goes way, way beyond even that. He talks at some length about using AI to create more episodes of retired shows that people still hunger for. As a massive fan of Firefly, I can’t tell you how ecstatic I’ve been these past several weeks since Nathan Fillion’s announcement that the show would be coming back in animated form to build on the story that was infamously canceled by Fox after only one season. If that announcement had instead come from the rightsholder, saying the new episodes would be created whole cloth using AI and would be customizable and tailored to my desires, my reaction would have been horror, not excitement.

AI needs to be a tool on the perimeter, not the creative force itself. I don’t want the pen telling me the story of Odysseus; I want the writer to use the pen to do so. And if the pen turns into a typewriter, which then turns into a word processor, that all works. There is still a human being telling the story.

Even Cain’s remarks tailored specifically for the gaming industry ring super hollow.

Cain goes on to say this will be especially handy for MMO players, in particular those who miss being able to play games that aren’t active anymore. “Have an AI make a local server,” he proposes. “Great, now you can play it again. Oh, it’s empty? Fill it with AI players. Have it watch videos of people who have played that game and just fill it up with players, and it mimics their personalities.”

Look, Cain is a veteran of the industry who was instrumental to one of the most beloved video game IPs of all time, but with all due respect, the idea of playing Ultima Online with AI-generated players designed to mimic the personalities of my friends who I used to play with… is genuinely one of the grimmest, most dire, dystopian realities I can possibly fathom. Likewise, my heart sinks at the thought of playing AI-generated stories with AI-generated characters that I can change however I want. That sounds like it would entirely rob a game, or any work of art, of its artistic intent. But alas, Cain reckons this is all inevitable, so get ready.

This is what the AI detractors are worried about. And when you hear an industry veteran speak so glowingly about gamers operating within these soulless arenas designed merely to mimic the authentic fun that these games produced, it’s easy to understand the concern. This isn’t helpful. Pretending to not understand that the very fucking point of MMOs is to play with other human beings in a single realm, not ginned-up robots pretending to be human, is incredibly frustrating.

And Cain, oddly enough, seems completely unconcerned with artistic intent at all. There is no reason why his example of requesting changes to a TV show wouldn’t translate into a video game. And if people can just customize games not through mods, but through fundamental changes driven by AI requests, then there is no game anymore. There is merely a shell of a game where the player is then free to remix it to extents that transform the intent of the maker completely.

I had to search around a lot to see if Cain was being sarcastic or making a fake attempt at over-the-top AI evangelism purely to make a point. Everything I have seen and read indicates that’s not what this was. And, again, that makes all of this very unhelpful if you want to get into some real discussions about where this technology should be used and where it shouldn’t.

Filed Under: ai, artistic intent, tim cain, video games


Tech

Cognizant buys Astreya for $600m in its fourth major acquisition

The deal, Cognizant’s fourth major acquisition in 18 months, is designed to plug a specific gap in its AI builder strategy: the ability to design, build, and run the physical data centre infrastructure that enterprise AI actually runs on.


Cognizant has agreed to acquire Astreya, a San Jose-based IT managed services firm specialising in AI infrastructure and data centre operations, for approximately $600 million, the company confirmed to Reuters. The deal is expected to close in the second quarter of 2026, pending regulatory approvals.

The acquisition is the most operationally specific move yet in Cognizant’s AI pivot under CEO Ravi Kumar S, who took the role in January 2023 and has spent the intervening period reshaping the company around what he calls an ‘AI builder strategy’: helping enterprise clients not merely adopt AI tools but architect, deploy, and scale production-grade AI systems.

“By acquiring Astreya and its proprietary AI tooling and production-grade infrastructure platform, which is complementary to Cognizant’s AI builder stack, we will be even better-positioned to help clients architect their platform-led AI systems and operationalise them at scale,” Kumar said.

What Astreya brings

Founded in 2001 in Silicon Valley, Astreya has spent two decades building operational expertise in exactly the unglamorous layer of enterprise IT that most AI coverage ignores: the physical and logical infrastructure that connects AI compute to the businesses that use it.

The company employs more than 2,200 IT professionals across 33 countries, with a particular concentration in data centre management, network operations, cloud infrastructure, and digital workplace services for large enterprises.

Its customer base skews heavily towards technology companies, the same hyperscalers and enterprise tech firms whose AI ambitions are driving the current wave of infrastructure investment.

Its service offering spans network and data centre management with 24/7 monitoring, IT asset lifecycle management, cloud infrastructure services including migration and optimisation, and AI-driven automation tooling it describes as ‘AI-first’ managed services.

The company has been building proprietary AI agents and automation frameworks on top of its managed services, positioning itself not merely as a labour-arbitrage outsourcer but as a technology platform for infrastructure operations.

Revenue figures for Astreya vary across sources, as the company is privately held and does not publish audited financials. ZoomInfo and RocketReach both cite a figure of approximately $560 million in annual revenues, which would make the $600 million acquisition price a modest premium to trailing revenue, reflecting the value Cognizant is placing on Astreya’s proprietary tooling and its customer relationships rather than simply its headcount.

The Astreya acquisition continues an acquisition cadence that Kumar has maintained with unusual consistency since taking the helm. In 2024, Cognizant acquired Thirdera, a ServiceNow specialist, for $430 million, and Belcan, a digital engineering firm focused on aerospace and defence, for approximately $1.3 billion, its largest deal in years.

In November 2025, it announced the acquisition of 3Cloud, one of the largest independent Microsoft Azure services providers, in a deal that would add more than 1,000 Azure engineers and 1,500 Microsoft certifications to Cognizant’s bench. Financial terms for 3Cloud were not disclosed.

Each acquisition maps to a specific gap in Cognizant’s AI builder stack. Thirdera deepened its ServiceNow workflow automation practice. Belcan brought engineering R&D capabilities for complex physical systems. 3Cloud extended its Microsoft Azure and AI deployment bench. Astreya fills the infrastructure operations layer: the ability to design, build, and run the data centre and network fabric that production AI systems require.

The pattern reflects a deliberate attempt to move Cognizant away from the commoditised labour arbitrage model that has characterised Indian IT outsourcing for two decades, and towards a higher-margin, IP-enriched services model.

Cognizant was named as one of OpenAI’s first two Codex enterprise partners in April 2026, embedding the AI coding agent across its global delivery workforce of roughly 350,000 associates. The company has also embedded Claude across its entire workforce through Anthropic’s Claude Partner Network, making it one of the most AI-embedded large IT services firms by headcount.

The strategic rationale for the Astreya deal is rooted in a specific problem that large enterprise AI deployments consistently run into: the gap between a working AI model and a running AI system at scale. Building a model, or licensing one, is now relatively straightforward for a large organisation.

Running it reliably in production, managing the data centre connectivity, network latency, hardware provisioning, monitoring, and operational automation required to deliver consistent performance, is considerably harder, and considerably more expensive to staff organically.

That gap is where Astreya has operated for two decades. Its clients include some of the world’s largest technology companies, and it has been building automation and AI-agent tooling on top of its managed services infrastructure at a moment when demand for exactly that capability is accelerating.

Cognizant’s Q3 2025 revenues reached $5.4 billion, up 7.4% year on year, with its Products & Resources segment, boosted significantly by Belcan, growing at 12.6%. The company has guided for continued revenue acceleration into 2026, and the Astreya deal strengthens the infrastructure services revenue line that underpins that guidance.

For the broader IT services market, the deal is a signal of where the battleground has shifted. Accenture, Infosys, TCS, and Wipro are all pursuing variations of the same strategy: acquiring specialised capability to move up the AI value chain before the window closes.

The race is not simply for AI talent; it is for the operational infrastructure expertise and proprietary tooling that turns AI from a pilot into a production system. Astreya, at $600 million, is Cognizant’s bet that it can own that layer.


Tech

Female Looksmaxxer Alorah Ziva Is Suing Clavicular for Alleged Battery

An 18-year-old woman who promotes herself as the “#1 female looksmaxxer” is suing the highly controversial streamer Braden Eric Peters, aka Clavicular, for fraud, battery, and alleged sexual assault.

In the suit, which was filed in Miami-Dade County court and obtained by WIRED, Aleksandra Mendoza, who goes by the name @zahloria, or Alorah Ziva, on Instagram, alleges that she first encountered Peters “in or about May 2025.” According to the lawsuit, she was just 16 years old at the time. According to the complaint, Peters promised Mendoza he could make her “the female face of looksmaxxing,” the online trend of using surgery or drugs to enhance one’s facial features.

Eager to grow her social media following, Mendoza agreed to make four looksmaxxing videos for Peters in exchange for a $1,000 payment, court documents say. The two allegedly began a text-based relationship, with Peters offering to pay for an Uber ride for Mendoza to visit him and his family in Cape Cod, Massachusetts.

Upon her arrival, Mendoza alleges, Peters plied her with alcohol and “had sex with Mendoza while she was knowingly intoxicated, to the point where she was unable to give consent,” the complaint says.

Mendoza goes on to accuse Peters of nonconsensually having sex with her again the following morning while she was sleeping. The suit notes that Peters was aware of Mendoza’s age, referring to her as a “minor” in an online comment. (The age of consent in Florida, where the suit was filed, is 18, but the state’s “Romeo and Juliet” law provides an exception for those who are older than their 14- to 17-year-old partners by four years or less.)

According to the suit, Mendoza bumped into Peters in Miami a few months later. He allegedly invited her to his house to livestream with him, promising that he could help her grow her following. During the livestream, he then allegedly injected her in the cheeks with Aqualyx, an injectable used to reduce fat in the chin, thighs, or stomach.

According to the US Food and Drug Administration website, Aqualyx is not approved by the FDA and can result in “permanent scars, serious infections, skin deformities, cysts, and deep, painful knots” in the skin if it is administered by a nonprofessional. Mendoza contends that her right cheek became “perforated” after she was injected by Peters.

Though Peters and Mendoza continued to have sporadic contact, the suit alleges, their relationship soured in early 2026, when Mendoza signed a contract to promote an online trading platform. She alleges that she lost this sponsorship after Peters “began a campaign to discredit” her, which the suit contends was due to Peters’ concerns over her exposing him.

Mendoza is suing Peters for battery, fraud, and emotional distress and is seeking at least $50,000 in damages. In a post on X, Peters appeared to deny the allegations, writing, “The consistent theme of girls trying to use me for money is brutal for a young guy trying to navigate a complex society. Hopefully I can find a good girl whos [sic] intent is to not to screw me over and take my money.”

This is not the first time Peters has faced legal action. In March, he was arrested by Fort Lauderdale, Florida, police for allegedly instigating a physical fight between two women and livestreaming it on the platform Kick. He is also reportedly being investigated by Florida state wildlife authorities for shooting a dead alligator on livestream.

Through her attorney Andrew Moss, Mendoza declined to comment. “She will tell her story through the legal process,” Moss said. “We do look forward to hearing from Mr. Peters and his lawyers.” A representative for Peters did not immediately return WIRED’s request for comment.


Tech

Latest Xiaomi 17T Pro leak points towards flagship MediaTek power


The Xiaomi 17T Pro has surfaced on Geekbench ahead of launch, and early numbers suggest this won’t be just another mid-cycle refresh. Instead, Xiaomi looks set to push the Pro model firmly into flagship territory with a top-tier MediaTek chipset.

A new Geekbench AI listing for the global variant (model 2602EPTC0G) reveals what’s under the hood. The phone is expected to run on the MediaTek Dimensity 9500, MediaTek’s latest 3nm silicon, paired with around 12GB of RAM. That alone positions it as a clear step above the standard 17T.

The benchmark results back that up. The device posts 974 (single precision), 1,171 (half precision), and 1,334 (quantized) scores in Geekbench AI. These are solid numbers that hint at strong on-device AI performance. However, synthetic tests don’t always translate directly to real-world use.

Digging into the listing a bit more, it also points to an octa-core setup with a peak clock speed hitting 4.21GHz, alongside Android 16 out of the box. There’s also roughly 11GB of usable RAM detected. This lines up with a typical 12GB configuration once system allocation is accounted for.

This isn’t happening in isolation, either. The Dimensity 9500 is already powering premium devices like the Vivo X300 Pro and Oppo Find X9 Pro. So Xiaomi opting for the same chip signals serious intent with the 17T Pro.

Moreover, rumours suggest the wider 17T series will focus on battery improvements, while camera upgrades may be more incremental. For context, last year’s 15T Pro already packed a 6.83-inch AMOLED display, Dimensity 9400+, and a 5,500mAh battery. It also featured a triple-camera setup led by a 50MP sensor, so expectations are already fairly high.

There’s no confirmed launch date yet, but with certifications already in place and benchmarks starting to appear, a reveal as early as May 2026 looks increasingly likely.


Tech

Why Model Collapse In LLMs Is Inevitable With Self-Learning


There is a persistent belief in the ‘AI’ community that large language models (LLMs) have the ability to learn and self-improve by tweaking the weights in their vector space. Although there’s scant evidence that tweaking a probability vector space is anything like the learning process in biological brains, we nevertheless get sold the idea that artificial general intelligence (AGI) is just around the corner if we do just enough tweaking.

Instead of emerging super intelligence, the most likely outcome is what is called model collapse, with a recent paper by [Hector Zenil] going over the details on why self-training/learning in LLMs and similar systems is a fool’s errand. For those who just want the brief summary with all the memes, [Metin] wrote a blog post covering the basics.

In the end, an LLM, like a diffusion model (DM), is a statistical model of its input data, from which a statistically likely output can be generated (inferred) in response to an input query. It follows intuitively that by feeding that output back in to adjust the model, the model will over time converge on a kind of statistical singularity rather than some ‘AI singularity’ event. This is also why these models need to be constantly trained with external, human-generated data in order to prevent such a collapse.

In the paper by [Hector] a mathematical model is created to demonstrate that an LLM, DM or similar statistical model undergoes degenerative dynamics whenever said external input is reduced. Although in the paper a mechanism is suggested to counter the entropy decay within the model, the ultimate point is that a statistical model cannot improve itself without continuous external anchoring.
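The degenerative dynamic is easy to see in a toy sketch. This is our own illustration, not the mathematical model from the paper: treat the “model” as a categorical distribution over a tiny five-token vocabulary, and assume each round of self-training slightly over-represents likely tokens, as sampling with a temperature below 1 does. The distribution’s entropy then decays monotonically toward zero:

```python
import math

def sharpen(p, temperature=0.9):
    # Self-training with temperature < 1: the generated corpus
    # over-represents likely tokens, so refitting on it yields the
    # old distribution raised to a power > 1 and renormalized.
    w = [q ** (1.0 / temperature) for q in p]
    z = sum(w)
    return [x / z for x in w]

def entropy(p):
    # Shannon entropy in bits; zero-probability tokens contribute nothing.
    return -sum(q * math.log2(q) for q in p if q > 0)

# A five-token "vocabulary" with an initially broad distribution.
p = [0.30, 0.25, 0.20, 0.15, 0.10]
history = [entropy(p)]
for _ in range(50):
    p = sharpen(p)
    history.append(entropy(p))

# Entropy shrinks every generation: the model collapses onto its own
# mode unless fresh external data re-broadens the distribution.
print(round(history[0], 3), "->", round(history[-1], 3))
```

Even with unbiased resampling the estimated spread performs a random walk that drifts toward collapse in the long run; only injecting fresh external data each round re-broadens the distribution, which is the “external anchoring” the paper argues is indispensable.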

The idea of LLMs being at all intelligent in any sense has been a contentious one, with the concept of language models being equated with ‘AI’ dating back to the 20th century, including as fun home computer projects. Much of the problem probably lies in humans projecting intelligent behavior onto these statistical models, turning LLMs into ‘counterfeit humans’, not helped by how closely generated text can resemble something written by a human, even if completely confabulated.

Thanks to [deshipu] for the tip.


Tech

Tesla promises HW3 owners a lite version of FSD, but it’s months away


A couple of days ago, Musk confirmed that Tesla’s third-generation vehicles can’t achieve Full Self-Driving (FSD) through software alone. Shortly after, roughly 3,000 Hardware 3 (HW3) owners across 29 European countries signed onto a collective legal claim centered around €6.5 million already spent on FSD purchases.

In response, Tesla has committed (via an X post) to bringing FSD V14 Lite to HW3 cars in international markets. The catch, however, is that the software won’t arrive until after the automaker is done with the U.S. rollout. Even then, it’s surrounded by more ifs and buts than buyers would appreciate.

Following future rollout of FSD V14 Lite for HW3 vehicles in the US, we plan on expanding V14 Lite to additional international markets.

This update ensures that HW3 vehicle owners will continue to benefit from ongoing software updates.

Since international rollout is subject to…

— Tesla (@Tesla) April 29, 2026

Is FSD V14 Lite what HW3 buyers paid for?

Not exactly. Many Tesla HW3 buyers paid up to €6,400 for FSD in 2019 on the promise that their vehicles would eventually drive themselves. The now-promised FSD V14 Lite, however, isn’t remotely the same thing.

Even though Tesla hasn’t confirmed exactly what FSD V14 Lite is, it sounds like a stripped-down version of the latest FSD software that operates on the same level as Level 2 driver-assistance systems. In other words, it might still require the driver to be attentive and in control at all times during a drive.

A Teslarati report casts an optimistic light on V14 Lite’s anticipated feature set, stating that the update could unlock better handling in complex urban scenarios, improved parking, and reverse-parking features (though I’ll remain skeptical about that). Even so, it still isn’t the unsupervised, hands-free driving experience that thousands of buyers paid for.

When will international HW3 owners get the promised update?

The U.S. rollout of the V14 Lite update is expected around the end of June 2026. Currently, international customers sit in a queue behind them, with three additional approval hurdles to clear: technical verification, regional adaptation, and regulatory approval.

None of the procedures has a timeline attached to it. Hence, international buyers might be waiting for months after Tesla is done with the U.S. rollout. To me, it looks like the V14 Lite update buys Tesla some time and reduces the legal pressure, at least for the time being. 


Tech

How Elon Musk Squeezed OpenAI: They ‘Are Gonna Want to Kill Me’


Elon Musk returned to the witness stand on Wednesday to continue telling his side of the story in his legal battle against OpenAI and its CEO Sam Altman. Under cross-examination from OpenAI’s lawyers, Musk was pressed on all the ways he tried to squeeze the organization over a 2017 power struggle that he ultimately lost. Around this time, Musk tried to hire away OpenAI researchers and stopped sending it funding he had previously promised, according to emails presented as evidence in the case.

As the cross-examination began, tension rippled through the courtroom. Judge Yvonne Gonzalez Rogers started the day by reprimanding someone in the gallery for taking a picture of Musk. OpenAI president and cofounder Greg Brockman sat behind his lawyers with a yellow legal pad in his lap, giving Musk a cold stare as he testified. Musk grew visibly frustrated on the witness stand, pausing frequently to tell OpenAI’s lawyer, William Savitt, that he saw his questions as misleading. Meanwhile, Savitt’s cross-examination was derailed by objections, technical issues, and Musk continuously claiming he doesn’t recall key details of OpenAI’s history.

Savitt showed the courtroom emails from September 2017 between Musk, Brockman, and researcher Ilya Sutskever discussing the formation of what would become OpenAI’s for-profit arm. In the thread, Musk demanded the right to choose four members of its board of directors, giving him more voting power than his cofounders, who would be left with three in total. “I would unequivocally have initial control of the company, but this will change quickly,” said Musk in one message. Sutskever wrote back rejecting the idea because he said he feared it would give Musk too much power.

Months before these negotiations started, Musk had halted payments to OpenAI, which was particularly difficult for the organization because he was then its main source of funding. Since 2016, Musk had been sending $5 million payments to OpenAI quarterly as part of a broader $1 billion pledge he made at the organization’s launch. But in the spring of 2017, he stopped sending the money. In another email from August 2017, the head of Musk’s family office, Jared Birchall, asked Musk if he should continue withholding it. Musk responded simply, “Yes.”

Around the time Musk lost the power struggle, emails show that he held discussions with executives at Tesla and Neuralink, his brain-computer interface company, about hiring OpenAI employees. At the time, Musk was still a board member of OpenAI.

Musk sent an email to a Tesla vice president in June 2017 about hiring an early OpenAI researcher, Andrej Karpathy. “Just talked to Andrej and he accepted as joining as director of Tesla Vision,” Musk wrote. “Andrej is arguably the #2 guy in the world in computer vision … The openai guys are gonna want to kill me, but it had to be done.”

On the stand, Musk argued that Karpathy was already interested in leaving OpenAI when he tried to recruit him to Tesla. “Andrej had made his decision. If he’s going to leave OpenAI, he might as well work at Tesla,” Musk said.

In October 2017, Musk also wrote to Ben Rapoport, a cofounder of Neuralink. “Hire independently or directly from OpenAI,” said Musk. “I have no problem if you pitch people at OpenAI to work at Neuralink.”

When pressed about this by Savitt, Musk argued that it would have been illegal for him not to allow Tesla and Neuralink to hire from OpenAI. “It’s illegal to restrict employment. It would be illegal to say you can’t employ people from OpenAI. You can’t have some cabal that stops people from working at the company they want to work at,” Musk said.


Tech

Today’s NYT Mini Crossword Answers for April 30


Looking for the most recent Mini Crossword answer? Click here for today’s Mini Crossword hints, as well as our daily answers and hints for The New York Times Wordle, Strands, Connections and Connections: Sports Edition puzzles.


Need some help with today’s Mini Crossword? It’s not too tough, though 4-Down threw me off at first (think tape recorder commands or buttons on a streaming service like Netflix). Read on for all the answers. And if you could use some hints and guidance for daily solving, check out our Mini Crossword tips.

If you’re looking for today’s Wordle, Connections, Connections: Sports Edition and Strands answers, you can visit CNET’s NYT puzzle hints page.

Read more: Tips and Tricks for Solving The New York Times Mini Crossword

Let’s get to those Mini Crossword clues and answers.

The completed NYT Mini Crossword puzzle for April 30, 2026.

NYT/Screenshot by CNET

Mini across clues and answers

1A clue: Low-quality A.I. content
Answer: SLOP

5A clue: “Whoa, settle down!”
Answer: CHILL

6A clue: Toyota rival
Answer: HONDA

7A clue: Subject of many Sun Tzu quotes
Answer: ENEMY

8A clue: “Now ___ talkin’!”
Answer: WERE

Mini down clues and answers

1D clue: Gleamed, as the sun
Answer: SHONE

2D clue: ___ notes (text accompanying a record)
Answer: LINER

3D clue: Person I used to be, before some personal growth
Answer: OLDME

4D clue: Pause’s counterpart
Answer: PLAY

5D clue: What “masticate” is a fancy word for
Answer: CHEW


Tech

China pauses AV permits after Baidu disruption


Baidu’s robotaxi operations in Wuhan have been suspended, sources tell the publication.

China has suspended issuing new licences for level-four autonomous vehicles (AVs) after more than 100 Baidu Apollo Go robotaxis abruptly stopped in Wuhan in late March, Bloomberg reported. Level-four AVs do not require human involvement to drive.

The suspension will prevent AV companies from adding new robotaxis to their fleet or expanding to new cities. It is unclear how long the suspension will last, sources told the publication.

Meanwhile, Baidu’s robotaxi operations in Wuhan have also been suspended while authorities investigate the incident.

The incident caused traffic disruptions and highway collisions, trapping passengers on the streets of Wuhan. Videos on social media show Baidu AVs lined up on the road, unmoving.

“Upon investigation, preliminary findings suggest system malfunctions as the cause of the incident”, read a translated statement from the Wuhan local traffic police department.

Apollo Go is the largest robotaxi provider in China, with several hundred vehicles in more than a dozen cities in the country.

Rival AV company Pony AI told Bloomberg that its services in Beijing, Shanghai, Guangzhou and Shenzhen are currently operating normally, with expansion plans progressing as planned. WeRide also said that its services in China are operating normally.

Baidu’s shares are down nearly 2pc, while Pony AI is down more than 3pc, WeRide is down by roughly 4.2pc and BYD is down by around 2.2pc.

Sources told the publication that the Apollo Go incident led to a high-level meeting between agencies this month involving the country’s Ministry of Industry and Information Technology, with regulators calling on local governments to conduct a full self-review and improve safety monitoring to prevent similar incidents in the future.

In 2024, Wuhan residents protested against Apollo Go’s deployment in the city, fearing job losses. In response, regulators paused AV approvals for several months before resuming in early 2025.

Meanwhile, a massive power outage in San Francisco in December disrupted Waymo services in the city. During the incident, California residents reported spotting Waymo vehicles stalled on the streets, adding to the gridlock.


Tech

GM brings Google Gemini to four million vehicles


The over-the-air update replaces Google Assistant across model year 2022 and newer Cadillac, Chevrolet, Buick, and GMC vehicles, but arrives under the shadow of GM’s data-sharing controversy and a looming FTC consent order.


General Motors has announced that Google Gemini is rolling out to approximately four million vehicles in the United States, in what the company is calling one of the largest deployments of a generative AI assistant in the automotive industry.

The update, announced on April 28 and arriving via over-the-air Play Store update, will replace the existing Google Assistant experience in model year 2022 and newer Cadillac, Chevrolet, Buick, and GMC vehicles equipped with Google Built-in.

“Gemini delivers conversational AI to millions of drivers across every segment and price point for a wide range of everyday needs. That kind of scale is only possible because of the connected vehicle foundation GM has built through OnStar over the past 30 years,” said Tim Twerdahl, Global Vice President of Product Management at General Motors.



“Later this year, GM will deliver a more deeply integrated AI experience shaped by OnStar intelligence.”

The scale claim is credible. The four-million eligible vehicle figure is almost certainly larger than any existing single-OEM deployment of a conversational AI assistant in production vehicles.


That reach is a direct product of GM’s decade-long investment in Android Automotive OS, the ‘Google Built-in’ platform that gives Buick, Chevrolet, Cadillac, and GMC vehicles native access to Google’s apps and services, and the connectivity infrastructure provided by OnStar, which has been GM’s in-car connectivity backbone since 1996.

The practical shift from Google Assistant to Gemini is one of conversational depth. Google Assistant in its current in-car incarnation is a command-recognition system: it works reliably when drivers use phrases it has been trained to recognise, and breaks when they do not.

Gemini is a large language model. It handles free-form requests, maintains context across a conversation, handles follow-up questions without restarting the interaction, and is substantially more robust to accent variation and non-standard phrasing.

For drivers, the most visible change will be in how the assistant handles multi-part requests and task-switching mid-conversation. GM’s press release illustrates this: asking for directions and simultaneously texting a family member, then refining the route to add a coffee stop with outdoor seating, all within a single spoken exchange.
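The contrast between a command-recognition assistant and a conversational one comes down to carrying state between turns. A toy illustration (not GM or Google code; the phrases and logic are invented for the example): a stateful handler lets "add a coffee stop" refine a route already in progress, whereas an isolated command matcher would have nothing to attach it to.

```python
# Toy sketch of conversational state: each turn can refer back to what
# earlier turns established (here, an active route). Purely illustrative.
class Conversation:
    def __init__(self):
        self.route = None  # context carried across turns

    def handle(self, utterance: str) -> str:
        u = utterance.lower()
        if "directions to" in u:
            # Start a route; remember the destination for later turns.
            self.route = u.split("directions to", 1)[1].strip()
            return f"Routing to {self.route}"
        if "coffee stop" in u:
            # A follow-up that only makes sense given earlier context.
            if self.route is None:
                return "No active route to modify"
            return f"Added a coffee stop on the way to {self.route}"
        return "Sorry, I didn't catch that"
```

A command-recognition system effectively discards `self.route` after every utterance, which is why refining a request mid-conversation breaks it.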


The assistant integrates with in-vehicle apps including Amazon Music, Apple Music, Spotify, YouTube, HBO Max, Hulu, and Prime Video, and can draw on web search to answer location and context-aware queries.

To receive the update, drivers must be connected to OnStar, signed into the Google Play Store on their infotainment system, and using US English as their assistant language. The update will roll out over several months and will initially be US-only, with additional markets and languages to follow.
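The eligibility conditions GM lists can be read as a simple conjunction. The sketch below expresses them as a check; the field names are hypothetical, not GM's actual API, and the model-year floor comes from the announcement's "2022 and newer" wording.

```python
# Illustrative only: the stated rollout conditions as a single predicate.
from dataclasses import dataclass

@dataclass
class Vehicle:
    model_year: int
    has_google_built_in: bool
    onstar_connected: bool
    play_store_signed_in: bool
    assistant_language: str

def eligible_for_gemini(v: Vehicle) -> bool:
    """True when every condition stated in the announcement holds."""
    return (
        v.model_year >= 2022
        and v.has_google_built_in
        and v.onstar_connected
        and v.play_store_signed_in
        and v.assistant_language == "en-US"  # initial rollout is US English only
    )
```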

For 2025 and newer models, access to basic OnStar voice features, and therefore to Gemini, is included in the standard OnStar Basics package at no additional charge for eight years.

GM is explicit that Gemini is an interim step. The company’s stated ambition, first outlined at its GM Forward event in October 2025, is to deploy a custom-built AI assistant fine-tuned on proprietary vehicle data and connected through OnStar, effectively a domain-specific model that knows every detail of your specific vehicle, can flag maintenance issues before they become problems, and can learn your personal preferences over time. That assistant is described as arriving ‘later this year.’


Gemini is the commercial bridge: it gives GM four million users of a meaningfully better in-car AI experience now, while the company continues to build the vehicle-specific layer. Architecturally, GM SVP of Software and Services Dave Richardson described the approach as taking a base model, training it on vehicle specifications, distilling it down, and running it on the vehicle.
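One plausible reading of that hybrid split, sketched here under stated assumptions (the routing rules and topic list are invented, not GM's design): a small distilled model on the vehicle handles vehicle-specific queries even offline, while open-ended requests fall back to the cloud model when connectivity allows.

```python
# Hypothetical router between an on-vehicle distilled model and a cloud LLM.
# Topic keywords and fallback behaviour are assumptions for illustration.
VEHICLE_TOPICS = ("tire pressure", "oil life", "maintenance", "warning light")

def route_query(query: str, connected: bool) -> str:
    q = query.lower()
    if any(topic in q for topic in VEHICLE_TOPICS):
        return "on-device"   # distilled model trained on vehicle specifications
    if connected:
        return "cloud"       # full LLM for open-ended requests
    return "on-device"       # degrade gracefully when connectivity drops
```

A split along these lines would explain why the architecture matters as connectivity varies across markets: the vehicle-specific layer keeps working when the cloud is unreachable.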

That hybrid on-vehicle and cloud architecture will matter as models scale, regulatory scrutiny of connected vehicle data tightens, and connectivity varies across markets.

The competitive context is crowded and accelerating. Stellantis is working with French AI firm Mistral on in-car assistants. Mercedes-Benz has integrated ChatGPT. Tesla has deployed xAI’s Grok across its fleet.

BMW has its own AI assistant programme. GM’s path is more incremental than Tesla’s vertically integrated approach: it is leveraging Android Automotive and Gemini while building its own layer on top. But the four-million-vehicle deployment scale is a genuine differentiator that none of its competitors can currently match on a like-for-like basis.


The announcement arrives in the shadow of a significant data controversy. In January 2025, the Federal Trade Commission took action against GM and OnStar over the collection and sale of precise geolocation and driving behaviour data to insurance companies, allegedly without clear consumer consent.

The consent order bars GM from selling such data without explicit permission for five years. GM’s data practices, including reports that it had shared Smart Driver scores with insurers, resulting in premium increases for drivers who had no idea their data was being sold, generated significant public and regulatory backlash.

Deploying an AI assistant that, by design, accesses vehicle data and can learn personal preferences raises the stakes of that history considerably. GM addresses this by stating that drivers will control what data the assistant can access, and that the integration is ‘privacy-focused.’

The credibility of those assurances will be judged by implementation: whether privacy controls are comprehensible, whether defaults favour the driver rather than the data pipeline, and whether the millions of existing vehicle owners receiving an OTA update are genuinely informed and given a real choice before their data is processed by a new AI layer.


The FTC consent order raises the regulatory bar for transparency here, and privacy advocates and regulators will be watching the Gemini rollout closely. GM’s ability to convert its OnStar infrastructure advantage into a genuine AI product leadership position will depend as much on rebuilding trust around data as it does on the quality of the assistant itself.



