
Apple Will Pay $250 Million For Failing To Deliver Its AI-Powered Siri On Time

Apple has agreed to pay $250 million to settle a class action lawsuit claiming the company misled US iPhone buyers into believing that the updated version of Siri it announced alongside Apple Intelligence would launch in 2024, according to the Financial Times. The company first showed off its more “personalized” Siri at WWDC 2024 but has still not shipped the new AI assistant almost two years later.

Assuming it’s approved by a judge, the settlement will cover a class that includes US buyers of the iPhone 16 lineup and the iPhone 15 Pro. It will offer financial relief to anyone who expected the upgraded Siri on their new iPhone, but Apple’s proposal notably doesn’t require the company to admit fault for advertising AI features it hasn’t shipped.


The company slowly rolled out components of the text editing, image generation, and ChatGPT integration it pitched as Apple Intelligence throughout 2024 and 2025, but a version of Siri that understands the context of what’s on your device and can take action in apps on your behalf never arrived. Apple didn’t publicly acknowledge that it would have to delay that Siri update until March 2025, more than five months after the launch of the iPhone 16, a phone the company marketed as capable of running Apple Intelligence.

After Apple announced the delay, it pulled ads it had run in the lead-up to the iPhone launch showing off the new Siri feature. The company now plans to finally offer the new Siri this year, largely thanks to a partnership with Google that lets Apple use Google’s Gemini models. The new Siri, along with a collection of other AI features, will reportedly be included in iOS 27.


Instructure hacker claims data theft from 8,800 schools, universities

The hacker behind a breach at education technology giant Instructure claims to have stolen 280 million records tied to students and staff from 8,809 colleges, school districts, and online education platforms.

Instructure is a cloud-based education technology company best known for its Canvas learning management system, which schools and universities use to manage coursework, assignments, grading, and communication.

Last Friday, Instructure disclosed that it was investigating a cyberattack and later revealed that it had suffered a data breach, during which users’ names, email addresses, and private messages were exposed.

The ShinyHunters extortion gang claimed responsibility for the attack and says it stole 280 million records for students, teachers, and staff.

Instructure listing on ShinyHunters data leak site

The threat actors have now published a list of 8,809 school districts, universities, and educational platforms whose Canvas instances were allegedly impacted by the attack, sharing record counts per institution with BleepingComputer.

The record counts for each educational institution range from tens of thousands to several million per institution.

BleepingComputer is not naming specific organizations listed by the threat actor, as we have not independently verified whether they were impacted by the breach.

The threat actor claims the data was stolen using Canvas data export features, including DAP queries, provisioning reports, and user APIs, and that they harvested hundreds of gigabytes of user records, messages, and enrollment data.
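The export surfaces named above are ordinary Canvas REST features, and Canvas paginates its REST responses with RFC 5988 Link headers, so bulk harvesting amounts to walking rel="next" links until they run out. A minimal sketch of that pagination step follows; the hostname and endpoint in the example are placeholders, not details from the breach.

```python
import re

def next_page(link_header):
    """Return the rel="next" URL from an RFC 5988 Link header, or None."""
    for part in link_header.split(","):
        m = re.match(r'\s*<([^>]+)>;\s*rel="next"', part)
        if m:
            return m.group(1)
    return None

# Hypothetical header of the form Canvas returns on paginated endpoints:
header = ('<https://example.instructure.com/api/v1/accounts/self/users?page=2>; rel="next", '
          '<https://example.instructure.com/api/v1/accounts/self/users?page=1>; rel="first"')
print(next_page(header))
```

A client keeps requesting whatever `next_page` returns until it comes back `None`, which is why a single stolen API token can enumerate an entire instance.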

While Instructure has not responded to repeated emails regarding the incident, some universities have begun issuing statements about the potential impact.


“CU is aware of a data breach involving Instructure, the parent company of Canvas, our learning management system. This reported data breach is a nationwide event affecting multiple institutions,” said the University of Colorado Boulder.

“At present, Rutgers has not been notified of any direct impact to our campus. Canvas remains available and operational to Rutgers faculty, staff, and students,” said Rutgers.

“An investigation is currently underway to determine what exactly happened and which systems were affected. It has not yet been confirmed whether data of Tilburg University students and staff has been impacted. Further questions have been submitted to the supplier to obtain more clarity,” said Tilburg University.

BleepingComputer has contacted Instructure again with additional questions and will update this story if we receive a response.

US-Irish trilateral research programme to receive $20m


A new all-island research programme will support Irish-US collaboration between researchers, innovators and industry partners through a $20m investment.

The ‘research translation and commercialisation initiative’ is a trilateral project funded by Research Ireland, Northern Ireland’s Department for the Economy and the US National Science Foundation (NSF) Directorate for Technology, Innovation and Partnerships (TIP).

The project falls under the US-Ireland R&D Partnership, a tri-jurisdictional initiative founded in 2006 with the aim of supporting collaborative research projects involving partners in Ireland, Northern Ireland and the US by bringing together government departments, funding agencies, academic institutions and industry to address shared scientific, economic and societal challenges.


The initiative, which will be open to current and past tripartite partnership awardee teams who have received their US support from NSF, is also part-funded through the Irish Government’s Shared Island Fund and supported by InterTradeIreland, the cross-border trade and business development body for all-island economic collaboration.

Taoiseach Micheál Martin, TD said: “The US-Ireland R&D Partnership is a powerful example of how sustained international cooperation delivers real benefits for our people, our economy and our research community.

“This new investment builds on 20 years of success and will help ensure that cutting-edge research developed across the island of Ireland and the United States can be translated into real-world solutions and high-value jobs.”

The new initiative, established as an expansion activity to support the translation of research outputs from the US-Ireland R&D Partnership into market-ready products, services and solutions, plans to fund research under the themes of cybersecurity, energy and sustainability, telecommunications, sensors and sensor networks, and nanoscale science and engineering.


First minister of Northern Ireland Michelle O’Neill said: “This new transatlantic initiative represents a significant opportunity to turn excellent research into real benefits for our economy and our communities, while strengthening the strong relationships we have built with partners in the US and across this island.”

The collaboration is also targeting the development of bespoke training programmes for affiliated researchers to help them to upskill in advancing their work along the translation and commercialisation path, with further funding opportunities available to selected participating teams to kickstart the creation of research-related start-ups.

“For nearly 20 years, the US-Ireland R&D Partnership has not only jointly funded numerous trilateral science and engineering research projects, it has also served as a model of how to successfully facilitate cross-border research and development,” said Brian Stone, the NSF’s chief of staff.

“Today’s announcement from NSF TIP, the Government of Ireland and Department for the Economy marks a natural next step in our transatlantic partnership, expanding our collaboration to accelerate the translation of projects into businesses and solutions, delivering significant scientific, economic and real-world benefits.”


The US-Ireland R&D Partnership has supported 107 collaborative research projects to date through $196m in combined government funding for research projects across an array of sectors. Examples include: research on next-generation communications and 6G networks conducted by University College Dublin, Queen’s University Belfast and Purdue University; work on sustainable animal health solutions by University of Tennessee, University College Cork and Queen’s University; and colorectal cancer research carried out by GE Global Research, Queen’s University and Royal College of Surgeons Ireland.


I tested the Xiaomi 17 Ultra’s camera and I don’t think I’ll ever go back to an iPhone

When it comes to flagship phones, the word “Ultra” has started to lose meaning. Every brand throws it around, but very few actually deliver something that feels… ultra. Take the Samsung Galaxy S26 Ultra, for instance. It’s a solid phone, sure, but exciting? Not quite. And that’s the bigger issue with the US market right now. Some of the most interesting Android flagships simply don’t make it here.

Meanwhile, brands like Vivo, Oppo, and Honor are quietly pushing smartphone cameras into territory that feels closer to dedicated cameras than ever before. And then there’s the Xiaomi 17 Ultra. After using it for a couple of weeks, one thing is clear: this isn’t just a phone with a great camera. It’s a camera that happens to be a phone. And honestly, it kind of feels like a modern-day revival of the Samsung Galaxy Camera.

If this thing officially launched in the US, it would shake things up in a big way.

Spec Sheet Flex, But Make It Real

The Xiaomi 17 Ultra doesn’t just show up with a spec sheet — it shows off. You’re looking at a Leica-tuned triple-camera setup led by a 50MP 1-inch Light Fusion 1050L sensor with an f/1.67 aperture and LOFIC HDR, which is basically a fancy way of saying it handles highlights and shadows like a champ. Then there’s the real party trick: a 200MP periscope telephoto (Samsung HP9, 1/1.4″) with a slick continuous optical zoom from 75mm to 100mm (around 3.2x to 4.3x), stretching all the way to a wild 400mm equivalent via in-sensor crop.
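As a quick sanity check of the zoom figures quoted above, here’s a sketch converting equivalent focal lengths into zoom multipliers. The ~23.5 mm base for the main camera is an assumption (the article doesn’t state it), chosen because it reproduces the quoted 3.2x–4.3x range.

```python
# Assumed full-frame-equivalent focal length of the main camera, in mm.
# Xiaomi doesn't state this in the article, so treat it as an estimate.
MAIN_EQUIV_MM = 23.5

def zoom_factor(equiv_mm, base_mm=MAIN_EQUIV_MM):
    """Zoom multiplier of a lens relative to the main camera's field of view."""
    return equiv_mm / base_mm

for mm in (75, 100, 400):
    print(f"{mm} mm equivalent ≈ {zoom_factor(mm):.1f}x")
```

Under that assumption, 75 mm works out to about 3.2x and 100 mm to about 4.3x, matching the article’s figures, while the 400 mm in-sensor crop lands around 17x.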


Rounding things out is a 50MP ultrawide with a 115° field of view and macro support, plus a surprisingly serious 50MP autofocus selfie camera up front. And yes, it shoots 8K at 30fps and 4K at 120fps with Dolby Vision and ACES Log, which is as close as a phone gets to saying, “Yeah, I can do cinema too.” Leica optics and color tuning run across all lenses, and that Leica partnership isn’t just branding either: it shows up in how the photos look, feel, and behave.

Daylight Drama, Minus the Drama

Let’s start with daylight shots, because this is where most phones already do well. The Xiaomi 17 Ultra does better. Images are sharp, detailed, and rich without looking artificially processed. You get two primary profiles: Leica Authentic and Leica Vibrant. I found myself leaning toward Vibrant more often, and here’s the thing: it doesn’t go overboard.

Colors pop, but they don’t scream. Greens look lively without turning neon, blues stay controlled, and overall contrast feels more… intentional.

Honestly, it’s a refreshing break from the oversharpened, overprocessed look that some flagships lean into. HDR performance is another highlight: even in tricky lighting, the phone balances highlights and shadows beautifully without flattening the scene.

Zoom Game That Actually Feels Like a Camera

This is where things start getting really fun. The combination of multiple lenses and a continuous optical zoom system means you’re not just jumping between fixed focal lengths. You’re actually working with something that feels closer to a real camera lens. From 1x to 2x, 3.2x, and even beyond, the results stay impressively sharp. Colors remain consistent across zoom levels, which is something many phones still struggle with.

And here’s the surprising part. I ended up using the camera at around 3.2x most of the time. It just hits that sweet spot for composition, perspective, and background separation.

Portraits That Don’t Try Too Hard

Portrait photography is another strong suit here, and it benefits massively from that telephoto hardware. You can shoot portraits using the tele lens for natural depth, or switch to portrait mode for additional processing. Either way, the results are excellent.

Edge detection is clean, subject separation looks natural, and the background blur doesn’t feel fake or overdone. In many cases, it genuinely holds its own against a decent DSLR setup.

What I really liked, though, is that you don’t always need portrait mode. Just using the telephoto lens gives you that natural compression and bokeh, especially for subjects like pets or candid shots.

Low Light, No Panic

While daylight photography is great, it’s great on a lot of other phones too. However, low-light photography is where this phone really flexes. That 1-inch sensor combined with the wide f/1.67 aperture allows it to pull in a ton of light. And the results show.


Even in challenging conditions with minimal lighting, the Xiaomi 17 Ultra manages to retain detail, control noise, and preserve the overall mood of the scene. Importantly, it doesn’t try to turn night into day. You still get that nighttime feel, just with better clarity and detail. Highlights are controlled, lens flare is minimal, and textures don’t get smudged into oblivion.

Ultrawide, But Actually Useful

The ultrawide camera here isn’t an afterthought. At 14mm, it captures a seriously wide field of view, which is great for landscapes, architecture, and group shots. Even better, image quality holds up surprisingly well, including in lower light.

That said, there’s one small annoyance. The placement of the ultrawide lens near the edge of the camera module means it’s very easy to accidentally get a finger in the frame. It’s not a dealbreaker, but definitely something to be mindful of.

The Photography Kit Pro

Speaking of the camera array, one of the best things Xiaomi did with this phone was to introduce the Photography Kit Pro, and the second best thing they did was to supply me with the kit, too. You get better ergonomics, physical controls for shooting, and an overall experience that makes you want to take more photos. It bridges that gap between smartphone photography and traditional cameras in a really satisfying way. The grip also doubles as a battery pack, which is incredibly useful during long shooting sessions.

There’s even a USB-C passthrough, so it’s easy to charge both the phone and grip simultaneously. That said, I wish Xiaomi had added data passthrough as well, letting you connect an external SSD while the grip is attached. Maybe in future iterations, they could also add a microSD card slot, or better yet, a full-sized SD card slot to appeal to the photographers out there.

Selfie Cam… Exists

Now, all isn’t perfect here, and that brings me to the selfies. It’s… fine. Just fine.

HDR can be a bit inconsistent, colors often lean a little too punchy, and while there’s an attempt to smooth out skin textures, the result feels a bit off.

Of course, photography is subjective, but personally, this is one area where I’d still pick a Google Pixel any day. Even the iPhone does a solid job if you prefer softer-looking images, as you can see in the comparison shot above.

The Best Camera You Can’t (Officially) Buy?

So… is this the best camera phone right now? If photography is your priority, it’s honestly very hard to argue against it. The Xiaomi 17 Ultra brings together industry-leading hardware, genuinely thoughtful image processing, full RAW support for those who like to tweak every pixel, and smart AI tools that actually feel useful instead of gimmicky. And the best part? It’s not just a one-trick pony. Beyond the camera, you’re still getting a proper flagship experience with a top-tier chipset, a gorgeous display, and battery life that comfortably goes the distance.


But here’s the frustrating bit: you can’t officially buy it in the US. And that’s a real shame. Because if a phone like this were widely available, it would force the likes of Apple and Samsung to push their camera systems further, faster. The Xiaomi 17 Ultra isn’t trying to be the most balanced smartphone out there. Instead, it’s aiming to be the best camera you can carry in your pocket. And after spending time with it, it’s hard not to feel like the US market is seriously missing out.

iPhone users will select their preferred AI model in iOS 27

Apple is rumored to be giving users the option to run various AI features in iOS 27 with third-party models as an alternative to Apple Intelligence.

Apple has been trying to catch up to the rest of the AI market, but it may not have to worry about doing so for iOS 27. If a new report is accurate, Apple will make it easier to use third-party alternatives throughout the operating system.

According to Bloomberg’s sources on Tuesday, users will be able to select from multiple third-party AI models, which can be used for various tasks in the operating system. It’s a change arriving in iOS 27, iPadOS 27, and macOS 27.

While users can already use ChatGPT for some actions on their iPhone, the new version will work with other models as well. These integrations have apparently included models from Anthropic and Google, the sources claim.


Those models will be tasked with answering queries, editing and generating text, and generating images, much like the existing capabilities of ChatGPT in iOS 26.

Extensions and the App Store

The choice will be available as part of “Extensions,” which will let users access the generative AI capabilities of installed apps via Apple Intelligence, including Siri, Writing Tools, and Image Playground, according to a message found in a test build.

For Siri, users will be able to select a different voice for conversations that use external models. This is to make it easier for users to quickly understand which AI source is handling the query.

As usual, Apple intends to warn users that it isn’t responsible for content generated by any of the selected third-party models.


While it will require users to install apps from their selected provider beforehand, Apple will also be making it easier for users to get on board: there’s word of a specific App Store section that will list compatible AI apps for download.

The connection to the App Store has been brought up before. Back in March 2024, there were murmurs of an AI App Store, a concept similar to what the new report describes.

Rumors of Siri supporting other third-party AI tools have also surfaced, including one March report mentioning the use of installed apps.

However, there’s also the question of whether users will actually take advantage of this capability in the first place.


While Apple has been behind in the AI race, it did move to catch up in January thanks to a multi-year deal with Google. Under it, Apple will use Google’s Gemini models and cloud technology to help flesh out Apple’s Foundation Models.

With WWDC 2026 on the horizon in June, we don’t have long to wait to see what Apple’s AI strategy will actually be.

The Italian Dubbing of ‘The Devil Wears Prada 2’ Has Stirred Up a Surprising Controversy

One thing is certain about The Devil Wears Prada 2: the ambitious undertaking of making a sequel to a cult-status film after 20 years has succeeded, at least as far as box office figures are concerned. The numbers speak for themselves, with $77 million generated in US theaters and another $157 million in the rest of the world since its April 29 release.

In the face of such a box office smash, this installment has inspired heated debates for days about its quality and comparisons to the original. In Italy, those arguments even extend to the dubbing of the film.

The controversy stems from the choice of voice actors in the Italian version of The Devil Wears Prada 2, who are themselves a nod to continuity; it’s the same cast as the original. Connie Bismuto is back to voice Anne Hathaway as Andy, Francesca Manicone dubs Emily Blunt as Emily, Gabriele Lavia is once again Stanley Tucci’s Nigel, and above all, Maria Pia Di Meo, the actress who has been the familiar and expressive voice of Meryl Streep in practically all the Italian adaptations of recent years—including the fearsome Miranda Priestly—returned for the sequel.

While many fans were happy to revisit these familiar voices, other viewers noticed some idiosyncrasies, largely due to the advanced age of the voice actors themselves, especially Di Meo and Lavia.


Di Meo, born in 1939, is undoubtedly a master of Italian dubbing, and her performances, linked to such great Hollywood actresses as Jane Fonda, Julie Andrews, Mia Farrow, Barbra Streisand, and Streep, have made her one of the most recognizable and expressive voices of cinema in that country’s theaters.

Yet some say her performance now reveals too much of the passage of time, and that there’s a disconnect between her 87-year-old voice and that of a character as energetic and sharp as Miranda (played, in the original, by a 76-year-old Streep). Could this eleven-year gap be too great to bridge? The same has been said of Lavia, who dubs Stanley Tucci with a result that often sounds a bit forced.

But more than a question of age, perhaps there’s a broader discussion to be had about dubbing in general and its effectiveness in an era in which downloads first and then streaming platforms have accustomed us to seeing more and more content in the original language.

Even just listening to the trailers released online for The Devil Wears Prada 2, a native Italian speaker will notice not only that the voices have aged into varying degrees of mismatch but also that the speed of the lines makes them hard to follow. And what about the adaptation of the dialog? “I’m a features editor at Runway,” Anne Hathaway’s Andy says proudly, but how many of those who live outside newsrooms know what a features editor is? And again, when Miranda’s second assistant says, “I have to pee, I drank a venti,” how many people outside of the US understand on the fly that she’s referring to a Starbucks drink?


Perhaps, then, what hasn’t aged so well is not so much the voices of individual dubbers but a dubbing system that no longer keeps pace—in most cases—with the speed and specificity with which the content itself is produced. In the face of this consideration, however, one cannot ignore that, at least in a market like Italy, especially at the cinema, people overwhelmingly go to see dubbed versions of movies.

So these same online debates perhaps serve to keep attention focused on how many countries outside of the US experience these films, an experience that deserves not only greater respect but also a quality that isn’t fully guaranteed at today’s frenetic pace.

This story originally appeared on WIRED Italia and has been translated from Italian.

New stealthy Quasar Linux malware targets software developers

A previously undocumented Linux implant named Quasar Linux (QLNX) is targeting developers’ systems with a mix of rootkit, backdoor, and credential-stealing capabilities.

The malware kit is deployed in development and DevOps environments that use npm, PyPI, GitHub, AWS, Docker, and Kubernetes, which could enable supply-chain attacks in which the threat actor publishes malicious packages on code distribution platforms.

Researchers at cybersecurity company Trend Micro analyzed the QLNX implant and found that “it dynamically compiles rootkit shared objects and PAM backdoor modules on the target host using gcc [GNU Compiler Collection].”

A report from the company this week notes that QLNX was designed for stealth and long-term persistence, as it runs in-memory, deletes the original binary from disk, wipes logs, spoofs process names, and clears forensic environment variables.
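That delete-the-binary, run-in-memory behavior leaves one observable tell on Linux: the /proc/&lt;pid&gt;/exe symlink of such a process points at a path suffixed with " (deleted)". The sketch below is a generic illustration of that check, not Trend Micro's detection logic, and a rootkit that hides its PID would of course evade it.

```python
import os

def deleted_exe_pids():
    """List (pid, exe_target) for processes whose on-disk binary is gone.

    Linux appends " (deleted)" to the /proc/<pid>/exe symlink target when
    the backing file has been unlinked -- one observable side effect of the
    delete-then-run-in-memory behavior described above.
    """
    if not os.path.isdir("/proc"):
        return []  # /proc is Linux-specific
    hits = []
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            target = os.readlink(os.path.join("/proc", pid, "exe"))
        except OSError:
            continue  # kernel thread, exited process, or permission denied
        if target.endswith(" (deleted)"):
            hits.append((int(pid), target))
    return hits

if __name__ == "__main__":
    for pid, target in deleted_exe_pids():
        print(f"pid {pid}: {target}")
```

Legitimate hits happen too (e.g. a daemon whose package was upgraded while it was running), so any match is a lead to investigate, not a verdict.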


The malware uses seven distinct persistence mechanisms, including LD_PRELOAD, systemd, crontab, init.d scripts, XDG autostart, and ‘.bashrc’ injection, ensuring it loads into every dynamically linked process and respawns if killed.

Overview of QLNX’s persistence mechanisms (source: Trend Micro)
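As a rough illustration of where those mechanisms live on disk, here’s a hedged triage sketch. The paths are standard Linux locations, not QLNX-specific indicators, and most will exist on any healthy system; the point is to enumerate what an investigator would diff against a known-good baseline.

```python
from pathlib import Path

# The persistence locations named in the report, mapped to standard paths.
# Their existence is expected on most systems; what matters is whether
# their contents match a known-good host.
CANDIDATES = {
    "LD_PRELOAD": "/etc/ld.so.preload",
    "systemd": "/etc/systemd/system",
    "crontab": "/etc/crontab",
    "init.d scripts": "/etc/init.d",
    "XDG autostart": "~/.config/autostart",
    ".bashrc injection": "~/.bashrc",
}

def triage(candidates):
    """Return {mechanism: (expanded_path, exists)} for each location."""
    return {
        name: (str(Path(p).expanduser()), Path(p).expanduser().exists())
        for name, p in candidates.items()
    }

if __name__ == "__main__":
    for name, (path, present) in triage(CANDIDATES).items():
        print(f"{'present' if present else 'absent ':7} {name}: {path}")
```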

QLNX features multiple functional blocks dedicated to specific activities, making it a complete attack tool. Its core components can be summarized as follows:

  • RAT core — Central control component built around a 58-command framework that provides interactive shell access, file and process management, system control, and network operations, while maintaining persistent communication with the C2 over custom TCP/TLS or HTTP/S channels.
  • Rootkit — Dual-layer stealth mechanism combining a userland LD_PRELOAD rootkit and a kernel-level eBPF component. The userland layer hooks libc functions to hide files, processes, and malware artifacts, while the eBPF layer conceals PIDs, file paths, and network ports at the kernel level. Both are deployed dynamically, with the userland rootkit compiled on the target system.
  • Credential access layer — Combines credential harvesting (SSH keys, browsers, cloud and developer configs, /etc/shadow, clipboard) with PAM-based backdoors that intercept and log plaintext authentication data.
  • Surveillance module — Keylogging, screenshot capture, and clipboard monitoring.
  • Networking and lateral movement — TCP tunneling, SOCKS proxy, port scanning, SSH-based lateral movement, and peer-to-peer mesh networking.
  • Execution and injection engine — Process injection (ptrace, /proc/pid/mem) and in-memory execution of payloads (shared objects, BOF/COFF).
  • Filesystem monitoring — Real-time tracking of file activity via inotify.
The rootkit architecture (source: Trend Micro)

After initial access, QLNX establishes a fileless foothold, deploys persistence and stealth mechanisms, and then harvests developer and cloud credentials.

By targeting developer workstations, attackers can bypass enterprise security controls and access the credentials that underpin software delivery pipelines.

Credential theft (source: Trend Micro)

This approach mirrors recent supply chain incidents in which stolen developer credentials were used to publish trojanized packages to public repositories.

Trend Micro has not provided details about specific attacks or any attribution for QLNX, so the deployment volume and specific activity levels of this new malware are unclear.

At the time of publication, the Quasar Linux implant is detected by only four security solutions, which flag its binary as malicious. Trend Micro has provided indicators of compromise (IoCs) to help defenders detect QLNX infections and protect against them.

Apple and Samsung are dominating smartphone sales so thoroughly that only one other company makes the top 10

The iPhone 17 was the best-selling smartphone in the first quarter, accounting for six percent of global sales. Apple’s iPhone 17 Pro Max ranked second, followed by the iPhone 17 Pro. Samsung grabbed fourth and fifth place with the Galaxy A07 G4 and Galaxy A17 5G, respectively.
Nuro receives driverless testing permit ahead of Uber robotaxi service launch

Nuro has been granted a permit to begin driverless testing of Lucid Gravity SUVs equipped with its autonomous tech on California public roads — vehicles that will eventually be used in Uber’s premium robotaxi service. But the Silicon Valley-based startup, backed by Nvidia and Uber, says it isn’t quite ready to begin.

The California Department of Motor Vehicles, the agency that regulates the testing and deployment of autonomous vehicles in the state, confirmed to TechCrunch on Tuesday that it modified Nuro’s driverless AV permit to include Lucid Gravity vehicles.

Nuro has held a driverless permit for six years, but it only applied to operate a low-speed delivery vehicle — a program that was scrapped when the startup pivoted its business model to focus on licensing its technology to companies like Uber.

This latest driverless permit allows Nuro to test the Lucid vehicles without a human safety operator behind the wheel. Nuro spokesperson David Salguero told TechCrunch the company expects to begin driverless testing later this year, without providing further information on timing.


The driverless permit is one of many regulatory hurdles that Nuro must clear before Uber can launch its premium robotaxi service. Nuro will also have to receive a driverless ride-hailing permit from the California Public Utilities Commission and a deployment permit from the DMV.

For now, Nuro and Uber are testing the Lucid vehicles in autonomous mode with a human safety operator in the driver’s seat. Last month, that testing was expanded to allow Uber employees to request an autonomous ride in a Lucid robotaxi — with a human safety operator still on board — through the Uber app.

As Nuro makes progress on testing, Uber has upped its commitment to Lucid.


When the three-way deal was announced in July 2025, Uber said it would invest $300 million in Lucid and buy 20,000 robotaxi-ready Gravity vehicles. That has since been expanded to $500 million and a minimum of 35,000 robotaxis, with the agreement changing to include at least 10,000 Gravity SUVs and 25,000 EVs built on Lucid’s upcoming mid-size platform.


Those EVs will be equipped with Nuro’s autonomous vehicle system, which is powered by Nvidia’s Drive AGX Thor computer. The Lucid Gravity robotaxi, which was revealed in January, is outfitted with high-resolution cameras, solid-state lidar sensors, and radars that help the self-driving system perceive the real-world environment and operate within it.

Uber has also made a multimillion-dollar investment in Nuro.

Lucid has delivered 75 engineering vehicles to Nuro and Uber, and testing and mileage accumulation is ongoing in several cities throughout the United States, the EV maker disclosed during its first-quarter earnings call on Tuesday.

Lucid said Tuesday it is on track for commercial robotaxi operations to begin in late 2026. It is possible that those robotaxi operations will not be driverless or will be limited in some other way, depending on regulatory approvals.


Still, Lucid executives struck a positive tone during the call, noting that all of the development and certification work is moving along as expected.



Tech

Google Chrome has been silently pushing a 4GB AI model to your device without asking


Google Chrome users who have noticed unusual disk activity or unexplained drops in available storage should look for a folder called “OptGuideOnDeviceModel” inside their Chrome directory. It holds roughly 4GB of weights for Google’s Gemini Nano LLM, downloaded by the browser without user consent.
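To see whether the model is taking up space on your own machine, you can sum the size of that folder directly. The sketch below is a minimal check, assuming Chrome’s default user-data locations (custom installs or portable profiles will live elsewhere); only the folder name `OptGuideOnDeviceModel` comes from the report above.

```python
from pathlib import Path

# Default Chrome user-data locations (assumptions; custom installs differ).
CANDIDATES = [
    Path.home() / ".config/google-chrome",                      # Linux
    Path.home() / "Library/Application Support/Google/Chrome",  # macOS
    Path.home() / "AppData/Local/Google/Chrome/User Data",      # Windows
]

def model_folder_size_bytes(root: Path) -> int:
    """Sum file sizes under any OptGuideOnDeviceModel folder inside root."""
    total = 0
    for folder in root.rglob("OptGuideOnDeviceModel"):
        total += sum(f.stat().st_size for f in folder.rglob("*") if f.is_file())
    return total

if __name__ == "__main__":
    for root in CANDIDATES:
        if root.is_dir():
            gb = model_folder_size_bytes(root) / 1e9
            print(f"{root}: {gb:.2f} GB of on-device model data")
```

Deleting the folder reclaims the space, though Chrome may re-download the model unless the relevant on-device AI flags are disabled.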

Tech

Seattle’s CopilotKit raises $27M, as some of the biggest names in tech adopt its AI agent protocol


CopilotKit co-founders Uli Barkai, head of growth, left, and CEO Atai Barkai. (CopilotKit Photo)

CopilotKit, a Seattle startup with roots in the former Techstars Seattle accelerator, has raised $27 million for technology that lets AI agents work inside existing software applications.

The company created AG-UI, an open standard for how AI agents communicate with software, letting agents generate interactive charts, update dashboards, and take actions inside apps. 

Companies including Google, Microsoft, Amazon, and Oracle have adopted the protocol. CopilotKit says more than half of the Fortune 500 use its tools, primarily through the open-source project but also as paying customers of its enterprise product, CopilotKit Enterprise Intelligence.

Co-founded in 2023 by brothers Atai Barkai and Uli Barkai, and originally incorporated as Tawkit Inc., CopilotKit has about 20 employees.

The funding, announced Tuesday, was led by Glilot Capital, NFX, and SignalFire. It includes $20 million in new Series A capital and $7 million in a previously unannounced seed round.


The startup is headquartered in Seattle, with most of its engineering team based locally. The company plans to use the new funding in part to expand its Seattle team.

AG-UI (Agent-User Interaction) is part of an emerging field of AI protocols that also includes MCP (Model Context Protocol), which connects agents to external tools; and A2A (Agent-to-Agent), which connects agents to other agents. AG-UI handles a different part of the process, connecting agents to human users inside software through application interfaces.
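The core idea behind an agent-to-user protocol is that the agent emits a stream of typed events — text deltas, state updates, actions — which the front end folds into live UI changes, rather than returning one final answer. The sketch below illustrates that streaming pattern only; the event names and payload shapes are invented for illustration and are not the actual AG-UI wire format.

```python
from dataclasses import dataclass, field
from typing import Iterator

@dataclass
class Event:
    """An illustrative agent-to-UI event (not the real AG-UI schema)."""
    type: str
    payload: dict = field(default_factory=dict)

def agent_run(prompt: str) -> Iterator[Event]:
    """A toy agent that streams UI-facing events instead of one final reply."""
    yield Event("run_started", {"prompt": prompt})
    yield Event("text_delta", {"text": "Updating the dashboard"})
    # An in-app action: the front end applies this patch to its own state,
    # e.g. redrawing a chart without the agent touching the UI directly.
    yield Event("state_patch", {"path": "/chart/series", "value": [1, 2, 3]})
    yield Event("run_finished")

def render(events: Iterator[Event]) -> list[str]:
    """A minimal front-end loop: fold the event stream into UI updates."""
    log = []
    for ev in events:
        log.append(ev.type)  # a real UI would dispatch on ev.type here
    return log
```

Because the contract is just an event stream, the same front end can sit in front of any agent framework that speaks it, which is the vendor-neutral pitch described above.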

CopilotKit’s core tools are open source, with more than 40,000 GitHub stars and what the company says are millions of installs per week. 

The startup generates revenue through CopilotKit Enterprise Intelligence, a self-hosted product that adds persistent conversation threads, analytics, and real-time learning capabilities. Named enterprise customers include Deutsche Telekom, Docusign, Cisco, and S&P Global.


Atai Barkai, the company’s CEO, previously worked on media infrastructure at Meta and led development of flagship iOS apps at Doximity. He holds bachelor’s and master’s degrees in physics from the University of Pennsylvania. Uli Barkai heads growth and partnerships and studied financial economics at Columbia and philosophy at Tel Aviv University. 

The two originally co-founded tawkitAI as an AI-powered podcast platform and pivoted to copilot development tools after open-sourcing their internal infrastructure and seeing strong developer interest. They joined Techstars Seattle’s 2023 cohort and later renamed the company CopilotKit.

CopilotKit competes with Vercel’s AI SDK, Assistant-ui, and OpenAI’s Apps SDK, among others. The company differentiates itself as a horizontal, vendor-neutral alternative that works with whatever agent framework, cloud provider, or backend a company already uses.

