Apple has agreed to pay $250 million to settle a class action lawsuit claiming the company misled US iPhone buyers into believing that the updated version of Siri it announced alongside Apple Intelligence would launch in 2024, The Financial Times writes. The company first showed off its more “personalized” Siri at WWDC 2024, but has yet to ship the new AI assistant almost two years later.
Assuming it’s approved by a judge, the settlement will cover a class that includes US buyers of the iPhone 16 lineup and the iPhone 15 Pro. The settlement will offer financial relief to anyone who expected the upgraded Siri on their new iPhone, but Apple’s proposal notably doesn’t require the company to admit fault for advertising AI features it hasn’t shipped.
The company slowly rolled out components of the text editing, image generation, and ChatGPT integration it pitched as Apple Intelligence throughout 2024 and 2025, but a version of Siri that understands the context of what’s on your device and can take action in apps on your behalf never arrived. Apple didn’t publicly acknowledge that it would have to delay that Siri update until March 2025, more than five months after the launch of the iPhone 16, a phone the company sold as being able to run Apple Intelligence.
After Apple announced the delay, it pulled ads it had run in the lead-up to the iPhone launch showing off the new Siri feature. The company now plans to finally offer the new Siri this year, largely thanks to a partnership with Google that lets Apple use the company’s Gemini models. The new Siri, along with a collection of other AI features, will reportedly be included in iOS 27.
Apple is rumored to be giving users the option to run various AI features in iOS 27 with third-party models as an alternative to Apple Intelligence.
Apple has been trying to catch up to the rest of the AI market, but it may not have to worry about doing so for iOS 27. If a report is true, Apple will be making it easier to use third-party alternatives throughout the operating system.
According to Bloomberg’s sources on Tuesday, users will be able to select from multiple third-party AI models, which can be used for various tasks in the operating system. It’s a change arriving in iOS 27, iPadOS 27, and macOS 27.
While users can already use ChatGPT for some actions on their iPhone, the new version will work with other models as well. These integrations will apparently include models from Anthropic and Google, the sources claim.
Those models will be tasked with answering queries, editing and generating text, and generating images. This is a lot like the existing capabilities of ChatGPT in iOS 26.
Extensions and the App Store
The choice will be available as part of “Extensions,” which will let users access the generative AI capabilities from installed apps via Apple Intelligence, including Siri, Writing Tools, and Image Playground, according to a message found in a test build.
For Siri, users will be able to select a different voice for conversations that use external models. This is to make it easier for users to quickly understand which AI source is handling the query.
As usual, Apple intends to warn users that it isn’t responsible for content generated by any of the selected third-party models.
While it will require users to install apps from their selected provider beforehand, Apple will also be making it easier for users to get on board. There’s word of a specific App Store section that will list compatible AI apps that users can download.
The connection to the App Store is something that has come up before. Back in March 2024, there were murmurs of an AI App Store, which the new report resembles in concept.
Rumors of Siri supporting other third-party AI tools have also surfaced, including one March report mentioning the use of installed apps.
However, there’s also the question of whether users will actually take advantage of this capability in the first place.
While Apple has been behind in the AI race, it did move to catch up in January thanks to a multi-year deal with Google. Under it, Apple would use Google’s Gemini models and cloud technology to help flesh out Apple’s Foundation Models.
With WWDC 2026 on the horizon in June, we don’t have long to wait to see what Apple’s AI strategy will actually be.
One thing is certain about The Devil Wears Prada 2: the ambitious undertaking of making a sequel to a cult film after 20 years has succeeded, at least as far as box office figures are concerned. The numbers speak for themselves, with $77 million generated in US theaters and another $157 million in the rest of the world since its April 29 release.
In the face of such a box office smash, this installment has inspired heated debates for days about its quality and comparisons to the original. In Italy, those arguments even extend to the dubbing of the film.
The controversy stems from the choice of voice actors in the Italian version of The Devil Wears Prada 2, who are themselves a nod to continuity; it’s the same cast as the original. Connie Bismuto is back to voice Anne Hathaway as Andy, Francesca Manicone dubs Emily Blunt as Emily, Gabriele Lavia is once again Stanley Tucci’s Nigel, and above all, Maria Pia Di Meo, the actress who has been the familiar and expressive voice of Meryl Streep in practically all the Italian adaptations of recent years—including the fearsome Miranda Priestly—returned for the sequel.
While many fans were happy to revisit these familiar voices, other viewers noticed some idiosyncrasies, largely due to the advanced age of the voice actors themselves, especially Di Meo and Lavia.
Di Meo, born in 1939, is undoubtedly a master of Italian dubbing, and her performances, linked to such great Hollywood actresses as Jane Fonda, Julie Andrews, Mia Farrow, Barbra Streisand, and Streep, have made her one of the most recognizable and expressive voices of cinema in that country’s theaters.
Yet some say her performance now reveals too much of the passage of time and that there’s a disconnect between her 87-year-old voice and that of a character as energetic and sharp as Miranda (played, in the original, by a 76-year-old Streep). Could this eleven-year gap be too great to bridge? The same has been said of Lavia, who dubs Stanley Tucci with a result that often sounds a bit forced.
But more than a question of age, perhaps there’s a broader discussion to be had about dubbing in general and its effectiveness in an era in which downloads first and then streaming platforms have accustomed us to seeing more and more content in the original language.
Even just listening to the trailers released online for The Devil Wears Prada 2, a native Italian speaker will notice not only that the voices have aged into varying degrees of mismatch but also that the speed of the lines makes them hard to follow. And what about the adaptation of the dialog? “I’m a features editor at Runway,” Anne Hathaway’s Andy says proudly, but how many of those who live outside newsrooms know what a features editor is? And again, when Miranda’s second assistant says, “I have to pee, I drank a venti,” how many people outside of the US understand on the fly that she’s referring to a Starbucks drink?
Perhaps, then, what hasn’t aged so well is not so much the voices of individual dubbers but a dubbing system that no longer keeps pace—in most cases—with the speed and specificity with which the content itself is produced. In the face of this consideration, however, one cannot ignore that, at least in a market like Italy, especially at the cinema, people overwhelmingly go to see dubbed versions of movies.
So these same online debates perhaps serve to keep attention focused on how many countries outside of the US experience these films, an experience that deserves not only greater respect but also a quality that isn’t fully guaranteed at today’s frenetic pace.
This story originally appeared on WIRED Italia and has been translated from Italian.
A previously undocumented Linux implant named Quasar Linux (QLNX) is targeting developers’ systems with a mix of rootkit, backdoor, and credential-stealing capabilities.
The malware kit targets development and DevOps environments that use npm, PyPI, GitHub, AWS, Docker, and Kubernetes. This could enable supply-chain attacks in which the threat actor publishes malicious packages on code distribution platforms.
Researchers at cybersecurity company Trend Micro analyzed the QLNX implant and found that “it dynamically compiles rootkit shared objects and PAM backdoor modules on the target host using gcc [GNU Compiler Collection].”
A report from the company this week notes that QLNX was designed for stealth and long-term persistence, as it runs in-memory, deletes the original binary from disk, wipes logs, spoofs process names, and clears forensic environment variables.
The malware uses seven distinct persistence mechanisms, including LD_PRELOAD, systemd, crontab, init.d scripts, XDG autostart, and ‘.bashrc’ injection, ensuring it loads into every dynamically linked process and respawns if killed.
Overview of QLNX’s persistence mechanisms (Source: Trend Micro)
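The persistence locations the report names are all places a defender can inspect by hand. As a rough illustration, here is a minimal audit sketch that checks whether those locations exist on the current host; the specific paths are common defaults chosen for illustration, not QLNX-specific indicators of compromise, and a hit only means the location deserves a manual review.

```python
import os

# Paths associated with the persistence mechanisms named in the report.
# These are illustrative defaults, not QLNX-specific IoCs.
CHECKS = {
    "ld.so.preload (LD_PRELOAD)": "/etc/ld.so.preload",
    "system crontabs": "/etc/cron.d",
    "init.d scripts": "/etc/init.d",
    "systemd user units": os.path.expanduser("~/.config/systemd/user"),
    "XDG autostart": os.path.expanduser("~/.config/autostart"),
    ".bashrc": os.path.expanduser("~/.bashrc"),
}

def audit():
    """Return (label, path) pairs for persistence locations that exist."""
    findings = [(label, path) for label, path in CHECKS.items()
                if os.path.exists(path)]
    # LD_PRELOAD set in the environment is unusual outside of debugging.
    preload = os.environ.get("LD_PRELOAD")
    if preload:
        findings.append(("LD_PRELOAD environment variable", preload))
    return findings

if __name__ == "__main__":
    for label, path in audit():
        print(f"review: {label} -> {path}")
```

Most of these paths exist on a healthy system too; what matters is reviewing their contents for unexpected entries, which this sketch leaves to the analyst.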
QLNX features multiple functional blocks dedicated to specific activities, making it a complete attack tool. Its core components can be summarized as follows:
RAT core — Central control component built around a 58-command framework that provides interactive shell access, file and process management, system control, and network operations, while maintaining persistent communication with the C2 over custom TCP/TLS or HTTP/S channels.
Rootkit — Dual-layer stealth mechanism combining a userland LD_PRELOAD rootkit and a kernel-level eBPF component. The userland layer hooks libc functions to hide files, processes, and malware artifacts, while the eBPF layer conceals PIDs, file paths, and network ports at the kernel level. Both are deployed dynamically, with the userland rootkit compiled on the target system.
Credential access layer — Combines credential harvesting (SSH keys, browsers, cloud and developer configs, /etc/shadow, clipboard) with PAM-based backdoors that intercept and log plaintext authentication data.
Surveillance module — Keylogging, screenshot capture, and clipboard monitoring.
Networking and lateral movement — TCP tunneling, SOCKS proxy, port scanning, SSH-based lateral movement, and peer-to-peer mesh networking.
Execution and injection engine — Process injection (ptrace, /proc/pid/mem) and in-memory execution of payloads (shared objects, BOF/COFF).
Filesystem monitoring — Real-time tracking of file activity via inotify.
The rootkit architecture (Source: Trend Micro)
After initial access, QLNX establishes a fileless foothold, deploys persistence and stealth mechanisms, and then harvests developer and cloud credentials.
By targeting developer workstations, attackers can bypass enterprise security controls and access the credentials that underpin software delivery pipelines.
Credential theft (Source: Trend Micro)
This approach mirrors recent supply chain incidents in which stolen developer credentials were used to publish trojanized packages to public repositories.
Trend Micro has not provided details about specific attacks or any attribution for QLNX, so the deployment volume and specific activity levels of this new malware are unclear.
At the time of publication, the Quasar Linux implant is detected by only four security solutions, which flag its binary as malicious. Trend Micro has provided indicators of compromise (IoCs) to help defenders detect QLNX infections and protect against them.
The iPhone 17 was the best-selling smartphone in the first quarter, accounting for six percent of global sales. Apple’s iPhone 17 Pro Max ranked second, followed by the standard iPhone 17 Pro. Samsung grabbed fourth and fifth place with the Galaxy A07 G4 and Galaxy A17 5G, respectively.
Nuro has been granted a permit to begin driverless testing of Lucid Gravity SUVs equipped with its autonomous tech on California public roads — vehicles that will eventually be used in Uber’s premium robotaxi service. But the Silicon Valley-based startup, backed by Nvidia and Uber, says it isn’t quite ready to begin.
The California Department of Motor Vehicles, the agency that regulates the testing and deployment of autonomous vehicles in the state, confirmed to TechCrunch on Tuesday that it modified Nuro’s driverless AV permit to include Lucid Gravity vehicles.
Nuro has held a driverless permit for six years, but it applied only to operating a low-speed delivery vehicle — a program that was scrapped when the startup pivoted its business model to focus on licensing its technology to companies like Uber.
This latest driverless permit allows Nuro to test the Lucid vehicles without a human safety operator behind the wheel. Nuro spokesperson David Salguero told TechCrunch the company expects to begin driverless testing later this year, without providing further information on timing.
The driverless permit is one of many regulatory hurdles that Nuro must clear before Uber can launch its premium robotaxi service. Nuro will also have to receive a driverless ride-hailing permit from the California Public Utilities Commission and a deployment permit from the DMV.
For now, Nuro and Uber are testing the Lucid vehicles in autonomous mode with a human safety operator in the driver’s seat. Last month, that testing was expanded to allow Uber employees to request an autonomous ride in a Lucid robotaxi — with a human safety operator still on board — through the Uber app.
As Nuro makes progress on testing, Uber has upped its commitment to Lucid.
When the three-way deal was announced in July 2025, Uber said it would invest $300 million in Lucid and buy 20,000 robotaxi-ready Gravity vehicles. That has since been expanded to $500 million and a minimum of 35,000 robotaxis, with the agreement changing to include at least 10,000 Gravity SUVs and 25,000 EVs built on Lucid’s upcoming mid-size platform.
Those EVs will be equipped with Nuro’s autonomous vehicle system, which is powered by Nvidia’s Drive AGX Thor computer. The Lucid Gravity robotaxi, which was revealed in January, is outfitted with high-resolution cameras, solid-state lidar sensors, and radars that help the self-driving system perceive the real-world environment and operate within it.
Uber has also made a multimillion-dollar investment in Nuro.
Lucid has delivered 75 engineering vehicles to Nuro and Uber, and testing and mileage accumulation is ongoing in several cities throughout the United States, the EV maker disclosed during its first-quarter earnings call on Tuesday.
Lucid said Tuesday it is on track for commercial robotaxi operations to begin in late 2026. It is possible that those robotaxi operations will not be driverless or will be limited in some other way, depending on regulatory approvals.
Still, Lucid executives struck a positive tone during the call, noting that all the development and certifications are moving along as expected.
Google Chrome users who have noticed unusual disk activity or unexplained drops in available storage should look for a folder called “OptGuideOnDeviceModel” inside their Chrome directory. It holds roughly 4GB of weights for Google’s Gemini Nano LLM, downloaded by the browser without user consent.
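Finding that folder and measuring its size is straightforward to script. The sketch below searches common default Chrome user-data locations for a directory with that name and totals its size; the candidate paths are assumptions about a standard install (profile locations and Chrome variants differ), so treat it as a starting point rather than a definitive check.

```python
import os

# Common default Chrome user-data locations; these are assumptions about a
# standard install and may differ per machine, profile, or Chrome variant.
CANDIDATES = [
    os.path.expanduser("~/.config/google-chrome"),                      # Linux
    os.path.expanduser("~/Library/Application Support/Google/Chrome"),  # macOS
    os.path.expandvars(r"%LOCALAPPDATA%\Google\Chrome\User Data"),      # Windows
]

def find_model_dirs(name="OptGuideOnDeviceModel"):
    """Locate directories named `name` under the candidate roots and sum
    the size of their contents. Returns a list of (path, total_bytes)."""
    hits = []
    for root in CANDIDATES:
        if not os.path.isdir(root):
            continue
        for dirpath, dirnames, _ in os.walk(root):
            if name in dirnames:
                target = os.path.join(dirpath, name)
                total = 0
                for dp, _, files in os.walk(target):
                    for f in files:
                        try:
                            total += os.path.getsize(os.path.join(dp, f))
                        except OSError:
                            pass  # file vanished or is unreadable
                hits.append((target, total))
    return hits

if __name__ == "__main__":
    for path, size in find_model_dirs():
        print(f"{path}: {size / 1e9:.2f} GB")
```

If the folder turns up and you don’t want the on-device model, Chrome’s optimization-guide settings are the place to look before deleting anything by hand.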
CopilotKit co-founders Uli Barkai, head of growth, left, and CEO Atai Barkai. (CopilotKit Photo)
CopilotKit, a Seattle startup with roots in the former Techstars Seattle accelerator, has raised $27 million for technology that lets AI agents work inside existing software applications.
The company created AG-UI, an open standard for how AI agents communicate with software, letting agents generate interactive charts, update dashboards, and take actions inside apps.
Companies including Google, Microsoft, Amazon, and Oracle have adopted the protocol. CopilotKit says more than half of the Fortune 500 use its tools, primarily through the open-source project but also as paying customers of its enterprise product, CopilotKit Enterprise Intelligence.
Co-founded in 2023 by brothers Atai Barkai and Uli Barkai, and originally incorporated as Tawkit Inc., CopilotKit has about 20 employees.
The funding, announced Tuesday, was led by Glilot Capital, NFX, and SignalFire. It includes $20 million in new Series A capital and $7 million in a previously unannounced seed round.
The startup is headquartered in Seattle, with most of its engineering team based locally. The company plans to use the new funding in part to expand its Seattle team.
AG-UI (Agent-User Interaction) is part of an emerging field of AI protocols that also includes MCP (Model Context Protocol), which connects agents to external tools; and A2A (Agent-to-Agent), which connects agents to other agents. AG-UI handles a different part of the process, connecting agents to human users inside software through application interfaces.
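At a high level, protocols in this space work by having the agent emit a typed stream of events that a front end consumes to update the interface incrementally. The toy sketch below illustrates that event-stream idea only; the event names and fields here are invented for illustration and are not the actual AG-UI event types, which are defined in the protocol’s own specification.

```python
import json

# Hypothetical event names, invented for illustration; the real AG-UI
# protocol defines its own event types in its specification.
def agent_events():
    """A fake agent run, emitted as a sequence of typed events."""
    yield {"type": "text_delta", "content": "Here is the chart you asked for."}
    yield {"type": "render_component", "component": "chart",
           "props": {"series": [3, 5, 2]}}
    yield {"type": "run_finished"}

def stream_to_ui(events):
    """Serialize events as JSON lines, as a UI client might receive them
    over server-sent events or a websocket and apply them one by one."""
    return [json.dumps(event) for event in events]

if __name__ == "__main__":
    for line in stream_to_ui(agent_events()):
        print(line)
```

The point of standardizing this layer is that any front end that understands the event vocabulary can render any compliant agent, regardless of which framework or model produced the events.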
CopilotKit’s core tools are open source, with more than 40,000 GitHub stars and what the company says are millions of installs per week.
The startup generates revenue through CopilotKit Enterprise Intelligence, a self-hosted product that adds persistent conversation threads, analytics, and real-time learning capabilities. Named enterprise customers include Deutsche Telekom, Docusign, Cisco, and S&P Global.
Atai Barkai, the company’s CEO, previously worked on media infrastructure at Meta and led development of flagship iOS apps at Doximity. He holds bachelor’s and master’s degrees in physics from the University of Pennsylvania. Uli Barkai heads growth and partnerships and studied financial economics at Columbia and philosophy at Tel Aviv University.
The two originally co-founded tawkitAI as an AI-powered podcast platform and pivoted to copilot development tools after open-sourcing their internal infrastructure and seeing strong developer interest. They joined Techstars Seattle’s 2023 cohort and later renamed the company CopilotKit.
CopilotKit competes with Vercel’s AI SDK, Assistant-ui, and OpenAI’s Apps SDK, among others. The company differentiates itself as a horizontal, vendor-neutral alternative that works with whatever agent framework, cloud provider, or backend a company already uses.
Even though the very concept of an ‘unpickable lock’ is about as plausible as making water not wet, this doesn’t take away from the intellectual thrill of devising solutions to picking attacks and subsequently circumventing those solutions. Case in point: the ‘unpickable’ traveling-key lock that [Works by Design] recently featured, sending a few copies off to lock pickers such as [Lock Noob], who gave picking it a try.
Many of the details and reasoning behind [Works by Design]’s lock design can be found in the original video, with [Lock Noob] going over the basic summary before getting to work trying to pick it.
Rather than trying to bump the tumbler lock mechanism or use another indirect approach, the focus here is on an impressioning attack. Although in this traveling-key mechanism the physical key is moved inside the lock, the pins of the tumbler lock still leave impressions on the brass blanks when the lock is gently forced to rotate, indicating where there’s still too much material.
The approach is thus to slowly file away these sections; interestingly, the plastic pin that [Works by Design] had added to dodge impressioning attacks didn’t prove much of an obstacle. After over an hour of turning-filing-turning-filing ad nauseam, the lock mechanism rotated, confirming that it had been defeated.
In the subsequent teardown of the lock it can be seen that the plastic pin is indeed rather fragile, with part of its top having been torn off. After replacing the damaged plastic pin with a fresh one, a foil-based impressioning attack was attempted by putting aluminium foil over a skeleton key, but this didn’t quite work out, as the pins come in sideways and thus don’t leave a useful impression.
Theoretically the pins would press down onto the soft foil, creating an almost immediate impression of the required key. Perhaps leaving a solid side on the blank would make it work, but this is an approach that would have to be refined.
Either way, it shows that ‘unpickable’ depends on your definition, as ‘1+ hour of filing with knowledge of bitting depths’ would be considered ‘unpickable’ by some. At least it’s not as dramatic as the 2020 [Stuff Made Here] ‘unpickable lock’ that we covered, before it got shredded by the [LockPickingLawyer], who produced a list of potential fixes for multiple easy exploits before even having to resort to impressioning.
Considering that traveling-key designs generally require at least a tedious impressioning attack, and that there are potential ways to address even this more substantially, a redesign featuring these changes would be rather interesting to see picked. If it can defeat the average lockpicking enthusiast, including those practicing the legal profession, it’s probably as close to ‘unpickable’ as can be before the bolt cutters and angle grinders are turned on any vulnerable parts that aren’t the lock itself.
Audio Advice Live 2026 returns to Raleigh, North Carolina from August 7-9, bringing three days of high-end audio and home theater demonstrations to one of the fastest-growing hi-fi events in the United States. Hosted by Audio Advice, the show will once again bring together leading audio brands, industry experts, and content creators for an immersive experience covering everything from two-channel systems and turntables to headphones and reference-level home theater installations.
For readers of eCoustics, the event is already on the radar. We were among the few publications to cover the show in-depth in 2025, with on-site reporting from Chris Boylan that captured the scale of the demonstrations and the growing enthusiasm around the Raleigh gathering. With its mix of serious gear, approachable demonstrations, and a healthy dose of Southern hospitality, Audio Advice Live has quickly become one of the more compelling destinations on the North American hi-fi calendar—hosted by one of the most influential specialty audio retailers in the country, whose footprint now stretches across the Southeast and into the Midwest and Nevada.
The company currently operates three retail locations in North Carolina (Raleigh, Charlotte, and Wilmington) and recently expanded into the Midwest with the acquisition of a specialty audio store in the suburbs of St. Louis. A fourth company-owned showroom is also under development in Nashville, Tennessee. In addition to its physical locations, the retailer runs a comprehensive online store featuring in-depth reviews, system planning tools, and step-by-step tutorials for home audio and home theater enthusiasts. Very few retailers offer that level of free knowledge and help in 2026.
Why Audio Advice Live Is Worth the Trip — and Why Raleigh Makes It Even Better
Audio shows can sometimes feel like insider events for industry veterans, but Audio Advice Live has carved out a reputation as one of the most accessible and hands-on hi-fi gatherings in North America. Across more than 60 listening and home theater demonstration rooms, attendees can experience everything from affordable two-channel setups and turntables to statement-level loudspeakers and reference home theater systems.
Major brands such as Sony, Epson, NAD, JVC, MartinLogan, McIntosh, Focal, Bowers & Wilkins, and JBL regularly participate, giving visitors the chance to hear new products demonstrated in carefully tuned rooms while speaking directly with product managers, engineers, and calibration specialists. For enthusiasts curious about system setup, room acoustics, streaming platforms, or even full home theater calibration, the show offers a rare opportunity to get practical advice from the people who actually design the gear.
The setting helps, too. Raleigh has quietly become one of the most appealing destinations for a summer audio trip, with a vibrant downtown filled with restaurants, breweries, museums, and excellent shopping within easy reach of the show. Visitors flying in will appreciate the convenience of Raleigh-Durham International Airport, widely considered one of the most efficient and traveler-friendly airports in the country.
The event routinely draws attendees from across the Southeast, but it also attracts serious hobbyists willing to travel from Florida and Texas to the Mid-Atlantic and even the Delmarva region for three days of immersive listening, industry insight, and a little Southern hospitality in the middle of summer.
Where and How to Attend Audio Advice Live 2026
Audio Advice Live 2026 will take place August 7-9, 2026 at the Sheraton Raleigh Hotel in downtown Raleigh, placing attendees within walking distance of restaurants, bars, museums, and shopping in the city’s revitalized downtown district.
Early bird tickets are already available, with several options depending on how much time you want to spend exploring the show’s listening rooms, seminars, and demonstrations.
Ticket Options (Early Bird Pricing):
3 Day All Access Pass: $45
1 Day Pass (Friday): $25
1 Day Pass (Saturday): $25
Child / Student Pass (with valid ID): Free
The three-day pass offers the best value, giving attendees full access to more than 60 listening rooms, home theater demonstrations, seminars, and live presentations from industry experts across the entire weekend.
Tickets can be purchased in advance through the event’s official website, and early registration is recommended as Audio Advice Live has grown rapidly and regularly attracts enthusiasts from across the South and Mid-Atlantic region.
The Bottom Line
If you’re serious about high-performance audio, home theater, headphones, or vinyl, Audio Advice Live 2026 is one of the most accessible and hands-on events in North America to experience it all in one place. The show runs August 7-9, 2026 at the Sheraton Raleigh Hotel in downtown Raleigh, where more than 60 listening rooms and seminars will showcase everything from attainable systems to ultra-high-end gear. Expect strong coverage from us on the show floor. We’ll be there listening, asking questions, and reporting on the best systems and surprises from one of the South’s fastest-growing hi-fi events.
In August 2017, Greg Brockman and Ilya Sutskever gathered at Elon Musk’s self-described “haunted mansion,” a 47-acre, $23 million estate in Hillsborough, south of San Francisco, to discuss the future of OpenAI. Actor Amber Heard, Musk’s then-girlfriend, had served the group whiskey and then dashed off with a friend, Brockman, OpenAI’s cofounder and president, testified in federal court during the trial for Musk v. Altman on Tuesday.
Ahead of the meeting, Musk gifted Brockman and Sutskever, OpenAI’s cofounder and former chief scientist, new Tesla Model 3 cars. “It felt like he was buttering us up,” Brockman said on the stand. “He wanted us to feel indebted to him in some way.” Sutskever tried to reciprocate for the occasion. The amateur artist presented Musk with a painting of a Tesla. Musk and the other cofounders wanted to establish a for-profit arm to entice investors to give them billions of dollars to pay for compute. But Musk also wanted control of the company, and Sutskever and Brockman objected to granting the Tesla CEO what they believed would be a “dictatorship” over the future of AI development. They proposed having shared control.
After several minutes of deliberation, Musk rejected their offer. “He stood up and stormed around the table,” Brockman recalled. “I actually thought he was going to hit me, physically attack me.” Musk grabbed the painting, said he would cut off his funding of the nonprofit until Brockman and Sutskever quit, and left the room, according to Brockman’s testimony. But that night, Musk’s so-called chief of staff Shivon Zilis called Brockman and Sutskever “to say it’s not over,” Brockman testified. “There were discussions of futures that included us.”
The story of the heated negotiations emerged as Brockman wrapped up his testimony on Tuesday. To OpenAI, the events at the mansion are representative of repeated instances of erratic behavior by Musk that they believe undermine his arguments about the company. Musk contends his roughly $38 million in donations to OpenAI were abused by Brockman and others on the path to creating the $852 billion for-profit venture now known for services such as ChatGPT and Codex. Brockman, OpenAI CEO Sam Altman, and OpenAI deny any wrongdoing, and the jury in Musk v. Altman could begin deliberating on an advisory ruling as soon as next week.
Advertisement
After Tuesday’s testimony, William Savitt, an attorney for OpenAI, told reporters that what Brockman had learned in 2017 was how tough it can be to meet one’s heroes. Brockman admired and respected Musk’s business acumen, but his desire for control was absolute and concerning, Savitt said. Marc Toberoff, an attorney for Musk, told reporters that the true concern was Brockman’s motivations for sharing control, with his desire for wealth having faced scrutiny in court a day earlier.
For his part, Brockman offered another story on Tuesday to underscore why he thought Musk was not up to the task of controlling an AI company. Brockman recalled then-OpenAI researcher Alec Radford showing Musk an early version of an AI chatbot that didn’t generate responses that he liked. Musk “kept saying this system is so stupid, that a kid on the internet could do better,” Brockman said. Radford “was absolutely crushed” and “demoralized” to the point that he almost quit the AI research field altogether, Brockman said. Brockman and Sutskever “spent a lot of time” rebuilding his confidence. Musk’s inability to see the potential in the early technology—which eventually became the basis for ChatGPT—made him unfit to control OpenAI, in Brockman’s view. “You needed to dream a little bit,” Brockman said. And Musk hadn’t shown that he could.
Boardroom Fights
Brockman said Tuesday that he, Sutskever, and Altman considered voting Musk off the OpenAI nonprofit board as negotiations with him about a for-profit sibling company dragged on for months. They would meet again over whiskey at Musk’s mansion to discuss alternative funding options. There was agreement over what not to do, but little on what to do instead. But Brockman and Sutskever decided removing Musk felt “wrong,” Brockman testified. Eventually, Musk left on his own after deeming OpenAI was on a path of “certain failure,” according to an email he wrote in early 2018.
Zilis, then an adviser to both OpenAI and Musk, kept him informed about developments at the AI venture in the years to come. “She was proxy Elon in some ways,” Brockman said, referring to her as “a friend” who he had first met in 2012 or 2013.