Bringing AI agents into the enterprise software development lifecycle is fast becoming the norm. As developers experiment with new platforms, organizations are exposed to potential security and orchestration failures. Systems that work in pilots may fail once the agents start working with real-time data.
Legacy tech giant IBM is one of several companies trying to address that gap by introducing more structure into how these workflows run. Yesterday, it announced the global launch of Bob, its AI-powered software development platform designed to write and test code across the development cycle. Bob is already in use by more than 80,000 IBM employees, after starting with just 100 internal users in summer 2025.
Bob introduces a structured layer that constantly pauses for human-led checkpoints, yet by harnessing AI models to perform agentic tasks, IBM says it has saved some teams up to 70% of time “on selected tasks…equaling an average time savings of 10 hours per week.”
Specific models supported include IBM’s own Granite series, Anthropic’s Claude, some from French AI firm Mistral and other smaller distilled models — no Alibaba Qwen or other fully open source ones.
This approach reflects a shift in how enterprises want to approach AI-led development: to build systems that not only build applications but also execute complex, multi-step workflows that do not rely on a single model or a single orchestration framework. It provides a structured, guarded approach to automation that seeks to center humans more in the process and fill audit gaps.
Neal Sundaresan, general manager, Automation and AI at IBM, told VentureBeat in an exclusive interview that a large part of using AI for software development is being systematic.
“Model capability alone isn’t enough,” Sundaresan said. “How you deploy it, how you structure context, and how you keep humans in the loop is what determines whether AI actually delivers.”
That divide is shaping how enterprises choose AI tools, whether they prioritize flexibility and experimentation or reliability and auditability.
Varying approaches to AI-led development
A growing class of open or autonomous agent systems has pushed the boundaries of what developers can do. They can now run extended or stateful workflows without much human intervention.
The rise of OpenClaw showed enterprises how far experimentation can go, especially when trained on local data and run in sandboxes. But it also forced a trade-off between easier agent and workflow creation on one hand and security on the other.
Some companies have embraced this spirit of experimentation.
Enterprise providers like Nvidia chose to embrace OpenClaw-like systems by adding a fence around the sandbox environment that runs autonomous agents, using NemoClaw. Kilo launched Kilo Claw, aimed at providing security for autonomous agents. OpenAI, in its updated Agents SDK, added support for sandbox agent implementations that mirror a lot of the usage patterns of systems like OpenClaw.
Sundaresan said enterprises continue to experiment with how they want to approach coding and agent building. He doesn’t want to close the door on fully autonomous agents proactively completing tasks, but he believes enterprises will want to exercise more caution as well.
“If you tell me that the final answer will be OpenClaw, then we will get there,” he said. “But it’s better to open the gate slowly than say, ‘oops, how do I close it now?’”
Bob reflects that thought process, and the broader shift underway among enterprises.
How Bob compares
Bob acts as a coding platform, but unlike similar products, it aims to standardize and govern the agent workflows created on it.
Tools like Cursor and Claude Code position the user at the start of the task: writing the prompts, chaining steps and debugging. LangGraph works similarly while also letting teams define agent flows.
The difference is not about capabilities but about control, and whether the system enterprises use explores potential solutions or delivers predictable execution.
With those tools, the human employee starts and ends the process. If the agent is unable to complete its task or makes a mistake, the issue is handled after the fact.
Bob, on the other hand, essentially pre-structures the development lifecycle into role-based stages. The agents will often check in with the user for approval as a natural workflow checkpoint. Sundaresan said the idea is to combine the human and automated workflows.
What is becoming clear is that the next phase of enterprise AI no longer relies on model power, but rather on how well tools are designed to balance autonomy and control.
Pricing and availability
As mentioned previously, Bob is now available in all regions where IBM does business. IBM's pricing structure for Bob consists of four primary per-user subscription tiers and is built around its own internal credits system called "Bobcoins," which IBM says provides transparency and predictability in usage billing.
These are set at a fixed valuation of 1 Bobcoin per $0.50 USD. Users consume these coins by performing specific actions, such as generating code, running commands, or performing file operations. If a user exhausts their balance, they must upgrade their plan to continue using the service.
Here are the plans currently offered and how many Bobcoins the user obtains by subscribing to each tier.
30-day free trial with 40 Bobcoins
Pro plan at $20 per month with 40 Bobcoins
Pro+ plan at $60 per month with 160 Bobcoins
Ultra plan at $200 per month with 500 Bobcoins
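At the stated fixed rate of 1 Bobcoin per $0.50, each tier's coin allowance has a face value that can be compared against its monthly price. A small sketch illustrating that arithmetic (the tier names and figures are taken from the list above; the rate is IBM's stated valuation):

```python
# Face value of each tier's Bobcoin allowance at the fixed rate
# of $0.50 per Bobcoin, compared with the tier's monthly price.
BOBCOIN_USD = 0.50

plans = {
    "Free Trial": (0, 40),    # (monthly price in USD, Bobcoins included)
    "Pro": (20, 40),
    "Pro+": (60, 160),
    "Ultra": (200, 500),
}

for name, (price, coins) in plans.items():
    face_value = coins * BOBCOIN_USD
    print(f"{name}: ${price}/mo includes {coins} Bobcoins "
          f"(face value ${face_value:.2f})")
```

By this arithmetic the Pro tier's 40 coins exactly match its $20 price, while the Ultra tier's 500 coins carry a $250 face value against a $200 price, which is presumably the volume incentive.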
All standard plans provide access to core features including specialized agentic modes, literate coding, the Bob Shell for intelligent CLI workflows, and Model Context Protocol (MCP) integration.
While all individual plans are restricted to a single user, an Enterprise plan is available through sales contact, offering centralized team management, flexible role assignments, and the ability to distribute Bobcoins across an organization.
Enterprise subscribers receive additional benefits such as priority support and a dashboard for tracking entitlements and usage.
The latest PowerToys update brings new tools like Power Display for multi-monitor brightness control and Grab and Move for simpler window handling, along with refinements to modules like Command Palette and Keyboard Manager.
A new rumor suggests Apple Vision Pro hardware may be dead, but the dissolution of a team doesn’t necessarily mean that pipeline is dead. If anything, it’s business as usual.
Whenever Apple releases a new product category, there seems to be this industry drive to find its weak points and jab at it until it dies. Apple Vision Pro may not be a blockbuster, but it is the entry point to Spatial Computing, which Apple still believes to be its future.
According to a report from MacRumors, Apple Vision Pro hardware as it stands in April 2026 may truly be dead. The story suggests that Apple has likely given up on the platform due to a lack of consumer interest after the M5 update.
The evidence presented is an anonymous tip about changes to the Apple Vision Pro team. Apparently, Apple has redistributed the team to other projects, including Siri. We believe what MacRumors has been told, and we know our friends over there do good work.
Let’s examine the big picture.
A reorganization, not a death
The Apple Vision Pro is a very strange product for Apple. First off, it had a dedicated team for developing just this one piece of hardware, which is unusual for Apple.
In short, there isn’t a dedicated iPhone, HomePod, Apple TV, or iPad team, yet there is (or was) one for Apple Vision Pro hardware. Everyone contributes to the development of each new product, except for Apple Vision Pro.
The Apple Silicon team develops chips, the design team works on how the product looks, the software team puts together the operating system. They’re all working to create Apple’s next-generation of hardware rather than being siloed into specific product divisions.
Rockwell oversaw Apple Vision Pro and AI development
Apple Vision Pro was different. It was given special attention by Tim Cook, as he saw it as the future of Apple and Spatial Computing.
Mike Rockwell’s Technology Development Group was renamed the Vision Products Group, notably not the Apple Vision Pro Products Group, when it took up the task of building Apple Vision Pro.
The operating system at the core of Apple Vision Pro, visionOS, is still under Rockwell’s oversight. Even if Apple Vision Pro doesn’t see a hardware revision soon, the OS will continue to be updated.
Tech has to catch up
We still don’t know if Apple Vision Pro is considered a flop internally, but it sold somewhere north of 600,000 units in its first year, and more since. Whatever Apple’s success metrics, the device is out there and being used by consumers and enterprises alike.
After the M5 model was released in October, there was likely little reason to keep a dedicated team to support and expand on this particular form factor. Apple is now in a holding pattern as it waits for modern technology to catch up with its demands for a true second-generation model, or its smart glasses concept.
Jony Ive shared that Apple Watch was in development for years because it couldn’t be built with the technology available at the time. That first iteration couldn’t come out until the display, battery, and housing could all fit into that unit.
The same is true for a next-generation Apple Vision Pro-like product. Apple most likely has plenty of ideas and prototypes for what is next, but getting it smaller, lighter, and keeping it just as powerful likely isn’t possible today.
Apple also doesn’t need a dedicated product team just for research and development. Prototyping happens within its own department, so keeping a talented group of engineers tied to a product that could be a year or more away from being realized is wasteful.
Apple’s Spatial Computing ambitions are still high
Rockwell is now overseeing AI and Siri development, and it has been reported that parts of the Apple Vision Pro team joined him as early as April 2025. That’s where Apple’s focus has been since WWDC 2024, so of course the best and brightest are there.
Apple Vision Pro is stuck in limbo until better technology comes along
Eventually, smart glasses could collide with Spatial Computing development and result in the long-rumored Apple Glass, or AR glasses. For now, the technology is nowhere near ready.
Apple has clearly not given up on its AR and Spatial Computing ambitions. Its job listings are filled with AR, VR, and Vision positions. There’s some keyword spam in here, but we stopped counting at 200 positions that directly apply to a headset or glasses of some sort.
There is also little doubt that visionOS 27 will get dedicated time during WWDC 26. Also, Apple has spent a lot of time getting an entire streaming and entertainment platform built around Apple Immersive Video.
As always, what Apple Vision Pro lacks the most is developer support. I hope that WWDC shows some signs of life in that area, but only time will tell.
For now, I can confidently say that Apple isn’t abandoning Apple Vision, even if Apple Vision Pro has hit the end of the line in this form factor. The overall “vision” product may not be updated for a year or more as we wait for technology to catch up, but that doesn’t mean Apple has given up on the concept.
OPPO India has started rolling out the April update to ColorOS 16, bringing several useful changes for consumers. The release focuses on improving the photo and video experience, along with one other notable addition.
UEFA Champions League Watermark in Photos
One of the update’s headline features is a UEFA Champions League watermark. Users can apply the themed watermark to their images directly from the Photos application, giving their pictures a more professional, sporting look.
Improved Video Speed Control
The update brings improved video speed controls in the Photos app:
Four playback speed options available: 0.5× for slower viewing, 1.0× for normal playback, 1.5× for slightly faster viewing, and 2.0× for fast playback.
Controls can be used directly during video playback.
Helps in reviewing clips more easily.
Useful for skipping long videos or focusing on key moments.
Offers better control for regular video users.
Rollout Timeline and Supported Devices
OPPO is rolling out the ColorOS 16 April update in phases. The rollout began on April 8 and will run until April 30; when a given phone receives it depends on device and region. Here’s the list of phones on which it can be downloaded:
Well, it took long enough — but YouTube is finally doing something really nice for its free users. Picture-in-picture mode, the feature that lets you shrink a video into a floating mini-player while you go about your phone life, is rolling out globally to all users over the coming months — no subscription or premium paywall is required. Just you, your video, and the freedom to check your messages without the whole thing grinding to a halt. For anyone outside the US who has spent years watching Premium subscribers float their videos around like smug little royals, this one’s for you.
The rollout covers longform, non-music content on both Android and iOS — and honestly, that caveat about music being Premium-only is fair enough. YouTube Music needs something to justify its existence.
Why this is a bigger deal than it sounds
Picture-in-picture might sound like a tiny upgrade, but it reshapes how you use YouTube on your phone. Following a recipe while your hands are a mess? Let it hover. Listening to a long podcast or video essay while replying to messages? Keep it floating. You’re no longer forced to choose between watching something and actually using your phone like a functional human being.
For the longest time, this was locked behind Premium — one of those subtle nudges to justify the monthly fee. Making it free now feels deliberate. YouTube clearly knows its ad-supported experience needs to feel less restrictive if it wants people to stick around rather than drift toward ad blockers or workarounds. Consider this a small olive branch, but one that improves the experience.
How to actually get it working
Using it feels almost suspiciously easy. Start a video, swipe up or tap the home button, and it instantly shrinks into a floating mini-player you can drag around wherever you like. That’s it.
If it doesn’t show up right away, updating your YouTube app should do the trick. On iPhone, you’ll also need iOS 15 or later. The rollout is happening in phases, so it might take a bit to reach everyone. Either way, free YouTube just got a little less frustrating — and honestly, it’s about time.
Galax announced the original news via a message on its Brazilian website and advised customers to contact Palit’s official channels for all support and service inquiries. The company did not provide further details, instead emphasizing that both Galax and Palit are authorized Nvidia partners, adding that the restructuring would ensure…
Multiple official SAP npm packages were compromised in what is believed to be a TeamPCP supply-chain attack aimed at stealing credentials and authentication tokens from developers’ systems.
Security researchers report that the compromise impacted four packages, with the versions now deprecated on NPM:
@cap-js/sqlite – v2.2.2
@cap-js/postgres – v2.2.2
@cap-js/db-service – v2.10.1
mbt – v1.2.48
These packages support SAP’s Cloud Application Programming Model (CAP) and Cloud MTA, which are commonly used in enterprise development.
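Teams that depend on these packages can check a project's lockfile against the reported versions. The sketch below is one way to do that, not official remediation tooling; the package/version pairs come from the list above, and the parsing assumes npm's package-lock.json v2/v3 format, which keys installed packages by their `node_modules` path:

```python
import json

# Compromised versions reported by Aikido and Socket (see list above).
COMPROMISED = {
    "@cap-js/sqlite": "2.2.2",
    "@cap-js/postgres": "2.2.2",
    "@cap-js/db-service": "2.10.1",
    "mbt": "1.2.48",
}

def find_compromised(lockfile_path="package-lock.json"):
    """Return (name, version) pairs in the lockfile matching the list."""
    with open(lockfile_path) as f:
        lock = json.load(f)
    hits = []
    # npm lockfile v2/v3 keys entries by path, e.g.
    # "node_modules/@cap-js/sqlite" or nested "node_modules/a/node_modules/b".
    for path, meta in lock.get("packages", {}).items():
        name = path.rsplit("node_modules/", 1)[-1]
        if name and COMPROMISED.get(name) == meta.get("version"):
            hits.append((name, meta["version"]))
    return hits

if __name__ == "__main__":
    for name, version in find_compromised():
        print(f"COMPROMISED: {name}@{version}")
```

A non-empty result means the affected version was installed; per the researchers' guidance, that warrants rotating npm and GitHub tokens, SSH keys, and cloud credentials, not just upgrading the package.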
According to new reports by Aikido and Socket, the compromised packages were modified to include a malicious ‘preinstall’ script that executes automatically when the npm package is installed.
This script launches a loader named setup.mjs that downloads the Bun JavaScript runtime from GitHub and uses it to execute a heavily obfuscated execution.js payload.
The payload is an information-stealer used to steal a wide variety of credentials from both developer machines and CI/CD environments, including:
npm and GitHub authentication tokens
SSH keys and developer credentials
Cloud credentials for AWS, Azure, and Google Cloud
Kubernetes configuration and secrets
CI/CD pipeline secrets and environment variables
The malware also attempts to extract secrets directly from the CI runner’s memory, similar to how TeamPCP extracted credentials in previous supply-chain attacks.
“On CI runners, the payload executes an embedded Python script that reads /proc//maps and /proc//mem for the Runner.Worker process to extract every secret matching “key” :{ “value”: “…”, “isSecret”:true} directly from runner memory, bypassing all log masking applied by the CI platform,” explains Socket.
“This memory scanner for secrets is structurally identical to the one documented in the Bitwarden and Checkmarx incidents.”
Once data is collected, it is encrypted and uploaded to public GitHub repositories under the victim’s account. These repositories include the description, “A Mini Shai-Hulud has Appeared”, which is also similar to the “Shai-Hulud: The Third Coming” string seen in the Bitwarden supply chain attack.
GitHub repos created with a description of “A Mini Shai-Hulud has Appeared” Source: Aikido
The malware also relies on GitHub commit searches as a dead-drop mechanism to retrieve tokens and gain further access.
“The malware searches GitHub commits for this string and uses matching commit messages as a token dead-drop,” explains Aikido.
“Commit messages matching OhNoWhatsGoingOnWithGitHub: are decoded into GitHub tokens and checked for repository access.”
Similar to previous attacks, the deployed payload also includes code to self-propagate to other packages.
Using stolen npm or GitHub credentials, it attempts to modify other packages and repositories it gains access to, and injects the same malicious code to spread further.
Researchers have linked this attack with medium confidence to the TeamPCP threat actors, who used similar code and tactics in previous supply-chain attacks against Trivy, Checkmarx, and Bitwarden.
While it is unclear how the threat actors compromised SAP’s npm publishing process, Security Engineer Adnan Khan reports that an NPM token may have been exposed via a misconfigured CircleCI job.
BleepingComputer contacted SAP to learn how the npm packages were compromised, but did not receive a reply at the time of publication.
The London-founded startup wants to replace the entire construction value chain, from architect’s brief to move-in-ready building, using AI and a purpose-built on-site robot called the Mantis.
Construction is the world’s largest industry by value and one of its least automated. A typical building project today involves roughly the same sequence of hand-offs, on-site labour, and bureaucratic bottlenecks that it did half a century ago. Productivity in the sector has barely improved in 50 years, even as manufacturing, logistics, and services have been transformed by software and automation. That stagnation is what All3 was built to end.
The European construction robotics startup today announced a $25 million seed funding round. The round is led by RTP Global, the early-stage firm whose portfolio includes Datadog, Delivery Hero, and SumUp, with significant participation from SuperSeed and additional investment from Begin Capital, s16vc, and VNV Global. Jelmer de Jong, Partner at RTP Global, has joined the All3 board.
Founded by Rodion Shishkov, a serial entrepreneur with a background in industrial robotics and retail logistics, All3 operates across Europe with offices in Berlin and Zug and R&D facilities in London and Belgrade. It emerged from stealth in mid-2025, at which point it had already begun testing its robotic system in Belgrade.
All3’s pitch rests on vertical integration: unlike startups that sell one piece of the construction puzzle to builders who still manage the rest, All3 is positioning itself as an end-to-end replacement for the traditional construction value chain. “The developers find the sites and handle permits and financing, while we can do the rest,” Shishkov told The Robot Report in June 2025.
The company’s system has three integrated components. First, an AI-powered design platform that translates a brief or site address into a fully compliant building design, optimising for space, local planning regulations, and robotic production constraints. Second, robotic factories, described by All3 as compact, modular production cells, that fabricate custom timber composite components with claimed 0.2mm precision, without the need for programmer intervention. Third, the All3 Mantis: an autonomous legged robot with a 100kg payload and 4-metre reach, purpose-built for on-site assembly, covering placement, fastening, finishing, and inspection.
The choice of material is deliberate. All3 builds in structural timber composites, a renewable material that stores CO₂ rather than emitting it during production. Concrete, by contrast, accounts for around 7–8% of global carbon emissions.
The company claims its approach enables cost savings of up to 30%, timeline reductions of up to 50%, and up to 25% less embodied carbon compared to conventional construction, numbers drawn from its own modelling and marketing materials, which have not been independently audited at this stage.
The funding will go primarily towards advancing R&D in London and Belgrade and deploying the robot fleet across All3’s first commercial projects in Germany, the company’s initial launch market. A first building is due to break ground there later in 2026.
Germany is a well-chosen launch market, both for the narrative and operationally. As TNW has reported on Europe’s construction crisis, the continent faces a chronic shortage of both homes and construction workers. Germany’s situation is particularly acute.
All3 has already processed over 100,000 square metres of residential projects through its AI design platform, forming the basis for its construction pipeline in Germany for 2026–2027. The company says this represents genuine customer engagement rather than internal modelling, though it has not disclosed the number or identity of the developers involved.
Yet, All3 is not the first European startup to bet that robotics can unlock construction’s missing productivity revolution. As we previously covered, Monumental, a Dutch startup, has built a working bricklaying robot that has already completed commercial projects in the Netherlands and raised $25 million at the seed stage. Spain’s 011h has taken a software-first prefab approach. All3’s differentiating claim is the degree of vertical integration: not just automating one part of construction, but redesigning the entire process around what robots can do.
That ambition is also its principal risk. Replacing a fragmented industry’s entire value chain requires not just working technology but regulatory approval, developer buy-in across multiple jurisdictions, and the ability to deploy a fleet of legged robots on active construction sites in dense urban environments, one of the most unpredictable physical settings imaginable. Shishkov’s background in industrial robotics and logistics gives him credibility in the manufacturing dimension; the construction site dimension is where the hard engineering problems remain unsolved for most players in the space.
Europe’s appetite for exactly this kind of deep physical AI is, nonetheless, growing. RTP Global partner de Jong framed the investment in terms that capture the broader investor narrative: “Europe needs its own physical AI champions.” Whether All3 becomes one of them will be determined not by what it claims to be able to build, but by what it actually delivers in Germany this year.
Meta’s superintelligence team is working on a new set of AI agents that are meant to help the company’s users “achieve the diverse goals in their lives,” according to CEO Mark Zuckerberg. Zuckerberg, who is reportedly also overseeing an AI clone of himself, said that his goal is for the agents to be more approachable and easier to use than existing agent products like OpenClaw.
Speaking during Meta’s first-quarter earnings call Wednesday, Zuckerberg said that the upcoming business and personal agents will build on the newly released Muse Spark model, the first from Meta Superintelligence Labs (MSL).
“Our goal is not just to deliver Meta AI as an assistant, but to deliver agents that can understand your goals and then work day and night to help you achieve them,” he said. “We are building a personal agent focused on helping people achieve the diverse goals in their lives. We’re also building a business agent focused on helping entrepreneurs and businesses across the world use our tools and others to grow their efforts, reach new customers and serve existing customers better.”
Zuckerberg didn’t give a timeline for the new products, but he said that Meta’s goal was to make agents more accessible than current platforms. OpenClaw, he said, offers “a very exciting glimpse of what types of things should be possible” but is “pretty rough” to set up.
“There’s a lot of agents out there that people are building for different things, and there aren’t that many that I would want to give to my mother,” he said. “How do you make a version of that experience that is a lot more polished and dialed and easy, and that has all the infrastructure basically done for people already.”
An anonymous reader quotes a report from the San Francisco Chronicle: Elon Musk returned to the witness stand Wednesday in Oakland federal court for a second day of testimony in his case against OpenAI, detailing his shift from being an enthusiastic supporter of the nonprofit to feeling betrayed. He also clashed repeatedly with OpenAI’s attorney over questions that Musk believed were unfair. He said his feelings towards OpenAI CEO Sam Altman and President Greg Brockman moved through a “phase one” of support, a “phase two” of doubts, and finally “phase three, where I’m sure they’re looting the nonprofit. We’re currently in phase three,” Musk said with a chuckle. Musk said he was a “fool” for giving OpenAI “$38 million of essentially free funding to create what would become an $800 billion company,” of which he has no equity stake.
In his 2024 lawsuit, Musk alleged breach of charitable trust and unjust enrichment, arguing OpenAI abandoned its original nonprofit mission to benefit humanity to pursue financial gain. OpenAI’s lawyer William Savitt argued Tuesday during his opening statement that the nonprofit entity remains in control of the for-profit public benefit corporation and is now one of the most well-funded nonprofits in the world. Musk is seeking to oust Altman from OpenAI’s board and upwards of $134 billion in damages, which he said would be used to fund OpenAI’s nonprofit mission. During cross-examination, Savitt clashed with Musk over questioning. Savitt asked whether Musk had contributed $38 million to OpenAI, rather than the $100 million that he later claimed to have invested on X. Musk said he also contributed his reputation to the company and came up with the idea for the name, leading Savitt to ask Musk to respond yes or no to “simple” questions.
“Your questions are not simple. They’re designed to trick me, essentially,” Musk said, adding that he had to elaborate or it would mislead the jury. He compared Savitt’s questions to asking, “have you stopped beating your wife?” Judge Yvonne Gonzalez Rogers intervened, leading Musk to answer yes to the $38 million investment amount.

The world’s richest man said his doubts grew and by late 2022, he thought “wait a second, these guys are betraying their promise. They’re breaking the deal.” “I started to lose confidence that they were telling me the truth,” Musk said. A turning point was co-defendant Microsoft’s investment of billions of dollars into OpenAI, Musk said. On October 23, 2022, Musk texted Altman that he was “disturbed” to see OpenAI’s valuation of $20 billion in the wake of the Microsoft deal. Musk called the deal a “bait and switch,” since a nonprofit doesn’t have a valuation. OpenAI had “for all intents and purposes” become primarily a for-profit company, Musk argued.

Altman responded to Musk by text that “I agree this feels bad,” saying that OpenAI had previously offered equity in the company but Musk hadn’t wanted it at the time. Altman said the company was happy to offer equity in the future. Musk said it “didn’t seem to make sense to me” to hold equity in what should be a nonprofit.

Musk also testified about former OpenAI board member Shivon Zilis, who lives with him, is the mother of four of his children, and served as a senior advisor at Neuralink. He denied that she shared sensitive OpenAI information with him. Court evidence showed Musk had encouraged her to stay close to OpenAI to “keep info flowing” and had approved Neuralink recruiting OpenAI employees, which he defended by saying workers are free to change jobs. “It’s a free country,” Musk said.
You’re all sick of me saying we need to have more nuance in the discussion about AI use in the gaming industry. I get it. I’m also not going to stop. And I hope you will have noticed that I have called for nuance in both directions. While I’m more optimistic than many in our community that there is a place for this technology in the industry, and that it could actually have some net positive effects therein, I’m also not blind to the potential negative consequences. Concerns about industry jobs are a very real thing. A desire to protect the artistic intent of game makers is a worthy enterprise. Quality of output is paramount.
That’s why I’ve been repeating over and over again that we should be talking about how AI will be used in games, not if. The “if” question has already been answered in the affirmative, at least for some portion of the industry. Now we need to build very real guardrails around the “how.”
Fallout co-creator Tim Cain says a world where AI generates games, TV shows, and even doctor’s appointments is inevitable, and he’s even “looking forward” to that future.
In arguably the veteran game developer’s saddest “fun Friday” video ever, Cain envisions a world in which dead MMOs come back to life with AI-generated players mimicking real-life personalities, where generative AI makes Joey from Friends a lawyer instead of a struggling actor, and where you take vacations in VR. Yes, really.
He goes way, way beyond even that. He talks at some length about using AI to create more episodes of retired shows that people still hunger for. As a massive fan of Firefly, I can’t tell you how ecstatic I’ve been these past several weeks with Nathan Fillion’s announcement that the show would be coming back in an animated form to build on the story that was infamously canceled by Fox after only one season. If that announcement had instead been made by the rightsholder and said the new episodes would be created out of whole cloth using AI, and that they would be customizable and tailored to my desires, my reaction would have been horror, not excitement.
AI needs to be a tool on the perimeter, not the creative force itself. I don’t want the pen telling me the story of Odysseus; I want the writer to use the pen to do so. And if the pen turns into a typewriter, which then turns into a word processor, that all works. There is still a human being telling the story.
Even Cain’s remarks tailored specifically for the gaming industry ring super hollow.
Cain goes on to say this will be especially handy for MMO players, in particular those who miss being able to play games that aren’t active anymore. “Have an AI make a local server,” he proposes. “Great, now you can play it again. Oh, it’s empty? Fill it with AI players. Have it watch videos of people who have played that game and just fill it up with players, and it mimics their personalities.”
Look, Cain is a veteran of the industry who was instrumental to one of the most beloved video game IPs of all time, but with all due respect, the idea of playing Ultima Online with AI-generated players designed to mimic the personalities of my friends who I used to play with… is genuinely one of the grimmest, most dire, dystopian realities I can possibly fathom. Likewise, my heart sinks at the thought of playing AI-generated stories with AI-generated characters that I can change however I want. That sounds like it would entirely rob a game, or any work of art, of its artistic intent. But alas, Cain reckons this is all inevitable, so get ready.
This is what the AI detractors are worried about. And when you hear an industry veteran speak so glowingly about gamers operating within these soulless arenas designed merely to mimic the authentic fun that these games produced, it’s easy to understand the concern. This isn’t helpful. Pretending to not understand that the very fucking point of MMOs is to play with other human beings in a single realm, not ginned-up robots pretending to be human, is incredibly frustrating.
And Cain, oddly enough, seems completely unconcerned with artistic intent at all. There is no reason why his example of requesting changes to a TV show wouldn’t translate into a video game. And if people can just customize games not through mods, but through fundamental changes driven by AI requests, then there is no game anymore. There is merely a shell of a game where the player is then free to remix it to extents that transform the intent of the maker completely.
I had to search around a lot to see if Cain was being sarcastic or staging an over-the-top bit of AI evangelism purely to make a point. Everything I have seen and read indicates that’s not what this was. And, again, that makes all of this very unhelpful if you want to get into real discussions about where this technology should be used and where it shouldn’t.