PayPal is notifying customers of a data breach after a software error in a loan application exposed their sensitive personal information, including Social Security numbers, for nearly 6 months last year.
The incident affected the PayPal Working Capital (PPWC) loan app, which provides small businesses with quick access to financing.
PayPal discovered the breach on December 12, 2025, and determined that customers’ names, email addresses, phone numbers, business addresses, Social Security numbers, and dates of birth had been exposed since July 1, 2025.
The financial technology company said it has reversed the code change that caused the incident, cutting off unauthorized access to the data one day after discovering the breach.
“On December 12, 2025, PayPal identified that due to an error in its PayPal Working Capital (“PPWC”) loan application, the PII of a small number of customers was exposed to unauthorized individuals during the timeframe of July 1, 2025 to December 13, 2025,” PayPal said in breach notification letters sent to affected users.
“PayPal has since rolled back the code change responsible for this error, which potentially exposed the PII. We have not delayed this notification as a result of any law enforcement investigation.”
PayPal also detected unauthorized transactions on the accounts of a small number of customers as a direct result of the incident and has issued refunds to those affected.
The company now offers affected users two years of free three-bureau credit monitoring and identity restoration services through Equifax, which require enrollment by June 30, 2026.
Affected customers are also advised to monitor their credit reports and their account activity for suspicious transactions. PayPal reminded users that it never requests account passwords, one-time codes, or other authentication credentials via phone, text, or email, a common tactic used in phishing attacks that often follow data breach disclosures.
While PayPal has yet to disclose how many customers were affected, it has reset passwords for all impacted accounts and said that users will be prompted to create new credentials upon their next login if they have not already done so.
BleepingComputer reached out to a PayPal spokesperson with questions about the incident, but a response was not immediately available.
In January 2023, PayPal notified customers of another data breach after a large-scale credential stuffing attack compromised 35,000 accounts between December 6 and December 8, 2022.
Two years later, in January 2025, New York State announced a $2,000,000 settlement with PayPal over charges that it failed to comply with the state’s cybersecurity regulations, leading to the 2022 data breach.
Cable TV providers have spent the past decade losing tens of millions of households to streaming services, but companies like Charter Communications are now slowing that exodus by bundling the very apps that once threatened to replace them.
Charter added 44,000 net video subscribers in the fourth quarter of 2025, its first growth in that count since 2020, after integrating Disney+, Hulu, and ESPN+ directly into Spectrum cable packages — a deal that grew out of a contentious 2023 contract dispute with Disney. Comcast and Optimum still lost subscribers in the quarter, though both saw those losses narrow.
Charter’s Q4 numbers also got a lift from a 15-day Disney channel blackout on YouTube TV during football season, which drove more than 14,000 subscribers to Spectrum. Charter has been discounting aggressively — video revenue fell 10% year over year despite the subscriber gains. Cox Communications launched its first streaming-inclusive cable bundles last month, and Dish Network has yet to integrate streaming apps into its packages at all.
Advantest Corporation disclosed that its corporate network has been targeted in a ransomware attack that may have affected customer or employee data.
Preliminary investigation results revealed that an intruder gained access to certain parts of the company’s network on February 15.
Tokyo-based Advantest is a global leader in testing equipment for semiconductors, measuring instruments, digital consumer products, and wireless communications equipment.
The company employs 7,600 people, generates annual revenue of more than $5 billion, and has a market capitalization of $120 billion.
On February 15, the firm detected unusual activity in its IT environment, prompting a response in accordance with incident response protocols, including the isolation of affected systems.
As part of its response, the company contracted third-party cybersecurity specialists to help isolate the threat and investigate its impact.
“Preliminary findings appear to indicate that an unauthorized third party may have gained access to portions of the company’s network and deployed ransomware,” Advantest states.
“If our investigation determines that customer or employee data was affected, we will notify impacted persons directly and provide guidance on protective measures.”
Currently, no data theft has been confirmed, but Advantest noted that this may change as more information emerges from the ongoing investigation.
Should customers or staff be determined to be impacted, Advantest will notify them directly and provide instructions on mitigating the associated risks.
At the time of writing, no ransomware groups have claimed the attack on the Japanese tech giant.
BleepingComputer has contacted Advantest directly to request more details about the attack, but we had not heard back by publishing time.
Multiple Japanese companies have been the target of cyberattacks recently, as several high-profile entities suffered data breaches and operational disruptions. Notable examples include Washington Hotel, Nissan, Muji, Asahi, and NTT.
Advantest says that the investigation continues and that it will provide updates on the incident when new details emerge.
Cybernews found misconfigured database in “Video AI Art Generator & Maker” app
Leak exposed 8.27m media files, including 2m private user photos and videos
Developers secured database after disclosure; similar flaws seen in another Codeway app
Yet another misconfigured database leaking sensitive user data has been found, and this one is even more worrying because the exposed data is user-uploaded photos and videos.
Researchers from Cybernews recently discovered that an Android app called “Video AI Art Generator & Maker” relied on a misconfigured Google Cloud storage bucket that was accessible to anyone who knew where to look.
In total, more than 1.5 million user images, and more than 385,000 videos were stored in the bucket. Furthermore, it stored more than 2.87 million AI-generated videos, more than 386,000 AI-generated audio files, and more than 2.87 million AI-generated images.
Several vulnerable apps
The app offered AI-generated makeovers for photos and videos – something that is particularly popular these days. It was launched in mid-June 2023 and, according to Cybernews, has been storing the multimedia people upload since then.
So, in total, the exposed bucket contained 8.27 million media files, 2 million of which were private, user-uploaded content.
The app was allegedly developed by Codeway Dijital Hizmetler Anonim Sirketi, a private company registered in Turkey, but we could not find this app on the Play Store or on the developer’s official website. Codeway’s dedicated Play Store page shows only three apps.
However, the official website does showcase a different app, called Chat & Ask AI, and this one also had a misconfigured backend using Google Firebase. In early February 2026, an independent researcher found that this app, one of the most popular ones in its category, exposed 300 million messages tied to 25 million users.
Still, Cybernews said it managed to get in touch with the developers, who secured the Video AI Art Generator & Maker database soon after.
“This data leak shows how some AI apps prioritize fast product delivery, skipping crucial security features, such as enabling authentication for the critical cloud storage bucket used to store user data, including images and videos,” the researchers explained.
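For context on what “enabling authentication for the critical cloud storage bucket” involves, here is a minimal sketch of how a team might audit a Google Cloud Storage bucket for public access using the google-cloud-storage Python client. The bucket name is a hypothetical placeholder, and this is an illustrative check rather than a description of Codeway’s actual setup.

```python
# Minimal sketch: audit a Google Cloud Storage bucket for public access.
# Assumes the google-cloud-storage client library and application-default
# credentials; the bucket name below is a placeholder, not the real one.
from google.cloud import storage

PUBLIC_MEMBERS = {"allUsers", "allAuthenticatedUsers"}

def find_public_bindings(bucket_name: str) -> list[str]:
    client = storage.Client()
    bucket = client.get_bucket(bucket_name)
    policy = bucket.get_iam_policy(requested_policy_version=3)
    findings = []
    for binding in policy.bindings:
        exposed = PUBLIC_MEMBERS & set(binding["members"])
        if exposed:
            findings.append(f'{binding["role"]} granted to {", ".join(sorted(exposed))}')
    return findings

if __name__ == "__main__":
    for finding in find_public_bindings("example-app-user-uploads"):
        print("Public access:", finding)
```

If this kind of check returns any bindings for allUsers or allAuthenticatedUsers, the bucket contents are readable by anyone on the internet, which is the class of exposure the researchers describe.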
We’ve been rigorously testing work-from-home gear for years—even prior to the Covid-19 remote work boom—and that includes dozens of office chairs and desks. Branch has made standout furniture that we highlight in our guides over and over again. Its Presidents’ Day deals have been extended, bringing some of the better discounts we’ve seen on essentials we’ve tested, like chairs and desks.
Check out our other deals coverage for additional discounts on gear we’ve tried and would recommend to a friend.
Branch Ergonomic Chair Pro for $449 ($50 off)
Photograph: Julian Chokkattu
This price matches the best we usually see for our very favorite office chair. Out of the dozens we’ve tried, this chair strikes the best balance of features for the price. It’s comfortable, adjustable, and easy to dial in so you can get your perfect ergonomic fit. It also has a solid warranty and isn’t too terribly expensive compared to similar chairs. There are different fabric finishes and colors to choose from, all of which are on sale right now.
Branch Ergonomic Chair for $323 ($36 off)
The best budget office chair is even more affordable right now thanks to this deal. It’s easy to assemble, it has some adjustable elements, it’s comfortable and breathable, and it looks nice with or without the optional headrest. The upholstery is available in several colors, though the fabric does pill and attract pet hair. We still think this is a chair worth checking out if you’re on a tight budget.
Branch Four Leg Standing Desk for $854 ($95 off)
Photograph: Julian Chokkattu
This is editor Julian Chokkattu’s favorite desk he’s tried. At first glance, it looks like a standard desk, but it’s actually a standing desk that can be raised or lowered with the little control panel. Assembly was easy, the controls are simple, and the shape is elegant. If you want a desk that looks great no matter how tall it is, this is worth checking out, especially at this price.
Branch Duo Standing Desk for $494 ($55 off)
Photograph: Julian Chokkattu
We like this compact, affordable standing desk, which gets you a lot of value for how little you’ll pay. It’s compatible with a lot of add-ons, and the paddle controls are easy to use. There’s even a preset mode so you can press the paddle twice to raise it to your preset height. This desk is compact, but if you don’t need a ton of room for your working setup, it’s a good option even at full-price. (Luckily, right now, you can snag it for less.)
Matt Oppenheimer led Remitly for nearly 15 years as co-founder and CEO. He announced this week that he’s moving into the chairman role. (Remitly Photo)
Build with intentionality. Lead with authenticity. Prioritize customers over your ego. And focus on the problem you’re solving — with flexibility on the solution.
That’s part of the playbook Matt Oppenheimer followed as he helped grow a three-person Techstars Seattle startup into one of the world’s leading remittance platforms.
After nearly 15 years leading Remitly as CEO, Oppenheimer announced Wednesday that he’s stepping down as CEO and moving to a board chair role. He’s passing the baton to Sebastian Gunningham, a longtime tech and finance leader who previously led Amazon’s marketplace and payments businesses.
“I feel wonderful, honestly,” Oppenheimer told GeekWire on Thursday. “One thing that has always driven me from the moment I started the business 15 years ago is impact and purpose and doing things with a sense of intentionality. And I feel like that’s how we’ve done this succession planning.”
The Remitly story began more than a decade ago after Oppenheimer had just returned from Kenya, where he was working for Barclays and realized how hard it was for families to send and receive money overseas.
He teamed up with co-founders Josh Hug and Shivaas Gulati, navigating an early pivot before landing product-market fit and raising around $400 million. The company went public in 2021 at a valuation of nearly $7 billion.
Remitly’s mobile technology lets people send and receive money across borders, eliminating many of the forms, codes, and in-person agents traditionally associated with international transfers. It’s used by more than 9 million people. The company reported revenue of $442.2 million in Q4, up 26% year-over-year, and had its first full year of GAAP profitability in 2025.
We spoke to Oppenheimer about lessons learned from Remitly’s journey and his advice for entrepreneurs. Here are some key takeaways.
Fall in love with the problem, not your product
Oppenheimer remembers the frustration he saw and felt watching families struggle to send money across borders. That sparked the idea for Remitly. The key, he says, was locking onto that problem — not any one product idea. The danger is when founders apply their grit in the wrong place.
“If they channel that perseverance in the wrong area — the product or trying to force something into existence that customers don’t care about — they fail,” he said. “They run out of time, energy, or money.”
Remitly’s booth at Southcenter Mall served as a key customer feedback mechanism. (Remitly Photo)
Get close to customers
In the early days, Remitly set up a booth at Southcenter Mall near Seattle outside a legacy remittance location, complete with scotch-taped signage.
Oppenheimer referenced a phrase from Airbnb co-founder Brian Chesky: “find marketing channels that don’t scale.” The goal wasn’t growth, but rather insight.
They learned why customers weren’t using Remitly. That feedback drove a big pivot from mobile wallets to cash pickup, bank deposit, and door-to-door delivery.
“We had to follow customers,” Oppenheimer said. He added: “If we would have been stubborn about only doing mobile wallets — that’s what our pitch said — then we would have failed.”
Define culture as behaviors — and keep rewriting it
Oppenheimer said many companies stop at a short list of vague values. “Culture is how people in a company or institution interact to deliver for their customers,” he said.
Before Remitly launched its product, the founding team did an offsite to define the culture on a whiteboard. Early values like “relationships” were well-intentioned but too broad. Remitly refreshed its values every six months at first, and now every couple of years, evolving them into more specific behaviors such as “be a compassionate partner,” “lead authentically,” and “constructively direct.”
Customer centricity sits at the top as the single overarching value. Oppenheimer said the test is whether values show up in concrete decision-making: “Once you’ve got it defined, [you embed it] into the interview process and the performance review process.”
Remitly co-founders Josh Hug, Matt Oppenheimer and Shivaas Gulati. (Remitly Photo)
Find complementary co-founders
Oppenheimer said Remitly wouldn’t exist without his co-founders, pointing to Hug’s product skills and Gulati’s engineering chops.
“It’s important for all founders to surround themselves with complementary skills and respect those skills deeply,” he said.
In the very early days, his own contribution was often clearing obstacles: money transmission licenses, office leases, even taking out the trash. “My job was to help them build,” he said. Oppenheimer stressed the importance of shared values but different strengths.
Raise more capital than you think you need
Remitly raised hundreds of millions of dollars across multiple rounds on its way to an IPO. None of them were easy.
“It requires getting a lot of no’s,” Oppenheimer said. “It requires that grit, tenacity and perseverance that is critical for any entrepreneur to be successful.”
He advised treating fundraising as a two-way conversation, not a one-sided pitch. “Investors can sniff desperation,” he said. Make sure investors are asking the right questions, and think about whether you want them on your board.
When the partner is right, Oppenheimer leans toward raising a bit more. “Things always take a little bit longer than you imagine,” he said. For companies pursuing bold visions, “if you’ve got the right partner, you can raise enough capital, then it’s worth the dilution to be able to make progress against accomplishing that vision.”
Oppenheimer on his first trip to the Philippines as Remitly’s CEO. (Remitly Photo)
Treat your own growth like a product, with reviews and roadmaps
As he focused more on management, Oppenheimer built a formal process for his own development as CEO, especially as Remitly grew from a handful of people to more than 3,000.
He started asking each new investor who joined Remitly’s board to run his performance review. “I’d like you to talk to all other board members. I’d like you to talk to my leadership team,” he’d tell them. “And then I’d like your insights.”
He turned that input into a written development plan, shared it with the company, and then found coaches and mentors to help him work on specific gaps. “It took a lot of intentionality to grow as a leader,” he said.
That work continues in his new role as chairman. “After mission and purpose, my second biggest motivator for me personally is growing as a human,” he said. “That’s what I’ve loved about the journey, and it continues in this next role.”
Don’t underestimate the role of community
Seattle is a huge part of Remitly’s story. Techstars Seattle helped launch Remitly (back when it was called Beamit Mobile); talent from the region’s tech ecosystem helped scale it.
“The talent we’ve been able to recruit from some of the largest technology companies has been foundational,” Oppenheimer said. With fewer growth-stage companies in the city than in some other hubs, he believes Remitly could attract people looking to join a mission-driven startup with scale ambitions.
Last year the company moved into a new headquarters in downtown Seattle. Oppenheimer said he and Remitly remain committed to Seattle, noting that he wants to make sure “that’s the case for the next decade to come.”
Global business leaders and heads of state flocked to New Delhi this week to attend India’s AI Impact Summit 2026, which just concluded today (20 February).
The summit is the fourth in an annual series that began in the UK in 2023, before moving to Korea and then France last year.
India has huge aspirations to become a leader in AI – and overall, the billions in investments announced from Big Tech leaders this past week could mean that the fast-growing economy might be on the right track.
But as The New York Times puts it: “India brims with tech talent but not the companies that command it.” While slow to develop the technology itself, India offers a very large AI user base (100m of ChatGPT’s weekly users are from India alone) and a huge underemployed workforce.
Outside the summit, however, disorganisation was rife. Blocked roads forced delegates to walk kilometres to reach the venue and wait in long queues. Meanwhile, dozens of New Delhi’s poorest accused state leaders of forcibly displacing them from their makeshift homes to ‘beautify’ the streets for the incoming international guests.
Even as that went on, AI leaders including OpenAI, Microsoft and Nvidia made some big announcements.
Microsoft’s $50bn ‘global south’ pledge
Microsoft said it is on its way to invest $50bn in the “global south” – a term used to refer to the world’s developing countries. Research from the company finds that AI usage in the “global north” is roughly twice that of the “global south”. The company said that the investment will be used to help increase AI usage in the region.
The company, co-founded by Bill Gates, wants to build the infrastructure needed for better AI diffusion in the country, help develop multilingual and multicultural AI capabilities, enable local AI innovation, and measure AI diffusion more accurately to guide future policies and investments in the tech.
Last year, the company announced plans for around $17.5bn worth of AI investments in India.
Adani adds $100bn to AI data centre pot
Billionaire business mogul Gautam Adani’s company, the Adani Group, announced direct investments of $100bn to create the “world’s largest integrated data centre platform” in India.
The new investment builds on AdaniConnex’s existing 2GW of national data centre capacity, expanding it to a 5GW target. These AI data centres will be powered with renewable energy, the company claimed.
Its plans for the mega data centre come alongside existing partnerships with Google and Microsoft. The company said it is discussing plans for more large-scale campuses in India with other major players.
According to the Adani Group, the $100bn investment is expected to generate an additional $150bn across manufacturing, advanced electrical infrastructure and sovereign cloud platforms in the country by 2035. In total, the group projects a $250bn AI infrastructure ecosystem in India over the decade.
Telco leader drops $110bn for compute
Indian telecommunications giant Reliance Industries and Jio – its digital business – announced 10trn rupees (around $110bn) in new investments to build AI computing infrastructure in the country.
Owner Mukesh Ambani said that the investment would fund what he described as India’s sovereign compute infrastructure, which would include multi-gigawatt-scale data centres, a nationwide edge computing network and new AI services integrated with Jio.
“This is not speculative investment, this is patient capital to build India,” Ambani said at the summit in New Delhi.
OpenAI becomes a TCS customer
Tata Consultancy Services (TCS) announced that OpenAI will become the first customer of its recently announced data centre business, Hypervault, with an initial commitment of 100MW of AI capacity. The capacity, it said, can eventually be scaled up to 1GW.
The project is a part of OpenAI’s Stargate venture, a $500bn privately-funded initiative to build AI data centres across the globe.
Alongside infrastructure, the partnership will also deploy ChatGPT Enterprise across parent company Tata Group’s other subsidiaries over the next several years.
“Through OpenAI for India and our partnership with the Tata Group, we’re working together to build the infrastructure, skills, and local partnerships needed to build AI with India, for India, and in India, so that more people across the country can access and benefit from it,” OpenAI CEO Sam Altman claimed.
L&T teams up with Nvidia
Larsen & Toubro (L&T), on the other hand, claimed to be building India’s “largest gigawatt-scale AI factory” using Nvidia’s AI infrastructure, which includes its GPUs, CPUs, networking and accelerated storage platforms.
The venture will scale Nvidia GPU clusters at the company’s data centres in Chennai and Mumbai.
“AI is driving the largest infrastructure buildout in human history – everyone will use it, every company will be powered by it and every country will build it,” said Nvidia CEO Jensen Huang.
Its venture with L&T is “enabling AI factories at national scale”, he added, “ready to serve global and domestic AI demand”.
CreationDose’s Alessandro La Rosa discusses AI and the creator economy, and why a balance between automation and humanity is needed to preserve authenticity.
The rise of social media and influencers into the mainstream – and the subsequent monetisation that eventually emerged out of the field – has made the ‘creator economy’ an extremely buoyant market in the last decade.
As the creator economy continues to grow, one company is working on using artificial intelligence (AI) to help content creators and brands manage collaboration strategies and campaigns.
CreationDose, a media-tech company based in Sicily, Italy, has developed an AI-powered platform called Vidoser, which aims to help manage the collaboration life cycle between influencers and brands, assisting in tasks such as content production and marketing campaigns.
“I’ve always had a deep passion for communication and for the ways people express themselves through the media,” says founder and CEO Alessandro La Rosa. “When I realised that creators were redefining the language of brands, I decided to build a platform that put them at the centre.
“That’s how Vidoser was born – with the mission to unite technology, creativity and new generations.”
Risk and trust
‘Make sure your ambition is always greater than your fears.’
According to La Rosa, this is the best piece of career advice he has ever received, as it pushed him to “never stop in front of risk, to believe in my projects and to build something concrete even when conditions seemed impossible”.
La Rosa took this advice to heart when founding CreationDose in 2018 and launching Vidoser in 2019, which he describes as the biggest risks he has ever taken, because at the time, talking about the creator economy “still sounded almost utopian”.
“We started with a small team and a big vision in an environment where the start-up ecosystem was still underdeveloped,” he says. “Today, I can say it was the biggest – and most rewarding – risk of my life.”
As CreationDose’s CEO, La Rosa leads the company’s strategic direction, overseeing product development, revenue growth and partnerships. His main focus is defining the company’s long-term trajectory, ensuring technological innovation remains at the core of its culture, and aligning all business units toward common objectives.
As a business leader, La Rosa says he believes in giving trust and responsibility to his team.
“I try to build an environment where people feel part of the vision and can express themselves freely. I focus more on results than on hours worked, promoting a culture of listening and continuous growth. When people understand that their contribution has real impact, they give their best.”
Balanced automation
The advent of advanced AI technology has sparked concern in multiple industries – especially creative industries such as art and entertainment.
In the aftermath of the rise of generative AI, kicked off by OpenAI’s ChatGPT, professionals from creative industries – ranging from film, TV and literature to music and video games – have voiced worry about the technology encroaching on their sectors and work without restraint.
La Rosa recognises the concern that creatives may have about the technology, and emphasises that a balance is needed between automation and the human element in the process.
“It’s natural that people feel apprehensive about AI, especially in creative fields where personal identity carries great value,” he says.
“AI should be used as a creative partner, not a replacement. It can help improve quality, analyse creator performance, suggest optimisations, review content or speed up editing – but the final decisions should always remain in human hands.
“Transparency in the use of AI, data protection and respect for the intellectual property of creators are essential principles.”
La Rosa says that the speed at which the creator economy is evolving means that one of the biggest challenges is maintaining that balance between automation and humanity.
“On one hand, artificial intelligence allows us to scale and optimise content production; on the other, it’s essential to preserve the authenticity of creators and support the people behind this industry,” he says.
“The main advantages are the ability to analyse millions of data points, predict trends and optimise campaigns in real time. The downside is the risk of losing authenticity if everything becomes too automated.”
La Rosa believes that the right balance comes from combining AI with human sensitivity. “Data can guide decisions, but the relationship between brand and creator must remain deeply human.
“I believe AI represents an extraordinary opportunity to free up time, enhance productivity and make tools accessible that were once available only to a few,” he says. “The difference will always depend on how it’s used: as a lever to elevate human ingenuity, not to replace it.”
Sam Altman claims ChatGPT’s adult mode will ‘be able to safely relax the restrictions’ of the chatbot, but firing a critic of the plan is a reason to be wary
OpenAI is about to give ChatGPT an adults-only option. At almost the same moment, the company has parted ways in disputed fashion with one of the executives responsible for deciding how far the system should be allowed to go, as first reported by The Wall Street Journal. OpenAI CEO Sam Altman’s promise of a responsible, safe adult mode for ChatGPT is now at risk of looking hollow.
Ryan Beiermeister led product policy at OpenAI, shaping the rules and enforcement mechanisms governing ChatGPT’s behavior, at least until last month. The timing is notable, as the WSJ says her departure came soon after she raised concerns about the adult mode plans.
OpenAI says her departure was unrelated to any objections she voiced and was instead tied to an allegation of discrimination that she strongly denies. She has called the claim “absolutely false,” but the timing is difficult to ignore.
Adult Mode was first teased by Altman in October and should debut soon. The idea is to allow verified adults to generate AI erotica and engage in explicit conversations. Altman framed the shift as part of a broader effort to make ChatGPT more flexible and less sanitized.
“We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues. We realize this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right,” Altman said at the time. “Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
According to the report, Beiermeister warned colleagues that the company’s mechanisms for blocking child exploitation content were not strong enough and that preventing teenage users from accessing adult material would be far harder than executives seemed to believe. Even if her departure from OpenAI has nothing to do with the warning, it’s something guaranteed to raise eyebrows among those already worried about sex online.
The adult internet has always existed, and it has always been lucrative. That fact sits in the background of this story. Companies that want growth eventually confront the gravitational pull of sexual content. It drives engagement. It keeps users logged in. It fuels subscriptions. OpenAI is not immune to those incentives.
What makes this moment different is the nature of the product. ChatGPT is interactive, adaptive, and capable of responding to a user’s emotional cues. It can tailor fantasies in real time. The shift from passive consumption to personalized simulation changes the stakes.
Adulting AI
Altman’s argument rests on the idea that maturity has arrived. Early versions of ChatGPT were deliberately restrictive. The system often refused to engage even in mild romantic roleplay. Many users complained that it felt stiff and overly cautious.
The premise now is that better safety systems, improved monitoring, and more robust age verification make expansion possible. Verified adults, in this view, should be treated like adults.
That principle sounds reasonable. Adults routinely access erotic content online. If a chatbot can generate a steamy short story for a consenting adult, why should that be treated differently from a romance novel on a bookstore shelf?
But ChatGPT is not a niche adult app. It is a general-purpose assistant used in offices, classrooms, and homes. It drafts emails, explains homework, helps with coding, and offers companionship to people who feel isolated.
Beiermeister’s reported worry about child exploitation and teenage access speaks to a familiar weakness in digital safeguards. Teenagers often bypass restrictions on social platforms with ease, while identity checks can be spoofed.
OpenAI would likely argue that refusing to offer adult content does not prevent its existence. Competitors already do. Elon Musk’s xAI launched Ani, a flirtatious anime-styled AI companion, and the market has shown an appetite for AI companions that blur the line between conversation and seduction.
Yet xAI’s recent experience, when its Grok AI chatbot was reportedly used to generate sexualized deepfakes without consent, has shown the dangers of swimming in these waters. UK regulators opened investigations into whether adequate safeguards were built into the system’s design, and the company rushed to impose new restrictions on editing images of real people into revealing clothing.
OpenAI may not stumble in the same way, but once this kind of explicit capability exists, it can be repurposed in ways designers did not anticipate or cannot fully control.
Maturity missing
The reported firing of Beiermeister makes things seem unsavory in other ways. Though OpenAI insists her termination had nothing to do with her policy objections, the fact that there’s any debate on it isn’t ideal for the company. When a senior leader responsible for crafting and enforcing safety rules exits amid a policy dispute, observers draw connections.
Still, ChatGPT’s adult mode might be implemented thoughtfully, with clear boundaries and strong enforcement. All of the current concerns might evaporate. Sexuality is not inherently harmful, and adults are capable of making choices about what they consume.
But there are already plenty of stories of people falling in love with their version of a ChatGPT personality. Adding sexual content to that equation is unlikely to cool those attachments.
The market pressure to expand into adult content is obvious. But there is, or at least should be, a moral calculus alongside the market logic. ChatGPT has become an infrastructure for millions of people. Decisions about its evolution carry social weight.
If the firing of Ryan Beiermeister has nothing to do with her objections, OpenAI has an opportunity to make that clear and to show that policy debates remain robust inside its walls. If it cannot, the suspicion will linger that growth has taken priority over caution.
When a company loosens its guardrails, the world watches to see who is still holding the map. In this case, one of the people tasked with drawing the boundaries is no longer in the room, and without that essential disagreement, any decision is likely to come off as imperfect at best.
OpenAI wants to treat adults like adults. That aspiration should include treating internal critics like indispensable partners. Otherwise, adult mode won’t be adult in the most important way: keeping things safe for kids.
A recent Amazon Web Services (AWS) outage that lasted 13 hours was reportedly caused by one of its own AI tools. The incident happened in December after engineers deployed the Kiro AI coding tool to make certain changes, according to four people familiar with the matter.
Kiro is an agentic tool, meaning it can take autonomous actions on behalf of users. In this case, the bot reportedly determined that it needed to “delete and recreate the environment.” This is what allegedly led to the lengthy outage that primarily impacted China.
Amazon says it was merely a “coincidence that AI tools were involved” and that “the same issue could occur with any developer tool or manual action.” The company blamed the outage on “user error, not AI error.” It said that by default the Kiro tool “requests authorization before taking any action” but that the staffer involved in the December incident had “broader permissions than expected — a user access control issue, not an AI autonomy issue.”
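Amazon’s framing, that Kiro asks for authorization but the user’s permissions were scoped too broadly, is essentially an access-control question. The sketch below is a hypothetical illustration of that pattern (a permission check plus an approval gate for destructive actions); it does not describe how Kiro is actually implemented.

```python
# Hypothetical sketch of a confirmation gate for an agentic coding tool:
# destructive actions require explicit approval, and the user's effective
# permission set is checked before execution. Illustrative only; this is
# not how Kiro is actually built.
DESTRUCTIVE_ACTIONS = {"delete_environment", "recreate_environment", "drop_database"}

def execute_agent_action(action: str, user_permissions: set[str], approved: bool) -> str:
    if action not in user_permissions:
        return f"denied: user lacks permission for '{action}'"
    if action in DESTRUCTIVE_ACTIONS and not approved:
        return f"blocked: '{action}' is destructive and requires explicit approval"
    return f"executing '{action}'"

# A user whose role was scoped more broadly than expected still hits the
# approval gate before anything destructive runs.
print(execute_agent_action("delete_environment",
                           user_permissions={"delete_environment", "read_logs"},
                           approved=False))
```

In this toy model, the outage scenario corresponds to the permission check passing because the account was over-privileged, which is why Amazon calls it a user access control issue rather than an AI autonomy issue.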
Multiple Amazon employees spoke to the Financial Times and noted that this was “at least” the second occasion in recent months in which the company’s AI tools were at the center of a service disruption. “The outages were small but entirely foreseeable,” said one senior AWS employee.
The company has been pushing internal adoption of Kiro. Leadership set an 80 percent weekly use goal and has been closely tracking adoption rates. Amazon also sells access to the agentic tool for a monthly subscription fee.
These recent outages follow a more serious event from October, in which a major AWS outage took down services like Alexa, Snapchat, Fortnite and Venmo, among others.
Late last year, Google briefly took the crown for most powerful AI model in the world with the launch of Gemini 3 Pro — only to be surpassed within weeks by OpenAI and Anthropic releasing new models, as is common in the fiercely competitive AI race.
Now Google is back to retake the throne with an updated version of that flagship model: Gemini 3.1 Pro, positioned as a smarter baseline for tasks where a simple response is insufficient—targeting science, research, and engineering workflows that demand deep planning and synthesis.
The most significant advancement in Gemini 3.1 Pro lies in its performance on rigorous logic benchmarks. Most notably, the model achieved a verified score of 77.1% on ARC-AGI-2.
This specific benchmark is designed to evaluate a model’s ability to solve entirely new logic patterns it has not encountered during training.
This result represents more than double the reasoning performance of the previous Gemini 3 Pro model.
Google Gemini 3.1 Pro benchmark chart. Credit: Google
Beyond abstract logic, internal benchmarks indicate that 3.1 Pro is highly competitive across specialized domains:
Scientific Knowledge: It scored 94.3% on GPQA Diamond.
Coding: It reached an Elo of 2887 on LiveCodeBench Pro and scored 80.6% on SWE-Bench Verified.
Multimodal Understanding: It achieved 92.6% on MMMLU.
These technical gains are not just incremental; they represent a refinement in how the model handles “thinking” tokens and long-horizon tasks, providing a more reliable foundation for developers building autonomous agents.
Improved vibe coding and 3D synthesis
Google is demonstrating the model’s utility through “intelligence applied”—shifting the focus from chat interfaces to functional outputs.
One of the most prominent features is the model’s ability to generate “vibe-coded” animated SVGs directly from text prompts. Because these are code-based rather than pixel-based, they remain scalable and maintain tiny file sizes compared to traditional video, while delivering detailed, presentable and professional visuals for websites, presentations and other enterprise applications.
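To illustrate why a code-based animation stays so small, here is a toy example (not Gemini output) that writes a few hundred bytes of SMIL-animated SVG from Python; the shapes, colors and timing are arbitrary.

```python
# Minimal sketch of why a "vibe-coded" animated SVG stays tiny: the whole
# animation is a few hundred bytes of markup rather than pixel data. The
# shape and timing below are arbitrary examples, not Gemini output.
svg = """<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 200 100">
  <circle cx="20" cy="50" r="12" fill="#4285f4">
    <animate attributeName="cx" values="20;180;20" dur="3s" repeatCount="indefinite"/>
  </circle>
</svg>"""

with open("loader.svg", "w", encoding="utf-8") as f:
    f.write(svg)

print(f"{len(svg.encode('utf-8'))} bytes")  # a few hundred bytes, and it scales to any resolution
```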
Other showcased applications include:
Complex System Synthesis: The model successfully configured a public telemetry stream to build a live aerospace dashboard visualizing the International Space Station’s orbit.
Interactive Design: In one demo, 3.1 Pro coded a complex 3D starling murmuration that users can manipulate via hand-tracking, accompanied by a generative audio score.
Creative Coding: The model translated the atmospheric themes of Emily Brontë’s Wuthering Heights into a functional, modern web design, demonstrating an ability to reason through tone and style rather than just literal text.
Business impact and community reactions
Enterprise partners have already begun integrating the preview version of 3.1 Pro, reporting noticeable improvements in reliability and efficiency.
Vladislav Tankov, Director of AI at JetBrains, noted a 15% quality improvement over previous versions, stating the model is “stronger, faster… and more efficient, requiring fewer output tokens”. Other industry reactions include:
Databricks: CTO Hanlin Tang reported that the model achieved “best-in-class results” on OfficeQA, a benchmark for grounded reasoning across tabular and unstructured data.
Cartwheel: Co-founder Andrew Carr highlighted the model’s “substantially improved understanding of 3D transformations,” noting it resolved long-standing rotation order bugs in 3D animation pipelines.
Hostinger Horizons: Head of Product Dainius Kavoliunas observed that the model understands the “vibe” behind a prompt, translating intent into style-accurate code for non-developers.
Pricing, licensing, and availability
For developers, the most striking aspect of the 3.1 Pro release is the “reasoning-to-dollar” ratio. When Gemini 3 Pro launched, it was positioned in the mid-high price range at $2.00 per million input tokens for standard prompts. Gemini 3.1 Pro maintains this exact pricing structure, effectively offering a massive performance upgrade at no additional cost to API users.
Input Price: $2.00 per 1M tokens for prompts up to 200k; $4.00 per 1M tokens for prompts over 200k.
Output Price: $12.00 per 1M tokens for prompts up to 200k; $18.00 per 1M tokens for prompts over 200k.
Context Caching: Billed at $0.20 to $0.40 per 1M tokens depending on prompt size, plus a storage fee of $4.50 per 1M tokens per hour.
Search Grounding: 5,000 prompts per month are free, followed by a charge of $14 per 1,000 search queries.
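As a rough, illustrative reading of the tiers above, the sketch below estimates the cost of a single API call; the token counts are invented for the example, and caching, grounding and storage fees are ignored.

```python
# Illustrative cost estimate for a single Gemini 3.1 Pro API call under the
# per-token tiers quoted above. Token counts here are made up for the
# example; caching, grounding and storage fees are ignored.
def call_cost(input_tokens: int, output_tokens: int) -> float:
    long_context = input_tokens > 200_000           # tier boundary from the price list
    input_rate = 4.00 if long_context else 2.00     # USD per 1M input tokens
    output_rate = 18.00 if long_context else 12.00  # USD per 1M output tokens
    return (input_tokens / 1e6) * input_rate + (output_tokens / 1e6) * output_rate

# Example: a 120k-token prompt with an 8k-token response.
print(f"${call_cost(120_000, 8_000):.4f}")  # 120k * $2/M + 8k * $12/M = $0.336
```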
For consumers, the model is rolling out in the Gemini app and NotebookLM with higher limits for Google AI Pro and Ultra subscribers.
Licensing implications
As a proprietary model offered through Vertex AI Studio in Google Cloud and the Gemini API, 3.1 Pro follows a standard commercial SaaS (Software as a Service) model rather than an open-source license.
For enterprise users, this provides “grounded reasoning” within the security perimeter of Vertex AI, allowing businesses to operate on their own data with confidence.
The “Preview” status allows Google to refine the model’s safety and performance before general availability, a common practice in high-stakes AI deployment.
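For developers experimenting with the preview through the Gemini API, a call might look like the following sketch using Google’s google-genai Python SDK; the model identifier is a guess at the preview name and should be checked against Google’s current model list before use.

```python
# Hedged sketch of calling the model through the Gemini API with Google's
# google-genai Python SDK. The model ID below is a hypothetical guess at the
# preview name; confirm the actual identifier in Google's documentation.
from google import genai

client = genai.Client()  # expects an API key in the environment (e.g. GEMINI_API_KEY)

response = client.models.generate_content(
    model="gemini-3.1-pro-preview",  # hypothetical preview model ID
    contents="Outline a verification plan for a long-horizon research task.",
)
print(response.text)
```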
By doubling down on core reasoning and specialized benchmarks like ARC-AGI-2, Google is signaling that the next phase of the AI race will be won by models that can think through a problem, not just predict the next word.