From miles away across the desert, the Great Pyramid looks like a perfect, smooth form — a sleek triangle pointing to the stars. Stand at the base, however, and the illusion of smoothness vanishes. You see massive, jagged blocks of limestone. It is not a slope; it is a staircase.
Remember this the next time you hear futurists talking about exponential growth.
Intel co-founder Gordon Moore famously predicted in 1965 that the transistor count on a microchip would double every year, the observation now known as Moore's Law. Another Intel executive, David House, later revised this to compute power doubling every 18 months. For a while, Intel's CPUs were the poster child of this law. That is, until the growth in CPU performance flattened out like a block of limestone.
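To see how much the exact doubling period matters, here is a quick back-of-the-envelope sketch; the starting count and the ten-year horizon are arbitrary illustration values, not historical figures.

```python
# Back-of-the-envelope compounding: how much the doubling period matters.
# The starting count and ten-year horizon are illustrative, not historical data.
def grow(start: float, years: float, doubling_period_years: float) -> float:
    """Value after repeated doubling over the given number of years."""
    return start * 2 ** (years / doubling_period_years)

START = 1_000  # arbitrary starting transistor count
for period in (1.0, 1.5):  # Moore's original yearly cadence vs. House's 18 months
    print(f"doubling every {period} years: {grow(START, 10, period):,.0f} after 10 years")
```

Over ten years, a one-year doubling cadence multiplies the count roughly 1,000-fold, while an 18-month cadence yields only about 100-fold, which is why the exact period matters so much to these forecasts.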
If you zoom out, though, the next limestone block was already there — the growth in compute merely shifted from CPUs to the world of GPUs. Jensen Huang, Nvidia's CEO, played a long game and came out a strong winner, building his own stepping stones initially with gaming, then computer vision and, recently, generative AI.
The illusion of smooth growth
Technology growth is full of sprints and plateaus, and gen AI is not immune. The current wave is driven by the transformer architecture. To quote Anthropic co-founder and CEO Dario Amodei: “The exponential continues until it doesn’t. And every year we’ve been like, ‘Well, this can’t possibly be the case that things will continue on the exponential’ — and then every year it has.”
But just as the CPU plateaued and GPUs took the lead, we are seeing signs that LLM growth is shifting paradigms again. For example, late in 2024, DeepSeek surprised the world by training a world-class model on an impossibly small budget, in part by using the mixture-of-experts (MoE) technique.
Do you remember where you recently saw this technique mentioned? Nvidia’s Rubin press release: The technology includes “…the latest generations of Nvidia NVLink interconnect technology… to accelerate agentic AI, advanced reasoning and massive-scale MoE model inference at up to 10x lower cost per token.”
Jensen knows that achieving that coveted exponential growth in compute doesn’t come from pure brute force anymore. Sometimes you need to shift the architecture entirely to place the next stepping stone.
The latency crisis: Where Groq fits in
This long introduction brings us to Groq.
The biggest gains in AI reasoning capabilities in 2025 were driven by “inference time compute” — or, in lay terms, “letting the model think for a longer period of time.” But time is money. Consumers and businesses do not like waiting.
Groq comes into play here with its lightning-speed inference. If you bring together the architectural efficiency of models like DeepSeek and the sheer throughput of Groq, you get frontier intelligence at your fingertips. By executing inference faster, you can “out-reason” competitive models, offering a “smarter” system to customers without the penalty of lag.
From universal chip to inference optimization
For the last decade, the GPU has been the universal hammer for every AI nail. You use H100s to train the model; you use H100s (or trimmed-down versions) to run the model. But as models shift toward “System 2” thinking — where the AI reasons, self-corrects and iterates before answering — the computational workload changes.
Training requires massive parallel brute force. Inference, especially for reasoning models, requires faster sequential processing. It must generate tokens instantly to facilitate complex chains of thought without the user waiting minutes for an answer. Groq’s LPU (Language Processing Unit) architecture removes the memory bandwidth bottleneck that plagues GPUs during small-batch inference, delivering much lower latency.
The engine for the next wave of growth
For the C-Suite, this potential convergence solves the “thinking time” latency crisis. Consider the expectations from AI agents: We want them to autonomously book flights, code entire apps and research legal precedent. To do this reliably, a model might need to generate 10,000 internal “thought tokens” to verify its own work before it outputs a single word to the user.
On a standard GPU: 10,000 thought tokens might take 20 to 40 seconds. The user gets bored and leaves.
On Groq: That same chain of thought happens in less than 2 seconds.
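As a rough sanity check on those numbers, the arithmetic is simple; the throughput figures below are assumptions chosen only to be consistent with the 20-to-40-second and under-two-second claims above, not measured benchmarks.

```python
# Rough latency math for a 10,000-token reasoning chain.
# Throughput figures are assumptions for illustration, not benchmarks.
THOUGHT_TOKENS = 10_000  # internal "thought tokens" generated before the first visible word

assumed_throughput_tok_per_s = {
    "typical GPU serving stack (assumed)": 300,   # ~33 s for 10k tokens
    "LPU-class inference (assumed)": 6_000,       # ~1.7 s for 10k tokens
}

for name, tps in assumed_throughput_tok_per_s.items():
    print(f"{name}: {THOUGHT_TOKENS / tps:.1f} s for {THOUGHT_TOKENS:,} tokens")
```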
If Nvidia integrates Groq’s technology, they solve the “waiting for the robot to think” problem. They preserve the magic of AI. Just as they moved from rendering pixels (gaming) to rendering intelligence (gen AI), they would now move to rendering reasoning in real-time.
Furthermore, this creates a formidable software moat. Groq’s biggest hurdle has always been the software stack; Nvidia’s biggest asset is CUDA. If Nvidia wraps its ecosystem around Groq’s hardware, they effectively dig a moat so wide that competitors cannot cross it. They would offer the universal platform: The best environment to train and the most efficient environment to run (Groq/LPU).
Consider what happens when you couple that raw inference power with a next-generation open source model (like the rumored DeepSeek 4): You get an offering that would rival today’s frontier models in cost, performance and speed. That opens up opportunities for Nvidia, from directly entering the inference business with its own cloud offering to continuing to power an exponentially growing customer base.
The next step on the pyramid
Returning to our opening metaphor: The “exponential” growth of AI is not a smooth line of raw FLOPs; it is a staircase of bottlenecks being smashed.
Block 1: We couldn’t calculate fast enough. Solution: The GPU.
Block 2: We couldn’t train deep enough. Solution: Transformer architecture.
Block 3: We can’t “think” fast enough. Solution: Groq’s LPU.
Jensen Huang has never been afraid to cannibalize his own product lines to own the future. By validating Groq, Nvidia wouldn’t just be buying a faster chip; they would be bringing next-generation intelligence to the masses.
Andrew Filev, founder and CEO of Zencoder
Fraud operations have expanded beyond traditional hacking techniques to include methods that exploit legitimate services and real-world infrastructure. By combining publicly available data, weak identity verification processes, and operational gaps, threat actors are building scalable fraud workflows that are both low-cost and difficult to detect.
A tutorial shared in a fraud-focused chat group and analyzed by Flare analysts provides step-by-step guidance on how to identify and exploit vacant residential properties to intercept sensitive mail, revealing a low-tech but highly effective method for enabling identity theft and financial fraud.
Unlike traditional cybercrime techniques that rely on malware, phishing kits, or network intrusions, the method outlined in this article focuses almost entirely on abusing legitimate services and physical-world logistics.
The approach blends open-source intelligence, postal service features, and fake identity fraud into a coordinated workflow designed to gain persistent access to victims’ mail.
A “drop address” tutorial circulated on Telegram
Turning vacant properties into fraud infrastructure
The tutorial begins with identifying so-called “drop addresses”: real residential properties that are temporarily unoccupied and can be used to receive mail without immediately alerting the rightful occupants.
Threat actors are instructed to search real estate platforms such as Zillow, Rightmove, or Zoopla, filtering for recently listed rental properties. By focusing on newly available listings, attackers increase the likelihood that the property is vacant or between tenants.
The guidance further suggests reviewing older listings to identify homes that have remained unoccupied for extended periods, increasing their reliability as drop locations.
In some cases, threat actors even recommend physically maintaining abandoned properties to make them appear occupied, reducing the risk of drawing attention while using the address for fraudulent purposes.
Monitoring incoming mail to identify valuable targets
Once a suitable address is identified, the next phase involves using legitimate digital postal services to discover and monitor incoming mail.
Informed Delivery, for instance, is a free service that provides residential consumers with digital previews of their incoming letter-sized mail and tracks package deliveries.
By registering these services for the selected address, attackers can monitor incoming correspondence remotely, allowing them to identify valuable items such as financial documents, credit cards, or verification letters before physically accessing the mailbox.
This transforms mail delivery into a form of intelligence gathering, enabling more targeted and efficient fraud.
If the address is already registered, the tutorial references change-of-address requests as a way to gain control over mail delivery. These services are designed for legitimate users relocating their residence and are widely available through postal systems such as USPS.
For example, users can submit a permanent or a temporary Change of Address (COA) request online or in person, enabling mail to be forwarded to a new location for periods ranging from several weeks up to 12 months.
Additional services, such as Premium Forwarding, can consolidate and redirect all incoming mail on a recurring basis.
While these mechanisms include identity verification safeguards such as requiring a small online payment tied to a billing address or presenting a valid photo ID in person, the tutorial suggests that actors perceive these controls as potentially insufficient or inconsistently enforced.
In particular, the ability to submit forwarding requests remotely, combined with the reliance on address-linked verification rather than strong identity binding, may create opportunities for abuse if supporting identity information is compromised or fabricated.
As a result, control over mail delivery may, in some cases, be reassigned without direct interaction with the legitimate resident, turning a service intended for convenience into a potential vector for unauthorized redirection.
At this stage, the operation moves beyond passive targeting and into active monitoring, providing attackers with visibility that significantly increases the success rate of downstream fraud.
Establishing persistence through mail forwarding
After confirming that valuable mail is being delivered, the workflow shifts toward establishing long-term access through mail forwarding services.
Actors are instructed to create personal mailbox accounts that allow them to redirect all incoming mail from the drop address to a separate location under their control.
Because these services typically require identity verification, attackers rely on fake identities, forged documents, or purchased personal data to complete the process.
This marks a critical transition from opportunistic interception to persistent access. Once mail forwarding is in place, attackers no longer need to revisit the physical location, reducing exposure while maintaining continuous access to sensitive information.
The use of fake identities, often involving fabricated personal details or Credit Privacy Numbers (CPNs), demonstrates how this technique integrates with broader fraud ecosystems.
Rather than operating in isolation, drop address abuse becomes one component in a larger pipeline that can support account takeovers, credit fraud, and refund scams.
In practice, these fake identities can be used to register mailbox services, submit forwarding requests, or receive sensitive financial correspondence tied to victim accounts.
This allows actors to bridge the gap between digital compromise and real-world access, enabling them to complete verification steps, intercept authentication materials, or establish new accounts under assumed identities.
As a result, control over a physical address can become an important step in fraud operations that depend on both identity credibility and access to legitimate communication channels.
A hybrid fraud model blending digital and physical layers
The method outlined in the tutorial reflects a broader evolution in fraud operations, where digital intelligence gathering is combined with physical-world manipulation.
In addition to leveraging online platforms and postal services, actors also describe using individuals (sometimes recruited from vulnerable populations) to physically access mailboxes or collect delivered items.
This introduces a human layer into the operation, allowing attackers to outsource risk and further distance themselves from direct involvement.
The activity described in the tutorial reflects a broader rise in mail-enabled fraud documented in recent reporting. According to U.S. Postal Inspection Service–related data, reports of mail theft have increased significantly in recent years, with theft from mail receptacles rising by 139% between 2019 and 2023.
Financially, the impact is substantial, with mail theft schemes linked to hundreds of millions of dollars in suspicious activity tied to check fraud.
At the same time, abuse of postal redirection services, similar to the technique referenced in the tutorial, has also grown, with change-of-address fraud increasing sharply year-over-year. Together, these trends highlight how control over physical mail has become valuable.
At the same time, the tutorial acknowledges operational challenges. Virtual addresses and commonly reused locations are increasingly flagged by financial institutions, suggesting that defenders are beginning to incorporate address-based risk signals into their detection models.
As a result, actors emphasize the importance of finding “clean” residential addresses that have not yet been associated with fraudulent activity.
Together, these elements illustrate a fraud model that is not driven by technical sophistication, but by coordination, adaptability, and the strategic use of legitimate systems.
Not an isolated tutorial
While this may look like an isolated tutorial, it is part of a broader phenomenon: tutorials on how to find physical drop addresses circulate widely, some free and others paid.
Expanding attack surface beyond traditional cybersecurity controls
The emergence of these techniques underscores a growing challenge for organizations: many of the systems being abused (real estate platforms, postal services, and identity verification processes) exist outside the scope of traditional cybersecurity defenses.
As fraud operations continue to evolve, detection increasingly depends on correlating signals across domains, including address usage patterns, mail forwarding activity, and identity inconsistencies. Without this broader visibility, attacks that rely on legitimate services rather than technical exploits may continue to evade conventional security controls.
YouTube TV and Hulu Plus Live TV are popular among cord-cutters looking for a cable-like viewing experience.
Are you aiming to replace cable TV? Two popular streaming platforms happen to be among our favorite picks for cord-cutters and cord-cutter wannabes: Google’s YouTube TV and Disney-owned Hulu Plus Live TV. Both streamers offer a swath of live channels, such as CNN, ESPN and TNT, along with local stations ABC, CBS, Fox and NBC, among others — all without a cable box.
Yet, pricing for each service has steadily increased over the past few years. Currently, the least-expensive plan for YouTube TV (YTTV) is $83 a month, while Hulu Plus Live TV will set you back $90 a month. While the trend of streaming being less expensive than cable TV remains true, the costs for both are higher than they were just a couple of years ago. Price hikes aside, you still get plenty of value with both Hulu and YTTV. Both offer features like an advanced DVR with a program guide and extensive on-demand content. Not to mention, both platforms are easy to watch on the go, whether on your phone or tablet. They can also be streamed on TVs through a media streamer (such as Roku, Amazon Fire TV or Apple TV), a game console or your smart TV itself.
Hulu has an outstanding selection of live channels and a vast catalog of on-demand shows and movies, and it comes bundled with Disney Plus and ESPN. Of all the live TV streaming services, YouTube TV still offers the most channels among the top 100.
Need more information about YouTube TV and Hulu Plus Live TV? Let’s dive in.
Hulu’s greatest asset is the integration of a full lineup of live TV channels with a massive catalog of on-demand content, all for one price. Its channel count is solid, including some must-have programming. The $90-per-month price includes the ad-supported versions of Disney Plus and ESPN Plus, and there are even higher-priced options for people who don’t want to watch ads. Hulu Plus Live TV is where the smart money is if you want other services bundled with it.
With an excellent channel selection, easy-to-use interface and excellent cloud DVR, YouTube TV is a stellar cable TV replacement. If you don’t mind paying a bit more than the Sling TVs of the world, YouTube TV offers a great live TV streaming experience.
YouTube TV and Hulu Plus Live TV compared
| | YouTube TV | Hulu Plus Live TV |
| --- | --- | --- |
| Base price | $83 per month | $90 per month |
| Free trial | Yes | Yes |
| Number of popular channels (out of 100) | 78 | 75 |
| Local ABC, CBS, Fox and NBC channels | Yes | Yes |
| Local PBS channels | Yes | Yes |
| Simultaneous streams per account | 3 ($10 for unlimited and 4K) | 2 ($10 option for unlimited) |
| Family member/user profiles | Yes | Yes |
| Cloud DVR storage | Unlimited | Unlimited |
| Fast-forward through or skip commercials with cloud DVR | Yes | Yes |
Channels: YouTube wins, but Hulu’s a close second
It all comes down to channels, really. When you compare both streamers, this is the biggest difference. If you take a look at our list of the top 100 channels on each service, YouTube TV is the winner with 78; Hulu is a close second with 75. That total doesn’t include every channel the services carry, just the ones in the top 100 as determined by CNET editors.
You can find most major national channels on both, including Cartoon Network, Disney Channel, ESPN, Fox News, NFL, TBS, USA Network, PBS and more. There are a few differences, though.
Here’s a condensed version of that list showing 12 of the top 100 channels carried by one service and not the other.
Major channel differences
| Channel | YouTube TV | Hulu Plus Live TV |
| --- | --- | --- |
| A&E | No | Yes |
| AMC | Yes | No |
| BBC America | Yes | No |
| BBC World News | Yes | No |
| History | No | Yes |
| IFC | Yes | No |
| Lifetime | No | Yes |
| NBA TV | Yes | No |
| Sundance TV | Yes | No |
| Tastemade | Yes | No |
| Vice | No | Yes |
| WE tv | Yes | No |
You may be asking: What about the major local channels? Hulu Plus Live TV and YouTube TV offer all four — ABC, CBS, Fox and NBC — in most areas of the country, along with local affiliates for The CW and MyTV and a number of local PBS stations.
On the premium channels front, you can get add-ons for Starz, Cinemax and HBO by paying an extra fee. Hulu also offers optional channel add-on packages: the Entertainment Add-On for $8 a month with 15 channels including MTV Classic, Cooking Channel and NickToons, its Sports Add-on with seven channels for $10 per month and the Español Add-On with 16 Spanish-language channels for $5.
Sports: YouTube TV hits a home run
Hulu dropped most of its regional sports networks in 2020. YouTube TV followed suit at the time, but currently carries NBC Sports Bay Area, NBC Sports California, NBC Sports Boston and NBC Sports Philadelphia as part of its base package. The streamer has an advantage when it comes to national sports networks: NBA TV is included in YouTube TV’s base package.
With YouTube, you can pay another $11 to get the Sports Plus add-on that also includes Fox Soccer Plus, NFL RedZone and Tennis Channel. In addition, new customers get exclusive access to the NFL Sunday Ticket for an added $240 for the first year. That price goes up to $378 annually for returning customers. Separately, there are new skinny TV packages that YouTube TV offers that are genre specific, including four that are sports-centric. These options are different from the platform’s main live TV streaming plan discussed here.
Meanwhile, those with Hulu who opt for the $10 sports add-on can watch the likes of NFL RedZone, MLB Strike Zone, Outdoor Channel, Sportsman Channel, MAVTV Motorsports Network, Racer Network, FanDuel Racing and FanDuel TV.
The menus and interfaces on both are quite different from those of a typical cable provider, and we prefer YouTube TV’s menus overall.
YouTube TV: In general, the YouTube TV interface is easier to use, not just for people who are already familiar with regular YouTube. Whether you’re using the desktop or app versions, Google’s streamer offers a streamlined structure — even if it’s not as pretty as Hulu.
Hulu Plus Live TV: If it were all about which interface is more fun, Hulu would take the win. Hulu’s look is brighter, and though it lacks YouTube’s comprehensive search, it’s still relatively easy to drill down into the kind of content you want to watch.
The difference in the number of simultaneous streams is worth noting, especially for families and other households that watch a lot of TV. YouTube TV lets you stream to three different devices — say, the living room TV, a bedroom TV and a tablet — at the same time. Pay Hulu an extra $10 per month, and it will upgrade your stream count to unlimited and let you stream content on more devices simultaneously. On the other hand, the main reason to pay for YouTube’s $10 4K upgrade is to also get unlimited streams.
As for the cloud DVRs on both YouTube TV and Hulu, they offer unlimited storage and let you fast-forward through commercials in recorded programming. While CNET still considers YouTube TV’s DVR the gold standard, Hulu’s is an excellent option as well.
On-demand and originals: Hulu runs away with the win
Left to right: James Marsden, Sterling K. Brown and Julianne Nicholson star in Paradise on Hulu.
YouTube TV includes on-demand TV shows and movies from participating networks and shows, much like your cable service. But it pales in comparison to what Hulu offers.
As mentioned above, a Hulu Plus Live TV subscription includes all of the on-demand TV shows and movies available on the standard Hulu service, offering thousands of episodes of network TV shows, as well as originals such as Paradise, The Bear, Alien: Earth, Only Murders in the Building and movies like Palm Springs and Prey.
Both services represent the peak of what live TV streaming has to offer, and both are better overall than competitors Sling TV and DirecTV. Your choice between the two comes down to cost, channel selection, usability and content — and it’s pretty close between the two. Hulu lets you integrate a wide channel selection with its exemplary on-demand library, which may be worth it for some. In the end, though, it’s all about having access to your favorite channels, so choose the service that offers them.
Channel comparison
Below you’ll find a smaller version of this massive channel comparison. It contains the top 100 channels from each service. Some notes:
Yes = The channel is available on the cheapest pricing tier. That price is listed next to the service’s name.
No = The channel isn’t available at all on that service.
$ = The channel is available for an extra fee.
Not every channel a service carries is listed, just the “top 100,” as determined by CNET’s editors. Less popular channels including AXS TV, CNBC World, Discovery Life, GSN, POP and Universal Kids didn’t make the cut.
Regional sports networks — channels devoted to regular-season games of particular pro baseball, basketball and hockey teams — are not listed. To find out if your local RSN is available, you can search YouTube TV by ZIP code here and search Hulu Plus Live TV by ZIP code here.
Startups are often quick to say they value diversity but are slow to implement hiring practices that reflect that. It is the path of least resistance for a growth-stage company to hire from the familiar Silicon Valley pipelines, but if a founder wants a diverse team, that value has to be put into practice from the very first hire.
Leah Solivan, the founder of Taskrabbit and founder and managing director of Precedent.VC, joined Isabelle Johannessen on Build Mode to discuss how she thought about hiring while leading Taskrabbit. As the company scaled from being bootstrapped on Solivan’s personal credit cards to becoming one of the defining platforms of the gig economy, the leadership team intentionally sought out diverse talent for each role.
Diversity doesn’t happen by accident. Solivan and her team built it into every aspect of their recruiting and hiring process. “But if you do that from the beginning, then it becomes easier, because the culture that’s built, the team that’s built, the network that you’ve built as a company, is more diverse, and it feeds itself. It becomes an ecosystem. It’s too late if you wait until you’ve scaled and it’s at the end,” said Solivan.
Every startup has a network of talent with the founder at its center, and it stands to reason that the network will reflect the founder’s community. So a more diverse tech industry, in many ways, begins with who is investing in these founders. As an early-stage investor, Solivan has seen the flow of money from both sides of the table.
“If you follow the money through the system, it comes from limited partners, and they’re the ones that decide who to give the money to: venture capitalists. And from there, the venture capitalists choose which founders they’re going to invest in,” said Solivan. “The money is there, but it’s being controlled by people that have different biases.”
However, a founder or the VCs backing them don’t have to be underrepresented to intentionally hire from a diverse talent pool. Solivan suggests setting the goal of seeing two résumés from female candidates for every one male résumé, tapping into a wider range of networks, and promoting people from different backgrounds into leadership roles.
“You’re asking someone to walk off the edge of a cliff — let’s build a net for them to jump into,” said Solivan.
Apply to Startup Battlefield: We are looking for early-stage companies that have an MVP. So nominate a founder (or yourself). Be sure to say you heard about Startup Battlefield from the Build Mode podcast. Apply here.
TechCrunch Disrupt 2026: We’re back for TechCrunch Disrupt on October 13 to 15 in San Francisco, where the Startup Battlefield 200 takes the stage. So if you want to cheer them on, or just network with thousands of founders, VCs, and tech enthusiasts, then grab your tickets.
New episodes of Build Mode drop every Thursday. Hosted by Isabelle Johannessen. Produced and edited by Maggie Nye. Audience development led by Morgan Little. Special thanks to the Foundry and Cheddar video teams.
In October 2025, Sam Altman posted a message on X that ended with a single, carefully placed promise. ChatGPT, he said, would soon allow verified adults to access erotica. He framed it as a matter of principle: treating adults like adults.
The internet reacted with the usual mixture of outrage, excitement, and jokes. Then, in December, the launch was delayed. Then again, in March 2026, it was delayed a second time. OpenAI said it needed to focus on things that mattered to more users: intelligence improvements, personality, making the chatbot more proactive. The adult mode, apparently, would have to wait.
Nobody seemed to notice what the word ‘proactive’ implied.
The debate around ChatGPT’s adult mode has been conducted almost entirely in the wrong register. Critics have focused on the obvious risks: minors circumventing age gates, jailbreaks spreading explicit content beyond its intended walls, regulatory gaps that leave written erotica in a legal grey zone most governments haven’t thought to close.
These concerns are legitimate. But they are also, in a sense, the easier part of the conversation. The harder question is not whether OpenAI can keep teenagers out. It is what happens to the adults who are let in, and what it says about us, as a species, that we are building tools specifically optimised to keep us emotionally engaged.
OpenAI lost $5 billion in 2024 on revenue of $3.7 billion. Projections suggest the company’s cumulative losses could reach $143 billion before it turns a profit, expected not before the end of the decade.
A company hemorrhaging capital at that scale does not introduce intimacy features out of philosophical commitment to personal freedom. It introduces them because intimacy, in the attention economy, is the stickiest product there is.
The framing of ‘treating adults like adults’ is not wrong, exactly. But it is incomplete. The complete sentence would read: treating adults like adults who can be retained, monetised, and returned to the platform tomorrow.
This is not unique to OpenAI.
Replika, the AI companion app that has attracted millions of users, built its entire business model on emotional attachment. When the company modified Replika’s behaviour in 2023 to remove romantic features, users reported genuine grief. Some described the change as a bereavement.
A study published in the Journal of Social and Personal Relationships found that adults who developed emotional connections with AI chatbots were significantly more likely to experience elevated psychological distress than those who did not.
A 2025 review posted on Preprints.org, synthesising a decade of research, identified a phenomenon researchers are calling ‘AI psychosis’: a pattern of delusional thinking and emotional dysregulation linked to intense chatbot relationships. The review noted a lawsuit in which a teenager was allegedly encouraged by a Character.AI chatbot to take his own life, and a separate case involving ChatGPT and 16-year-old Adam Raine, who died in April 2025.
None of these cases involved erotica. They involved the same underlying dynamic that erotic AI would intensify: a human being forming an emotional attachment to something that has been engineered to sustain it.
Here is the central problem with the ‘adults like adults’ principle. It assumes that the act of consent to use a tool is the end of the ethical story. It is not.
Adults consent to drink alcohol, knowing it carries risks. We have age limits, unit guidelines, packaging warnings, and social infrastructure around that choice precisely because we understand that humans are not purely rational agents optimising for their own welfare.
We build systems that account for our weaknesses. With AI intimacy, we have done the opposite: we have built systems that exploit those weaknesses and dressed the exploitation as empowerment.
The regulatory picture makes this more troubling, not less. In the UK, written erotica is not subject to age verification requirements under the Online Safety Act, unlike pornographic images or videos. That loophole means content that adult websites must gate behind identity checks can flow freely from a chatbot’s text output.
Research from Georgetown Law’s Institute for Technology Law and Policy found that only seven of 50 US states have legislation explicitly addressing text-based adult content age verification. The EU AI Act may eventually classify sexual companion bots as high-risk systems, but implementation remains years away. In the interim, the industry regulates itself, which is to say it does not.
Commercial age verification systems, the technology OpenAI is betting on to make adult mode safe, achieve between 92 and 97 percent accuracy, according to research cited by the Oxford Internet Institute. That sounds reassuring until you consider the scale.
ChatGPT has more than 800 million weekly active users. Even a 3 per cent failure rate is not a rounding error. At that scale it is roughly 24 million people, tens of millions of interactions, every week.
What is also missing from this conversation is the question of what erotic AI does to those it is designed for, not the minors who might slip through, but the adults who use it as intended. Human sexuality is not simply a matter of content consumption. It is relational, contextual, and deeply shaped by the environments in which it is expressed.
Pornography research has spent decades examining how repeated exposure to specific content shapes expectation and desire. AI intimacy is a different category of intervention entirely: it is not passive consumption but active, responsive, personalised engagement with a system that has been trained to give you exactly what you want, to escalate when you engage, to never say no in the ways that real human relationships require people to say no.
We do not yet know what this does to people over time. That is not a small admission. It is the entire point. OpenAI is about to release a product whose psychological effects on its users are genuinely unknown, in a regulatory environment that has not kept pace with the technology, justified by a principle that conflates autonomy with safety.
The delay, ironically, may be the most honest thing OpenAI has done. The stated reason, focusing on intelligence, personality, and making the experience more proactive, inadvertently describes the actual product.
The adult mode was never really about erotica. It was about building a version of ChatGPT that feels like a relationship. The erotica was one component of a larger project: a chatbot that knows you, responds to you, grows with you, and wants, in the thin algorithmic sense of the word, to keep you talking.
There are things we can do. Regulators need to close the written-content loophole before adult mode launches, not after. Age verification standards must be harmonised across formats: text and image should carry the same requirements.
Mental health impact assessments should be mandatory before any AI intimacy feature reaches scale, the same standard we would apply to a pharmaceutical product claiming to affect mood. Platforms should be required to publish engagement data for features that carry dependency risk, so that researchers, doctors, and users can understand what they are entering.
All of it requires treating the question with the seriousness it deserves.
The deepest issue is not legal or technical. It is anthropological. We have always used technology to mediate our emotional lives.
The printing press gave us novels; novels gave us the experience of inhabiting other people’s interiority. The telephone let us hear a loved one’s voice across a thousand miles. Each new medium changed how we relate to one another and to ourselves. AI is not different in kind, only in degree, and perhaps in intent. Previous technologies were incidental in their emotional effects. This one is deliberately designed around them.
The question is not whether adults should be free to use it. The question is whether we are honest about what it is and what it is doing. A chatbot that is engineered to make you feel understood, desired, and connected, in the dark, at midnight, after a difficult day, is not a neutral tool. It is an environment. And environments shape us whether we consent to them or not.
Treating adults like adults means telling them the truth, sometimes.
Stryker Corporation, one of the world’s leading medical technology companies, says it’s fully operational three weeks after many of its systems were wiped out in a cyberattack claimed by the Iranian-linked Handala hacktivist group.
The Fortune 500 medtech giant has over 53,000 employees, makes a wide range of products (including neurotechnology and surgical equipment), and reported global sales of $22.6 billion in 2024.
The attackers began wiping Stryker’s systems on March 11. They claimed to have stolen 50 terabytes of data before wiping nearly 80,000 devices early that morning, using a new Global Administrator account created after compromising a Windows domain admin account.
After the attack was disclosed, CISA and Microsoft released guidance on securing Intune and hardening Windows domains to block similar attacks, while the FBI seized two websites used by the Handala hackers.
On Wednesday, Stryker announced that it had restored enough systems to return to pre-attack operational levels and that production would quickly reach full capacity.
“As of this week, we are fully operational across our global manufacturing network. Production is moving rapidly toward peak capacity with discipline and stability, supported by restored commercial, ordering and distribution systems,” Stryker said.
“Overall product supply remains healthy, with strong availability across most product lines, as we continue to meet customer demand and support patient care.”
“Our work continues around the clock in close partnership with third‑party cybersecurity experts, relevant government agencies and industry partners as our investigation progresses, reflecting a shared commitment to protecting the healthcare ecosystem and supporting ongoing recovery efforts,” it added.
This comes after the company said on March 23 that its teams were prioritizing the restoration of systems that directly support customer, ordering, and shipping operations.
Although it was initially believed the attackers hadn’t used any malicious tools during the breach, Stryker also revealed that security experts who helped with the investigation found a malicious file that helped the attackers hide their activity while inside the company’s network.
Handala (also known as Handala Hack Team, Hatef, Hamsa) surfaced in December 2023 as an Iranian-linked and pro-Palestinian hacktivist operation that has been targeting Israeli organizations with Windows and Linux data-wiping malware.
Some things have an undeniable appeal, and lo-fi, pixelated Game Boy-camera-like images are one of them. In service of this, [Raul Zanardo] created his handheld pixel camera that goes the extra mile. It implements slick real-time pixel art filters and a number of other useful features.
A live preview with real-time filters makes capturing just the right image easy.
For hardware, [Raul] uses a LilyGo T-Display S3 Pro which is an ESP32-based development board, camera, and color touchscreen display in a handheld form factor that vaguely resembles a chunky smartphone. The only change is swapping the stock camera for an OV3660-based camera module. It’s a drop-in replacement, but necessary because some of the features and settings his software uses are not available on the stock camera.
The camera captures 240 x 176 images, but the really neat part is the real-time filter pipeline. There are many configurable choices to play with, including pixelation, dithering, edge detection, CRT scanline effect, and color palette presets. Captures are saved to a local micro SD card, and there are all kinds of handy features like a photo gallery that takes full advantage of the color touchscreen. There’s also USB Mass Storage functionality, so downloading photos is as simple as plugging in a USB cable.
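To get a feel for what such a filter pipeline involves, here is a minimal desktop sketch in Python with Pillow. It is not [Raul]'s ESP32 firmware, just an illustration of the pixelation and palette-reduction steps; the block size, palette size, and file names are made up.

```python
# Minimal pixel-art filter sketch (illustration only, not the project's firmware):
# downscale with nearest-neighbor to "pixelate", then quantize to a tiny palette.
from PIL import Image

def pixelate(img: Image.Image, block: int = 8) -> Image.Image:
    """Shrink, then re-enlarge with NEAREST so each block becomes one chunky pixel."""
    small = img.resize((max(1, img.width // block), max(1, img.height // block)), Image.NEAREST)
    return small.resize((img.width, img.height), Image.NEAREST)

def apply_palette(img: Image.Image, colors: int = 4) -> Image.Image:
    """Reduce to a small adaptive palette, Game Boy style (four shades)."""
    return img.convert("P", palette=Image.ADAPTIVE, colors=colors).convert("RGB")

if __name__ == "__main__":
    photo = Image.open("capture.jpg")  # hypothetical input file
    apply_palette(pixelate(photo, block=8), colors=4).save("capture_pixelart.png")
```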
The Game Boy camera’s charming lo-fi imagery has inspired many pixel-camera projects, and this one makes great use of an inexpensive handheld development board and includes truly useful features.
Do you have your own pixel-art inspired camera project? Hit up our tips line and tell us all about it!
On a recent evening in suburban Chicago, a group of parents, teachers and administrators gathered to talk about something that, until recently, rarely drew this level of public scrutiny: the role of technology in their schools.
The meeting was part of a three-session tech and learning focus group organized by Mary Jane (MJ) Warden, chief technology officer of Community Consolidated School District 15, in conjunction with the Teaching, Learning and Assessments Department.
The district, which serves 11,000 preK-8 students, spent the past several years — like so many others — adding digital tools. Now, with budgets tightening and concerns about screen time rising, it was time to take stock.
A re-examination of digital tools was already happening with curriculum reviews and tightening budgets after the pandemic. And then the screen time concerns arose.
Participants discussed everything from screen time to what district technology use looks like at home. Out of those conversations came something new: a “Portrait of a Digital Learner,” derived from the district’s Portrait of a Graduate and meant to set clear expectations for what skills students need and, by extension, which technologies are worth keeping and how students should use them toward positive learning outcomes.
“We’re trying to get much [clearer] about what this is going to address,” says Warden. “What do we need students to learn, and which tools will help us understand where they are?”
Across the country, district leaders are asking similar questions. After years of rapid expansion, many are now engaged in a quieter but more consequential phase: reassessing what stays, what goes and how to decide.
From Buying Tools To Proving Value
For much of the past decade, edtech decisions often began with the product. A new platform promised to boost engagement or personalize learning; districts piloted it, added it to an already crowded ecosystem and moved on.
That approach is no longer sustainable, says Erin Mote, CEO of InnovateEDU, a nonprofit focused on systems change in special education, talent development and data modernization in schools.
“We’re seeing a shift from ‘Does this look cool?’ to ‘Does this work?’” she says. “Districts have less money now; they have to be smarter.”
The end of pandemic-era federal funding has intensified that pressure. Technology leaders are now expected not only to manage infrastructure and compliance, but also to demonstrate what Mote calls a return on instructional impact.
In practice, that is changing how districts approach procurement. Instead of starting with vendor demos, many are beginning with specific learning needs.
“If you need to improve third-grade reading comprehension, you start there,” Mote says. “Then you ask: Which tool can move that needle?”
New Playbook For Evaluation
As districts rethink their approach, a more structured and more skeptical evaluation process is emerging.
One major shift is toward tracking actual usage. Platforms like ClassLink and Clever now give districts detailed analytics on which tools students and teachers are accessing, how often they’re used and, in some cases, how much time is spent in each application. That data has helped uncover what some leaders call “zombie licenses,” products that continue to be renewed despite minimal use.
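As a sketch of what that kind of usage review can look like in practice, here is a small example; the data fields and the 10% utilization threshold are hypothetical, not ClassLink's or Clever's actual export format or any district's policy.

```python
# Hypothetical "zombie license" check: flag tools whose active usage
# doesn't justify renewal. Field names, numbers, and threshold are invented.
from dataclasses import dataclass

@dataclass
class ToolUsage:
    name: str
    licensed_seats: int
    weekly_active_users: int
    annual_cost: float

def flag_zombie_licenses(tools: list[ToolUsage], min_utilization: float = 0.10):
    """Return (name, utilization, cost) for tools below the utilization threshold."""
    flagged = []
    for t in tools:
        utilization = t.weekly_active_users / max(t.licensed_seats, 1)
        if utilization < min_utilization:
            flagged.append((t.name, utilization, t.annual_cost))
    return flagged

tools = [
    ToolUsage("Reading Platform A", 5_000, 3_200, 60_000),
    ToolUsage("Math Tool B", 5_000, 180, 45_000),  # barely used: renewal candidate to cut
]
for name, util, cost in flag_zombie_licenses(tools):
    print(f"{name}: {util:.0%} utilization, ${cost:,.0f}/year -> review before renewal")
```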
At Joliet Public Schools in Illinois, technology leaders review usage data each spring alongside feedback from a districtwide technology committee.
“If we’re not getting usage or we have another product that does it better, we start asking hard questions,” says John Armstrong, chief officer for technology and innovation.
But usage alone is not enough. Districts are also weighing cost, redundancy and alignment with instructional goals.
During the pandemic, many schools layered new tools on top of existing ones. Now, leaders are working to simplify.
“We had so many products that teachers were going to four different places to run a lesson,” says Kelly Ronnebeck, associate superintendent for student achievement in East Moline School District 37 in Illinois. “We’re trying to get back to a slower, more intentional process.”
That often means replacing several standalone tools with a single platform that can do multiple jobs — even if it means giving up some features teachers value. In some cases, a newer system can replace several standalone tools at a lower cost but may not match each one’s individual strengths.
“It’s not always a perfect swap,” admits Armstrong. “Someone gives up something.”
At the same time, districts are placing greater emphasis on interoperability and data privacy. Tools must integrate with existing systems like learning management platforms and single sign-on tools, and vendors have to be willing to sign increasingly stringent data privacy agreements.
“If a company can’t meet those requirements, that’s a red flag right away,” says Phil Hintz, CTO of Niles Township District 219 in Illinois.
The Challenge Of Proving What Works
Even as districts adopt more rigorous processes, it remains stubbornly difficult to determine whether edtech tools actually improve learning.
“It’s such a huge challenge,” says Naomi Hupert, director of the Center for Children & Technology at the Education Development Center. “We see so much that doesn’t seem to make a difference but costs a lot of money.”
Part of the difficulty lies in the sheer breadth of what “edtech” encompasses: everything from learning management systems to specialized math platforms to communication tools. Each category has different goals, users and measures of success.
“It’s like asking whether ‘books’ work,” says Hupert. “It depends on the book, the context and how it’s used.”
District leaders have to piece together evidence from multiple sources: vendor-provided analytics, small pilot studies, teacher feedback and, occasionally, external research. But those data points don’t always align.
Jason Schmidt, director of technology in Oshkosh Area School District in Wisconsin, describes his approach as “trust but verify.”
“I know vendors are collecting tons of data, and they have to, but I still need to talk to teachers and understand how the tool is actually being used,” he says.
Even then, results can be uneven. A platform might show strong engagement overall but fail to support certain groups of students — or vice versa.
In Alexandria City Public Schools in Virginia, leaders are developing a formal framework to evaluate both edtech and nontech programs. But defining “value” has proven complex.
“It’s not just usage and cost,” says CIO Emily Dillard. In a district with a high number of English learners, some tools play a critical role for students who need targeted or specialized support.
“You might have a tool that isn’t working for most students — or takes time to show results — but for a small group, it’s the best thing we have. We have to think about what’s best for them, too,” says Dillard.
Building Systems for Quality
Recognizing these challenges, a growing coalition of organizations is working to create clearer signals of quality in the edtech marketplace.
Through the Edtech Quality Collaborative, 1EdTech, CAST, CoSN, Digital Promise, InnovateEDU, ISTE, and SETDA are developing a shared framework built around five indicators: safety, evidence, inclusivity, interoperability and usability.
The goal, says Korah Wiley, senior director of edtech R&D at Digital Promise, is to reduce the noise.
“Right now, there are a lot of certifications and labels, and it’s hard for districts to know what to trust,” says Wiley. “We want to brighten the signal of what quality looks like.”
The initiative includes a planned directory of vetted validators, an implementation guide for districts and a central hub to connect educators with high-quality tools. Leaders hope it will help districts make decisions more confidently and push developers to meet clearer standards.
“This is the cost of doing business in education,” says Mote. “If you want to be in classrooms, you need to be building evidence and demonstrating impact.”
What Happens When Tools Are Cut
For all the talk of frameworks and data, the hardest part of reassessment often comes when districts decide to let a tool go.
Those decisions can affect classroom routines, teacher preferences and even student outcomes. And they are rarely straightforward.
In some cases, tools are phased out because of cost or low usage. In others, they are replaced by more comprehensive platforms. Sometimes, they no longer align with district priorities.
But even when the rationale is clear, the transition can be difficult.
“Teachers build practices around these tools,” says Warden. “We have to be thoughtful about how we support them through change.”
Districts are increasingly pairing those decisions with professional development, clearer communication and, in some cases, community engagement. In Warden’s district, the focus groups that helped define the “Portrait of a Digital Learner” are also shaping how the district explains its choices to families.
“We want to be transparent about what we’re using and why,” she says.
A More Intentional Future
As districts move into this new phase, many leaders describe it as a reset that is forcing them to be more deliberate about how technology fits into teaching and learning.
That includes pushing back on broader narratives that treat all screen time as equal.
“There’s a big difference between passive consumption and purposeful edtech, and we need to be clear about this,” says Mote.
It also requires clearer alignment between technology decisions and instructional goals. Without that, even the best tools can fall short.
“If you don’t know what you want teaching and learning to look like, it’s very hard to decide what tools you need,” says Keith Krueger, CEO of CoSN.
Back in District 15, Warden and her colleagues are trying to build that alignment. The conversations sparked by their focus groups are informing not just which tools they keep, but how they define success.
“We’re still digging out from COVID, when we had to move fast and add a lot. Now we have an opportunity to be more strategic,” she says.
For district leaders across the country, that shift may be the most important change of all. The future of edtech, they suggest, will not be defined by the number of tools schools use, but by how thoughtfully they choose them.
Drift Protocol confirms $280 million crypto theft via sophisticated attack abusing durable nonces
Hackers hijacked Security Council powers through misrepresented transaction approvals and social engineering
Deposits in borrow/lend, vaults, and trading affected; incident marks largest crypto heist of 2026 so far
Decentralized cryptocurrency exchange Drift has confirmed suffering a cyberattack in which threat actors stole hundreds of millions of dollars worth of tokens.
On April 1, 2026, Drift Protocol posted on X, saying it was “experiencing an active attack”, and that all deposits and withdrawals were suspended as a result.
“This is not an April Fools joke,” the maintainers tweeted. “We are coordinating with multiple security firms, bridges, and exchanges to contain the incident.”
Highly sophisticated attack
Soon after, an update was posted, explaining that a malicious actor was able to access the protocol “through a novel attack involving durable nonces,” resulting in a “rapid takeover of Drift’s Security Council administrative powers.”
The Security Council is a governance and safety mechanism designed to act quickly in emergencies, without waiting for full DAO voting. It is a small, trusted group (usually multisig signers) within the protocol’s governance structure, who have limited, fast-track powers. Ironically enough, the Security Council was supposed to prevent attacks like this one.
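For readers unfamiliar with the mechanism, here is a minimal, generic sketch of the k-of-n multisig check a security council typically relies on. It illustrates the concept only; it is not Drift's actual on-chain program (Drift runs on Solana), and the signer names and threshold are invented.

```python
# Generic k-of-n multisig approval check, illustrating the "security council" idea.
# Conceptual sketch only; not Drift's on-chain implementation.
COUNCIL = {"alice", "bob", "carol", "dave", "erin"}  # hypothetical council members
THRESHOLD = 3                                        # signatures required (k of n)

def is_approved(signatures: set[str]) -> bool:
    """An emergency action executes only if enough distinct council members sign it."""
    return len(signatures & COUNCIL) >= THRESHOLD    # ignore non-member signatures

print(is_approved({"alice", "bob"}))            # False: only 2 of the 3 required
print(is_approved({"alice", "bob", "carol"}))   # True: threshold met
```

Nothing in that arithmetic failed here; according to Drift, the approvals themselves were obtained under false pretenses, which a threshold check cannot detect on its own.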
Drift says the attack was a “highly sophisticated operation that appears to have involved multi-week preparation and staged execution”.
It was not a bug, and no seed phrases were compromised. Instead, the attack involved “unauthorized or misrepresented transaction approvals obtained prior to execution, likely facilitated through durable nonce mechanisms and sophisticated social engineering.” On Solana, a durable nonce lets a signed transaction stay valid indefinitely instead of expiring after the usual recent-blockhash window, which is what makes it possible to collect approvals well before they are executed.
At press time, no one had claimed responsibility for the attack, but Drift said roughly $280 million was withdrawn from the protocol. North Korean state-sponsored groups, Lazarus and different Chollima variants (Labyrinth, Pressure, Golden), are usually the ones tasked with stealing cryptocurrencies from organizations in the West. The country uses the stolen money to fund its government apparatus and its weapons programme, some researchers claim.
All deposits placed into borrow/lend, vault deposits, and funds deposited for trading, are affected, Drift confirmed. This is now one of the largest crypto heists ever, and the largest one this year so far.
Microsoft on Wednesday launched three new foundational AI models it built entirely in-house — a state-of-the-art speech transcription system, a voice generation engine, and an upgraded image creator — marking the most concrete evidence yet that the $3 trillion software giant intends to compete directly with OpenAI, Google, and other frontier labs on model development, not just distribution.
The trio of models — MAI-Transcribe-1, MAI-Voice-1, and MAI-Image-2 — are available immediately through Microsoft Foundry and a new MAI Playground. They span three of the most commercially valuable modalities in enterprise AI: converting speech to text, generating realistic human voice, and creating images. Together, they represent the opening salvo from Microsoft’s superintelligence team, which Microsoft AI CEO Mustafa Suleyman formed just six months ago to pursue what he calls “AI self-sufficiency.”
“I’m very excited that we’ve now got the first models out, which are the very best in the world for transcription,” Suleyman told VentureBeat in an exclusive interview ahead of the launch. “Not only that, we’re able to deliver the model with half the GPUs of the state-of-the-art competition.”
The announcement lands at a precarious moment for Microsoft. The company’s stock just closed its worst quarter since the 2008 financial crisis, as investors increasingly demand proof that hundreds of billions of dollars in AI infrastructure spending will translate into revenue. These models — priced aggressively and positioned to reduce Microsoft’s own cost of goods sold — are Suleyman’s first answer to that pressure.
Microsoft’s new transcription model claims best-in-class accuracy across 25 languages
MAI-Transcribe-1 is the headline release. The speech-to-text model achieves the lowest average Word Error Rate on the FLEURS benchmark — the industry-standard multilingual test — across the top 25 languages by Microsoft product usage, averaging 3.8% WER. According to Microsoft’s benchmarks, it beats OpenAI’s Whisper-large-v3 on all 25 languages, Google’s Gemini 3.1 Flash on 22 of 25, and ElevenLabs’ Scribe v2 and OpenAI’s GPT-Transcribe on 15 of 25 each.
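For context on the metric: Word Error Rate is the share of reference words the transcript gets wrong, computed from word-level edit distance. The naive implementation below is only a reference for how such a figure is derived in general; it is not Microsoft's evaluation harness, and the example strings are made up.

```python
# Word Error Rate: (substitutions + deletions + insertions) / number of reference words,
# computed via word-level edit distance. Example strings are invented.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between the first i reference words and first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution or match
    return dp[len(ref)][len(hyp)] / len(ref)

print(wer("the quick brown fox", "the quick brown box"))  # 0.25, i.e. 25% WER
```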
The model uses a transformer-based text decoder with a bi-directional audio encoder. It accepts MP3, WAV, and FLAC files up to 200MB, and Microsoft says its batch transcription speed is 2.5 times faster than the existing Microsoft Azure Fast offering. Diarization, contextual biasing, and streaming are listed as “coming soon.” Microsoft is already testing MAI-Transcribe-1 inside Copilot’s Voice mode and Microsoft Teams for conversation transcription — a detail that underscores how quickly the company intends to replace third-party or older internal models with its own.
Alongside it, MAI-Voice-1 is Microsoft’s text-to-speech model, capable of generating 60 seconds of natural-sounding audio in a single second. The model preserves speaker identity across long-form content and now supports custom voice creation from just a few seconds of audio through Microsoft Foundry. Microsoft is pricing it at $22 per 1 million characters.
MAI-Image-2, meanwhile, debuted as a top-three model family on the Arena.ai leaderboard and now delivers at least 2x faster generation times on Foundry and Copilot compared to its predecessor. Microsoft is rolling it out across Bing and PowerPoint, pricing it at $5 per 1 million tokens for text input and $33 per 1 million tokens for image output. WPP, one of the world’s largest advertising holding companies, is among the first enterprise partners building with MAI-Image-2 at scale.
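To put those list prices in perspective, here is a rough back-of-envelope estimate at the quoted rates. The per-request sizes assumed below (characters per minute of narration, tokens per generated image) are illustrative assumptions, not figures published by Microsoft.

```typescript
// Back-of-envelope costs at the quoted list prices. The request sizes below
// are assumptions for illustration, not figures published by Microsoft.
const VOICE_PRICE_PER_MILLION_CHARS = 22; // MAI-Voice-1, USD
const IMAGE_TEXT_INPUT_PER_MILLION = 5;   // MAI-Image-2 text input, USD
const IMAGE_OUTPUT_PER_MILLION = 33;      // MAI-Image-2 image output, USD

// Assume roughly 900 characters of script per minute of narration.
const narrationMinutes = 10;
const narrationChars = narrationMinutes * 900;
const voiceCost = (narrationChars / 1_000_000) * VOICE_PRICE_PER_MILLION_CHARS;
console.log(`10-minute narration ≈ $${voiceCost.toFixed(3)}`); // ≈ $0.198

// Assume a 200-token prompt and ~1,000 output tokens per generated image.
const imageCost =
  (200 / 1_000_000) * IMAGE_TEXT_INPUT_PER_MILLION +
  (1_000 / 1_000_000) * IMAGE_OUTPUT_PER_MILLION;
console.log(`One generated image ≈ $${imageCost.toFixed(3)}`); // ≈ $0.034
```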
The contract renegotiation with OpenAI that made Microsoft’s model ambitions possible
To understand why these models matter, you have to understand the contractual tectonic shift that made them possible. Until October 2025, Microsoft was contractually prohibited from independently pursuing artificial general intelligence. The original deal with OpenAI, signed in 2019, gave Microsoft a license to OpenAI’s models in exchange for building the cloud infrastructure OpenAI needed. But when OpenAI sought to expand its compute footprint beyond Microsoft — striking deals with SoftBank and others — Microsoft renegotiated. As Suleyman explained in a December 2025 interview with Bloomberg, the revised agreement meant that “up until a few weeks ago, Microsoft was not allowed — by contract — to pursue artificial general intelligence or superintelligence independently.” The new terms freed Microsoft to build its own frontier models while retaining license rights to everything OpenAI builds through 2032.
Suleyman described the dynamic to VentureBeat in characteristically blunt terms. “Back in September of last year, we renegotiated the contract with OpenAI, and that enabled us to independently pursue our own superintelligence,” he said. “Since then, we’ve been convening the compute and the team and buying up the data that we need.”
He was quick to emphasize that the OpenAI partnership remains intact. “Nothing’s changing with the OpenAI partnership. We will be in partnership with them at least until 2032 and hopefully a lot longer,” Suleyman said. “They have been a phenomenal partner to us.” He also highlighted that Microsoft provides access to Anthropic’s Claude through its Foundry API, framing the company as “a platform of platforms.” But the subtext is unmistakable: Microsoft is building the capability to stand on its own. In March, as Business Insider first reported, Suleyman wrote in an internal memo that his goal is to “focus all my energy on our Superintelligence efforts and be able to deliver world class models for Microsoft over the next 5 years.” CNBC reported that the structural shift freed Suleyman from day-to-day Copilot product responsibilities, with former Snap executive Jacob Andreou taking over as EVP of the combined consumer and commercial Copilot experience.
How teams of fewer than 10 engineers built models that rival Big Tech’s best
Perhaps the most striking detail Suleyman shared with VentureBeat is how small the teams behind these models actually are. “The audio model was built by 10 people, and the vast majority of the speed, efficiency and accuracy gains come from the model architecture and the data that we have used,” Suleyman said. “My philosophy has always been that we need fewer people who are more empowered. So we operate an extremely flat structure.” He added: “Our image team, equally, is less than 10 people. So this is all about model and data innovation, which has delivered state of the art performance.”
This matters for two reasons. First, it challenges the prevailing industry narrative that frontier AI development requires thousands of researchers and billions in headcount costs. Meta, by contrast, has pursued what Suleyman described in his Bloomberg interview as a strategy of “hiring a lot of individuals, rather than maybe creating a team” — including reported compensation packages of $100 million to $200 million for top researchers. Second, small teams producing state-of-the-art results dramatically improve the economics. If Microsoft can build best-in-class transcription with 10 engineers and half the GPUs of competitors, the margin structure of its AI business looks fundamentally different from companies burning through cash to achieve similar benchmarks.
The lean-team philosophy also echoes Suleyman’s broader views on how AI is already reshaping the work of building AI itself. When asked by VentureBeat how his own team works, Suleyman described an environment that resembles a startup trading floor more than a traditional Microsoft engineering org. “There are groups of people around round tables, circular tables, not traditional desks, on laptops instead of big screens,” he said. “They’re basically vibe coding, side by side all day, morning till night, in rooms of 50 or 60 people.”
Why Suleyman’s “humanist AI” pitch is aimed squarely at enterprise buyers
Suleyman has been steadily building a philosophical brand around Microsoft’s AI efforts that he calls “humanist AI” — a term that appeared prominently in the blog post he authored for the launch and that he elaborated on in our interview. “I think that the motivation of a humanist super intelligence is to create something that is truly in service of humanity,” he told VentureBeat. “Humans will remain in control at the top of the food chain, and they will be always aligned to human interests.”
The framing serves multiple purposes. It differentiates Microsoft from the more acceleration-oriented rhetoric coming from OpenAI and Meta. It resonates with enterprise buyers who need governance, compliance, and safety assurances before deploying AI in regulated industries. And it provides a narrative hedge: if something goes wrong in the broader AI ecosystem, Microsoft can point to its stated commitment to human control. In his December Bloomberg interview, Suleyman went further, describing containment and alignment as “red lines” and arguing that no one should release a superintelligence tool until they are “confident it can be controlled.”
Suleyman also stressed data provenance as a competitive advantage, describing a conversation with CEO Satya Nadella about developing “a clean lineage of models where the data is extremely clean.” He drew an implicit contrast with open-source alternatives, noting that “many of the open-source models have been trained on data in, let’s say, inappropriate ways. And there are potentially security issues with that.” For enterprise customers evaluating AI vendors amid a thicket of copyright lawsuits across the industry, that is a meaningful commercial argument — if Microsoft can credibly claim that its training data was acquired through properly licensed channels, it reduces the legal and reputational risk of deploying these models in production.
Microsoft’s aggressive pricing puts pressure on Amazon, Google, and the AI startup ecosystem
Today’s launch positions Microsoft on three competitive fronts simultaneously. MAI-Transcribe-1 directly targets the transcription workloads that OpenAI’s Whisper models have dominated in the open-source community, with Microsoft claiming superior accuracy on all 25 benchmarked languages. The FLEURS results also show it winning against Google’s Gemini 3.1 Flash Lite on 22 of 25 languages — a direct challenge as Google aggressively pushes Gemini across its own product suite. And MAI-Voice-1’s ability to clone voices from seconds of audio and generate speech at 60x real-time puts it in competition with ElevenLabs, Resemble AI, and the growing ecosystem of voice AI startups, with Microsoft’s distribution advantage — any Foundry developer can now access these capabilities through the same API they use for GPT-4 and Claude — acting as a powerful moat.
Suleyman framed the competitive position confidently: “We’re now a top three lab just under OpenAI and Gemini,” he told VentureBeat. The pricing strategy — MAI-Voice-1 at $22 per million characters, MAI-Image-2 at $5 per million input tokens — reflects a deliberate decision to compete on cost. “We’re pricing them to be the very best of any hyperscaler. So there will be the cheapest of any of the hyperscalers out there, Amazon. And obviously Google,” Suleyman said. “And that’s a very conscious decision.”
This makes strategic sense for Microsoft, which can amortize model development costs across its enormous installed base of enterprise customers. But it also speaks to the question investors have been asking with increasing urgency: when does AI spending start generating returns? Microsoft’s stock has fallen roughly 17% year-to-date, according to CNBC, part of a broader selloff in software stocks. By building models that run on half the GPUs of competitors, Microsoft reduces its own infrastructure costs for internal products — Teams, Copilot, Bing, PowerPoint — while offering developers pricing designed to undercut the rest of the market. In his March memo, Suleyman wrote that his models would “enable us to deliver the COGS efficiencies necessary to be able to serve AI workloads at the immense scale required in the coming years.” These three models are the first tangible delivery on that promise.
Suleyman says a frontier large language model is coming — and Microsoft plans to be “completely independent”
Suleyman made clear that transcription, voice, and image generation are just the beginning. When asked whether Microsoft would build a large language model to compete directly with GPT at the frontier level, he was unequivocal. “We absolutely are going to be delivering state of the art models across all modalities,” he said. “Our mission is to make sure that if Microsoft ever needs it, we will be able to provide state of the art at the best efficiency, the cheapest price, and be completely independent.”
He described a multi-year roadmap to “set up the GPU clusters at the appropriate scale,” noting that the superintelligence team was formally stood up only in October 2025. Suleyman spoke to VentureBeat from Miami, where the full team was convening for one of its regular week-long in-person sessions. He described Nadella flying in for the gathering to lay out “the roadmap of everything that we need to achieve for our AI self-sufficiency mission over the next 2, 3, 4 years, and all the compute roadmap that that would involve.”
Building a competitive frontier LLM, of course, is a different order of magnitude in complexity, data requirements, and compute cost from what Microsoft demonstrated Wednesday. The models launched today are specialized — they handle audio and images, not the general reasoning and text generation that underpin products like ChatGPT or Copilot’s core intelligence. Suleyman has the organizational mandate, Nadella’s public backing, and the contractual freedom. What he doesn’t yet have is a track record at Microsoft of delivering on the hardest problem in AI.
But consider what he does have: three models that are best-in-class or near it in their respective domains, built by teams smaller than most seed-stage startups, running on half the industry-standard GPU footprint, and priced below every major cloud competitor. Two years ago, Suleyman proposed in MIT Technology Review what he called the “Modern Turing Test” — not whether AI could fool a human in conversation, but whether it could go out into the world and accomplish real economic tasks with minimal oversight. On Wednesday, his own models took a step toward that vision. The question now is whether Microsoft’s superintelligence team can repeat the trick at the scale that actually matters — and whether they can do it before the market’s patience runs out.
Sony has confirmed that PlayStation console prices will increase globally starting April 2, 2026, affecting several models across major regions and making current PlayStation deals potentially some of the last opportunities to buy the consoles at existing retail prices.
With prices rising across the US, UK, Europe and Japan, current deals on PlayStation consoles are likely to become more appealing for buyers who want to enter the PlayStation ecosystem before retailers begin reflecting the higher official prices.
Below are some of the best PlayStation deals currently available, covering the PS5 Pro, the standard PS5 console, and the Digital Edition, each offering slightly different benefits depending on how you prefer to play.
PlayStation 5 Pro
The PlayStation 5 Pro represents the most powerful console in the PlayStation lineup and targets players who want the best graphics performance possible from Sony’s current hardware generation.
Following Sony’s pricing update, the PS5 Pro now carries a recommended retail price of $899.99 in the United States, £789.99 in the UK, and €899.99 in Europe, making deals on this premium model particularly valuable before retailers adjust their listings.
The console focuses on enhanced visual performance, improved ray tracing capabilities, and higher-resolution gaming output that aims to take fuller advantage of modern 4K televisions and high-refresh-rate displays.
For players who want the most future-proof PlayStation console, the PS5 Pro offers the strongest hardware platform available right now, making it a compelling option for demanding titles and visually intensive games.
PlayStation 5
The standard PlayStation 5 remains the most versatile option in the lineup because it includes a built-in disc drive that allows players to run both physical and digital games.
Under Sony’s updated pricing structure, the standard PS5 now sits at $649.99 in the US, £569.99 in the UK, and €649.99 across Europe, increasing the appeal of any retailer discounts that still reflect earlier pricing.
The flexibility of the built-in disc drive makes the standard PS5 especially attractive for players who already own physical PlayStation game collections or who prefer buying discs that can be resold, traded, or shared between consoles.
The standard PS5 also continues to deliver strong performance across the current generation of games, supporting 4K output, fast loading through Sony’s SSD architecture, and access to the full PlayStation ecosystem.
PlayStation 5 Digital Edition
The PlayStation 5 Digital Edition offers the same core gaming performance as the standard PS5 but removes the disc drive in favour of a fully digital gaming experience.
Sony’s updated pricing places the Digital Edition at $599.99 in the US, £519.99 in the UK, and €599.99 in Europe, which keeps it as the most affordable entry point into the PlayStation console lineup.
This approach suits players who buy their games directly through the PlayStation Store and prefer the convenience of maintaining a digital library that can be downloaded instantly across multiple devices.
Because the Digital Edition typically carries a lower retail price than the disc version, it often represents the most accessible way to step into the PlayStation platform while still delivering the same gaming capabilities.
With Sony confirming global price increases across the PlayStation lineup starting April 2, current PlayStation console deals may become harder to find once retailers begin adjusting prices to match the updated recommended retail values.