Google on Monday unveiled the most significant upgrade to its autonomous research agent capabilities since the product’s debut, launching two new agents — Deep Research and Deep Research Max — that for the first time allow developers to fuse open web data with proprietary enterprise information through a single API call, produce native charts and infographics inside research reports, and connect to arbitrary third-party data sources through the Model Context Protocol (MCP).
The release, built on Google’s Gemini 3.1 Pro model, marks an inflection point in the rapidly intensifying race to build AI systems that can autonomously conduct the kind of exhaustive, multi-source research that has traditionally consumed hours or days of human analyst time. It also represents Google’s clearest bid yet to position its AI infrastructure as the backbone for enterprise research workflows in finance, life sciences, and market intelligence — industries where the stakes of getting information wrong are extraordinarily high.
“We are launching two powerful updates to Deep Research in the Gemini API, now with better quality, MCP support, and native chart/infographics generation,” Google CEO Sundar Pichai wrote on X. “Use Deep Research when you want speed and efficiency, and use Max when you want the highest quality context gathering & synthesis using extended test-time compute — achieving 93.3% on DeepSearchQA and 54.6% on HLE.”
Both agents are available starting today in public preview via paid tiers of the Gemini API, accessible through the Interactions API that Google first introduced in December 2025.
Why Google built two research agents instead of one
The launch introduces a tiered architecture that reflects a fundamental tension in AI agent design: the tradeoff between speed and thoroughness.
Deep Research, the standard tier, replaces the preview agent Google released in December and is optimized for low-latency, interactive use cases. It delivers what Google describes as significantly reduced latency and cost at higher quality levels compared to its predecessor. The company positions it as ideal for applications where a developer wants to embed research capabilities directly into a user-facing interface — think a financial dashboard that can answer complex analytical questions in near-real time.
Deep Research Max occupies the opposite end of the spectrum. It leverages extended test-time compute — a technique where the model spends more computational cycles iteratively reasoning, searching, and refining its output before delivering a final report. Google designed it for asynchronous, background workflows: the kind of task where an analyst team kicks off a batch of due diligence reports before leaving the office and expects exhaustive, fully sourced analyses waiting for them the next morning.
The Google DeepMind team framed the distinction on X: “Deep Research: Optimized for speed and efficiency. Perfect for interactive apps needing quicker responses. Deep Research Max: It uses extra time to search and reason. Ideal for exhaustive context gathering and tasks happening in the background.”
“Deep Research was our first hosted agent in the API and has gained a ton of traction over the last 3 months, very excited for folks to test out the new agents and all the improvements, this is just the start of our agents journey,” Logan Kilpatrick, who leads developer relations for Google’s AI efforts, wrote on X.
MCP support lets the agents tap into private enterprise data for the first time
Perhaps the most consequential feature in today’s release is the addition of Model Context Protocol support, which transforms Deep Research from a sophisticated web research tool into something more closely resembling a universal data analyst.
MCP , an emerging open standard for connecting AI models to external data sources, allows Deep Research to securely query private databases, internal document repositories, and specialized third-party data services — all without requiring sensitive information to leave its source environment. In practical terms, this means a hedge fund could point Deep Research at its internal deal-flow database and a financial data terminal simultaneously, then ask the agent to synthesize insights from both alongside publicly available information from the web.
Google disclosed that it is actively collaborating with FactSet, S&P, and PitchBook on their MCP server designs, a signal that the company is pursuing deep integration with the data providers that Wall Street and the broader financial services industry already rely on daily. The goal, according to the blog post authored by Google DeepMind product managers Lukas Haas and Srinivas Tadepalli, is to “let shared customers integrate financial data offerings into workflows powered by Deep Research, and to enable them to realize a leap in productivity by gathering context using their exhaustive data universes at lightning speed.”
This addresses one of the most persistent pain points in enterprise AI adoption: the gap between what a model can find on the open internet and what an organization actually needs to make decisions. Until now, bridging that gap required significant custom engineering. MCP support, combined with Deep Research’s autonomous browsing and reasoning capabilities, collapses much of that complexity into a configuration step. Developers can now run Deep Research with Google Search, remote MCP servers, URL Context, Code Execution, and File Search simultaneously — or turn off web access entirely to search exclusively over custom data. The system also accepts multimodal inputs including PDFs, CSVs, images, audio, and video as grounding context.
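To make that configuration step concrete, here is a minimal sketch of what a multi-tool Deep Research request might look like in Python. The google-genai SDK is real, but the Interactions API surface shown here (the client.interactions.create call, the agent names, and the tool-config fields) is an assumption reconstructed from Google's description rather than documented signatures, so treat it as illustrative only:

```python
# Hypothetical sketch: method names, agent IDs, and tool-config fields
# below are assumptions based on Google's announcement, not a confirmed API.
from google import genai

client = genai.Client(api_key="YOUR_GEMINI_API_KEY")

interaction = client.interactions.create(     # assumed endpoint
    agent="deep-research",                    # "deep-research-max" for the slower, exhaustive tier
    input=(
        "Identify likely Q1 acquisition targets in EU fintech, "
        "cross-referencing our internal deal pipeline with public filings."
    ),
    tools=[
        {"type": "google_search"},            # open-web grounding
        {"type": "mcp_server",                # private data via MCP; URL is a placeholder
         "url": "https://mcp.example-fund.internal/sse"},
        {"type": "code_execution"},           # enables inline chart generation
    ],
)

# Research runs are long-lived: production code would stream intermediate
# reasoning steps or poll until the final report (and charts) is ready.
print(interaction)
```

Dropping the google_search entry would, per Google's description, restrict the agent to the custom sources alone.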
Native charts and infographics turn AI reports into stakeholder-ready deliverables
The second headline feature — native chart and infographic generation — may sound incremental, but it addresses a practical limitation that has constrained the usefulness of AI-generated research outputs in professional settings.
Previous versions of Deep Research produced text-only reports. Users who needed visualizations had to export the data and build charts themselves, a friction point that undermined the promise of end-to-end automation. The new agents generate high-quality charts and infographics inline within their reports, rendered in HTML or Google’s Nano Banana format, dynamically visualizing complex datasets as part of the analytical narrative.
“The agent generates HTML charts and infographics inline with the report. Not screenshots. Not suggestions to ‘visualize this data.’ Actual rendered charts inside the markdown output,” noted AI commentator Shruti Mishra on X, capturing the practical significance of the change.
For enterprise users — particularly those in finance and consulting who need to produce stakeholder-ready deliverables — this transforms Deep Research from a tool that accelerates the research phase into one that can potentially produce near-final analytical products. Combined with a new collaborative planning feature that lets users review, guide, and refine the agent’s research plan before execution, and real-time streaming of intermediate reasoning steps, the system gives developers granular control over the investigation’s scope while maintaining the transparency that regulated industries demand.
How Deep Research evolved from a consumer chatbot feature to enterprise platform infrastructure
Today’s release crystallizes a strategic narrative Google has been building for months: Deep Research is not merely a consumer feature but a piece of infrastructure that powers multiple Google products and is now being offered to external developers as a platform.
The blog post explicitly notes that when developers build with the Deep Research agent, they tap into “the same autonomous research infrastructure that powers research capabilities within some of Google’s most popular products like Gemini App, NotebookLM, Google Search and Google Finance.” This suggests that the agent available through the API is not a stripped-down version of what Google uses internally but the same system, offered at platform scale.
The journey to this point has been remarkably rapid. Google first introduced Deep Research as a consumer feature in the Gemini app in December 2024, initially powered by Gemini 1.5 Pro. At the time, the company described it as a personal AI research assistant that could save users hours by synthesizing web information in minutes. By March 2025, Google upgraded Deep Research with Gemini 2.0 Flash Thinking Experimental and made it available for anyone to try. Then came the upgrade to Gemini 2.5 Pro Experimental, where Google reported that raters preferred its reports over competing deep research providers by more than a 2-to-1 margin. The December 2025 release was the pivot to developer access, when Google launched the Interactions API and made Deep Research available programmatically for the first time, powered by Gemini 3 Pro and accompanied by the open-source DeepSearchQA benchmark.
The underlying model driving today’s improvements is Gemini 3.1 Pro, which Google released on February 19, 2026. That model represented a significant leap in core reasoning: on ARC-AGI-2, a benchmark evaluating a model’s ability to solve novel logic patterns, 3.1 Pro scored 77.1% — more than double the performance of Gemini 3 Pro. Deep Research Max inherits that reasoning foundation and layers autonomous research behaviors on top of it, achieving 93.3% on DeepSearchQA (up from 66.1% in December) and 54.6% on Humanity’s Last Exam (up from 46.4%).
Google’s new Deep Research Max agent outperformed its December predecessor across nearly all qualitative dimensions in internal expert evaluations — but the older version held an edge in internal consistency and faithfulness. (Source: Google DeepMind)
Google faces a crowded field of competitors building autonomous research agents
Google is not operating in a vacuum. The launch arrives amid intensifying competition in the autonomous research agent space. OpenAI has been developing its own agent capabilities within ChatGPT under the codename Hermes, which includes an agent builder, templates, scheduling, and Slack integration, according to reports circulating on social media. Perplexity has built its business around AI-powered research. And a growing ecosystem of startups is attacking various slices of the automated research workflow.
What distinguishes Google’s approach is the combination of its search infrastructure — which gives Deep Research access to the broadest and most current index of web information available — with the MCP-based connectivity to enterprise data sources. No other company currently offers a research agent that can simultaneously query the open web at Google Search’s scale and navigate proprietary data repositories through a standardized protocol. The pricing structure also signals Google’s intent to drive adoption: according to Sim.ai, which tracks model pricing, the Deep Research agent in the December preview was priced at $2 per million input tokens and $2 per million output tokens with a 1 million token context window — positioning it as cost-competitive for the volume of research output it generates.
Not everyone greeted the announcement with unalloyed enthusiasm, however. Several users on X noted that the new agents are available only through the API, not in the Gemini consumer app. “Not on Gemini app,” observed TestingCatalog News, while another user wrote, “Google keeps punishing Gemini App Pro subscribers for some reason.” Others raised concerns about the presentation of benchmark results, with one user arguing that Google’s charts could be “misleading” in how they represent percentage improvements. These complaints point to a broader tension in Google’s AI strategy: the company is increasingly directing its most advanced capabilities toward developers and enterprise customers who access them through APIs, while consumer-facing products sometimes lag behind.
Deep Research Max led all competitors on DeepSearchQA and BrowseComp, but GPT 5.4 edged ahead on Humanity’s Last Exam, a benchmark measuring reasoning and knowledge. All results were evaluated by Google DeepMind using publicly available model APIs. (Source: Google DeepMind)
What Deep Research Max means for finance, biotech, and the future of knowledge work
The practical implications of today’s launch are most immediately felt in industries that depend on exhaustive, multi-source research as a core business function. In financial services, where analysts routinely spend hours assembling due diligence reports from scattered sources — SEC filings, earnings transcripts, market data terminals, internal deal memos — Deep Research Max offers the possibility of automating the initial research phase entirely. The FactSet, S&P, and PitchBook partnerships suggest Google is serious about making this work with the data infrastructure that financial professionals already use.
In life sciences, the blog post notes that Google has collaborated with Axiom Bio, which builds AI systems to predict drug toxicity, and found that Deep Research unlocked new levels of initial research depth across biomedical literature. In market research and consulting, the ability to produce stakeholder-ready reports with embedded visualizations and granular citations could compress project timelines from days to hours.
The key question is whether the quality and reliability of these automated outputs will meet the standards that professionals in these fields demand. Google’s benchmark numbers are impressive, but benchmarks measure performance on standardized tasks — real-world research is messier, more ambiguous, and often requires the kind of judgment that remains difficult to automate. Deep Research and Deep Research Max are available now in public preview via paid tiers of the Gemini API, with availability on Google Cloud for startups and enterprises coming soon.
Eighteen months ago, Deep Research was a feature that helped grad students avoid drowning in browser tabs. Today, Google is betting it can replace the first shift at an investment bank. The distance between those two ambitions — and whether the technology can actually close it — will define whether autonomous research agents become a transformative category of enterprise software or just another AI demo that dazzles on benchmarks and disappoints in the conference room.
BrianFagioli writes: Mozilla says it used an early version of Anthropic’s Claude Mythos Preview to comb through Firefox’s code, and the results were hard to ignore. In Firefox 150, the team fixed 271 vulnerabilities identified during this effort, a number that would have been unthinkable not long ago. Instead of relying only on fuzzing tools or human review, the AI was able to reason through code and surface issues that typically require highly specialized expertise.
The bigger implication is less about one release and more about where this is heading. Security has long favored attackers, since they only need to find a single flaw while defenders have to protect everything. If AI can scale vulnerability discovery for defenders, that dynamic could start to shift. It does not mean zero days disappear overnight, but it suggests a future where bugs are found and fixed faster than attackers can weaponize them. “Computers were completely incapable of doing this a few months ago, and now they excel at it,” says Mozilla in a blog post. “We have many years of experience picking apart the work of the world’s best security researchers, and Mythos Preview is every bit as capable. So far we’ve found no category or complexity of vulnerability that humans can find that this model can’t.”
The company concluded: “The defects are finite, and we are entering a world where we can finally find them all.”
It could enable larger AI models on lower-powered devices
Anker is getting into the silicon business, specifically building a CIM (Compute In Memory) solution that will support onboard large model processing inside tiny, low-powered Bluetooth earbuds.
THUS is Anker’s first step in a long-term plan to bring local, large-model AI to mobile, wearable, and IoT technologies. Anker’s chip technology relies on neural network-style computing, eschewing the traditional compute architecture in which the CPU processes commands using data and instructions it fetches from memory. Shuttling data back and forth between the two is an energy-intensive process. Neural networks, like the human brain, don’t really respect that division; letting everything happen in one place saves considerable energy. That’s why CIM is attractive to Anker as a solution for bringing more powerful AI to its small-battery, lower-powered devices.
Basically, THUS, which is being fabbed in Germany, performs its computations inside NOR flash memory cells, which are known for their low-power operation; they’re slower than traditional memory for writing data but actually faster than NAND memory for reading operations.
By putting the models the AI needs in the same place as the computation, THUS could conceivably lower power consumption and also, Anker claims, make it possible to fit larger models into devices whose tiny batteries would normally rule them out (at least based on traditional energy needs).
The first platform will be a pair of as-yet-unnamed Bluetooth earbuds, where THUS will support more powerful environmental noise cancellation than was possible with traditional on-board AIU platforms. A larger on-bud model means the AI can more effectively cut out unwanted noise for better call clarity. Anker will call the feature, naturally, Clear Calls.
The chip will also add a pair of other features, “Signature Sound” and “Voice Control,” though Anker didn’t offer any further details on these features in our briefing. What we do know is that Anker will reveal all the details about its first THUS-bearing headphones on May 21, 2026.
Thinking in memory
CIM (also known as “in-memory compute”) isn’t a new concept, but it has been widely ignored by most chip designers (some wonder if “it’s still alive”) and certainly by most people building ever-larger models for bigger, more powerful, and more agentic AI operations.
Still, if Anker, which says it’s not becoming a chip company, succeeds, it could be a big moment for all kinds of low-powered devices, which have traditionally relied on cloud-based AI and the larger models they can house there.
Imagine smarter smart watches. Even smartphones could be impacted if other companies, like, say, Apple, adopt CIM technologies for future Apple Silicon builds.
Oppo has just unveiled its flagship smartphone that’s “engineered to be your next camera”.
The Oppo Find X9 Ultra is fitted with “groundbreaking” lenses and “industry-leading hardware”, but how does it compare to the 4.5-star Oppo Find X9 Pro? After all, we concluded that the latter delivers a top-notch camera experience, earning it a spot on our best camera phones list.
We’ve compared the specs of the Oppo Find X9 Ultra against the Find X9 Pro, and highlighted the key differences between the two below.
The Find X9 Ultra is the first of Oppo’s Ultra models to launch globally.
The Find X9 Pro is available to buy now, and has a starting price of £1099. However, we have seen the phone’s price drop over the last few months, so it’s worth keeping an eye out for deals.
Oppo Find X9 Ultra has five rear lenses
Oppo explains that the Find X9 Ultra is fitted with a new-generation Hasselblad Master Camera System, which promises to deliver versatile, high-quality framing that spans from 14mm to 460mm.
Of those five rear lenses, the headline pair is dual Hasselblad 200MP sensors. The first of the two is the Ultra-Sensing main lens, which features the new 1/1.12-inch Sony LYTIA 901 sensor, while the second is a 3x ultra-sensing telephoto that boasts the largest sensor of its type (1/1.28-inch) and doubles as a macro lens.
Oppo Find X9 Ultra. Image Credit (Oppo)
The two 200MP lenses are supported by two 50MP cameras: an ultrawide and a 10x Ultra-Sensing Optical-Zoom telephoto. In fact, the latter benefits from an industry-first 20x optical zoom too.
Finally, the four lenses are flanked by a new-gen True Color Camera which promises natural colour rendition.
In comparison, the Find X9 Pro is equipped with a 50MP main lens which, sure, sounds pretty measly next to a 200MP alternative, but it’s able to capture plenty of detail and offers impressive low-light performance too.
Oppo Find X9 Pro. Image Credit (Trusted Reviews)
This is paired with a 200MP telephoto lens that can reach up to a whopping 120x zoom. While this will come at the expense of detail, Oppo’s Hasselblad Teleconverter attachment aims to fix this issue – and we’ll explain more below.
Both have supporting teleconverter attachments – but there’s a difference
Both the Find X9 Ultra and X9 Pro can be equipped with their own teleconverter attachments. With the Pro, the Teleconverter twists onto the 200MP telephoto lens and enables impressive zoom without compromising on quality. While it’s certainly not the most subtle of accessories, we were still impressed by its performance.
Oppo Find X9 Ultra attachment. Image Credit (Oppo)
Oppo has also created a similar 300mm Teleconverter lens for the X9 Ultra, which mounts to the 200MP, 3x telephoto sensor. According to Oppo, this attachment will allow photographers to retain sharp detail at “30x and beyond”. That’s a bold claim, and one we’re keen to try out for ourselves.
Oppo Find X9 Ultra runs on Snapdragon 8 Elite Gen 5
Photography ability aside, one of the key differences between the Find X9 Ultra and X9 Pro is with their respective chips. While the latter Pro model runs on MediaTek’s Dimensity 9500, the Ultra is powered by Qualcomm’s Snapdragon 8 Elite Gen 5.
We found during our review of the Find X9 Pro that its Dimensity 9500 chip, combined with Oppo’s Luminous Rendering Engine, enabled the flagship to fly through everyday use while feeling rapid and responsive too. In addition, although it isn’t a dedicated gaming phone, it still had no issue running titles such as Call of Duty Mobile.
Oppo Find X9 Pro. Image Credit (Trusted Reviews)
However, the Snapdragon 8 Elite Gen 5 is a tough competitor to beat. The chip is not only behind many of the best Android phones, but it can handle everything from casual use to generative AI tasks and gaming with ease. Having said that, we’d argue that most users are unlikely to notice much of a difference between the chips in everyday use.
Oppo Find X9 Pro has a larger battery
With a mighty 7500mAh cell, the Find X9 Pro has one of the largest batteries found in any smartphone. This translates to comfortably being a two-day handset, although remember this will depend on your own usage. For example, we found that on days where we really pushed the phone’s limits, the handset couldn’t quite make it through a full second day.
Although it’s not quite as large, the Find X9 Ultra is still fitted with a whopping 7050mAh battery, which promises to ensure “reliable, all-day content creation”.
It’s worth pointing out that, although the Find X9 Pro’s battery is larger than the X9 Ultra’s own, both do boast pretty generous capacities. Considering the likes of the Samsung Galaxy S26 Ultra and Google Pixel 10 Pro XL max out at 5000mAh and 5200mAh respectively, Oppo’s Find series are certainly not to be sniffed at.
Oppo Find X9 Ultra comes in a familiar orange shade
Both phones come in just two shades apiece, but the options differ. While the Find X9 Pro comes in Titanium Carbon or Silk White, the Find X9 Ultra is available in either Tundra Umber or Canyon Orange.
Regardless of the colour you choose, both the X9 Ultra and X9 Pro sport IP66, IP68 and IP69 ratings, which means the handsets can withstand water submersion and even high-pressure, high-temperature water jets too.
Early Verdict
Although the Oppo Find X9 Pro is easily one of the best camera phones we’ve reviewed, the Find X9 Ultra looks like a promising alternative for those who need even more versatility and shooting modes to play with. With a whopping five rear cameras and Qualcomm’s Snapdragon 8 Elite Gen 5 chip at play, the Oppo Find X9 Ultra is undoubtedly a promising handset for the keen photographer.
Microsoft has released out-of-band (OOB) security updates to patch a critical ASP.NET Core privilege escalation vulnerability.
The security flaw (tracked as CVE-2026-40372) was found in the ASP.NET Core Data Protection cryptographic APIs, and it could allow unauthenticated attackers to gain SYSTEM privileges on affected devices by forging authentication cookies.
Microsoft discovered the flaw following user reports that decryption was failing in their applications after installing the .NET 10.0.6 update release during this month’s Patch Tuesday.
“A regression in the Microsoft.AspNetCore.DataProtection 10.0.0-10.0.6 NuGet packages causes the managed authenticated encryptor to compute its HMAC validation tag over the wrong bytes of the payload and then discard the computed hash in some cases,” Microsoft says in the .NET 10.0.7 release notes.
“In these cases, the broken validation could allow an attacker to forge payloads that pass DataProtection’s authenticity checks, and to decrypt previously-protected payloads in auth cookies, antiforgery tokens, TempData, OIDC state, etc.
“If an attacker used forged payloads to authenticate as a privileged user during the vulnerable window, they may have induced the application to issue legitimately-signed tokens (session refresh, API key, password reset link, etc.) to themselves. Those tokens remain valid after upgrading to 10.0.7 unless the DataProtection key ring is rotated.”
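For readers outside the .NET ecosystem, the broken routine belongs to the standard encrypt-then-MAC pattern. The sketch below, written in Python purely for illustration and emphatically not Microsoft's code, shows what a correct validation step looks like and notes in the comments where the regression went wrong:

```python
import hashlib
import hmac
import os

MAC_KEY = os.urandom(32)  # demo value; Data Protection derives its keys from a key ring

def protect(body: bytes) -> bytes:
    # The tag must be computed over exactly the bytes validated later.
    tag = hmac.new(MAC_KEY, body, hashlib.sha256).digest()
    return body + tag

def unprotect(payload: bytes) -> bytes:
    body, tag = payload[:-32], payload[-32:]
    expected = hmac.new(MAC_KEY, body, hashlib.sha256).digest()
    # Per Microsoft's notes, the regression computed the HMAC over the
    # wrong bytes and in some cases discarded the result, which is
    # functionally the same as skipping this comparison entirely.
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication tag mismatch: possible forged payload")
    return body
```

When that comparison is effectively skipped, an attacker can hand the application any cookie body it likes, which is exactly the forgery scenario Microsoft describes.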
As Microsoft further explained in a Tuesday security advisory, this vulnerability can also enable attackers to disclose files and modify data, but they cannot impact the system’s availability.
On Tuesday, senior program manager Rahul Bhandari warned all customers whose applications use ASP.NET Core Data Protection to update the Microsoft.AspNetCore.DataProtection package to 10.0.7 as soon as possible, then redeploy to fix the validation routine and ensure that any forged payloads are rejected automatically.
More information regarding affected platforms, packages, and application configuration can be found in the original announcement.
In October, Microsoft also patched an HTTP request smuggling bug (CVE-2025-55315) in the Kestrel web server that was flagged with the “highest ever” severity rating for an ASP.NET Core security flaw.
Successful exploitation of CVE-2025-55315 enables authenticated attackers to hijack other users’ credentials, bypass front-end security controls, or crash the server.
Invincible season 4 episode 8 has landed on Prime Video — and, hoo boy, if Mark Grayson thought he had it bad already, nothing can prepare him (or you, for that matter) for what’s to come after that decision he’s just made.
If you’re here, I’m guessing you’ve seen the Amazon TV Original’s latest finale and have big questions about what you just watched. Luckily for you, I’m a huge Invincible nerd, so I’m perfectly placed to answer them.
Haven’t watched this chapter, titled ‘Don’t Leave Me Hanging Here’, but somehow stumbled upon this article? Consider this your one and only warning: full spoilers immediately follow for Invincible season 4‘s finale.
Do Thragg and his fellow Viltrumites go to Earth in the Invincible season 4 finale?
Season 4 episode 8’s cold open isn’t what you think it is (Image credit: Prime Video)
In short: yes — but the revelation that they’re now secretly living among Earth’s population is withheld until this episode’s final minutes.
Initially, it seems they’ve arrived on Earth with the sole intention of doing what Mark and the Coalition of Planets (CoP) did to the Viltrumites’ home world — that being, completely destroying it. You can remind yourself about that in my Invincible season 4 episode 7 ending explainer.
Anyway, season 4 episode 8 opens with a cataclysmic event that sees Thragg and the remaining Viltrumites attack Earth, and leave countless dead and wanton destruction in their wake.
It’s soon revealed, though, to be a misdirect. Indeed, said scenario is just a nightmarish vision that a traumatized Mark creates in his own mind aboard the interstellar starship he’s travelling home on. He imagines similar incidents throughout this episode, too, including Thragg killing his mom Debbie, girlfriend Eve, and Global Defence Agency (GDA) chief Cecil Stedman. All of them contribute to him having panic attacks and, eventually, seeking therapy through the GDA.
But I digress. Upon the space vessel’s arrival near Earth, Mark’s dad Nolan and Zoe/Tech Jacket start to formulate a plan in case Thragg and his forces are present on Earth. However, fearing for his mom and Eve, Mark impulsively leaves the ship and heads planetside alone. One quick but fear-fuelled recon mission later, though, and it appears that the Viltrumites haven’t traveled to Mark’s home planet. Phew!
Why does Mark let Thragg and the Viltrumites stay on Earth in Invincible season 4’s finale?
Err, nice to see you again, Thragg? (Image credit: Prime Video)
Or so we’re led to believe. In the final minutes of the Prime Video show’s latest finale, Eve persuades Mark to take a flight to clear his head.
Advertisement
However, upon said excursion, he’s stopped in his tracks by what he thinks is a hallucination of Thragg. Closing his eyes, Mark takes some deep breaths to compose himself but, upon opening them again, realizes Thragg isn’t a figment of his imagination — he’s really there.
This didn’t work last time, Mark, so why would it work now? (Image credit: Prime Video)
Mark launches himself at Thragg but, just like in Invincible season 4 episode 7, his punches do no damage. Thragg soon starts dodging Mark’s increasingly wild attacks with ease before effortlessly pushing him away.
Mark lines up another strike, but Thragg bellows at him to stop, which Mark does. In the incredibly tense chat that follows, Thragg informs Mark that he hasn’t done anything to Earth… yet. Mark angrily asks what Thragg wants, to which the Viltrumites’ Grand Regent replies that, upon his coronation, he was tasked with leading his people out of the darkness to thrive among the stars — a mission that, still clearly weighing heavy on Thragg, he admits hasn’t been easy.
I wouldn’t keep hollering at Anissa if I were you, guys… (Image credit: Prime Video)
Then comes the kicker. Thragg reveals only 37 Viltrumites remain, but even that minuscule number would be enough to “tear Earth apart” and be “fair payment” for Viltrum’s own destruction.
Continuing, Thragg gives Mark an ultimatum. In a voiceover accompanying scenes of Luccan, Anissa, and Krieg secretly living among humanity, Thragg tells Mark to let the remaining Viltrumites stay and breed with humans to prevent the Viltrumite race’s extinction. Do so, and Earth and its inhabitants won’t be harmed. However, if Mark or the CoP get in their way, billions will die and those who survive will be forced to eke out a miserable life under Thragg’s authoritarian rule.
Until next time, Thragg… (Image credit: Prime Video)
An indignant Mark starts to say he’ll never accept Thragg’s truce but, as the previously mentioned hallucinations, plus a soul-calming memory of Eve smiling at him, flash before his eyes, he reluctantly agrees to Thragg’s proposal.
Surprised, Thragg admits it’s strange how the universe works, adding that, “willing or not,” he didn’t expect Mark to be his species’ savior. As Thragg prepares to leave, Mark says to himself, “what have I done?”. Thragg hears him and, turning back to Mark, says “you just saved the lives of every person on this planet”. Thragg departs, leaving a despairing Mark floating alone in the sky.
Did Eve get an abortion in Invincible season 4?
Eve finally tells Mark she was pregnant in the season 4 finale (Image credit: Prime Video)
Yes, but not straight away — and there’s an emotionally devastating addendum to this storyline.
After visiting Debbie to tell her that the severely injured Oliver is being treated back on Talescria, Mark leaves to see Eve, who tearfully greets him because she’d started to think he’d died. It’s been months since he left to take part in the Viltrumite War, so I don’t blame her.
Anyway, following some long-overdue, erm, lovemaking, Eve mentions that, as Mark (and, by proxy, viewers) can see, she’s put on some weight. She blames that on living with her parents and overeating in his absence, and Mark replies that he couldn’t care less about her weight gain. Eve also reveals that her powers have miraculously returned, though she’s constantly worried that they’ll stop working again.
Anybody else well up during this scene, too? (Image credit: Prime Video)
Later, when the pair are sitting on the Grayson household’s roof, Eve comes clean. She tells him that she knows why she lost her matter manipulation abilities — it was, as we learned in season 4 episode 3, because she’d fallen pregnant. Tearing up, she adds that, without Mark around, she felt so alone and, if he had died, she’d have been scared about potentially raising a child on her own. Long story short: she had an abortion.
Visibly moved, Mark tries to process everything Eve’s just told him. However, upon realizing that she’s had to carry this burden alone for months, he quickly turns his attentions to Eve and, while crying himself, hugs his clearly distraught girlfriend to reassure her that everything will be okay.
Does Debbie forgive Nolan in Invincible’s season 4 finale? And why does she go to space?
Season 4 episode 8 indicated that, one day, Debbie might finally forgive Nolan (Image credit: Prime Video)
Let’s start with the first question: no, but there’s a clear hint that her stance has started to soften and that she might one day forgive Nolan.
Before Nolan heads back to Talescria to be with Oliver and aid the CoP’s efforts to find the remaining Viltrumites, he visits Debbie again. He tells her how brave Oliver was, and that he and Mark did her proud. Replying, Debbie rebukes Nolan for letting Oliver get hurt before chastising him once more for trying to make amends for what he did in Invincible‘s season 1 finale.
As she prepares to head back into her home, Nolan flies in front of her. Re-expressing his deep regret for the devastation he caused in season 1 episode 8, he also reiterates he’s trying to change and begs Debbie to let him show her that he deserves a second chance — something she’s long believed everyone is entitled to. Somewhat taken aback, Debbie re-composes herself, tells Nolan he can’t stay in the same house as her, and walks away.
Will the Graysons’ rift be healed by this space adventure? (Image credit: Prime Video)
Later, Debbie complains to Paul who, it’s revealed, she’s no longer romantically involved with. Despite the pair’s separation, he surprisingly advises her to go with Nolan to Talescria to be with Oliver, adding that this world of superheroes, villains, and extraterrestrial worlds is her life as much as Mark, Nolan, and Oliver’s — she just “doesn’t see it yet.”
Fast-forward to Nolan’s departure, and Debbie shocks him, Mark, and Eve by saying she’s decided to follow Paul’s advice to be at Oliver’s side. She tells Nolan to call down the spaceship, but he informs her that it can’t land anywhere. Reluctantly, Debbie agrees to let Nolan fly her to said space vessel, where they share a tender moment looking out onto planet Earth. D’aww!
Invincible season 4 episode 8’s mid-credits scene: what does it tell us about the Scourge Virus?
Don’t do it, Allen… (Image credit: Prime Video)
In this episode’s one and only mid-credits scene, Telia hands Allen, who’s now the CoP’s leader, a tablet with a posthumous video message from Thaedus. In the event of the latter’s death, said footage was to be passed on to Allen, so he hits play.
He’s probably wishing he didn’t. As the recording progresses, Thaedus shockingly reveals that he created a perfected form of the Scourge Virus. That’s the pathogen he made to wipe out the Viltrumites decades ago, which, while it killed billions, didn’t eradicate them all.
With the tyrannical species surviving the Viltrumite War, Thaedus gives Allen a single mission: no matter the cost, use the far deadlier strain to kill every single living Viltrumite. Unfortunately for Allen, that would include Mark, Nolan, and Oliver, who’ve allied themselves with the CoP, and who he considers close friends.
Not exactly the dilemma that Allen probably wanted as he tries to get his feet under the leadership table, but will he go through with it? We’ll have to wait until next season to find out. Speaking of which…
Has Invincible season 5 been announced yet?
Don’t look so surprised, Mark — season 5 was an inevitability (Image credit: Prime Video)
Oh, haven’t you heard? Invincible season 5 is expected to be released sometime in early 2027, with co-creator Robert Kirkman indicating it could drop between February and April next year. All but one of the adult animated series’ installments — Invincible season 2 part 1 — have come out around March, so don’t be stunned if next season does likewise.
A group of unauthorized users has reportedly gained access to Mythos, the cybersecurity tool recently announced by Anthropic.
Much has been made of Mythos and its purported power — an AI product designed for enterprise security that, in the wrong hands, could become a potent hacking tool, according to the company. Now Bloomberg has reported that a “private online forum,” the members of which have not been publicly identified, has managed to gain access to the tool through a third-party vendor.
“We’re investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments,” an Anthropic spokesperson told TechCrunch. The company said that, so far, it has found no evidence that the supposedly unauthorized activity has impacted Anthropic’s systems in any way.
The unauthorized group tried a number of different strategies to gain access to the model, including using “access” enjoyed by the person who was interviewed by Bloomberg. That person is currently employed at a third-party contractor that works for Anthropic, the outlet reported.
Members of the group are part of a Discord channel that seeks out information about unreleased AI models, the outlet reported. The group has been using Mythos regularly since gaining access to it, and provided evidence to Bloomberg in the form of screenshots and a live demonstration of the software.
Bloomberg reports that the group, which supposedly gained access to the tool on the same day it was publicly announced, “made an educated guess about the model’s online location based on knowledge about the format Anthropic has used for other models.” The group in question is “interested in playing around with new models, not wreaking havoc with them,” the source told the outlet.
Mythos was released to a select number of vendors, including big names like Apple, as part of an initiative called Project Glasswing. The limited release of the model was designed to prevent its use by bad actors. The tool could be weaponized against corporate security instead of bolstering it, Anthropic said.
If true, unauthorized use of Mythos could spell trouble for Anthropic, which restricted the release to a handful of vendors precisely to allay enterprise security concerns.
Summary: Lovable, the $6.6 billion vibe coding platform with eight million users, has faced three documented security incidents exposing source code, database credentials, and thousands of user records, with the most recent BOLA vulnerability left open for 48 days after the company closed a bug bounty report without escalation. The incidents are representative of a structural problem across vibe coding: 40-62% of AI-generated code contains vulnerabilities, 91.5% of vibe-coded apps had at least one AI hallucination-related flaw in Q1 2026, and the market’s incentive structure rewards growth over security at a moment when 60% of all new code is projected to be AI-generated by year end.
Lovable, the vibe coding platform valued at $6.6 billion with eight million users, has spent the past two months dealing with security incidents that collectively exposed source code, database credentials, AI chat histories, and the personal data of thousands of users across projects built on its platform. The most recent disclosure, published on 20 April by a security researcher, revealed a broken object-level authorisation vulnerability in Lovable’s API that allowed anyone with a free account to access another user’s profile, public projects, source code, and database credentials in as few as five API calls. The researcher reported the flaw to Lovable’s bug bounty programme on 3 March. Lovable patched it for new projects but never fixed it for existing ones, marked a follow-up report as a duplicate, and closed it. As of reporting, the vulnerability had been open for 48 days.
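Broken object-level authorisation is conceptually simple: the API confirms that a record exists but never checks that the caller is entitled to read it. Here is a minimal sketch of the missing check, using hypothetical endpoint and field names rather than Lovable's actual API:

```python
# Illustrative only: routes, fields, and the in-memory "database" are
# hypothetical stand-ins, not Lovable's real API.
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

PROJECTS = {
    "p1": {"owner": "alice", "source": "...", "db_credentials": "..."},
}

def current_user() -> str:
    # Stand-in for real session or token authentication.
    return request.headers.get("X-User", "anonymous")

@app.get("/api/projects/<project_id>")
def get_project(project_id: str):
    project = PROJECTS.get(project_id)
    if project is None:
        abort(404)
    # The check a BOLA-vulnerable endpoint omits: the record was found,
    # but is *this* caller allowed to read it?
    if project["owner"] != current_user():
        abort(403)
    return jsonify(project)
```

In the vulnerable pattern, the ownership comparison is simply absent, so any authenticated free account can walk object IDs and pull back other users' source code and credentials.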
Lovable’s response followed a pattern that security researchers found more telling than the vulnerability itself. The company first posted on X that it “did not suffer a data breach,” calling the exposed data “intentional behaviour.” It then blamed its own documentation, saying that what “public” implies “was unclear.” It then blamed its bug bounty partner HackerOne, saying reports were “closed without escalation because our HackerOne partners thought that seeing public projects’ chats was the intended behaviour.” Later that day, it issued a partial apology acknowledging that “pointing to documentation issues alone was not enough.” Cybernews headlined its coverage: “Lovable goes on ego trip denying vulnerability, then blames others for said vulnerability.”
What was exposed
The April incident affected projects created before November 2025. The researcher demonstrated that extracting a user’s source code from Lovable’s API also yielded hardcoded Supabase database credentials embedded in that code. One affected project belonged to Connected Women in AI, a Danish nonprofit. Its exposed data contained real user records including names, job titles, LinkedIn profiles, and Stripe customer IDs, with records linked to individuals at Accenture Denmark and Copenhagen Business School. Employees at Nvidia, Microsoft, Uber, and Spotify reportedly have Lovable accounts tied to affected projects.
This was the third documented security incident involving the platform. In February, a tech entrepreneur named Taimur Khan found 16 vulnerabilities, six of them critical, in a single app hosted on Lovable and featured on its own Discover page with more than 100,000 views. The most severe was inverted authentication logic that granted anonymous users full access while blocking authenticated users. The app, an AI-powered EdTech tool, exposed 18,697 user records including 4,538 student accounts from institutions including UC Berkeley and UC Davis, with minors likely on the platform. Khan reported his findings through Lovable’s support channel. His ticket was closed without a response.
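The "inverted authentication logic" Khan describes is worth spelling out, because it shows how small such a defect can be. The reconstruction below is hypothetical, since the affected app's source isn't public, but the bug class amounts to a negation in the wrong place:

```python
# Hypothetical reconstruction of the bug class, not the actual app's code.
def require_login(user) -> None:
    # Intended behaviour: reject callers who are NOT signed in.
    if user is None:
        raise PermissionError("login required")

def require_login_inverted(user) -> None:
    # The inverted variant: signed-in users are rejected while anonymous
    # callers pass straight through to the data, matching the behaviour
    # Khan reported.
    if user is not None:
        raise PermissionError("login required")
```

A human reviewer would likely catch the flipped condition; a non-technical user prompting an app into existence never gets the chance.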
An earlier study in May 2025 found that 170 out of 1,645 sampled Lovable-created applications had issues allowing personal information to be accessed by anyone. Approximately 70% of Lovable apps had row-level security disabled entirely.
Advertisement
The structural problem
Lovable is not uniquely insecure. It is representatively insecure. The platform generates full-stack applications using React, Tailwind, and Supabase in response to natural language prompts, a process the industry calls vibe coding after Andrej Karpathy coined the term in February 2025. The approach lets anyone describe an application and have it built by an AI model without writing or reviewing code. Collins English Dictionary named it Word of the Year for 2025. Gartner forecasts that 60% of all new code will be AI-generated by the end of this year.
The security data across the entire category is consistent. Between 40 and 62% of AI-generated code contains security vulnerabilities, depending on the study. AI-written code produces flaws at 2.74 times the rate of human-written code, according to an analysis of 470 GitHub pull requests. A first-quarter 2026 assessment of more than 200 vibe-coded applications found that 91.5% contained at least one vulnerability traceable to AI hallucination. More than 60% exposed API keys or database credentials in public repositories. The vulnerability classes are the same across every major vibe coding platform: disabled row-level security, hardcoded secrets, missing webhook verification, injection flaws, and broken access controls.
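Two of those classes are easy to show concretely. The snippet below uses the real supabase-py client, though the URL, key names, and table are placeholders; it contrasts the hardcoded-credential anti-pattern the studies keep finding with environment-based secrets, and flags the row-level security toggle so many generated apps leave off:

```python
import os
from supabase import create_client  # real library; the values below are placeholders

# Anti-pattern: the key lives in source code, so anyone who can read the
# code (or extract it through an API flaw like the one above) owns the DB.
# supabase = create_client("https://xyz.supabase.co", "service-role-key...")

# Safer: secrets come from the environment and never enter the repository.
supabase = create_client(
    os.environ["SUPABASE_URL"],
    os.environ["SUPABASE_SERVICE_ROLE_KEY"],
)

# Secrets handling is only half the fix. Unless each table also has
# Postgres row-level security enabled, e.g.
#   ALTER TABLE profiles ENABLE ROW LEVEL SECURITY;
# plus explicit policies, any client holding the public anon key can
# read every row.
```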
Bolt.new ships with row-level security off by default. Cursor has had multiple CVEs patched, including a case-sensitivity bypass enabling persistent remote code execution. Researchers at Pillar Security demonstrated a “rules file backdoor” attack in which hackers inject hidden malicious instructions into configuration files used by Cursor and GitHub Copilot. A separate “Agent Commander” attack in March showed that prompt injection into AI coding agents could convert autonomous coding tools into remotely controlled malware delivery platforms. In January, the vibe-coded social network Moltbook was breached within three days of launch, exposing 1.5 million API authentication tokens and 35,000 email addresses through a misconfigured Supabase database with no row-level security.
The economic incentive problem
Security firms are raising money specifically to address the gap. Escape raised $18 million to replace manual penetration testing with AI agents that scan vibe-coded applications, citing over 2,000 high-impact vulnerabilities and hundreds of exposed secrets found in live production systems. Lovable itself partnered with Aikido to bring automated pentesting to its platform. But the fundamental incentive structure of the market works against security.
Lovable hit $4 million in annual recurring revenue in its first four weeks and $10 million in two months with a team of 15 people. It raised $200 million at a $1.8 billion valuation in July 2025 and $330 million at $6.6 billion in December, more than tripling its valuation in five months. Enterprise adoption of vibe coding grew 340% year over year. Non-technical user adoption surged 520%. Eighty-seven percent of Fortune 500 companies have adopted at least one vibe coding platform. The market rewards speed and accessibility. Security is a cost centre that slows both.
The result is a category in which the dominant platforms generate code that is insecure by default, the users generating that code lack the expertise to identify the vulnerabilities, and the platforms themselves have financial incentives to prioritise growth over remediation. Lovable’s handling of the March and April incidents illustrates the dynamic precisely: a bug bounty report was closed without escalation, a vulnerability affecting thousands of projects was patched for new users but not existing ones, and the public response cycled through denial, deflection, and a partial apology within a single day.
The regulatory gap
The EU AI Act’s high-risk obligations take effect on 2 August, requiring transparency, human oversight, and data governance for AI systems. California’s S.B. 53 and New York’s RAISE Act require frontier AI developers to publish safety frameworks and report incidents. But none of these regulations specifically address the security of code generated by AI models for end users, and the adoption data suggests the market is moving faster than regulators can respond. Financial services and healthcare, the two most regulated sectors, show the lowest vibe coding adoption rates at 34 and 28% respectively, which indicates that the market itself recognises the compliance gap even if regulations have not yet caught up.
As Trend Micro framed it: “The real risk of vibe coding isn’t AI writing insecure code. It’s humans shipping code they never had a chance to secure.” The 84% surge in App Store submissions driven by vibe coding tools suggests the volume of unreviewed code entering production is accelerating. Thirty-five CVEs were disclosed in March alone from AI-generated code, up from six in January, and Georgia Tech estimates the actual figure is five to ten times higher than what is detected.
Lovable is the fastest-growing software startup in history by several measures. It is also a company that closed a critical vulnerability report without reading it, left thousands of projects exposed for 48 days, and responded to public disclosure by denying a breach, blaming its documentation, blaming its bug bounty partner, and then apologising for the apology. The pattern is not unique to Lovable. It is the pattern of a category that has built extraordinary tools for creating software and almost nothing for securing it.
ZenTimings is a Windows utility designed for AMD Ryzen platforms that displays real-time memory configuration data, including timings, frequency, and Infinity Fabric clocks. It’s primarily used to verify BIOS or XMP/EXPO settings, offering a straightforward way to check how system memory is running.
Honor has finally lifted the covers off its latest N-series devices, and the new models bring several key upgrades over the Honor 400 series, which was the last N-series lineup to launch outside China. The Pro model in the new Honor 600 series is especially noteworthy because it’s positioned as a legitimate “accessible flagship” that pairs a top-tier Snapdragon SoC with a stunning display and a massive battery at an attractive price.
Flagship specs without the flagship price
The new lineup pushes Honor’s N-series further into flagship territory, with both the standard Honor 600 and the more premium Honor 600 Pro offering features typically reserved for more expensive phones. The Honor 600 Pro packs Qualcomm’s Snapdragon 8 Elite chip and is positioned as a serious option for gaming and demanding workloads. The standard variant, meanwhile, runs on the Snapdragon 7 Gen 4, offering a more balanced mix of performance and efficiency.
Honor 600 Pro in Golden White, Orange, and Black (Image credit: Honor)
On the display front, both phones feature a 6.57-inch OLED panel with a 120Hz refresh rate, peak brightness of 8,000 nits, and 3,840Hz PWM dimming. Battery life is another major highlight, with the two devices packing a 7,000mAh silicon-carbon battery that supports 80W wired fast charging and 27W reverse charging. The Pro model even includes 50W wireless charging support, a feature that’s often omitted on affordable flagships.
Honor 600 in Orange, Golden White, and Black (Image credit: Honor)
The Honor 600 series’ camera hardware is no slouch either, with both devices featuring a 200MP main shooter with a large 1/1.4-inch sensor size. On the Pro model, it’s paired with a 50MP telephoto lens with up to 120x zoom and CIPA 6.5 image stabilization, a 12MP ultrawide camera, and a 50MP selfie shooter. The standard version has the same ultrawide and selfie cameras, but skips the telephoto camera.
As announced earlier, Honor is also debuting its upgraded AI Image to Video 2.0 feature with the lineup, which allows users to generate short videos from still images using natural language prompts. Durability has also been upgraded, and both models feature an IP69K rating along with enhanced drop resistance certification.
Pricing and availability
Honor has launched the 600 series in Malaysia today, with the Pro model priced at RM3,099 (~$784) for the 12GB+256GB configuration and RM3,299 (~$835) for the 512GB storage option. The standard Honor 600 comes in a single 12GB+512GB configuration priced at RM2,599 (~$658). Both models come in Orange, Golden White, and Black color options.
Honor has confirmed that the devices will roll out to additional global markets, but regional pricing and availability details have not yet been announced.
Federal authorities are now reviewing a string of deaths and disappearances involving scientists tied to sensitive U.S. aerospace and nuclear work, though officials have not established any confirmed link between the cases. The FBI says it “is spearheading the effort to look for connections into the missing and deceased scientists,” adding that it “is working with the Department of Energy, Department of War, and with our state … and local law enforcement partners to find answers.” The Republican-led House Oversight Committee also announced an investigation into the reports. CNN reports: A nuclear physicist and MIT professor fatally shot outside his Massachusetts residence. A retired Air Force general missing from his New Mexico home. An aerospace engineer who disappeared during a hike in Los Angeles. These are among at least 10 individuals connected to sensitive US nuclear and aerospace research who have died or disappeared in recent years, prompting concerns whether they are connected and fueling speculation online about the possibility of nefarious activity. […]
The Defense Department said only that it would respond to the committee directly, and the Department of Energy referred questions to the White House. In a post on X, NASA said it is “coordinating and cooperating with the relevant agencies” in relation to the scientists. “At this time, nothing related to NASA indicates a national security threat,” NASA spokesperson Bethany Stevens said.
The cases vary widely in circumstance. Some involve unsolved homicides, while others are missing persons cases with no signs of foul play. In at least two instances, families have pointed to preexisting medical conditions or personal struggles as explanations. Authorities have not established any links between the cases. The White House said last week it is also working with federal agencies to probe any potential links between the deaths and disappearances, with President Donald Trump referring to the matter as “pretty serious stuff.” “The United States has thousands of nuclear scientists and nuclear experts,” said Rep. James Walkinshaw, a Democrat who also serves on the Oversight Committee. “It’s not the kind of nuclear program that potentially a foreign adversary could significantly impact by targeting 10 individuals.”