Connect with us
DAPA Banner

Tech

Adobe’s new Firefly AI Assistant wants to run Photoshop, Premiere, Illustrator and more from one prompt

Published

on

Adobe today launched its most ambitious AI offensive to date, unveiling the Firefly AI Assistant — a new agentic creative tool that can orchestrate complex, multi-step workflows across the company’s entire Creative Cloud suite from a single conversational interface — alongside a raft of new video, image, and collaboration features designed to position the company at the center of the rapidly evolving AI-powered content creation landscape.

The announcements, which also include a new Color Mode for Premiere Pro, the addition of Kling 3.0 video models to Firefly’s growing roster of third-party AI engines, and Frame.io Drive — a virtual filesystem that lets distributed teams work with cloud-stored media as though it lived on their local machines — represent Adobe’s clearest signal yet that it views agentic AI not as a feature upgrade but as a fundamental reshaping of how creative work gets done.

“We want creators to tell us the destination and let the Firefly assistant — with its deep understanding of all the Adobe professional tools and generative tools — bring the tools to you right in the conversation,” Alexandru Costin, Vice President of AI & Innovation at Adobe, told VentureBeat in an exclusive interview ahead of the launch.

The stakes could hardly be higher. Adobe is fighting to convince Wall Street, creative professionals, and a wave of well-funded AI-native competitors that its decades-old software empire can not only survive the generative AI revolution but lead it.

Advertisement

How Adobe turned a research prototype into a 100-tool creative agent

The centerpiece of today’s announcement is the Firefly AI Assistant, which Adobe describes as a fundamentally new way to interact with its creative tools. Rather than requiring users to manually navigate between Photoshop, Premiere, Illustrator, Lightroom, Express, and other apps — selecting the right tool for each step of a complex project — the assistant lets creators describe an outcome in natural language. The agent then figures out which tools to invoke, in what order, and executes the workflow.

The assistant is the productized version of Project Moonlight, a research prototype Adobe first previewed at its annual MAX conference in the fall of 2025 and subsequently refined through a private beta. “This is basically [Project] Moonlight,” Costin confirmed to VentureBeat. “We started with all the learnings from Moonlight, and we engaged with customers. We looked internally. We evolved that architecture to make it more ambitious.”

Under the hood, Adobe says it has assembled roughly 100 tools and skills that the assistant can call upon, spanning generative image and video creation, precision photo editing, layout adaptation, and even stakeholder review through Frame.io. The system is built around a single conversational interface inside the Firefly web app where users describe what they want and the assistant maintains context across sessions. Pre-built Creative Skills — purpose-built, multi-step workflow templates such as portrait retouching or social media asset generation — can be run from a single prompt and customized to match a creator’s own style. The assistant also learns a creator’s preferred tools, workflows, and aesthetic choices over time, and understands the content type being worked on — image, video, vector, brand assets — to make context-aware decisions.

Crucially, outputs use native Adobe file formats — PSD, AI, PRPROJ — meaning users can take any result into the corresponding flagship app for manual, pixel-level refinement at any point. “We always imagine this continuum where you can have complete conversational edits and pixel-perfect edits, and you can decide, as a creative, where you want to land,” Costin said. The Firefly AI Assistant will enter public beta in the coming weeks, though Adobe did not specify an exact date.

Advertisement

Why Wall Street is watching Adobe’s AI pricing model so closely

For a company whose AI monetization story has faced persistent skepticism from investors, the pricing structure of the Firefly AI Assistant will be closely watched. Costin told VentureBeat that, at launch, using the assistant will require an active Adobe subscription that includes the relevant apps — meaning users who want the agent to invoke Photoshop cloud capabilities, for instance, will need an entitlement that includes the Photoshop SKU. Generative actions will consume the user’s existing pool of generative credits, consistent with how Firefly credits work across the rest of Adobe’s platform.

“To use some of these cloud capabilities from Photoshop and other apps, you need to have a subscription that includes access to the Photoshop SKU,” Costin explained. “You’ll be consuming your credits when you use generative features.” He acknowledged, however, that the model could evolve: “As we better understand the value of this — and the costs of operating the brain, the conversation engine — things might change.”

The question of whether Adobe can convert AI enthusiasm into meaningful revenue growth is anything but theoretical. When Adobe reported its most recent quarterly results in March, it touted 10% year-over-year revenue growth to $6.4 billion and disclosed that annual recurring revenue from AI standalone and add-on products had reached $125 million — a figure CEO Shantanu Narayen projected would double within nine months.

Adobe adds Chinese AI video models to Firefly, raising commercial safety questions

Alongside the assistant, Adobe is expanding Firefly’s roster of third-party AI models to include Kling 3.0 and Kling 3.0 Omni, two video generation models developed by Kuaishou, the Chinese technology company. Kling 3.0 focuses on fast, high-quality production with smart storyboarding and audio-visual sync, while the Omni variant adds professional controls for shot duration, camera angle, and character movement across multi-shot sequences. The additions bring Firefly’s model count to more than 30, joining Google’s Nano Banana 2 and Veo 3.1, Runway’s Gen-4.5, Luma AI’s Ray3.14, Black Forest Labs’ FLUX.2[pro], ElevenLabs’ Multilingual v2, and others.

Advertisement

When asked whether Adobe had concerns about integrating a model from a Chinese tech company given the current geopolitical climate, Costin was direct: “We think choice is what we want to offer our customers.” He explained that Adobe’s strategy distinguishes between its own commercially safe, first-party Firefly models — trained on licensed Adobe Stock imagery and public domain content — and third-party partner models, which carry different commercial safety profiles. “For some use cases, like ideation, non-production use cases, we got requests from customers to support some external models,” Costin said. “If I’m in ideation, I might be more flexible with commercial safety. When I go into production, I’d want to have a model that gives you more confidence.”

This raises an important nuance for the agentic era. When the Firefly AI Assistant autonomously selects which model to use for a given task, the commercial safety guarantees may vary depending on which engine it invokes. Costin pointed to Adobe’s Content Credentials system — the metadata-and-fingerprinting framework developed through the Content Authenticity Initiative — as the mechanism for maintaining transparency. “The agentic power — and the fact that the assistant has access to all of those models — means it could decide to use a model that carries different content credentials,” he acknowledged. “But with the transparency of content credentials, the user will know how a particular piece of content was created and can decide whether that’s commercially safe or not.” Adobe offers commercial indemnity for its first-party Firefly models but applies different indemnity levels for third-party models — a distinction that enterprise buyers, in particular, will need to carefully evaluate.

Inside Adobe’s active collaboration with Nvidia on long-running AI agent infrastructure

Adobe’s agentic ambitions also intersect with its strategic partnership with Nvidia, announced earlier this year at Nvidia’s GTC conference. When asked whether the Firefly AI Assistant’s agentic capabilities are built on NVIDIA’s agent toolkit and NeMo infrastructure, Costin revealed that the collaboration is active but has not yet made it into a shipping product.

“We’re in active discussions — investigating not only Nemotron,” Costin said. “They have this technology called Open Shell and Nemo Claw, which give us the ability to efficiently run long-running agentic workflows in a sandboxed environment.” He said the technology would become increasingly important as Adobe pushes the assistant to handle longer, more autonomous creative tasks — but cautioned that “it’s not shipping yet. It’s being actively explored.”

Advertisement

For Nvidia, which is building an ecosystem of enterprise AI agent platforms with partners like Adobe, Salesforce, and SAP, the partnership could eventually serve as a high-profile proof point for its agent infrastructure stack in the creative vertical. For Adobe, the ability to run complex, long-duration agentic workflows efficiently and securely in sandboxed environments could be the technical foundation that separates the Firefly AI Assistant from lighter-weight chatbot integrations offered by competitors. The partnership also signals Adobe’s recognition that the computational demands of agentic AI — where a single user request may trigger dozens of model calls and tool invocations — require infrastructure partnerships that go well beyond what a software company can build alone.

Premiere Pro’s new color grading mode and the tools Adobe is shipping today

Beyond the headline AI assistant announcement, Adobe’s broader set of updates reflects a company trying to strengthen its position across every phase of the content creation pipeline. Color Mode in Premiere Pro may be the most significant near-term upgrade for working editors. Entering public beta today, Color Mode is described as a first-of-its-kind color grading experience built specifically for the way editors — rather than dedicated colorists — think and work. Adobe notes that it was developed through an extensive private beta with hundreds of working editors, and that participants reported they “actually enjoy color grading” — a sentiment suggesting Adobe may have found a way to democratize one of post-production’s most intimidating disciplines. General availability is expected later in 2026.

The Firefly Video Editor gains audio upgrades including the Enhance Speech feature migrated from Premiere and Adobe Podcast, direct Adobe Stock integration with access to more than 800 million licensed assets, and simple color adjustment controls with intuitive sliders and one-click looks. On the image editing front, Adobe introduced Precision Flow, which generates a range of semantic variations from a single prompt and lets users browse them via an interactive slider — a novel approach that Costin described as “the best slider-based control mixed with the best semantic understanding of not only the existing scene, but what the scene could be.” AI Markup complements this by letting users draw directly on images to specify where and how edits should be applied. After Effects 26.2 adds an AI-powered Object Matte tool that dramatically accelerates rotoscoping and masking — create accurate mattes of moving subjects with a hover and click, refine with a Quick Selection brush, and perfect edges with a Refine Edge tool.

Frame.io Drive wants to kill the shipped hard drive and make cloud media feel local

Rounding out the announcements, Frame.io Drive addresses one of the most persistent pain points in distributed video production: getting media from point A to point B without losing hours — or days — to downloads, syncing, and shipped hard drives. Frame.io Drive is a desktop application that mounts Frame.io projects to a user’s computer so media appears in Finder or Explorer and behaves like local files. The underlying technology, called Frame.io Mounted Storage, streams media on demand as applications request it, while local caching ensures smooth playback. The product builds on streaming technology provided by Suite Studios, and the real-time file access capability is included with every Frame.io account. Adobe emphasized that all content lives solely within Frame.io and is never shared with third parties.

Advertisement

The move positions Frame.io not just as a review-and-approval tool at the end of the production pipeline but as the central media layer from the very beginning of a project — from first capture through final delivery. If successful, the strategy could significantly deepen Adobe’s lock-in with professional video teams by making Frame.io the single source of truth for distributed productions. Frame.io Drive and Mounted Storage will roll out in phases, with Enterprise customers gaining access starting today and accounts on other plans following shortly. Others can join a waitlist.

Adobe’s biggest challenge isn’t building the AI — it’s convincing creators to trust it

Taken together, today’s announcements paint a picture of a company executing aggressively across multiple fronts — but also one that is navigating a complex moment. Adobe first introduced Firefly in March 2023 as a family of generative AI models focused on image and text effects, with a strong emphasis on commercial safety through training on licensed Adobe Stock content. In the two years since, the company has rapidly expanded into video generation, multi-model access, and now agentic workflows — a trajectory that mirrors the broader industry’s shift from standalone AI features to AI-native systems.

But the competitive field has grown dramatically. Runway, Pika, and a host of AI-native video generation startups have captured mindshare among creators. Canva has aggressively integrated AI into its design platform. And the emergence of powerful foundation models from OpenAI, Google, and Anthropic — the latter of which Adobe says it will integrate with Firefly AI Assistant capabilities — means the barrier to building creative AI tools has never been lower. Adobe is also navigating these product ambitions against a complex corporate backdrop: the impending departure of CEO Shantanu Narayen, an actively exploited zero-day vulnerability in Acrobat Reader (CVE-2026-34621) that had been used by hackers for months before being patched this week, a U.K. antitrust investigation over cancellation fees, and a recent $75 million lawsuit settlement.

Adobe’s response, articulated clearly through today’s launches, is to lean into what it believes is its deepest moat: the integration of AI into a set of professional-grade, category-leading applications that no startup can replicate overnight. Costin framed the agentic transition as empowering rather than threatening to creative professionals, comparing Creative Skills to a next-generation version of Photoshop Actions — the macro-recording feature that has long allowed power users to automate repetitive tasks. “We want to help our customers become — from the ones doing all the work — to be creative directors, doing some of the work, but most importantly, guiding the assistant in executing some of those creative visions,” he said.

Advertisement

It is a compelling pitch — and, in its own way, a revealing one. For three decades, Adobe made its fortune by selling the tools that turned creative vision into finished pixels. Now it is asking its customers to let an AI agent handle more of that translation, trusting that the human role will shift from operating the tools to directing the outcome. Whether creators embrace that bargain — and whether Wall Street rewards it — will determine not just Adobe’s trajectory but the shape of an entire industry learning to create alongside machines.

Source link

Continue Reading
Click to comment

You must be logged in to post a comment Login

Leave a Reply

Tech

Amazon has an easy to way reduce your monthly streaming bills

Published

on

Have you looked at how much your streaming subscriptions are costing you each month and wondered whether there is a smarter way to keep everything you actually watch without paying full price for all of it?

The answer here is Amazon’s Apple TV and Peacock Premium Plus bundle, now available through Prime Video for $19.99 per month against a combined standard cost of $29.98, a saving of over 33%.

Prime video logo on an orange backgroundPrime video logo on an orange background

This Amazon Prime bundle knocks 33% off Apple TV and Peacock Premium Plus, making it an easy way to reduce your streaming bills.

Have you looked at your streaming subscriptions and wondered if there was a better way to keep everything without paying full price?

Advertisement

View Deal

The bundle brings Apple TV’s original programming alongside Peacock’s live sports, NBC shows, and Universal movies into a single subscription managed through your existing Prime Video account and payment method.

Advertisement

On the Apple TV side, that means ad-free access to originals, including Severance, Shrinking, The Studio, and the upcoming fourth season of Ted Lasso, alongside live sports such as Formula 1 and Friday Night Baseball.

Peacock Premium Plus adds NFL Sunday Night Football, Premier League, NBA, and Major League Baseball coverage, plus NBC series like the One Chicago franchise and Law and Order, Bravo content, and Peacock Originals, including The Traitors.

Advertisement

The Whatsapp LogoThe Whatsapp Logo

Get Updates Straight to Your WhatsApp

Advertisement

Join Now

Both services are ad-free within this bundle, with Apple TV offering that experience across its originals and Peacock Premium Plus covering virtually all on-demand content, which is a meaningful upgrade over Peacock’s standard tier.

Advertisement

Everything streams through the Prime Video app on whatever device you already use, from Fire TV and smart TVs to phones, tablets, and games consoles, with no separate apps or logins required for either service.

To add it, open the Prime Video app or head to the Prime Video website, navigate to the subscriptions section, select the Apple TV and Peacock Premium Plus bundle, and complete the sign-up using your existing Amazon account details.

The bundle is available for a limited time, so it is worth acting on sooner rather than later if the combined sports and drama lineup covers enough of what you watch to justify consolidating two separate bills into one lower monthly payment.

Advertisement

Advertisement

SQUIRREL_PLAYLIST_10148964

Source link

Continue Reading

Tech

Is this the tipping point for AI at work? New Gallup survey finds half of all US employees now use it in some way

Published

on

Half of American workers now say they use some form of AI technology in their role, pushing the number over the critical point for the first time.

New Gallup research found 50% of employees now reported using AI tools at work in some capacity, a rise of 4% from the previous quarter, and up 21% from the same period just three years ago.

Source link

Continue Reading

Tech

INNOCN’s 27″ QD-OLED 2K Display Brings Sharp Detail and Fluid Motion to More Desks with 280Hz Refresh Rate

Published

on

INNOCN 27-inch QD-OLED GA27S1Q Monitor
Gamers who are constantly on the lookout for a new screen will notice when a model comes up that provides excellent visuals at a reasonable price. The INNOCN 27″ QD-OLED 2K (model GA27S1Q) is a prime example, priced at $399.98 (was $450). Once out of the box, the stand snaps into place without the need for tools, and you have full movement in all directions, including height, tilt, swivel, and pivot. So, if you’re the type of person who enjoys switching between working at a desk and gaming on the sofa, you can find the perfect angle.



The images on-screen are noticeably vibrant right away, with black areas remaining deep / dark rather than washing out to gray, bringing the highlights and colors to life in each scene. The panel also covers almost all of the colors required for modern games and media, so reds and greens appear nice and vibrant with no dull areas, and the animation remains very clean even when things get really fast. With a 280Hz refresh rate that can reach 280 frames per second and a response time measured in thousandths of a second, fast-moving objects maintain sharp edges and prevent blurring that occurs on slower panels. If you’re a die-hard gamer, you’ll notice the difference in quick turns and abrupt adversary movements, whereas casual sessions simply feel more responsive overall.

Sale


INNOCN 27″ QD-OLED 2K QHD 2560 x 1440P 280Hz 240Hz PC Computer Gaming Console Monitor, G-Sync Compatible…
  • Experience Ultimate Gaming Visual Clarity: This 27-inch QD-OLED gaming monitor delivers stunning 1440p resolution with perfect blacks and vibrant…
  • Dominate with Blur-Free 280Hz Speed: Gain the competitive edge with a blistering 280Hz refresh rate and near-instantaneous 0.03ms response time. Enjoy…
  • Next-Gen QD-OLED Visual Fidelity: Witness breathtaking contrast and rich colors powered by QD-OLED technology. Enjoy immersive PC gaming and HDR…


Connections are rather comprehensive for a monitor at this price point, with two HDMI 2.1 connections capable of handling consoles and newer graphics cards at full speed, as well as a pair of DisplayPort 1.4 inputs for further versatility if you have a desktop system. The built-in speakers will suffice for brief checks and the odd thing, but most people prefer to plug in headphones for better sound during extended playback.

Advertisement

INNOCN 27-inch QD-OLED GA27S1Q Monitor
This monitor has features for both comfort and lifespan. It has low blue light and flicker-free settings to lessen eye strain if you stare at it for an extended period of time, as well as some useful routines that look for static images and adjust brightness to prevent permanent markings from appearing. Don’t worry about the power drain; it’s rather low at roughly 65 watts, so the monitor runs cool and won’t put too much burden on your outlet.

INNOCN 27-inch QD-OLED GA27S1Q Monitor
One other advantage is that its slim bezels keep your focus on the image, and the rear has some modest illumination that adds a stylish touch to your setup without drawing too much attention. Overall, for anyone looking at monitors of this size and resolution, this one demonstrates that you don’t have to trade quality for a reasonable price. Give it a few hours, and you’ll see why the word is spreading so quickly, as the mix of crystal-clear visuals and seamless pace makes it a true winner.

Source link

Continue Reading

Tech

From RSA to Lattices: The Quantum Safe Crypto Shift

Published

on

The race to transition online security protocols to ones that can’t be cracked by a quantum computer is already on. The algorithms that are commonly used today to protect data online—RSA and elliptic curve cryptography—are uncrackable by supercomputers, but a large enough quantum computer would make quick work of them. There are algorithms secure enough to be out of reach for both classical and future quantum machines, called post-quantum cryptography, but transitioning to these is a work in progress.

Late last month, the team at Google Quantum AI published a whitepaper that added significant urgency to this race. In it, the team showed that the size of a quantum computer that would pose a cryptographic threat is approximately twenty times smaller than previously thought. This is still far from accessible to the quantum computers that exist today: the largest machines currently consist of approximately 1,000 quantum bits, or qubits, and the whitepaper estimated that about 500 times as much is needed. Nonetheless, this shortens the timeline to switch over to post-quantum algorithms.

The news had a surprising beneficiary: obscure cryptocurrency Algorand jumped 44% in price in response. The whitepaper called out Algorand specifically for implementing post-quantum cryptography on their blockchain. We caught up with Algorand’s chief scientific officer and professor of computer science and engineering at the University of Michigan, Chris Peikert, to understand how this announcement is impacting cryptography, why cryptocurrencies are feeling the effects, and what the future might hold. Peikert’s early work on a particular type of algorithm known as lattice cryptography underlies most post-quantum security today.

IEEE Spectrum: What is the significance of this Google Quantum AI whitepaper?

Advertisement

Peikert: The upshot of this paper is that it shows that a quantum computer would be able to break some of the cryptography that is most widely used, especially in blockchains and cryptocurrencies, with much, much fewer resources than had previously been established. Those resources include the time that it would take to do so and the number of qubits (or quantum bits) that it would have to use.

This cryptography is very central to not just cryptocurrencies but more broadly, to cryptography on the internet. It is also used for secure web connections between web browsers and web servers. Versions of elliptic curve cryptography are used in national security systems and military encryption. It’s very prevalent and pervasive in all modern networks and protocols.

And not only was this paper improving the algorithms, but there was also a concurrent paper showing that the hardware itself was substantially improved. The claim here was that the number of physical qubits needed to achieve a certain kind of logical qubit was also greatly reduced. These two kinds of improvements are compounding upon each other. It’s a kind of a win-win situation from the quantum computing perspective, but a lose-lose situation for cryptography.

IEEE Spectrum: What do Google AI’s findings mean for cryptocurrencies and the broader cybersecurity ecosystem?

Advertisement

Peikert: There’s always been this looming threat in the distance of quantum computers breaking a large fraction of the cryptography that’s used throughout the cryptocurrency ecosystem. And I think what this paper did was really the loudest alarm yet that these kinds of quantum attacks might not be as far off as some have suspected, or hoped, in recent years. It’s caused a re-evaluation across the industry, and a moving up of the timeline for when quantum computers might be capable of breaking this cryptography.

When we think about the timelines and when it’s important to have completed these transitions [to post-quantum cryptography], we also need to factor in the unknown improvements that we should expect to see in the coming years. The science of quantum computing will not stay static, and there will be these further breakthroughs. We can’t say exactly what they will be or when they will come, but you can bet that they will be coming.

IEEE Spectrum: What is your guess on if or when quantum computers will be able to break cryptography in the real world?

Peikert: Instead of thinking about a specific date when we expect them to come, we have to think about the probabilities and the risks as time goes on. There have been huge breakthrough developments, including not only this paper, but also some last year. But even with these, I think that the chance of a cryptographic attack by quantum computers being successful in the next three years is extremely low, maybe less than a percent. But then, as you get out to several years, like 5, 6, or 10 years, one has to seriously consider a probability, maybe 5% or 10% or more. So it’s still rather small, but significant enough that we have to worry about the risk, because the value that is protected by this kind of cryptography is really enormous.

Advertisement

The US government has put 2035 as its target for migrating all of the national security systems to post quantum cryptography. That seems like a prudent date, given the timelines that it takes to upgrade cryptography. It’s a slow process. It has to be done very deliberately and carefully to make sure that you’re not introducing new vulnerabilities, that you’re not making mistakes, that everything still works properly. So, you know, given the outlook for quantum computers on the horizon, it’s really important that we prepare now, or ideally, yesterday, or a few years ago, for that kind of transition.

IEEE Spectrum: Are there significant roadblocks you see to industrial adoption of post-quantum cryptography going forward?

Peikert: Cryptography is very hard to change. We’ve only had one or maybe two major transitions in cryptography since the early 1980s or late 1970s when the field first was invented. We don’t really have a systematic way of transitioning cryptography.

An additional challenge is that the performance tradeoffs are very different in post-quantum cryptography than they are in the legacy systems. Keys and cipher texts and digital signatures are all significantly larger in post-quantum cryptography, but the computations are actually faster, typically. People have optimized cryptography for speed in the past, and we have very good fast speeds now for post-quantum cryptography, but the sizes of the keys are a challenge.

Advertisement

Especially in blockchain applications, like cryptocurrencies, space on the blockchain is at a premium. So it calls for a reevaluation in many applications of how we integrate the cryptography into the system, and that work is ongoing. And, the blockchain ecosystem uses a lot of advanced cryptography, exotic things like zero-knowledge proofs. In many cases, we have rudimentary constructions of these fancy cryptography tools from post-quantum type mathematics, but they’re not nearly as mature and industry ready as the legacy systems that have been deployed. It continues to be an important technical challenge to develop post-quantum versions of these very fancy cryptographic schemes that are used in cutting edge applications.

IEEE Spectrum: As an academic cryptography researcher, what attracted you to work with a cryptocurrency, and Algorand in particular?

Peikert: My former PhD advisor is Silvio Micali, the inventor of Algorand. The system is very elegant. It is a very high performing blockchain system and it uses very little energy, has fast transaction finalization, and a number of other great features. And Silvio appreciated that this quantum threat was real and was coming, and the team approached me about helping to improve the Algorand protocol at the basic levels to become more post-quantum secure in 2021. That was a very exciting opportunity, because it was a difficult engineering and scientific challenge to integrate post-quantum cryptography into all the different technical and cryptographic mechanisms that were underlying the protocol.

IEEE Spectrum: What is the current status of post-quantum cryptography in Algorand, and blockchains in general?

Advertisement

Peikert: We’ve identified some of the most pressing issues and worked our way through some of them, but it’s a many-faceted problem overall. We started with the integrity of the chain itself, which is the transaction history that everybody has to agree upon.

Our first major project was developing a system that would add post-quantum security to the history of the chain. We developed a system called state proofs for that, which is a mixture of ordinary post-quantum cryptography and also some more fancy cryptography: It’s a way of taking a large number of signatures and digesting them down into a much smaller number of signatures, while still being confident that these large number of signatures actually exist and are properly formed. We also followed it with other papers and projects that are about adding post-quantum cryptography and security to other aspects of the blockchain in the Algorand ecosystem.

It’s not a complete project yet. We don’t claim to be fully post-quantum secure. That’s a very challenging target to hit, and there are aspects that we will continue to work on into the near future.

IEEE Spectrum: In your view, will we adopt post-quantum cryptography before the risks actually catch up with us?

Advertisement

Peikert: I tend to be an optimist about these things. I think that it’s a very good thing that more people in decision making roles are recognizing that this is an important topic, and that these kinds of migrations have to be done. I think that we can’t be complacent about it, and we can’t kick the can down the road much longer. But I do see that the focus is being put on this important problem, so I’m optimistic that most important systems will eventually have good either mitigations or full migrations in place.

But it’s also a point on the horizon that we don’t know exactly when it will come. So, there is the possibility that there is a huge breakthrough, and we have many fewer years than we might have hoped for, and that we don’t get all the systems upgraded that we would like to have fixed by the time quantum computers arrive.

From Your Site Articles

Related Articles Around the Web

Advertisement

Source link

Continue Reading

Tech

Microsoft patched a Copilot Studio prompt injection. The data exfiltrated anyway.

Published

on

Microsoft assigned CVE-2026-21520, a CVSS 7.5 indirect prompt injection vulnerability, to Copilot Studio. Capsule Security discovered the flaw, coordinated disclosure with Microsoft, and the patch was deployed on January 15. Public disclosure went live on Wednesday.

That CVE matters less for what it fixes and more for what it signals. Capsule’s research calls Microsoft’s decision to assign a CVE to a prompt injection vulnerability in an agentic platform “highly unusual.” Microsoft previously assigned CVE-2025-32711 (CVSS 9.3) to EchoLeak, a prompt injection in M365 Copilot patched in June 2025, but that targeted a productivity assistant, not an agent-building platform. If the precedent extends to agentic systems broadly, every enterprise running agents inherits a new vulnerability class to track. Except that this class cannot be fully eliminated by patches alone.

Capsule also discovered what they call PipeLeak, a parallel indirect prompt injection vulnerability in Salesforce Agentforce. Microsoft patched and assigned a CVE. Salesforce has not assigned a CVE or issued a public advisory for PipeLeak as of publication, according to Capsule’s research.

What ShareLeak actually does

The vulnerability that the researchers named ShareLeak exploits the gap between a SharePoint form submission and the Copilot Studio agent’s context window. An attacker fills a public-facing comment field with a crafted payload that injects a fake system role message. In Capsule’s testing, Copilot Studio concatenated the malicious input directly with the agent’s system instructions with no input sanitization between the form and the model.

Advertisement

The injected payload overrode the agent’s original instructions in Capsule’s proof-of-concept, directing it to query connected SharePoint Lists for customer data and send that data via Outlook to an attacker-controlled email address. NVD classifies the attack as low complexity and requires no privileges.

Microsoft’s own safety mechanisms flagged the request as suspicious during Capsule’s testing. The data was exfiltrated anyway. The DLP never fired because the email was routed through a legitimate Outlook action that the system treated as an authorized operation.

Carter Rees, VP of Artificial Intelligence at Reputation, described the architectural failure in an exclusive VentureBeat interview. The LLM cannot inherently distinguish between trusted instructions and untrusted retrieved data, Rees said. It becomes a confused deputy acting on behalf of the attacker. OWASP classifies this pattern as ASI01: Agent Goal Hijack.

The research team behind both discoveries, Capsule Security, found the Copilot Studio vulnerability on November 24, 2025. Microsoft confirmed it on December 5 and patched it on January 15, 2026. Every security director running Copilot Studio agents triggered by SharePoint forms should audit that window for indicators of compromise.

Advertisement

PipeLeak and the Salesforce split

PipeLeak hits the same vulnerability class through a different front door. In Capsule’s testing, a public lead form payload hijacked an Agentforce agent with no authentication required. Capsule found no volume cap on the exfiltrated CRM data, and the employee who triggered the agent received no indication that data had left the building. Salesforce has not assigned a CVE or issued a public advisory specific to PipeLeak as of publication.

Capsule is not the first research team to hit Agentforce with indirect prompt injection. Noma Labs disclosed ForcedLeak (CVSS 9.4) in September 2025, and Salesforce patched that vector by enforcing Trusted URL allowlists. According to Capsule’s research, PipeLeak survives that patch through a different channel: email via the agent’s authorized tool actions.

Naor Paz, CEO of Capsule Security, told VentureBeat the testing hit no exfiltration limit. “We did not get to any limitation,” Paz said. “The agent would just continue to leak all the CRM.”

Salesforce recommended human-in-the-loop as a mitigation. Paz pushed back. “If the human should approve every single operation, it’s not really an agent,” he told VentureBeat. “It’s just a human clicking through the agent’s actions.”

Advertisement

Microsoft patched ShareLeak and assigned a CVE. According to Capsule’s research, Salesforce patched ForcedLeak’s URL path but not the email channel.

Kayne McGladrey, IEEE Senior Member, put it differently in a separate VentureBeat interview. Organizations are cloning human user accounts to agentic systems, McGladrey said, except agents use far more permissions than humans would because of the speed, the scale, and the intent.

The lethal trifecta and why posture management fails

Paz named the structural condition that makes any agent exploitable: access to private data, exposure to untrusted content, and the ability to communicate externally. ShareLeak hits all three. PipeLeak hits all three. Most production agents hit all three because that combination is what makes agents useful.

Rees validated the diagnosis independently. Defense-in-depth predicated on deterministic rules is fundamentally insufficient for agentic systems, Rees told VentureBeat.

Advertisement

Elia Zaitsev, CrowdStrike’s CTO, called the patching mindset itself the vulnerability in a separate VentureBeat exclusive. “People are forgetting about runtime security,” he said. “Let’s patch all the vulnerabilities. Impossible. Somehow always seem to miss something.” Observing actual kinetic actions is a structured, solvable problem, Zaitsev told VentureBeat. Intent is not. CrowdStrike’s Falcon sensor walks the process tree and tracks what agents did, not what they appeared to intend.

Multi-turn crescendo and the coding agent blind spot

Single-shot prompt injections are the entry-level threat. Capsule’s research documented multi-turn crescendo attacks where adversaries distribute payloads across multiple benign-looking turns. Each turn passes inspection. The attack becomes visible only when analyzed as a sequence.

Rees explained why current monitoring misses this. A stateless WAF views each turn in a vacuum and detects no threat, Rees told VentureBeat. It sees requests, not a semantic trajectory.

Capsule also found undisclosed vulnerabilities in coding agent platforms it declined to name, including memory poisoning that persists across sessions and malicious code execution through MCP servers. In one case, a file-level guardrail designed to restrict which files the agent could access was reasoned around by the agent itself, which found an alternate path to the same data. Rees identified the human vector: employees paste proprietary code into public LLMs and view security as friction.

Advertisement

McGladrey cut to the governance failure. “If crime was a technology problem, we would have solved crime a fairly long time ago,” he told VentureBeat. “Cybersecurity risk as a standalone category is a complete fiction.”

The runtime enforcement model

Capsule hooks into vendor-provided agentic execution paths — including Copilot Studio’s security hooks and Claude Code’s pre-tool-use checkpoints — with no proxies, gateways, or SDKs. The company exited stealth on Wednesday, timing its $7 million seed round, led by Lama Partners alongside Forgepoint Capital International, to its coordinated disclosure.

Chris Krebs, the first Director of CISA and a Capsule advisor, put the gap in operational terms. “Legacy tools weren’t built to monitor what happens between prompt and action,” Krebs said. “That’s the runtime gap.”

Capsule’s architecture deploys fine-tuned small language models that evaluate every tool call before execution, an approach Gartner’s market guide calls a “guardian agent.”

Advertisement

Not everyone agrees that intent analysis is the right layer. Zaitsev told VentureBeat during an exclusive interview that intent-based detection is non-deterministic. “Intent analysis will sometimes work. Intent analysis cannot always work,” he said. CrowdStrike bets on observing what the agent actually did rather than what it appeared to intend. Microsoft’s own Copilot Studio documentation provides external security-provider webhooks that can approve or block tool execution, offering a vendor-native control plane alongside third-party options. No single layer closes the gap. Runtime intent analysis, kinetic action monitoring, and foundational controls (least privilege, input sanitization, outbound restrictions, targeted human-in-the-loop) all belong in the stack. SOC teams should map telemetry now: Copilot Studio activity logs plus webhook decisions, CRM audit logs for Agentforce, and EDR process-tree data for coding agents.

Paz described the broader shift. “Intent is the new perimeter,” he told VentureBeat. “The agent in runtime can decide to go rogue on you.”

VentureBeat Prescriptive Matrix

The following matrix maps five vulnerability classes against the controls that miss them, and the specific actions security directors should take this week.

Vulnerability Class

Advertisement

Why Current Controls Miss It

What Runtime Enforcement Does

Suggested actions for security leaders

ShareLeak — Copilot Studio, CVE-2026-21520, CVSS 7.5, patched Jan 15 2026

Advertisement

Capsule’s testing found no input sanitization between the SharePoint form and the agent context. Safety mechanisms flagged, but data still exfiltrated. DLP did not fire because the email used a legitimate Outlook action. OWASP ASI01: Agent Goal Hijack.

Guardian agent hooks into Copilot Studio pre-tool-use security hooks. Vets every tool call before execution. Blocks exfiltration at the action layer.

Audit every Copilot Studio agent triggered by SharePoint forms. Restrict outbound email to org-only domains. Inventory all SharePoint Lists accessible to agents. Review the Nov 24–Jan 15 window for indicators of compromise.

PipeLeak — Agentforce, no CVE assigned

Advertisement

In Capsule’s testing, public form input flowed directly into the agent context. No auth required. No volume cap observed on exfiltrated CRM data. The employee received no indication that data was leaving.

Runtime interception via platform agentic hooks. Pre-invocation checkpoint on every tool call. Detects outbound data transfer to non-approved destinations.

Review all Agentforce automations triggered by public-facing forms. Enable human-in-the-loop for external comms as interim control. Audit CRM data access scope per agent. Pressure Salesforce for CVE assignment.

Multi-Turn Crescendo — distributed payload, each turn looks benign

Advertisement

Stateless monitoring inspects each turn in isolation. WAFs, DLP, and activity logs see individual requests, not semantic trajectory.

Stateful runtime analysis tracks full conversation history across turns. Fine-tuned SLMs evaluate aggregated context. Detects when a cumulative sequence constitutes a policy violation.

Require stateful monitoring for all production agents. Add crescendo attack scenarios to red team exercises.

Coding Agents — unnamed platforms, memory poisoning + code execution

Advertisement

MCP servers inject code and instructions into the agent context. Memory poisoning persists across sessions. Guardrails reasoned around by the agent itself. Shadow AI insiders paste proprietary code into public LLMs.

Pre-invocation checkpoint on every tool call. Fine-tuned SLMs detect anomalous tool usage at runtime.

Inventory all coding agent deployments across engineering. Audit MCP server configs. Restrict code execution permissions. Monitor for shadow installations.

Structural Gap — any agent with private data + untrusted input + external comms

Advertisement

Posture management tells you what should happen. It does not stop what does happen. Agents use far more permissions than humans at far greater speed.

Runtime guardian agent watches every action in real time. Intent-based enforcement replaces signature detection. Leverages vendor agentic hooks, not proxies or gateways.

Classify every agent by lethal trifecta exposure. Treat prompt injection as class-based SaaS risk. Require runtime security for any agent moving to production. Brief the board on agent risk as business risk.

What this means for 2026 security planning

Microsoft’s CVE assignment will either accelerate or fragment how the industry handles agent vulnerabilities. If vendors call them configuration issues, CISOs carry the risk alone.

Advertisement

Treat prompt injection as a class-level SaaS risk rather than individual CVEs. Classify every agent deployment against the lethal trifecta. Require runtime enforcement for anything moving to production. Brief the board on agent risk the way McGladrey framed it: as business risk, because cybersecurity risk as a standalone category stopped being useful the moment agents started operating at machine speed.

Source link

Continue Reading

Tech

Google’s Gemini just gatecrashed Apple’s Mac party, and it beat Siri to the door

Published

on

Google made an unexpected cameo on Macs with the launch of a native Gemini app. What’s even more interesting (and a bit funny) is that the app arrived at Apple’s long-promised Siri upgrade (and a rumored standalone app for the voice assistant). 

The free app is available on macOS 15 and above. Though the app isn’t available on the App Store (yet), you can download it from Google’s official landing page.

What can the Gemini Mac app actually do?

Quite a bit, actually. Once you install the app, you can summon Gemini by pressing Option + Space keys. Doesn’t matter where you are and what you’re doing; using the shortcut opens a quick-access mini chat overlay. Don’t press the wrong key (Command), or you’ll end up invoking the Spotlight search bar

You can open the full Gemini interface by pressing Option + Shift + Space. Further, the app includes built-in tools for generating images and videos, analyzing content on your screen (including documents, spreadsheets, and images), and understanding files. Of course, you can talk to the Gemini AI assistant.

The list of available tools includes Canvas, Deep Research, NotebookLM integration, and Personal Intelligence, which taps into your connected Google apps, including Gmail, Photos, Calendar, etc., to fetch relevant information for you. 

Advertisement

Why does this matter for everyday Mac users?

If you don’t know this already, Gemini is among the last AI services to have launched a dedicated Mac app. Other giants — OpenAI, Anthropic, and Perplexity — have had Mac apps for quite some time. 

For Mac users who’ve been using Gemini in Chrome or Safari, the native app is a welcome upgrade. The powerful, context-aware AI is now one keyboard shortcut away on your Mac. 

By establishing Gemini on macOS now, Google secures mindshare and daily habit formation before Apple can actually flip the switch with the dedicated Siri app later this year

Source link

Advertisement
Continue Reading

Tech

Popular WordPress plugins backdoored after ownership change, putting thousands of websites at risk

Published

on


A popular brand of WordPress plugins was recently weaponized to download and spread malicious code. The new, potentially massive supply chain attack was unveiled by Austin Ginder, a WordPress developer and founder of the WP hosting service Anchor. The entrepreneur found that the threat was already affecting some Anchor customers,…
Read Entire Article
Source link

Continue Reading

Tech

Apple users are getting scary iCloud deletion emails, and the real danger starts when you click the fake upgrade link

Published

on


  • Fake iCloud deletion emails are pressuring Apple users into dangerous clicks
  • Poor grammar in iCloud alerts remains a clear sign of fraud
  • Clicking fake iCloud upgrade links can expose banking and personal data

A wave of deceptive emails is attempting to pressure Apple users into believing their iCloud data is at immediate risk of deletion, using increasingly aggressive language to force quick reactions.

The messages often claim a user’s storage limits have been exceeded or that an account has been blocked, followed by threats that photos and videos will be permanently erased on a specified date.

Source link

Advertisement
Continue Reading

Tech

ACAB: Cops Are Bringing ‘Delinquency Of A Minor’ Charges Against Adults Who Assist Students During Anti-ICE Protests

Published

on

While the Trump administration’s extremely aggressive, thoroughly bigoted attempts to eliminate as many non-white people from this country as possible have resulted in some periodic push back from law enforcement officials, we can never forget that federal law enforcement officers are still just law enforcement officers. And, more often than not, they’ll always have the support of their brothers in blue, even though most federal officers prefer camo and face masks these days.

Law enforcement is self-selecting. The people who feel drawn to law enforcement are generally the last people you would want to become law enforcement officers. It’s rarely about being given the chance to serve, protect, and be an active part of your community. It’s almost always about having a badge, a gun, and accountability that’s inversely proportional to the amount of power you immediately obtain.

So, it comes as no surprise that cops who shouldn’t have any skin in the anti-ICE game are stepping up to punish people for daring to criticize the actions of those federal officers. And there’s probably a bit of backlash involved here as well, as this following report details the actions of California law enforcement officers who (one assumes) aren’t thrilled the state’s residents have managed to reclaim much of the power that has always been owed to the people.

Despite the administration’s on/off surges in “blue” states, the furor over ICE and its actions hasn’t died down, not even in California, where the administration rolled out its martial law beta test. At first, it was easy to pretend people protesting ICE were “woke radicals” or “antifa” or “paid organizers” or “lazy trans everywhere college students” or whatever. But it just kept going and expanding, clearly demonstrating a significant portion of the population wasn’t on board with roving kidnapping squads and murders of activists by jumpy recruits recently introduced to the wholly domestic War on Migrants.

Advertisement

Now that it’s everyone rather than just the usual left-wing agitprop cliches federal and local officers expected to confront during protests, cops in California are deciding it’s time to start arresting everyone.

The Clovis Police Department on Tuesday referred Alfred Aldrete, 41, for one count of contributing to the delinquency of a minor for his role in a February high school student walkout. 

“During the investigation, Aldrete was identified as being present during the walkout and allegedly involved in directing student activity and entering the roadway, which impacted traffic flow,” Clovis police said in a press release. “Investigators also identified Aldrete as being present during a separate student gathering in Clovis on Feb. 5 that occurred outside of school hours.”

Yep, that’s what the Clovis PD actually did: it equated an adult ensuring students made it to their planned protest safely with the sort of horrors — harboring runaways, providing drugs and alcohol to minors, etc. — people usually associate with the crime of “contributing to the delinquency of a minor.” Those would be the sorts of crimes actually prosecuted by county prosecutors under this statute.

This stat may explain why the Clovis PD thought it should explore the fringes of this statute for the sole purpose of punishing someone for speech they (and they people they serve, apparently) don’t care for:

Advertisement

[C]lovis, population 128,000, where Donald Trump won every precinct in the 2024 presidential election — some with more than 70% of the vote. 

That tracks. Fortunately, it doesn’t track as far as the District Attorney’s office:

A representative for Fresno County District Attorney Lisa Smittcamp in a written statement said prosecutors would not file charges against Aldrete.

Hooray for prosecutorial discretion, but in the non-pejorative sense! It’s an unexpected twist that only makes this further twist even more inexplicable:

Within a day of the walkout, Clovis police said they were considering charges against up to six adults under Section 272 of the California Penal Code, which is most often used to prevent chronic truancy. The Los Angeles Police Department has also said it’s considering charges against people who joined immigration-related protests under the same penal code section. 

At the beginning of Trump’s first martial law-esque surge, the LAPD (and the Los Angeles Sheriffs Department) were opposed to the insertion of National Guard units and other federal officers into the mix. Stating that they were capable of handling whatever minimal “violent protests” they had actually encountered, law enforcement officials made it clear that this federal interloping would only make a manageable problem unmanageable.

More than a year later, the LAPD has flipped the script from blue to red, declaring it’s willing to charge students for truancy (along with the adults who assist them) for participating in walkout that, at best, lasts a few hours. It’s not like these kids are quitting school to pursue a career in protesting. And it’s not like these adults are harming kids by helping them engage fully with their First Amendment rights.

Advertisement

It’s one thing to be the main characters in a pro-Trump town. It’s quite another to be part of the second-largest police force in the United States and decide it’s worth your time, money, and attention to punish people for peacefully protesting. Fuck right off, LAPD. And take the Clovis PD with you.

Filed Under: 1st amendment, acab, alfred aldrete, california, clovis pd, free speech, ice, lapd, mass deportation

Source link

Advertisement
Continue Reading

Tech

Snap cuts 16pc workforce to prioritise AI and savings

Published

on

AI advancements allow workers to reduce repetitive work and ‘increase velocity’, Spiegel said.

Snap is laying off 16pc of its workforce to cut costs and veer towards long-term profitability. The Snapchat parent company is cutting around 1,000 employees, including 300 open roles.

In a memo sent to employees today (15 April), company CEO Evan Spiegel said that Snap is prioritising investments with the potential for long-term growth. He said that AI advancements allow workers to reduce repetitive work and “increase velocity”.

The layoffs are expected to reduce the company’s annual costs by more than $500m by the second half of the year, according to Spiegel. Snap shares rose more than 7.75pc in pre-market trading, but have overall been down nearly 30pc since last year.

Advertisement

Snapchat, alongside other major social media platforms, has been under regulatory scrutiny over the past few years over issues surrounding child safety and access to content. The platform has been banned for those under 16 in Australia.

Snap last laid off 500 jobs in 2024. At the time, the company said that the layoffs would “reduce hierarchy and promote in-person collaboration”. Two years prior, it cut around 20pc of the company to improve business performance.

Spiegel is the latest in a growing list of company leaders linking layoffs to AI advancements. In his memo, he said small teams leveraging AI tools have already had a positive impact on Snap’s ad platform performance.

In February, Jack Dorsey cut 4,000 jobs at Block in preference for AI tools and flatter teams. Since then, Atlassian cut 10pc of its workforce, Meta laid off several hundred, and Oracle cut thousands, reportedly over AI.

Advertisement

Dorsey, at the time, said that a “majority of companies” will reach similar conclusions around smaller teams, and make similar structural changes “within the next year”.

Journalist Alex Heath, meanwhile, has reported that Snap’s $400m deal with Perplexity has also been axed.

Announced last November, the deal would have seen Perplexity deploy its conversational search tool into Snapchat. The one-year partnership was expected to rebrand Snapchat into a platform where AI companies could connect with the platform’s community.

Don’t miss out on the knowledge you need to succeed. Sign up for the Daily Brief, Silicon Republic’s digest of need-to-know sci-tech news.

Advertisement

Source link

Continue Reading

Trending

Copyright © 2025