Unpacking Adobe’s Approach to AI Development
Adobe has garnered a strong global reputation as a force for innovation across many tech and design disciplines. From being recognised for producing some of the highest-performing design tools for businesses to pioneering the PDF file format and establishing Acrobat as the industry's gold standard in PDF editing, the influence and impact that Adobe has had in the information age have been immense.
That influence is continuing into the era of AI, as Adobe makes strides with its suite of Firefly generative AI tools. Encompassing some of the most sought-after generative AI offerings, including a text-to-image generator and even an AI-powered video translator, Adobe Firefly delivers versatile outputs all within Adobe's familiar UI.
All of this positions Adobe as a leading provider of generative AI tools, and may even see Firefly become, much like Acrobat, a gold standard in generative AI software.
But of course, quality AI tools aren't defined by their UI alone – they're defined by their reliability and quality assurance. From Adobe to Anthropic and OpenAI, issues like AI hallucinations remain a foremost concern – so what is Adobe doing to combat these risks and ensure Firefly is safe for commercial applications? Let's take a closer look at what's powering Adobe Firefly's ethical AI tools.
What is ‘commercially safe AI’? – Defining AI ethics
Let's start with a definition of 'commercially safe' AI. According to Adobe, its approach to ensuring the commercial safety of final Firefly outputs includes:
- Never training Firefly models on customer content or unvetted open web data
- Limiting asset harvesting to internally managed Adobe Stock libraries and public domain data to mitigate copyright risks
This approach ensures that Firefly's output is not only consistent in quality, but also free from infringement of copyright or intellectual property rights. It means that Adobe Firefly users retain full ownership rights over any assets they generate using Firefly tools.
For enterprise users and SMBs, the reassurance of full ownership rights over AI-generated assets naturally makes integrating AI tools like Firefly into creative workflows safer and more commercially viable. So it can be argued that commercial safety in AI goes hand-in-hand with copyright sensitivities.
What is AI ethics?
AI ethics outlines theories and practices for the responsible use of AI tools across creative, commercial, and even public sector applications (e.g. using AI for legislative purposes). Some key factors in AI ethics frameworks include:
- Accurate, unbiased, and culturally sensitive AI outputs
- AI authorship and transparency
- Accountability for AI outputs (i.e. responsibilities of creators and developers)
- Social and environmental impact management
Engaging with AI ethics frameworks is essential for implementing AI governance policies across both the public and private sectors. In enterprise environments, maintaining robust AI policies is becoming integral to risk management processes. Across the public sector, the race to implement AI legislation for citizens and AI regulations for developers and tech sector enterprises is happening all around us.
There is a genuine economic advantage to staying ahead in the global race for AI readiness. And whilst Australia could be performing more strongly when compared to the US, China, and India, our tech sector is still well-positioned to innovate in the sphere of AI governance and make its own contributions to the foundational realm of AI ethics. In this regard, partnerships with enterprise innovators like Adobe can help policymakers just as much as they can help corporations and entrepreneurs get ahead in their markets.
Key components in Adobe’s ethical AI strategy
So what exactly is Adobe doing that's so groundbreaking and worth paying attention to? In truth, Adobe's investment in AI ethics operates at a whole-of-system level. From the earliest stages of AI feature development to real global investments in AI governance initiatives, Adobe is rapidly establishing itself as a leading voice in commercially safe AI.
Here’s a closer look into the strategy they’re using to get there.
Copyright safety = commercial safety
As Firefly is never trained on private user data nor on any privately owned IP, Firefly users retain full ownership rights over all the outputs they generate using Adobe’s suite of tools. This means that brands using Adobe Firefly Foundry can generate asset libraries that are 100% owned by their business and ready to use across everything from annual reports to social media ads.
This is trickier to achieve for many of Firefly's major competitors in the generative AI space, largely because other developers rarely have access to privately managed training data at this scale. Adobe's asset catalogue, by contrast, is huge, thanks in part to the Adobe Stock library. Other generative AI developers may find themselves filling gaps in their own catalogues with public domain data, or even turning to open web data – and this is where copyright infringement and quality control risks come into play, as assets derived from the open web are unlikely to be consistently accurate or unbiased.
Content Credentials for AI transparency
Adobe is also investing heavily in metadata for tracking assets generated by Firefly. This is in direct response to growing concerns from AI regulatory bodies worldwide about rampant AI use turning the entire internet into an 'AI slop factory'.
Commentary online is skewing more sceptical and distrustful, even towards reputable publishers like local news outlets. And whilst it's true that older generations struggle to build AI literacy, these skills don't come easily or organically to digital natives either.
The solution is clear: ensuring all AI-generated content is readily identifiable. This is where Adobe’s Content Credentials come into play.
Content Credentials operate as a labelling method for assets generated using Firefly, or using integrated Firefly features across the wider Adobe Creative Cloud suite. Like traditional metadata, they can record when content was created. Adobe takes metadata a step further, however, by also using Content Credentials to signal that content has been AI-generated, and to reference which tool or platform was used to generate it.
Content Credentials are a groundbreaking innovation in the realm of AI transparency, helping ensure that Adobe-generated assets are used responsibly and that creators remain accountable for the responsible use of their own AI outputs.
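To make the idea concrete, here is a minimal Python sketch of how a provenance record in this spirit might work in principle. The field names and structure below are illustrative assumptions only – real Content Credentials are cryptographically signed C2PA manifests embedded in the asset file itself, not a simple JSON dictionary.

```python
import hashlib
import json
from datetime import datetime, timezone

def make_content_credential(asset_bytes, tool, ai_generated):
    """Build a simplified, C2PA-style provenance record for an asset.

    Illustrative structure only, not Adobe's actual manifest format.
    """
    return {
        "claim_generator": tool,  # e.g. the app that produced the asset
        "created": datetime.now(timezone.utc).isoformat(),
        "digital_source_type": (
            "trainedAlgorithmicMedia" if ai_generated else "digitalCapture"
        ),
        # The hash binds the record to this exact asset;
        # any later edit to the bytes breaks the match.
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
    }

def credential_matches(asset_bytes, credential):
    """Check the asset still matches the hash recorded in its credential."""
    return hashlib.sha256(asset_bytes).hexdigest() == credential["asset_sha256"]

asset = b"\x89PNG...stand-in image bytes"
cred = make_content_credential(asset, "Adobe Firefly", ai_generated=True)
print(json.dumps(cred, indent=2))
print(credential_matches(asset, cred))            # unmodified asset: True
print(credential_matches(asset + b"edit", cred))  # altered asset: False
```

The key design point this sketch captures is that provenance metadata must be tamper-evident: the record is only trustworthy if it can be checked against the asset itself, which is why the real standard pairs content hashes with cryptographic signatures.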
Founding of the Content Authenticity Initiative (CAI)
At a governance level, Adobe has also done a great service for policymakers and NGOs working in the AI regulatory space by founding the Content Authenticity Initiative (CAI). A collective comprising over 3300 members that include global tech corporations, media entities, universities and colleges, NGOs, and government agencies from all over the world, the CAI is committed to spearheading AI policy development and facilitating the sustainable adoption of AI tools into the systems we live by.
The CAI partners with enterprises, government agencies, and standards bodies like the Coalition for Content Provenance and Authenticity (C2PA) to ensure AI ethics frameworks are considered in the foundations of AI industry regulations. As national and international AI regulations are foundational to long-term AI integration, the work being pioneered by the CAI and the C2PA is helping build a more sustainable AI-first future.
Impact assessment procedures for all AI features
Speaking of sustainability, there have been many anti-AI voices in the media in recent years, and they're not all averse to AI tech for the same reasons. Safe water access and energy consumption continue to be major talking points in the anti-AI space, but thought leaders like Bill Gates touch on additional concerns, like the risk of AI being used in bioterrorism or contributing to global conflicts (e.g. via economic pressures from job market disruption, the perpetuation of harmful biases and stereotypes, etc.).
For Adobe, social and environmental impact reporting is fundamental to quality assurance across Firefly's ethical AI tools. So before rolling out any new Firefly features, Adobe engages in thorough ethics impact assessments managed by a dedicated internal ethics review board. The board is designed for diversity, ensuring any potential risks across demographics and markets are caught well before a feature is launched for public use.
User feedback and other third-party review processes
Alongside these pre-launch review processes, Adobe also maintains customer feedback mechanisms across the entire suite of Creative Cloud tools, ensuring that Firefly features – both within the Firefly platform and across integrated platforms – maintain access to diagnostics reporting and feedback channels.
Adobe moderators are online around the clock globally to manage alerts for any potential AI ethics issues reported by Firefly users. These teams aren't deployed just for brand management, but to uphold the ethical AI values that are integral to Adobe's operations today, across R&D as well as through the CAI and other ethical partnerships and initiatives.
Is Adobe a global leader in AI innovation?
Adobe’s genuine, multifaceted commitment to ethical AI innovation is making the tech giant a vital asset in the pathway to AI policymaking. For enterprise Adobe users, the investment in Firefly’s guaranteed ‘commercially safe’ AI also positions Adobe as a safer investment in AI transformation and business growth strategies.
Is Adobe a perfect AI innovator in its own right? Not at all, but AI development is proving to be just as iterative a process as typing an art prompt into an LLM. And the faster we can arrive at the right iteration for AI ethics and policies, the better off we'll all be as a digital global society.