Last week, I was in North Greenwich, London, for Adobe Max, the company's flagship creative conference. Full disclosure: Adobe covered my travel and hotel costs, as is common for journalists attending such events. Despite the hospitality, I've been critical of Adobe in the past, and I made it clear that my opinions weren't likely to change.
One of the key reasons for my ongoing skepticism is Adobe's Creative Cloud subscription model, which remains prohibitively expensive for many users. The company has also been heavily promoting its generative AI model, Firefly, and many creatives are wary of generative AI, which has fed a broader sense of distrust towards Adobe's current direction.
Adobe’s Generative AI Strategy: Ethical Concerns and the Promise of the Content Authenticity Tool
Adobe’s approach to generative AI differs from that of other companies in that it claims to train its AI only on Adobe Stock content. While this seems like a responsible policy, it still leaves some wondering whether the move is truly for the benefit of creators or just a strategy to increase Adobe's influence in the AI space. For many, it may seem like a small gesture that doesn’t address the deeper concerns surrounding the use of AI in creative work.

Adobe Faces Scrutiny Over AI Strategy but Gains Ground with Content Authenticity Protections
Despite my doubts, Adobe’s introduction of the Content Authenticity app at Adobe Max made me reconsider my perspective. This new tool, which is currently in public beta, allows creatives to embed “Content Credentials” into their work. These credentials act like a digital watermark, providing proof of authorship and ensuring that ownership is clear even if an image is copied or altered.
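To make the watermark analogy concrete, here is a minimal, purely illustrative Python sketch of the general idea: a claim about authorship is bound to a hash of the image and cryptographically signed, so a later check can tell whether the claim still matches the file. The field names, helper functions, and the shared-key HMAC standing in for a real certificate-based signature are all my own assumptions for illustration, not Adobe's actual format.

```python
import hashlib
import hmac
import json

# Purely illustrative: NOT Adobe's implementation or the C2PA format.
# A real Content Credential is signed with a certificate; a shared-key HMAC
# stands in here only to keep the sketch self-contained.
SIGNING_KEY = b"creator-private-key-placeholder"

def make_credential(image_bytes: bytes, author: str) -> dict:
    """Bundle an authorship claim with a hash of the image and sign it."""
    claim = {
        "author": author,
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": signature}

def verify_credential(image_bytes: bytes, credential: dict) -> bool:
    """Check that the signature is intact and the claim still matches the image."""
    payload = json.dumps(credential["claim"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, credential["signature"])
        and credential["claim"]["image_sha256"]
        == hashlib.sha256(image_bytes).hexdigest()
    )

image = b"\x89PNG...stand-in pixel data"          # hypothetical image bytes
cred = make_credential(image, "Jane Illustrator")
print(verify_credential(image, cred))             # True while image and claim match
```

The point of the sketch is simply that the credential travels with the work as verifiable data: copy or alter the image and the check either still passes (copy) or fails loudly (tampering), which is what makes authorship provable after the fact.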
Protecting Creators’ Rights: Adobe’s Commitment to Preventing AI Scraping and Future Potential
Another important feature of Content Authenticity is the ability to signal that you don’t want AI systems scraping your work for training. This is a promising development, but it’s worth noting that Adobe is currently the only company to publicly commit to respecting these preferences. Other AI companies, such as OpenAI and Midjourney, have made no similar assurances, leaving creators vulnerable to having their work used without consent.
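For a sense of what "respecting these preferences" would mean in practice, here is a short hypothetical sketch of a compliant crawler checking an opt-out flag before ingesting a work. The key names and credential structure are invented for illustration and are not Adobe's or the C2PA schema; the point is that the preference only has teeth if the crawler actually checks it.

```python
# Hypothetical opt-out check by a crawler that chooses to comply.
# The "generative_ai_training" key and this credential layout are invented
# for illustration; they are not an actual published schema.

def allowed_for_training(credential: dict) -> bool:
    """Return False when the creator has flagged the work as off-limits."""
    prefs = credential.get("preferences", {})
    return prefs.get("generative_ai_training", "allowed") != "not_allowed"

credential = {
    "claim": {"author": "Jane Illustrator"},
    "preferences": {"generative_ai_training": "not_allowed"},
}

if allowed_for_training(credential):
    print("Add to training set")
else:
    print("Skip: creator opted out")  # only happens if the crawler bothers to look
```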
There is, however, a possibility that things could improve. If one of the numerous lawsuits against AI companies succeeds in protecting creators’ rights, it could force significant changes in how AI training works. In that scenario, Adobe, with its more ethical approach to AI, would be well placed to help define the future of both AI and creative work, stepping into a leadership role and offering creators real benefits that could reshape the industry.