
Adobe’s Commitment to AI Ethics

How Adobe is Leading the Way in Ethical AI Development

What has Adobe learned from its decades of work on ethical AI development? For one, that customers value transparency and responsibility, even over form and features. As Adobe brings transformational technologies to market, it has always sought to pair innovation with responsibility. Adobe believes that placing thoughtful safeguards around AI development and use will help everyone realize the technology’s full potential.

“AI is transforming the way we create, work and communicate. By taking a thoughtful and comprehensive approach to AI ethics, Adobe is committed to ensuring this technology is developed responsibly and respects our customers and our communities,” said Dana Rao, Executive Vice President, General Counsel and Chief Trust Officer.

By spotlighting major companies and C-suite leaders committed to leading the charge in AI transparency, ethics, and safe development, we can help ensure that the collaborative future of work remains a positive one.

How Adobe ensures ethical AI development practices

Adobe is committed to ensuring the safety of all its AI models, and all Adobe AI products are designed in line with its principles of accountability, responsibility, and transparency. In 2019, Adobe implemented a comprehensive AI ethics program for all products that includes training, testing, and review by an AI Ethics Review Board. As part of this program, every product team developing AI must complete an AI impact assessment. This multi-part assessment evaluates the potential impact of AI features and products and identifies risks that can be addressed before a product launches.

Because Adobe is committed to remediating negative AI impacts that emerge after deployment, the team welcomes and encourages feedback on its AI-powered features and technologies in many ways. For example, Firefly has a built-in feedback mechanism so that users can report when a feature produces a result they perceive as biased or inaccurate. This feedback loop with the user community is one important way to help ensure the tools minimize harm and uphold Adobe’s AI Ethics principles.
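
To make this concrete, such an in-product report can be thought of as a small structured record that travels from the user back to the product team. The sketch below is a hypothetical illustration only: the type names, fields, and endpoint are assumptions made for this article and do not describe Adobe’s actual Firefly feedback API.

```typescript
// Hypothetical sketch of an in-product AI feedback report.
// All names, fields, and the endpoint are illustrative assumptions,
// not Adobe's actual Firefly API.

type FeedbackCategory = "biased" | "inaccurate" | "harmful" | "other";

interface AIFeedbackReport {
  featureId: string;       // which AI feature produced the result
  outputId: string;        // identifier of the generated result being reported
  category: FeedbackCategory;
  comment?: string;        // optional free-text note from the user
  locale: string;          // helps route region- or language-specific issues
  createdAt: string;       // ISO 8601 timestamp
}

// Minimal submission helper (assumed endpoint, for illustration only).
async function submitFeedback(report: AIFeedbackReport): Promise<void> {
  await fetch("https://example.com/api/ai-feedback", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(report),
  });
}
```

A structured category plus free text makes it easier for review teams to triage reports and spot recurring patterns, which is what a feedback loop like this is meant to surface.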

In addition, Adobe is a contributor to the NIST AI Risk Management Framework and a member of the Partnership on AI. Adobe is committed to sharing its learnings with peers and the government to help establish industry best practices and ensure a unified approach to responsible AI.

A collaborative effort at Adobe

Adobe strongly believes that cross-team and cross-functional engagement expands perspectives and contributes to a culture of shared responsibility for the quality and safety of AI technologies. To that end, in addition to product teams, the Ethical Innovation team collaborates closely with the Trust and Safety, Legal, and International teams to help anticipate possible issues, monitor feedback, and develop mitigations.

For example, Firefly recently added AI-enabled multilingual support for users around the world. The team reviewed and expanded its terminology to cover country-specific terms and connotations, and the International team made sure native speakers were part of the process.

Building trust with Content Credentials

As a leader in the image-editing space, Adobe has focused for many years on transparency around how content is made. That is why Adobe developed a technology called Content Credentials, which can show information such as a creator’s name, the date an image was created, the tools used to create it, and any edits that were made. Content Credentials can even indicate when something was created with AI. This way, when you see the content, you can see for yourself how it came to be.
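
As a rough illustration of the kind of provenance information a content credential carries, the sketch below models the fields mentioned above as a TypeScript type. This is a simplified, hypothetical shape for illustration only; the actual Content Credentials format is defined by the open C2PA specification, not by this type.

```typescript
// Simplified, hypothetical model of the provenance data described above.
// The real Content Credentials format follows the C2PA specification;
// this type only illustrates the fields mentioned in the article.

interface ContentCredential {
  creatorName: string;                             // who made the content
  createdAt: string;                               // ISO 8601 date the image was created
  toolsUsed: string[];                             // e.g. ["Photoshop", "Firefly"]
  edits: { action: string; timestamp: string }[];  // record of edits that were made
  generatedWithAI: boolean;                        // flags AI-generated or AI-assisted content
}

// Example record a viewer could display alongside an image.
const example: ContentCredential = {
  creatorName: "Jane Doe",
  createdAt: "2024-05-01T10:30:00Z",
  toolsUsed: ["Photoshop", "Firefly"],
  edits: [{ action: "generative-fill", timestamp: "2024-05-01T10:45:00Z" }],
  generatedWithAI: true,
};
```

In practice, this information is packaged as a cryptographically signed manifest attached to the asset so viewers can verify it has not been tampered with, but that mechanism is beyond the scope of this sketch.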

In fact, Adobe recently conducted a survey on the future of trust in AI, which showed that most people believe they need the right tools, such as Content Credentials, to verify whether online content is trustworthy. Adobe co-founded the Content Authenticity Initiative to help increase trust and transparency online. The initiative now has more than 1,500 members from across industries, including the Associated Press, The New York Times, The Wall Street Journal, Microsoft, NVIDIA, Nikon, and Leica.

Moreover, Adobe announced its support for the White House Voluntary AI Commitments to promote safe, secure, and trustworthy AI. These commitments represent a strong foundation for ensuring the responsible development of AI and are an important step in the ongoing collaboration between industry and government that is needed in the age of this new technology.

In brief

Adobe has opened the door to ethical AI development and responsibility in the digital age, and it is walking through it. As Adobe and others harness the power of this cutting-edge technology, tech leaders must come together across industries to develop, implement, and respect a set of guardrails that will guide its responsible development and use.

Gizel Gomes

Gizel Gomes is a professional technical writer with a bachelor's degree in computer science. With a unique blend of technical acumen, industry insights, and writing prowess, she produces informative and engaging content for the B2B tech domain.