
Salesforce’s Ethical AI Path: From Vision to Practice
Technology leaders are moving beyond the race to deploy AI and are shifting focus toward its ethical and responsible use. Salesforce AI Ethics stands out as a model for this transition. It shows how organizations can embed ethical principles into AI practices to strengthen governance, foster trust, and deliver long-term value.
The Ethisphere Institute, a global authority on corporate ethics, recently named Salesforce one of the World’s Most Ethical Companies™. This recognition comes from its strong governance, values-driven culture, and leadership in responsible technology.
For CTOs, the takeaway is clear: ethics in technology is not a side initiative but a strategic imperative. Salesforce demonstrates how embedding responsibility into every layer of the tech stack—from data governance to AI deployment—can transform ethics from a compliance requirement into a competitive advantage.
This article explores Salesforce AI Ethics as a case study, offering a practical blueprint for technology leaders seeking to balance innovation with accountability.
What steps is Salesforce taking to build ethical AI?
Salesforce has long emphasized trust, transparency, equality, and sustainability as the cornerstones of its technology development and deployment. These principles are among the reasons companies such as SharkNinja, Precina, and 1-800Accountant choose to do business with Salesforce.
Let’s explore the key steps and best practices Salesforce follows to build ethical AI:
The backstory: Ethical AI journey
Salesforce’s journey toward becoming an ethics-driven AI company began when CEO Marc Benioff outlined his vision for responsible AI.
“Einstein, the first comprehensive AI solution for CRM, will give our customers the ability to experience the power of AI right inside the applications they use every day,” said Benioff. Central to this vision was the commitment to developing AI rooted in trust and transparency for customers.
Kathy Baxter (then a Salesforce User Experience Researcher) was deeply inspired by Marc Benioff’s vision. She immediately began working with the Einstein product teams to identify potential ethical risks. Realizing the need for a dedicated team, Baxter drafted a job description (in 2018) and ultimately stepped into this newly created role herself.
By the end of the first year, her efforts had laid the foundation for what would become Salesforce’s Trusted AI Principles. These principles commit to developing AI that is responsible, accountable, transparent, empowering, and inclusive.
Broadening the ethics umbrella: Establishing the office for ethical AI development
Salesforce broadened its vision by establishing its Office of Ethical and Humane Use (OEHU) to focus on the ‘Ethics by Design’ policy.
The office acts as a guiding light for the tough questions that would arise when human potential meets emergent technology. The team here develops and deploys powerful frameworks and products to propel innovation for a trusted, equitable tech world.
Salesforce later appointed Paula Goldman as its Chief Ethical and Humane Use Officer to ensure that the ‘Ethics by Design’ policy became an integral part of how the company designs, builds, and sells its software and services.
Understanding the trusted AI principles at Salesforce
In 2018, Salesforce first set its trusted AI principles to ensure they were specific to its products, use cases, and customers.
It was a year-long journey of soliciting feedback from individual contributors, managers, and executives across the company, including engineering, product development, UX, data science, legal, equality, government affairs, and marketing. Executives across clouds and roles then approved the principles, including then co-CEOs Marc Benioff and Keith Block.
Here are the core trusted AI principles at Salesforce:
- Responsible
- Accountable
- Transparent
- Empowering
- Inclusive
Building trusted AI around three pillars
The trusted AI principles became the basis for building trusted AI around three pillars: employee engagement, product development, and empowering customers.
Let’s look at how each pillar functions:
Employee engagement to support ethical AI
To build a culture in which employees have the right mindset to create ethical products, Salesforce offers programs to help employees put ethics at the core of their respective workflows. These programs empower the entire organization to think critically at every stage of building AI solutions.
For example, ‘Bootcamp’, an intensive training program for new hires, equips employees with an ethics-by-design mindset from the very start of their careers at Salesforce. The organization also provides comprehensive employee resources, such as training based on the IEEE’s Ethically Aligned Design framework, which covers essential ethical concepts that anyone developing autonomous and intelligent systems should understand.
Similarly, Salesforce actively involves both internal and external experts in structured discussions to spark fresh thinking and challenge employees with diverse perspectives.
For instance, before releasing or publishing high-risk research or code, teams consult a diverse group of ethics and domain experts to identify potential risks, confirm that proper mitigation strategies are in place, and determine whether it is safe to publish.
The most recent example is the publication of Salesforce’s AI Economist research, for which Baxter engaged external experts prior to publishing.
“Having a sense of ethics embedded culturally across the organization enables our success. Developing what I call a ‘moral muscle memory’ allows more risk spotters, enables more people to engage in tough conversations with their teams, and shifts the responsibility for ethics from one central hub to the teams building our products,” says Yoav Schlesinger, Principal of Ethical AI Practice.
Product development to support ethical AI
In order to develop ethical products, the team articulates a series of accountability questions at the beginning of the product cycle.
Another practice is ‘consequence scanning’, an exercise that asks participants to envision potentially unintended outcomes of a new feature and ways to mitigate harm. Adopted systematically across all product teams, the exercise helps members think creatively about potential problems and reduce risk for customers.
Another ethics checkpoint is the dedicated ‘Data Science Review Board (DSRB)’, which encourages and enforces best practices in data quality and model building across the organization.
From prototype to production, the DSRB helps gauge whether teams are effectively removing bias from training data, understand where any unintended biases may have crept in, and have plans to mitigate similar issues in the future.
The review board, led by leaders across research and data science, also partners with teams to create transparency in how they collect the data used by machine-learning algorithms.
Empowering customers with ethical AI
The most critical factor in responsible AI use is the end user—the person actually using the AI product. For them to act responsibly, they need the right tools, safeguards, and guidance.
To support this, Salesforce designs its products with built-in features that encourage and guide customers toward ethical and responsible use. These features may include things like transparency tools, usage policies, explainability functions, or guardrails that prevent misuse.
In addition to these features that identify and mitigate ethical risk, Trailhead learning modules also help customers better understand the products they’re using, know what trusted AI means, and how they can be champions of implementing it in an ethical way.
The goal, says Baxter, is to be as transparent as possible about how an AI model was built so that end users have a better sense of the safeguards in place to minimize bias.
Salesforce has also introduced the practice of publishing model cards for its global models—predictive systems built from aggregated data across multiple sources and designed for use by a broad customer base.
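To make the idea of a model card concrete, here is a minimal, purely illustrative sketch of the kind of information such a card might capture. The schema, field names, and example values below are hypothetical and do not reproduce Salesforce’s actual model card format.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Minimal, illustrative model-card record (hypothetical schema)."""
    model_name: str
    intended_use: str
    training_data: str        # description of data sources and aggregation
    evaluation_metrics: dict  # e.g. {"precision": 0.91, "recall": 0.87}
    limitations: list = field(default_factory=list)
    bias_considerations: list = field(default_factory=list)

    def to_markdown(self) -> str:
        """Render the card as a short, human-readable summary."""
        lines = [
            f"# Model Card: {self.model_name}",
            f"**Intended use:** {self.intended_use}",
            f"**Training data:** {self.training_data}",
            "**Evaluation:** " + ", ".join(
                f"{k}={v}" for k, v in self.evaluation_metrics.items()),
        ]
        lines += [f"- Limitation: {item}" for item in self.limitations]
        lines += [f"- Bias note: {item}" for item in self.bias_considerations]
        return "\n".join(lines)

# Hypothetical example of a card for a global predictive model.
card = ModelCard(
    model_name="lead-scoring-global-v2",
    intended_use="Rank sales leads; not for employment or credit decisions",
    training_data="Aggregated, anonymized CRM activity across opted-in tenants",
    evaluation_metrics={"precision": 0.91, "recall": 0.87},
    limitations=["Performance degrades on tenants with few historical leads"],
    bias_considerations=["Audited for geographic skew in training data"],
)
print(card.to_markdown())
```

Publishing a record like this alongside each global model gives end users a concise view of what the model is for, what data shaped it, and where its known blind spots lie.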
Generative AI: 5 guidelines for responsible development
As Salesforce entered the era of generative AI, it augmented its trusted AI principles with a set of five guidelines for developing responsible generative AI, principles that also hold true for agentic AI.
Below are the five guidelines Salesforce uses to guide the development of trusted generative AI.
Accuracy
The organization understands the importance of providing trustworthy and reliable AI outputs. To achieve this, it aims to build models that strike the right balance between precision (how many of the flagged cases are correct) and recall (how many of the correct cases are caught), alongside overall accuracy.
Moreover, by allowing customers to train the models using their own data, the organization ensures the results are not only technically strong but also relevant, customized, and verifiable in the customer’s specific context.
Likewise, it commits to communicating when there is uncertainty about the veracity of the AI’s response and enables users to validate these responses. This can be achieved by citing sources, providing explainability for why the AI generated specific outputs (e.g., chain-of-thought prompts), highlighting areas that require double-checking (e.g., statistics, recommendations, dates), and establishing guardrails to prevent certain tasks from being fully automated (e.g., launching code into a production environment without human review).
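The precision/recall trade-off mentioned above can be sketched in a few lines. This is a generic illustration of the two metrics, not Salesforce code; the labels are made up for the example.

```python
def precision_recall(y_true, y_pred):
    """Compute precision and recall for binary labels (1 = positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Illustrative labels: 4 actual positives; the model flags 4 items, 3 correctly.
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
p, r = precision_recall(y_true, y_pred)
print(p, r)  # 0.75 0.75: 3/4 of flags are correct, 3/4 of positives are caught
```

A model tuned only for precision misses relevant cases; one tuned only for recall floods users with false positives. Balancing both, and letting customers retrain on their own data, keeps outputs relevant and verifiable in context.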
Safety
Across all its AI models, the team strives to mitigate bias, toxicity, and harmful output by conducting bias, explainability, and robustness assessments, along with ethical red-teaming exercises.
The team also aims to protect the privacy of any personally identifiable information (PII) present in the training data and to create guardrails that prevent additional harm.
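As a rough illustration of a PII guardrail, the sketch below redacts a few common identifier patterns before text is logged or used for training. The patterns and placeholder names are hypothetical; a production system would rely on a vetted PII-detection service rather than ad-hoc regexes like these.

```python
import re

# Hypothetical patterns for illustration only; real PII detection is far
# broader (names, addresses, IDs) and should use a dedicated, audited tool.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace matched PII spans with typed placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

sample = "Contact Jane at jane.doe@example.com or 555-867-5309."
print(redact_pii(sample))
# → Contact Jane at [EMAIL] or [PHONE].
```

Running redaction as a mandatory preprocessing step means downstream models never see the raw identifiers, which is one simple way a guardrail can prevent additional harm.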
Honesty
When collecting data to train and evaluate its models, the organization respects data provenance and ensures that it has consent to use the data (e.g., open-source, user-provided). It also maintains transparency by disclosing what content has been created by AI (e.g., chatbot responses to consumers).
Empowerment
The organization recognizes that while some processes are best fully automated, others require AI to play a supporting role alongside human judgment.
It emphasizes the importance of finding the right balance to enhance human capabilities and make solutions accessible to all (e.g., generating ALT text to accompany images).
Sustainability
The organization recognizes that achieving sustainability requires designing right-sized models to minimize the carbon footprint. Larger models are not always superior; in many cases, smaller, well-trained models outperform them.
“By adhering to these principles and guidelines, Salesforce is committed to developing AI agents that are not only powerful and efficient but also ethical and trustworthy.” – Paula Goldman, Chief Ethical and Humane Use Officer
Collaboration and continuous improvement at Salesforce
Salesforce recognizes that the ethical landscape of AI is constantly evolving. The company collaborates with external experts, industry partners, and its Research & Insights team to stay ahead of emerging challenges.
This collaborative approach ensures that Salesforce’s AI ethics framework is continually refreshed and enhanced, and that it incorporates the latest research and best practices.
Salesforce ethics in AI: Lessons to learn for CTOs
Salesforce’s integration of ethics into AI is exemplary, and the company continues to build on this foundation. Every product and service reflects its core values, and this ethical culture is now incorporated at each stage of product development.
Customer demand for ethics is at an all-time high, and tech leaders need to look internally and externally to ensure high standards of development with the best ethical practices.
Here’s what CTOs can learn from Salesforce’s ethical practices:
Ethics must be embedded, not added on
Salesforce shows that ethical AI isn’t a side project — it’s woven into every product and service. CTOs should ensure their teams integrate ethics from the earliest stages of product development, not treat it as an afterthought.
Ethical culture drives innovation
By aligning technology with core values, Salesforce demonstrates that ethics can be a competitive advantage. CTOs can use ethics to build trust, differentiate their offerings, and meet rising customer expectations.
Continuous progress matters
Ethics is an evolving journey, not a one-time achievement. CTOs must view ethical practices as something that requires ongoing attention, refinement, and adaptation as technologies and societal expectations change.
Balance demand with responsibility
With competition at an all-time high, CTOs face pressure to innovate quickly. The lesson is to balance speed with responsibility, ensuring that rapid development doesn’t compromise ethical standards.
In brief
The conversation around AI is no longer just about building smarter systems—it’s about using them responsibly. Ethical AI has become a top priority for leading organizations, and Salesforce is a standout example, showing how companies can weave ethics into technology to create trust and lasting impact.