
Find Inspiration in LinkedIn’s Responsible AI Principles

"With great power comes an even greater responsibility toward the ethical use of technology" – that's how the quote goes, right? Truthfully, the rapid growth of AI presents a challenging moral path for every leader who interacts with it.

While some leaders are relatively mum about their ethical relationship with AI, LinkedIn has publicly committed to building a trustworthy platform with AI. For this company, ethical AI principles look like fairness, trust, transparency, and accountability. As CTOs begin to develop their own principles around ethical AI use, drawing inspiration from major tech movers like LinkedIn or Microsoft offers a place to start.

LinkedIn's AI principles start with fairness

LinkedIn works to ensure that the use of AI benefits all members fairly, without causing or amplifying unfair bias. As AI becomes increasingly ubiquitous in daily life, LinkedIn remains mindful of the potential biases in the algorithms it uses. By regularly assessing its AI systems for bias, the company aims to keep them from perpetuating discriminatory patterns that exist in society, and ongoing monitoring helps identify unfair patterns that emerge over time and require corrective action. The goal is that AI systems do not discriminate against individuals or groups based on factors such as race, gender, or socioeconomic status.
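The kind of bias assessment described above can start small. The Python sketch below compares a model's positive-outcome rates across demographic groups and flags large gaps; the data, group labels, and threshold are illustrative assumptions, not LinkedIn's actual audit.

# A minimal sketch of a group-fairness audit: compare a model's
# positive-prediction rates across groups and flag large gaps.
# Illustrative only; not LinkedIn's implementation.
from collections import defaultdict

def selection_rates(predictions, groups):
    """Return the positive-prediction rate for each group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate.
    Values well below 1.0 suggest one group is favored."""
    return min(rates.values()) / max(rates.values())

# Hypothetical model outputs (1 = recommended) and group labels.
preds  = [1, 1, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rates = selection_rates(preds, groups)
ratio = disparate_impact_ratio(rates)
print(rates)                      # {'A': 0.8, 'B': 0.4}
print(f"DI ratio: {ratio:.2f}")   # 0.50
if ratio < 0.8:
    print("Flag for review: possible unfair pattern")

The 0.8 cutoff mirrors the well-known "four-fifths rule" used in employment-selection audits; a production system would track these ratios per model and over time, which is where the ongoing monitoring described above comes in.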

Advocates trust

The focus is on keeping LinkedIn a safe, trusted, and professional platform. Since members rightfully expect the content they encounter on LinkedIn to be legitimate, the company uses innovative AI strategies to detect and quickly remove any content that violates its professional community policies. It is deeply focused on removing fake profiles, jobs, and companies.

Puts members first

For LinkedIn, it's members first; AI is just a tool to further that vision. Hence, when the team builds products, it thinks about how they can help members achieve their goals.

Everything LinkedIn builds is dedicated to furthering its mission of connecting the world's professionals to make them more productive and successful. This members-first approach is applied across the company, from product to engineering to marketing, and has positively influenced roadmaps and important decisions.

Provides transparency

For LinkedIn, transparency means that an AI system's behavior and its related components are understandable, explainable, and interpretable. The goal is that the end users of AI, such as LinkedIn employees, customers, and members, can use these systems effectively, understand them, suggest improvements, and identify potential problems should any arise.

Thus, LinkedIn seeks to explain in clear and simple ways how it uses AI. This clarity helps build trust in AI, particularly in high-risk applications.

For example, LinkedIn provides regular transparency updates on its actions to protect members, how it handles questions about member data, and how it responds to content removal requests. 
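One concrete way to keep a system's behavior explainable is to favor models whose decisions can be read off feature by feature. The Python sketch below trains a small logistic regression and prints each feature's weight; the feature names and toy data are hypothetical, meant only to illustrate the idea, not to describe LinkedIn's models.

# A minimal sketch of interpretability: a linear model whose weights
# provide a feature-by-feature explanation of each prediction.
# Feature names and data are hypothetical, for illustration only.
from sklearn.linear_model import LogisticRegression

feature_names = ["skills_match", "connection_overlap", "profile_completeness"]

# Toy training data: each row is a (member, job) pair; label 1 = relevant.
X = [
    [0.9, 0.2, 0.8],
    [0.1, 0.7, 0.4],
    [0.8, 0.5, 0.9],
    [0.2, 0.1, 0.3],
    [0.7, 0.8, 0.6],
    [0.3, 0.2, 0.2],
]
y = [1, 0, 1, 0, 1, 0]

model = LogisticRegression().fit(X, y)

# Each coefficient states how strongly a feature pushes the score,
# giving a plain-language explanation for any single recommendation.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name}: {coef:+.2f}")

Simple, inspectable models are not always an option, but the design choice they represent, favoring systems whose outputs can be explained in clear and simple ways, is exactly the transparency goal described above.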

Embraces accountability

LinkedIn assesses how each AI-powered tool impacts its members, customers, and society as a whole. For example, AI tools consume significant computing energy; even so, the team remains committed to being carbon-negative and to cutting overall emissions by more than half by 2030.

LinkedIn recognizes that government bodies and civil society around the world are working out how to make AI better for humanity and help ensure it is safe and useful. As best practices and laws around governance and accountability evolve, LinkedIn promises to embrace them.

AI is not new to LinkedIn. Inspired by the transformative power of generative AI tools, LinkedIn aims to weave them into its daily operations to help members be more successful and productive. And by applying these responsible AI principles, the team is committed to using AI in ways that align with its mission and provide more value to its members and customers.

What should CTOs consider?

While AI has enormous potential to open up opportunities and transform the world of work in positive ways, its use comes with risks and potential harm. By setting out principles, CTOs can build an operating model that promotes trust and authenticity and sets guidelines to reduce potential negative effects, such as replicating bias or spreading misinformation and disinformation.

Remember that AI is a tool that requires human navigation. Hence, it helps to keep a team in the loop that reviews and edits AI output and understands the shortcomings of generative AI.
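As a minimal sketch of what that loop can look like in practice, the Python below routes AI drafts by model confidence: high-confidence output ships, and the rest waits for a human editor. The threshold, data class, and function names are hypothetical, for illustration only.

# A minimal human-in-the-loop gate: publish AI output only above a
# confidence threshold; queue everything else for a human editor.
from dataclasses import dataclass

@dataclass
class Draft:
    text: str
    confidence: float  # model's self-reported confidence, 0.0 to 1.0

REVIEW_THRESHOLD = 0.9  # illustrative; tune per use case and risk level

def route(draft: Draft, review_queue: list) -> str:
    """Auto-approve high-confidence output; send the rest to a human."""
    if draft.confidence >= REVIEW_THRESHOLD:
        return "published"
    review_queue.append(draft)
    return "pending human review"

queue: list = []
print(route(Draft("Summary of Q3 results...", 0.95), queue))   # published
print(route(Draft("Claim about a competitor...", 0.42), queue))  # pending human review
print(len(queue), "draft(s) awaiting a human editor")

In higher-risk applications, the threshold can simply be set so that everything is reviewed; the point is that escalation to a human is designed in, not bolted on.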

That said, a survey of 1,000 business leaders conducted by Boston Consulting Group (BCG) found that while 84 percent of the surveyed organizations believe responsible AI should be a top management priority, only 16 percent have a fully mature program in place.

Far too often, responsible AI lacks clear ownership or senior sponsorship; it isn't integrated into core governance and risk-management processes or embedded in the front lines. In this era of digitalization, however, responsible AI is fast becoming a necessity, and CTOs should take serious note.

In brief

As AI continues to evolve, tech leaders must prioritize responsible practices to harness AI’s benefits while safeguarding against its risks.


Gizel Gomes

Gizel Gomes is a professional technical writer with a bachelor's degree in computer science. With a unique blend of technical acumen, industry insights, and writing prowess, she produces informative and engaging content for the B2B leadership tech domain.