Examining the CTO's responsibility for ethical AI use

As we chart new courses of innovation and fair use, and as new technologies continue to be unveiled at a blistering pace, ethics remains a key piece of the AI advancement conversation. This raises the question: what is a CTO's responsibility in ethical tech practices?

Just in the past few months, a significant number of tech companies have paired their strongest announcements with GenAI. Outside of traditional tech, industries of all kinds have jumped on the bandwagon, with Volkswagen recently announcing the integration of ChatGPT into its vehicles. And while headline after headline shows these technologies progressing at an exponential rate, criticism of their usefulness is growing just as fast. Are we standing on the edge of a complete revolution in the world of innovation, or on the edge of a cliff?

As we examine the burgeoning list of duties for a company’s Chief Technology Officer, we cannot ignore the genuine need to examine our ethical responsibilities too.  

Opening the ethics door: What's a CTO's responsibility?

In September 2023, the Writers Guild of America ended its historic 148-day strike against Hollywood. At the center of one of the longest work stoppages in Hollywood history was AI, and more specifically, job protections against its use. In 2021, Helen Rosner wrote in The New Yorker that we should have real ethical concerns about the use of AI after chef Anthony Bourdain's voice was AI-generated and used in the documentary "Roadrunner." Bourdain died in 2018, three years before the documentary was released.

Additionally, the rise of AI in the voice acting industry has fueled speculation that companies that rely on voice actors, like audiobook publishers, may cut their staff entirely. While the art world has its own philosophical objections to the use of AI in creative work, what should this mean for private sector companies?

We should not innovate and implement technological advancements so eagerly without considering their deep potential impacts on our society. The implication goes further: when innovation challenges ethical standards, ethics should win out.

Charting the impact of AI on privacy

Growing privacy concerns, and public distrust in companies' willingness to protect their users, now echo throughout the tech industry. Bluntly: users do not expect companies to take appropriate measures to secure their data. Pew Research found that 77% of adults do not trust the leaders of social media companies to take responsibility for data misuse. It further reported that 76% of adults do not trust those leaders not to sell their data without consent.

While AI systems offer valuable capabilities, such as data-driven insights and personalized experiences, concerns arise regarding the extensive data collection, profiling, and surveillance associated with these technologies. The use of biometric data, potential biases in decision-making, and the lack of transparency in certain AI models contribute to apprehensions about privacy infringement. Just recently, 23andMe experienced a data breach in which hackers stole genetic information on 6.9 million users. Not coincidentally, 23andMe's stock valuation has fallen from $6 billion in 2021 to essentially zero. It joins a number of other major brands, like T-Mobile, which has experienced nine data breaches since 2018 alone.

Additionally, the evolving nature of AI has outpaced the development of robust legal and ethical frameworks, necessitating ongoing efforts to establish responsible guidelines that address the complex interplay between AI advancements and individual privacy rights. Companies have seemed so cavalier in their disregard for privacy that watchdog sites maintain privacy rankings for many popular websites used daily, with some sites earning grades worse than a failing "F."

Efforts to mitigate privacy concerns are underway, with the exploration of privacy-preserving AI techniques like federated learning and homomorphic encryption. These approaches aim to empower AI models to learn from decentralized data sources without compromising the privacy of individuals. Recognizing the importance of ethical considerations and regulatory frameworks, organizations and policymakers are working towards a responsible AI development landscape that maximizes the advantages of AI while safeguarding the privacy and rights of individuals.
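To make the federated learning idea above concrete, here is a minimal sketch in plain Python (not any specific framework's API; the linear model, learning rate, and client data are illustrative assumptions). Each client trains on data that never leaves its device; the server only ever sees model weights, which it averages into a new global model.

```python
# Minimal federated averaging (FedAvg) sketch: clients share model
# weights, never raw data. All values here are illustrative.

def local_train(weights, data, lr=0.05, epochs=100):
    """One client's local update: gradient descent on y = w*x + b
    using only that client's private data."""
    w, b = weights
    for _ in range(epochs):
        grad_w = grad_b = 0.0
        for x, y in data:
            err = (w * x + b) - y
            grad_w += err * x
            grad_b += err
        n = len(data)
        w -= lr * grad_w / n
        b -= lr * grad_b / n
    return w, b

def federated_average(client_updates):
    """Server step: average the weight tuples returned by clients."""
    n = len(client_updates)
    avg_w = sum(u[0] for u in client_updates) / n
    avg_b = sum(u[1] for u in client_updates) / n
    return avg_w, avg_b

# Two clients hold disjoint samples of the same trend, y = 2x + 1.
client_data = [
    [(1.0, 3.0), (2.0, 5.0)],   # client A's private data
    [(3.0, 7.0), (4.0, 9.0)],   # client B's private data
]

weights = (0.0, 0.0)            # global model, broadcast each round
for _ in range(30):             # communication rounds
    updates = [local_train(weights, data) for data in client_data]
    weights = federated_average(updates)
# The global model should approach w ~ 2, b ~ 1 without the server
# ever observing either client's raw data points.
```

Production systems layer secure aggregation, differential privacy, and handling of unevenly distributed client data on top of this basic averaging step; homomorphic encryption goes further by letting the server compute on encrypted updates directly.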

Our Climate Impact Matters Too 

As technological advancement brings new milestones seemingly by the day, the need for honest conversations on ethical practices grows more urgent, but conversations around protecting individuals are only a starting point.

The number of software-as-a-service (SaaS) companies has grown substantially in the last decade, with Statista estimating the industry will exceed $230 billion in 2024. Driving this growth are companies that adopt an endless stream of new apps year after year as they carve out niche functions within the business. Each of these apps relies on its own array of technologies to operate, including data centers that must be powered at all times.

The sheer energy consumption of large-scale AI model training, computation, and execution is cause for concern, as the public at large begins a seismic shift toward demanding sustainability and transparency from companies. The rapid evolution of AI technologies may also contribute to electronic waste, posing disposal and recycling challenges.

We must also account for the sheer scope of physical resources used to manage and maintain a growing ecosystem of "pay-on-demand" niche apps and enterprise-level services, which can have large footprints. With no shortage of new options for a company's tech stack, CTOs must seriously consider how the resources they use can negatively impact our planet's climate.

In Brief:

"Do the right thing" can all too easily be supplanted by "Just don't do anything too evil" for those who lean too heavily into the advantages of contemporary AI without recognizing its drawbacks, privacy and climate in particular.

Will these rapidly advancing technologies be a major boon to our society or the bane of our civilization? So long as ethics remains a conscious part of the modern CTO's responsibilities, the outlook for our future can remain a positive one.

Blake Binns

Blake is a marketer and consultant based in Northwest Arkansas, the home of Wal-Mart. He is a fan of all things digital and hosts the Good Advice Podcast. He lives with his wife of 10 years, Joy, and together they have two kids, Blake and Maylee.
