Chapter 9: A Responsible Approach to AI

Understanding ethical principles and responsible practices for AI development and deployment

Foundation of Responsible AI

Executive Summary: Everyone needs a basic understanding of AI to participate in discussions about its responsible development and deployment. Core to this are key principles for the ethical use of AI, and all leaders must take responsibility for learning how to make AI use appropriate, effective, and fair for all. Time Investment: 6-8 minutes to understand key ethical principles.

The Critical Question: How can we control and govern the impact of AI on our future? This has become a dominant business agenda item, given the shockwave induced by the latest wave of generative AI tools like ChatGPT, Gemini, and Claude.

The Challenge: When looking for answers, we often find poorly informed speculation about an uncertain future based on little evidence. Leaders need practical, evidence-based approaches to AI governance and ethical implementation.

Three Key Concerns for AI Governance

Yet, beyond the futurology and the scare stories, leaders and decision makers are now required to participate in a key debate that is likely to have a deep effect on our understanding of AI and its adoption.

  • Human Costs of AI: As AI is more widely adopted, its use affects more and more people. We must face up to how these effects are managed in the short term and engage in the debate about how we maximize AI's benefits in the longer term.
  • Role of Regulation: Regulation guides and constrains what could, should and must be implemented by AI technology providers and consumers as they adopt AI in support of different business and societal needs.
  • Responsible AI Promotion: A broader effort to promote the concept of 'responsible AI' and to encourage policymakers, leaders, practitioners and citizens to prioritize responsibility in all aspects of AI use.

The Human Costs of AI

Most people exploring AI adoption focus their attention on understanding the advanced technology that defines it.

They are enamoured by the speed of its operation, its versatility and its sophisticated analytics. However, this focus can obscure an important factor: AI requires significant human effort. For instance, consider how the data used to train AI systems is acquired and processed. A common reaction when people learn about the way that many of today's datasets are acquired, tagged and used goes something like: 'I always knew it was bad. But I didn't realize it was this bad.' Articles outline the enormous amount of manual effort required to optimize the algorithms at the heart of many kinds of AI approaches and to build the large language models that drive generative AI tools such as ChatGPT and Gemini.

According to these reports, making machines appear to be human takes a remarkable number of people. They are needed to create the data sources that drive the AI algorithms and fuel the analytics used in decision making. One article notes: "You might miss this if you believe AI is a brilliant, thinking machine. But if you pull back the curtain even a little, it looks more familiar, the latest iteration of a particularly Silicon Valley division of labor, in which the futuristic gleam of new technologies hides a sprawling manufacturing apparatus and the people who make it run".

The Humans in the Loop

It is worth repeating that the secret to AI is people: humans and machines working together and supporting each other.

  • Job Displacement and Reskilling: Automation driven by AI can displace certain jobs, particularly those involving repetitive and routine tasks. While AI creates new opportunities in areas such as AI development, data analysis and AI ethics, the transition is hard on individuals whose skills become obsolete, and many will struggle to adjust.
  • Bias and Fairness Concerns: Biases in training data, design choices and deployment contexts shape the ways that AI systems are built and evolve. This can exacerbate existing inequalities and lead to discriminatory outcomes in areas like hiring, lending and law enforcement.
  • Privacy and Security: AI technologies, particularly when used in the collection and analysis of personal data, raise significant privacy concerns. The extensive collection and analysis of personal data for profiling and decision making can erode individual privacy rights and lead to unintended consequences, such as inappropriate monitoring and profiling.
  • Ethical Dilemmas: Deploying AI systems brings many kinds of ethical dilemmas to decision-making processes. For instance, there are well-known case studies highlighting the issues faced when self-driving cars must make split-second decisions that involve weighing different priorities to choose a 'least bad action'. Determining the 'right' course of action in such situations is complex, ambiguous and open to ethical challenges.
  • Depersonalization of Customer Service: The use of AI-powered chatbots and automated customer service systems can result in a depersonalized customer experience. While these technologies offer efficiency, they can be viewed as 'dehumanizing', lacking the empathy and nuanced understanding that human interactions provide.
  • Mental Health Impact: Constant connectivity, social media algorithms and AI-driven content recommendations have been linked to negative impacts on mental health. These technologies can contribute to feelings of social isolation, lack of self-worth and addiction.
  • Loss of Human Judgment: Overreliance on AI systems can lead to a decline in human judgment and critical thinking. Blindly following technology-driven AI recommendations reduces individual participation in and understanding of complex situations and removes the need for people to learn how decisions are made.
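
The bias and fairness concern above can be made concrete with a simple check. A widely used first test is the adverse-impact ('four-fifths') ratio: compare selection rates between groups and flag ratios below 0.8 for review. The sketch below uses entirely hypothetical outcome data; it is an illustration of the idea, not a complete fairness audit.

```python
# A minimal sketch of an adverse-impact ("four-fifths rule") check on
# screening decisions. The data and group names are hypothetical.

def selection_rate(decisions):
    """Fraction of applicants selected (1 = selected, 0 = not)."""
    return sum(decisions) / len(decisions)

def adverse_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one.
    Values below 0.8 are conventionally flagged for review."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical screening outcomes for two applicant groups
group_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]  # 70% selected
group_b = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]  # 30% selected

ratio = adverse_impact_ratio(group_a, group_b)
print(f"Adverse impact ratio: {ratio:.2f}")  # 0.30 / 0.70 ≈ 0.43 -> flag
```

A ratio this far below 0.8 would not prove discrimination, but it is exactly the kind of signal that should trigger human review of how the system was trained and deployed.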

It Has Always Been About Data

Beneath each of these human dilemmas is a story about data.

The way that AI systems procure, manage and apply data is a determining factor in the human–machine relationship. This is most obvious in the way AI systems are trained. The quality and effectiveness of AI systems are intricately tied to the source and calibre of their training data. Good training data is the foundation on which AI models are built, shaping their capabilities, accuracy and real-world applicability. It is the essential raw material that allows AI algorithms to recognize patterns, make predictions and perform tasks with accuracy and relevance. In a rapidly advancing AI landscape, the importance of high-quality training data cannot be overstated. Investments in obtaining and maintaining good training data pay off by yielding AI systems that provide accurate, reliable and valuable insights, ultimately determining those systems' success and impact across industries and applications.

Training data essentially guides AI models in understanding the complexities of the world. When the data is comprehensive, diverse and representative, the AI system can generalize from the examples it has seen during training to make informed decisions on new data. This capacity for generalization is what makes AI systems valuable and adaptable to different scenarios. Conversely, poor-quality or biased training data can lead to skewed outcomes and unreliable predictions. The importance of good training data is particularly evident in supervised learning, where AI models learn from labelled examples. If the labels are incorrect or inconsistent, the AI's understanding becomes flawed. In addition, the absence of specific examples can hinder the AI's ability to grasp the full scope of a task, limiting its performance.
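
The effect of incorrect labels is easy to demonstrate on a toy scale. The sketch below trains the same simple classifier (a one-nearest-neighbour rule on one-dimensional data) twice: once on clean labels and once with some labels flipped. All data is synthetic and the classifier is deliberately minimal; the point is only how directly label quality feeds through to accuracy.

```python
# Toy illustration: the same classifier trained on clean vs. mislabelled
# examples. Data is synthetic; class 0 sits near 0, class 1 near 10.

def nearest_neighbour(xs, ys):
    """1-NN on 1-D data: predict the label of the closest training point."""
    def predict(x):
        i = min(range(len(xs)), key=lambda j: abs(x - xs[j]))
        return ys[i]
    return predict

train_x = [0.0, 1.0, 2.0, 9.0, 10.0, 11.0]
clean_y = [0, 0, 0, 1, 1, 1]
noisy_y = [0, 1, 0, 1, 0, 1]   # two labels flipped during 'tagging'

test_x = [0.4, 1.2, 9.8, 11.2]
test_y = [0, 0, 1, 1]

results = {}
for name, labels in [("clean", clean_y), ("noisy", noisy_y)]:
    predict = nearest_neighbour(train_x, labels)
    acc = sum(predict(x) == y for x, y in zip(test_x, test_y)) / len(test_y)
    results[name] = acc
    print(f"{name} labels -> test accuracy {acc:.2f}")
```

Even on this trivially separable problem, flipping a third of the labels halves the test accuracy, which is why the manual tagging effort described earlier matters so much.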

However, obtaining good training data can be costly, and it is also harder to come by than many people think. Ensuring good training data involves meticulous curation, validation and augmentation. Data then needs to be cleaned, verified and balanced to mitigate biases and inaccuracies. Moreover, the continuous refinement of training data is vital to keep AI models up to date and relevant as trends and contexts evolve. As an article reminds us, this takes people – a lot of people. The latest AI advances illustrate just how much data is required. It is estimated that GPT-3.5, the LLM underlying OpenAI's ChatGPT, was trained on 570 GB of text data from the internet, which included books, articles, websites and social media.
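
The curation steps mentioned above (cleaning, deduplication and balancing) can be sketched in a few lines. The records below are hypothetical, and real pipelines involve far more judgment and human review, but the sketch shows the shape of the work.

```python
# A minimal sketch of dataset curation: drop invalid records, remove
# duplicates, and balance classes. The example records are hypothetical.

from collections import Counter

raw = [
    {"text": "great product", "label": "positive"},
    {"text": "great product", "label": "positive"},   # duplicate
    {"text": "terrible support", "label": "negative"},
    {"text": "", "label": "positive"},                # empty text
    {"text": "loved it", "label": None},              # missing label
    {"text": "works as expected", "label": "positive"},
    {"text": "would buy again", "label": "positive"},
]

def curate(records):
    # Clean: drop records with empty text or missing labels
    cleaned = [r for r in records if r["text"] and r["label"]]
    # Deduplicate on (text, label), keeping the first occurrence
    seen, unique = set(), []
    for r in cleaned:
        key = (r["text"], r["label"])
        if key not in seen:
            seen.add(key)
            unique.append(r)
    # Balance: cap each class at the size of the smallest class
    cap = min(Counter(r["label"] for r in unique).values())
    taken, balanced = Counter(), []
    for r in unique:
        if taken[r["label"]] < cap:
            taken[r["label"]] += 1
            balanced.append(r)
    return balanced

dataset = curate(raw)
print(dataset)  # one positive and one negative example survive
```

Notice how much data is discarded along the way: of seven raw records, only two survive curation. At the scale of the web-sourced corpora used to train large language models, this filtering and verification is precisely the human labour the reports describe.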