Thanks to Tom Whittaker, Senior Associate, and Martin Cook, Head of Fintech at Burges Salmon, for this guest blog discussing AI ethics.

Imagine a bank that uses an AI system to determine that a customer should be denied a mortgage. What was the input information, and what was the AI system’s decision output? How did the bank use the AI system and how did the AI system reach that conclusion? Is the outcome fair, and does it take account of the full range of factors?

There is legislation and regulation that governs this scenario. But just because an AI system can be used does not mean it should be. AI poses potentially significant risks, such as incorrectly denying a mortgage application, or approving it on punitive terms or rates. Navigating when AI systems are used (if at all) and how they are used raises ethical questions. Applying ethics to AI systems is the emerging field of AI ethics.

AI opportunities and risks

AI systems present enormous opportunities. Decisions – like that of the bank – could be made better and faster. AI can also analyse more data than any human, so it may spot patterns or produce outputs beyond human capability. These factors – along with cheaper computing power and a growing pool of people with the necessary skills – have driven significant investment in AI across sectors and types of investor.

However, AI systems also pose significant risks to individuals and to society. Take the mortgage application scenario, which includes risks of:

  • Bias and discrimination – for example, using existing data or training models which reproduce, reinforce or amplify existing discrimination (a simple check of this kind is sketched after this list)
  • Non-transparent, unexplainable, or unjustifiable outcomes – can the bank see and explain how the AI reached its decision to its customer and its regulators?
  • Invasions of privacy – did the AI use data about the applicant legally?
  • Unreliable, unsafe, or poor-quality outcomes – does the bank’s AI make decisions consistently and accurately?
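
To make the bias risk concrete, here is a minimal sketch in Python of one simple check: comparing approval rates across groups (a 'demographic parity' comparison). The decision log, group labels and figures are invented for illustration; a real fairness review would use several metrics alongside legal and domain judgement.

    import pandas as pd

    # Hypothetical decision log: one row per application scored by the AI system.
    decisions = pd.DataFrame({
        "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
        "approved": [1,   1,   0,   0,   0,   1,   0,   1],
    })

    # Approval rate per group.
    rates = decisions.groupby("group")["approved"].mean()
    print(rates)

    # A large gap between groups is a prompt for investigation,
    # not proof of unlawful discrimination - context matters.
    gap = rates.max() - rates.min()
    print(f"Approval-rate gap between groups: {gap:.2f}")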

The risks above arise partly because AI systems are not created, and do not exist, in a vacuum. They are neither neutral nor inherently biased. They depend on and are defined by their context:

  • The AI system’s design, development and use
  • What data is used
  • Who uses the AI system, and how it feeds into wider decision-making
  • The beliefs, assumptions and actions of people involved at every stage: those who design, develop and deploy the AI system; the people who select and train the data and models for the AI system; the availability of source data; and the people whose products and services use those AI systems.

Those risks may be magnified given the volume of personal data used, the nature of that personal data, and the fundamental human rights which AI systems may negatively affect.

In the mortgage example, the wrong decision could have significant financial impacts on the applicant. Of course, humans are not free from error and AI systems should not be expected to be. But those designing, deploying, and using AI need to be aware of the opportunities, risks, and ethical issues arising from AI systems, and be in a position to navigate them. AI ethics can help.

What is AI ethics?

AI ethics is a set of values, principles, and techniques that employ widely accepted standards to guide moral conduct in the development and use of AI systems.

These values and principles are important because they help navigate the AI system’s risks. In doing so, they help make the AI system trustworthy – meaning developers, users and those affected can have trust in the AI system’s processes and outputs.

Governments, international organisations, companies and others have produced ethical frameworks for AI. Different studies have found that they converge around similar themes; one study of 36 AI ethics frameworks (the Berkman Klein Center’s 2020 Principled Artificial Intelligence report) identified eight thematic trends: privacy, accountability, safety and security, transparency and explainability, fairness and non-discrimination, human control of technology, professional responsibility, and promotion of human values.

There were similarities but also differences in how each principle was interpreted, and what the principles look like when put into practice diverges further. This is because each ethical framework depends on the context of the AI system’s use. It is also because ethical issues interact with each other, and sometimes they do not sit together easily or align with commercial or technical objectives.

Take the mortgage application example. The bank may prefer a more complex AI system because it improves the accuracy of decisions and provides a competitive advantage – for example, more nuanced pricing for lower-risk customers and better scalability. In some senses that may be fairer to customers, because more decisions are ‘right’.

However, that may reduce the explainability of the AI’s decision for the bank’s credit risk and data science teams, its mortgage advisers, its management team, and its customers. In any event, each of those stakeholders will need a different level of explainability; the bank’s credit risk and data science teams need to explain how the AI works in order to improve it and meet any governance or regulatory requirements. But a customer may only be concerned to understand the key reasons for the AI’s decision, for example, to know what they can do to improve their next application or whether they should contest the decision.
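
As an illustration of how different levels of explanation can come from the same model, here is a hedged Python sketch using a deliberately simple, interpretable model (logistic regression). The feature names, figures and ‘reason code’ logic are hypothetical; a real credit model and the explanations required by regulators would be far richer.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical training data (income in £10,000s to keep scales comparable).
    features = ["income", "debt_ratio", "missed_payments"]
    X = np.array([[4.5, 0.40, 2],
                  [8.0, 0.20, 0],
                  [3.0, 0.55, 4],
                  [6.0, 0.30, 1]])
    y = np.array([0, 1, 0, 1])  # 1 = approved, 0 = declined

    model = LogisticRegression().fit(X, y)

    # For the credit risk and data science teams: a global view of what drives the model.
    for name, coef in zip(features, model.coef_[0]):
        print(f"{name}: {coef:+.3f}")

    # For a customer: the single factor weighing most heavily against approval,
    # expressed as a plain-language reason rather than a coefficient.
    applicant = np.array([4.0, 0.50, 3])
    contributions = model.coef_[0] * applicant
    top_reason = features[int(np.argmin(contributions))]
    print(f"Key factor weighing against this application: {top_reason}")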

Practical steps

The UK government has produced guidance on how public bodies can use AI safely. What is right for each organisation may differ, but the guidance usefully shows that there are distinct stages, each engaging different ethical principles:

  • Create a framework of ethical values – this allows a team to discuss ethical aspects of AI and establish defined criteria to evaluate the ethicality of AI systems
  • Establish a set of actionable principles – these will determine the approach an organisation will take at each relevant stage in the process to turn principles into practice 
  • Build a process-based governance framework – what are the governance goals and timeframes, who is responsible for each governance action, and how are governance decisions recorded for end-to-end auditability?
  • Implement effective monitoring, testing and reporting arrangements – in order to assess whether positive customer and corporate outcomes are being achieved (i.e. that the AI system operates in line with expectations and agreed parameters), and to communicate effectively with stakeholders (including regulators); an illustrative monitoring check is sketched below.
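
By way of illustration only, the sketch below shows what a basic automated monitoring check might look like in Python. The metrics and thresholds are invented for this example; in practice, the agreed parameters would come out of the governance framework above.

    from dataclasses import dataclass

    # Hypothetical parameters agreed in advance through the governance framework.
    EXPECTED_APPROVAL_RANGE = (0.55, 0.75)
    MAX_OVERRIDE_RATE = 0.10  # share of AI decisions overturned on human review

    @dataclass
    class MonitoringReport:
        approval_rate: float
        override_rate: float
        breaches: list

    def check(decisions: list, overrides: list) -> MonitoringReport:
        approval_rate = sum(decisions) / len(decisions)
        override_rate = sum(overrides) / len(overrides)
        breaches = []
        if not EXPECTED_APPROVAL_RANGE[0] <= approval_rate <= EXPECTED_APPROVAL_RANGE[1]:
            breaches.append("approval rate outside agreed range")
        if override_rate > MAX_OVERRIDE_RATE:
            breaches.append("human override rate above threshold")
        return MonitoringReport(approval_rate, override_rate, breaches)

    # Toy month of decisions: any breaches would be escalated and reported.
    report = check(decisions=[1, 1, 0, 1, 0, 1], overrides=[0, 0, 1, 0, 0, 0])
    print(report)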

AI ethics is a developing area globally, as are AI regulation, legislation and governance. What is right for an organisation and AI system will depend on the specific circumstances of the AI system and the needs of its stakeholders.

Ethical principles need to be considered against any legal and regulatory requirements (and some may also be legal requirements, such as under data protection legislation). But there is a great deal of convergence around ethical principles and what makes an effective ethical framework for the design, deployment, and use of AI. 

We expect ethical frameworks to become more relevant as AI technology, investment, and use cases grow. We also expect them to take on greater importance as organisations use them to demonstrate the trustworthiness of their systems and, for some, to gain a competitive advantage.