Working with colleagues at Princeton University, a researcher from the University of Bath has shown that artificial intelligence (AI) can exhibit the same prejudice and bias as humans.

“We show for the first time that if AI is to exploit via our language the vast knowledge that culture has compiled, it will inevitably inherit human-like prejudices”


Dr Joanna Bryson from the Computer Science department at Bath is spending a year at the Centre for Information Technology Policy (CITP) at Princeton, researching the ethics of computing and AI. “The term ‘AI’ has negative associations yet we use AI every day. It is entrenched in almost everything we do with IT, from Google to Facebook to smartphones,” she said.

While artificial intelligence and machine learning are in a period of astounding growth, there are concerns that these technologies may be prone to the same prejudice and unfairness as many human institutions, say the researchers.

“We show for the first time that human-like semantic biases result from the application of standard machine learning to ordinary language — the same sort of language humans are exposed to every day,” said the team. Because the bias is present in the language used to train the machines, it carries over into the results, ranging from morally neutral biases towards insects or flowers to more problematic ones concerning race or gender.
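
To make the mechanism concrete, the sketch below is a toy illustration of the general idea, not the researchers' actual code or data: word embeddings learned from ordinary text place words that appear in similar contexts close together, so association strengths can be read off with cosine similarity. All words and vectors here are invented for illustration; real studies use embeddings with hundreds of dimensions trained on large text corpora.

```python
# Toy illustration (hypothetical data): reading associations off word vectors.
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(word_vec, pleasant_vecs, unpleasant_vecs):
    """Mean similarity to 'pleasant' words minus mean similarity to
    'unpleasant' words; a positive score means the word leans pleasant."""
    return (np.mean([cosine(word_vec, p) for p in pleasant_vecs])
            - np.mean([cosine(word_vec, u) for u in unpleasant_vecs]))

# Invented 3-dimensional embeddings; real embeddings are learned from text.
vectors = {
    "flower":   np.array([0.90, 0.10, 0.00]),
    "insect":   np.array([0.10, 0.90, 0.00]),
    "pleasant": np.array([0.80, 0.20, 0.10]),
    "lovely":   np.array([0.85, 0.15, 0.05]),
    "nasty":    np.array([0.15, 0.85, 0.10]),
    "horrible": np.array([0.20, 0.80, 0.05]),
}

pleasant = [vectors["pleasant"], vectors["lovely"]]
unpleasant = [vectors["nasty"], vectors["horrible"]]

for word in ("flower", "insect"):
    print(word, round(association(vectors[word], pleasant, unpleasant), 3))

# With these toy vectors, "flower" scores positive (closer to pleasant words)
# and "insect" scores negative -- the same kind of pattern the researchers
# report for embeddings trained on everyday language.
```

If the training text tends to mention certain groups of words alongside pleasant or unpleasant terms, measures like this will reflect that pattern, which is why biases in ordinary language surface in the trained model.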

“We show for the first time that if AI is to exploit via our language the vast knowledge that culture has compiled, it will inevitably inherit human-like prejudices,” said the team. “In other words, if AI learns enough about the properties of language to be able to understand and produce it, it also acquires cultural associations that can be offensive, objectionable, or harmful. These are much broader concerns than intentional discrimination, and possibly harder to address.”

The results have implications not only for AI and machine learning, but also for the fields of psychology, sociology and human ethics, as they raise the possibility that mere exposure to everyday language can account for the bias.

Can we avoid prejudice in AI?

Many people assume that machine learning is neutral, giving AI a fairness beyond what is present in human society. Instead, concerns about machine prejudice are now coming to the fore. These have been documented in research ranging from online advertising to criminal sentencing.

Most experts and commentators recommend that AI should always be applied transparently, and certainly without prejudice. Both the code of the algorithm and the process for applying it must be open to the public. Transparency should allow courts, companies, citizen watchdogs, and others to understand, monitor, and suggest improvements to algorithms.

Another recommendation has been to promote diversity among AI developers, to address insensitive or under-informed training of machine learning algorithms. A third has been to encourage collaboration between engineers and domain experts who are knowledgeable about historical inequalities.

You can follow Dr Bryson’s work at the CITP website.