“We stand on the cusp of a revolution, the engineers tell us…the potential impact of these technologies challenge the very definition of what it is to be human. Seen in this light, the development and deployment of these technologies represent a practical experiment in the philosophical, today conducted by engineers and scientists. But we need a broader conversation” – Nils Gilman, Berggruen Institute

The development of deep learning over the past few years means the “artificial intelligence revolution” has arrived at last, bringing the promise of massive productivity increases and widespread disruption in labour markets, writes Kai-Fu Lee in his book AI Superpowers. But the intense competition between America and China, and the rapid pace of change, mean Europe may get left behind and “become irretrievably subordinated to the geopolitical algorithms of others”.

Yet there is an opportunity for Europe, given its focus on protecting the privacy of the user, which, Lee states, “would cause the American giants some amount of trouble”. As a Washington Post article suggests, Europe could “re-decentralise the Internet, both to assure a fairer allocation of the digital dividend and hand back control of personal data from big tech to individuals. This culture-bound constraint on data collection would re-orient the development of AI in a more social instead of consumer-marketing direction”.

While I agree that Europe, if it chooses, could play a unique part in the development of AI, with a far stronger focus on privacy, data, and ethics than the American tech giants, what also gets lost in the noise and the hype is any demonstrable understanding of how humans and machines can work together to avoid the dystopian future the tech pessimists (or realists) are predicting.

We’re already seeing the damaging and dangerous effects of automated decision-making in areas such as policing, finance, credit reporting, and the administration of public programmes, and a corresponding plea for basic technological design principles to minimize harm. Amid the tsunami of technological advances meant to increase efficiency and drive automation, the social harm left in its wake is deeply disturbing.

More and more people and institutions are calling for ethical standards, for a Hippocratic oath for the data scientists, systems engineers, and computer scientists. Some are pointing to bioethics as a model.

Any form of red tape is probably a technology company’s worst nightmare; companies fear it would threaten innovation and speed to market. But there is so much at stake. As Lee states in his concluding chapter, “appreciating the momentous social and economic turbulence that is on our horizon should humble us”.

I’d argue that before we reach that horizon, technology companies need to work closely with social scientists, particularly anthropologists and sociologists, to understand how technology can align and integrate with human workers: what can be automated and what needs to be done by humans. Machines are exceptional at churning through gargantuan amounts of data at speed but far less good at making nuanced judgements based on wisdom and knowledge.

Otherwise, the social harm of emerging technologies will continue unabated, no doubt making millions for the companies concerned but leaving yet more mess (not only social but economic, political, and environmental) in its wake, which we all, collectively, will have to contend with. An increasingly ‘woke’ public is becoming less tolerant of, and more demanding of, companies that repeatedly fail to consider the long-term downstream effects of their technologies; look at the significant fallout this past year alone over Facebook, Amazon, DeepMind, and Uber, to name a few.

The technology companies that take ethics and social harm seriously, that take genuine steps to mitigate them, and that ask questions such as ‘How can emerging technologies be designed for human futures?’ and ‘What should those human futures look like?’ will, I suspect, see their hoped-for benefits to society come to fruition.

Given that Bristol is one of the UK’s top technology and innovation hubs, I’d like to see our city lead on this. Technologists cannot decide our collective future on their own. The question of what it means to be human is a discussion for us all.