Opinion: “Excuse me?” AI really needs to change its tone
“We’re being held due to signalling problems ahead on the line.”
These are not welcome words to interrupt your morning commute. But the familiar, understanding tone of your train driver can help ease the anxiety. When the commentary goes slightly off script, it can even rouse a few chuckles from even the most hardened and weary passenger.
But what happens when all the drivers are automated? What should be the tone when an AI is informing you of a delay, or something more serious?
We’re just at the beginning of the AI revolution. AI will change our norms, as quickly as maps on smartphones have changed how we navigate cities. In the next few years, organisations will need to think about not just their brand image, but also their brand voice. What do they want to sound like to their users? What should their communication style be when chatbots and virtual voice assistants are doing the talking for them?
A recent survey by LivePerson asked 5,000 people across six countries what personality they would prefer their AI chatbot to have, and it uncovered some fascinating cultural differences. A resounding 78% of North Americans surveyed wanted a bot to have a friendly personality, compared with just 20% in Japan. Meanwhile, 18% of Germans wanted a bot with a curt, direct style, compared with 2% in Australia. Somewhat surprisingly, the highest percentage wanting a ‘hip’ chatbot came from France (8%).
Survey results like these don’t, in isolation, lead to any major design breakthroughs. However, they do highlight an interesting design and delivery problem. AI chatbots and voice assistants will need to speak in a tone and character that fits their role, their service provider and their cultural surroundings. In a classic British situation where the 7:47 Great Western train to Bristol is delayed, a clear and polite voice will suffice; in a late-night fast food joint, the AI taking your order by phone or text needs to sound quite different.
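To make the design problem concrete, here is a minimal sketch of how a system might map locale and situation onto a tone profile. Everything here (`ToneProfile`, `TONE_MATRIX`, `pick_tone`) is an illustrative assumption, not a real product API; a production system would need far richer cultural and contextual modelling than a lookup table.

```python
# Hypothetical sketch: choosing a tone profile for an assistant's reply
# based on locale and situation. All names are illustrative, not a real API.
from dataclasses import dataclass


@dataclass(frozen=True)
class ToneProfile:
    formality: str   # e.g. "polite", "casual", "formal"
    verbosity: str   # e.g. "brief", "chatty"
    apology: bool    # whether the message should open with an apology


# A crude matrix keyed by (locale, situation). Real systems would derive
# this from user research rather than hard-coding it.
TONE_MATRIX = {
    ("en-GB", "train_delay"): ToneProfile("polite", "brief", True),
    ("en-US", "fast_food_order"): ToneProfile("casual", "chatty", False),
    ("ja-JP", "train_delay"): ToneProfile("formal", "brief", True),
}

# Falling back to a neutral, polite default is safer than guessing hip or curt.
DEFAULT_TONE = ToneProfile("polite", "brief", False)


def pick_tone(locale: str, situation: str) -> ToneProfile:
    """Return the tone profile for a (locale, situation) pair, or a safe default."""
    return TONE_MATRIX.get((locale, situation), DEFAULT_TONE)


print(pick_tone("en-GB", "train_delay"))
```

The interesting design choice is not the lookup itself but the default: when the system has no cultural knowledge for a pairing, it should degrade to something inoffensive rather than attempt personality.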
One company that has been tackling this problem already is Apple.
When Siri was first launched (seven years ago already), she had to speak with the right tone. She had to be friendly and helpful, ‘spunky without being sharp, happy without being cartoonish’. Over time, after a huge effort by Apple, she has won many people over: an impressive 375 million people now speak to her each month. This is a commendable start.
For companies without the resources of Apple, though, the path to even a basic proof of concept is not yet clear, strategically or technologically. As adoption of AI voice assistants and chatbots increases, developers will need far greater cultural and situational awareness. Priorities for the experience need to be set from the start. Appreciating complex variables, and designing for them, will be critical. The technology will be continuously learning, and improvements will be made over time, but if it isn’t human-centric from the start there is a real risk of a damaged reputation for the company, and grinding frustration for the user.
The cultural dimension of chatbots and AI assistants is a fascinating space. When a company operates across language groups and cultures, all kinds of design considerations are needed. But even when operating in just one country, the sensitivities and subtleties of people’s lives need to be front of mind. An assistant that misses them becomes pointless, and a pointless assistant is not worth anyone’s attention.
Josh is a digital consultant at BJSS SPARCK. He has a degree in anthropology and people-centered business from the University of Copenhagen. Since joining SPARCK he has worked across a range of innovation projects, including updating an internal company communications system and exploring appropriate technology options for travelling salespeople. He is involved in internal user research and ethical tech groups, as well as chatbot conversational design.