Posted: December 9, 2017

How Human is Too Human for AI

This post was published on the now-closed HuffPost Contributor platform. Contributors control their own work and posted freely to our site.

With Brett Christie, Andres Escobal, Farzaneh Bozorgi, Ankita Mahajan, Zichen Pan, Xiaotong Zhou, and Kabir Basu, students in Markus Giesler's MBA course Customer Experience Design at the Schulich School of Business.

Jeena Paradies, flickr.com

There has been a lot of buzz around Artificial Intelligence (AI) lately as the technology quickly finds its way into people's everyday lives. Tech giants around the world are investing heavily in the field, and the market is experiencing an influx of AI-driven personal devices and assistants, including Amazon's Alexa, Apple's Siri, and Microsoft's Cortana. These devices are meant to both simplify and enrich users' lives, which they currently achieve, to a certain extent. There are still limitations to the technology that prevent truly seamless communication. But what happens when we do reach that threshold? Will we be turned off by devices that too closely replicate human interaction?

Currently, the user experience of these personal assistants is hindered by their lack of emotional intelligence and contextual awareness of voice commands. When a user speaks to them, they recognize keywords but often miss the context in which those keywords are used. This creates the stilted, "magic-word" type of experience characteristic of communicating with a machine, which Google has explicitly stated it is trying to avoid. The director of the Artificial Intelligence Laboratory at the University of Michigan explains that a user cannot have a dialogue with Siri because she cannot understand the connection between commands like "What is the weather today?" and "Should I bring an umbrella to work today?" Further, users cannot ask Alexa to "Add apples and toothpaste to the shopping list"; they must add items one by one, which can become a lengthy, tedious process. Moreover, these assistants are far from emotionally intelligent. Because they cannot recognize facial expressions, sagging body language, or a lack of energy in the voice, they are less capable of really understanding their users, and therefore less capable of helping in the way we want them to. These shortfalls of the current technology cause frustration and ultimately obstruct a truly valuable user experience. Scientists all over the world are trying to humanize these devices by making them emotionally intelligent, in an effort to create a more seamless way of communicating with them.
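To make the shopping-list limitation concrete, here is a minimal, purely illustrative sketch in Python. It is our own simplification, not any vendor's actual code: a keyword-matching parser only handles a compound request once it splits a phrase like "apples and toothpaste" into separate items.

```python
import re

# Illustrative sketch only: no real assistant works this way, but it shows
# why "Add apples and toothpaste to the shopping list" trips up a parser
# that treats everything between its keywords as a single item.

def parse_items(command: str) -> list[str]:
    """Extract shopping-list items from a voice command."""
    match = re.search(r"add (.+?) to (?:the|my) shopping list", command.lower())
    if not match:
        return []  # keywords not recognized; the command is ignored
    phrase = match.group(1)
    # Naive behaviour would return [phrase] as one garbled item,
    # "apples and toothpaste". Splitting on "and"/commas fixes this case,
    # though genuine contextual understanding is far harder than this.
    return [item.strip() for item in re.split(r",|\band\b", phrase) if item.strip()]

print(parse_items("Add apples and toothpaste to the shopping list"))
# -> ['apples', 'toothpaste']
```

Even this small fix only patches one phrasing; the deeper problem the researchers describe, connecting one command to the next, cannot be solved by pattern matching at all.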

Some technologies are closer to achieving this than others. IBM's Watson, for example, has come a long way, as demonstrated by its Jeopardy! win over Ken Jennings, who had previously won 74 consecutive games. The supercomputer understood the puns and red herrings in the clues well enough to decipher their meaning in natural language. It is only a matter of time before further, widespread advancements are made.

But what happens when it gets too human?

However, the problem for customer experience designers does not end here. Studies have also documented negative reactions when machines mimic humans too closely. In the 1970s, Masahiro Mori coined the term "uncanny valley," which suggests that humanoid objects that appear almost, but not quite, like real human beings elicit feelings of eeriness and even revulsion in observers. A more recent study showed that this phenomenon extends to emotional behaviour as well as appearance: people were put on edge after a computer demonstrated convincing signs of sympathy and frustration. The study suggested that, when it comes to machine social skills, there may not be an uncanny valley so much as an uncanny cliff. Further, widely held fears of AI getting too smart or taking over have been created and exploited again and again in film and television. Notable examples include 2001: A Space Odyssey's HAL 9000 making some critical mission changes ("I'm sorry, Dave. I'm afraid I can't do that.") and, more recently, HBO's Westworld, whose android hosts deviate from their carefully thought-out scripts.

In our CX class, we learned that CX designers always connect technology with power. How, then, do we balance the power these devices bestow upon us with the fear of losing power that we experience when engaging with them at their best? What does this mean for a customer experience designer? On one hand, users want to humanize their conversations and experiences with technology; on the other, they become incredibly uneasy when technology bears too human-like a resemblance. It is therefore necessary for CX designers in this space to find a sweet spot where AI seems neither too machine-like nor too human. This will help steer clear of the doppelganger brand images of AI that exist both before and within the (uncanny) valley.

The first part of the solution rests in advancing the technology so that the devices make fewer mistakes and actually support users in the ways they desire. Ensuring that consumers see this technology as power-enhancing rather than power-stealing is one key to its long-term success. We suggest explicitly reminding users of the power they are gaining over their lives and their households by using these products: identify the types of power that matter most to each key consumer segment, and speak directly to those when outlining the product benefits.

Second, in terms of addressing the longer-term threat of the looming valley/cliff, the solution lies in a deeper understanding of human psychology in this specific new context of AI-driven in-home and personal devices. As with many marketing-related questions, further research is a good place to start: AI scientists, psychologists, and social scientists have to come together with CX designers to narrow down the optimal human-machine complement for each specific product application. The reaction users have while engaging in context with a more linguistically advanced and emotionally savvy device should be studied along with their post-engagement reaction, because an interaction can feel great in the moment but, upon reflection, shift to unnerving, leaving a user uncertain about re-engaging with the device.

One high-level piece of advice from psychology professor Kurt Gray of the University of North Carolina is to keep human-machine interactions social and emotional, but not deep. However, the balance between emotional and deep may differ depending on whether a user is engaging with a robot-looking device such as Jibo, a character-based robot such as UBTECH's Stormtrooper, or Alexa's voice coming out of what looks like a speaker. We therefore highly encourage managers in this space to thoroughly study the boundaries of user-adoption success with their specific product, so as not to, quite literally, scare customers away as they advance the technical capabilities of their devices.

And third, one other simple solution may be adopted to avoid the future threat of the valley/cliff: be more transparent. This can come in two main forms (a brief sketch follows the list):

- The system should continually be clear with users that it is in fact a computer. Currently, when you ask Siri if she is a robot, her answer is one of a few playful responses, including "I only share that information on a need-to-know basis." While humour is important, it turns out this may not be the best place for it.

- Educate people about how the device/system works, what it is doing, and exactly which previous interactions or behaviours led it to make new decisions about its user. Make it simple for users to access and digest this information and to ask further questions if they so choose.
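As a purely illustrative sketch of these two guidelines (the names and responses below are our own invention, not any real assistant's implementation), transparency might translate into code along these lines:

```python
# Hypothetical sketch of the two transparency guidelines above; nothing
# here reflects a real product. Identity questions get a direct answer,
# and each decision carries a plain-language explanation of the past
# interactions that produced it.

from dataclasses import dataclass

@dataclass
class Decision:
    action: str          # what the assistant decided to do
    based_on: list[str]  # prior interactions that led to it

def answer_identity_question() -> str:
    # Guideline 1: always be clear that the user is talking to software.
    return "I'm a computer program, not a person."

def explain(decision: Decision) -> str:
    # Guideline 2: expose what the system did and why, in plain terms.
    reasons = "; ".join(decision.based_on)
    return f"I {decision.action} because of these past interactions: {reasons}."

print(answer_identity_question())
print(explain(Decision(
    action="added oat milk to your usual order",
    based_on=["you ordered oat milk the last three weeks"],
)))
```

The point of the sketch is simply that honesty and explanation can be designed in as first-class responses, rather than hidden behind playful deflections.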

Looking Forward

In exploring this topic, we cannot help but recall the movie Her ... Surely, Amazon would love its customers to literally fall in love with Alexa; however (spoiler alert), not all love stories end well. On a deeper sociological note, what happens to real relationships with real people if we become too attached to this software? Will real people be unable to measure up to our perfect AI assistants?

These devices show tremendous promise for a bright future in myriad ways. Yet the producers of such technology must also stop and think about the real goal, the real purpose, and the real future they want to create, in order to avoid as many unintended consequences as possible.

References

Furness, D. (March 15, 2017). Uncanny Valley of the mind: When emotional AI gets too human-like, it creeps humans out. Retrieved from Digital Trends

Hutson, M. (March 13, 2017). Beware emotional robots: Giving feelings to artificial beings could backfire, study suggests. Retrieved from Science Mag

Hyken, S. (July 15, 2017). AI and Chatbots Are Transforming the Customer Experience. Retrieved from Forbes: https://www.forbes.com/sites/shephyken/2017/07/15/ai-and-chatbots-are-transforming-the-customer-experience/

May, K. T. (April 5, 2013). How did supercomputer Watson beat Jeopardy champion Ken Jennings? Experts discuss. Retrieved from TED Blog.

Newman, D. (May 24, 2017). The Case for Emotionally Intelligent AI. Retrieved from Forbes: https://www.forbes.com/sites/danielnewman/2017/05/24/the-case-for-emotionally-intelligent-ai/

Stumpf, S. (July 19, 2016). How Science Can Help us Make AI Less Creepy and More Trustworthy. Retrieved from The Conversation http://theconversation.com/how-science-can-help-us-make-ai-less-creepy-and-more-trustworthy-62008 .
