With two weeks to go in our Artificially Enhanced Banking Crowdstorm, we are excited to share this fourth blog post in our series of four exploring the background, culture, trends, and inspirations in the wide world of artificial intelligence. You can get up to speed by reading the first three posts: “An Introduction to AI Through Film”, “What Designers Need to Know About AI: A Crash Course”, and “5 Great AI Use Cases You Should Know About”.
Using Neuroscience to Design Great AI
Without neuroscience—the study of the nervous system and brain—we wouldn’t even have artificial intelligence. AI is built from models that imitate or even replicate the functions and processes of the brain. Some believe that these models are getting so good, so sophisticated, that we’re not far away from a computerized intelligence that is indistinguishable from the real thing.
How can we make the most of it?
Designers of experiences and services have a new tool in our toolbox: AI. For example, conversational interfaces are a huge part of designing interactions with artificial intelligence. As designers, it’s our job to think about when a conversation is the best form of exchange versus when a graphical or tactile interface would better suit the user’s needs. Insights from neuroscience can help us with these choices.
Nina Kraus is a renowned neuroscientist at Northwestern University in Chicago, Illinois. She has been studying language, sound, and the brain for many years; she has appeared on NPR, in the New York Times, and on the BBC, and has testified before the United States Congress, all more than once. She has a deep understanding of how the brain processes different types of stimuli and how our brains make and understand conversation. Here’s what she had to say about AI, human communication, and service design.
How does the brain receive and interpret spoken language? How is it different from reading language?
Hearing happens in time, whereas vision is static. So that’s going to affect how you process the information. For example, if I’m listening and you’re talking to me, I can’t slow down or speed up how fast you’re talking. So I either can keep up, or not, or be bored. Whereas with reading, I can manage my own pace. Also, reading happens much faster than the time it takes to speak the words. You can process the words much faster than the time it takes to move your muscles to say things. On the other hand, our hearing system, in terms of how it processes information, is way faster, a thousand times faster, than vision.
How does the brain handle these two types of information (speech and visual) differently?
Written language is not as susceptible to noise as speaking is. Noise in sound is obviously a problem when you’re thinking about auditory stimulation. Just in terms of misinterpretation, think about the game of telephone. There’s a Latin proverb, verba volant, scripta manent. Words fly, writing remains. I think that captures a lot of the differences. We’re wired to take auditory and visual information together, and certainly when we talk, if you’re looking at my lips, you’re getting a lot more information than if you weren’t. When you’re constructing [those experiences], it’s important to be thinking about whether the visual and auditory information [you’re providing is] congruent and reinforcing.
Sound— what’s so cool about it— it’s invisible, and it’s this incredibly powerful force that people don’t realize. It’s understated, underrated, because you don’t see it right in front of you, and it’s hugely powerful. Your auditory system carries information very, very quickly. And this is where the link with music comes in. There’s a lot of emotion in sound. Certainly there’s emotion you can conjure with words on a page, but when you’re speaking, it’s the music of speech. So we can get a lot of information about whether I really believe what I’m saying, how I feel about you, all of these things. There are these emotional cues.
Interface designers have to make lots of choices about when to use visuals versus spoken language versus written language. What should these designers know?
The most promising part of AI is in customer service, things that don’t require abstract thinking and judgement—a lot of updating of multiple sources of information. We’re never going to be close to figuring out what the brain does. There are just so many gaps. I’ve never seen modeling that made me say, ‘oh wow, this is really telling us an enormous amount; we’re figuring out a lot.’ In my opinion, it falls very, very short of ever truly understanding the system as a whole. But I think that it can inform subsections of how things work. Particularly, if there’s a task that a human knows needs to be done, you can program a computer to perform that task, and it’s likely to perform it with fewer mistakes.
From a product standpoint, speaking to one’s feelings is very important. We learn through a cognitive-sensory reward system, which links up how you think with how you feel. In very controlled animal experiments, if you manipulate the limbic system (which is feeling, the reward system) animals learn way faster, they remember much longer. So engaging that reward system is very important.
Sound is also very good with memory. There’s a reason why history was carried down by bards, even before we wrote. There’s circuitry that has evolved for millennia to help us make sense of sound. Writing is evolutionarily much more recent than speaking; we haven’t even developed structures in our brains that evolved solely for writing. With sound, we learn our ABCs, as you say things again and again. Sound has this privileged role in the brain. So if you want a product to be remembered, there is something to be said for delivering it in a way that you’ll remember it. Think about theme songs and jingles, for example.
What’s the difference between a human brain and a machine brain?
We’ll never be able to replicate a human brain artificially because we don’t understand the brain to begin with. We can’t build a person. We can’t even build a liver. Yeah, you can build little bits and pieces and make that work, but it’s not the same. We don’t even know how the ear works. We’ve got these 30,000 specialized hair cells, and they’re all connected together, and they respond to signals from outside the head, and they respond to signals from inside the brain, and that’s just the end organ. There’s chemistry, there’s electricity, there’s magnetism, there’s hemodynamics, all working inside this little cochlea. We can’t even understand one little piece of the system thoroughly. We just don’t know how they work and how they fit together. That’s not even addressing how we feel, say, guilt, or how we consider abstract thoughts. And I don’t think we are ever going to know, but that’s just my opinion.
With this newfound knowledge, head over to the Artificially Enhanced Banking Crowdstorm and put it to use straight away by sharing your vision of how artificial intelligence can help Deutsche Bank reinvent its customer service experience. Open for submissions until the 28th of June.