Voice control is natural, intuitive, fast and efficient – provided that the conversational AI is well designed.
But a major appeal of voice control is that we can use it while engaged in a parallel task, when our hands are busy or we don't want to avert our gaze.
In fact, I’m often asked if using voice control, or having a conversation in general, isn’t a critical additional distraction in safety-critical situations. A typical example of such a situation is voice control while driving.
Of course, I dealt with this question very intensively during my time as conversation design lead at BMW.
So what happens in our brain when we perform several activities simultaneously?
The capacity of the so-called working memory, i.e. the part of the memory that allows us to store and manipulate information in the short term, is very limited (more on this topic here).
But what does this mean for the use of voice control as a “secondary task” while we are involved in a safety-critical “primary task”?
When we drive a vehicle, we feel that it is easier for us to make a parallel call, for example, than to write a message on our smartphone. This is because driving and texting are primarily visual tasks, while talking on the phone is primarily an auditory task.
Nevertheless, all these tasks naturally cause mental workload.
According to the theory of multiple resources (Wickens, 1989), tasks using different resources interfere less with each other than tasks using the same resources.
It assumes that total cognitive capacity is composed of different individual capacities that are independent of each other.
Visual and auditory channels draw on separate resources, both at the sensory level (eyes versus ears) and in the brain itself (visual versus auditory cortex). Driving and talking also rely on different processing codes: spatial versus verbal.
Because of this code separation, driving – a visual-spatial, manual task – can be time-shared with conversation – a verbal task – with little dual-task decrement.
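The core idea can be made concrete with a toy sketch: describe each task by the resources it occupies and score how much two tasks overlap. The dimension names, task descriptions, and scoring below are my own illustrative assumptions, not Wickens' actual model, which is far richer than this.

```python
# Toy sketch of the multiple-resource idea: each task is described by
# the resources it occupies (input modality, processing code, response
# type). These dimensions and the simple overlap score are illustrative
# assumptions, not Wickens' actual computational model.

DIMENSIONS = ("modality", "code", "response")

def interference(task_a: dict, task_b: dict) -> float:
    """Fraction of resource dimensions two tasks share (0.0 to 1.0).

    The more dimensions they share, the more the tasks should compete
    for the same resources when performed at the same time.
    """
    shared = sum(task_a[d] == task_b[d] for d in DIMENSIONS)
    return shared / len(DIMENSIONS)

driving = {"modality": "visual",   "code": "spatial", "response": "manual"}
texting = {"modality": "visual",   "code": "verbal",  "response": "manual"}
talking = {"modality": "auditory", "code": "verbal",  "response": "vocal"}

print(interference(driving, texting))  # high overlap: strong interference
print(interference(driving, talking))  # no overlap: mild interference
```

In this sketch, driving and texting collide on two of three dimensions (visual input, manual response), while driving and a conversation share none – which is exactly why the combination feels so much easier.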
Sounds totally confusing?!
Yes, it is. The theory is complex, so I'll spare you the remaining dimensions of the multiple resource model of time-sharing and workload.
Are there studies on the subject that are less theoretical and demonstrate the principle in practice?!
Absolutely, glad you asked! 😉
There is a highly interesting dataset of real driving data – The second Strategic Highway Research Program (SHRP 2). It is the largest study of its kind, including data from 50 million vehicle miles and 5.4 million trips. Data from instrumented vehicles was collected from more than 3,500 participants during a 3-year period in the U.S.
Recorded data included driving parameters such as speed, acceleration, and braking, all vehicle controls, forward radar, and lane position. In addition, video was captured of the road ahead and behind, of the driver’s face, and of the dashboard.
Based on this exceptional data set, consisting of everyday driving situations, many different studies have been conducted. These came to the following results, among others:
✓ Visually distracting secondary tasks, such as texting or eating, are associated with driving errors and accidents.
✓ Cognitive distractions, such as talking on the phone or with a passenger, do not have a detrimental effect on driving performance. (Not even when this cognitive distraction occurs in combination with the strong emotion of anger, such as during an argument!)
✓ Cognitive distraction is often observed when drivers are tired and even has a protective effect against accidents! (Drivers deliberately distract themselves, for example by making phone calls, to keep themselves awake.)
So, to sum up: voice control is an ideal operating modality for situations where the eyes and hands are already occupied – especially when driving.
Wickens, C. D. (1989). Processing resources in attention. In D. Holding (Ed.), Human skills (pp. 77–105). New York: Wiley.