The Extra Dimension

The Extra Dimension features deep discussions on how technology intersects with other parts of our lives. Welcome to the heart of the technological convergence.

Infotainment Interface Design for Automobiles

Episode #28 | The Fringe #461

Ian R Buck makes an audio adaptation of his senior seminar paper from 2015.

Episode Summary

00:00 | Intro

00:54 | Abstract

In an increasingly connected, mobile world, situations where users do not interact with their digital lives are becoming few and far between. This can be a problem in situations that demand a user's attention for their safety. Driving is one such situation, and it is doubly important because a significant portion of the Western population drives on a daily basis. Researchers have tested different interface designs with the goal of finding one that demands the least cognitive load while still allowing the user to perform the desired task efficiently. In this paper, interfaces incorporating auditory cues, voice dictation, and air gestures are discussed.

01:42 | 1. Introduction

03:29 | 2. Background

03:38 | 2.1 User Interfaces

04:01 | 2.1.1 Touchscreens

04:56 | 2.1.2 Voice Dictation

05:37 | 2.1.3 Screen Reading

06:50 | 2.1.4 Air Gestures

07:25 | 2.2 Testing Distracted Driving

08:11 | 2.2.1 Lane Changing Exercise

08:44 | 2.2.2 Car Following Exercise

09:25 | 2.2.3 Eye Tracking

09:55 | 3. Auditory Cues

Figure 1: Mean time visual fixation on the primary task. The control was significantly higher than all other conditions. Spindex+TTS was significantly higher than no sound and spearcon+TTS, marked here with dots.

14:15 | 4. Text-To-Speech and Voice Dictation

Figure 2a: Mean lateral deviation during the responding phase. Overall, using a single TTS voice resulted in lower deviation than matching TTS voices.
Figure 2b: Mean lane change initiation during the responding phase. Using a single TTS voice resulted in faster reaction times than matching TTS voices, but the difference was not significant.
Figure 3: Email comprehension. Matching TTS voices had much higher comprehension than a single TTS voice for low-complexity messages.

21:04 | 5. Air Gestures

Figure 4: Total time and gaze time for secondary tasks.

26:27 | 6. Discussion

30:05 | 7. Conclusion

30:44 | Outro

References

  1. T. M. Gable, B. N. Walker, H. R. Moses, and R. D. Chitloor. Advanced auditory cues on mobile phones help keep drivers’ eyes on the road. In Proceedings of the 5th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI ’13, pages 66–73, New York, NY, USA, 2013. ACM.
  2. J. Heikkinen, E. Mäkinen, J. Lylykangas, T. Pakkanen, K. Väänänen-Vainio-Mattila, and R. Raisamo. Mobile devices as infotainment user interfaces in the car: Contextual study and design implications. In Proceedings of the 15th International Conference on Human-computer Interaction with Mobile Devices and Services, MobileHCI ’13, pages 137–146, New York, NY, USA, 2013. ACM.
  3. Y. Liang, J. D. Lee, and L. Yekhshatyan. How dangerous is looking away from the road? Algorithms predict crash risk from glance patterns in naturalistic driving. Human Factors: The Journal of the Human Factors and Ergonomics Society, 54(6):1104–1116, 2012.
  4. K. R. May, T. M. Gable, and B. N. Walker. A multimodal air gesture interface for in vehicle menu navigation. In Adjunct Proceedings of the 6th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI ’14, pages 1–6, New York, NY, USA, 2014. ACM.
  5. S. Truschin, M. Schermann, S. Goswami, and H. Krcmar. Designing interfaces for multiple-goal environments: Experimental insights from in-vehicle speech interfaces. ACM Trans. Comput.-Hum. Interact., 21(1):7:1–7:24, Feb. 2014.

Attributions