Original scientific paper

https://doi.org/10.7307/ptt.v37i5.958

Research on Multimodal Human-Machine Interface for Takeover Request of Automated Vehicles

Junfeng WANG ; School of Design and Innovation, Shenzhen Technology University, Shenzhen, China
Yue WANG ; School of Design and Innovation, Shenzhen Technology University, Shenzhen, China
Yin CUI ; School of Design and Innovation, Shenzhen Technology University, Shenzhen, China *

* Corresponding author.


Pages: 1220-1235


Abstract

In L3 automated driving, a driver engaged in non-driving related tasks (NDRTs) can easily miss a takeover request, creating a safety hazard. The takeover prompt strategy has a strong influence on this risk. In this paper, four multimodal takeover interfaces for automated driving are designed to address typical takeover scenarios in which the driver is under a medium or high task load. Experiments are conducted in a driving simulator, and each scheme’s takeover success rate, takeover time and takeover quality are used as evaluation criteria to study the effect of the different interfaces on the driver’s takeover performance. The results show that a multimodal takeover interface can shorten the takeover time: the visual-auditory-tactile prompt yields the shortest takeover time; the visual-auditory and auditory-tactile prompts yield nearly the same takeover time, but the latter increases the vehicle’s longitudinal deceleration; and the visual-tactile prompt gives the worst takeover performance. These results provide practical implications for developing suitable interfaces that remind drivers to take over automated vehicles.
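As a reading aid, the sketch below illustrates how the three evaluation criteria named in the abstract (takeover success, takeover time and a deceleration-based quality proxy) could be computed from a driving-simulator log. This is not the authors’ code: the column names, the 10 s takeover budget and the use of peak longitudinal deceleration as the quality measure are illustrative assumptions only.

# Minimal sketch, under the assumptions stated above; not the paper's implementation.
from dataclasses import dataclass
import pandas as pd

@dataclass
class TakeoverMetrics:
    success: bool            # did the driver respond within the assumed budget?
    takeover_time_s: float   # TOR onset -> first steering/brake input
    max_decel_ms2: float     # peak longitudinal deceleration after takeover (quality proxy)

def evaluate_trial(log: pd.DataFrame, tor_time_s: float,
                   budget_s: float = 10.0) -> TakeoverMetrics:
    """log: frame with assumed columns 'time_s', 'driver_input' (bool), 'accel_x' (m/s^2)."""
    after_tor = log[log["time_s"] >= tor_time_s]
    inputs = after_tor[after_tor["driver_input"]]
    if inputs.empty:
        # No driver response at all: count the trial as a failed takeover.
        return TakeoverMetrics(False, float("inf"), 0.0)
    takeover_time = inputs["time_s"].iloc[0] - tor_time_s
    post = after_tor[after_tor["time_s"] >= inputs["time_s"].iloc[0]]
    # Braking shows up as negative longitudinal acceleration; report its magnitude.
    max_decel = max(0.0, -post["accel_x"].min())
    return TakeoverMetrics(takeover_time <= budget_s, takeover_time, max_decel)

Aggregating these per-trial metrics across participants would then give the per-interface takeover success rate, mean takeover time and deceleration-based quality compared in the paper.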

Keywords

human-machine interface; takeover in driving automation; non-driving related tasks; takeover performance

Hrčak ID: 335997

URI: https://hrcak.srce.hr/335997

Publication date: 25 September 2025
