Report abstract: Display of Artificial Intelligence in the Vehicle HMI

The KARLI consortium is researching AI applications designed to make automated driving safer. These applications detect driver states and interact with vehicle occupants via various senses as needed. The AI HMI (Artificial Intelligence of Human-Machine Interaction) work package focuses on the question of how the use of AI affects the appearance of an HMI. This is also the subject of the report “Display of Artificial Intelligence in the Vehicle HMI” by Dr.-Ing. Peter Rössger. Read the summary here.

Author: Dr.-Ing. Peter Rössger

With the AI HMI work package (Artificial Intelligence of Human-Machine Interaction), the funded project KARLI focuses on the question of how the use of Artificial Intelligence affects the appearance of an HMI. Specifically, KARLI deals with AI applications that are intended to make automated driving safer by detecting driver states and interacting with vehicle occupants via various senses.

The world we live in is made of images. Considering how visual humans are and how much visual data we process daily, it makes sense that the KARLI project is investigating how a visual representation of AI can create a better user experience while improving safety in driving and in human-technology interaction.

Most AI applications run in the background, unnoticed by the user. They do not have their own HMI; instead, the results of the AI are displayed without making the AI context visible. An example: based on various parameters, an AI recognizes that a vehicle occupant who should actually be at the wheel is tired. It interacts with the person by playing acoustic signals, displaying symbols, opening a window, or even parking the vehicle at the side of the road. The occupant is not aware that all of these actions were initiated by an AI.
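The escalation described in this example can be pictured as a simple rule-based mapping from a detected driver state to interventions. The following is only an illustrative sketch: the drowsiness score, thresholds, and action names are assumptions for this summary, not KARLI specifications.

```python
# Sketch: mapping an AI-detected drowsiness level to escalating interventions.
# Score range, thresholds, and action names are illustrative assumptions.

def interventions(drowsiness: float) -> list[str]:
    """Return the interventions for a drowsiness score in [0, 1]."""
    actions = []
    if drowsiness >= 0.3:
        actions.append("play acoustic signal")
    if drowsiness >= 0.5:
        actions.append("display warning symbol")
    if drowsiness >= 0.7:
        actions.append("open window")
    if drowsiness >= 0.9:
        actions.append("park vehicle at roadside")
    return actions

# A moderately tired occupant triggers only the milder interventions
print(interventions(0.6))
```

Note that nothing in this mapping tells the occupant that an AI is behind the interventions, which is exactly the visibility gap the project addresses.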

In KARLI, the AI is to be visualized for three reasons:

  1. The project creates a framework in which concepts for exactly these use cases can be developed and ideas can be elaborated and tested.
  2. AI should be demystified and users' fears allayed.
  3. Project results can be communicated better, more easily, and faster with a visual representation of the AI.

As a first step, the possible types of representation were structured. In ascending order from abstract to realistic, the following categories were introduced:

  • Abstract: the representation of the AI remains completely abstract, with actions represented by spheres, surfaces, or other geometric elements. States of the AI can be indicated, for example, by colors, shapes, color changes, and shape changes.
  • Abstract-figurative: the representation of the AI remains basically abstract, but has human features, for example, an abstracted face or a whole torso. Again, shapes, colors and their changes can be used as a means of communication. In addition, there are the possibilities offered by the stylized representation of human facial expressions. Lips and eyes can move, the forehead can be furrowed. Furthermore, abstract changes can be mixed with mimic ones to achieve additional effects.
  • Figurative: these are comic- or cartoon-like representations. The AI's avatar can be human, animal, or plant. Hybrid forms and transitions are possible. Communication about resource-saving driving, for example, could be handled by a plant avatar, and media control by a human one.
  • Realistic: this category covers photorealistic representations of the AI. A human likeness is used to represent the AI.
  • HMI-integrated: this category stands apart from the others. Here, the activity of the AI is represented by changes in the existing interaction HMI, for example changes in screen backgrounds, lighting effects on buttons, or the movement of individual screen elements. This combines abstract representation with the interaction elements themselves.
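The taxonomy above can be captured as a simple enumeration, e.g. for tagging HMI concepts during analysis. This is a sketch of our own; the class, member, and example names are not taken from the report.

```python
from enum import Enum

class AIRepresentation(Enum):
    """The five representation categories, ordered from abstract to realistic;
    HMI_INTEGRATED stands apart from that scale."""
    ABSTRACT = 1             # spheres, surfaces, colors, shape changes
    ABSTRACT_FIGURATIVE = 2  # abstract, but with stylized human features
    FIGURATIVE = 3           # comic/cartoon-like avatars: human, animal, plant
    REALISTIC = 4            # photorealistic human likeness
    HMI_INTEGRATED = 5       # AI activity shown via changes in the existing HMI

# Tagging examples mentioned later in the report summary
examples = {
    "voice assistant light ring": AIRepresentation.ABSTRACT,
    "NIO NOMI": AIRepresentation.ABSTRACT_FIGURATIVE,
}
```

An ordered enumeration like this makes it easy to compare concepts along the abstract-to-realistic axis while keeping the HMI-integrated category as a separate case.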

In the next step, existing AI applications were analyzed for visual representations. This analysis yielded surprisingly few results. In the vast majority of cases, the existence of an AI is not represented in the HMI. For voice assistants, a highly abstract solution is usually chosen: luminous elements such as rings or dots symbolize the activity of the AI. A rare exception is NOMI from the vehicle manufacturer NIO, where an abstract-figurative representation was implemented.

Subsequently, the research was extended to the representation of AI in art, primarily in movies. There, numerous examples of all the above-mentioned categories can be found. The AI HAL from “2001: A Space Odyssey” is an example of an abstract representation, R2-D2 and C-3PO from “Star Wars” are abstract-figurative, and Ava from “Ex Machina” is realistic.

Each of the representation types has advantages and disadvantages. Abstract representations tend to be difficult to understand: users have to go through a learning process before they understand what an AI is trying to tell them. Realistic representations can come across as unsympathetic or even creepy; there is the danger of the uncanny valley, i.e. maximum uncertainty about whether one is interacting with a machine or a human. Furthermore, abstract and abstract-figurative representations require increased design effort, which is naturally absent in systems without any AI representation. Realistic representations can also be easier to design than complex abstract visualizations.

For the KARLI project, after weighing all advantages and disadvantages, an abstract-figurative representation of the AI was proposed. It combines high flexibility with good comprehensibility. Finally, user studies will be carried out in which different variants are tested against each other.

Here you can read the full report (only available in German).