Sophisticated electronics instead of eye contact – human/machine interface for intuitive communication
- Outside: clear orientation for passengers, unambiguous authentication and more safety for pedestrians and cyclists
- Inside the people-mover module: 360-degree halo display for base information, augmented reality elements for an individualised feed
Intuitive, automatic, subconscious – communication in road traffic has long been about more than visible features such as indicators or signage. Road users often reach agreement with gestures or even just eye contact. In the Vision URBANETIC, an innovative human/machine interface (HMI) ensures communication that is easy and intuitive to understand for passengers and for the vehicle's surroundings. It also provides the interaction framework for all necessary communication between the vehicle and, for instance, pedestrians and cyclists. The result is seamless integration into regular road traffic.
LED display and digital shadows
LED displays on the people-mover module enable passengers to clearly identify their allocated Vision URBANETIC. Cameras on the right side of the vehicle and 360-degree sensors recognise when pedestrians or cyclists are in the immediate vicinity, defined as a distance of between 30 centimetres and two metres. Digital shadowing uses LEDs on the exterior to project a shadow of the person’s silhouette onto the side of the vehicle. A total of 40 metres of LED strips contain hundreds of individual LEDs that can change colour. This interaction assures cyclists, for instance, that the Vision URBANETIC has seen them and will act accordingly.
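The trigger logic for the digital shadow described above can be sketched as follows. The distance band (30 centimetres to two metres) and the 40-metre LED strip come from the text; all names, the LED density and the shadow width are illustrative assumptions, not details of the actual system.

```python
from dataclasses import dataclass

SHADOW_MIN_M = 0.30   # inner edge of the "immediate vicinity" band (from the text)
SHADOW_MAX_M = 2.00   # outer edge of the band (from the text)

@dataclass
class DetectedPerson:
    distance_m: float   # lateral distance from the vehicle side, from the 360-degree sensors
    position_m: float   # position along the vehicle side (0 = front)

def should_project_shadow(person: DetectedPerson) -> bool:
    """Project a silhouette only while the person is within the vicinity band."""
    return SHADOW_MIN_M <= person.distance_m <= SHADOW_MAX_M

def shadow_led_segment(person: DetectedPerson, strip_length_m: float = 40.0,
                       leds_per_m: int = 30, shadow_width_m: float = 0.6) -> range:
    """Map the person's position to the LED indices that render the shadow.

    LED density and shadow width are assumed values for illustration only.
    """
    centre = int(person.position_m * leds_per_m)
    half = int(shadow_width_m * leds_per_m / 2)
    total = int(strip_length_m * leds_per_m)
    return range(max(0, centre - half), min(total, centre + half))
```

As the person moves along the vehicle, the returned index range shifts with them, so the shadow appears to track their silhouette.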
Wide-ranging interaction builds trust in autonomous driving
Vision URBANETIC uses car-to-x communication via a standardised platform to interact with other vehicles and road users and to recognise road signs and traffic signals. Interaction with the passengers is more complex, as they are used to communicating with a human being. To build trust and allay uncertainty, the HMI in Vision URBANETIC must ensure that all processes, from reserving a trip to exiting the vehicle at the destination, are as simple and self-explanatory as possible. Booking via an app is the starting point. Once a trip reservation has been made, the app shows where the user can board. Furthermore, the user receives a two-digit vehicle number, a display colour and a self-chosen avatar. When the Vision URBANETIC approaches the meeting point, the number and symbol are shown on the side display in the selected colour. Subsequent passenger authentication can take one of several forms: via an app or, at the next level, via a fingerprint or facial recognition.
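The identifiers described above can be sketched as a small data model: a two-digit vehicle number, a display colour and a user-chosen avatar, which together form what the side display shows when the vehicle arrives. Everything here is a hypothetical illustration; the class, the colour list and the display format are assumptions, not the actual app's API.

```python
import secrets
from dataclasses import dataclass

# Assumed palette of display colours; the real system's choices are not specified.
DISPLAY_COLOURS = ["blue", "green", "orange", "violet"]

@dataclass
class TripReservation:
    vehicle_number: str   # two-digit number shown on the side display
    colour: str           # colour in which number and avatar are rendered
    avatar: str           # symbol chosen by the passenger in the app

def reserve_trip(avatar: str) -> TripReservation:
    """Create the visual identifiers the passenger will match at the stop."""
    number = f"{secrets.randbelow(100):02d}"   # two digits, zero-padded, e.g. "07"
    colour = secrets.choice(DISPLAY_COLOURS)
    return TripReservation(number, colour, avatar)

def side_display_text(reservation: TripReservation) -> str:
    """What the approaching vehicle renders on its side display."""
    return f"{reservation.vehicle_number} {reservation.avatar} ({reservation.colour})"
```

The passenger only needs to match the number, colour and avatar from the app against the side display; stronger authentication (fingerprint, facial recognition) would then happen at boarding.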
Vehicle status (waiting, driving) is communicated to the outside world via a front-end display in the radiator grille. When the vehicle starts up and is about to set off, the sensors visibly deploy as a signal to the outside world of imminent movement.
A new dimension in infotainment
Vision URBANETIC offers a new level of infotainment diversity inside the people-mover module. The 360-degree halo display installed in the ceiling conveys the most important base information on the route, while augmented-reality (AR) elements for the user app on individual mobile devices enhance the journey with information tailored specifically to the passenger – from tourist hotspots in the city to news on individually selected topics. Other available features include suggested route guidance based on predetermined parameters and the respective user profile. The fastest route for commuters, the most inexpensive route for cost optimisation or a route that provides a succession of city sights – the passenger has a wide range of choices.
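The route suggestion described above can be sketched as a simple selection over candidate routes using the preferences the text names: fastest for commuters, cheapest for cost optimisation, or the route passing the most sights. The route model, field names and scoring are illustrative assumptions only.

```python
from dataclasses import dataclass

@dataclass
class Route:
    name: str
    duration_min: float   # estimated travel time
    fare_eur: float       # estimated fare
    sights: int           # number of city sights along the way

def suggest_route(routes: list[Route], preference: str) -> Route:
    """Pick a route according to the preference stored in the user profile."""
    if preference == "fastest":
        return min(routes, key=lambda r: r.duration_min)
    if preference == "cheapest":
        return min(routes, key=lambda r: r.fare_eur)
    if preference == "sights":
        return max(routes, key=lambda r: r.sights)
    raise ValueError(f"unknown preference: {preference!r}")
```

A richer implementation might weight several criteria at once, but the single-preference lookup matches the three example profiles given in the text.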
The user app on the passenger’s mobile device offers functions that extend well beyond searching for and booking connections. It guides the passenger to their virtual stop and, on exiting the vehicle, to their final destination – either with a two-dimensional map view or AR-guided navigation.