Voice Interface for Autonomous Driving based on User experienCe Techniques
Objective of the project
The major challenges facing the automotive industry over the next 15 years are the development of highly autonomous cars and advanced electrification of the engine systems. These developments aim to meet the following objectives:
- Offer the public more fuel-efficient, less polluting cars without sacrificing performance,
- Improve safety,
- Improve the comfort of all occupants of the car to enhance the on-board passenger experience.
The VIADUCT project (Voice Interface for Autonomous Driving based on User ExperienCe Techniques) brings together the scientific and technological expertise of AW Europe, Acapela, Multitel, CETIC, UCL and the University of Namur to develop a new multimodal, adaptive and speech-centric human-machine interface for driving semi-autonomous cars, with a focus on the specific needs of elderly drivers.
The project combines user experience (UX), artificial intelligence (AI) and next-generation voice technologies for interaction between the car and its occupants.
The VIADUCT project has been approved by the Mecatech Cluster within the framework of the Plan Marshall of the Walloon Region.
Contribution of Multitel
In the VIADUCT project, Multitel will bring its expertise in signal processing and in multimodal human-machine interfaces (HMI). Multitel will contribute to the development of the speech technologies, the conversational agent and the driver monitoring system (DMS) used by the HMI, in particular enabling the conversational agent to contextualise its operation. By contextualisation, we mean a set of information related to the environment (weather, traffic, …) and to the driver (identity, physical condition, behaviour, focus of visual attention, …) which will serve to improve and adapt the systems.
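As a minimal sketch of what such a contextualisation layer could look like, the snippet below groups the environment and driver information mentioned above into a simple data structure and derives adaptation hints for a conversational agent. All names, fields and rules are illustrative assumptions, not the project's actual design.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: field names and adaptation rules are illustrative
# assumptions, not the VIADUCT project's actual data model.

@dataclass
class EnvironmentContext:
    weather: str = "unknown"   # e.g. "rain", "clear"
    traffic: str = "unknown"   # e.g. "dense", "fluid"

@dataclass
class DriverContext:
    identity: str = "unknown"
    physical_condition: str = "unknown"  # e.g. "alert", "drowsy"
    behaviour: str = "unknown"
    gaze_target: str = "unknown"         # focus of visual attention, e.g. "road"

@dataclass
class DrivingContext:
    environment: EnvironmentContext = field(default_factory=EnvironmentContext)
    driver: DriverContext = field(default_factory=DriverContext)

    def dialogue_hints(self) -> dict:
        """Derive simple adaptation hints for the conversational agent."""
        return {
            # Slow down speech output in demanding traffic (illustrative rule).
            "slow_speech": self.environment.traffic == "dense",
            # Prompt the driver if their gaze is known to be off the road.
            "attention_prompt": self.driver.gaze_target not in ("road", "unknown"),
        }
```

Such a structure would let the HMI components share one consistent view of the situation, with each modality (speech output, visual prompts) reading only the hints it needs.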
Automotive Human-Machine Interfaces
- AW Technical Center Europe (Aisin Group)
- UNamur - CRIDS/NaDI