Interaction Hardware has arrived

4th November 2021 One minute to read

We integrate a wide variety of input options into our prototypes. In doing so, we implement so-called multimodal interaction: the intelligent combination of several input devices to enable flexible yet intuitive operation for all users.

Image: multiple input devices on a desk, including a foot pedal, a webcam, a joystick, and an eyetracker.

In our past research, we have successfully combined multiple input devices for intuitive, fast, and reliable operation. Most notably, Ramin demonstrated in his work on accessible text input the combination of a touch screen with gaze control in “TAGSwipe”, as well as humming-sound recognition with gaze control in “Hummer”. Similarly, Raphael successfully evaluated internet access with speech combined with gaze control in his research with the Web browser “GazeTheWeb”. At Semanux, we will evolve these approaches from our past research to make them available to all people.

As part of our EXIST funding, we were able to purchase various input devices, which we are now integrating into our prototypes.

We plan to incorporate different input combinations in our prototype. For example, an eyetracker can reveal which link is being looked at on the screen, and a foot switch then gives the signal to follow that link. This is just a simple example of the various multimodal interactions we are evaluating at Semanux.
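The gaze-plus-foot-switch combination above can be sketched in a few lines. This is a minimal, hypothetical illustration, not Semanux code: the `Link` type, the hit-testing, and the foot-switch handler are all assumptions introduced for the example; a real implementation would receive gaze samples from an eyetracker SDK and button events from the operating system.

```python
from dataclasses import dataclass

@dataclass
class Link:
    """A clickable link with its on-screen bounding box (hypothetical model)."""
    url: str
    x: int
    y: int
    width: int
    height: int

    def contains(self, gx: int, gy: int) -> bool:
        """True if the gaze point (gx, gy) falls inside this link's box."""
        return (self.x <= gx < self.x + self.width
                and self.y <= gy < self.y + self.height)

def link_under_gaze(links, gx, gy):
    """Return the first link the user is currently looking at, or None."""
    for link in links:
        if link.contains(gx, gy):
            return link
    return None

def on_foot_switch(links, gaze_point):
    """Foot-switch handler: follow the link currently under the gaze.

    The eyetracker supplies *where*, the foot switch supplies *when*."""
    link = link_under_gaze(links, *gaze_point)
    return link.url if link else None

# Example: the gaze rests on the second link when the foot switch fires.
links = [Link("https://example.org/a", 0, 0, 100, 20),
         Link("https://example.org/b", 0, 30, 100, 20)]
print(on_foot_switch(links, (50, 40)))  # → https://example.org/b
```

The point of the split is that each modality does what it is best at: the gaze selects a target continuously and effortlessly, while the discrete foot-switch press confirms the action, avoiding accidental activations from gaze alone.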
