In our past research, we have successfully combined multiple input devices for intuitive, fast, and reliable operation. Most notably, Ramin demonstrated in his work on accessible text input the combination of a touch screen with gaze control in "TAGSwipe", as well as humming-sound recognition with gaze control in "Hummer". Similarly, Raphael successfully evaluated internet access via speech combined with gaze control in his research with the Web browser "GazeTheWeb". In Semanux, we will evolve these approaches from our past research and make them available to all people.
As part of our EXIST funding, we were able to purchase various input devices, which we are now integrating into our prototype:
- Hand switches, which are triggered by a hand or individual fingers.
- Foot switches, which are triggered like a pedal with a foot.
- Webcams, which provide both sound and high-definition video.
- Depth cameras, which use depth perception to detect body gestures.
- Eyetrackers, which precisely determine the point of eye gaze on the screen.
- Touch screens, which respond to finger touches.
We plan to incorporate different input combinations in our prototype. For example, an eyetracker can reveal which link is being looked at on the screen, and a foot switch then gives the signal to follow that link. This is just a simple example of the various multimodal interactions we are evaluating at Semanux.
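The gaze-plus-switch interaction described above can be sketched in a few lines. This is a minimal illustration, not our actual implementation: the `Link` bounding boxes, the `link_under_gaze` hit test, and the `on_switch_pressed` handler are hypothetical names, and in practice the gaze point would come from an eyetracker SDK and the activation from a switch-input event.

```python
from dataclasses import dataclass

@dataclass
class Link:
    """A clickable link with its bounding box in screen pixels."""
    label: str
    x: int
    y: int
    width: int
    height: int

def link_under_gaze(links, gaze_x, gaze_y):
    """Return the link whose bounding box contains the gaze point, or None."""
    for link in links:
        if (link.x <= gaze_x < link.x + link.width
                and link.y <= gaze_y < link.y + link.height):
            return link
    return None

def on_switch_pressed(links, gaze_x, gaze_y):
    """Foot-switch handler: follow the link currently being looked at."""
    target = link_under_gaze(links, gaze_x, gaze_y)
    if target is not None:
        return f"navigate:{target.label}"
    return "no-op"
```

The key design point is the separation of roles: the eyetracker continuously answers "where?" while the switch answers "when?", so neither modality alone triggers an action by accident.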