A short description of the project
Loss of voluntary muscular control while preserving cognitive function is a common symptom of neuromuscular diseases, leading to a variety of functional deficits, including the inability to operate software tools that require conventional interfaces such as a mouse, keyboard, or touchscreen. As a result, the affected individuals are marginalized and unable to keep up with the rest of society in a digitized world.
MAMEM’s goal is to integrate these people back into society by increasing their potential for communication and exchange in leisure (e.g., social networks) and non-leisure contexts (e.g., the workplace). In this direction, MAMEM delivers the technology to enable interface channels that can be controlled through eye movements and mental commands. This is accomplished by extending the core API of current operating systems with advanced function calls appropriate for accessing the signals captured by an eye-tracker, an EEG recorder, and bio-measurement sensors. Pattern recognition and tracking algorithms are then employed to jointly translate these signals into meaningful control and enable a set of novel paradigms for multimodal interaction. These paradigms will allow for low-level (e.g., move a mouse), meso-level (e.g., tick a box), and high-level (e.g., select n-out-of-m items) control of interface applications through eyes and mind. A set of persuasive design principles, together with profiles modeling the users’ (dis-)abilities, will also be employed to design adapted interfaces for the disabled. MAMEM will engage three different cohorts of disabled users (i.e., people with Parkinson’s disease, neuromuscular disease, and tetraplegia) who will be asked to test a set of prototype applications dealing with multimedia authoring and management.
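To make the layered-control idea more concrete, the following minimal sketch (in Python, with entirely hypothetical names such as GazeSample, smooth_gaze, and DwellSelector; it does not reflect MAMEM's actual API) illustrates how noisy eye-tracker samples might be smoothed into low-level cursor movement, and how a dwell-time threshold over a screen region could implement a meso-level "tick a box" command:

    from dataclasses import dataclass

    @dataclass
    class GazeSample:
        x: float  # horizontal gaze position, in screen pixels
        y: float  # vertical gaze position, in screen pixels
        t: float  # capture timestamp, in seconds

    def smooth_gaze(samples, alpha=0.3):
        """Exponentially smooth noisy gaze samples into a cursor
        trajectory (low-level control: 'move a mouse')."""
        cursor = None
        for s in samples:
            if cursor is None:
                cursor = (s.x, s.y)
            else:
                cursor = (alpha * s.x + (1 - alpha) * cursor[0],
                          alpha * s.y + (1 - alpha) * cursor[1])
            yield cursor

    class DwellSelector:
        """Meso-level control: treat a fixation of `dwell` seconds
        inside a target region as a 'tick the box' click."""
        def __init__(self, region, dwell=1.0):
            self.region = region  # (x_min, y_min, x_max, y_max)
            self.dwell = dwell
            self._entered_at = None

        def update(self, x, y, t):
            x_min, y_min, x_max, y_max = self.region
            if not (x_min <= x <= x_max and y_min <= y <= y_max):
                self._entered_at = None  # gaze left the region: reset
                return False
            if self._entered_at is None:
                self._entered_at = t  # gaze just entered the region
            return t - self._entered_at >= self.dwell

    # Synthetic demo: the gaze drifts toward a checkbox and fixates on it.
    samples = [GazeSample(min(100 + 10 * i, 245), 200, 0.1 * i) for i in range(40)]
    selector = DwellSelector(region=(230, 180, 260, 220), dwell=1.0)
    for s, (cx, cy) in zip(samples, smooth_gaze(samples)):
        if selector.update(cx, cy, s.t):
            print(f"box ticked at t={s.t:.1f}s (cursor at {cx:.0f}, {cy:.0f})")
            break

A high-level paradigm such as selecting n-out-of-m items could be built from the same primitives, for instance by attaching one such dwell-based selector to each of the m candidate items.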
MAMEM’s final objective is to assess the impact of this technology on the social integration of these people, as reflected, for instance, in their becoming more active in sharing content through social networks and in communicating with their friends and family.