Related publications

Published papers related to the MAMEM Project.
1.

Vangelis P. Oikonomou, Georgios Liaros, Kostantinos Georgiadis, Elisavet Chatzilari, Katerina Adam, Spiros Nikolopoulos and Ioannis Kompatsiaris, Comparative evaluation of state-of-the-art algorithms for SSVEP-based BCIs, Technical Report - eprint arXiv:1602.00904, February 2016

Abstract Brain-computer interfaces (BCIs) have been gaining momentum in making human-computer interaction more natural, especially for people with neuro-muscular disabilities. Among the existing solutions the systems relying on electroencephalograms (EEG) occupy the most prominent place due to their non-invasiveness. However, the process of translating EEG signals into computer commands is far from trivial, since it requires the optimization of many different parameters that need to be tuned jointly. In this report, we focus on the category of EEG-based BCIs that rely on Steady-State-Visual-Evoked Potentials (SSVEPs) and perform a comparative evaluation of the most promising algorithms existing in the literature. More specifically, we define a set of algorithms for each of the various different parameters composing a BCI system (i.e. filtering, artifact removal, feature extraction, feature selection and classification) and study each parameter independently by keeping all other parameters fixed. The results obtained from this evaluation process are provided together with a dataset consisting of the 256-channel EEG signals of 11 subjects, as well as a processing toolbox for reproducing the results and supporting further experimentation. In this way, we manage to make available for the community a state-of-the-art baseline for SSVEP-based BCIs that can be used as a basis for introducing novel methods and approaches.
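As a rough, self-contained sketch of the CCA-based detection that such SSVEP benchmarks revolve around (not the toolbox's implementation; function names, harmonic count and trial length are illustrative):

```python
import numpy as np

def max_canonical_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    qx, _ = np.linalg.qr(X)
    qy, _ = np.linalg.qr(Y)
    return np.linalg.svd(qx.T @ qy, compute_uv=False)[0]

def reference_signals(freq, fs, n_samples, n_harmonics=2):
    """Sine/cosine templates at the stimulus frequency and its harmonics."""
    t = np.arange(n_samples) / fs
    cols = []
    for h in range(1, n_harmonics + 1):
        cols += [np.sin(2 * np.pi * h * freq * t),
                 np.cos(2 * np.pi * h * freq * t)]
    return np.column_stack(cols)

def detect_ssvep(eeg, fs, stimulus_freqs):
    """Pick the stimulus frequency whose templates correlate best with the EEG.

    eeg: array of shape (n_samples, n_channels).
    """
    scores = [max_canonical_corr(eeg, reference_signals(f, fs, len(eeg)))
              for f in stimulus_freqs]
    return stimulus_freqs[int(np.argmax(scores))]
```

Given a multichannel trial dominated by a 10 Hz flicker response, `detect_ssvep(eeg, fs, [8.0, 10.0, 12.0])` would be expected to score highest at 10 Hz.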
2.

Zoe Katsarou, MD, Meir Plotnik, PhD, Gabi Zeilig, MD, Amihai Gottlieb, MSc, Rachel Kizony, PhD and Sevasti Bostantjopoulou, Computer uses and difficulties in Parkinson’s disease, The MDS 20th International Congress of Parkinson's Disease and Movement Disorders, Berlin, Germany, June 2016

Abstract Thirty-five PD patients with long experience in computer operation were included in the study. Their mean age was 59.5 (SD 8.27) years. Most of them were in Hoehn and Yahr stage II. The patients' uses, habits, and difficulties with the computer were explored by means of a structured interview, which provided information in the form of yes/no answers to questions covering a wide range of usual computer uses and applications, as well as difficulties in performing various tasks relevant to computer operation. Two quantitative scales were also employed: one referring to the contribution of the computer to social life, everyday activities and emotional well-being (total score: 9 = not important / 45 = very important), and the other exploring the disease's impact on various aspects of computer operation (total score: 11 = no effect / 55 = maximum effect).
3.

S. Bostantjopoulou, M. Plotnik, G. Zeilig, A. Gottlieb, R. Kizony, S. Chlomissiou, A. Nichogiannopoulou, Z. Katsarou, Computer use aspects in patients with motor disabilities, 2nd Congress of the European Academy of Neurology (EAN 2016), Copenhagen, Denmark, May 28-31, 2016

Abstract Three groups of neurological patients were studied: a) 25 patients with Parkinson's disease (PD), b) 23 patients with spinal cord injury (SCI) and c) 19 with neuromuscular disorders (NMD). All patients were assessed by means of two scales, one referring to the contribution of the computer to social life, everyday activities and emotional well-being (CCLS) [total score: 9 = not important / 45 = very important] and the other exploring the disease impact on various aspects of computer operation (DICOS) [total score: 11 = no effect / 55 = maximum effect]. Reliability of both scales was excellent (Cronbach's alpha was 0.87 for CCLS and 0.93 for DICOS). Between-group comparisons showed that NMD patients regarded computer use as most important and SCI patients had the greatest difficulty. Mean total scores (SD) were as follows: a) CCLS: PD patients = 23.28 (7.22); SCI patients = 20.78 (9.72); NMD patients = 32.84 (5.12) [p = 0.000] and b) DICOS: PD patients = 25.9 (9.9); SCI patients = 31.22 (15.0); NMD patients = 20.53 (5.15) [p = 0.017]. Our preliminary results show that patients with motor disabilities regard computer use as an important aspect of their life, and that their disability has a significant effect on their ability to operate a computer satisfactorily. This information is important for the development of innovative technology helping patients to overcome their specific disabilities.
4.

Zeilig Gabi, Gottlieb Amihai, Kizony Rachel, Katsarou Zoe, Bostantzopoulou Sevasti, Nichogiannopoulou Ariana, Chlomissiou Sissy and Plotnik Meir, MAMEM – A novel computer brain interface platform for enhancing social interaction of people with disabilities – Clinical requirements resulting from focus groups and literature survey, 20th European Congress of Physical and Rehabilitation Medicine, Lisbon, Portugal, April 2016

Abstract Health professionals with experience in the fields of Parkinson's disease, neuromuscular conditions and tetraplegia following spinal cord injury, from three medical centers in two countries, participated. We performed a literature survey focusing on the characteristics of the study population, their computer and internet use habits, existing solutions, and specific challenges related to EEG- and eye-movement (EM)-based computer-assistive devices. We conducted three focus groups, with six health professionals per group, and performed a qualitative analysis of the focus group transcripts. The clinical requirements that resulted at the end of this phase were then summarized, prioritized and coded with numbers from 1 (minimal) to 7 (maximal importance) by the health professionals at each site.
5.

Sofia Fountoukidou, Jaap Ham, Peter Ruijten, and Uwe Matzat, Using personalized persuasive strategies to increase acceptance and use of HCI technology, Adjunct Proceedings of the 11th International Conference on Persuasive Technology, Salzburg-Austria, April 2016

Abstract It has been recognized that the adoption of a certain technology does not depend on technological excellence alone. End-users can be reluctant to use it even though they are aware of its benefits. Personalized persuasive technology could be a key to the successful adoption of the MAMEM technology. Previous research has identified various persuasive strategies that can be effective for behavior and/or attitude change. However, it is still unclear which persuasive strategies are most effective for which type of person. Thus, one of the objectives of MAMEM is to adapt persuasive technology interventions to the target group characteristics, in order to increase their effectiveness. To this end, user profiles and sets of personas for the three target groups are created. Next, the Intervention Mapping framework is used for developing and implementing effective persuasive interventions tailored both to the user audience characteristics and to the specific target behaviors. All in all, MAMEM will carry out extensive research in order to contribute to the current state of the art of designing persuasive technologies that take into consideration both the user characteristics and the behavior in question.
6.

R. Menges, C. Kumar, K. Sengupta, S. Staab, eyeGUI: A Novel Framework for Eye-Controlled User Interfaces, 9th Nordic Conference on Human-Computer Interaction, NordiCHI 2016

Abstract The user interfaces and input events of generic applications are typically composed of mouse and keyboard interactions. Eye-controlled applications need to map these interactions to eye gestures, and hence the design and optimization of interface elements becomes a significant concern. In this work, we propose the novel eyeGUI framework to support the development of such interactive eye-controlled applications, covering vital aspects like rendering, layout, dynamic modification of content, and support for graphics and animation.
7.

C. Kumar, R. Menges and S. Staab, Eye-Controlled Interfaces for Multimedia Interaction, in IEEE MultiMedia, vol. 23, no. 4, pp. 6-13, Oct.-Dec. 2016. doi: 10.1109/MMUL.2016.52

Abstract In the digitized world, interacting with multimedia information occupies a large portion of everyday activities; it’s now an essential part of how we gather knowledge and communicate with others. It involves several operations, including selecting, navigating through, and modifying multimedia, such as text, images, animations, and videos. These operations are usually performed by devices such as a mouse or keyboard, but people with motor disabilities often can’t use such devices. This limits their ability to interact with multimedia content and thus excludes them from the digital information spaces that help us stay connected with families, friends, and colleagues. In this paper, we primarily focus on the gaze-based control paradigm that we’ve developed as part of our work at the Institute for Web Science and Technologies (WeST) within the scope of the MAMEM project. We outline the particular challenges of eye-controlled interaction with multimedia information, including initial project results. The objective is to investigate how eye-based interaction techniques can be made precise and fast enough to not only allow disabled people to interact with multimedia information but also make usage sufficiently simple and enticing such that healthy users might also want to include eye-based interaction.
8.

Korok Sengupta, Raphael Menges, Chandan Kumar and Steffen Staab, GazeTheKey: Interactive Keys to Integrate Word Predictions for Gaze-based Text Entry, Demo paper at the 22nd annual meeting of the intelligent user interfaces community, ACM IUI 2017, March 13 - 16, 2017 Limassol, Cyprus

Abstract In conventional keyboard interfaces for eye typing, the functionalities of the virtual keys are static, i.e., the user’s gaze at a particular key simply translates the associated letter into the user’s input. In this work we argue that keys should be more dynamic and embed intelligent predictions to support gaze-based text entry. In this regard, we demonstrate a novel "GazeTheKey" interface where a key not only signifies the input character, but also predicts relevant words that can be selected by the user’s gaze using a two-step dwell time.
9.

Zoe Katsarou, Gabi Zeilig, Meir Plotnik, Amihai Gottlieb, Rachel Kizony and Sevasti Bostantjopoulou, Parkinson’s disease impact on computer use. A patients’ and caregivers’ perspective, The American Academy of Neurology 69th Annual Meeting, Neurology April 18, 2017 vol. 88 no. 16 Supplement P6.009, Boston, MA

Abstract The mean total score of PD patients on the CCLS scale was 22.7 ± 6.9. Single items that scored high were relevant to interpersonal interaction, education, work and employment. The DICOS scale yielded a mean total score of 24.7 ± 10.0. Single items that had a significant impact on the whole score were speed of computer operation and accuracy of performance. Caregivers’ mean scores on the CCLS and DICOS scales were similar to those of the patients (p=0.324).
10.

Anastasios Maronidis, Vangelis Oikonomou, Spiros Nikolopoulos and Ioannis (Yannis) Kompatsiaris, Steady State Visual Evoked Potential Detection Using Subclass Marginal Fisher Analysis, Proceedings of the 8th International IEEE EMBS Conference on Neural Engineering, May 25-28, 2017, Shanghai China

Abstract Recently, SSVEP detection from EEG signals has attracted the interest of the research community, leading to a number of well-tailored methods. Among these methods, Canonical Correlation Analysis (CCA) along with several variants have gained the leadership. Despite their effectiveness, due to their strong dependence on the correct calculation of correlations, these methods may prove inadequate in the face of deficiencies in the number of channels used, the number of available trials or the duration of the acquired signals. In this paper, we propose the use of Subclass Marginal Fisher Analysis (SMFA) in order to overcome such problems. SMFA has the power to effectively learn discriminative features of poor signals, and this advantage is expected to offer the robustness needed to handle such deficiencies. In this context, we pinpoint the qualitative advantages of SMFA, and through a series of experiments we prove its superiority over the state-of-the-art in detecting SSVEPs from EEG signals acquired with limited resources.
11.

Vangelis P. Oikonomou, Anastasios Maronidis, Georgios Liaros, Spiros Nikolopoulos and Ioannis Kompatsiaris, Sparse Bayesian Learning for Subject Independent Classification with Application to SSVEP-BCI, Proceedings of the 8th International IEEE EMBS Conference on Neural Engineering, May 25-28, 2017, Shanghai China

Abstract Sparse Bayesian Learning (SBL) is a widely used framework which helps us to deal with two basic problems of machine learning: avoiding overfitting of the model and incorporating prior knowledge into it. In this work, multiple linear regression models under the SBL framework are used for the problem of multiclass classification when multiple subjects are available. As a case study, we apply our method to the detection of Steady State Visual Evoked Potentials (SSVEP), a problem that arises frequently in the Brain-Computer Interface (BCI) paradigm. The multiclass classification problem is decomposed into multiple regression problems. By solving these regression problems, a discriminant vector is learned for further processing. In addition, the adoption of the kernel trick and special treatment of the produced similarity matrix provide us with the ability to use a Leave-One-Subject-Out training procedure, resulting in a classification system suitable for subject independent classification. Extensive comparisons are carried out between the proposed algorithm, the SVM classifier and the CCA-based methodology. The experimental results demonstrate that the proposed algorithm outperforms the competing approaches, in terms of classification accuracy and Information Transfer Rate (ITR), when the number of utilized EEG channels is small.
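The core idea of decomposing multiclass classification into per-class regressions can be sketched with ordinary ridge regression as a stand-in for the paper's sparse Bayesian linear models (all names and the regularization choice are illustrative, not the authors' code):

```python
import numpy as np

def fit_one_vs_rest_ridge(X, y, lam=1.0):
    """Fit one ridge regression per class against +1/-1 targets.

    Illustrates decomposing multiclass classification into regressions;
    the paper learns each regressor with Sparse Bayesian Learning instead.
    """
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append a bias column
    classes = np.unique(y)
    A = Xb.T @ Xb + lam * np.eye(Xb.shape[1])  # regularized normal equations
    W = np.column_stack([np.linalg.solve(A, Xb.T @ np.where(y == c, 1.0, -1.0))
                         for c in classes])
    return classes, W

def predict_one_vs_rest(X, classes, W):
    """Assign each sample to the class whose regressor scores highest."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return classes[np.argmax(Xb @ W, axis=1)]
```

Each column of `W` is the discriminant vector of one regression problem; classification reduces to taking the argmax over the per-class regression outputs.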
12.

Raphael Menges, Chandan Kumar, Daniel Mueller, Korok Sengupta, GazeTheWeb: A Gaze-Controlled Web Browser, (TPG challenge winner) Proceedings of the 14th Web for All Conference. ACM, 2017

Abstract Web is essential for most people, and its accessibility should not be limited to conventional input sources like mouse and keyboard. In recent years, eye tracking systems have greatly improved, beginning to play an important role as an input medium. In this work, we present GazeTheWeb, a Web browser accessible solely by eye gaze input. It effectively supports all browsing operations like search, navigation and bookmarks. GazeTheWeb is based on a Chromium powered framework, comprising Web extraction to classify interactive elements, and application of gaze interaction paradigms to represent these elements.
13.

Chandan Kumar, Raphael Menges, Daniel Mueller, Steffen Staab, Chromium based Framework to Include Gaze Interaction in Web Browser, (honourable mention) Proceedings of the 26th International Conference Companion on World Wide Web (pp. 219-224). International World Wide Web Conferences Steering Committee, 2017

Abstract Enabling Web interaction by non-conventional input sources like eyes has great potential to enhance Web accessibility. In this paper, we present a Chromium based inclusive framework to adapt eye gaze events in Web interfaces. The framework provides more utility and control to develop a fully featured interactive browser, compared to the related approaches of gaze-based mouse and keyboard emulation or browser extensions. We demonstrate the framework through a sophisticated gaze driven Web browser, which effectively supports all browsing operations like search, navigation, bookmarks, and tab management.
14.

Fotis Kalaganis, Elisavet Chatzilari, Kostas Georgiadis, Spiros Nikolopoulos, Nikos Laskaris and Yiannis Kompatsiaris, An Error Aware SSVEP-based BCI, 30th IEEE International Symposium on Computer-based Medical Systems, Special Track on Multimodal Interfaces for Natural Human Computer Interaction: Theory and Applications, IEEE CBMS 2017, June 22-24, 2017 Thessaloniki - Greece

Abstract Error-related potentials (ErrPs) have lately been used to improve several existing BCI applications. In our study we investigate the contribution of ErrPs to an SSVEP-based BCI. An extensive study is presented in order to discover the limitations of the proposed scheme. Using Common Spatial Patterns and Random Forests we show encouraging results regarding the incorporation of ErrPs in an SSVEP system. Finally, we provide a novel methodology, the Inverse Correct Response Time (ICRT), that can measure the gain a BCI system obtains by incorporating ErrPs in terms of time efficiency.
15.

Vangelis Oikonomou, Kostas Georgiadis, Georgios Liaros, Spiros Nikolopoulos and Yiannis Kompatsiaris, A comparison study on EEG signal processing techniques using motor imagery EEG data, 30th IEEE International Symposium on Computer-based Medical Systems, Special Track on Multimodal Interfaces for Natural Human Computer Interaction: Theory and Applications, IEEE CBMS 2017, June 22-24, 2017 Thessaloniki - Greece

Abstract Brain-computer interfaces (BCIs) have been gaining momentum in making human-computer interaction more natural, especially for people with neuro-muscular disabilities. Among the existing solutions the systems relying on electroencephalograms (EEG) occupy the most prominent place due to their non-invasiveness. In this work we provide a review of various existing techniques for the identification of motor imagery (MI) tasks. More specifically, we perform a comparison between CSP-related features and features based on Power Spectral Density (PSD) techniques. Furthermore, for the identification of MI tasks two well-known classifiers are used, the Linear Discriminant Analysis (LDA) and the Support Vector Machine (SVM). Our results confirm that PSD features demonstrate the most consistent robustness and effectiveness in extracting patterns for accurately discriminating between left and right MI tasks.
16.

Korok Sengupta, Jun Sun, Raphael Menges, Chandan Kumar and Steffen Staab, Analyzing the Impact of Cognitive Load in Evaluating Gaze-based Typing, 30th IEEE International Symposium on Computer-based Medical Systems, Special Track on Multimodal Interfaces for Natural Human Computer Interaction: Theory and Applications, IEEE CBMS 2017, June 22-24, 2017 Thessaloniki - Greece, Best Student Paper Award

Abstract Gaze-based virtual keyboards offer people with motor disabilities a method of text entry by eye movements. The effectiveness and usability of these keyboards have traditionally been evaluated with conventional text entry performance measures such as words per minute, keystroke savings, error rate, accuracy, etc. However, in comparison to conventional text entry approaches, gaze-based typing involves natural eye movements that are highly correlated with human brain cognition. Employing eye gaze as an input could lead to excessive mental demand, and in this work we argue the need to include cognitive load as an eye-typing evaluation measure. We evaluate three variations of gaze-based virtual keyboards, which differ in the positioning of word suggestions. The conventional text entry metrics indicate no significant difference in the performance of the different keyboard designs. However, STFT (Short-Time Fourier Transform) based analysis of EEG signals indicates variations in the mental workload of participants while interacting with these designs. Moreover, the EEG analysis provides insights into the variation of the user’s cognition in different typing phases and intervals, which should be considered to improve eye-typing usability.
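A minimal numpy sketch of the kind of STFT band-power computation such a workload analysis builds on (the window length, hop, taper and theta-band choice are assumptions for illustration; this is not the paper's pipeline):

```python
import numpy as np

def stft_band_power(signal, fs, band=(4.0, 8.0), win=256, hop=128):
    """Power in a frequency band per short-time window (Hann-tapered FFT).

    Returns one band-power value per window, i.e. a coarse time course of
    activity in the chosen band (theta, 4-8 Hz, by default).
    """
    freqs = np.fft.rfftfreq(win, d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    window = np.hanning(win)
    powers = []
    for start in range(0, len(signal) - win + 1, hop):
        seg = signal[start:start + win] * window
        powers.append((np.abs(np.fft.rfft(seg)) ** 2)[mask].mean())
    return np.array(powers)
```

Tracking such band power per typing phase is one common way to compare mental workload across interface designs.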
17.

Chandan Kumar, Raphael Menges and Steffen Staab, Assessing the Usability of a Gaze-Adapted Interface with Conventional Eye-based Emulation, 30th IEEE International Symposium on Computer-based Medical Systems, Special Track on Multimodal Interfaces for Natural Human Computer Interaction: Theory and Applications, IEEE CBMS 2017, June 22-24, 2017 Thessaloniki - Greece

Abstract In recent years, eye tracking systems have greatly improved, beginning to play a promising role as an input medium. Eye trackers can be used for application control either by simply emulating the mouse device in the traditional graphical user interface, or through customized interfaces for eye gaze events. In this work we evaluate these two approaches to assess their impact on usability. We present a gaze-adapted Twitter application interface with direct interaction by eye gaze input, and compare it to Twitter in a conventional browser interface with gaze-based mouse and keyboard emulation. We conducted an experimental study, which indicates a significantly better subjective user experience for the gaze-adapted approach. Based on the results, we argue the need for user interfaces that interact directly with eye input to provide an improved user experience, more specifically in the field of accessibility.
18.

Vangelis Oikonomou, George Liaros, Spiros Nikolopoulos and Ioannis Kompatsiaris, Sparse Bayesian Learning for Multiclass Classification with application to SSVEP- BCI, 7th Graz Brain-Computer Interface Conference, September 18th – 22nd, 2017, Graz, Austria

Abstract Sparse Bayesian Learning (SBL) is a basic tool of machine learning. In this work, multiple linear regression models under the SBL framework (namely MultiLRM) are used for the problem of multiclass classification. As a case study we apply our method to the detection of Steady State Visual Evoked Potentials (SSVEP), a problem encountered within the Brain-Computer Interface (BCI) paradigm. The multiclass classification problem is decomposed into multiple regression problems. By solving these regression problems, a discriminant feature vector is learned for further processing. Furthermore, by adopting the kernel trick the model is able to reduce its computational cost. To obtain the regression coefficients of each linear model, the Variational Bayesian framework is adopted. Extensive comparisons are carried out between the MultiLRM algorithm and several other competing methods. The experimental results demonstrate that the MultiLRM algorithm achieves better performance than the competing algorithms for SSVEP classification, especially when the number of EEG channels is small.
19.

Sofia Fountoukidou, Jaap Ham, Cees Midden, and Uwe Matzat, Using Tailoring to Increase the Effectiveness of a Persuasive Game-Based Training for Novel Technologies, Proceedings of the Personalization in Persuasive Technology Workshop, Persuasive Technology 2017, Amsterdam, The Netherlands, April 2017

Abstract A vast majority of people with motor disabilities cannot take part in today’s digital society, due to the difficulties they face in using conventional interfaces (i.e., mouse and keyboard) for computer operation. The MAMEM project aims at facilitating the social inclusion of these people by developing a technology that allows computer operation solely by using the eyes and mind. However, training is one of the key factors affecting users’ technology acceptance. Game-based computer training that includes persuasive strategies could be an effective way to influence user beliefs and behaviours regarding a novel system. Tailoring these strategies to the individual level is a promising way to increase the effectiveness of a persuasive game. In the current paper, we briefly discuss the theoretical development of a persuasive game-based training for the MAMEM technology, as well as how we used tailored communication strategies to further enhance user technology acceptance. The development of such a tailored persuasive game will be essential for increasing acceptance and usage of assistive technology, but also for scientific insight into the personalization of persuasion.
20.

Korok Sengupta, Chandan Kumar and Steffen Staab, Usability Heuristics for Eye-controlled User Interfaces, The 2017 COGAIN Symposium: Communication by Gaze Interaction, Wuppertal, Germany, August 19th and 21st, 2017

Abstract The evolution of affordable assistive technologies like eye tracking helps people with motor disabilities to access information on the Internet or work on computers. However, eye-tracking environments need to be specially built for better usability and accessibility of the content, and should not rely on interface layouts designed for conventional mouse- or touch-based interfaces. In this work, we argue the need for a domain-specific heuristic checklist for eye-controlled interfaces, one that conforms to usability and design principles and is less demanding from a cognitive-load perspective. The focus is on understanding the product in use inside the gaze-based environment and then applying the heuristic guidelines to its design. We propose an eight-point questionnaire to validate the usability heuristic guidelines for eye-controlled interfaces.
21.

Spiros Nikolopoulos, Kostas Georgiadis, Fotis Kalaganis, Georgios Liaros, Ioulietta Lazarou, Katerina Adam, Anastasios Papazoglou-Chalikias, Elisavet Chatzilari, Vangelis P. Oikonomou, Panagiotis C. Petrantonakis, Ioannis Kompatsiaris, Chandan Kumar, Raphael Menges, Steffen Staab, Daniel Müller, Korok Sengupta, Sevasti Bostantjopoulou, Zoe Katsarou, Gabi Zeilig, Meir Plotnik, Amihai Gottlieb, Sofia Fountoukidou, Jaap Ham, Dimitrios Athanasiou, Agnes Mariakaki, Dario Comanducci, Edoardo Sabatini, Walter Nistico and Markus Plank, The MAMEM Project – A dataset for multimodal human-computer interaction using biosignals and eye tracking information, Technical Report, Zenodo. http://doi.org/10.5281/zenodo.834154

Abstract In this report we present a dataset that combines multimodal biosignals and eye tracking information gathered under a human-computer interaction framework. The dataset was developed in the vein of the MAMEM project that aims to endow people with motor disabilities with the ability to edit and author multimedia content through mental commands and gaze activity. The dataset includes EEG, eye-tracking, and physiological (GSR and heart rate) signals along with demographic, clinical and behavioral data collected from 36 individuals (18 able-bodied and 18 motor-impaired). Data were collected during the interaction with a specifically designed interface for web browsing and multimedia content manipulation, and during imaginary movement tasks. Alongside these data we also include evaluation reports from both the subjects and the experimenters as far as the experimental procedure and collected dataset are concerned. We believe that the presented dataset will contribute towards the development and evaluation of modern human-computer interaction systems that would foster the integration of people with severe motor impairments back into society.
22.

Fotis P. Kalaganis, Elisavet Chatzilari, Spiros Nikolopoulos, Nikos A. Laskaris and Yiannis Kompatsiaris, A Collaborative Representation Approach to Detecting Error-Related Potentials in SSVEP-BCIs, 25th ACM MM Conference, Thematic Workshops ’17, October 23–27, 2017, Mountain View, CA, USA

Abstract This study takes advantage of Error Related Potentials, a certain type of neurophysiological event associated with humans’ ability to observe and recognize erroneous actions, in order to improve SSVEP-based Brain Computer Interfaces (BCIs). The Error Related Potentials serve as a passive correction mechanism that originates directly from the user’s brain. In this paper we propose a novel approach to spatial filtering, based on a supervised variant of Collaborative Representation Projections (CRP) offering a more discriminant representation of electroencephalography signals for detecting Error Related Potentials. This new approach enhances the detectability of Error Related Potentials by projecting the spatial information of signals into a new space where samples of the same class tend to form local neighborhoods. Moreover, the limitations under which the Error Related Potentials positively contribute to the performance of a SSVEP-based BCI are explored. For this reason we also provide a new methodology, namely Inverse Correct Response Time (ICRT), that reliably captures the trade-off, between the gain of the automated error detection and the induced time delay of a BCI system that potentially incorporates Error Related Potentials.
23.

Elisavet Chatzilari, Georgios Liaros, Kostas Georgiadis, Spiros Nikolopoulos and Yiannis Kompatsiaris, Combining the Benefits of CCA and SVMs for SSVEP-based BCIs in Real-world Conditions, 25th ACM MM Conference, Workshop MMHealth’17, October 23–27, 2017, Mountain View, CA, USA

Abstract In this paper we propose a novel method for SSVEP classification that combines the benefits of the inherently multi-channel CCA, the state-of-the-art method for detecting SSVEPs, with robust SVMs, one of the most popular machine learning algorithms. Besides robustness, the employment of SVMs provides us with a confidence score, allowing us to dynamically trade off the trial length against the accuracy of the classifier, and vice versa. By balancing this trade-off we are able to offer personalized, self-paced BCIs that maximize the ITR of the system. Furthermore, we propose to perturb the template frequencies of CCA so as to accommodate the requirements of real-world BCI applications, where the environmental conditions may not be ideal, in contrast to existing methods that rely on the assumption of soundproof and distraction-free environments.
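The confidence-based trade-off between trial length and accuracy can be illustrated with a simple thresholded stopping rule (purely a sketch; the paper derives its confidence scores from SVMs, while here they are given as inputs):

```python
def dynamic_stopping(predictions, threshold):
    """Stop at the first trial length whose confidence clears the threshold.

    predictions: list of (label, confidence) pairs, one per increasing
    trial length; falls back to the longest trial if none qualifies.
    Returns (label, number_of_windows_used).
    """
    for n_used, (label, confidence) in enumerate(predictions, start=1):
        if confidence >= threshold:
            return label, n_used
    return predictions[-1][0], len(predictions)
```

A lower threshold yields shorter trials (higher speed, more errors); a higher threshold yields longer trials (slower, more accurate), which is the knob a self-paced BCI can tune per user to maximize ITR.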
24.

Raphael Menges, Chandan Kumar, Ulrich Wechselberger, Christoph Schaefer, Tina Walber and Steffen Staab, Schau genau! A Gaze-Controlled 3D Game for Entertainment and Education, COGAIN Symposium, Wuppertal, August 21st, 2017

Abstract Eye tracking devices have become affordable. However, they are still not very present in everyday life. To explore the feasibility of modern low-cost hardware in terms of reliability and usability for broad user groups, we present a gaze-controlled game in a standalone arcade box with a single physical buzzer for activation. The player controls an avatar in the appearance of a butterfly, which flies over a meadow towards the horizon. The goal of the game is to collect spawning flowers by hitting them with the avatar, which increases the score. Three mappings of gaze on screen to the world position of the avatar, featuring different levels of intelligence, were defined and randomly assigned to players. Both a survey after each session and the high-score distribution are considered in the evaluation of these control styles. An additional serious part of the game educates players about flower species, rewarding prior knowledge with a point multiplier. During this part, gaze data on images is collected, which can be used for saliency calculations. Nearly 3000 completed game sessions were recorded at a state horticulture show in Germany, which demonstrates the impact and acceptability of this novel input technique among lay users.
25.

Peter A. M. Ruijten, Cees J. H. Midden & Jaap Ham, Ambiguous Agents: The Influence of Consistency of an Artificial Agent’s Social Cues on Emotion Recognition, Recall, and Persuasiveness, International Journal of Human–Computer Interaction, Volume 32, Issue 9, 2016

Abstract This article explores the relation between consistency of social cues and persuasion by an artificial agent. Including (minimal) social cues in Persuasive Technology (PT) increases the probability that people attribute human-like characteristics to that technology, which in turn can make that technology more persuasive (see, e.g., Nass, Steuer, Tauber, & Reeder, 1993). PT in the social actor role can be equipped with a variety of social cues to create opportunities for applying social influence strategies (for an overview, see Fogg, 2003). However, multiple social cues may not always be perceived as being consistent, which could decrease their perceived human-likeness and their persuasiveness. In the current article, we investigate the relation between consistency of social cues and persuasion by an artificial agent. Findings of two studies show that consistency of social cues increases people’s recognition and recall of artificial agents’ emotional expressions, and make those agents more persuasive. These findings show the importance of the combined meaning of social cues in the design of persuasive artificial agents.
26.

Jaap Ham, Jef van Schendel, Saskia Koldijk and Evangelia Demerouti, Finding Kairos: The Influence of Context-Based Timing on Compliance with Well-Being Triggers, Symbiotic 2016, LNCS 9961, pp. 89–101, 2017.

Abstract For healthy computer use, frequent, short breaks are crucial. This research investigated whether context-aware persuasive technology can identify opportune and effective moments (of high user motivation and ability to perform target behavior) for triggering short breaks fostering symbiotic interactions between e-Coaching e-Health technology and users. In Study 1, office workers rated their motivation and ability to take a short break (probed at random moments). Simultaneously their computer activity was recorded. Results showed that computer activity (time since last break; change in computer activity level) can predict moments of high and low (perceived) ability (but not motivation) to take a short break. Study 2 showed that when office workers received triggers (to take a short break) at moments of high (vs. low) ability (predicted based on computer activity), compliance increased 70%. These results show that context information can be used to identify opportune moments, at which persuasive triggers are more effective.
27.

Konstantinos Ilias Georgiadis, Nikolaos Laskaris, Spiros Nikolopoulos and Yiannis Kompatsiaris, Discriminative Codewaves: a symbolic dynamics approach to SSVEP recognition for asynchronous BCI, Journal of Neural Engineering, Volume 15, Number 2, January 2018

Abstract Objective: Steady-state visual evoked potential (SSVEP) is a very popular approach to establishing a communication pathway in brain computer interfaces (BCIs), without any training requirements for the user. Brain activity recorded over occipital regions, in association with stimuli flickering at distinct frequencies, is used to predict the gaze direction. High performance is achieved when the analysis of the multichannel signal is guided by the driving signals. It is the scope of this study to introduce an efficient way to identify the attended stimulus without the need to register the driving signals. Approach: Brain response is described as a dynamical trajectory towards one of the attractors associated with the brainwave entrainment induced by the attended stimulus. A condensed description for each single-trial response is provided by means of discriminative vector quantization (DVQ), and different trajectories are disentangled based on a simple classification scheme that uses templates and confidence intervals derived from a small training dataset. Main results: Experiments, based on two different datasets, provided evidence that the introduced approach compares favorably to well-established alternatives regarding the Information Transfer Rate (ITR). Significance: Our approach relies on (but is not restricted to) single-sensor traces, incorporates a novel description of brainwaves based on semi-supervised learning, and its great advantage stems from its potential for self-paced BCI.
28.

Meir Plotnik, Amihai Gottlieb, Rachel Kizony, Zoe Katsarou, Sevasti Bostantzopoulou, Ariana Nichogiannopoulou, Sissy Chlomissiou and Gabi Zeilig, Clinical Research Example – MAMEM – Multimedia Authoring and Management Using Your Eyes and Mind, Rehab Science and Technology Update (RSTU), 7-10 February 2016, Tel Aviv

Abstract With the growing number of people with severe disabilities who live longer, and the growing use of computers for social interaction, we introduce the ambitious objective of the Multimedia Authoring and Management using your Eyes and Mind (MAMEM) project: more natural human-computer interfaces based on electroencephalography (EEG) and eye-movement (EM) technologies.
29.

Spiros Nikolopoulos, Kostas Georgiadis, Fotis Kalaganis, Georgios Liaros, Ioulietta Lazarou, Katerina Adam, Anastasios Papazoglou-Chalikias, Elisavet Chatzilari, Vangelis P. Oikonomou, Panagiotis C. Petrantonakis, Ioannis Kompatsiaris, Chandan Kumar, Raphael Menges, Steffen Staab, Daniel Müller, Korok Sengupta, Sevasti Bostantjopoulou, Zoe Katsarou, Gabi Zeilig, Meir Plotnik, Amihai Gottlieb, Racheli Kizoni, Sofia Fountoukidou, Jaap Ham, Dimitrios Athanasiou, Agnes Mariakaki, Dario Comanducci, Edoardo Sabatini, Walter Nistico and Markus Plank, A Multimodal dataset for authoring and editing multimedia content: The MAMEM project, In Data in Brief, Volume 15, 2017, Pages 1048-1056, ISSN 2352-3409

Abstract We present a dataset that combines multimodal biosignals and eye tracking information gathered under a human-computer interaction framework. The dataset was developed in the vein of the MAMEM project that aims to endow people with motor disabilities with the ability to edit and author multimedia content through mental commands and gaze activity. The dataset includes EEG, eye-tracking, and physiological (GSR and heart rate) signals collected from 34 individuals (18 able-bodied and 16 motor-impaired). Data were collected during the interaction with a specifically designed interface for web browsing and multimedia content manipulation, and during imaginary movement tasks. The presented dataset will contribute towards the development and evaluation of modern human-computer interaction systems that would foster the integration of people with severe motor impairments back into society.
30.

Meir Plotnik, Zoe Katsarou, Amihai Gottlieb, Adam Grinberg, Rachel Kizony, Gabi Zeilig, Sevasti Bostantjopoulou-Kambouroglou, The importance of using computers in populations with Parkinson’s disease and spinal cord injury: a patients’ and caregivers’ perspective, SfN’s 47th annual meeting, Neuroscience 2017, November 11-15, Washington, DC, USA, 2017

Abstract Twenty individuals with PD (mean age: 59.1±8.05 years) and eighteen with SCI (mean age: 45.4±15.5 years) were included in the study. Participants' working habits with the computer were explored by means of a structured interview. For this interview, we adapted parts of the Matching Person and Assistive Technology (MPT) questionnaire, to account for people with movement disabilities, and created a quantitative questionnaire focused on the contribution of the computer to various aspects of social life. In addition, each participant was asked to define the three most important aspects. In parallel, the PD and SCI participants’ caregivers (20 and 11, respectively) were interviewed using the same method.
31.

Lazarou I, Nikolopoulos S, Petrantonakis PC, Kompatsiaris I and Tsolaki M, EEG-based brain computer interfaces for communication and rehabilitation of people with motor impairment: a novel approach of the 21st century, Front. Hum. Neurosci. 12:14. doi: 10.3389/fnhum.2018.00014

Abstract People with severe motor impairment face many challenges in communication and control of the environment, whilst survivors of neurological disorders have an increased demand for advanced, adaptive and personalized rehabilitation. Over the last decades, many studies have underlined the importance of brain-computer interfaces (BCIs), with great contributions ranging from communication restoration to motor rehabilitation. In this work we review BCI research that focuses on noninvasive, electroencephalography (EEG)-based BCI systems for people with motor impairment, as far as communication and rehabilitation aspects are concerned. More specifically, we overview milestone approaches that are primarily intended to help severely paralyzed and/or locked-in state patients by using three different BCI modalities, i.e., slow cortical potentials, sensorimotor rhythms and P300 potentials, as operational mechanisms. In addition, we review BCI systems with special emphasis on restoration of motor function for patients with spinal cord injury and chronic stroke. Finally, we summarize how EEG-based BCI systems have contributed to the communication and rehabilitation of motor-impaired people, point out advantages and limitations, and discuss the challenges that these systems should address in the future.
32.

Raphael Menges, Hanadi Tamimi, Chandan Kumar, Tina Walber, Christoph Schaefer, Steffen Staab, Enhanced Representation of Web Pages for Usability Analysis with Eye Tracking, ACM Symposium on Eye Tracking Research and Applications (ETRA 18)

Abstract Eye tracking as a tool to quantify user attention plays a major role in research and application design. For Web page usability, it has become a prominent measure to assess which sections of a Web page are read, glanced at or skipped. Such assessments primarily depend on the mapping of gaze data to a Web page representation. However, current representation methods, a virtual screenshot of the Web page or a video recording of the complete interaction session, suffer either from accuracy or scalability issues. We present a method that identifies fixed elements on Web pages and combines user viewport screenshots in relation to fixed elements for an enhanced representation of the page. We conducted an experiment with 10 participants and the results signify that analysis with our method is more efficient than a video recording, which is an essential criterion for large-scale Web studies.
33.

Korok Sengupta, Min Ke, Raphael Menges, Chandan Kumar, Steffen Staab, Hands-Free Web Browsing: Enriching the User Experience with Gaze and Voice Modality, ACM Symposium on Eye Tracking Research and Applications (ETRA 18)

Abstract Hands-free browsers provide an effective tool for Web interaction and accessibility, overcoming the need for conventional input sources. Current approaches to hands-free interaction fall primarily into either voice- or gaze-based modalities. In this work, we investigate how these two modalities could be integrated to provide a better hands-free experience for end-users. We demonstrate a multimodal browsing approach combining eye gaze and voice inputs for optimized interaction, and to accommodate user preferences with unimodal benefits. The initial assessment with five participants indicates improved performance for the multimodal prototype in comparison to single modalities for hands-free Web browsing.
34.

Fountoukidou S., Ham J., Matzat U., Midden C., Using an Artificial Agent as a Behavior Model to Promote Assistive Technology Acceptance, In: Ham J., Karapanos E., Morita P., Burns C. (eds) Persuasive Technology. PERSUASIVE 2018. Lecture Notes in Computer Science, vol 10809. Springer, Cham

Abstract Despite technological advancements in assistive technologies, studies show high rates of non-use. Because of the rising numbers of people with disabilities, it is important to develop strategies to increase assistive technology acceptance. The current research investigated the use of an artificial agent (embedded into a system) as a persuasive behavior model to influence individuals’ technology acceptance beliefs. Specifically, we examined the effect of agent-delivered behavior modeling vs. two non-modeling instructional methods (agent-delivered instructional narration and no agent, text-only instruction) on individuals’ computer self-efficacy and perceived ease of use of an assistive technology. Overall, the results of the study confirmed our hypotheses, showing that the use of an artificial agent as a behavioral model leads to increased computer self-efficacy and perceived ease of use of a system. The implications for the inclusion of an artificial agent as a model in promoting technology acceptance are discussed.
35.

Vangelis P. Oikonomou, Spiros Nikolopoulos, Panagiotis Petrantonakis, Ioannis Kompatsiaris, Sparse Kernel Machines for motor imagery EEG classification, Proceedings of the 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC'18), Honolulu, HI, USA, July 17-21, 2018

Abstract Brain-computer interfaces (BCIs) make human-computer interaction more natural, especially for people with neuro-muscular disabilities. Among various data acquisition modalities, electroencephalograms (EEG) occupy the most prominent place due to their non-invasiveness. In this work, a method based on sparse kernel machines is proposed for the classification of motor imagery (MI) EEG data. More specifically, a new sparse prior is proposed for the selection of the most important information, and the estimation of model parameters is performed using the Bayesian framework. The experimental results obtained on a benchmark EEG dataset for MI have shown that the proposed method compares favorably with state-of-the-art approaches in the BCI literature.
36.

Panagiotis C. Petrantonakis and Ioannis Kompatsiaris, Detection of Mental Task Related Activity in NIRS-BCI systems Using Dirichlet Energy over Graphs, Proceedings of the 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC'18), Honolulu, HI, USA, July 17-21, 2018

Abstract Near Infrared Spectroscopy (NIRS)-based Brain Computer Interfaces (NIRS-BCI) rely mainly on the mean concentration changes and slope of the hemodynamic responses in separate recording channels to detect mental-task related brain activity. Nevertheless, spatial patterns across the measurement channels are also present and should be taken into account for reliable detection. In this work, the Dirichlet energy of NIRS signals over a graph is considered for the definition of a measure that takes into account the spatial NIRS features and integrates the activity of multiple NIRS channels for robust detection of mental-task related activity. The application of the proposed measure on a real NIRS dataset demonstrates its efficiency.
37.

P. C. Petrantonakis and I. Kompatsiaris, Single-trial NIRS data classification for brain-computer interfaces using graph signal processing, IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 26, no. 9, pp. 1700-1709, Sept. 2018. doi: 10.1109/TNSRE.2018.2860629

Abstract Near Infrared Spectroscopy (NIRS)-based Brain Computer Interface (BCI) systems use feature extraction methods relying mainly on the slope characteristics and mean changes of the hemodynamic responses with respect to certain mental tasks. Nevertheless, spatial patterns across the measurement channels have been detected and should be considered during the feature vector extraction stage of the BCI realization. In this work a Graph Signal Processing (GSP) approach for feature extraction is adopted in order to capture the aforementioned spatial information of the NIRS signals. The proposed GSP-based methodology for feature extraction in NIRS-based BCI systems, namely GNIRS, is applied on a publicly available dataset of NIRS recordings during a mental arithmetic task. GNIRS exhibits higher classification rates, up to 92.52%, as compared to the classification rates of two state-of-the-art feature extraction methodologies related to slope and mean values of the hemodynamic response, i.e., 90.35% and 82.60%, respectively. In addition, GNIRS leads to the formation of feature vectors with reduced dimensionality in comparison with the baseline approaches. Moreover, it is shown to facilitate high classification rates even from the first second after the onset of the mental task, paving the way for faster NIRS-based BCI systems.
38.

Fotis Kalaganis, Elisavet Chatzilari, Spiros Nikolopoulos, Ioannis Kompatsiaris, and Nikos Laskaris, An error-aware gaze-based keyboard by means of a hybrid BCI system, Scientific Reports 8, Article number: 13176 (2018)

Abstract Gaze-based keyboards offer a flexible way for human-computer interaction in both disabled and able-bodied people. Besides their convenience, they still lead to error-prone human-computer interaction. Eye tracking devices may misinterpret user’s gaze resulting in typesetting errors, especially when operated in fast mode. As a potential remedy, we present a novel error detection system that aggregates the decision from two distinct subsystems, with each one dealing with disparate data streams. The first subsystem operates on gaze-related measurements and exploits the eye-transition pattern to flag a typo. The second one is a brain-computer interface that utilizes a neural response, known as Error-Related Potentials (ErrP), which is inherently generated when the subject observes an erroneous action. By means of a suitable experimental set-up, we first demonstrate that ErrP-based Brain Computer Interfaces can be indeed useful in the context of gaze-based typesetting, despite the putative contamination of EEG activity from the eye-movement artefact. Then, we show that the performance of this subsystem can be further improved by considering also the error detection from the gaze-related subsystem. Finally, the proposed bimodal error detection system is shown to significantly reduce the typesetting time in a gaze-based keyboard.
39.

Vangelis Oikonomou, Spiros Nikolopoulos and Ioannis Kompatsiaris, A Bayesian Multiple Kernel Learning Algorithm for SSVEP BCI detection, IEEE Journal of Biomedical and Health Informatics, doi: 10.1109/JBHI.2018.2878048

Abstract Our work deals with the classification of Steady State Visual Evoked Potentials (SSVEP), which is a multiclass classification problem that arises frequently in the field of Brain Computer Interfaces (BCI). In particular, our method, named MultiLRM, uses multiple linear regression models under a Sparse Bayesian Learning (SBL) framework to discriminate between the SSVEP classes. The regression coefficients of each model are learned using the Variational Bayesian Framework and the kernel trick is adopted not only for reducing the computational cost of our method, but also for enabling the combination of different kernel spaces. We verify the ability of our method to handle different kernel spaces by evaluating its performance with a new kernel based on Canonical Correlation Analysis (CCA) and prove the benefit of combining multiple kernels by outperforming several state-of-the-art methods in two SSVEP datasets.
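The CCA-based kernel mentioned in this abstract builds on the standard use of Canonical Correlation Analysis for SSVEP detection: an EEG segment is compared against sinusoidal reference templates at each candidate stimulation frequency, and the frequency yielding the largest canonical correlation is selected. The sketch below illustrates that common baseline only, not the MultiLRM method itself; the function names and the synthetic demo are illustrative assumptions.

```python
import numpy as np

def max_canonical_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y,
    computed via QR orthonormalization followed by an SVD."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    qx, _ = np.linalg.qr(X)
    qy, _ = np.linalg.qr(Y)
    return np.linalg.svd(qx.T @ qy, compute_uv=False)[0]

def ssvep_detect(eeg, fs, freqs, n_harmonics=2):
    """Return the stimulation frequency whose sin/cos reference set
    correlates best with the multichannel EEG segment."""
    t = np.arange(eeg.shape[0]) / fs
    scores = {}
    for f in freqs:
        # Reference matrix: sin/cos pairs at the fundamental and harmonics.
        ref = np.column_stack(
            [fn(2 * np.pi * h * f * t)
             for h in range(1, n_harmonics + 1)
             for fn in (np.sin, np.cos)])
        scores[f] = max_canonical_corr(eeg, ref)
    return max(scores, key=scores.get)

# Synthetic 2-channel segment driven by a 12 Hz flicker plus noise.
fs = 256
t = np.arange(int(fs * 2.0)) / fs
rng = np.random.default_rng(1)
eeg = np.column_stack([
    np.sin(2 * np.pi * 12 * t) + 0.5 * rng.standard_normal(t.size),
    np.cos(2 * np.pi * 12 * t) + 0.5 * rng.standard_normal(t.size)])
detected = ssvep_detect(eeg, fs, [10.0, 12.0, 15.0])
```

In this toy setting the detector recovers the driving frequency from the candidate set; the paper's contribution is to embed such kernels in a sparse Bayesian multiple-kernel framework rather than to use the raw correlation directly.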
40.

Konstantinos Georgiadis, Nikos Laskaris, Spiros Nikolopoulos and Ioannis Kompatsiaris, Exploiting the heightened phase synchrony in patients with neuromuscular disease for the establishment of efficient motor imagery BCIs, Journal of NeuroEngineering and Rehabilitation, DOI: https://doi.org/10.1186/s12984-018-0431-6

Abstract Background: Phase synchrony has extensively been studied for understanding neural coordination in health and disease. There are a few studies concerning the implications in the context of BCIs, but its potential for establishing a communication channel in patients suffering from neuromuscular disorders remains totally unexplored. We investigate, here, this possibility by estimating the time-resolved phase connectivity patterns induced during a motor imagery (MI) task and adopting a supervised learning scheme to recover the subject's intention from the streaming data. Methods: Electroencephalographic activity from six patients suffering from neuromuscular disease (NMD) and six healthy individuals was recorded during two randomly alternating, externally cued, MI tasks (clenching either the left or right fist) and a rest condition. The metric of phase locking value (PLV) was used to describe the functional coupling between all recording sites. The functional connectivity patterns and the associated network organization were first compared between the two cohorts. Next, working at the level of individual patients, we trained support vector machines (SVMs) to discriminate between "left" and "right" based on different instantiations of connectivity patterns (depending on the encountered brain rhythm and the temporal interval). Finally, we designed and realized a novel brain decoding scheme that could interpret the intention from streaming connectivity patterns, based on an ensemble of SVMs. Results: The group-level analysis revealed increased phase synchrony and richer network organization in patients. This trend was also seen in the performance of the employed classifiers. Time-resolved connectivity led to superior performance, with distinct SVMs acting as local experts, specialized in the patterning emerging within specific temporal windows (defined with respect to the external trigger). This empirical finding was further exploited in implementing a decoding scheme that can be activated without the need for the precise timing of a trigger. Conclusion: The increased phase synchrony in NMD patients can turn into a valuable tool for MI decoding. Considering the fast implementation of the PLV pattern computation in multichannel signals, we can envision the development of efficient personalized BCI systems in assistance of these patients.
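The phase locking value (PLV) used above as the coupling metric has a compact definition: the magnitude of the time-averaged unit phasor of the instantaneous phase difference between two channels (0 = no locking, 1 = perfect locking). A minimal sketch of that estimator, using the Hilbert transform to extract instantaneous phases; this is an illustrative assumption of the standard computation, not the paper's full pipeline:

```python
import numpy as np
from scipy.signal import hilbert

def plv(x, y):
    """Phase locking value between two equal-length 1-D signals.

    Phases come from the analytic signal; the PLV is the resultant
    length of the phase-difference phasors over time.
    """
    phi_x = np.angle(hilbert(x))
    phi_y = np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * (phi_x - phi_y))))

# Two noisy 10 Hz oscillations with a fixed phase lag lock strongly.
fs = 256
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
y = np.sin(2 * np.pi * 10 * t + 0.8) + 0.1 * rng.standard_normal(t.size)
```

Note that a constant phase lag still yields a PLV near 1; the metric measures consistency of the phase relation, not zero lag, which is why volume-conduction effects are a common caveat when interpreting PLV-based connectivity.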
41.

Ghazali, A., Ham, J., Barakova, E., & Markopoulos, P., The Influence of Social Cues in Persuasive Social Robots on Psychological Reactance and Compliance, Computers in Human Behavior, 87, 58-65, 2018

Abstract People can react negatively to persuasive attempts experiencing reactance, which gives rise to negative feelings and thoughts and may reduce compliance. This research examines social responses towards persuasive social agents. We present a laboratory experiment which assessed reactance and compliance to persuasive attempts delivered by an artificial (non-robotic) social agent, a social robot with minimal social cues (human-like face with speech output and blinking eyes), and a social robot with enhanced social cues (human-like face with head movement, facial expression, affective intonation of speech output). Our results suggest that a social robot presenting more social cues will cause higher reactance and this effect is stronger when the user feels involved in the task at hand.
42.

Ghazali, A., Ham, J., Barakova, E., & Markopoulos, P., Effects of robot facial characteristics and gender in persuasive human-robot interaction, Frontiers in Robotics and AI. doi: 10.3389/frobt.2018.00073, 2018.

Abstract The growing interest in social robotics makes it relevant to examine the potential of robots as persuasive agents and, more specifically, to examine how robot characteristics influence the way people experience such interactions and comply with the persuasive attempts by robots. The purpose of this research is to identify how the (ostensible) gender and the facial characteristics of a robot influence the extent to which people trust it and the psychological reactance they experience from its persuasive attempts. This paper reports a laboratory study where SociBot™, a robot capable of displaying different faces and dynamic social cues, delivered persuasive messages to participants while playing a game. In-game choice behaviour was logged, and trust and reactance towards the advisor were measured using questionnaires. Results show that a robotic advisor with upturned eyebrows and lips (features that people tend to trust more in humans) is more persuasive, evokes more trust and less psychological reactance compared to one displaying eyebrows pointing down and lips curled downwards at the edges (facial characteristics typically not trusted in humans). Gender of the robot did not affect trust, but participants experienced higher psychological reactance when interacting with a robot of the opposite gender. Remarkably, mediation analysis showed that liking of the robot fully mediates the influence of facial characteristics on trusting beliefs and psychological reactance. Also, psychological reactance was a strong and reliable predictor of trusting beliefs but not of trusting behavior. These results suggest that robots which are intended to influence human behaviour should be designed to have facial characteristics we trust in humans and could be personalized to have the same gender as the user. Furthermore, personalization and adaptation techniques designed to make people like the robot more may help ensure they will also trust the robot.
43.

Lazarou Ioulietta, Adam Katerina, Georgiadis Kostas, Tsolaki Anthoula, Nikolopoulos Spiros, Kompatsiaris (Yiannis) Ioannis and Tsolaki Magda, Can a Novel High-Density EEG Approach Disentangle the Differences of Visual Event Related Potential (N170), Elicited by Negative Facial Stimuli, in People with Subjective Cognitive Impairment?, Journal of Alzheimer's Disease, vol. 65, no. 2, pp. 543-575, 2018

Abstract Background: Studies on Subjective Cognitive Impairment (SCI) and neural activation report controversial results. Objective: To evaluate the ability to disentangle the differences of visual ERPs generated by facial stimuli (Anger & Fear), as well as the cognitive deterioration of Subjective Cognitive Impairment (SCI), Mild Cognitive Impairment (MCI) and Alzheimer’s Disease (AD) compared to Healthy Controls (HC), as measured by the N170 event-related potential (ERP) component. Method: Fifty-seven participants (N = 57) took part in this study. Images corresponding to two negative facial stimuli, “Anger” and “Fear”, were presented to 12 Healthy Controls (HC), 14 participants with SCI, 17 with MCI and 14 with AD. EEG data were recorded using an HD-EEG HydroCel with 256 channels. Results: Results showed that the amplitude of N170 can contribute to distinguishing the SCI group, since statistically significant differences were observed with the HC (p < 0.05), the MCI group from HC (p < 0.001), as well as AD from HC (p = 0.05) during the processing of facial stimuli. Despite the small sample size and the within-group variability observed for each cohort, there is clear evidence of the potential of using the amplitude of N170 to create a biomarker distinguishing SCI from the groups of MCI, AD and HC. As expected, in the case of the AD group the amplitude of the negative N170 peaks was far larger than all the others; however, this was also the case with the variability observed in this group. Moreover, positive correlations were observed between specific neuropsychological tests and the amplitude of N170 in the case of “Fear”. Noticeable differences were also observed in the topographic distribution of the N170 amplitude, while localization analyses using sLORETA images confirmed the activation of superior, middle-temporal and frontal lobe brain regions. Finally, in the case of “Fear”, SCI and HC demonstrated increased activation in the Orbital and Inferior Frontal Gyrus respectively, MCI in the Inferior Temporal Gyrus and AD in the Lingual Gyrus. Conclusion: These preliminary findings suggest that the amplitude of N170 elicited after negative facial stimuli could be modulated by the decline related to pathological cognitive aging and can contribute to distinguishing HC from SCI, MCI and AD.

Publications post project completion

44.

Sevasti Bostantjopoulou-Kampouroglou, Zoe Katsarou, Meir Plotnik, Gabi Zeilig, Ioannis Daglis, George Liaros, Fotios Kalaganis, Konstantinos Georgiadis, Yiannis Kompatsiaris, Spiros Nikolopoulos, Parkinsonian patients’ experiences operating the computer with their eyes: the MAMEM project, Neurology, Apr 2019, 92 (15 Supplement), P5.8-043

Abstract The purpose of this study, conducted within the EU Horizon 2020 MAMEM project (Multimedia Authoring and Management using your Eyes and Mind), is to provide Parkinson’s disease (PD) patients with innovative technology that will enable them to make better use of computers through mental commands and gaze activity. Here we report the patients’ perspective from their attempts to operate the computer by using their eyes only.
45.

F. Kalaganis, N. Laskaris, E. Chatzilari, S. Nikolopoulos, I. Kompatsiaris, A Riemannian geometry approach to reduced and discriminative covariance estimation in Brain Computer Interfaces, IEEE Trans Biomed Eng. 2019 Apr 18

Abstract OBJECTIVE: Spatial covariance matrices are extensively employed as brain activity descriptors in brain computer interface (BCI) research that, typically, involve the whole array of sensors. Here, we introduce a methodological framework for delineating the subset of sensors, the covariance structure of which offers a reduced, but more powerful, representation of brain's coordination patterns that ultimately leads to reliable mind reading. METHODS: Adopting a Riemannian geometry approach, we cast the problem of sensor selection as the maximization of a functional that is computed over the manifold of symmetric positive definite (SPD) matrices and encapsulates class separability in a way that facilitates the search among subsets of different size. The introduced optimization task, namely discriminative covariance reduction (DCR), lacks an analytical solution and is tackled via the cross-entropy optimization technique. RESULTS: Based on two different EEG datasets and three distinct classification schemes, we demonstrate that the DCR approach provides a noteworthy gain in terms of accuracy (in some cases exceeding 20%) and a remarkable reduction in classification time (on average 82%). Additionally, results include the intriguing empirical finding that the pattern of selected sensors in the case of disabled persons depends on the type of disability. CONCLUSION: The proposed DCR framework can speed up the classification time in BCI-systems operating on the SPD manifolds by simultaneously enhancing their reliability. This is achieved without sacrificing the neuroscientific interpretability endowed in the topographical arrangement of the selected sensors. SIGNIFICANCE: Riemannian geometry is exploited for DCR in BCI systems, in a dimensionality-agnostic manner, guaranteeing improved performance.
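The SPD-manifold setting underlying DCR rests on a Riemannian metric between covariance matrices; the affine-invariant distance is the standard choice in this literature. A minimal sketch of that distance follows; it is illustrative only (the DCR functional and the cross-entropy search over sensor subsets are not reproduced here, and the function name is an assumption):

```python
import numpy as np
from scipy.linalg import fractional_matrix_power, logm

def airm_distance(A, B):
    """Affine-invariant Riemannian distance between SPD matrices:
    the Frobenius norm of the matrix logarithm of A^{-1/2} B A^{-1/2}."""
    A_inv_sqrt = fractional_matrix_power(A, -0.5)
    M = A_inv_sqrt @ B @ A_inv_sqrt
    return np.linalg.norm(logm(M), 'fro')

# Scaling the identity by c moves it a distance sqrt(n) * |log c|.
d = airm_distance(np.eye(2), 4 * np.eye(2))
```

Because the distance is invariant under congruence transforms (d(A, B) = d(MAMᵀ, MBMᵀ) for invertible M), classifiers built on it are insensitive to linear re-mixing of the sensors, which is one reason SPD-manifold methods are popular for EEG covariance descriptors.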
46.

K. Georgiadis, N. Laskaris, S. Nikolopoulos and I. Kompatsiaris, Connectivity steered graph Fourier transform for motor imagery BCI decoding, Journal of Neural Engineering, Volume 16, Number 5, 21 August 2019

Abstract Objective. Graph signal processing (GSP) concepts are exploited for brain activity decoding and particularly the detection and recognition of a motor imagery (MI) movement. A novel signal analytic technique that combines graph Fourier transform (GFT) with estimates of cross-frequency coupling (CFC) and discriminative learning is introduced as a means to recover the subject's intention from the multichannel signal. Approach. Adopting a multi-view perspective, based on the popular concept of co-existing and interacting brain rhythms, a multilayer network model is first built from empirical data and its connectivity graph is used to derive the GFT-basis. A personalized decoding scheme supporting a binary decision, either 'left versus right' or 'rest versus MI', is crafted from a small set of training trials. Electroencephalographic (EEG) activity from 12 volunteers recorded during two randomly alternating, externally cued, MI tasks (clenching either left or right fist) and a rest condition is used to introduce and validate our methodology. In addition, the introduced methodology was further validated based on dataset IVa of BCI III competition. Main results. Our GFT-domain decoding scheme achieves nearly optimal performance and proves superior to alternative techniques that are very popular in the field. Significance. At a conceptual level, our work suggests a fruitful way to introduce network neuroscience in BCI research. At a more practical level, it is characterized by efficiency. Training is realized using a small number of exemplar trials and decoding requires very simple operations, leaving room for real-time implementation.
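The graph Fourier transform at the core of this decoding scheme amounts to projecting a multichannel signal onto the eigenvectors of a graph Laplacian derived from the connectivity estimates. A toy sketch on a 4-node path graph; this is illustrative only (the paper's multilayer CFC-derived graph is not reproduced):

```python
import numpy as np

def graph_fourier_basis(W):
    """Eigen-decomposition of the combinatorial Laplacian L = D - W.
    Eigenvectors ordered by increasing eigenvalue play the role of
    graph 'frequencies'; the GFT of a signal x is U.T @ x."""
    L = np.diag(W.sum(axis=1)) - W
    eigvals, U = np.linalg.eigh(L)  # eigh returns ascending eigenvalues
    return eigvals, U

# Tiny 4-node path graph: 0-1-2-3.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
eigvals, U = graph_fourier_basis(W)

# A constant signal is maximally 'smooth' on the graph, so its GFT
# energy sits entirely in the zero-frequency (constant) eigenvector.
x = np.ones(4)
x_hat = U.T @ x
```

Signals varying slowly over strongly connected channels concentrate in the low-eigenvalue coefficients, which is what makes truncated GFT coefficients a compact feature vector for discriminative learning.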
47.

I. Lazarou, S. Nikolopoulos, S. I. Dimitriadis, I. Kompatsiaris, M. Spilioti, M. Tsolaki, Is Brain Connectome Research the Future Frontier for Subjective Cognitive Decline? A Systematic Review, Clinical Neurophysiology, 2019, ISSN 1388-2457

Abstract Objective We performed a systematic literature review on Subjective Cognitive Decline (SCD) in order to examine whether the resemblance of brain connectome and functional connectivity (FC) alterations in SCD with respect to MCI, AD and HC can help us draw conclusions on the progression of SCD to more advanced stages of dementia. Methods We searched for studies that used any neuroimaging tool to investigate potential differences/similarities of brain connectome in SCD with respect to HC, MCI, and AD. Results Sixteen studies were finally included in the review. Apparent FC connections and disruptions were observed in the white matter, default mode and gray matter networks in SCD with regards to HC, MCI, and AD. Interestingly, more apparent connections in SCD were located over the posterior regions, while an increase of FC over anterior regions was observed as the disease progressed. Conclusions Elders with SCD display a significant disruption of the brain network, which in most of the cases is worse than HC across multiple network parameters. Significance The present review provides comprehensive and balanced coverage of a timely target research activity around SCD with the intention to identify similarities/differences across patient groups on the basis of brain connectome properties.
48.

F. Kalaganis, N. Laskaris, E. Chatzilari, D. A. Adamos, S. Nikolopoulos, & Y. Kompatsiaris, A complex-valued functional brain connectivity descriptor amenable to Riemannian geometry, Journal of Neural Engineering, 2020 (in press)

Abstract OBJECTIVE: We introduce a novel, phase-based, functional connectivity descriptor that encapsulates not only the synchronization strength between distinct brain regions, but also the time-lag between the involved neural oscillations. The new estimator employs complex-valued measurements and results in a brain network sketch that lives on the smooth manifold of Hermitian Positive Definite (HPD) matrices. APPROACH: Leveraging the HPD property of the proposed descriptor, we adapt a recently introduced dimensionality reduction methodology that is based on Riemannian Geometry and discriminatively detects the recording sites which best reflect the differences in network organization between contrasting recording conditions in order to overcome the problem of high-dimensionality, usually encountered in the connectivity patterns derived from multisite encephalographic recordings. MAIN RESULTS: The proposed framework is validated using an EEG dataset that refers to the challenging problem of differentiating between attentive and passive visual responses. We provide evidence that the reduced connectivity representation facilitates high classification performance and caters for neuroscientific explorations. SIGNIFICANCE: Our paper is the very first that introduces an advanced connectivity descriptor that can take advantage of Riemannian geometry tools. The proposed descriptor, that inherently and simultaneously captures both the strength and the corresponding time-lag of the phase synchronization, is the first phase-based descriptor tailored to leverage the benefits of Riemannian geometry.
49.

V. P. Oikonomou, S. Nikolopoulos and I. Kompatsiaris, A Novel Compressive Sensing Scheme Under the Variational Bayesian Framework, 27th European Signal Processing Conference (EUSIPCO), A Coruna, Spain, September 2-6, 2017

Abstract In this work we provide a novel algorithm for Bayesian Compressive Sensing. The proposed algorithm is considered for signals that feature two properties: grouping structure and sparsity between groups. The Compressive Sensing problem is formulated using the Bayesian linear model. Furthermore, the sparsity of the unknown signal is modeled by a parameterized sparse prior while the inference procedure is conducted using the Variational Bayesian framework. Experimental results, using 1D and 2D signals, demonstrate that the proposed algorithm provides superior performance compared to state-of-the-art Compressive Sensing reconstruction algorithms.
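The compressive-sensing setup behind this work recovers a sparse signal x from far fewer measurements y = Φx than unknowns. As a small illustration of that setup only, the sketch below uses greedy orthogonal matching pursuit as a stand-in for the paper's Variational Bayesian estimator; matrix sizes, the sparsity pattern, and all names are illustrative.

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedily build the support of a
    k-sparse x with y ≈ Phi @ x, refitting by least squares at every
    step. A simple stand-in for more elaborate Bayesian CS solvers."""
    residual, support = y.astype(float).copy(), []
    for _ in range(k):
        # Pick the column most correlated with the current residual
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x

# 30 random measurements of a 3-sparse, 60-dimensional signal
rng = np.random.default_rng(1)
Phi = rng.standard_normal((30, 60)) / np.sqrt(30.0)
x_true = np.zeros(60)
x_true[[5, 17, 42]] = [1.0, -2.0, 1.5]
x_hat = omp(Phi, Phi @ x_true, k=3)
```

With a random Gaussian measurement matrix and sparsity this low relative to the number of measurements, greedy recovery is reliable; the cited paper's contribution is exploiting group structure within such sparse signals.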
50.

V. P. Oikonomou, S. Nikolopoulos and I. Kompatsiaris, Discrimination of SSVEP responses using a kernel based approach, 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Berlin, Germany, 2019, pp. 762-766

Abstract Brain Computer Interfaces based on Steady State Visual Evoked Potentials have gained increased attention due to their low training requirements and higher information transfer rates. In this work, a method based on sparse kernel machines is proposed for the discrimination of Steady State Visual Evoked Potentials responses. More specifically, a new kernel based on Partial Least Squares is introduced to describe the similarities between EEG trials, while the estimation of regression weights is performed using the Sparse Bayesian Learning framework. The experimental results obtained on two benchmarking datasets, have shown that the proposed method provides significantly better performance compared to state of the art approaches of the related literature.
51.

K. Georgiadis, N. Laskaris, S. Nikolopoulos, D. Α. Adamos and I. Kompatsiaris, Using Discriminative Lasso to Detect a Graph Fourier Transform (GFT) Subspace for robust decoding in Motor Imagery BCI, 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Berlin, Germany, 2019, pp. 6167-6171

Abstract A novel decoding scheme for motor imagery (MI) brain computer interfaces (BCIs) is introduced based on the GFT concept. It considers the recorded EEG activity as a signal defined over (the graph of) the sensor array. A graph encapsulating the functional covariations emerging during the execution of a specific imagined movement is first defined, from a small training set of relevant trials. The ensemble of graph signals corresponding to a multi-trial training dataset is then analyzed using a graph-guided decomposition and, based on discriminative Lasso (dLasso), an information-rich GFT subspace is defined. After training, only simple matrix operations are required for transforming the multichannel signal into features to be fed into a classifier that decides whether brain activity conforms with the graph structure associated with the targeted movement. The proposed decoding scheme is evaluated based on two different datasets and found to compare favorably against popular alternatives in the field.
52.

S. Bostantzopoulou-Kampouroglou, Zoe Katsarou, Meir Plotnik, Gabi Zeilig, Ioannis Daglis, G. Liaros, F. Kalaganis, K. Georgiadis, Y. Kompatsiaris, S. Nikolopoulos, Parkinsonian patients' experiences operating the computer with their eyes: the MAMEM project, Neurology Apr 2019, 92 (15 Supplement) P5.8-043

Abstract Objective: Within the framework of the EU Horizon 2020 MAMEM project (Multimedia Authoring and Management using your Eyes and Mind), we developed for Parkinson's disease (PD) patients an innovative technology that enables them to make better use of computers, through mental commands and gaze activity. The objective of the present report is to provide the patients' subjective impressions about their attempts to operate the computer by using their eyes only. Background: The use of computers and information technologies is essential for social participation and a productive life. Although PD patients consider computer use an important part of their everyday life, they face many operational difficulties and significant obstacles in computer operation due to the motor symptoms of the disease. Design/Methods: Ten PD patients participated in the study (mean age: 55.6±7.3; mean Hoehn & Yahr stage: 2.1±0.3). Patients were provided with the MAMEM platform to use at their homes for one month. The apparatus included a standard laptop computer with GazeTheWeb, i.e., the tool developed within the MAMEM platform that enables surfing the internet with the use of the eyes, installed on it, together with an eye tracking system. Patient satisfaction was assessed by the SUS (System Usability Scale) and the QUEST 2.0 scale (Quebec User Evaluation of Satisfaction with assistive Technology). Results: The mean SUS score given to the MAMEM platform was 75.5±13 (a SUS score over 68 is considered above average), and the mean score for the QUEST 2.0 was 4.2±0.5 (a QUEST 2.0 score of 5 indicates highest satisfaction). Conclusions: Our results show that the MAMEM platform is perceived by PD patients as a useful, usable and satisfactory assistive device that enables computer usage and digital social activities. Further research is needed to realize how to utilize eye movements for functional compensation of motor disabilities in PD.
53.

Sengupta, Korok, Raphael Menges, Chandan Kumar, and Steffen Staab, Impact of variable positioning of text prediction in gaze-based text entry, In Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications, pp. 1-9. 2019.

Abstract Text predictions play an important role in improving the performance of gaze-based text entry systems. However, visual search, scanning, and selection of text predictions require a shift in the user's attention from the keyboard layout. Hence the spatial positioning of predictions becomes an imperative aspect of the end-user experience. In this work, we investigate the role of spatial positioning by comparing the performance of three different keyboards entailing variable positions for text predictions. The experimental results show no significant differences in text entry performance, i.e., displaying suggestions closer to the visual fovea did not enhance the text entry rate of participants; however, they used more keystrokes and backspaces. This points to inessential usage of suggestions when they are in the constant visual attention of users, resulting in an increased cost of correction. Furthermore, we argue that fast saccadic eye movements undermine the spatial distance optimization in prediction positioning.
54.

Kumar, Chandan, Daniyal Akbari, Raphael Menges, Scott MacKenzie, and Steffen Staab, TouchGazePath: Multimodal Interaction with Touch and Gaze Path for Secure Yet Efficient PIN Entry, In 2019 International Conference on Multimodal Interaction, pp. 329-338. 2019.

Abstract We present TouchGazePath, a multimodal method for entering personal identification numbers (PINs). Using a touch-sensitive display showing a virtual keypad, the user initiates input with a touch at any location, glances with their eye gaze on the keys bearing the PIN numbers, then terminates input by lifting their finger. TouchGazePath is not susceptible to security attacks, such as shoulder surfing, thermal attacks, or smudge attacks. In a user study with 18 participants, TouchGazePath was compared with the traditional Touch-Only method and the multimodal Touch+Gaze method, the latter using eye gaze for targeting and touch for selection. The average time to enter a PIN with TouchGazePath was 3.3 s. This was not as fast as Touch-Only (as expected), but was about twice as fast as Touch+Gaze. TouchGazePath was also more accurate than Touch+Gaze. TouchGazePath had high user ratings as a secure PIN input method and was the preferred PIN input method for 11 of 18 participants.
55.

Chandan Kumar, Ramin Hedeshy, Scott MacKenzie, and Steffen Staab, TAGSwipe: Touch Assisted Gaze Swipe for Text Entry, Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. ACM. (CHI 20).

Abstract The conventional dwell-based methods for text entry by gaze are typically slow and uncomfortable. A swipe-based method that maps gaze path into words offers an alternative. However, it requires the user to explicitly indicate the beginning and ending of a word, which is typically achieved by tedious gaze-only selection. This paper introduces TAGSwipe, a bi-modal method that combines the simplicity of touch with the speed of gaze for swiping through a word. The result is an efficient and comfortable dwell-free text entry method. In the lab study TAGSwipe achieved an average text entry rate of 15.46 wpm and significantly outperformed conventional swipe-based and dwell-based methods in efficacy and user satisfaction.
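The entry rates reported in the text-entry papers above are expressed in words per minute (wpm), conventionally defined as five characters per word. A tiny helper (the function name is illustrative) makes the arithmetic explicit, using the common formulation that counts |T| − 1 characters because timing starts at the first keystroke:

```python
def words_per_minute(transcribed: str, seconds: float) -> float:
    """Text-entry rate in wpm: (|T| - 1) characters entered over
    `seconds`, with the conventional 5 characters per word."""
    return (len(transcribed) - 1) * 60.0 / (seconds * 5.0)

# 51 characters transcribed in one minute -> 10 wpm
rate = words_per_minute("a" * 51, 60.0)
```

At TAGSwipe's reported 15.46 wpm, a user enters roughly 77 characters per minute under this convention.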
56.

Menges, Raphael, Chandan Kumar, and Steffen Staab, Improving user experience of eye tracking-based interaction: Introspecting and adapting interfaces, ACM Transactions on Computer-Human Interaction (TOCHI) 26, no. 6 (2019): 1-46.

Abstract Eye tracking systems have greatly improved in recent years, being a viable and affordable option as digital communication channel, especially for people lacking fine motor skills. Using eye tracking as an input method is challenging due to accuracy and ambiguity issues, and therefore research in eye gaze interaction is mainly focused on better pointing and typing methods. However, these methods eventually need to be assimilated to enable users to control application interfaces. A common approach to employ eye tracking for controlling application interfaces is to emulate mouse and keyboard functionality. We argue that the emulation approach incurs unnecessary interaction and visual overhead for users, aggravating the entire experience of gaze-based computer access. We discuss how knowledge about the interface semantics can help reduce the interaction and visual overhead to improve the user experience. Thus, we propose the efficient introspection of interfaces to retrieve the interface semantics and adapt the interaction with eye gaze. We have developed a Web browser, GazeTheWeb, that introspects Web page interfaces and adapts both the browser interface and the interaction elements on Web pages for gaze input. In a summative lab study with 20 participants, GazeTheWeb allowed the participants to accomplish information search and browsing tasks significantly faster than an emulation approach. Additional feasibility tests of GazeTheWeb in lab and home environments showcase its effectiveness in accomplishing daily Web browsing activities and adapting a large variety of modern Web pages to support interaction for people with motor impairment.
57.

Sengupta, K., Bhattarai, S., Sarcar, S., MacKenzie, I. S., & Staab, S., Leveraging Error Correction in Voice-based Text Entry by Talk-and-Gaze, Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI 20).

Abstract We present the design and evaluation of Talk-and-Gaze (TaG), a method for selecting and correcting errors with voice and gaze. TaG uses eye gaze to overcome the inability of voice-only systems to provide spatial information. The user's point of gaze is used to select an erroneous word either by dwelling on the word for 800 ms (D-TaG) or by uttering a "select" voice command (V-TaG). A user study with 12 participants compared D-TaG, V-TaG, and a voice-only method for selecting and correcting words. Corrections were performed more than 20% faster with D-TaG compared to the V-TaG or voice-only methods. As well, D-TaG was observed to require 24% less selection effort than V-TaG and 11% less selection effort than voice-only error correction. D-TaG was well received in a subjective assessment with 66% of users choosing it as their preferred choice for error correction in voice-based text entry.
58.

Ramin Hedeshy, Chandan Kumar, Raphael Menges, and Steffen Staab, GIUPlayer: A Gaze Immersive YouTube Player Enabling Eye Control and Attention Analysis, ACM Symposium on Eye Tracking Research and Applications (ETRA '20 Adjunct)

59.

Raphael Menges, Sophia Kramer, Stefan Hill, Marius Nisslmüller, Chandan Kumar, Steffen Staab, A Visualization Tool for Eye Tracking Data Analysis in the Web, ACM Symposium on Eye Tracking Research and Applications (ETRA '20 Short Papers)

60.

S. Nikolopoulos, C. Kumar, I. Kompatsiaris, Signal Processing to Drive Human-Computer Interaction: EEG and eye-controlled interfaces, Kompatsiaris, I.; Kumar, C.; Nikolopoulos, S. (Eds.), Series: Healthcare Technologies, Institution of Engineering and Technology, London, United Kingdom, 2020, ISBN 9781785619199.

Abstract The evolution of eye tracking and brain-computer interfaces has given a new perspective on the control channels that can be used for interacting with computer applications. In this book leading researchers show how these technologies can be used as control channels with signal processing algorithms and interface adaptations to drive a human-computer interface. Topics covered in the book include a comprehensive overview of eye-mind interaction incorporating algorithm and interface developments; modeling the (dis)abilities of people with motor impairment and their computer use requirements and expectations from assistive interfaces; and signal processing aspects including acquisition, pre-processing, enhancement, feature extraction, and classification of eye gaze, EEG (steady-state visual evoked potentials, motor imagery and error-related potentials) and near-infrared spectroscopy (NIRS) signals. Finally, the book presents a comprehensive set of guidelines, with examples, for conducting evaluations to assess usability, performance, and feasibility of multi-modal interfaces combining eye gaze and EEG based interaction algorithms. The contributors to this book are researchers, engineers, clinical experts, and industry practitioners who have collaborated on these topics, providing an interdisciplinary perspective on the underlying challenges of eye and mind interaction and outlining future directions in the field.
61.

V. P. Oikonomou, S. Nikolopoulos and I. Kompatsiaris, Robust Motor Imagery Classification Using Sparse Representations and Grouping Structures, in IEEE Access, vol. 8, pp. 98572-98583, 2020, doi: 10.1109/ACCESS.2020.2997116

Abstract The classification of Motor Imagery (MI) tasks constitutes one of the most challenging problems in Brain Computer Interfaces (BCI) mostly due to the varying conditions of its operation. These conditions may vary with respect to the number of electrodes, the time and effort that can be invested by the user for training/calibrating the system prior to its use, as well as the duration or even the type of the imaginary task that is most convenient for the user. Hence, it is desirable to design classification schemes that are not only accurate in terms of the classification output but also robust to changes in the operational conditions. Towards this goal, we propose a new sparse representation classification scheme that extends current sparse representation schemes by exploiting the group sparsity of relevant features. Based on this scheme each test signal is represented as a linear combination of train trials that are further constrained to belong in the same MI class. Our expectation is that this constrained linear combination exploiting the grouping structure of the training data will lead to representations that are more robust to varying operational conditions. Moreover, in order to avoid overfitting and provide a model with good generalization abilities we adopt the Bayesian framework and, in particular, the Variational Bayesian Framework since we use a specific approximate posterior to exploit the grouping structure of the data. We have evaluated the proposed algorithm on two MI datasets using electroencephalograms (EEG) that allowed us to simulate different operational conditions like the number of available channels, the number of training trials, the type of MI tasks, as well as the duration of each trial. Results have shown that the proposed method presents state-of-the-art performance against well-known classification methods in MI BCI literature.
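At its core, sparse representation classification represents a test trial as a linear combination of training trials and assigns the class whose trials reconstruct it best. The sketch below uses a plain least-squares fit per class rather than the paper's group-sparse Variational Bayesian estimator, purely to illustrate the residual-based decision rule; the data and all names are illustrative.

```python
import numpy as np

def src_classify(x, train_by_class):
    """Residual-based classification: represent the test vector x with
    each class's training trials (columns of D) and return the label
    with the smallest reconstruction error. Plain least squares stands
    in for the group-sparse Bayesian estimator of the actual method."""
    residuals = {}
    for label, D in train_by_class.items():
        coef, *_ = np.linalg.lstsq(D, x, rcond=None)
        residuals[label] = float(np.linalg.norm(x - D @ coef))
    return min(residuals, key=residuals.get)

# Two toy classes whose "trials" span disjoint subspaces of R^4
I = np.eye(4)
classes = {"left": I[:, :2], "right": I[:, 2:]}
label = src_classify(I[:, 0] + I[:, 1], classes)  # lies in the "left" span
```

The constraint that the combination uses trials from a single class at a time is what the paper generalizes through group sparsity across the whole training set.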