Related publications

Published papers related to the MAMEM Project.
1.

Vangelis P. Oikonomou, Georgios Liaros, Kostantinos Georgiadis, Elisavet Chatzilari, Katerina Adam, Spiros Nikolopoulos and Ioannis Kompatsiaris, Comparative evaluation of state-of-the-art algorithms for SSVEP-based BCIs, Technical Report - eprint arXiv:1602.00904, February 2016

Abstract Brain-computer interfaces (BCIs) have been gaining momentum in making human-computer interaction more natural, especially for people with neuro-muscular disabilities. Among the existing solutions, the systems relying on electroencephalograms (EEG) occupy the most prominent place due to their non-invasiveness. However, the process of translating EEG signals into computer commands is far from trivial, since it requires the optimization of many different parameters that need to be tuned jointly. In this report, we focus on the category of EEG-based BCIs that rely on Steady-State-Visual-Evoked Potentials (SSVEPs) and perform a comparative evaluation of the most promising algorithms existing in the literature. More specifically, we define a set of algorithms for each of the various different parameters composing a BCI system (i.e. filtering, artifact removal, feature extraction, feature selection and classification) and study each parameter independently by keeping all other parameters fixed. The results obtained from this evaluation process are provided together with a dataset consisting of the 256-channel EEG signals of 11 subjects, as well as a processing toolbox for reproducing the results and supporting further experimentation. In this way, we manage to make available to the community a state-of-the-art baseline for SSVEP-based BCIs that can be used as a basis for introducing novel methods and approaches.
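The evaluation protocol described above, fixing all pipeline stages but one and swapping candidates for that stage, can be sketched on synthetic data. Everything below (sampling rate, stimulation frequencies, the PSD feature extractor and argmax classifier) is an illustrative stand-in, not the report's actual toolbox code:

```python
import numpy as np

rng = np.random.default_rng(0)
FS = 250               # sampling rate in Hz (assumed)
FREQS = [10.0, 12.0]   # candidate SSVEP stimulation frequencies (assumed)

def synth_trial(f, n=FS * 2):
    """Synthetic 2-second single-channel SSVEP trial: sinusoid plus noise."""
    t = np.arange(n) / FS
    return np.sin(2 * np.pi * f * t) + 0.5 * rng.standard_normal(n)

def psd_features(x):
    """Power at each candidate frequency taken from the periodogram."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    bins = [int(round(f * len(x) / FS)) for f in FREQS]
    return np.array([spec[b] for b in bins])

def argmax_classifier(feats):
    """Pick the stimulation frequency with the largest power."""
    return int(np.argmax(feats))

def accuracy(feature_fn, clf_fn, n_trials=50):
    """Score one pipeline configuration over many random trials, as in the
    report's protocol of varying one component while keeping the rest fixed."""
    hits = 0
    for _ in range(n_trials):
        label = rng.integers(len(FREQS))
        hits += clf_fn(feature_fn(synth_trial(FREQS[label]))) == label
    return hits / n_trials

print(f"accuracy: {accuracy(psd_features, argmax_classifier):.2f}")
```

Swapping `psd_features` or `argmax_classifier` for another candidate while leaving the rest of the call untouched mirrors the one-parameter-at-a-time comparison.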
2.

Zoe Katsarou, MD, Meir Plotnik, PhD, Gabi Zeilig, MD, Amihai Gottlieb, MSc, Rachel Kizony, PhD and Sevasti Bostantjopoulou, Computer uses and difficulties in Parkinson’s disease, The MDS 20th International Congress of Parkinson's Disease and Movement Disorders, Berlin, Germany, June 2016

Abstract Thirty-five PD patients with long experience in computer operation were included in the study. Their mean age was 59.5 (SD 8.27) years. Most of them were in Hoehn and Yahr stage II. The patients' uses, habits, and difficulties with the computer were explored by means of a structured interview, which provided information in the form of yes/no answers to questions covering a wide range of usual computer uses and applications as well as difficulties in performing various tasks relevant to computer operation. Two quantitative scales were also employed: one referring to the contribution of the computer to social life, everyday activities and emotional well-being (total score: 9=not important/45=very important) and the other exploring the disease's impact on various aspects of computer operation (total score: 11=no effect/55=maximum effect).
3.

S. Bostantjopoulou, M. Plotnik, G. Zeilig, A. Gottlieb, R. Kizony, S. Chlomissiou, A. Nichogiannopoulou, Z. Katsarou, Computer use aspects in patients with motor disabilities, 2nd Congress of the European Academy of Neurology (EAN'2016), Copenhagen, Denmark, May 28-31, 2016

Abstract Three groups of neurological patients were studied: a) 25 patients with Parkinson's disease (PD), b) 23 patients with spinal cord injury (SCI) and c) 19 with neuromuscular disorders (NMD). All patients were assessed by means of two scales, one referring to the contribution of the computer to social life, everyday activities and emotional well-being (CCLS) [total score: 9=not important/45=very important] and the other exploring the disease impact on various aspects of computer operation (DICOS) [total score: 11=no effect/55=maximum effect]. Reliability of both scales was excellent (Cronbach's alpha was 0.87 for CCLS and 0.93 for DICOS). Between-groups comparisons showed that NMD patients regarded computer use as most important and SCI patients had the greatest difficulty. Mean total scores (SD) were as follows for a) CCLS: PD patients=23.28 (7.22); SCI patients=20.78 (9.72); NMD patients=32.84 (5.12) [p=0.000] and b) DICOS: PD patients=25.9 (9.9); SCI patients=31.22 (15.0); NMD patients=20.53 (5.15) [p=0.017]. Our preliminary results show that patients with motor disabilities regard computer use as an important aspect of their life and that their disability has a significant effect on their ability to operate a computer satisfactorily. This information is important for the development of innovative technology helping patients to overcome their specific disabilities.
4.

Gabi Zeilig, Amihai Gottlieb, Rachel Kizony, Zoe Katsarou, Sevasti Bostantjopoulou, Ariana Nichogiannopoulou, Sissy Chlomissiou and Meir Plotnik, MAMEM – A novel computer brain interface platform for enhancing social interaction of people with disabilities – Clinical requirements resulting from focus groups and literature survey, 20th European Congress of Physical and Rehabilitation Medicine, Lisbon, Portugal, April 2016

Abstract Health professionals with experience in the fields of Parkinson's disease, neuromuscular conditions and tetraplegia following spinal cord injury participated, drawn from three medical centers in two countries. We performed a literature survey focusing on the characteristics of the study population, their computer and internet use habits, existing solutions, and specific challenges related to EEG- and EM-based computer-assistive devices. We conducted three focus groups, with six health professionals per group, and performed a qualitative analysis of the focus-group transcripts. The clinical requirements that resulted from this phase were then summarized, prioritized and coded with numbers from 1 (minimal) to 7 (maximal importance) by the health professionals at each site.
5.

Sofia Fountoukidou, Jaap Ham, Peter Ruijten, and Uwe Matzat, Using personalized persuasive strategies to increase acceptance and use of HCI technology, Adjunct Proceedings of the 11th International Conference on Persuasive Technology, Salzburg, Austria, April 2016

Abstract It has been recognized that the adoption of a certain technology does not depend on technological excellence alone. End-users can be reluctant to use it even though they are aware of its benefits. Personalized persuasive technology could be a key to the successful adoption of the MAMEM technology. Previous research has identified various persuasive strategies that can be effective for behavior and/or attitude change. However, it is still unclear which persuasive strategies are most effective for which type of person. Thus, one of the objectives of MAMEM is to adapt persuasive technology interventions to the target group characteristics, in order to increase their effectiveness. To this end, user profiles and sets of personas for the three target groups are created. Next, the Intervention Mapping framework is used for developing and implementing effective persuasive interventions tailored both to the user audience characteristics and to the specific target behaviors. All in all, MAMEM will carry out extensive research in order to contribute to the current state of the art in designing persuasive technologies that take into consideration both the user characteristics and the behavior in question.
6.

R. Menges, C. Kumar, K. Sengupta, S. Staab, eyeGUI: A Novel Framework for Eye-Controlled User Interfaces, 9th Nordic Conference on Human-Computer Interaction, NordiCHI 2016

Abstract The user interfaces and input events of generic applications are typically composed of mouse and keyboard interactions. Eye-controlled applications need to revise these interactions as eye gestures, and hence the design and optimization of interface elements becomes a significant challenge. In this work, we propose the novel eyeGUI framework to support the development of such interactive eye-controlled applications, covering many vital aspects such as rendering, layout, dynamic modification of content, and support for graphics and animation.
7.

C. Kumar, R. Menges and S. Staab, Eye-Controlled Interfaces for Multimedia Interaction, in IEEE MultiMedia, vol. 23, no. 4, pp. 6-13, Oct.-Dec. 2016. doi: 10.1109/MMUL.2016.52

Abstract In the digitized world, interacting with multimedia information occupies a large portion of everyday activities; it’s now an essential part of how we gather knowledge and communicate with others. It involves several operations, including selecting, navigating through, and modifying multimedia, such as text, images, animations, and videos. These operations are usually performed by devices such as a mouse or keyboard, but people with motor disabilities often can’t use such devices. This limits their ability to interact with multimedia content and thus excludes them from the digital information spaces that help us stay connected with families, friends, and colleagues. In this paper, we primarily focus on the gaze-based control paradigm that we’ve developed as part of our work at the Institute for Web Science and Technologies (WeST) within the scope of the MAMEM project. We outline the particular challenges of eye-controlled interaction with multimedia information, including initial project results. The objective is to investigate how eye-based interaction techniques can be made precise and fast enough to not only allow disabled people to interact with multimedia information but also make usage sufficiently simple and enticing such that healthy users might also want to include eye-based interaction.
8.

Korok Sengupta, Raphael Menges, Chandan Kumar and Steffen Staab, GazeTheKey: Interactive Keys to Integrate Word Predictions for Gaze-based Text Entry, Demo paper at the 22nd annual meeting of the intelligent user interfaces community, ACM IUI 2017, March 13-16, 2017, Limassol, Cyprus

Abstract In conventional keyboard interfaces for eye typing, the functionalities of the virtual keys are static, i.e., the user's gaze at a particular key simply translates the associated letter into the user's input. In this work we argue that the keys should be more dynamic and embed intelligent predictions to support gaze-based text entry. In this regard, we demonstrate a novel "GazeTheKey" interface where a key not only signifies the input character, but also predicts the relevant words that could be selected by the user's gaze using a two-step dwell time.
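The two-step dwell idea can be illustrated with a generic dwell-timer sketch. The thresholds, class names and reset policy below are assumptions chosen for illustration, not taken from the GazeTheKey implementation:

```python
class DwellButton:
    """Fires once the gaze has rested on the target for `threshold` seconds.
    Generic dwell-activation sketch; parameters are illustrative assumptions."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.elapsed = 0.0

    def update(self, gazing, dt):
        """Advance the timer by dt; reset when the gaze leaves the target."""
        self.elapsed = self.elapsed + dt if gazing else 0.0
        if self.elapsed >= self.threshold:
            self.elapsed = 0.0
            return True   # activation event
        return False

# Two-step dwell: a short first dwell enters the key's letter, while a longer
# continued dwell on a word suggestion shown inside the key selects the word.
key, suggestion = DwellButton(0.3), DwellButton(0.6)
events = []
for _ in range(10):              # 10 gaze frames at 10 Hz, all on the key
    if key.update(True, 0.1):
        events.append("letter")
    if suggestion.update(True, 0.1):
        events.append("word")
print(events)  # → ['letter', 'letter', 'word', 'letter']
```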
9.

Zoe Katsarou, Gabi Zeilig, Meir Plotnik, Amihai Gottlieb, Rachel Kizony and Sevasti Bostantjopoulou, Parkinson’s disease impact on computer use. A patients’ and caregivers’ perspective, The American Academy of Neurology 69th Annual Meeting, Neurology April 18, 2017 vol. 88 no. 16 Supplement P6.009, Boston, MA

Abstract The mean total score of PD patients on the CCLS scale was 22.7±6.9. Single items that scored high were relevant to interpersonal interaction, education, work and employment. The DICOS scale yielded a mean total score of 24.7±10.0. Single items that had a significant impact on the whole score were speed of computer operation and accuracy of performance. Caregivers’ mean scores on the CCLS and DICOS scales were similar to those of the patients (p=0.324).
10.

Anastasios Maronidis, Vangelis Oikonomou, Spiros Nikolopoulos and Ioannis (Yannis) Kompatsiaris, Steady State Visual Evoked Potential Detection Using Subclass Marginal Fisher Analysis, Proceedings of the 8th International IEEE EMBS Conference on Neural Engineering, May 25-28, 2017, Shanghai, China

Abstract Recently, SSVEP detection from EEG signals has attracted the interest of the research community, leading to a number of well-tailored methods. Among these methods, Canonical Correlation Analysis (CCA), along with several variants, has taken the lead. Despite their effectiveness, due to their strong dependence on the correct calculation of correlations, these methods may prove inadequate in the face of deficiencies in the number of channels used, the number of available trials or the duration of the acquired signals. In this paper, we propose the use of Subclass Marginal Fisher Analysis (SMFA) in order to overcome such problems. SMFA has the power to effectively learn discriminative features from poor signals, and this advantage is expected to offer the robustness needed to handle such deficiencies. In this context, we pinpoint the qualitative advantages of SMFA, and through a series of experiments we demonstrate its superiority over the state-of-the-art in detecting SSVEPs from EEG signals acquired with limited resources.
11.

Vangelis P. Oikonomou, Anastasios Maronidis, Georgios Liaros, Spiros Nikolopoulos and Ioannis Kompatsiaris, Sparse Bayesian Learning for Subject Independent Classification with Application to SSVEP-BCI, Proceedings of the 8th International IEEE EMBS Conference on Neural Engineering, May 25-28, 2017, Shanghai, China

Abstract Sparse Bayesian Learning (SBL) is a widely used framework which helps us to deal with two basic problems of machine learning: avoiding overfitting of the model and incorporating prior knowledge into it. In this work, multiple linear regression models under the SBL framework are used for the problem of multiclass classification when multiple subjects are available. As a case study, we apply our method to the detection of Steady State Visual Evoked Potentials (SSVEP), a problem that arises frequently within the Brain Computer Interface (BCI) paradigm. The multiclass classification problem is decomposed into multiple regression problems. By solving these regression problems, a discriminant vector is learned for further processing. In addition, the adoption of the kernel trick and the special treatment of the produced similarity matrix provide us with the ability to use a Leave-One-Subject-Out training procedure, resulting in a classification system suitable for subject-independent classification. Extensive comparisons are carried out between the proposed algorithm, the SVM classifier and the CCA-based methodology. The experimental results demonstrate that the proposed algorithm outperforms the competing approaches, in terms of classification accuracy and Information Transfer Rate (ITR), when the number of utilized EEG channels is small.
12.

Raphael Menges, Chandan Kumar, Daniel Mueller, Korok Sengupta, GazeTheWeb: A Gaze-Controlled Web Browser, (TPG challenge winner) Proceedings of the 14th Web for All Conference. ACM, 2017

Abstract The Web is essential for most people, and its accessibility should not be limited to conventional input sources like mouse and keyboard. In recent years, eye tracking systems have greatly improved, beginning to play an important role as an input medium. In this work, we present GazeTheWeb, a Web browser accessible solely by eye gaze input. It effectively supports all browsing operations like search, navigation and bookmarks. GazeTheWeb is based on a Chromium-powered framework, comprising Web extraction to classify interactive elements and the application of gaze interaction paradigms to represent these elements.
13.

Chandan Kumar, Raphael Menges, Daniel Mueller, Steffen Staab, Chromium based Framework to Include Gaze Interaction in Web Browser, (honourable mention) Proceedings of the 26th International Conference Companion on World Wide Web (pp. 219-224). International World Wide Web Conferences Steering Committee, 2017

Abstract Enabling Web interaction by non-conventional input sources like the eyes has great potential to enhance Web accessibility. In this paper, we present a Chromium-based inclusive framework to adapt eye gaze events in Web interfaces. The framework provides more utility and control for developing a fully featured interactive browser, compared to the related approaches of gaze-based mouse and keyboard emulation or browser extensions. We demonstrate the framework through a sophisticated gaze-driven Web browser, which effectively supports all browsing operations like search, navigation, bookmarks, and tab management.
14.

Fotis Kalaganis, Elisavet Chatzilari, Kostas Georgiadis, Spiros Nikolopoulos, Nikos Laskaris and Yiannis Kompatsiaris, An Error Aware SSVEP-based BCI, 30th IEEE International Symposium on Computer-based Medical Systems, Special Track on Multimodal Interfaces for Natural Human Computer Interaction: Theory and Applications, IEEE CBMS 2017, June 22-24, 2017, Thessaloniki, Greece

Abstract ErrPs have lately been used to improve several existing BCI applications. In our study we investigate the contribution of ErrPs to an SSVEP-based BCI. An extensive study is presented in order to discover the limitations of the proposed scheme. Using Common Spatial Patterns and Random Forests, we show encouraging results regarding the incorporation of ErrPs into an SSVEP system. Finally, we provide a novel methodology (ICRT) that can measure the gain, in terms of time efficiency, that a BCI system obtains by incorporating ErrPs.
15.

Vangelis Oikonomou, Kostas Georgiadis, Georgios Liaros, Spiros Nikolopoulos and Yiannis Kompatsiaris, A comparison study on EEG signal processing techniques using motor imagery EEG data, 30th IEEE International Symposium on Computer-based Medical Systems, Special Track on Multimodal Interfaces for Natural Human Computer Interaction: Theory and Applications, IEEE CBMS 2017, June 22-24, 2017, Thessaloniki, Greece

Abstract Brain-computer interfaces (BCIs) have been gaining momentum in making human-computer interaction more natural, especially for people with neuro-muscular disabilities. Among the existing solutions, the systems relying on electroencephalograms (EEG) occupy the most prominent place due to their non-invasiveness. In this work we provide a review of various existing techniques for the identification of motor imagery (MI) tasks. More specifically, we perform a comparison between CSP-related features and features based on Power Spectral Density (PSD) techniques. Furthermore, for the identification of MI tasks two well-known classifiers are used, Linear Discriminant Analysis (LDA) and the Support Vector Machine (SVM). Our results confirm that PSD features demonstrate the most consistent robustness and effectiveness in extracting patterns for accurately discriminating between left and right MI tasks.
16.

Korok Sengupta, Jun Sun, Raphael Menges, Chandan Kumar and Steffen Staab, Analyzing the Impact of Cognitive Load in Evaluating Gaze-based Typing, 30th IEEE International Symposium on Computer-based Medical Systems, Special Track on Multimodal Interfaces for Natural Human Computer Interaction: Theory and Applications, IEEE CBMS 2017, June 22-24, 2017, Thessaloniki, Greece, Best Student Paper Award

Abstract Gaze-based virtual keyboards offer people with motor disabilities a method for text entry by eye movements. The effectiveness and usability of these keyboards have traditionally been evaluated with conventional text entry performance measures such as words per minute, keystroke saving, error rate, accuracy, etc. However, in comparison to the conventional text entry approaches, gaze-based typing involves natural eye movements that are highly correlated with human brain cognition. Employing eye gaze as an input could lead to excessive mental demand, and in this work we argue the need to include cognitive load as an eye typing evaluation measure. We evaluate three variations of gaze-based virtual keyboards, whose designs vary in terms of word suggestion positioning. The conventional text entry metrics indicate no significant difference in the performance of the different keyboard designs. However, STFT (Short-time Fourier Transform) based analysis of EEG signals indicates variations in the mental workload of participants while interacting with these designs. Moreover, the EEG analysis provides insights into the variation of the user’s cognition in different typing phases and intervals, which should be considered to improve eye typing usability.
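A minimal sketch of STFT-based band-power analysis on a synthetic EEG trace: the sampling rate, window parameters and the use of a theta/alpha power ratio as a workload proxy are assumptions for illustration, not the paper's exact pipeline:

```python
import numpy as np

FS = 128  # sampling rate in Hz (assumed)

def stft_band_power(x, band, win=FS, hop=FS // 2):
    """Mean power inside `band` (Hz) across Hann-windowed STFT frames."""
    w = np.hanning(win)
    freqs = np.fft.rfftfreq(win, 1 / FS)
    mask = (freqs >= band[0]) & (freqs < band[1])
    frames = [x[i:i + win] * w for i in range(0, len(x) - win + 1, hop)]
    spec = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    return spec[:, mask].mean()

# Synthetic 10-second trace: a strong 6 Hz (theta) and a weak 10 Hz (alpha)
# component. A rising theta/alpha ratio is one common workload proxy.
t = np.arange(FS * 10) / FS
eeg = np.sin(2 * np.pi * 6 * t) + 0.3 * np.sin(2 * np.pi * 10 * t)
theta = stft_band_power(eeg, (4, 8))
alpha = stft_band_power(eeg, (8, 13))
print(f"theta/alpha workload index: {theta / alpha:.1f}")
```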
17.

Chandan Kumar, Raphael Menges and Steffen Staab, Assessing the Usability of a Gaze-Adapted Interface with Conventional Eye-based Emulation, 30th IEEE International Symposium on Computer-based Medical Systems, Special Track on Multimodal Interfaces for Natural Human Computer Interaction: Theory and Applications, IEEE CBMS 2017, June 22-24, 2017, Thessaloniki, Greece

Abstract In recent years, eye tracking systems have greatly improved, beginning to play a promising role as an input medium. Eye trackers can be used for application control either by simply emulating the mouse device in a traditional graphical user interface, or through interfaces customized for eye gaze events. In this work we evaluate these two approaches to assess their impact on usability. We present a gaze-adapted Twitter application interface with direct interaction by eye gaze input, and compare it to Twitter in a conventional browser interface with gaze-based mouse and keyboard emulation. We conducted an experimental study, which indicates a significantly better subjective user experience for the gaze-adapted approach. Based on the results, we argue the need for user interfaces that interact directly with eye input to provide an improved user experience, more specifically in the field of accessibility.
18.

Vangelis Oikonomou, George Liaros, Spiros Nikolopoulos and Ioannis Kompatsiaris, Sparse Bayesian Learning for Multiclass Classification with application to SSVEP-BCI, 7th Graz Brain-Computer Interface Conference, September 18th – 22nd, 2017, Graz, Austria

Abstract Sparse Bayesian Learning (SBL) is a basic tool of machine learning. In this work, multiple linear regression models under the SBL framework (namely MultiLRM) are used for the problem of multiclass classification. As a case study, we apply our method to the detection of Steady State Visual Evoked Potentials (SSVEP), a problem we encounter within the Brain Computer Interface (BCI) context. The multiclass classification problem is decomposed into multiple regression problems. By solving these regression problems, a discriminant feature vector is learned for further processing. Furthermore, by adopting the kernel trick, the model is able to reduce its computational cost. To obtain the regression coefficients of each linear model, the Variational Bayesian framework is adopted. Extensive comparisons are carried out between the MultiLRM algorithm and several other competing methods. The experimental results demonstrate that the MultiLRM algorithm achieves better performance than the competing algorithms for SSVEP classification, especially when the number of EEG channels is small.
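The decomposition of multiclass classification into per-class regression problems can be sketched as one-vs-rest linear regression. Plain ridge regularization below stands in for the paper's sparse Bayesian prior; this is an illustrative simplification on toy data, not MultiLRM itself:

```python
import numpy as np

rng = np.random.default_rng(1)

def ovr_ridge_fit(X, y, n_classes, lam=1.0):
    """One regression per class: target +1 for that class, -1 otherwise.
    Ridge regularization is a stand-in for a sparsity-inducing prior."""
    d = X.shape[1]
    A = X.T @ X + lam * np.eye(d)
    # Each solve yields one discriminant vector; stack them as columns.
    W = np.column_stack([
        np.linalg.solve(A, X.T @ np.where(y == c, 1.0, -1.0))
        for c in range(n_classes)
    ])
    return W

def ovr_predict(X, W):
    """Assign each sample to the class whose regressor responds strongest."""
    return np.argmax(X @ W, axis=1)

# Toy 3-class problem: class means well separated in a 10-d feature space.
means = rng.standard_normal((3, 10)) * 3
y = rng.integers(3, size=300)
X = means[y] + rng.standard_normal((300, 10))
W = ovr_ridge_fit(X, y, 3)
acc = (ovr_predict(X, W) == y).mean()
print(f"training accuracy: {acc:.2f}")
```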
19.

Sofia Fountoukidou, Jaap Ham, Cees Midden, and Uwe Matzat, Using Tailoring to Increase the Effectiveness of a Persuasive Game-Based Training for Novel Technologies, Proceedings of the Personalization in Persuasive Technology Workshop, Persuasive Technology 2017, Amsterdam, The Netherlands, April 2017

Abstract A vast majority of people with motor disabilities cannot be part of today’s digital society, due to the difficulties they face in using conventional interfaces (i.e., mouse and keyboard) for computer operation. The MAMEM project aims at facilitating the social inclusion of these people by developing a technology that allows computer operation solely by using the eyes and mind. However, training is one of the key factors affecting the users’ technology acceptance. Game-based computer training including persuasive strategies could be an effective way to influence user beliefs and behaviours regarding a novel system. Tailoring these strategies to an individual level is a promising way to increase the effectiveness of a persuasive game. In the current paper, we briefly discuss the theoretical development of a persuasive game-based training for the MAMEM technology, as well as how we used tailored communication strategies to further enhance user technology acceptance. The development of such a tailored persuasive game will be essential not only for increasing acceptance and usage of assistive technology but also for scientific insight into the personalization of persuasion.
20.

Korok Sengupta, Chandan Kumar and Steffen Staab, Usability Heuristics for Eye-controlled User Interface, The 2017 COGAIN Symposium: Communication by Gaze Interaction, Wuppertal, Germany, August 19th and 21st, 2017

Abstract The evolution of affordable assistive technologies like eye tracking helps people with motor disabilities to access information on the Internet or work on computers. However, eye tracking environments need to be specially built for better usability and accessibility of the content, and should not rely on interface layouts designed for conventional mouse- or touch-based interfaces. In this work, we argue the need for a domain-specific heuristic checklist for eye-controlled interfaces, one that conforms to usability and design principles and is less demanding from a cognitive-load perspective. The focus is on understanding the product in use inside the gaze-based environment and then applying the heuristic guidelines to its design. We propose an eight-point questionnaire to validate the usability heuristic guidelines for eye-controlled interfaces.
21.

Spiros Nikolopoulos, Kostas Georgiadis, Fotis Kalaganis, Georgios Liaros, Ioulietta Lazarou, Katerina Adam, Anastasios Papazoglou-Chalikias, Elisavet Chatzilari, Vangelis P. Oikonomou, Panagiotis C. Petrantonakis, Ioannis Kompatsiaris, Chandan Kumar, Raphael Menges, Steffen Staab, Daniel Müller, Korok Sengupta, Sevasti Bostantjopoulou, Zoe Katsarou, Gabi Zeilig, Meir Plotnik, Amihai Gottlieb, Sofia Fountoukidou, Jaap Ham, Dimitrios Athanasiou, Agnes Mariakaki, Dario Comanducci, Edoardo Sabatini, Walter Nistico and Markus Plank, The MAMEM Project – A dataset for multimodal human-computer interaction using biosignals and eye tracking information, Technical Report, Zenodo. http://doi.org/10.5281/zenodo.834154

Abstract In this report we present a dataset that combines multimodal biosignals and eye tracking information gathered under a human-computer interaction framework. The dataset was developed in the vein of the MAMEM project that aims to endow people with motor disabilities with the ability to edit and author multimedia content through mental commands and gaze activity. The dataset includes EEG, eye-tracking, and physiological (GSR and heart rate) signals along with demographic, clinical and behavioral data collected from 36 individuals (18 able-bodied and 18 motor-impaired). Data were collected during interaction with a specifically designed interface for web browsing and multimedia content manipulation, and during imaginary movement tasks. Alongside these data we also include evaluation reports from both the subjects and the experimenters as far as the experimental procedure and collected dataset are concerned. We believe that the presented dataset will contribute towards the development and evaluation of modern human-computer interaction systems that would foster the integration of people with severe motor impairments back into society.
22.

Fotis P. Kalaganis, Elisavet Chatzilari, Spiros Nikolopoulos, Nikos A. Laskaris and Yiannis Kompatsiaris, A Collaborative Representation Approach to Detecting Error-Related Potentials in SSVEP-BCIs, 25th ACM MM Conference, Thematic Workshops’17, October 23–27, 2017, Mountain View, CA, USA

Abstract This study takes advantage of Error-Related Potentials, a certain type of neurophysiological event associated with humans’ ability to observe and recognize erroneous actions, in order to improve SSVEP-based Brain Computer Interfaces (BCIs). The Error-Related Potentials serve as a passive correction mechanism that originates directly from the user’s brain. In this paper we propose a novel approach to spatial filtering, based on a supervised variant of Collaborative Representation Projections (CRP), offering a more discriminant representation of electroencephalography signals for detecting Error-Related Potentials. This new approach enhances the detectability of Error-Related Potentials by projecting the spatial information of the signals into a new space where samples of the same class tend to form local neighborhoods. Moreover, the conditions under which Error-Related Potentials positively contribute to the performance of an SSVEP-based BCI are explored. To this end, we also provide a new methodology, namely Inverse Correct Response Time (ICRT), that reliably captures the trade-off between the gain of automated error detection and the induced time delay of a BCI system that incorporates Error-Related Potentials.
23.

Elisavet Chatzilari, Georgios Liaros, Kostas Georgiadis, Spiros Nikolopoulos and Yiannis Kompatsiaris, Combining the Benefits of CCA and SVMs for SSVEP-based BCIs in Real-world Conditions, 25th ACM MM Conference, Workshop MMHealth’17, October 23–27, 2017, Mountain View, CA, USA

Abstract In this paper we propose a novel method for SSVEP classification that combines the benefits of the inherently multi-channel CCA, the state-of-the-art method for detecting SSVEPs, with robust SVMs, one of the most popular machine learning algorithms. The employment of SVMs, besides the benefit of robustness, also provides a confidence score that allows us to dynamically trade off the trial length against the accuracy of the classifier, and vice versa. By balancing this trade-off we are able to offer personalized self-paced BCIs that maximize the ITR of the system. Furthermore, we propose to perturb the template frequencies of CCA so as to accommodate the requirements of real-world BCI applications, where the environmental conditions may not be ideal, in contrast to existing methods that rely on the assumption of soundproof and distraction-free environments.
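The CCA side of such a combination can be sketched with NumPy alone: reference sinusoids are built at each candidate stimulation frequency and the frequency with the largest canonical correlation wins. The sampling rate, frequency set and number of harmonics below are assumptions for illustration:

```python
import numpy as np

FS, T = 250, 2.0            # sampling rate (Hz) and trial length (s), assumed
FREQS = [8.0, 10.0, 12.0]   # candidate stimulation frequencies, assumed

def cca_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y."""
    Qx, _ = np.linalg.qr(X - X.mean(0))
    Qy, _ = np.linalg.qr(Y - Y.mean(0))
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def reference(f, n, harmonics=2):
    """Sin/cos reference signals at f and its harmonics (columns)."""
    t = np.arange(n) / FS
    return np.column_stack([fn(2 * np.pi * h * f * t)
                            for h in range(1, harmonics + 1)
                            for fn in (np.sin, np.cos)])

def detect(X):
    """Attended frequency = reference set with the maximal correlation."""
    n = X.shape[0]
    return FREQS[int(np.argmax([cca_corr(X, reference(f, n))
                                for f in FREQS]))]

# Synthetic 4-channel trial driven at 10 Hz with additive noise.
rng = np.random.default_rng(2)
n = int(FS * T)
t = np.arange(n) / FS
X = (np.outer(np.sin(2 * np.pi * 10.0 * t), rng.uniform(0.5, 1.0, 4))
     + 0.5 * rng.standard_normal((n, 4)))
print(detect(X))  # → 10.0
```

In the paper's scheme the per-frequency correlations would then feed a classifier such as an SVM rather than a bare argmax.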
24.

Raphael Menges, Chandan Kumar, Ulrich Wechselberger, Christoph Schaefer, Tina Walber and Steffen Staab, Schau genau! A Gaze-Controlled 3D Game for Entertainment and Education, COGAIN Symposium, Wuppertal, August 21st, 2017

Abstract Eye tracking devices have become affordable. However, they are still not very present in everyday life. To explore the feasibility of modern low-cost hardware in terms of reliability and usability for broad user groups, we present a gaze-controlled game in a standalone arcade box with a single physical buzzer for activation. The player controls an avatar in the appearance of a butterfly, which flies over a meadow towards the horizon. The goal of the game is to collect spawning flowers by hitting them with the avatar, which increases the score. Three mappings of gaze on screen to the world position of the avatar, featuring different levels of intelligence, were defined and randomly assigned to players. Both a survey after each session and the high-score distribution are considered in the evaluation of these control styles. An additional serious part of the game educates the players about flower species, rewarding prior knowledge with a point multiplier. During this part, gaze data on images is collected, which can be used for saliency calculations. Nearly 3000 completed game sessions were recorded at a state horticulture show in Germany, which demonstrates the impact and acceptability of this novel input technique among lay users.
25.

Peter A. M. Ruijten, Cees J. H. Midden & Jaap Ham, Ambiguous Agents: The Influence of Consistency of an Artificial Agent’s Social Cues on Emotion Recognition, Recall, and Persuasiveness, International Journal of Human–Computer Interaction, Volume 32, Issue 9, 2016

Abstract This article explores the relation between consistency of social cues and persuasion by an artificial agent. Including (minimal) social cues in Persuasive Technology (PT) increases the probability that people attribute human-like characteristics to that technology, which in turn can make that technology more persuasive (see, e.g., Nass, Steuer, Tauber, & Reeder, 1993). PT in the social actor role can be equipped with a variety of social cues to create opportunities for applying social influence strategies (for an overview, see Fogg, 2003). However, multiple social cues may not always be perceived as being consistent, which could decrease their perceived human-likeness and their persuasiveness. In the current article, we investigate the relation between consistency of social cues and persuasion by an artificial agent. Findings of two studies show that consistency of social cues increases people’s recognition and recall of artificial agents’ emotional expressions, and makes those agents more persuasive. These findings show the importance of the combined meaning of social cues in the design of persuasive artificial agents.
26.

Jaap Ham, Jef van Schendel, Saskia Koldijk and Evangelia Demerouti, Finding Kairos: The Influence of Context-Based Timing on Compliance with Well-Being Triggers, Symbiotic 2016, LNCS 9961, pp. 89–101, 2017.

Abstract For healthy computer use, frequent, short breaks are crucial. This research investigated whether context-aware persuasive technology can identify opportune and effective moments (of high user motivation and ability to perform the target behavior) for triggering short breaks, fostering symbiotic interactions between e-Coaching e-Health technology and users. In Study 1, office workers rated their motivation and ability to take a short break (probed at random moments). Simultaneously, their computer activity was recorded. Results showed that computer activity (time since last break; change in computer activity level) can predict moments of high and low (perceived) ability (but not motivation) to take a short break. Study 2 showed that when office workers received triggers (to take a short break) at moments of high (vs. low) ability (predicted based on computer activity), compliance increased by 70%. These results show that context information can be used to identify opportune moments, at which persuasive triggers are more effective.
27.

Konstantinos Ilias Georgiadis, Nikolaos Laskaris, Spiros Nikolopoulos and Yiannis Kompatsiaris, Discriminative Codewaves: a symbolic dynamics approach to SSVEP recognition for asynchronous BCI, Journal of Neural Engineering, Volume 15, Number 2, January 2018, © 2017 IOP Publishing Ltd

Abstract Objective: Steady-state visual evoked potential (SSVEP) is a very popular approach to establishing a communication pathway in brain-computer interfaces (BCIs), without any training requirements for the user. Brain activity recorded over occipital regions, in association with stimuli flickering at distinct frequencies, is used to predict the gaze direction. High performance is achieved when the analysis of the multichannel signal is guided by the driving signals. It is the scope of this study to introduce an efficient way to identify the attended stimulus without the need to register the driving signals. Approach: The brain response is described as a dynamical trajectory towards one of the attractors associated with the brainwave entrainment induced by the attended stimulus. A condensed description of each single-trial response is provided by means of discriminative vector quantization (DVQ), and different trajectories are disentangled based on a simple classification scheme that uses templates and confidence intervals derived from a small training dataset. Main results: Experiments based on two different datasets provided evidence that the introduced approach compares favorably to well-established alternatives regarding the Information Transfer Rate (ITR). Significance: Our approach relies on (but is not restricted to) single-sensor traces, incorporates a novel description of brainwaves based on semi-supervised learning, and its great advantage stems from its potential for self-paced BCI.
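The Information Transfer Rate used as the evaluation criterion in this and several later entries is commonly computed with the Wolpaw formula; a minimal sketch (the function name and the example values below are illustrative, not taken from the paper):

```python
import math

def itr_bits_per_min(n_classes, accuracy, trial_secs):
    """Wolpaw Information Transfer Rate for an N-class BCI.

    n_classes:  number of selectable targets (e.g. flickering stimuli)
    accuracy:   probability of a correct selection, 0 < accuracy <= 1
    trial_secs: duration of one selection in seconds
    """
    n, p = n_classes, accuracy
    bits = math.log2(n)                      # information at perfect accuracy
    if 0 < p < 1:                            # entropy penalty for errors
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * 60.0 / trial_secs          # bits per trial -> bits per minute

# e.g. 4 stimuli, 90% selection accuracy, 2 s per selection
print(round(itr_bits_per_min(4, 0.90, 2.0), 2))  # -> 41.18
```

Note that the formula assumes equiprobable targets and a uniform error distribution over the remaining classes, which is why reported ITRs are only comparable under matched trial lengths.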
28.

Meir Plotnik, Amihai Gottlieb, Rachel Kizony, Zoe Katsarou, Sevasti Bostantzopoulou, Ariana Nichogiannopoulou, Sissy Chlomissiou and Gabi Zeilig, Clinical Research Example – MAMEM – Multimedia Authoring and Management Using Your Eyes and Mind, Rehabilitation Science and Technology Update (RSTU), 7-10 February 2016, Tel Aviv

Abstract With the growing number of people with severe disabilities who live longer, and the growing use of computers for social interactions, we introduce the ambitious objective of the Multimedia Authoring and Management using your Eyes and Mind (MAMEM) project, i.e., more natural human-computer interfaces based on electroencephalography (EEG) and eye-movement (EM) technologies.
29.

Spiros Nikolopoulos, Kostas Georgiadis, Fotis Kalaganis, Georgios Liaros, Ioulietta Lazarou, Katerina Adam, Anastasios Papazoglou-Chalikias, Elisavet Chatzilari, Vangelis P. Oikonomou, Panagiotis C. Petrantonakis, Ioannis Kompatsiaris, Chandan Kumar, Raphael Menges, Steffen Staab, Daniel Müller, Korok Sengupta, Sevasti Bostantjopoulou, Zoe Katsarou, Gabi Zeilig, Meir Plotnik, Amihai Gottlieb, Racheli Kizoni, Sofia Fountoukidou, Jaap Ham, Dimitrios Athanasiou, Agnes Mariakaki, Dario Comanducci, Edoardo Sabatini, Walter Nistico and Markus Plank, A Multimodal dataset for authoring and editing multimedia content: The MAMEM project, In Data in Brief, Volume 15, 2017, Pages 1048-1056, ISSN 2352-3409

Abstract We present a dataset that combines multimodal biosignals and eye tracking information gathered under a human-computer interaction framework. The dataset was developed in the vein of the MAMEM project that aims to endow people with motor disabilities with the ability to edit and author multimedia content through mental commands and gaze activity. The dataset includes EEG, eye-tracking, and physiological (GSR and heart rate) signals collected from 34 individuals (18 able-bodied and 16 motor-impaired). Data were collected during the interaction with a specifically designed interface for web browsing and multimedia content manipulation, and during imaginary movement tasks. The presented dataset will contribute towards the development and evaluation of modern human-computer interaction systems that would foster the integration of people with severe motor impairments back into society.
30.

Meir Plotnik, Zoe Katsarou, Amihai Gottlieb, Adam Grinberg, Rachel Kizony, Gabi Zeilig, Sevasti Bostantjopoulou-Kambouroglou, The importance of using computers in populations with Parkinson’s disease and spinal cord injury: a patients’ and caregivers’ perspective, SfN’s 47th annual meeting, Neuroscience 2017, November 11-15, Washington, DC, USA, 2017

Abstract Twenty individuals with PD (mean age: 59.1 ± 8.05 years) and eighteen with SCI (mean age: 45.4 ± 15.5 years) were included in the study. Participants' working habits with the computer were explored by means of a structured interview. For this interview, we adapted parts of the Matching Person and Assistive Technology (MPT) questionnaire, to account for people with movement disabilities, and created a quantitative questionnaire focused on the contribution of the computer to various aspects of social life. In addition, each participant was asked to define the three most important aspects. In parallel, the PD and SCI participants’ caregivers (20 and 11, respectively) were interviewed using the same method.
31.

Lazarou I, Nikolopoulos S, Petrantonakis PC, Kompatsiaris I and Tsolaki M, EEG-based brain computer interfaces for communication and rehabilitation of people with motor impairment: a novel approach of the 21st century, Front. Hum. Neurosci. 12:14. doi: 10.3389/fnhum.2018.00014

Abstract People with severe motor impairment face many challenges in communication and control of the environment, whilst survivors of neurological disorders have an increased demand for advanced, adaptive and personalized rehabilitation. Over the last decades, many studies have underlined the importance of brain-computer interfaces (BCIs), with great contributions ranging from communication restoration to motor rehabilitation. In this work, we review BCI research that focuses on noninvasive, electroencephalography (EEG)-based BCI systems for people with motor impairment, as far as communication and rehabilitation aspects are concerned. More specifically, we overview milestone approaches that are primarily intended to help severely paralyzed and/or locked-in state patients by using three different BCI modalities, i.e., slow cortical potentials, sensorimotor rhythms and P300 potentials, as operational mechanisms. In addition, we review BCI systems with special emphasis on the restoration of motor function for patients with spinal cord injury and chronic stroke. Finally, we summarize how EEG-based BCI systems have contributed to the communication and rehabilitation of motor-impaired people, point out advantages and limitations, and discuss the challenges that these systems should address in the future.
32.

Raphael Menges, Hanadi Tamimi, Chandan Kumar, Tina Walber, Christoph Schaefer, Steffen Staab, Enhanced Representation of Web Pages for Usability Analysis with Eye Tracking, ACM Symposium on Eye Tracking Research and Applications (ETRA 18)

Abstract Eye tracking as a tool to quantify user attention plays a major role in research and application design. For Web page usability, it has become a prominent measure to assess which sections of a Web page are read, glanced at or skipped. Such assessments primarily depend on the mapping of gaze data to a Web page representation. However, current representation methods, i.e., a virtual screenshot of the Web page or a video recording of the complete interaction session, suffer from either accuracy or scalability issues. We present a method that identifies fixed elements on Web pages and combines user viewport screenshots in relation to those fixed elements for an enhanced representation of the page. We conducted an experiment with 10 participants, and the results signify that analysis with our method is more efficient than a video recording, which is an essential criterion for large-scale Web studies.
33.

Korok Sengupta, Min Ke, Raphael Menges, Chandan Kumar, Steffen Staab, Hands-Free Web Browsing: Enriching the User Experience with Gaze and Voice Modality, ACM Symposium on Eye Tracking Research and Applications (ETRA 18)

Abstract Hands-free browsers provide an effective tool for Web interaction and accessibility, overcoming the need for conventional input sources. Current approaches to hands-free interaction fall primarily into either the voice-based or the gaze-based modality. In this work, we investigate how these two modalities could be integrated to provide a better hands-free experience for end-users. We demonstrate a multimodal browsing approach combining eye gaze and voice inputs for optimized interaction, and to satisfy user preferences with unimodal benefits. The initial assessment with five participants indicates improved performance for the multimodal prototype in comparison to single modalities for hands-free Web browsing.
34.

Fountoukidou S., Ham J., Matzat U., Midden C., Using an Artificial Agent as a Behavior Model to Promote Assistive Technology Acceptance, In: Ham J., Karapanos E., Morita P., Burns C. (eds) Persuasive Technology. PERSUASIVE 2018. Lecture Notes in Computer Science, vol 10809. Springer, Cham

Abstract Despite technological advancements in assistive technologies, studies show high rates of non-use. Because of the rising numbers of people with disabilities, it is important to develop strategies to increase assistive technology acceptance. The current research investigated the use of an artificial agent (embedded into a system) as a persuasive behavior model to influence individuals’ technology acceptance beliefs. Specifically, we examined the effect of agent-delivered behavior modeling vs. two non-modeling instructional methods (agent-delivered instructional narration and no agent, text-only instruction) on individuals’ computer self-efficacy and perceived ease of use of an assistive technology. Overall, the results of the study confirmed our hypotheses, showing that the use of an artificial agent as a behavioral model leads to increased computer self-efficacy and perceived ease of use of a system. The implications for the inclusion of an artificial agent as a model in promoting technology acceptance are discussed.
35.

Vangelis P. Oikonomou, Spiros Nikolopoulos, Panagiotis Petrantonakis, Ioannis Kompatsiaris, Sparse Kernel Machines for motor imagery EEG classification, Proceedings of the 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC'18), Honolulu, HI, USA, July 17-21, 2018 (to appear)

Abstract Brain-computer interfaces (BCIs) make human-computer interaction more natural, especially for people with neuro-muscular disabilities. Among various data acquisition modalities, electroencephalograms (EEG) occupy the most prominent place due to their non-invasiveness. In this work, a method based on sparse kernel machines is proposed for the classification of motor imagery (MI) EEG data. More specifically, a new sparse prior is proposed for the selection of the most important information, and the estimation of model parameters is performed using the Bayesian framework. The experimental results obtained on a benchmark EEG dataset for MI have shown that the proposed method compares favorably with state-of-the-art approaches in the BCI literature.
36.

Panagiotis C. Petrantonakis and Ioannis Kompatsiaris, Detection of Mental Task Related Activity in NIRS-BCI systems Using Dirichlet Energy over Graphs, Proceedings of the 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC'18), Honolulu, HI, USA, July 17-21, 2018 (to appear)

Abstract Near Infrared Spectroscopy (NIRS)-based Brain Computer Interfaces (NIRS-BCI) rely mainly on the mean concentration changes and slope of the hemodynamic responses in separate recording channels to detect mental-task-related brain activity. Nevertheless, spatial patterns across the measurement channels are also present and should be taken into account for reliable detection. In this work, the Dirichlet energy of NIRS signals over a graph is considered for the definition of a measure that takes the spatial NIRS features into account and integrates the activity of multiple NIRS channels for robust detection of mental-task-related activity. The application of the proposed measure to a real NIRS dataset demonstrates its efficiency.
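The Dirichlet energy of a graph signal x with combinatorial Laplacian L = D - W is the quadratic form x^T L x, which equals the weighted sum of squared differences across graph edges, so it is small for signals that vary smoothly over the channel graph. A minimal numpy sketch (the toy channel graph and signal values below are ours, not the paper's):

```python
import numpy as np

def dirichlet_energy(W, x):
    """Dirichlet energy x^T L x of a signal x on a graph with adjacency W.

    W: symmetric (n, n) non-negative edge-weight matrix
    x: (n,) signal sample, e.g. one NIRS value per recording channel
    """
    L = np.diag(W.sum(axis=1)) - W       # combinatorial Laplacian L = D - W
    return float(x @ L @ x)              # = 0.5 * sum_ij W_ij * (x_i - x_j)^2

# Toy 3-channel path graph: ch0 -- ch1 -- ch2, unit edge weights
W = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
x = np.array([1., 2., 4.])
print(dirichlet_energy(W, x))  # (1-2)^2 + (2-4)^2 = 5.0
```

A channel-wise measure of this kind responds to the spatial pattern of activation rather than to any single channel's amplitude, which is the motivation the abstract describes.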
37.

P. C. Petrantonakis and I. Kompatsiaris, Single-trial NIRS data classification for brain-computer interfaces using graph signal processing, IEEE Transactions on Neural Systems and Rehabilitation Engineering, 2018 (to appear).

Abstract Near Infrared Spectroscopy (NIRS)-based Brain Computer Interface (BCI) systems use feature extraction methods relying mainly on the slope characteristics and mean changes of the hemodynamic responses with respect to certain mental tasks. Nevertheless, spatial patterns across the measurement channels have been detected and should be considered during the feature-vector extraction stage of the BCI realization. In this work, a Graph Signal Processing (GSP) approach to feature extraction is adopted in order to capture the aforementioned spatial information of the NIRS signals. The proposed GSP-based methodology for feature extraction in NIRS-based BCI systems, namely GNIRS, is applied to a publicly available dataset of NIRS recordings during a mental arithmetic task. GNIRS exhibits higher classification rates, up to 92.52%, compared to the classification rates of two state-of-the-art feature extraction methodologies related to slope and mean values of the hemodynamic response, i.e., 90.35% and 82.60%, respectively. In addition, GNIRS leads to the formation of feature vectors with reduced dimensionality in comparison with the baseline approaches. Moreover, it is shown to facilitate high classification rates even from the first second after the onset of the mental task, paving the way for faster NIRS-based BCI systems.
38.

Fotis Kalaganis, Elisavet Chatzilari, Spiros Nikolopoulos, Ioannis Kompatsiaris, and Nikos Laskaris, An error-aware gaze-based keyboard by means of a hybrid BCI system, Scientific Reports (under major revision)

Abstract Gaze-based keyboards offer a flexible way for human-computer interaction in both disabled and able-bodied people. Despite their convenience, they still lead to error-prone human-computer interaction. Eye tracking devices may misinterpret the user’s gaze, resulting in typesetting errors, especially when operated in fast mode. As a potential remedy, we present a novel error detection system that aggregates the decisions of two distinct subsystems, each dealing with a disparate data stream. The first subsystem operates on gaze-related measurements and exploits the eye-transition pattern to flag a typo. The second one is a brain-computer interface that utilizes a neural response, known as Error-Related Potentials (ErrP), which is inherently generated when the subject observes an erroneous action. By means of a suitable experimental set-up, we first demonstrate that ErrP-based Brain Computer Interfaces can indeed be useful in the context of gaze-based typesetting, despite the putative contamination of EEG activity by the eye-movement artefact. Then, we show that the performance of this subsystem can be further improved by also considering the error detection from the gaze-related subsystem. Finally, the proposed bimodal error detection system is shown to significantly reduce the typesetting time in a gaze-based keyboard.
39.

Vangelis Oikonomou, Spiros Nikolopoulos and Ioannis Kompatsiaris, A Bayesian Multiple Kernel Learning Algorithm for SSVEP BCI detection, IEEE Journal of Biomedical and Health Informatics (under major revision)

Abstract Our work deals with the classification of Steady State Visual Evoked Potentials (SSVEP), which is a multiclass classification problem that arises frequently in the field of Brain Computer Interfaces (BCI). In particular, our method, named MultiLRM, uses multiple linear regression models under a Sparse Bayesian Learning (SBL) framework to discriminate between the SSVEP classes. The regression coefficients of each model are learned using the Variational Bayesian Framework and the kernel trick is adopted not only for reducing the computational cost of our method, but also for enabling the combination of different kernel spaces. We verify the ability of our method to handle different kernel spaces by evaluating its performance with a new kernel based on Canonical Correlation Analysis (CCA) and prove the benefit of combining multiple kernels by outperforming several state-of-the-art methods in two SSVEP datasets.
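The Canonical Correlation Analysis (CCA) underlying the kernel mentioned above is, on its own, the standard baseline for SSVEP detection: the EEG segment is correlated against sine/cosine reference signals at each candidate flicker frequency, and the frequency with the highest canonical correlation wins. A minimal sketch under that standard scheme (function names and the toy trial are ours; this is not the MultiLRM method itself):

```python
import numpy as np

def canonical_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y."""
    Qx, _ = np.linalg.qr(X - X.mean(axis=0))   # orthonormal basis of centered X
    Qy, _ = np.linalg.qr(Y - Y.mean(axis=0))
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def ssvep_cca_detect(eeg, freqs, fs, n_harmonics=2):
    """Pick the stimulus frequency whose sin/cos references best match the EEG.

    eeg:   (samples, channels) single-trial segment
    freqs: candidate flicker frequencies in Hz
    fs:    sampling rate in Hz
    """
    t = np.arange(eeg.shape[0]) / fs
    scores = []
    for f in freqs:
        # reference set: sin/cos at the fundamental and its harmonics
        refs = np.column_stack(
            [fn(2 * np.pi * h * f * t)
             for h in range(1, n_harmonics + 1)
             for fn in (np.sin, np.cos)])
        scores.append(canonical_corr(eeg, refs))
    return freqs[int(np.argmax(scores))]

# Toy trial: a clean 10 Hz oscillation on 2 channels, candidates 10 vs 12 Hz
fs = 250
t = np.arange(250) / fs
eeg = np.column_stack([np.sin(2 * np.pi * 10 * t),
                       np.cos(2 * np.pi * 10 * t)])
print(ssvep_cca_detect(eeg, [10.0, 12.0], fs))  # -> 10.0
```

Replacing the raw correlation with a kernel built from these canonical correlations, and combining it with other kernels under a Bayesian weighting, is the direction the abstract describes.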
40.

Konstantinos Georgiadis, Nikos Laskaris, Spiros Nikolopoulos and Ioannis Kompatsiaris, Exploiting the heightened phase synchrony in patients with neuromuscular disease for the establishment of efficient motor imagery BCIs, Journal of NeuroEngineering and Rehabilitation (submitted)

Abstract Background: Phase synchrony has extensively been studied for understanding neural coordination in health and disease. There are a few studies concerning the implications in the context of BCIs, but its potential for establishing a communication channel in patients suffering from neuromuscular disorders remains totally unexplored. We investigate, here, this possibility by estimating the time-resolved phase connectivity patterns induced during a motor imagery (MI) task and adopting a supervised learning scheme to recover the subject's intention from the streaming data. Methods: Electroencephalographic activity from six patients suffering from neuromuscular disease (NMD) and six healthy individuals was recorded during two randomly alternating, externally cued, MI tasks (clenching either the left or the right fist) and a rest condition. The metric of phase locking value (PLV) was used to describe the functional coupling between all recording sites. The functional connectivity patterns and the associated network organization were first compared between the two cohorts. Next, working at the level of individual patients, we trained support vector machines (SVMs) to discriminate between "left" and "right" based on different instantiations of connectivity patterns (depending on the encountered brain rhythm and the temporal interval). Finally, we designed and realized a novel brain decoding scheme that could interpret the intention from streaming connectivity patterns, based on an ensemble of SVMs. Results: The group-level analysis revealed increased phase synchrony and richer network organization in patients. This trend was also seen in the performance of the employed classifiers. Time-resolved connectivity led to superior performance, with distinct SVMs acting as local experts, specialized in the patterning emerging within specific temporal windows (defined with respect to the external trigger).
This empirical finding was further exploited in implementing a decoding scheme that can be activated without the need for the precise timing of a trigger. Conclusion: The increased phase synchrony in NMD patients can turn into a valuable tool for MI decoding. Considering the fast computation of the PLV pattern in multichannel signals, we can envision the development of efficient personalized BCI systems to assist these patients.
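The phase locking value used above measures how consistent the phase difference between two channels stays over time, PLV = |mean_t exp(i(φ1(t) − φ2(t)))|: it is 1 for a perfectly constant phase lag and near 0 for an unrelated one. A minimal sketch on toy phase series (in practice the instantaneous phases would come from a Hilbert or wavelet transform of band-passed EEG):

```python
import numpy as np

def plv(phase_a, phase_b):
    """Phase locking value between two instantaneous-phase series (radians).

    Returns 1.0 for a perfectly constant phase difference, and ~0.0 when
    the phase difference is spread uniformly over the circle.
    """
    return float(np.abs(np.exp(1j * (phase_a - phase_b)).mean()))

n = 1000
base = np.linspace(0, 20 * np.pi, n, endpoint=False)       # reference phases
locked = base + 0.5                                        # constant phase lag
drifting = base + np.linspace(0, 2 * np.pi, n, endpoint=False)  # uniform drift
print(round(plv(base, locked), 3))    # -> 1.0
print(round(plv(base, drifting), 3))  # -> 0.0
```

Computing this value for every channel pair and sliding time window yields the time-resolved connectivity patterns that the classifiers in the abstract operate on.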
41.

Ghazali, A., Ham, J., Barakova, E., & Markopoulos, P., The Influence of Social Cues in Persuasive Social Robots on Psychological Reactance and Compliance, Computers in Human Behavior, 87, 58-65, 2018

Abstract People can react negatively to persuasive attempts experiencing reactance, which gives rise to negative feelings and thoughts and may reduce compliance. This research examines social responses towards persuasive social agents. We present a laboratory experiment which assessed reactance and compliance to persuasive attempts delivered by an artificial (non-robotic) social agent, a social robot with minimal social cues (human-like face with speech output and blinking eyes), and a social robot with enhanced social cues (human-like face with head movement, facial expression, affective intonation of speech output). Our results suggest that a social robot presenting more social cues will cause higher reactance and this effect is stronger when the user feels involved in the task at hand.
42.

Ghazali, A., Ham, J., Barakova, E., & Markopoulos, P., Effects of robot facial characteristics and gender in persuasive human-robot interaction, Frontiers in Robotics and AI. doi: 10.3389/frobt.2018.00073, 2018.

Abstract The growing interest in social robotics makes it relevant to examine the potential of robots as persuasive agents and, more specifically, to examine how robot characteristics influence the way people experience such interactions and comply with the persuasive attempts by robots. The purpose of this research is to identify how the (ostensible) gender and the facial characteristics of a robot influence the extent to which people trust it and the psychological reactance they experience from its persuasive attempts. This paper reports a laboratory study where SociBot™, a robot capable of displaying different faces and dynamic social cues, delivered persuasive messages to participants while playing a game. In-game choice behaviour was logged, and trust and reactance towards the advisor were measured using questionnaires. Results show that a robotic advisor with upturned eyebrows and lips (features that people tend to trust more in humans) is more persuasive, evokes more trust and less psychological reactance compared to one displaying eyebrows pointing down and lips curled downwards at the edges (facial characteristics typically not trusted in humans). Gender of the robot did not affect trust, but participants experienced higher psychological reactance when interacting with a robot of the opposite gender. Remarkably, mediation analysis showed that liking of the robot fully mediates the influence of facial characteristics on trusting beliefs and psychological reactance. Also, psychological reactance was a strong and reliable predictor of trusting beliefs but not of trusting behavior. These results suggest that robots which are intended to influence human behaviour should be designed to have facial characteristics we trust in humans and could be personalized to have the same gender as the user. Furthermore, personalization and adaptation techniques designed to make people like the robot more may help ensure they will also trust the robot.
43.

Raphael Menges, Chandan Kumar, Korok Sengupta and Steffen Staab, Enhancing User Experience for Eye Gaze Interaction: The Case of GazeTheWeb, ACM Transactions on Computer-Human Interaction, 2018 (in preparation)

Abstract Eye tracking systems have greatly improved in recent years, being a viable and affordable option as a communication channel, especially for people lacking fine motor skills. Using eye tracking as an active input channel is challenging due to accuracy and ambiguity issues, and therefore research in eye gaze interaction is mainly focused on better pointing and selection methods. However, these methods eventually need to be assimilated to enable end users to control application interfaces. A common approach to employing eye tracking for controlling application interfaces is to emulate mouse and keyboard functionality. We argue that the emulation approach incurs additional interaction and visual overhead for users, degrading the entire experience of gaze-controlled applications. Interaction by eye gaze is tedious and cognitively demanding for users, and it is imperative to keep the number of required input-action steps, the demand of visual search and the memory load low, to provide an enjoyable user experience with gaze-controlled applications for daily usage. We propose how this improvement can be achieved with knowledge about the interface semantics obtained through introspection, and describe an implementation in a Web browsing environment. We have developed a Web browser that extracts the interface semantics of Web pages and adapts both the browser interface and the interaction elements in Web pages for gaze control. In a controlled lab study with 20 participants, the system allowed the participants to accomplish browsing tasks significantly faster in comparison to the traditional method of emulation, and achieved higher usability ratings and lower workload for the participants. Moreover, we present a feasibility study with 18 motor-impaired users (alongside 18 able-bodied participants), which signifies the applicability of direct gaze-based interaction for the target user group.
44.

Lazarou Ioulietta, Adam Katerina, Georgiadis Kostas, Tsolaki Anthoula, Nikolopoulos Spiros, Kompatsiaris (Yiannis) Ioannis and Tsolaki Magda, Can a Novel High-Density EEG Approach Disentangle the Differences of Visual Event Related Potential (N170), Elicited by Negative Facial Stimuli, in People with Subjective Cognitive Impairment?, Journal of Alzheimer's Disease, 2018 (to appear)

Abstract Background: Studies on Subjective Cognitive Impairment (SCI) and neural activation report controversial results. Objective: To evaluate the ability to disentangle the differences of visual ERPs generated by facial stimuli (Anger and Fear), as well as the cognitive deterioration of Subjective Cognitive Impairment (SCI), Mild Cognitive Impairment (MCI) and Alzheimer’s Disease (AD) compared to Healthy Controls (HC), as measured by the N170 event-related potential (ERP) component. Method: Fifty-seven participants (N = 57) took part in this study. Images corresponding to two negative facial stimuli, “Anger” and “Fear”, were presented to 12 HC, 14 SCI, 17 MCI and 14 AD participants. EEG data were recorded using a 256-channel HD-EEG HydroCel system. Results: Results showed that the amplitude of N170 can contribute to distinguishing the SCI group, since statistically significant differences were observed between SCI and HC (p < 0.05), between MCI and HC (p < .001), and between AD and HC (p = .05) during the processing of facial stimuli. Despite the small sample size and the within-group variability observed for each cohort, there is clear evidence of the potential of using the amplitude of N170 to create a biomarker distinguishing SCI from the MCI, AD and HC groups. As expected, in the AD group the amplitude of the negative N170 peaks was far larger than in all the others; however, so was the variability observed in this group. Moreover, positive correlations were observed between specific neuropsychological tests and the amplitude of N170 in the case of “Fear”. Noticeable differences were also observed in the topographic distribution of the N170 amplitude, while localization analyses using sLORETA images confirmed the activation of superior, middle-temporal and frontal-lobe brain regions. Finally, in the case of “Fear”, SCI and HC demonstrated increased activation in the Orbital and Inferior Frontal Gyrus respectively, MCI in the Inferior Temporal Gyrus and AD in the Lingual Gyrus. Conclusion: These preliminary findings suggest that the amplitude of N170 elicited after negative facial stimuli could be modulated by the decline related to pathological cognitive aging and can contribute to distinguishing HC from SCI, MCI and AD.