MAMEM makes publicly available its second experimental dataset (EEG SSVEP Dataset II), recorded with the same subjects and EEG equipment as its first dataset (EEG SSVEP Dataset I). The second dataset was generated using a more challenging SSVEP-based protocol in which the visual stimuli are presented simultaneously. MAMEM’s processing toolbox (ssvep-eeg-processing-toolbox) can be used to process this dataset and to extract results comparable with those of the initial dataset.
More specifically, the dataset contains 256-channel EEG signals captured from 11 subjects executing an SSVEP-based experimental protocol. Five different frequencies (6.66, 7.50, 8.57, 10.00 and 12.00 Hz), presented simultaneously in a cross-layout arrangement, were used for the visual stimulation, and the EGI 300 Geodesic EEG System (GES 300), with a 256-channel HydroCel Geodesic Sensor Net (HCGSN) and a sampling rate of 250 Hz, was used for capturing the signals.
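For illustration only, a minimal loading sketch in Python is shown below. MAMEM datasets are typically distributed as MATLAB files; the file name and variable key used here are placeholders and may not match the actual release, so consult the accompanying report for the real naming conventions.

```python
from scipy.io import loadmat
import numpy as np

# Placeholder file name and variable key (hypothetical); check the dataset's
# accompanying report for the actual conventions used in the release.
mat = loadmat("subject01.mat")
eeg = np.asarray(mat["eeg"])   # expected: 256-channel recording sampled at 250 Hz
print(eeg.shape)
```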
Eleven volunteers participated in this study, all of them employees of the Centre for Research and Technology Hellas (CERTH) at the time of the experiment; 8 were male and 3 female, with ages ranging from 25 to 39 years. All were able-bodied subjects without any known neuro-muscular or mental disorders. The adult medium Geodesic Sensor Net (GSN) was applied to all but one of the subjects. The visual stimuli were projected on a 22’’ LCD monitor with a refresh rate of 60 Hz and a resolution of 1680×1080 pixels. The visual stimulation of the experiment was designed using OpenViBE. A graphics card (Nvidia GeForce GT 740) fast enough to render more frames than the screen can display was used, and the card’s “vertical synchronization” option was enabled to ensure that only whole frames are shown on screen.
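Because vertical synchronization locks stimulus updates to whole frames, each flickering frequency corresponds to an integer number of frames per cycle on the 60 Hz display. The short Python sketch below illustrates this relationship; the frame counts are inferred from the reported frequencies rather than stated explicitly in the protocol.

```python
# On a 60 Hz display, a box toggled every n frames flickers at 60 / n Hz.
# The frame counts below are inferred from the reported stimulation frequencies.
refresh_rate = 60.0
frames_per_cycle = [9, 8, 7, 6, 5]
frequencies = [round(refresh_rate / n, 2) for n in frames_per_cycle]
print(frequencies)  # [6.67, 7.5, 8.57, 10.0, 12.0] -- reported as 6.66, 7.50, 8.57, 10.00, 12.00 Hz
```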
High-dimensional EEG data were recorded with the EGI 300 Geodesic EEG System (GES 300), using a 256-channel HydroCel Geodesic Sensor Net (HCGSN) and a sampling rate of 250 Hz. The adult medium (56–58 cm) HCGSN was used. The contact impedance at each sensor was verified to be at most 40 kΩ before the start of each new session.
The synchronization of the stimulus with the recorded EEG signal was performed with the aid of the Cedrus StimTracker (model ST-100) and a light sensor attached to the monitor, which added markers (denoted hereafter as Dins) to the captured EEG signal. More specifically, the light sensor detected the onset of the visual stimuli with high precision and placed Dins on the EEG signal for as long as the visual stimuli flickered, thereby marking their duration. In the offline data processing, these Dins were then used to separate the raw signal into the part generated during the visual stimulation and the part generated during the resting period.
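As a rough illustration of this offline segmentation step, the Python sketch below slices a continuous recording into 5-second stimulation epochs given the sample indices of the first Din of each stimulation period. The variable names and the placeholder data are hypothetical and do not reflect the dataset’s actual field names.

```python
import numpy as np

fs = 250                 # sampling rate (Hz)
stim_samples = 5 * fs    # each stimulation period lasts 5 seconds

def extract_stimulation_epochs(raw, din_onsets):
    """Return an array of shape (n_trials, n_channels, stim_samples).

    raw        -- continuous recording, shape (n_channels, n_total_samples)
    din_onsets -- sample index of the first Din of each stimulation period
    """
    return np.stack([raw[:, s:s + stim_samples] for s in din_onsets])

# Placeholder data standing in for a real recording and its Din onsets.
raw = np.random.randn(256, 60 * fs)
din_onsets = [5 * fs, 15 * fs, 25 * fs]
epochs = extract_stimulation_epochs(raw, din_onsets)
print(epochs.shape)      # (3, 256, 1250)
```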
The stimuli of the experiment were five violet boxes presented simultaneously in a cross-layout arrangement, flickering at 5 different frequencies (6.66, 7.50, 8.57, 10.00 and 12.00 Hz). Each box flickered at a specific frequency, and all boxes were presented at the same time for 5 seconds, followed by 5 seconds without visual stimulation before the flickering boxes appeared again. Prior to the stimulation period, one of the boxes was marked by a yellow arrow identifying the box subjects had to focus on. The marking arrow remained visible during the trial, making it easier for the subjects to stay focused on the correct box for the trial’s whole length. The background color was black for the whole experiment.
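Given this protocol, the basic classification task is to decide which of the five frequencies a subject attended during each 5-second stimulation window. A widely used approach is canonical correlation analysis (CCA) against sinusoidal reference signals; the Python sketch below, built on scikit-learn, is a generic illustration of that idea and is not necessarily identical to the pipeline implemented in the ssvep-eeg-processing-toolbox. The random array stands in for a real stimulation epoch.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

FS = 250                                    # sampling rate (Hz)
FREQS = [6.66, 7.50, 8.57, 10.00, 12.00]    # stimulation frequencies (Hz)

def reference_signals(freq, n_samples, fs=FS, n_harmonics=2):
    """Sine/cosine references at the stimulation frequency and its harmonics."""
    t = np.arange(n_samples) / fs
    refs = []
    for h in range(1, n_harmonics + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    return np.column_stack(refs)

def cca_score(epoch, refs):
    """Largest canonical correlation between an EEG epoch and the references."""
    x_scores, y_scores = CCA(n_components=1).fit_transform(epoch, refs)
    return np.corrcoef(x_scores[:, 0], y_scores[:, 0])[0, 1]

# Placeholder epoch: (n_samples, n_channels) = (5 s x 250 Hz, 256 channels).
epoch = np.random.randn(5 * FS, 256)
scores = [cca_score(epoch, reference_signals(f, epoch.shape[0])) for f in FREQS]
print("predicted frequency:", FREQS[int(np.argmax(scores))])
```

The attended frequency is taken to be the one whose reference signals correlate most strongly with the epoch; with real data this decision would be made per trial using the Din-based segmentation described above.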
Resources:
EEG SSVEP Dataset II along with the accompanying report.
ssvep-eeg-processing-toolbox that can be used to process this dataset and support experimentation.
Video demonstrating one trial.