Techniques for a Practical BCI System: Massive Multi-User, Visual Imagery BCI and EEG-Transformers
- Author(s)
- Sunghan Lee
- Type
- Thesis
- Degree
- Doctor
- Department
- Graduate School, School of Electrical Engineering and Computer Science
- Advisor
- Jun, Sung Chan
- Abstract
- Brain-computer interface (BCI) technology decodes human intentions from brain signals alone, enabling control of computers and machines, such as wheelchairs and exoskeletons, without external input equipment. BCI has the potential to expand communication and enhance other human functions, and it has therefore attracted attention as a promising technology for many years. However, it is still not widely used in daily life and remains confined to very limited settings, such as rehabilitation. Several obstacles keep BCI from wider adoption: the hassle of wearing brain-imaging equipment, the non-stationarity of electroencephalography (EEG) signals, a long and tedious training process, and a low signal-to-noise ratio. Although various attempts have been made to address these limitations, this thesis proposes three further approaches.

First, a framework was developed to measure wireless EEG from eight or more users simultaneously. This multi-user approach targets BCI's burdensome training requirement: the number of trials required for event-related potential (ERP) detection was greatly reduced with multiple users compared with a single user (the averaging argument behind this is sketched after the abstract). The framework should make it possible to design BCI experiments closer to real-world environments, a first step toward brain-to-brain and social-interaction research that extends the use of BCI.

Second, a new experimental paradigm for BCI based on visual imagery (VI) was proposed. The motor imagery (MI) paradigm is difficult to train and offers limited controllable degrees of freedom, while paradigms such as P300 and steady-state visual evoked potential require external stimulation and convey intentions only indirectly, one letter at a time. If VI can be classified, far more intuitive control becomes possible than with current BCIs. A VI experiment was performed on object, digit, and shape category images, and classification performance was compared across several EEG representations. Although the accuracy required for practical VI BCI was not reached, reasonable classification was confirmed for the object, digit, and shape categories, which suggests that VI can serve as a paradigm for intuitive BCI in the future.

Third, the applicability of transformer networks to the classification of four-class MI data was explored. Features were extracted with a time-varying filter-band common spatial pattern, and a vision-transformer-based classifier was introduced (a minimal sketch of the spatial-pattern feature step also follows the abstract). The proposed network achieved classification results comparable to those of recent studies reporting state-of-the-art performance. This study suggests that in EEG-based MI classification it is important not only to introduce an artificial intelligence (AI) classifier but also to use a feature-extraction technique suited to the paradigm.

Among the challenges that must be addressed before BCI can be used in practice, this dissertation presents solutions from three perspectives: reducing the training phase, offering a more intuitive experimental paradigm, and introducing AI models for effective feature extraction and classification. These approaches are meaningful in that they tackle the practical use of BCI from three directions and point toward its future development. It is hoped that the proposed methods will help BCI researchers who take similar approaches to overcoming these challenges.
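The multi-user claim above rests on a standard averaging argument: averaging n independent epochs reduces the noise in the grand average by roughly a factor of √n, whether those epochs come from one user or are pooled across eight recorded simultaneously. Below is a minimal sketch of that idea on synthetic data; the toy ERP shape, noise level, trial counts, and the crude peak-based detection score are all illustrative assumptions, not details from the thesis.

```python
# Illustrative only: synthetic "ERP" detection by grand-averaging epochs,
# comparing one simulated user against eight users recorded simultaneously.
import numpy as np

rng = np.random.default_rng(42)
n_samples = 200
erp = np.exp(-0.5 * ((np.arange(n_samples) - 100) / 10) ** 2)  # toy ERP-like bump

def epochs(n_trials, noise_sd=5.0):
    """Simulate n_trials noisy single-trial epochs containing the toy ERP."""
    return erp + rng.normal(0.0, noise_sd, size=(n_trials, n_samples))

def peak_snr(avg):
    """Crude detection score: peak amplitude over a baseline noise estimate."""
    baseline_sd = avg[:50].std()  # first 50 samples are (nearly) pure noise
    return avg.max() / baseline_sd

n_trials_per_user = 10
single_user = epochs(n_trials_per_user).mean(axis=0)
eight_users = epochs(8 * n_trials_per_user).mean(axis=0)  # pooled across users

print(f"single user ({n_trials_per_user} trials): SNR ~ {peak_snr(single_user):.1f}")
print(f"eight users ({8 * n_trials_per_user} trials): SNR ~ {peak_snr(eight_users):.1f}")
# With 8x the epochs, noise in the average drops by ~sqrt(8), so a given
# detection threshold is reached with far fewer trials per individual user.
```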
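For the third study, the abstract names a time-varying filter-band common spatial pattern (CSP) front end feeding a vision-transformer classifier. The transformer itself is too large to sketch usefully here, but the following two-class CSP sketch shows the kind of log-variance features such a classifier would consume. The channel and trial counts and the two-class simplification are assumptions for illustration; the thesis's actual four-class, time-varying filter-bank pipeline is more elaborate.

```python
# A minimal two-class CSP feature-extraction sketch, loosely in the spirit of
# the filter-bank CSP features described in the abstract. Shapes are assumed.
import numpy as np
from scipy.linalg import eigh

def csp_filters(X_a, X_b, n_components=4):
    """Compute CSP spatial filters from two classes of epoched EEG.

    X_a, X_b: arrays of shape (n_trials, n_channels, n_samples).
    Returns a (n_components, n_channels) spatial filter matrix.
    """
    def mean_cov(X):
        return np.mean([np.cov(trial) for trial in X], axis=0)

    C_a, C_b = mean_cov(X_a), mean_cov(X_b)
    # Generalized eigendecomposition: variance of class a relative to the
    # composite covariance; extreme eigenvalues are the most discriminative.
    eigvals, eigvecs = eigh(C_a, C_a + C_b)
    order = np.argsort(eigvals)
    picks = np.concatenate([order[: n_components // 2],
                            order[-(n_components // 2):]])
    return eigvecs[:, picks].T

def csp_features(X, W):
    """Normalized log-variance features of spatially filtered trials."""
    Z = np.einsum("fc,tcs->tfs", W, X)  # apply spatial filters to each trial
    var = Z.var(axis=2)
    return np.log(var / var.sum(axis=1, keepdims=True))

# Toy usage with random data standing in for band-pass-filtered EEG epochs.
rng = np.random.default_rng(0)
X_a = rng.standard_normal((30, 22, 500))  # 30 trials, 22 channels, 500 samples
X_b = rng.standard_normal((30, 22, 500))
W = csp_filters(X_a, X_b)
print(csp_features(X_a, W).shape)  # (30, 4): one feature vector per trial
```

In a filter-bank variant, this step is repeated per frequency band (and, in a time-varying scheme, per time window), and the concatenated feature vectors are what a downstream classifier, transformer-based or otherwise, is trained on.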
- URI
- https://scholar.gist.ac.kr/handle/local/19796
- Fulltext
- http://gist.dcollection.net/common/orgView/200000883143
- Access and License
- File List
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.