Notes
Brain-machine interfaces (BMIs) translate neural signals into digital commands to control external devices. During BMI use, neurons may change their activity in response to the same stimuli or movements. These changes are captured by neural tuning parameters, which may vary both gradually and abruptly. Adaptive algorithms have been proposed to estimate the time-varying parameters in order to keep decoding performance stable. Existing methods search for new parameters only locally and therefore fail to detect abrupt changes. Global search helps, but requires known boundaries on the estimated parameters, which are hard to define in many cases. We propose to estimate a neural modulation parameter by global search using adaptive point-process estimation. This modulation parameter represents the similarity between the kinematics and the neuron's preferred hyper tuning direction and has the finite range [0, 1]. The preferred hyper tuning direction is then decoupled from the modulation parameter by gradient descent. We apply the proposed method to real data to detect abrupt changes in neural tuning parameters when the subject switched from manual control to brain control mode. The proposed method tracks the neural hyper tuning parameters better than local search methods, as validated by the Kolmogorov-Smirnov (KS) statistical test.

Passive brain-computer interfaces (BCIs) covertly decode the cognitive and emotional states of users from neurophysiological signals. An important task for passive BCIs is monitoring the attentional state of the brain. Previous studies mainly focused on classifying attention levels, i.e., high vs. low, but few have investigated the classification of attention focus during speech perception. In this paper, we used electroencephalography (EEG) to recognize whether a subject's attention was focused on the call sign or on the number while listening to a short sentence.
Fifteen subjects participated in this study and were required to focus on either the call sign or the number in each listening task. A new algorithm was proposed to classify the EEG patterns of the different attention focuses, combining common spatial patterns (CSP), the short-time Fourier transform (STFT) and discriminative canonical pattern matching (DCPM). The accuracy reached an average of 78.38%, with a peak of 93.93%, for single-trial classification. These results demonstrate that the proposed algorithm is effective at classifying auditory attention focus during speech perception.

Task-related component analysis (TRCA) has been the most effective spatial filtering method for implementing high-speed brain-computer interfaces (BCIs) based on steady-state visual evoked potentials (SSVEPs). TRCA is a data-driven method in which spatial filters are optimized to maximize the inter-trial covariance of time-locked electroencephalographic (EEG) data, formulated as a generalized eigenvalue problem. Although TRCA yields multiple eigenvectors, traditional TRCA-based SSVEP detection considers only the one corresponding to the largest eigenvalue, to reduce computational cost. This study proposes using multiple eigenvectors to classify SSVEPs. Specifically, it integrates a task consistency test, which statistically identifies whether the component reconstructed by each eigenvector is task-related, into the TRCA-based SSVEP detection method. The proposed method was evaluated on a 12-class SSVEP dataset recorded from 10 subjects. The results indicated that the task consistency test usually identified more than one eigenvector (i.e., spatial filter).
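As a rough illustration of the generalized eigenvalue formulation behind TRCA, the sketch below computes spatial filters that maximize inter-trial covariance relative to total covariance. The function name and the exact covariance normalization are assumptions for illustration; a published TRCA implementation may differ in details.

```python
import numpy as np
from scipy.linalg import eigh

def trca_filters(trials):
    """trials: array (n_trials, n_channels, n_samples) for one SSVEP class.
    Returns spatial filters (columns), sorted by descending eigenvalue."""
    n_trials, n_ch, n_samp = trials.shape
    # Q: covariance of all trials concatenated in time
    concat = trials.transpose(1, 0, 2).reshape(n_ch, -1)
    concat = concat - concat.mean(axis=1, keepdims=True)
    Q = concat @ concat.T
    # S: sum of cross-trial covariances (the inter-trial term to maximize)
    centered = trials - trials.mean(axis=2, keepdims=True)
    S = np.zeros((n_ch, n_ch))
    for i in range(n_trials):
        for j in range(n_trials):
            if i != j:
                S += centered[i] @ centered[j].T
    # Generalized eigenvalue problem: S w = lambda Q w
    eigvals, eigvecs = eigh(S, Q)
    order = np.argsort(eigvals)[::-1]
    return eigvecs[:, order]
```

Using more than the first column of the returned matrix corresponds to the multiple-eigenvector strategy described above.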
Further, the use of additional spatial filters significantly improved the classification accuracy of TRCA-based SSVEP detection.

The goal of this study is to estimate the thermal impact of a titanium skull unit (SU) implanted on the exterior aspect of the human skull. We envision this unit housing the front-end of a fully implantable electrocorticogram (ECoG)-based bi-directional (BD) brain-computer interface (BCI). Starting from the bio-heat transfer equation with physiologically and anatomically constrained tissue parameters, we used the finite element method (FEM), implemented in COMSOL, to build a computational model of the SU's thermal impact. Based on our simulations, we predict that the SU can consume up to 75 mW of power without raising the temperature of surrounding tissues above the safe limit (a temperature increase of 1°C). This power budget far exceeds the power consumption of our front-end prototypes, suggesting that the design can sustain the SU's ability to record ECoG signals and deliver cortical stimulation. These predictions will be used to refine the existing SU design and inform the design of future SU prototypes.

Electroencephalogram (EEG)-based brain-computer interfaces (BCIs) enable communication by interpreting user intent from measured brain electrical activity. This interpretation is usually performed by supervised classifiers constructed in training sessions. However, changes in the user's cognitive state, such as alertness and vigilance, during test sessions lead to variations in EEG patterns, causing classification performance to decline. This research focuses on the effects of alertness on the performance of motor imagery (MI) BCI, a common mental control paradigm, and proposes a new protocol for predicting MI performance decline from alertness-related pre-trial spatio-spectral EEG features.
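Pre-trial spatio-spectral features of the kind mentioned above are commonly built from per-channel band power. A minimal sketch, assuming Welch power spectral density and hypothetical band definitions (the study's exact feature set is not specified here):

```python
import numpy as np
from scipy.signal import welch

# Hypothetical band definitions for illustration only
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def pretrial_band_powers(eeg, fs):
    """eeg: (n_channels, n_samples) pre-trial window.
    Returns log band power for every (band, channel) pair."""
    freqs, psd = welch(eeg, fs=fs, nperseg=min(eeg.shape[1], 256), axis=1)
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(np.log(psd[:, mask].mean(axis=1)))
    return np.concatenate(feats)
```

A feature vector like this, computed just before each trial, could feed a regressor or classifier that flags likely MI performance decline.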
The proposed protocol can be used for adapting the classifier or restoring alertness based on the user's cognitive state during BCI applications.

This study reports the performance of Parkinson's disease (PD) patients operating a motor imagery based brain-computer interface (MI-BCI) and compares three selected pre-processing and classification approaches. The experiment was conducted on 7 PD patients who performed a total of 14 MI-BCI sessions targeting the lower extremities. EEG was recorded during the initial calibration phase of each session, and subject-specific BCI models were produced using spectrally weighted common spatial patterns (SpecCSP), source power comodulation (SPoC) and filter-bank common spatial patterns (FBCSP). The results showed that FBCSP outperformed SPoC in terms of accuracy, and outperformed both SPoC and SpecCSP in terms of false-positive ratio. The study also demonstrates that PD patients were capable of operating the MI-BCI, although with lower accuracy.

To explore the effect of low-frequency stimulation on pupil size and electroencephalogram (EEG), we presented subjects with 1-6 Hz black-and-white alternating flickering stimuli and compared the signal-to-noise ratio (SNR) and classification performance of pupil size against visual evoked potentials (VEPs). The results showed that the SNR of the pupillary response was highest at 1 Hz (17.19 ± 0.10 dB), where 100% accuracy was obtained with a 1 s data length, while performance was poor at stimulation frequencies above 3 Hz. In contrast, the SNR of the VEPs was highest at 6 Hz (18.57 ± 0.37 dB), and the accuracy at all stimulus frequencies could reach 100% with a minimum data length of 1.5 s.
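The narrowband SNR figures quoted above can be illustrated with a common spectral definition: power at the stimulation frequency relative to the mean power of neighboring FFT bins. This is a generic sketch; the study's exact SNR definition may differ.

```python
import numpy as np

def narrowband_snr_db(signal, fs, f_target, n_neighbors=10):
    """SNR (dB) of a periodic response: spectral power at f_target
    divided by the mean power of the surrounding frequency bins."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    k = int(np.argmin(np.abs(freqs - f_target)))          # target bin
    lo, hi = max(k - n_neighbors, 0), min(k + n_neighbors + 1, len(spectrum))
    neighbors = np.concatenate([spectrum[lo:k], spectrum[k + 1:hi]])
    return 10.0 * np.log10(spectrum[k] / neighbors.mean())
```

Applied to a pupil-diameter trace at 1 Hz or a VEP channel at 6 Hz, this yields the kind of dB values compared above.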
This study lays a theoretical foundation for the implementation of a hybrid brain-computer interface (BCI) that integrates pupillometry and EEG.

Studies have shown the possibility of using brain signals, automatically generated while observing a navigation task, as feedback for semi-autonomous control of a robot, allowing the robot to learn quasi-optimal routes to intended targets. We combined the subclassification of two types of navigational errors with the subclassification of two types of correct navigational actions to create a four-way classification strategy that provides detailed information about the type of action the robot performed. We used a two-stage stepwise linear discriminant analysis approach and tested it on brain signals from 8 and 14 participants observing two robot navigation tasks. Classification results were significantly above chance level, with mean overall accuracies of 44.3% and 36.0% for the two datasets. As a proof of concept, we have shown that fine-grained, four-way classification of robot navigational actions is possible based solely on the electroencephalogram responses of participants who only had to observe the task. This study provides a next step towards comprehensive implicit brain-machine communication and an efficient semi-autonomous brain-computer interface.

In the design of brain-machine interfaces (BMIs), as the number of electrodes available for collecting neural spike signals slowly declines over time, it is important to be able to decode with fewer units. We trained a monkey to control a cursor smoothly in a two-dimensional (2D) center-out task using spiking activity from only two units (direct units). At the same time, we studied how the direct units changed their tuning to preferred direction during BMI training, and explored the underlying mechanism by which the monkey learned to control the cursor with its neural signals.
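Preferred-direction tuning of the kind tracked in this experiment is classically described by a cosine tuning model, rate ≈ b0 + b1·cos(θ − θ_pd). A minimal least-squares fit is sketched below; the function name and noise assumptions are illustrative, not the paper's method.

```python
import numpy as np

def fit_preferred_direction(rates, angles):
    """Fit the cosine tuning model rate = b0 + a*cos(theta) + b*sin(theta),
    which is equivalent to b0 + depth*cos(theta - theta_pd)."""
    X = np.column_stack([np.ones_like(angles), np.cos(angles), np.sin(angles)])
    b0, a, b = np.linalg.lstsq(X, rates, rcond=None)[0]
    theta_pd = np.arctan2(b, a)   # preferred direction (radians)
    depth = np.hypot(a, b)        # modulation depth
    return theta_pd, depth
```

Refitting this model over successive training sessions is one simple way to track how each direct unit's preferred direction drifts, as described above.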
In this study, we observed that both direct units slowly changed their preferred directions during BMI learning. Although the initial angles between the preferred directions of the three pairs of units differed, the angle between the preferred directions approached 90 degrees by the end of training. Our results imply that BMI learning made the two units independent of each other. To our knowledge, this is the first demonstration that only two units can be used to control 2D cursor movements. Moreover, the orthogonalization of the two units' activities driven by BMI learning implies that the plasticity of the motor cortex can provide an efficient strategy for motor control.

The success of deep learning (DL) methods for classifying electroencephalographic (EEG) recordings in the brain-computer interface (BCI) field has been restricted by the lack of large datasets. Privacy concerns associated with EEG signals limit the possibility of constructing a large EEG-BCI dataset by combining multiple small ones to jointly train machine learning models. Hence, in this paper we propose a novel privacy-preserving DL architecture for EEG classification, named federated transfer learning (FTL), based on the federated learning framework. Working with single-trial covariance matrices, the proposed architecture extracts common discriminative information from multi-subject EEG data with the help of domain adaptation techniques. We evaluate the proposed architecture on the PhysioNet dataset for two-class motor imagery classification. While avoiding actual data sharing, our FTL approach achieves 2% higher classification accuracy in a subject-adaptive analysis.
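The federated principle underlying FTL, sharing model updates rather than raw EEG, can be sketched with a generic federated-averaging step. This is plain FedAvg under stated assumptions, not the paper's full FTL architecture:

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """One aggregation round: each site trains locally on its private EEG,
    and only model weights leave the site. The server averages the weight
    lists, weighted by each site's local dataset size.

    client_weights: list of per-client weight lists (arrays of equal shapes)
    client_sizes:   list of per-client sample counts
    """
    total = float(sum(client_sizes))
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(len(client_weights[0]))
    ]
```

In an EEG setting, each "client" would be one subject or recording site; the raw trials (and hence the privacy-sensitive signals) never leave the client.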
Also, in the absence of multi-subject data, our architecture provides 6% better accuracy compared to other state-of-the-art DL architectures.

The concept of 'presence' in the context of virtual reality (VR) refers to the experience of being in the virtual environment even when one is physically situated in the real world. It is therefore a key parameter for assessing a VR system and guiding improvements to it. To overcome the limitations of existing methods based on standard questionnaires and behavioral analysis, this study investigates the suitability of users' biosignals for deriving an objective measure of presence. The approach includes experiments on 20 users, recording EEG, ECG and electrodermal activity (EDA) signals while they experienced custom-designed VR scenarios with factors contributing to presence suppressed and unsuppressed. Mutual-information-based feature selection and subsequent paired t-tests, used to identify significant variations in biosignal features when each factor of presence was suppressed, revealed significant (p < 0.05) differences in the mean values of EEG signal power and coherence within the alpha, beta and gamma bands, distributed in specific regions of the brain.
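The analysis pipeline described above, mutual-information-based feature selection followed by paired t-tests, can be sketched as below. The histogram MI estimator is a crude stand-in, and the feature set is synthetic; the study's actual estimator and features are not specified here.

```python
import numpy as np
from scipy.stats import ttest_rel

def histogram_mi(x, y, bins=8):
    """Crude histogram estimate of mutual information between a
    biosignal feature x and a condition label y."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    nz = p_xy > 0
    return float((p_xy[nz] * np.log(p_xy[nz] / (p_x @ p_y)[nz])).sum())

def select_then_test(features_a, features_b, top_k=3):
    """Rank features by MI with the suppressed/unsuppressed condition,
    then run paired t-tests on the top-ranked features.
    features_a, features_b: (n_subjects, n_features), paired by subject."""
    x = np.vstack([features_a, features_b])
    y = np.r_[np.zeros(len(features_a)), np.ones(len(features_b))]
    mi = np.array([histogram_mi(x[:, j], y) for j in range(x.shape[1])])
    top = np.argsort(mi)[::-1][:top_k]
    return [(int(j), ttest_rel(features_a[:, j], features_b[:, j]).pvalue)
            for j in top]
```

Here features_a and features_b would hold, per subject, quantities such as band power or coherence under the suppressed and unsuppressed conditions.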