MEG videos - Data Analysis
Alexandre Gramfort: Decoding MEG data (demo)
A multivariate non-parametric approach is demonstrated, based on the following idea: if there is an effect in the data, we should be able to learn it from a fraction of that data and then make predictions on independent, unseen data. MVPA/decoding benefits MEG analysis by leveraging information from all channels and time points, increasing the available evidence (this is also the basis of cluster-level analysis, which improves statistical power by pooling over neighbouring time points and frequencies).
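The learn-from-a-fraction, predict-on-held-out-data idea can be sketched in a few lines. The example below is a minimal illustration on synthetic data (not the actual demo code): a simple nearest-centroid classifier, with channels × time points flattened into one feature vector per trial so all sensors contribute evidence.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 100, 20, 50

# Synthetic two-class data: class 1 carries a small extra response
# on a few channels around mid-trial (an entirely made-up effect).
X = rng.normal(size=(n_trials, n_channels, n_times))
y = np.repeat([0, 1], n_trials // 2)
X[y == 1, :5, 20:30] += 0.8

# Flatten channels x time points into one feature vector per trial,
# so the classifier can pool evidence over all sensors and time points.
X = X.reshape(n_trials, -1)

# Learn from one half of the trials, predict the held-out half.
order = rng.permutation(n_trials)
train, test = order[:n_trials // 2], order[n_trials // 2:]
centroids = np.stack([X[train][y[train] == c].mean(axis=0) for c in (0, 1)])
dists = np.linalg.norm(X[test][:, None, :] - centroids[None, :, :], axis=2)
accuracy = np.mean(dists.argmin(axis=1) == y[test])
print(f"held-out accuracy: {accuracy:.2f}")   # chance level is 0.5
```

With a real effect present, the held-out accuracy ends up well above the 0.5 chance level; shuffling the labels would bring it back to chance.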
1. Source estimation current dipole models in MEG
It is a physical fact that MEG/EEG signals are linearly related to the amplitude of the underlying currents. After an introduction to MEG and EEG source estimation, this video describes the “inverse problem” (methods, restrictions, ambiguity, common features, terminology): based on the measured data, we arrive at an approximation of the currents in the brain. Current dipole models are also addressed (the effect of source extent, finding the best-fitting dipole, etc.), along with their caveats, challenges and solving strategies.
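Because there are far more candidate sources than sensors, the linear relation between currents and measurements cannot be inverted uniquely. A minimal sketch of one common resolution, the regularized minimum-norm estimate, is shown below; the leadfield G here is random, standing in for a real forward model.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sensors, n_sources = 60, 500        # many more sources than sensors

# Hypothetical leadfield: in practice it comes from a forward model
# (head geometry, conductivities); here it is random for illustration.
G = rng.normal(size=(n_sensors, n_sources))

# Measurements generated by a single active source, plus sensor noise.
x_true = np.zeros(n_sources)
x_true[123] = 1.0
b = G @ x_true + 0.05 * rng.normal(size=n_sensors)

# Regularized minimum-norm estimate: among all current distributions
# consistent with the data, pick the one with the smallest L2 norm.
lam = 1.0
x_hat = G.T @ np.linalg.solve(G @ G.T + lam * np.eye(n_sensors), b)

# The estimate is spatially blurred but peaks at the true source.
print("estimated peak:", np.argmax(np.abs(x_hat)))
```

The regularization parameter `lam` trades off fitting the data against suppressing noise; its value here is arbitrary.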
2. Frequency analysis in Neuroscience
The video describes how oscillations reveal aspects of brain function. “Classic” papers are presented to show early observations and then to explain: the analysis of spatiotemporal rhythms, microscopic/macroscopic neurophysiology, age-related changes in mu rhythms, and the time course of the phase-locking value (PLV). The video then covers how to extract activity with MNE (minimum-norm estimates) and a GLM (general linear model), and draws conclusions about connectivity and the correlation between neurophysiological and behavioural data.
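As an illustration of the phase-locking value mentioned above, the sketch below computes a PLV across synthetic trials: each trial's phase at the frequency of interest is estimated by complex demodulation with a Gaussian taper (a Morlet-wavelet-style estimate; all parameters are arbitrary demo choices).

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials, sfreq, freq = 200, 250.0, 10.0     # arbitrary demo parameters
t = np.arange(0, 1.0, 1.0 / sfreq)

# Synthetic trials: a 10 Hz rhythm whose phase is (imperfectly) locked
# to trial onset, buried in noise.
jitter = 0.5 * rng.normal(size=n_trials)     # phase jitter in radians
trials = np.array([np.cos(2 * np.pi * freq * t + pj) for pj in jitter])
trials += rng.normal(size=trials.shape)

# Complex demodulation at 10 Hz with a Gaussian taper centred on the
# trial: one complex coefficient (amplitude and phase) per trial.
taper = np.exp(-0.5 * ((t - 0.5) / 0.1) ** 2)
kernel = taper * np.exp(-2j * np.pi * freq * t)
coefs = trials @ kernel

# PLV: length of the mean unit phase vector; 1 = perfect locking,
# near 0 = phases uniformly spread across trials.
plv = np.abs(np.mean(coefs / np.abs(coefs)))
print(f"PLV: {plv:.2f}")
```

With phase-locked trials the PLV comes out close to 1; drawing each trial's phase uniformly at random would drive it towards 0.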
1. Beamforming EEG and MEG data
Beamforming is a source reconstruction technique: a scanning method in which each point is estimated independently. It performs inverse modelling by spatial filtering, can be applied in both the time and the frequency domain, and makes almost no a priori assumptions about the sources. This video conveys the methods behind the beamformer, its ingredients (forward model, experimental data, etc.) and the way it is implemented.
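The core of an LCMV-style beamformer is the spatial filter w = C⁻¹g / (gᵀC⁻¹g), which passes activity from the scanned location with unit gain while minimising total output variance. A minimal sketch on synthetic data, with random vectors standing in for the forward fields of two candidate locations:

```python
import numpy as np

rng = np.random.default_rng(3)
n_sensors, n_times = 30, 1000

# Random stand-ins for the forward fields of two candidate locations.
g_active = rng.normal(size=n_sensors)
g_silent = rng.normal(size=n_sensors)

# Sensor data: the active source oscillates; everything else is noise.
s = np.sin(2 * np.pi * np.arange(n_times) / 50.0)
data = np.outer(g_active, s) + 0.5 * rng.normal(size=(n_sensors, n_times))

C = np.cov(data)                       # sensor covariance
Cinv = np.linalg.inv(C)

def lcmv_power(g):
    """Output variance of the unit-gain, minimum-variance filter for g."""
    w = Cinv @ g / (g @ Cinv @ g)
    return np.var(w @ data)

# Scanning reconstructs far more power at the truly active location.
print(lcmv_power(g_active) > lcmv_power(g_silent))
```

Scanning `lcmv_power` over a whole grid of candidate forward fields is what turns this filter into a source-localisation method.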
2. Connectivity Analysis EEG/MEG
3. Forward modelling EEG/MEG
4. Frequency analysis
5. Introduction to FieldTrip
6. Real time MEG analysis
Connectivity analysis goes beyond univariate analysis and looks at the networks between different sources; in MEG this requires computing connectivity at the source level (i.e. after beamforming). Here we explain different measures of connectivity, both effective and functional. Concepts such as phase and oscillation are introduced, along with measures of frequency-domain connectivity (phase-lag index, phase-locking value, etc.), the key term “coherence”, and linear prediction (autoregressive models, Granger causality).
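Among the frequency-domain measures mentioned, coherence is the magnitude-squared, normalised cross-spectrum, averaged over segments or trials. A self-contained sketch on two synthetic channels that share a rhythm at a single frequency bin:

```python
import numpy as np

rng = np.random.default_rng(4)
n_segments, n = 100, 256
f_bin = 32                                   # shared rhythm at FFT bin 32

sxx = np.zeros(n // 2 + 1)
syy = np.zeros(n // 2 + 1)
sxy = np.zeros(n // 2 + 1, dtype=complex)

tt = np.arange(n)
for _ in range(n_segments):
    # Shared oscillation with a random per-segment phase and a fixed lag
    # between the two channels, plus independent noise in each channel.
    phase = rng.uniform(0, 2 * np.pi)
    x = np.cos(2 * np.pi * f_bin * tt / n + phase) + rng.normal(size=n)
    y = np.cos(2 * np.pi * f_bin * tt / n + phase + 1.0) + rng.normal(size=n)
    X, Y = np.fft.rfft(x), np.fft.rfft(y)
    sxx += np.abs(X) ** 2
    syy += np.abs(Y) ** 2
    sxy += X * np.conj(Y)

# Magnitude-squared coherence: near 1 where the channels share a rhythm
# with a consistent phase relation, near 0 elsewhere.
coherence = np.abs(sxy) ** 2 / (sxx * syy)
print("peak coherence at bin:", np.argmax(coherence))
```

Note that the averaging over segments is essential: with a single segment the coherence estimate is identically 1 at every frequency.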
The video starts with some motivation and background. It then goes into forward modelling, needed for interpreting scalp topography as “source estimation”, covering its main aspects: the source model, the volume conduction model (analytical with a spherical model, or numerical with a realistic model) and a comparison of forward modelling between MEG and EEG. Finally, inverse modelling is addressed: single and multiple dipole fitting, distributed source models and spatial filters, with different mathematical approaches.
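Single dipole fitting can be illustrated as a scan over candidate locations: at each grid point, fit the dipole amplitude by least squares and keep the location with the smallest residual. The gain matrix below is random, standing in for a real forward model:

```python
import numpy as np

rng = np.random.default_rng(5)
n_sensors, n_grid = 40, 300

# Hypothetical gain vector per candidate grid point (random stand-in
# for the columns of a real forward model).
G = rng.normal(size=(n_sensors, n_grid))
b = 2.0 * G[:, 77] + 0.1 * rng.normal(size=n_sensors)   # one active dipole

# For each location, the least-squares amplitude and its residual;
# the best-fitting dipole is the location minimising the residual.
amps = (G.T @ b) / np.sum(G ** 2, axis=0)
residuals = np.sum((b[:, None] - G * amps) ** 2, axis=0)
print("best-fitting grid point:", np.argmin(residuals))
```

A real dipole fit also optimises orientation (three gain columns per location) and typically refines the grid result with a nonlinear search, which this sketch omits.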
This video is on analysing oscillatory sources, a temporal aspect of brain activity. It goes through spectral analysis (spectral decomposition, the time-frequency relation, tapers, and linear regression using oscillatory basis functions), time-frequency analysis (time vs. frequency resolution) and wavelets. It also shows examples of time-frequency analysis of power (Hanning window, wavelets and multitapers).
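The time-vs-frequency resolution trade-off can be made concrete with Morlet wavelets: keeping the number of cycles fixed, temporal resolution improves (and spectral resolution worsens) as frequency increases. A sketch on a synthetic signal with an 8 Hz burst followed by a 30 Hz burst (all parameters are arbitrary demo choices):

```python
import numpy as np

rng = np.random.default_rng(6)
sfreq = 200.0
t = np.arange(0, 2.0, 1 / sfreq)

# Synthetic signal: an 8 Hz burst in the first second, 30 Hz in the second.
sig = np.where(t < 1.0, np.sin(2 * np.pi * 8 * t), np.sin(2 * np.pi * 30 * t))
sig = sig + 0.5 * rng.normal(size=t.size)

freqs = np.arange(4.0, 41.0, 2.0)
n_cycles = 5                       # fixed cycle count: sigma_t shrinks with f
power = np.empty((freqs.size, t.size))
for i, f in enumerate(freqs):
    sigma_t = n_cycles / (2 * np.pi * f)
    wt = np.arange(-3 * sigma_t, 3 * sigma_t, 1 / sfreq)
    wavelet = np.exp(-wt ** 2 / (2 * sigma_t ** 2)) * np.exp(2j * np.pi * f * wt)
    wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))    # unit energy
    power[i] = np.abs(np.convolve(sig, wavelet, mode="same")) ** 2

# Power concentrates near 8 Hz early in the trial and near 30 Hz late.
early = freqs[np.argmax(power[:, 60:140].mean(axis=1))]
late = freqs[np.argmax(power[:, 260:340].mean(axis=1))]
print(early, late)
```

The two window slices (samples 60:140 and 260:340) deliberately avoid the burst transition and the edges, where the convolution mixes the two regimes.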
The video shows how FieldTrip analyses M/EEG signals. A background is provided to put the toolbox in a broader perspective and to understand how it came about and continues to develop. It describes the signal characteristics considered during the analysis: the time course of the activity (ERP/F), spectral characteristics (power spectrum) and the spatial distribution over the head (source reconstruction), as well as FieldTrip vs. default MATLAB, some example scripts and some tips for the analyses.
Why would you want to do real-time analysis? It allows you to interact with the subject’s brain state or to relate cortical activity to behaviour. It is important to have access to the data, and since the analysis happens while the data arrive, timing is critical, so we consider timing requirements along with slightly technical examples (an algorithm for efficiency analysis). We also present the pipeline sequence of the analysis and some efficiency considerations (e.g., for parsimony, only do real-time analysis when necessary).
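A real-time pipeline can be reduced to: acquire a block of samples, append it to a ring buffer holding the latest window, and update a cheap feature once per block. The sketch below simulates this with random data; `acquire_block` is a hypothetical stand-in for a driver or buffer-server call, not any real acquisition API.

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(7)
sfreq = 250
block, window = sfreq // 10, sfreq     # 100 ms blocks, 1 s sliding window
buf = deque(maxlen=window)             # ring buffer of the newest samples

def acquire_block():
    """Hypothetical stand-in for a driver call returning new samples."""
    return rng.normal(size=block)

powers = []
for _ in range(40):                    # ~4 s of simulated streaming
    buf.extend(acquire_block())        # cheap per-block bookkeeping
    if len(buf) == window:             # analyse only once the window fills
        x = np.asarray(buf)
        powers.append(np.mean(x ** 2)) # e.g. a per-window power estimate

print(len(powers), "window estimates")
```

The timing constraint is that everything inside the loop body must complete within one block duration (100 ms here), which is why only the cheap feature is updated per block.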