AnalysisPath - Meg Wiki

MEG Analysis Path Without Individual MRI images

The following analysis path is probably the quickest way to get from your MEG data to presentable source estimates, with some compromises with respect to accuracy and statistical analysis. The general idea is this:

Overview

  1. interpolate your MEG data to a standardised sensor array

  2. apply statistics in "signal space" (e.g. using SensorSPM), in order to detect significant contrasts and latency ranges

  3. run source analysis using a standardised head model on the grand-averaged MEG data (e.g. using a commercial software package, offering several different approaches to explore together with good visualisation)

  4. possibly apply the same source analysis on individual subject data, e.g. for statistical analysis in "source space", or for ROI analysis

This closely resembles the analysis strategy employed for ERP analysis, where electrodes are usually placed at standardised positions (e.g. the "10/20 system"), such that data for different subjects are very easy to combine (e.g. for grand-averages, statistics on the same electrodes etc.).
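
To make the analogy concrete, here is a minimal sketch of the pay-off, assuming MNE-Python and hypothetical file names (neither is prescribed by this wiki): once every subject's average shares the same standardised channel set, combining the data is a one-line operation.

    import mne

    # Hypothetical per-subject average files; all share the same standardised
    # channel layout (e.g. a 10/20 EEG montage, or MEG data already
    # interpolated to a standard sensor array).
    fnames = ["subj01-ave.fif", "subj02-ave.fif", "subj03-ave.fif"]
    evokeds = [mne.read_evokeds(f, condition=0) for f in fnames]

    # With identical channels across subjects, the grand average is simply
    # the mean evoked response over subjects.
    grand_avg = mne.grand_average(evokeds)
    grand_avg.plot()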

Motivation

Although using individual structural MRI data sets for source estimation and applying group statistics in source space to individual subject data is in principle the most sophisticated approach to EEG/MEG data analysis, there are also some limitations that should be considered before choosing the most efficient analysis path:

  1. Some source estimation procedures rely on rather restrictive modelling assumptions. This is the case for dipole models, for example, which make assumptions about the number and approximate locations of sources. These assumptions may not be fulfilled for single-subject data, either because of high noise levels or because of inter-individual variation in the generator distributions.

  2. Some distributed source methods make weaker assumptions about the number of active sources, but still impose constraints on the focality of sources (e.g. L1-norm, multiple sparse priors). These methods may still require a high SNR in order to produce accurate results, which may not be achieved for single-subject data.

  3. Source analysis at the single-subject level can be very time consuming, in particular if realistic head models and cortex reconstructions are involved. Depending on the research questions, testing the predictions underlying the experiment may not require this effort.

  4. Analysing grand-mean data in signal and source space can provide clues about the optimal analysis strategy on the single-subject level, e.g. with respect to latency ranges, optimal source models, ROIs etc.

  5. Many MEG papers do not report any data in signal space (e.g. time course information), which could provide important information about data quality. Furthermore, since different papers often use different source estimation procedures, it can be difficult to judge whether different studies replicate each other's results.

  6. Spatial resolution of EEG/MEG source estimation is limited for fundamental physical reasons ("inverse problem"). Spatial normalisation to a standard brain in source space further lowers spatial resolution. So why not normalise in signal space and apply the standard head model for all source analyses?

Doing it

In MEG recordings, the position and orientation of the sensor array relative to the head usually differ considerably across subjects (unless special precautions are taken). Until recently, it was not straightforward to interpolate MEG data from different subjects onto a common sensor array (e.g. the average array across subjects), so it has been standard to analyse MEG data only in source space, i.e. after source estimation procedures have been applied to single-subject data. However, it is possible to transform individual subject data into a common sensor frame (e.g. the average sensor configuration of a group of subjects) using the "-trans" option of the Maxfilter utility. It is only necessary to apply this to the averaged data (i.e. not the huge raw data files). The following steps are required (a rough sketch of the interpolation step precedes the list):
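
As an illustration only, the sketch below uses MNE-Python's maxwell_filter with its destination argument as a stand-in for the Maxfilter "-trans" call; this is an assumption rather than the procedure documented on the linked pages, all file names are hypothetical, and note that maxwell_filter operates on raw data, whereas "-trans" can be applied directly to the much smaller averaged files.

    import mne

    # Hypothetical files: one subject's recording, plus a reference measurement
    # whose device-to-head transform defines the common sensor frame
    # (e.g. the StandardSensorArray chosen for the group).
    raw = mne.io.read_raw_fif("subj01_raw.fif", preload=True)

    # Signal-space separation with interpolation to the reference frame:
    # 'destination' may be a FIF file (its device<->head transform is used)
    # or a 3-element head-coordinate position.
    raw_common = mne.preprocessing.maxwell_filter(
        raw,
        destination="standard_array_raw.fif",  # common sensor frame (assumed file)
        st_duration=None,                       # plain SSS, no tSSS
    )
    raw_common.save("subj01_trans_raw.fif", overwrite=True)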

  1. Pre-process your MEG data up to the stage of individual averages (PreProcessing)

  2. Choose/determine your standard sensor array (see StandardSensorArray)

  3. Interpolate single-subject MEG data to the standard sensor array (InterpolateData)

  4. Compute grand-mean data (GrandMean); steps 4-6 are sketched in the example after this list

  5. Run statistics in sensor space (SensorStats)

  6. Apply source estimation to grand-mean data (SourceEstimation)
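
Finally, to make steps 4 to 6 concrete, here is a minimal end-to-end sketch. It assumes MNE-Python rather than the SPM-based and commercial tools referred to above, and the file names, the cluster-based permutation test, the fsaverage template head model and the ad-hoc noise covariance are illustrative assumptions, not the procedures prescribed by the linked pages.

    import os.path as op
    import numpy as np
    import mne
    from mne.stats import spatio_temporal_cluster_1samp_test

    # --- Step 4: grand mean -------------------------------------------------
    # Hypothetical per-subject averages, already interpolated to the standard
    # sensor array (step 3).
    fnames = ["subj%02d_trans-ave.fif" % i for i in range(1, 11)]
    evokeds = [mne.read_evokeds(f, condition=0) for f in fnames]
    grand_avg = mne.grand_average(evokeds)

    # --- Step 5: statistics in sensor space ----------------------------------
    # One illustrative option: a spatio-temporal cluster permutation test on
    # the gradiometers, testing the evoked responses against zero (for a real
    # contrast, feed in per-subject difference waves instead).
    grads = [ev.copy().pick("grad") for ev in evokeds]
    X = np.stack([ev.data.T for ev in grads])  # (n_subjects, n_times, n_channels)
    adjacency, ch_names = mne.channels.find_ch_adjacency(grads[0].info,
                                                         ch_type="grad")
    t_obs, clusters, cluster_pv, H0 = spatio_temporal_cluster_1samp_test(
        X, adjacency=adjacency, n_permutations=1000)
    print("cluster p-values:", cluster_pv)

    # --- Step 6: source estimation on the grand mean -------------------------
    # Standard head model: the fsaverage template shipped with MNE, used as a
    # stand-in for whichever standard model the SourceEstimation page uses.
    fs_dir = mne.datasets.fetch_fsaverage()
    subjects_dir = op.dirname(fs_dir)
    src = op.join(fs_dir, "bem", "fsaverage-ico-5-src.fif")
    bem = op.join(fs_dir, "bem", "fsaverage-5120-5120-5120-bem-sol.fif")

    # trans="fsaverage" assumes the standardised head frame is roughly aligned
    # with the template MRI, which is acceptable for this quick-look path.
    fwd = mne.make_forward_solution(grand_avg.info, trans="fsaverage",
                                    src=src, bem=bem, meg=True, eeg=False)
    noise_cov = mne.make_ad_hoc_cov(grand_avg.info)  # crude stand-in for a
                                                     # baseline noise covariance
    inv = mne.minimum_norm.make_inverse_operator(grand_avg.info, fwd, noise_cov)
    stc = mne.minimum_norm.apply_inverse(grand_avg, inv, lambda2=1.0 / 9.0,
                                         method="dSPM")
    stc.plot(subject="fsaverage", subjects_dir=subjects_dir, hemi="both")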