= DTI analysis tutorial =
These instructions are for preprocessing DTI data, usually with the aim of conducting a TBSS (“tract-based spatial statistics”) analysis. TBSS is a whole-brain method for comparing diffusion measures such as FA (fractional anisotropy) or MD (mean diffusivity) between groups (e.g. patients and controls), or for correlating diffusion measures with cognition (e.g. a neuropsychological test score). It is specialised for the analysis of diffusion data, and is more sensitive to group differences in diffusion measures than a VBM-style analysis using SPM (see Rae et al., Neuroimage, in press).
FSL runs on Linux, and I tend to run most of my preprocessing from the command line using a sequence of bash scripts (although some stages can be run in the GUI if you prefer).
Some basic instructions and example bash scripts are given here, but the FSL website is also an excellent source of information. See both the diffusion “tutorial” and TBSS webpages at:
http://www.fmrib.ox.ac.uk/fslcourse/lectures/practicals/fdt/index.htm
http://www.fmrib.ox.ac.uk/fsl/tbss/index.html
All the example bash scripts given here need a small amount of editing to insert your subject numbers and study directory. Once the scripts are in your /home/bin/, make them all executable by typing “chmod a+rwx *” at the command line. Otherwise, you shouldn’t need to be an experienced Linux scripter to use them.
The way I do things is to make an overall "dti_project" folder, and then individual subject folders within this directory, named after the subject scan numbers, e.g. “110159”, “110309”, etc. In each subject folder, you want a .nii (or .nii.gz) diffusion datafile, which should have between 10 and 100 volumes (typically around 60, depending on the sequence applied). You also need a .bvec and a .bval file, which contain information about the diffusion image acquisition parameters (the gradient directions and b-values, respectively).
The “copy_series_dcm2nii” script will make a folder for each subject in your study directory, copy the diffusion Series from the CBU /mridata/cbu/ to the subject folder, and convert the dicoms to niftis using dcm2nii. You will need to specify your study directory and subject numbers in this script, as with all the following ones.
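As a rough illustration (this is a sketch, not the actual CBU script), a copy_series_dcm2nii-style script might look like the one below. STUDYDIR, SUBJECTS, SERIES and the folder layout under /mridata/cbu are all placeholder assumptions you must edit for your own study; the copy step is skipped if the raw-data area is not mounted, and the dcm2nii call prints itself if the converter is not installed:

```shell
#!/bin/bash
# Sketch of a copy_series_dcm2nii-style script (hypothetical names -- edit
# STUDYDIR, SUBJECTS and SERIES for your own study before use).
STUDYDIR=./dti_project            # assumption: your study directory
SUBJECTS="110159 110309"          # assumption: your subject scan numbers
SERIES=Series_005                 # assumption: the name of your diffusion series

for subj in $SUBJECTS; do
  # Make a folder for each subject in the study directory
  mkdir -p "$STUDYDIR/$subj"
  # Copy the diffusion series from the CBU raw-data area, if it is mounted
  # (the glob pattern below is a guess at the layout -- check yours)
  if [ -d /mridata/cbu ]; then
    cp -r /mridata/cbu/*"$subj"*/"$SERIES" "$STUDYDIR/$subj"/
  fi
  # Convert the dicoms to nifti with dcm2nii (prints the command if not installed)
  if command -v dcm2nii >/dev/null 2>&1; then
    dcm2nii -o "$STUDYDIR/$subj" "$STUDYDIR/$subj/$SERIES"
  else
    echo "[dry-run] dcm2nii -o $STUDYDIR/$subj $STUDYDIR/$subj/$SERIES"
  fi
done
```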
Then, use the eddy_correct_data script to correct for motion and eddy currents in each subject's data (which you always get in "raw" diffusion data, although on a modern Siemens Trio the eddy currents are not that bad).
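The core of an eddy_correct_data-style script is one FSL call per subject: eddy_correct takes the 4D input file, an output basename, and the number of the reference volume that everything is registered to. A minimal sketch, assuming the raw file is called data.nii.gz in each subject folder and that volume 0 is the b=0 image (check yours):

```shell
#!/bin/bash
# Sketch of an eddy_correct_data-style loop; filenames are assumptions.
STUDYDIR=./dti_project
SUBJECTS="110159 110309"

# Run an FSL command if installed, otherwise just print it (dry run)
run() { if command -v "$1" >/dev/null 2>&1; then "$@"; else echo "[dry-run] $*"; fi; }

for subj in $SUBJECTS; do
  # eddy_correct <4D input> <output basename> <reference volume number>
  # Volume 0 is normally the b=0 image, so all volumes are registered to it.
  run eddy_correct "$STUDYDIR/$subj/data.nii.gz" "$STUDYDIR/$subj/data_corrected" 0
done
```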
Movement artefacts may be a concern in diffusion data, as with fMRI. The eddy_correct script corrects for this, to some degree (the translations, at least). In healthy young subjects being scanned with a short diffusion sequence, movement artefacts are often minimal. However, to check, you can use “dti_motion_data”, which calls a “dti_motion” script given to me by Mark Jenkinson at FMRIB. (So make sure you also have the “dti_motion” script in your /home/bin/). It uses the eddy_correct logfile to calculate movement, and gives you “ec_trans.png” and “ec_rot.png” images showing subject movements in translations and rotations. They can be viewed using a linux graphics viewer such as Eye of Gnome (type eog at the command line).
Next you need to extract the volume from the .nii datafile that has no diffusion weighting applied (you can usually spot this because it looks "white" when the others look grey, and it's nearly always the first volume in the sequence). If it's the first volume, FSL refers to it as volume "0". Then you need to skull-strip this no-diffusion-weighting (nodif.nii) volume using the FSL tool "bet". Apply the option "-m" to get a brain mask, which you need for the next preprocessing step. All this is done for each subject in the "brain_mask" script. You should check the skull-stripped nodif_brain to make sure it looks brain-shaped with no big black holes in it, and check that the nodif_brain_mask is generally brain-shaped and not odd. The nodif_brain_mask should also be binary (i.e. have a value of “1” at all brain voxels and “0” everywhere else).
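The two FSL calls at the heart of a brain_mask-style script might look like the sketch below; the data_corrected filename and the use of volume 0 are assumptions, so check which volume is your b=0 image first:

```shell
#!/bin/bash
# Sketch of a brain_mask-style script; filenames are assumptions.
STUDYDIR=./dti_project
SUBJECTS="110159 110309"

# Run an FSL command if installed, otherwise just print it (dry run)
run() { if command -v "$1" >/dev/null 2>&1; then "$@"; else echo "[dry-run] $*"; fi; }

for subj in $SUBJECTS; do
  d="$STUDYDIR/$subj"
  # fslroi <input> <output> <first volume> <number of volumes>:
  # extract volume 0, the no-diffusion-weighting image, as nodif
  run fslroi "$d/data_corrected" "$d/nodif" 0 1
  # Skull-strip nodif; -m also writes the binary nodif_brain_mask
  run bet "$d/nodif" "$d/nodif_brain" -m
done
```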
Then you fit the diffusion tensors - this models the diffusion at each voxel along three orthogonal directions. The fitting is done in the "dtifit_data" script (it takes a few minutes per subject). You should use the eddy-corrected .nii data, and the .bval and .bvec files.
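A dtifit_data-style script boils down to one dtifit call per subject, pointing at the eddy-corrected data, the brain mask, and the bvecs/bvals files. A sketch, with the filenames as assumptions; with --out=dti, the outputs are named dti_FA, dti_MD, dti_V1 (principal diffusion direction), and so on:

```shell
#!/bin/bash
# Sketch of a dtifit_data-style script; filenames are assumptions.
STUDYDIR=./dti_project
SUBJECTS="110159 110309"

# Run an FSL command if installed, otherwise just print it (dry run)
run() { if command -v "$1" >/dev/null 2>&1; then "$@"; else echo "[dry-run] $*"; fi; }

for subj in $SUBJECTS; do
  d="$STUDYDIR/$subj"
  # Fit the tensor at every voxel inside the brain mask; outputs include
  # dti_FA, dti_MD and dti_V1
  run dtifit --data="$d/data_corrected" --out="$d/dti" \
      --mask="$d/nodif_brain_mask" --bvecs="$d/bvecs" --bvals="$d/bvals"
done
```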
Then you'll probably want to check that the tensor fitting has worked and the data look anatomically sensible. To do this, look at the FA image in FSLView. Start FSLView and open dti_V1. Add FA as an overlay. While V1 is highlighted, click the blue "i" box, and from "DTI display options" choose "RGB" and "Modulate by FA". Tracts oriented left-right should appear red (e.g. the corpus callosum), tracts oriented front-back green (e.g. the cingulum bundle), and tracts oriented superior-inferior blue (e.g. the corticospinal tract). You can use a DTI atlas to help you with this. The more you look at FA images, the more easily you will be able to see if something is wrong (in young healthy subjects, if the tensors have been fitted properly, there shouldn’t be). If you are in any doubt, email me a screenshot at email@example.com .
Finally, you can use the "copy_FA" script to copy the FA (fractional anisotropy) images over to a TBSS analysis directory, where you'll run the TBSS.
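A copy_FA-style script can be sketched as below; the TBSS directory name is an assumption, and the point of renaming is that TBSS needs one uniquely named FA image per subject in a single directory:

```shell
#!/bin/bash
# Sketch of a copy_FA-style script; the TBSS directory name is an assumption.
STUDYDIR=./dti_project
SUBJECTS="110159 110309"
TBSSDIR=./tbss_analysis

mkdir -p "$TBSSDIR"
for subj in $SUBJECTS; do
  # dtifit with --out=dti names the FA image dti_FA.nii.gz; give each copy
  # a unique, subject-identifying name for TBSS
  if [ -f "$STUDYDIR/$subj/dti_FA.nii.gz" ]; then
    cp "$STUDYDIR/$subj/dti_FA.nii.gz" "$TBSSDIR/${subj}_FA.nii.gz"
  else
    echo "[dry-run] cp $STUDYDIR/$subj/dti_FA.nii.gz $TBSSDIR/${subj}_FA.nii.gz"
  fi
done
```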
Then, you don't need to do any more scripting for TBSS. Just run tbss_1_preproc * and all the following steps according to the instructions on the FSL website.
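For reference, the standard TBSS sequence run from inside the TBSS directory looks like this (the FSL TBSS webpage remains the authority on each step and its options):

```shell
#!/bin/bash
# The standard TBSS command sequence, run from inside the TBSS analysis
# directory (see the FSL TBSS webpage for details of each step).

# Run an FSL command if installed, otherwise just print it (dry run)
run() { if command -v "$1" >/dev/null 2>&1; then "$@"; else echo "[dry-run] $*"; fi; }

run tbss_1_preproc *.nii.gz   # erode the FA images and move them into FA/
run tbss_2_reg -T             # nonlinearly register all FA to the FMRIB58_FA target
run tbss_3_postreg -S         # create the mean FA image and the FA skeleton
run tbss_4_prestats 0.2       # threshold the skeleton at FA > 0.2
```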
In summary, this is the sequence in which the preprocessing bash scripts should be applied. There is a reminder of this order in the dti_preprocessing_steps.txt file, which you might find useful to keep on hand in your study directory.
# copy_series_dcm2nii
# eddy_correct_data
# dti_motion_data (optional, just for interest, to check how much subjects move)
# brain_mask
# dtifit_data
# copy_FA
Rae CL, Correia MM, Altena E, Hughes LE, Barker RA, Rowe JB (in press). White matter pathology in Parkinson's disease: the effect of imaging protocol differences and relevance to executive function. Neuroimage.