AutomatedSpikeDetection - Methods

Automated DTI spike detection

Matthew is working on a DTI toolbox tailored to this problem: see http://imaging.mrc-cbu.cam.ac.uk/svn/cbumethods/dti/trunk/

In its current incarnation this does:

  1. Automatic conversion of Siemens DTI data to SPM format, and gradient direction reading
  2. Automatic conversion of this format to FSL format
  3. Brain detection using various methods, including registration to the SPM template.
  4. Removal of slices detected as bad, by pooling data across volumes with the same (or similar) diffusion direction (a sketch of this step follows the list).
  5. Various methods of outlier detection
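
As a rough illustration of step 4, here is a minimal sketch in Python/NumPy rather than the toolbox's MATLAB code. The array layout, the function name and the similarity threshold are assumptions for illustration only, not the toolbox's API.

{{{#!python
import numpy as np

def repair_bad_slices(data, bvecs, bad, min_cos=0.99):
    """Replace slices flagged as bad with the mean of the same slice from
    volumes with the same (or similar) diffusion direction.

    data  : (X, Y, n_slices, n_vols) image array
    bvecs : (n_vols, 3) unit gradient directions
    bad   : (n_slices, n_vols) boolean mask of slices flagged as bad
    """
    fixed = data.copy()
    n_slices, n_vols = bad.shape
    # |cosine| between direction pairs; abs because b-vectors are
    # sign-ambiguous (b and -b give the same diffusion weighting).
    cos = np.abs(bvecs @ bvecs.T)
    for v in range(n_vols):
        similar = np.where((cos[v] >= min_cos) & (np.arange(n_vols) != v))[0]
        for s in np.where(bad[:, v])[0]:
            good = [u for u in similar if not bad[s, u]]
            if good:  # pool across clean volumes with a matching direction
                fixed[:, :, s, v] = data[:, :, s, good].mean(axis=-1)
    return fixed
}}}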

Outlier detection

The current most successful method works like this (a sketch in code follows the list):

  1. Identify diffusion B value groups, usually just B0 scans and B=1000 scans in our data
  2. Within each non-zero B value group (B ~= 0, usually just the B=1000 scans):
    1. Collect slice number N for all volumes
    2. Run PCA on these slices, and remove the first - say - 25% of variance, assuming this is brain or diffusion related rather than artefact related. Of course it's difficult to predict a priori how much variance the artefacts will contribute, but because they almost invariably come from a single slice in the slice time series (of, say, the 128 B=1000 volumes we collect), it seems reasonable to assume the artefact won't be in the top 25% of variance. This seems more certain for slices that contain a lot of brain; in these slices, 50% variance removal seems to be effective.
    3. Run a 2D Fourier transform on the PCA-cleaned slices, and take the abs to get frequency power in X and Y. This should give us something closer to the cause of the artefact, because the artefact comes from a spike in frequency (K) space. In our case, the image reconstruction will have smoothed this spike in frequency space.
    4. We now have an X by Y by N series where N is the number of time points = diffusion acquisitions. Our job is to detect spikes in space and time, where the spikes will be roughly 4x4 pixels in space, and in a single time point (because the spikes occur at random times).
    5. So, we smooth the frequency power maps by around 4 pixels (= frequency steps; by eye, the rough size of the artefact in frequency space).
    6. The next step is to run outlier detection in space and time. We do this by calculating the outlierness of points using the interquartile range, first in space and then in time. In fact we iterate, running (space outlierness, time outlierness) passes - say - 5 times.
    7. The remaining scores for each slice give a combination of space and time outlierness. We take the maximum outlier score for each slice and use that to predict whether the slice contains an artefact. We can do this with a simple threshold (on the interquartile range score) - or by using the distribution, again with - say - an interquartile range algorithm.
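
As a concrete illustration of steps 2-7, here is a minimal sketch in Python/NumPy rather than the toolbox's MATLAB code. The function names, the exact form of the iterated interquartile range score, and the default parameters are assumptions for illustration.

{{{#!python
import numpy as np
from scipy.ndimage import uniform_filter

def remove_top_pca_variance(stack, var_frac=0.25):
    """Remove the fewest leading PCA components that together explain at
    least var_frac of the variance of a (n_vols, n_voxels) stack, assuming
    these are brain or diffusion signal rather than artefact (step 2)."""
    demeaned = stack - stack.mean(axis=0)
    U, s, Vt = np.linalg.svd(demeaned, full_matrices=False)
    cum_var = np.cumsum(s ** 2) / np.sum(s ** 2)
    n_drop = int(np.searchsorted(cum_var, var_frac)) + 1
    return (U[:, n_drop:] * s[n_drop:]) @ Vt[n_drop:]

def iqr_score(a, axis):
    """Outlierness as distance above the upper quartile, in IQR units."""
    q1, q3 = np.percentile(a, [25, 75], axis=axis, keepdims=True)
    iqr = np.where((q3 - q1) > 0, q3 - q1, 1.0)
    return (a - q3) / iqr

def slice_spike_scores(slice_stack, var_frac=0.25, smooth_px=4, n_iter=5):
    """slice_stack: (n_vols, X, Y) = one slice position across all volumes
    in a B value group.  Returns one outlier score per volume."""
    n_vols, nx, ny = slice_stack.shape
    cleaned = remove_top_pca_variance(slice_stack.reshape(n_vols, -1), var_frac)
    # 2D FFT per volume; abs gives frequency power, where a K-space spike
    # should show up as a compact blob (steps 3-4).
    power = np.abs(np.fft.fft2(cleaned.reshape(n_vols, nx, ny), axes=(1, 2)))
    # Smooth by roughly the size of the artefact in frequency space (step 5).
    power = uniform_filter(power, size=(1, smooth_px, smooth_px))
    # Iterated (space outlierness, time outlierness) passes (step 6).
    scores = power
    for _ in range(n_iter):
        scores = iqr_score(scores, axis=(1, 2))  # across space
        scores = iqr_score(scores, axis=0)       # across time
    # Maximum combined outlier score per volume for this slice (step 7).
    return scores.reshape(n_vols, -1).max(axis=1)
}}}

A slice would then be flagged when its score exceeds either a fixed threshold or one derived from the distribution of scores across all slices.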

Performance of the method so far

John, Paul and I ran through one dataset carefully looking for outliers. Matthew has since updated this list of known outliers with any outliers convincingly detected with the various algorithms he's tried. Note that there are 6144 slices in the diffusion data (48 slices, 128 volumes). The current algorithm has the following performance on this relatively independent set of outliers:

  • Hit rate around 90% - detecting 327 slices
  • False positive rate around 7% - detecting 409 slices

Reviewing the apparent false positives reveals that the algorithm has in fact detected many new true positives. Including these in the ground-truth positives (obviously a biased measure) gives:

  • Hit rate around 93% - detecting 464 slices
  • False positive rate of around 5% - detecting 272 slices

Misses are therefore 35 slices.
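
A quick reconciliation of these counts (the ground-truth totals are inferred from the reported rates, so the numbers are approximate):

{{{#!python
n_slices = 48 * 128                   # 6144 slices in the diffusion data
# First pass: 327 hits at ~90% hit rate implies ~363 known outliers,
# and 409 false positives among the remaining slices is ~7%.
print(327 / 0.90)                     # ~363 ground-truth positives
print(409 / (n_slices - 363))         # ~0.07 false positive rate
# Updated ground truth: 464 hits at ~93%, 272 false positives.
print(464 / 0.93)                     # ~499 ground-truth positives
print(272 / (n_slices - 499))         # ~0.05 false positive rate
print(499 - 464)                      # 35 missed slices
}}}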

The remaining problem is that the method still looks as if it may remain somewhat sensitive to the diffusion signal, which also changes over time and can have specific frequency components. This makes it rather dangerous to apply blind. Other sources of false positives are movement artefact (not very worrying) and vessel artefact (medium worry level).

Implications of the results so far for the analysis

The algorithm detects in the region of 10% bad slices in the test dataset - maybe there are more it has missed.

That makes it very likely indeed that there will be at least one slice per volume with an artefact. It also means you have in the region of a 40% chance that, for any one direction, there will be at least one slice position with artefacts in both repetitions: a given slice position is bad in both repetitions with probability around 0.01, and across 48 slices that compounds to roughly 38%. That in turn means that, in this dataset, you're going to be down to one set of directions (you'll need both members of a pair to compile an artefact-free single volume), and only about 60% of these directions will survive, in some random pattern. That may well be very noisy to analyze.
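
A quick check of those numbers (assuming ~10% bad slices, 48 slices per volume and two repetitions per direction):

{{{#!python
p_bad = 0.10                                # ~10% of slices are bad
n_slices = 48                               # slices per volume
# P(at least one bad slice somewhere in a volume):
print(1 - (1 - p_bad) ** n_slices)          # ~0.99 - nearly every volume
# P(some slice position is bad in BOTH repetitions of a direction):
print(1 - (1 - p_bad ** 2) ** n_slices)     # ~0.38 - roughly the 40% above
}}}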

Other things we've tried

  1. Looking for outliers in the background noise
  2. Using peak detection to look for peaks in the frequency maps, as above
  3. Using ICA to decompose the frequency maps into independent components, and looking for components with a single spike in the time series (moderately successful; a sketch follows this list)
  4. Simple PCA detection (very unsuccessful)
  5. Taking the residuals from a simple diffusion tensor fit (very unsuccessful) - see http://imaging.mrc-cbu.cam.ac.uk/svn/cbumethods/dti/trunk/@dtiset/tensor_data_slice.m
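
For item 3, a minimal sketch of the ICA idea, here using scikit-learn's FastICA (an assumption - the toolbox may use a different ICA implementation, and the component count and spike ratio are illustrative):

{{{#!python
import numpy as np
from sklearn.decomposition import FastICA

def single_spike_components(power, n_components=20, spike_ratio=5.0):
    """power: (n_vols, X, Y) frequency power stack.  Decompose into
    independent components and flag those whose time course is dominated
    by a single time point - a candidate one-off spike."""
    n_vols = power.shape[0]
    ica = FastICA(n_components=n_components, random_state=0)
    # Rows are time points, so fit_transform returns component time courses.
    time_courses = ica.fit_transform(power.reshape(n_vols, -1))
    flagged = []
    for c in range(time_courses.shape[1]):
        tc = np.abs(time_courses[:, c])
        order = np.argsort(tc)[::-1]
        # One time point much larger than the rest suggests a spike.
        if tc[order[0]] > spike_ratio * tc[order[1]]:
            flagged.append((c, int(order[0])))  # (component, volume index)
    return flagged
}}}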

Things that might be worthwhile

  1. Windowed FFTs, removing brain signal
  2. Wavelets? Major faff.
  3. Pursuing tensor fit a bit further

Plan

  1. Matthew to package toolbox for mini-release as stands
  2. Take the detection rate as at least proportional to the true positive rate, and run on other datasets to estimate the severity of the problem
  3. In next week or so, Matthew will try some of the options above
  4. Review at the end of next week to see whether further options are worth pursuing.

MatthewBrett, JohnGriffiths
