fMRIPrep at MRC CBU
For details about what fMRIPrep is and why you'd want to use it, see their website. This page is about how to use it at CBU with Singularity. Note that fMRIPrep has its own documentation on Singularity use, so please consult that too.
Singularity is a container image runner, like the better-known Docker. fMRIPrep is distributed as a Docker container image, which at CBU we run with Singularity rather than Docker. Easy, right? There is a good explanation on this CBU Intranet page.
Wait, what's a container image?
You can think of it as a small virtual machine that bundles a version of Linux, all the dependencies fMRIPrep needs, and the fMRIPrep application itself into a neat package that can run independently of the rest of the imaging system.
Using fMRIPrep at CBU
We keep containers for each release at these imaging system paths:
/imaging/local/software/singularity_images/fmriprep/fmriprep-1.1.4.simg
/imaging/local/software/singularity_images/fmriprep/fmriprep-1.1.7.simg
/imaging/local/software/singularity_images/fmriprep/fmriprep-1.1.8.simg
/imaging/local/software/singularity_images/fmriprep/fmriprep-1.2.0.simg
/imaging/local/software/singularity_images/fmriprep/fmriprep-1.4.0.simg
/imaging/local/software/singularity_images/fmriprep/fmriprep-1.4.1rc4.simg
/imaging/local/software/singularity_images/fmriprep/fmriprep-1.4.1.simg
/imaging/local/software/singularity_images/fmriprep/fmriprep-1.5.0.simg
To use it, run something like this frankly intimidating command, substituting the version you want to use:
singularity run -C -B /imaging/jc01/kamitani:/kamitani /imaging/local/software/singularity_images/fmriprep/fmriprep-1.1.8.simg /kamitani/bids /kamitani/fmriprep participant --participant-label sub-03 -w /kamitani/fmriprepwork --nthreads 16 --omp-nthreads 16 --fs-license-file /kamitani/license.txt --output-space T1w
It's a bit less intimidating if you break it over multiple lines (but harder to copy and paste successfully):
singularity run -C \
    -B /imaging/jc01/kamitani:/kamitani \
    /imaging/local/software/singularity_images/fmriprep/fmriprep-1.1.8.simg \
    /kamitani/bids /kamitani/fmriprep participant \
    --participant-label sub-03 -w /kamitani/fmriprepwork --nthreads 16 \
    --omp-nthreads 16 --fs-license-file /kamitani/license.txt --output-space T1w
Again, Computing has a nice explanation of what it all means on this CBU Intranet page. The key things to notice are:
- The -B flag is for a bind mount: as with a virtual machine, the container can't see any paths on the wider imaging system unless you make them available inside the container as mounts. For instance, -B /imaging/jc01/kamitani:/kamitani makes the former imaging system path accessible inside the container as the latter path.
- You call the container image like any binary, by plugging in its full absolute path (this is different from what you may be used to coming from Docker)
- The first three arguments to fMRIPrep itself (the ones that come after the path to the image) are the input directory (here /kamitani/bids, which is actually at /imaging/jc01/kamitani/bids, since these arguments are evaluated inside the container!), the output directory (here /kamitani/fmriprep), and the analysis level (participant; this is a bit redundant, and later fMRIPrep versions don't require this argument).
- -w /kamitani/fmriprepwork sets a working directory where in-progress files are kept. This is useful if you want to re-run pipelines or inspect intermediate results. Be good to IT and delete this directory once you are happy with the preprocessed data.
- --fs-license-file /kamitani/license.txt points fMRIPrep to a FreeSurfer license file. You should obtain your own FreeSurfer license for this. Alternatively, you can disable FreeSurfer if you don't need surface reconstructions.
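If you prefer to drive this from Python (as the worked example below does), the same call can be assembled as a list of arguments first. This is a minimal sketch; the paths and image version simply mirror the illustrative example above and should be substituted with your own:

```python
# Build the singularity/fMRIPrep command as a list of arguments.
# All paths here mirror the illustrative example above - substitute
# your own project directory, image version, and options.
host_dir = "/imaging/jc01/kamitani"   # path on the imaging system
mount = "/kamitani"                   # path as seen inside the container
image = ("/imaging/local/software/singularity_images/"
         "fmriprep/fmriprep-1.1.8.simg")

cmd = [
    "singularity", "run", "-C",
    "-B", f"{host_dir}:{mount}",        # bind-mount the project directory
    image,                              # the image, called like a binary
    f"{mount}/bids",                    # input BIDS directory (container path)
    f"{mount}/fmriprep",                # output directory
    "participant",                      # analysis level
    "--participant-label", "sub-03",
    "-w", f"{mount}/fmriprepwork",      # working directory
    "--nthreads", "16", "--omp-nthreads", "16",
    "--fs-license-file", f"{mount}/license.txt",
    "--output-space", "T1w",
]
print(" ".join(cmd))
```

To actually launch it you would pass cmd to subprocess.run(cmd); keeping the arguments in a list like this makes it easy to loop over participants or versions.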
Worked example: Handling custom templates, submitting to SLURM cluster
(thanks Joff Jones)
fMRIPrep supports custom templates, but getting them into your container can be challenging! There is information on this issue in the fMRIPrep docs. Below is a pattern that Joff Jones has found to work at CBU. The tricky bit here is keeping track of what happens natively (on the login node), and what happens inside the container.
Another tricky thing is running the fMRIPrep job on the cluster; it is bad practice to run these fairly intensive jobs on the login nodes. The example below takes you through one way to do this. You could run it in an IPython shell session, or put the examples into a JupyterLab notebook.
First you need templateflow on your Python path, so try something like activating Neuroconda.
Then we start by downloading the template we want to use:
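For example (a sketch, assuming the templateflow Python package is importable, e.g. via Neuroconda; the template name matches the --skull-strip-template argument used in the script below):

```python
# Sketch: fetch the paediatric template with the templateflow API.
# Downloads are cached in $TEMPLATEFLOW_HOME, which defaults to
# ~/.cache/templateflow - the directory we bind-mount later.
template = "MNIPediatricAsym"

try:
    from templateflow import api
    files = api.get(template, cohort="1")  # download (or reuse cached) files
    print(files)
except Exception as exc:  # package missing, or no network access
    print(f"could not fetch {template}: {exc}")
```

The resulting ~/.cache/templateflow cache is what gets bind-mounted into the container as /templateflow in the submission script below.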
After that, we can write a little Python script that submits the job via sbatch.
import os
import subprocess

# set up templateflow environment
my_env = os.environ.copy()
# tell templateflow where to look for the templates *inside the container*
my_env["SINGULARITYENV_TEMPLATEFLOW_HOME"] = "/templateflow"

# directories
bids_dir = '/path/to/bids'
slurm_output_dir = '/path/to/cluster/log/files'
os.mkdir(slurm_output_dir)

# set range of BIDS_ID numbers to be processed
for n in range(1, 100):
    p = subprocess.run(" ".join(["sbatch", "--mincpus=6", "--time=48:00:00",  # sbatch cmd
                                 "--job-name=fmriprep",
                                 "--output", slurm_output_dir + "/sub-%03d.out" % (n),
                                 "singularity", "exec", "-C",  # singularity call
                                 "-B", "/imaging/jj02/CALM:/CALM",  # FreeSurfer license location
                                 "-B", bids_dir + ":/bids",  # BIDS directory
                                 "-B", "/home/jj02/.cache/templateflow:/templateflow",  # change to your home
                                 "-B", "/home/jj02/.cache/fmriprep:/home/fmriprep",  # change to your home
                                 "-B", "/tmp:/tmp",
                                 "/imaging/local/software/singularity_images/fmriprep/fmriprep-1.5.0.simg",  # singularity image
                                 "fmriprep",  # run fMRIPrep
                                 "/bids",  # BIDS directory
                                 "/bids/derivatives/fmriprep-1.5.0",  # output directory
                                 "participant", "--participant_label", '%03d' % (n),  # participant info
                                 "-v", "-w", "/bids/derivatives/fmriprepwork-1.5.0",  # working directory
                                 "--skull-strip-template", "MNIPediatricAsym:cohort-1",  # child skull-strip template
                                 "--output-spaces", "MNIPediatricAsym:cohort-1:res-2", "T1w",  # child template
                                 "MNI152NLin6Asym:res-2", "MNI152NLin2009cAsym",  # for ICA-AROMA & carpet plot
                                 "fsaverage",  # for FreeSurfer BBR and surface-based BOLD
                                 "--use-aroma",  # ICA-AROMA denoising output
                                 "--fs-license-file", "/CALM/license.txt",  # FreeSurfer license
                                 "--write-graph",
                                 "--fd-spike-threshold", "0.5", "--dvars-spike-threshold", "0.5",
                                 "--notrack", "--resource-monitor",
                                 "--skip-bids-validation"]),  # skip for cluster jobs as it tries to go online
                       shell=True, stdout=subprocess.PIPE,
                       stderr=subprocess.PIPE, env=my_env)
    print(p.args)
    print(p.stdout.decode())
    print(p.stderr.decode())
Notice that several bind mounts (the -B flags) are used to make various paths available inside the container, including the /templateflow path, where the template we downloaded natively becomes visible inside the container.
An example notebook that also implements MRIQC is available for download here.