Automated DTI preprocessing with FSL, Diffusion Toolkit, and LONI Pipeline

Download: http://github.com/johncolby/DTI-Preprocessing

This page collects information on our current DTI preprocessing workflow. We use a combination of FSL tools (for TBSS and bedpostx/probtrackx) and Diffusion Toolkit (for streamline fiber tracking). The preprocessing for both of these routes consists of running the raw diffusion-weighted image volumes through a handful of modular Unix-style command line tools. This makes using the LONI Pipeline very appealing, since it can automate this whole process in a way that is parallelized and reproducible.

Underlying tools

  • FSL (bet, eddy_correct, dtifit, bedpostx/probtrackx, and the TBSS tools)
  • Diffusion Toolkit (dti_recon, dti_tracker, spline_filter)
  • LONI Pipeline

Directory layout

To use these scripts as-is, the following directory structure should be used (but of course the tweaks to adapt this to other layouts are simple).

  • exptDir - The base experiment directory.
    • ANALYSIS - Output from group-level QC/analysis steps, such as the slicesdir reports (see QC below), goes in here.
    • PIPELINE - The Pipeline input/output lists get generated in here.
      • grad - Put the gradient tables in here.
    • SCRIPTS - After I edit the .pipe and setup script for a particular set of data, I save them in here.
    • SUBJECTS - Contains all of the individual subject folders.
      • 20037 - An example subject folder.
        • 1avg, 2avg, etc. - Folders generated with the output of Pipeline runs.
        • RAW - Contains the raw 4D DWI series. These are usually just symbolic links to the real files in a centralized repository area that contains all our raw data.
          • reject - Any rejected files get moved in here with a note.

Here is the layout of folders:

/path/to/exptDir/
|-- ANALYSIS
|   |-- slicesdir_1avg
|   `-- slicesdir_2avg
|-- PIPELINE
|   `-- grad
|-- SCRIPTS
|-- SUBJECTS
|   |-- 20037
|   |   |-- 1avg
|   |   |   |-- diffusion_toolkit
|   |   |   |-- dtifit
|   |   |   `-- track
|   |   `-- RAW
|   |       `-- reject
|   |-- 20056
|   |   |-- 1avg
|   |   |   |-- diffusion_toolkit
|   |   |   |-- dtifit
|   |   |   `-- track
|   |   |-- 2avg
|   |   |   |-- diffusion_toolkit
|   |   |   |-- dtifit
|   |   |   `-- track
|   |   `-- RAW
...

Gradient tables (bvecs/bvals)

FSL

  • Wide format
  • bvals go in a separate file
  • Must explicitly include entries for all b0s and scan repetitions
    • i.e. you'll need separate bvecs/bvals files when using 4D files with differing numbers of scan repetitions stacked together.
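
For example, a hypothetical acquisition with one b0 volume and six diffusion directions would give FSL-style files like these (the direction values are illustrative, not from a real scanner table):

$ cat PIPELINE/grad/bvecs
0  1  0  0  0.707   0.707  0
0  0  1  0  0.707  -0.707  0.707
0  0  0  1  0       0      0.707
$ cat PIPELINE/grad/bvals
0 1000 1000 1000 1000 1000 1000

Three rows (x, y, z components), one column per volume, with the b0 column included.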

Diffusion Toolkit

  • Long format
  • bvals get specified at the command line or in the GUI
  • Do not include rows for b0s
  • Automatically expands the bvecs if it senses multiple scans
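
Continuing the hypothetical six-direction example above, the equivalent Diffusion Toolkit gradient table drops the b0 row and lists one direction per line (the file name here is an assumption):

$ cat PIPELINE/grad/gradient_table.txt
1, 0, 0
0, 1, 0
0, 0, 1
0.707, 0.707, 0
0.707, -0.707, 0
0, 0.707, 0.707

The b value itself (e.g. 1000) is then supplied on the dti_recon command line rather than in this file.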

MATLAB Pipeline setup script

I like to create Pipeline workflows that use lists for inputs and outputs. This has two advantages: 1) once the .pipe file is created, it can be locked to prevent accidental editing, and 2) a separate script can be used to impose more complex logic on the subjects that get fed into Pipeline. A MATLAB script called make_DTI_lists.m is included for this purpose.

This script is set up as a function that takes the following input parameters:

  • exptDir - Path of the base experiment directory from above.
  • outName - Desired output directory name to go inside each subject folder (ex: 1avg)
  • nScans - The number of scans to include in the run. Only subjects with at least this many scans will be included.
  • idStr - A string that distinguishes raw DWI series from other files that may be in the RAW folder.
  • subIDs - A list of subjects to potentially process.

The function can then be called interactively from the MATLAB prompt, or non-interactively from the shell as sketched below.
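
For instance, a shell invocation along these lines should work (the argument types here, like the cell array of subject IDs, are assumptions about the function's interface; check make_DTI_lists.m for the actual signature):

$ matlab -nodisplay -r "make_DTI_lists('/path/to/exptDir', '2avg', 2, '30DIR', {'20037', '20056'}); exit"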

Python Pipeline setup CLI

A make_DTI_lists.py Python command line tool is also included. This works the same way as the make_DTI_lists.m setup script, but doesn't rely on MATLAB, and can be executed straight from the UNIX command line.

$ ./make_DTI_lists.py -h
usage: make_DTI_lists.py [-h] [-s SUBIDS [SUBIDS ...]]
                         exptDir outName nScans idStr

For a given experiment directory, this script checks to see how many DTI scan
repetitions are present, and generates the appropriate input lists for the
DTI_preprocessing.pipe workflow in the <exptDir>/PIPELINE directory.

positional arguments:
  exptDir               Path to base experiment directory. Should contain
                        SUBJECTS and PIPELINE directories.
  outName               Desired output directory name that will be created in
                        each <exptDir>/SUBJECTS/<subID>/ directory.
  nScans                Choose the number of scans to average.
  idStr                 Specify a string to identify raw DWI series.

optional arguments:
  -h, --help            show this help message and exit
  -s SUBIDS [SUBIDS ...]
                        Subset of subject ID directories in <exptDir>/SUBJECTS
                        to process. Default is to process all subjects.

Example: make_DTI_lists.py /path/to/exptDir 2avg 2 30DIR -s `ls /path/to/exptDir/SUBJECTS | grep <some pattern goes here>`

Pipeline workflow

For a given subject list, this workflow:

  1. Stacks the repeated scans together with fslmerge.
  2. Performs affine registration with eddy_correct to correct for eddy-current-induced geometric distortions.
  3. Performs skull-stripping with bet.
  4. Sets up a track folder with the appropriate inputs for the bedpostx probabilistic tractography tool.
  5. Performs tensor fitting with dtifit.
  6. Performs streamline tractography with Diffusion Toolkit, including
    1. Tensor fitting with dti_recon.
    2. Fiber tracing with dti_tracker.
    3. Streamline filtering with spline_filter.
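
For orientation, here is a rough shell sketch of the same steps run by hand for a single subject. The scan file names, gradient table name, and option values (BET -f threshold, b value, turning angle) are illustrative assumptions, not the exact settings of the Pipeline modules:

$ cd SUBJECTS/20037/1avg
$ mkdir dtifit diffusion_toolkit track

# 1. Stack the repeated scans into one 4D file.
$ fslmerge -t data_raw RAW/scan1.nii.gz RAW/scan2.nii.gz

# 2. Affine registration of each volume to the first (eddy current correction).
$ eddy_correct data_raw data 0

# 3. Skull-strip a b0 volume to get a brain mask.
$ fslroi data nodif 0 1
$ bet nodif nodif_brain -f 0.3 -m

# 4. Set up the track folder with the inputs bedpostx expects.
$ cp data.nii.gz nodif_brain_mask.nii.gz track/
$ cp ../../../PIPELINE/grad/bvecs ../../../PIPELINE/grad/bvals track/

# 5. FSL tensor fitting.
$ dtifit -k data -m nodif_brain_mask -r ../../../PIPELINE/grad/bvecs \
    -b ../../../PIPELINE/grad/bvals -o dtifit/dti

# 6. Diffusion Toolkit tensor fitting, fiber tracking, and spline filtering.
$ dti_recon data.nii.gz diffusion_toolkit/dti -gm ../../../PIPELINE/grad/gradient_table.txt -b 1000 -ot nii.gz
$ dti_tracker diffusion_toolkit/dti diffusion_toolkit/track_tmp.trk -it nii.gz -at 35 -m diffusion_toolkit/dti_dwi.nii.gz
$ spline_filter diffusion_toolkit/track_tmp.trk 1 diffusion_toolkit/dti.trk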

Setup

  • Select Window → Variables, and change the exptDir variable to the same path as above.
  • Make sure that the wrapper scripts point to the right executables by changing /path/to/fsl according to your setup:
    $ cat wrappers/dtifit.sh
    #! /bin/bash
     
    FSLDIR="/path/to/fsl"
    ...
  • Make sure the Pipeline modules point to the right wrapper scripts. It would be nice to do this path setup automatically with some setup_paths.sh script (a rough sketch follows this list). TODO
  • Inspect/change the other module options (ex: BET thresholds, FA & fiber turning angle limits, etc.)
  • Make sure properly formatted bvecs/bvals are in the grad folder.
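
Along the lines of that setup_paths.sh TODO, a minimal sketch might look like this, assuming the literal placeholder strings /path/to/fsl and /path/to/wrappers appear in the wrapper scripts and the .pipe file (adjust the names to match the actual repository):

#! /bin/bash
# Hypothetical setup_paths.sh: swap the real paths in for the placeholders.
FSL_REAL="/usr/local/fsl"          # assumption: your FSL install location
WRAPPERS_REAL="$(pwd)/wrappers"    # assumption: where these wrappers live

for f in wrappers/*.sh; do
    sed -i.bak "s|/path/to/fsl|${FSL_REAL}|g" "$f"
done
sed -i.bak "s|/path/to/wrappers|${WRAPPERS_REAL}|g" DTI_preprocessing.pipe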

Notes

Wrapper scripts

Most FSL tools assume that you've sourced the main FSL environment setup script. Since the pipeline is executed by a different user, this typically won't be true. Therefore, when pipelining FSL tools it's usually a good idea to make a wrapper script that sources the environment before calling the underlying tool, as sketched below.
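
A complete wrapper along the lines of the dtifit.sh fragment shown above might look like this (fslconf/fsl.sh is FSL's standard environment script; the rest is a sketch):

#! /bin/bash
# Hypothetical full wrapper: set up the FSL environment, then pass all
# arguments straight through to the real dtifit binary.
FSLDIR="/path/to/fsl"
PATH="${FSLDIR}/bin:${PATH}"
. "${FSLDIR}/etc/fslconf/fsl.sh"
export FSLDIR PATH

dtifit "$@"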

QC

The slicesdir script in FSL is a nice way to quickly generate a QC report, as it will let you glob together files spread out across many directories. For example, if I just ran the Pipeline to do the 2avg processing, I could execute the following from the exptDir:

$ slicesdir SUBJECTS/*/2avg/diffusion_toolkit/dti_fa.nii.gz

Provenance

One really neat thing about putting a preprocessing workflow like this in Pipeline is that it will automatically save a copy of the workflow - including all parameter settings - as a .prov file next to any output files that are generated. This means that there will never be any confusion over which settings were used to generate a given file.

MATLAB batch mode

If you are on a machine that doesn't allow interactive jobs, you'll need to modify make_DTI_lists.m so that it can be executed as a script. To do this,

  1. Remove the function definition at the top.
  2. Add explicit definitions for the variables that would normally be taken in as input arguments.
  3. Execute the script from the Unix command line like matlab -nodisplay < make_DTI_lists.m.
    1. Alternatively, submit this job to the cluster with qsub or the fsl_sub wrapper.
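
For example, wrapping the call in a tiny helper script (a hypothetical run_make_lists.sh) makes it easy to hand off to the scheduler:

$ cat run_make_lists.sh
#! /bin/bash
matlab -nodisplay < make_DTI_lists.m

$ qsub run_make_lists.sh       # submit directly to SGE
$ fsl_sub ./run_make_lists.sh  # or via FSL's submission wrapper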