Commandline Usage¶
Introduction¶
The Connectome Mapper 3 BIDS App takes as its principal input the path of the dataset to be processed.
The input dataset is required to be in valid BIDS format, and it must include at least one T1w or MPRAGE structural image.
We highly recommend that you validate your dataset with the free, online
BIDS Validator.
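For reference, a minimal valid layout could look like the following (dataset name, subject label, and file organization are purely illustrative):

ds001/
├── dataset_description.json
├── participants.tsv
└── sub-01/
    └── anat/
        ├── sub-01_T1w.json
        └── sub-01_T1w.nii.gz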
Commandline Arguments¶
The command to run Connectome Mapper 3 follows the BIDS-Apps definition, with additional options for loading pipeline configuration files.
Entrypoint script of the BIDS-App Connectome Mapper version v3.0.0-beta-20191021
usage: connectomemapper3 [-h]
[--participant_label PARTICIPANT_LABEL [PARTICIPANT_LABEL ...]]
[--anat_pipeline_config ANAT_PIPELINE_CONFIG]
[--dwi_pipeline_config DWI_PIPELINE_CONFIG]
[--func_pipeline_config FUNC_PIPELINE_CONFIG]
[--number_of_participants_processed_in_parallel NUMBER_OF_PARTICIPANTS_PROCESSED_IN_PARALLEL]
[--fs_license FS_LICENSE] [-v]
bids_dir output_dir {participant,group}
Positional Arguments¶
bids_dir
    The directory with the input dataset formatted according to the BIDS standard.

output_dir
    The directory where the output files should be stored. If you are running group level analysis, this folder should be prepopulated with the results of the participant level analysis.

analysis_level
    Possible choices: participant, group
    Level of the analysis that will be performed. Multiple participant level analyses can be run independently (in parallel) using the same output_dir.
Named Arguments¶
--participant_label
    The label(s) of the participant(s) that should be analyzed. The label corresponds to sub-<participant_label> from the BIDS spec (so it does not include "sub-"). If this parameter is not provided, all subjects will be analyzed. Multiple participants can be specified with a space-separated list (see the example below).

--anat_pipeline_config
    Configuration .txt file for the processing stages of the anatomical MRI pipeline.

--dwi_pipeline_config
    Configuration .txt file for the processing stages of the diffusion MRI pipeline.

--func_pipeline_config
    Configuration .txt file for the processing stages of the fMRI pipeline.

--number_of_participants_processed_in_parallel
    The number of subjects to be processed in parallel (one core used by default).

--fs_license
    FreeSurfer license.txt file.

-v, --version
    Show program's version number and exit.
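As an illustration of how these options combine, a hypothetical call to the entrypoint (all paths and labels are placeholders) could process two participants in parallel:

connectomemapper3 /bids_dir /output_dir participant \
    --participant_label 01 02 \
    --number_of_participants_processed_in_parallel 2 \
    --anat_pipeline_config /code/ref_anatomical_config.ini \
    --fs_license /code/license.txt

In practice the entrypoint is typically invoked through the Docker image, as shown in the next section.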
Participant Level Analysis¶
To run the Docker image in participant level mode (for one participant):
docker run -it --rm \
-v /home/localadmin/data/ds001:/tmp \
-v /media/localadmin/data/ds001/derivatives:/tmp/derivatives \
-v /usr/local/freesurfer/license.txt:/tmp/code/license.txt \
sebastientourbier/connectomemapper3:latest \
/tmp /tmp/derivatives participant --participant_label 01 \
--anat_pipeline_config /tmp/code/ref_anatomical_config.ini \
(--dwi_pipeline_config /tmp/code/ref_diffusion_config.ini \)
(--func_pipeline_config /tmp/code/ref_fMRI_config.ini \)
Note
The local directory of the input BIDS dataset (here: /home/localadmin/data/ds001) and the output directory (here: /media/localadmin/data/ds001/derivatives) have to be mapped to the container folders /tmp and /tmp/derivatives respectively, using the -v docker run option.
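Each -v flag follows Docker's bind-mount syntax, i.e. an absolute host path and a container path separated by a colon (the paths below are placeholders):

-v /absolute/path/on/host:/absolute/path/in/container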
Note
At least a configuration file describing the processing stages of the anatomical pipeline must be provided. The diffusion and/or functional MRI pipelines are run only if a corresponding configuration file is set.
Debugging¶
Logs are written to <output_dir>/cmp/sub-<participant_label>/sub-<participant_label>_log-cmpbidsapp.txt.
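For example, with the output directory and participant label used in the Docker command above, the corresponding log could be inspected with:

cat /media/localadmin/data/ds001/derivatives/cmp/sub-01/sub-01_log-cmpbidsapp.txt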
Support and communication¶
The documentation of this project is found here: https://connectome-mapper-3.readthedocs.io/en/latest/.
All bugs, concerns and enhancement requests for this software can be submitted here: https://gitlab.com/connectomicslab/connectomemapper3/issues.
If you run into any problems or have any questions, you can post to the CMTK-users group.
Not running on a local machine? - Data transfer¶
If you intend to run connectomemapper3 on a remote system, you will need to make your data available on that system first. Comprehensive solutions such as DataLad handle data transfers with the appropriate settings and commands, and DataLad also performs version control over your data.
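For instance, a DataLad-managed copy of the dataset could be fetched onto the remote system as sketched below (the source URL is a placeholder):

# Install (clone) the dataset from its remote location
datalad install -s https://example.org/datasets/ds001 /home/localadmin/data/ds001
# Retrieve the actual file content needed for processing
datalad get -r /home/localadmin/data/ds001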