After running the individual segmentations, checking the output, and creating the files, we are ready to run the "Vertex Analysis." A model is created with a single command, but there are several options.
Creating the Statistical Model
This is the equivalent of "Estimating" the SPM.mat file in SPM.
first_utils --vertexAnalysis --usebvars -i concatenated_bvars -d design.mat -o output_basename [--useReconNative --useRigidAlign] [--useReconMNI] [--usePCAfilter -n number_of_modes]
...
[user@localhost temp]$ first_utils --vertexAnalysis --usebvars -i L_Hipp_all.bvars -d des_2sample.mat -o L_Hipp_vertexMNI_2sample --useReconMNI
The various options are described in the User Guide under Vertex Analysis, Usage. It is important to settle on a naming convention for the output files. I suggest starting with the structure name, adding "_vertex" to indicate that the file is the output of the vertex analysis, and then adding suffixes. Here is a convention:
- "_vertexNative" ==>> "--useReconNative --useRigidAlign" (volume differences)
- "_vertexNativeScale" ==>> "--useReconNative --useRigidAlign --useScale" (shape differences)
- "_vertexMNI" ==>> "--useReconMNI --useRigidAlign" (volume differences accounting for head size)
- "_vertexMNIScale" ==>> "--useReconMNI --useRigidAlign --useScale" (differences after accounting for head size/shape and structure size; hard to interpret, but should have the lowest variability)
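Under this convention, the four analysis variants can be sketched as a shell dry run. The structure name (L_Hipp) and design file (design.mat) are placeholders; substitute your own, then remove the echo (or pipe the output to sh) to actually run the commands:

```shell
# Dry-run sketch: print the four first_utils variants matching the
# naming convention above. STRUCT and DESIGN are placeholder names.
STRUCT=L_Hipp
DESIGN=design.mat

vertex_cmds() {
    echo "first_utils --vertexAnalysis --usebvars -i ${STRUCT}_all.bvars -d ${DESIGN} -o ${STRUCT}_vertexNative --useReconNative --useRigidAlign"
    echo "first_utils --vertexAnalysis --usebvars -i ${STRUCT}_all.bvars -d ${DESIGN} -o ${STRUCT}_vertexNativeScale --useReconNative --useRigidAlign --useScale"
    echo "first_utils --vertexAnalysis --usebvars -i ${STRUCT}_all.bvars -d ${DESIGN} -o ${STRUCT}_vertexMNI --useReconMNI --useRigidAlign"
    echo "first_utils --vertexAnalysis --usebvars -i ${STRUCT}_all.bvars -d ${DESIGN} -o ${STRUCT}_vertexMNIScale --useReconMNI --useRigidAlign --useScale"
}

vertex_cmds
```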
The output of this step is a 4D NIfTI file (*.nii.gz) with the name given by the -o option, plus a _mask file with a similar name. Additional files with the same name as the design matrix but different extensions are also created: *.con for t-contrasts and *.fts for F-contrasts.
The 4D file contains a value for each subject at each voxel on the boundary of the mean surface (defined by the mask file), expressed as the distance from the mean in voxel units. (Note that a member of the FSL team posted in 2013 that the units may change to mm at some point.) Here is an example from the hippocampus, where the distance is positive (values can be negative as well).
Analysis options (from the User Guide)
The --useReconNative option carries out vertex analysis in native space and is used together with the --useRigidAlign option. Alternatively, the --useReconMNI option carries out vertex analysis in MNI standard space, which normalises for brain size. It is difficult to say which will be more sensitive to changes in shape, so it may be interesting to try both --useReconNative and --useReconMNI. Also note that by default the --useScale option is not used. Without --useScale, changes in both local shape and size can be found in the shape analysis; this type of finding can be interpreted, for example, as local atrophy. With the --useScale option, overall changes in size are lost.
More notes from a practical:
- To run vertex analysis, you will need the .bvars files output by FIRST and a design matrix. These contain all the information required by first_utils.
- It may sometimes be desirable to reconstruct the surfaces in native space (i.e., without the affine normalization to MNI152 space). To do this, use the --useReconNative and --useRigidAlign options instead of --useReconMNI.
- When using the --useRigidAlign flag, first_utils will align each surface to the mean shape (from the model used by FIRST) with 6 degrees of freedom (translation and rotation). The transformation is calculated such that the sum of squared distances between corresponding vertices is minimized. This flag is required when using --useReconNative, but it can also be used with --useReconMNI to remove local rigid-body differences.
- The --useScale flag can be used in combination with --useRigidAlign to align the surfaces using 7 degrees of freedom; --useScale tells first_utils to also remove global scaling.
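The design matrix mentioned above is a small plain-text file in FSL's VEST format. Here is a minimal sketch that writes a two-group design matrix; the group sizes and output name are illustrative assumptions, and the rows must follow the subject order used when concatenating the .bvars files:

```shell
# Sketch: write a two-group FSL design matrix (VEST text format).
# Group sizes and file name are assumptions for illustration; row order
# must match the subject order in the concatenated .bvars file.
write_two_group_design() {  # $1 = output file, $2 = n group 1, $3 = n group 2
    out=$1; n1=$2; n2=$3
    printf '/NumWaves 2\n' > "$out"              # number of EVs (columns)
    printf '/NumPoints %d\n' $((n1 + n2)) >> "$out"  # number of subjects (rows)
    printf '/Matrix\n' >> "$out"
    i=0; while [ $i -lt $n1 ]; do printf '1 0\n' >> "$out"; i=$((i + 1)); done
    i=0; while [ $i -lt $n2 ]; do printf '0 1\n' >> "$out"; i=$((i + 1)); done
}

write_two_group_design des_2sample.mat 10 12
```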
Running the Analysis
randomise is analogous to "Results" in SPM in that it performs a statistical test on a contrast. The inputs (the analysis file and the mask) are the outputs of the previous step.
[user@localhost temp]$ randomise -i L_Hipp_vertexMNI_2sample.nii.gz -m L_Hipp_vertexMNI_2sample_mask.nii.gz -o L_Hipp_vertexMNI_2sample_rand -d des_2sample.mat -t des_2sample.con -f des_2sample.fts --fonly -D (output/multiple comparison correction options)
Output and multiple comparison correction options (see the table in the randomise User Guide):
Possible options include -x, --T2, -F <threshold*>, -S <threshold>
* In principle, this could also run without a threshold; however, it usually gives an error ("F missing non-optional argument" or similar).
Just as in SPM "Results" we specify statistical tests and thresholds, we do the same with randomise (see here for initial details). A starting point might be -F 3: look at bi-directional effects (--fonly), after de-meaning the data (-D), with an F threshold of 3 and cluster-based correction for multiple comparisons. The guide (see Running vertex analysis) suggests always using de-meaning and the F-test only.
The "-o" indicates the output file name, so you probably want to label this with the test parameters (e.g., -o L_Hipp_vertexMNI_2sample_rand_F3 for the suggested options).
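One way to follow that suggestion is to build the label into the output basename programmatically. Here is a dry-run sketch using the file names from the examples in this section; remove the echo to actually run randomise:

```shell
# Sketch (dry run): embed the thresholding option in the randomise output
# basename so that different runs do not overwrite each other's files.
BASE=L_Hipp_vertexMNI_2sample

rand_cmd() {  # $1 = label for -o; remaining args = correction options
    label=$1; shift
    echo "randomise -i ${BASE}.nii.gz -m ${BASE}_mask.nii.gz -o ${BASE}_rand_${label} -d des_2sample.mat -t des_2sample.con -f des_2sample.fts --fonly -D $*"
}

rand_cmd F3 -F 3
rand_cmd TFCE --T2
```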
Examples
Refer to the table in the randomise User Guide for the files that are created.
[user@localhost temp]$ randomise -i L_Hipp_vertexMNI_2sample.nii.gz -m L_Hipp_vertexMNI_2sample_mask.nii.gz -o L_Hipp_vertexMNI_2sample_rand -d des_2sample.mat -t des_2sample.con -f des_2sample.fts --fonly -D -F 3
[user@localhost temp]$ randomise -i L_Hipp_vertexMNI_2sample.nii.gz -m L_Hipp_vertexMNI_2sample_mask.nii.gz -o L_Hipp_vertexMNI_2sample_rand -d des_2sample.mat -t des_2sample.con -f des_2sample.fts --fonly -D -x
[user@localhost temp]$ randomise -i L_Hipp_vertexMNI_2sample.nii.gz -m L_Hipp_vertexMNI_2sample_mask.nii.gz -o L_Hipp_vertexMNI_2sample_rand -d des_2sample.mat -t des_2sample.con -f des_2sample.fts --fonly -D --T2
[user@localhost temp]$ randomise -i L_Hipp_vertexMNI_2sample.nii.gz -m L_Hipp_vertexMNI_2sample_mask.nii.gz -o L_Hipp_vertexMNI_2sample_rand -d des_2sample.mat -t des_2sample.con -f des_2sample.fts --fonly -D -S 3
Each analysis will generate a number of files, one set for each contrast: maps of F statistics, p-values, and corrected p-values, plus the regions that are significantly different at the chosen threshold if one was included (e.g., F statistic > 3 in the -F 3 example above):