After running the individual segmentations, checking the output, and creating the required input files (the concatenated bvars file and the design matrix), we are ready to run the "Vertex Analysis." A model is created with a single command, but there are several options.
Creating the Statistical Model
This is the equivalent of "Estimating" the SPM.mat file in SPM.
first_utils --vertexAnalysis --usebvars -i concatenated_bvars -d design.mat -o output_basename [--useReconNative --useRigidAlign] [--useReconMNI] [--usePCAfilter -n number_of_modes]
Based on earlier examples, this is one possible command line (assuming you are in the directory with the files):
[user@localhost temp]$ first_utils --vertexAnalysis --usebvars -i L_Hipp_all.bvars -d design.mat -o L_Hipp_vertexMNI --useReconMNI
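The concatenated bvars input (L_Hipp_all.bvars above) is the file built earlier by combining the per-subject FIRST outputs. If it has not been created yet, FSL's concat_bvars script will do it; the subject file names below are hypothetical placeholders:

[user@localhost temp]$ concat_bvars L_Hipp_all.bvars sub01-L_Hipp_first.bvars sub02-L_Hipp_first.bvars sub03-L_Hipp_first.bvars

Note that the order of the input .bvars files must match the row order of the subjects in design.mat.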
The various options are described in the User Guide under Vertex Analysis, Usage. It is important to settle on a naming convention for the output files. I suggest starting with the structure name, adding "_vertex" to indicate that the file is the output of the vertex analysis, and then appending a suffix for the registration options used (an example command follows the list). Here is a convention:
- "_vertexNative" ==>> "--useReconNative --useRigidAlign"
- "_vertexNativeScale" ==>> "--useReconNative --useRigidAlign –useScale"
- "_vertexMNI" ==>> "--useReconMNI --useRigidAlign"
- "_vertexMNIScale" ==>> "--useReconMNI --useRigidAlign –useScale"
The output of this step is a 4D NIfTI file (*.nii.gz) named as indicated by the -o option, along with a _mask file with a similar name. Additional files with the same name as the design matrix but with different extensions are also created: *.con for t-contrasts and *.fts for F-contrasts.
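For reference, design.mat, design.con, and design.fts are plain-text files in FSL's VEST format (the same format written by the Glm GUI and Text2Vest). As a hypothetical sketch for a two-group comparison with ten subjects (five per group):

design.mat (one column per group):
/NumWaves 2
/NumPoints 10
/Matrix
1 0
1 0
1 0
1 0
1 0
0 1
0 1
0 1
0 1
0 1

design.con (the two one-tailed t-contrasts, group1 > group2 and group2 > group1):
/NumWaves 2
/NumContrasts 2
/Matrix
1 -1
-1 1

design.fts (a single F-test spanning both t-contrasts):
/NumWaves 2
/NumContrasts 1
/Matrix
1 1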
Running the Analysis
The randomise tool is like "Results" in SPM in that it performs a statistical test on a contrast. The inputs are the outputs from the previous step.
[user@localhost temp]$ randomise -i L_Hipp_vertexMNI.nii.gz -m L_Hipp_vertexMNI_mask.nii.gz -o L_Hipp_vertexMNI_rand -d design.mat -t design.con -f design.fts [more options]
Just as in SPM Results we specify statistical tests (F or t) and thresholds, we do the same with randomise (see the randomise documentation for initial details). A starting point for the additional options might be
--fonly -D -F 3
to look at bidirectional effects (--fonly), after demeaning the data (-D), with an F threshold of 3 and cluster-based correction for multiple comparisons.
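Putting these threshold options together with the earlier command line:

[user@localhost temp]$ randomise -i L_Hipp_vertexMNI.nii.gz -m L_Hipp_vertexMNI_mask.nii.gz -o L_Hipp_vertexMNI_rand -d design.mat -t design.con -f design.fts --fonly -D -F 3

With these options randomise should write, among other outputs, the raw F statistic image (L_Hipp_vertexMNI_rand_fstat1.nii.gz) and a cluster-corrected p-value image (L_Hipp_vertexMNI_rand_clustere_corrp_fstat1.nii.gz). The corrected images store 1-p, so voxels with values above 0.95 survive correction at p < 0.05.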