
After running the individual segmentations, checking the output, and creating the files, we are ready to run the "Vertex Analysis." A model is created with a single command, but there are several options.

Creating Statistical Model

This is the equivalent of "Estimating" the SPM.mat file in SPM.

first_utils --vertexAnalysis --usebvars -i concatenated_bvars -d design.mat -o output_basename [--useReconNative --useRigidAlign ] [--useReconMNI] [--usePCAfilter -n number_of_modes]

Based on earlier examples, this is one possible command line (assuming you are in the directory with the files):

[user@localhost temp]$ first_utils --vertexAnalysis --usebvars -i L_Hipp_all.bvars  -d design.mat -o L_Hipp_vertexMNI --useReconMNI  

The various options are described in the User Guide under Vertex Analysis, Usage. It is important to settle on a naming convention for the output files. I suggest starting with the structure name, adding "_vertex" to indicate that the file is the output of the vertex analysis, and then adding suffixes for the options used. Here is a convention (example commands for each appear after the list):

  • "_vertexNative" ==>> "--useReconNative --useRigidAlign" 
  • "_vertexNativeScale" ==>> "--useReconNative --useRigidAlign –useScale
  • "_vertexMNI" ==>> "--useReconMNI --useRigidAlign" 
  • "_vertexMNIScale" ==>> "--useReconMNI --useRigidAlign –useScale"

The output of this step is a 4D NIfTI file (*.nii.gz) with the name given in the -o option, and a _mask file with a similar name. Additional files with the same name as the design matrix but with different extensions are also created: *.con for the t-contrast, *.fts for the F-contrast.
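
As a quick sanity check (a sketch only; the file names assume the -o basename from the example above), you can list the outputs and inspect the 4D file with standard FSL tools:

[user@localhost temp]$ ls L_Hipp_vertexMNI*
[user@localhost temp]$ fslinfo L_Hipp_vertexMNI.nii.gz

You would expect the fourth dimension reported by fslinfo to correspond to the subjects in your concatenated bvars/design.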

Analysis options (from the User Guide)

The --useReconNative option carries out vertex analysis in native space (used together with the --useRigidAlign option). The --useReconMNI option instead carries out vertex analysis in MNI standard space, which normalises for brain size. It is difficult to say which will be more sensitive to changes in shape, so it may be worth trying both the --useReconNative and --useReconMNI options. Note that the --useScale option is not used here. Without the --useScale option, changes in both local shape and size can be found in the shape analysis; this type of finding can be interpreted, for example, as local atrophy. With the --useScale option, overall changes in size are removed.

Running Analysis

The randomise tool is like "Results" in SPM in that it carries out a statistical test on a contrast. The inputs are the outputs from the previous step.

[user@localhost temp]$ randomise -i L_Hipp_vertexMNI.nii.gz -m L_Hipp_vertexMNI_mask.nii.gz -o L_Hipp_vertexMNI_rand -d design.mat -t design.con -f design.fts (more options required)

more options:
Just as in SPM Results we specify statistical tests (F or t) and thresholds, so we do the same with randomise (see the FSL randomise documentation for initial details). A starting point might be

--fonly -D -F 3

to look at bi-directional effects (--fonly, i.e., F-tests only), after de-meaning the data (-D), with cluster-based thresholding of the F statistics at a cluster-forming threshold of 3 (-F 3), which provides cluster-based correction for multiple comparisons.

Note: the *.fts file is only needed for F-tests; it can be left out for t-tests.

The "-o" indicates the output file name, so you probably want to label this with the test parameters (e.g., -o L_Hipp_vertexMNI_rand_D_F3 for the suggested options).

Example 1: t-tests

To run a t-test:

[user@localhost temp]$ randomise -i L_Hipp_vertexMNI.nii.gz -m L_Hipp_vertexMNI_mask.nii.gz -o L_Hipp_vertexMNI_rand -d design.mat -t design.con -T 

Note: the -T option turns on Threshold-Free Cluster Enhancement (TFCE) and does not take a threshold value. If you prefer cluster-based thresholding with an explicit t threshold, use -c <threshold> instead (e.g., "-c 2" for a cluster-forming t threshold of 2).

This will generate a number of files, one set for each contrast (I think!). There are maps of t statistics, uncorrected p values, and corrected p values; the regions that are significantly different are the ones that survive thresholding of the corrected p maps.
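
To get a quick look at whether anything survives correction, you can check the range of a corrected p image (a sketch, assuming randomise's default output naming for the TFCE run above, e.g. *_tfce_corrp_tstat1.nii.gz; adjust to the file names you actually see):

[user@localhost temp]$ fslstats L_Hipp_vertexMNI_rand_tfce_corrp_tstat1.nii.gz -R

The corrected p images store 1-p, so a maximum value above 0.95 means at least one vertex is significant at corrected p < 0.05.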

 





