Multimodal MRI Image Analysis
Each MRI modality provides a unique channel of information for viewing and understanding one aspect
of the brain's structural or functional organization. In combination, however, these modalities provide
complementary and mutually informative data about tissue organization that is more than their sum.
Moreover, using them in combination can help us understand causal mechanisms linking the measures
from the different modalities. Incorporating multiple, informative imaging modalities can therefore
tell us much more about the neural basis of behavior than any single imaging modality can alone.
Furthermore, multimodal imaging can aid the study of the neurobiological determinants of disease
states. The findings from one modality, for example, can help to constrain the interpretation of
findings from another modality, thereby improving the neurobiological validity of those findings and
interpretations.
Currently, no other software package offers a pipeline that integrates all four of the modalities
processed here.
Part 1. Spatial Normalization of Multimodal Images
Voxel‐wise correlations require the precise normalization of data from various modalities into a
template space. Multimodal imaging data are spatially normalized into the coordinate space defined by
the high‐resolution, T1‐weighted anatomical image of the template brain. To facilitate voxel‐wise
statistical analyses across these images with differing resolutions, each of the imaging datasets is
normalized into the common coordinate space by reformatting the DT and fMRI images to voxels 1x1x1
mm3 in size. They are then spatially normalized by first co‐registering them using a rigid body similarity
transformation, and then nonlinearly warping the images using a method based on fluid dynamics.
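As an illustration of this resampling step, the following minimal Python sketch applies a rigid-body transform to a volume using SciPy; the function and variable names are ours, not drawn from any particular package.

    import numpy as np
    from scipy.ndimage import affine_transform

    def apply_rigid(volume, rotation, translation):
        # Resample `volume` under a rigid-body transform given by a 3x3
        # rotation matrix and a 3-vector translation (in voxel units).
        # affine_transform maps each *output* coordinate back to an input
        # coordinate, so we pass the inverse of the forward transform.
        inv_rot = rotation.T                  # inverse of a rotation matrix
        inv_off = -inv_rot @ translation      # matching offset
        return affine_transform(volume, inv_rot, offset=inv_off, order=1)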
1. Anatomical Images
(1) Smooth the source image using a Gaussian kernel with full width at half maximum (FWHM) = 8 mm
to reduce noise, and then resample the image to voxel dimensions of 1x1x1 mm3 (a sketch of
these two steps follows this list).
(2) Coregister the resampled image to the template image using an affine transformation with
12 parameters. This step generates a deformation field (D4) between the two images that
encodes the transformation mapping each point in the source image to a point in the
target image.
(3) Nonlinearly warp the coregistered image into the coordinate space of the template
image, generating a high-dimensional, nonlinear deformation field (D5).
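A minimal Python/SciPy sketch of steps (1) and (2) above, assuming the source voxel dimensions are known; all names are illustrative.

    import numpy as np
    from scipy.ndimage import gaussian_filter, zoom

    FWHM_TO_SIGMA = 1.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # sigma ~= FWHM / 2.355

    def smooth_and_resample(volume, voxel_mm, fwhm_mm=8.0, target_mm=1.0):
        # Convert the 8 mm FWHM to a per-axis sigma in voxel units.
        sigma_vox = fwhm_mm * FWHM_TO_SIGMA / np.asarray(voxel_mm)
        smoothed = gaussian_filter(volume, sigma=sigma_vox)
        # Resample to 1x1x1 mm3 voxels by trilinear interpolation.
        return zoom(smoothed, np.asarray(voxel_mm) / target_mm, order=1)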
2. Diffusion Tensor Images
(1) Warp the FA image from each participant to the same person's high-resolution T1-
weighted anatomical image by applying two successive deformation fields, the first (D1) an
affine transformation (3 rotations, 3 translations, and global scale) and the second (D2) a
high-dimensional, nonlinear warping. The application of D1 and D2 transforms the source
FA maps from their native space into the coordinate space of the anatomical image for each
participant.
(2) Successively apply the same two deformation fields, D4 and D5, previously determined for
the coregistration of that participant's anatomical image with the template brain (above).
D1, D2, D4, and D5 are concatenated and applied to the individual DTI datasets in a single
pass to bring them into the template space of the high-resolution anatomical data, using a
seamless Procrustean algorithm that preserves the unique diffusion characteristics encoded
in the DTI data (a concatenation sketch follows this list).
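The exact field representation used by the pipeline is not specified here; as a conceptual NumPy/SciPy sketch, the following composes two "pull-back" coordinate maps so that the source FA map is interpolated only once when the fields are concatenated.

    import numpy as np
    from scipy.ndimage import map_coordinates

    def compose_fields(field_a, field_b):
        # field_a: (3, X, Y, Z) coordinates in source space, one per anatomical voxel.
        # field_b: (3, X, Y, Z) coordinates in anatomical space, one per template voxel.
        # Returns a single map from template voxels to source coordinates.
        out = np.empty_like(field_b)
        for axis in range(3):
            out[axis] = map_coordinates(field_a[axis], field_b, order=1)
        return out

    # Applying the composed map warps the source image in one interpolation:
    # warped = map_coordinates(source_volume, composed_field, order=1)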
3. Magnetic Resonance Chemical Shift Images
(1) Coregister the low‐resolution T1‐weighted overlay image to the high‐resolution anatomical
image of the same person, generating an affine transformation (D3).
(2) Successively apply deformation fields D4 and D5, determined previously for the
coregistration of that participant’s anatomical image with the template brain, to the MRSI
data to bring them into the template space.
4. Functional Magnetic Resonance Images
(1) Spatially smooth the motion‐corrected functional images using a Gaussian‐kernel
smoothing function with Full Width at Half Maximum (FWHM) equal to 8mm
(2) Resample the smoothed images to a resolution of 1x1x1 mm
(3) Coregister these resampled functional images to the high‐resolution anatomical image of
the same person
(4) Normalize the coregistered functional images to the template brain by successively
applying the two deformation fields, D4 and D5, determined previously for the
coregistration of that participant’s anatomical image with the template brain
Notes: Spatial normalization combines rigid-body, affine, and non-linear transformations. These
transformations can be generated using software packages such as the following (a Nipype-based
sketch follows the list):
1. FSL
includes separate tools for linear and non-linear registration:
the FLIRT tool for rigid-body and affine transformations
the FNIRT tool for non-linear deformation
2. SPM8
includes functions for linear and non-linear registration; DARTEL is its implementation of
state-of-the-art normalization
3. ANTs
Includes functions for both linear and non-linear registration. Its SyN (symmetric
normalization) method is widely regarded as among the best-performing publicly available
normalization methods.
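As a hedged illustration of the FLIRT-then-FNIRT sequence described above, using Nipype's FSL wrappers; all file names are placeholders.

    from nipype.interfaces import fsl

    # 12-parameter affine registration to the template.
    flirt = fsl.FLIRT(in_file='anat.nii.gz', reference='template.nii.gz',
                      dof=12, out_file='anat_affine.nii.gz',
                      out_matrix_file='affine.mat')
    flirt.run()

    # Nonlinear refinement initialized with the affine matrix.
    fnirt = fsl.FNIRT(in_file='anat.nii.gz', ref_file='template.nii.gz',
                      affine_file='affine.mat',
                      fieldcoeff_file='warpcoef.nii.gz',
                      warped_file='anat_warped.nii.gz')
    fnirt.run()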
Part 2. Processing of Multimodal Images
Data from each MRI modality are processed to correct for spatial distortions, intensity inhomogeneities,
motion artifacts, and differing image resolutions across imaging modalities. In addition, NAA values are
calibrated across participants.
1. Anatomical Images
(1) Correct inhomogeneities in pixel intensity across the image caused by nonuniformities in
the radio frequency (RF) field using the nonparametric nonuniform intensity normalization
("N3") algorithm (a bias-correction sketch follows this list).
(2) Remove extracerebral tissues using an automated tool for extracting the brain combined
with manual editing.
(3) Remove connecting dura manually on each slice in sagittal, coronal, and axial views.
(4) Transect the brainstem at the pontomedullary junction.
(5) Reslice all images into a standard orientation using anterior‐ and posterior‐commissure
landmarks to correct for any residual head flexion/extension and standard midline
landmarks to correct for head rotation and tilt.
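Step (1) above uses N3; as a publicly available stand-in, the sketch below applies the closely related N4 correction from SimpleITK (paths are placeholders).

    import SimpleITK as sitk

    image = sitk.ReadImage('anat.nii.gz', sitk.sitkFloat32)
    mask = sitk.OtsuThreshold(image, 0, 1, 200)   # rough foreground mask
    corrector = sitk.N4BiasFieldCorrectionImageFilter()
    corrected = corrector.Execute(image, mask)
    sitk.WriteImage(corrected, 'anat_bias_corrected.nii.gz')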
2. Diffusion Tensor Images
(1) Visually inspect all diffusion-weighted images (DWIs) and discard those having motion
larger than 2-3 mm or susceptibility artifacts.
(2) Correct eddy-current spatial distortions along the phase-encoding direction.
(3) Compute the diffusion tensor at each voxel by fitting an ellipsoid to the DWI data
acquired along 25 gradient directions and 3 baseline images (a fitting sketch follows this list).
(4) Calculate fractional anisotropy (FA) values, mean diffusivity (MD), the ellipsoidal area
ratio (EAR), etc. for each tensor.
(5) Track fibers in the whole brain using the tensor data.
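A minimal NumPy sketch of the log-linear tensor fit in step (3) and the FA/MD computation in step (4) for a single voxel; the interface is illustrative, not the pipeline's actual code.

    import numpy as np

    def fit_tensor(signals, s0, bvecs, bval):
        # signals: (N,) DWI intensities; s0: mean baseline intensity;
        # bvecs: (N, 3) unit gradient directions; bval: b-value in s/mm^2.
        g = bvecs
        design = -bval * np.column_stack([
            g[:, 0]**2, g[:, 1]**2, g[:, 2]**2,
            2 * g[:, 0] * g[:, 1], 2 * g[:, 0] * g[:, 2], 2 * g[:, 1] * g[:, 2],
        ])
        y = np.log(signals / s0)
        d, *_ = np.linalg.lstsq(design, y, rcond=None)
        D = np.array([[d[0], d[3], d[4]],
                      [d[3], d[1], d[5]],
                      [d[4], d[5], d[2]]])
        lam = np.linalg.eigvalsh(D)          # eigenvalues of the tensor
        md = lam.mean()                      # mean diffusivity
        fa = np.sqrt(1.5 * np.sum((lam - md)**2) / np.sum(lam**2))
        return fa, md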
3. Magnetic Resonance Chemical Shift Images
(1) Preprocess MPCSI data using the software 3DiCSI.
(2) Spatially filter the data using a Hamming window prior to 2D Fourier transformation (a
filtering sketch follows this list).
(3) Suppress the residual water signal using a high-pass filter.
(4) Select an ROI in each slice and save the spectral data of the ROI to a file.
(5) Use model-based spectral fitting to model the spectrum in each voxel with a curve,
identifying peaks in the spectrum for NAA, Cr, Cho, and lipid metabolites.
(6) Use ratios of peak area to noise under different receiver gains (RGs) as correction factors
to compensate for the effects of varying RGs on peak areas.
(7) Normalize the peak areas by the noise level and use these normalized areas to
reconstruct spectroscopic images (SIs) of the metabolites.
(8) Perform partial volume correction on the NAA values.
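A conceptual NumPy sketch of step (2), assuming raw k-space data for one CSI slice; the data layout is an assumption for illustration.

    import numpy as np

    def reconstruct_csi_slice(kspace):
        # kspace: complex array (kx, ky, spectral points) for one CSI slice.
        kx, ky, _ = kspace.shape
        window = np.outer(np.hamming(kx), np.hamming(ky))[:, :, None]
        filtered = kspace * window                     # spatial apodization in k-space
        shifted = np.fft.ifftshift(filtered, axes=(0, 1))
        image = np.fft.ifft2(shifted, axes=(0, 1))     # 2D spatial Fourier transform
        return np.fft.fftshift(image, axes=(0, 1))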
4. Functional Magnetic Resonance Images
(1) Visually inspect images to ensure the absence of motion artifacts.
(2) Motion-correct all functional images of each participant using a rigid transformation to the
middle functional image of the same participant, ensuring that all motion estimates
are <1 mm of displacement and <2 degrees of rotation about any axis.
(3) Remove intensity drifts in the images across time using a 6th-order, Butterworth-type high-
pass filter with a cutoff frequency equal to ¾ of the task frequency.
(4) Compute activations via voxel-wise General Linear Modeling (GLM) (a sketch of steps
(3)-(4) follows this list).
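A minimal Python/SciPy sketch of steps (3) and (4), assuming a known TR and task frequency; the zero-phase filtfilt call is one reasonable realization of the Butterworth filtering, not necessarily the pipeline's own.

    import numpy as np
    from scipy import signal

    def highpass_and_glm(timeseries, design, task_freq_hz, tr_s):
        # timeseries: (T, V) voxel time courses; design: (T, P) GLM design matrix.
        fs = 1.0 / tr_s
        # 6th-order Butterworth high-pass with cutoff at 3/4 of the task frequency.
        b, a = signal.butter(6, 0.75 * task_freq_hz, btype='highpass', fs=fs)
        filtered = signal.filtfilt(b, a, timeseries, axis=0)  # zero-phase filtering
        betas, *_ = np.linalg.lstsq(design, filtered, rcond=None)
        return betas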
Part 3. Statistical Analysis
The Pearson correlation coefficient r is used to measure the strength of the pair-wise linear
association of two imaging measures at each voxel. These imaging measures include (1) the
concentration of NAA (a marker of neuronal density) from MPCSI data, (2) fractional anisotropy (FA, a
measure of the directional constraint on the diffusion of water) from DTI data, (3) the Z-contrasts of
brain activations (a measure of task-induced neural responsivity) from fMRI data, and (4) an index of
local volume expansion or compression from anatomical MRI data, calculated using volume-preserved
warping (VPW). VPW preserves the intensity-weighted volume (i.e., intensity × voxel volume) of each
voxel during spatial normalization. Spatial normalization using VPW condenses relatively larger
volumes so that they appear as voxels of relatively higher signal intensity, and it expands smaller
volumes so that they appear as voxels of relatively lower signal intensity.
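A minimal NumPy sketch of the voxel-wise Pearson correlation across participants; array shapes are assumptions for illustration.

    import numpy as np

    def voxelwise_pearson(x, y):
        # x, y: arrays of shape (n_subjects, n_voxels) holding two imaging
        # measures (e.g., NAA and FA) sampled at the same voxels.
        xc = x - x.mean(axis=0)
        yc = y - y.mean(axis=0)
        num = (xc * yc).sum(axis=0)
        den = np.sqrt((xc**2).sum(axis=0) * (yc**2).sum(axis=0))
        return num / den                      # r at every voxel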
In addition, mediation analyses are used to test the hypothesis that one MRI measure mediates
the relationships among several other MRI measures; for example, whether NAA mediates the
association of FA values with higher cognitive functions. Following standard procedures for
mediation analysis, we test the statistical significance of whether a variable M mediates the
association between an independent variable, X, and a dependent variable, Y.
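A sketch of one standard mediation procedure (the Sobel test) in Python; this follows the generic Baron-and-Kenny-style regressions, not necessarily the exact procedure used here.

    import numpy as np
    from scipy import stats

    def sobel_test(x, m, y):
        # x, m, y: 1-D arrays over participants.
        n = len(x)
        a, _, _, _, se_a = stats.linregress(x, m)        # path a: X -> M
        design = np.column_stack([np.ones(n), m, x])     # Y on M, controlling for X
        beta = np.linalg.lstsq(design, y, rcond=None)[0]
        resid = y - design @ beta
        sigma2 = resid @ resid / (n - 3)
        cov = sigma2 * np.linalg.inv(design.T @ design)
        b, se_b = beta[1], np.sqrt(cov[1, 1])            # path b: M -> Y | X
        z = a * b / np.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
        return z, 2 * stats.norm.sf(abs(z))              # two-sided p-value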
Part 4. Visualization
Visualization of the above integrated functional and structural image data requires advanced
techniques, including surface and volume rendering of complex multidimensional 3‐D scenes. We
support many 2D and 3D visualization techniques within the multimodal framework, such as:
1. 3D surface generation from volumetric data
2. Display and editing of 3D surface
3. 3D volume rendering
4. Mapping of volumetric data (such as fMRI activation) over a 3D surface (illustrated in the
sketch after this list)
5. 2D image generation of 3D volume
6. Visualization and editing of 3D volumes
7. Combined visualization of all modality data in a single 3D space
8. Movie generation of rotated and/or zoomed view of 3D volume
9. Surface parcellation using cytoarchitectonic labeling
10. Boundary curve overlaid display onto surfaces
11. Smoothing of boundary curves as well as surfaces
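Our framework's own visualization tools are not shown here; as a publicly available illustration of capability 4, the sketch below maps a volumetric Z-map onto a cortical surface with nilearn (the input file name is a placeholder).

    from nilearn import datasets, plotting, surface

    fsaverage = datasets.fetch_surf_fsaverage()
    # Project a volumetric Z-contrast map onto the left pial surface.
    texture = surface.vol_to_surf('zmap.nii.gz', fsaverage.pial_left)
    plotting.plot_surf_stat_map(fsaverage.infl_left, texture, hemi='left',
                                bg_map=fsaverage.sulc_left, threshold=2.0)
    plotting.show()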
Part 5. Longitudinal Data Processing
Compared with cross-sectional studies, a longitudinal design can significantly reduce the confounding
effect of inter-individual morphological variability by using each subject as his or her own control. As a
result, longitudinal imaging studies are gaining interest and popularity across many areas of
neuroscience.
Though publicly available software packages do not offer clear multimodal processing pipelines, some
of them, such as FreeSurfer, do offer pipelines for longitudinal data processing.
FreeSurfer's longitudinal scheme is designed to be unbiased with respect to any time point (TP).
Instead of initializing processing with information from a specific time point, a template volume is
created and run through FreeSurfer. This template can be seen as an initial guess for the segmentation and surface
reconstruction. The FreeSurfer cortical and subcortical segmentation and parcellation procedure
involves solving many complex nonlinear optimization problems, such as the deformable surface
reconstruction, the nonlinear atlas‐image registration, and the nonlinear spherical surface registration.
It is believed that by initializing the processing of a new data set in a longitudinal series using the
processed results from the unbiased template, one can reduce the random variation in the processing
procedure and improve the robustness and sensitivity of the overall longitudinal analysis. Such an
initialization scheme makes sense also because a longitudinal design is often targeted at detecting
small or subtle changes. In addition to the template, new probabilistic methods (temporal fusion) were
introduced to further reduce the variability of results across time points. These algorithms require
that all time points be processed cross-sectionally first. The detailed pipeline is as follows:
1. Cross‐sectionally process all time points with the default workflow (tpN is one of the
timepoints):
recon-all -all -s <tpNid> -i path_to_tpN_dcm
2. Create an unbiased template from all time points for each subject and process it with recon‐all:
recon-all -base <templateid> -tp <tp1id> -tp <tp2id> ... -all
(1) Template Initialization (-base-init): compute the median of the N input images, register
all time-point images to it, and iterate the process until convergence
(2) Intensity Normalization (‐normalize)
(3) Skull Strip (‐skullstrip)
(4) EM (expectation maximization) Registration (‐gcareg) to register to Talairach space
(5) CA Normalization (-canorm) based on the Gaussian Classifier Atlas (GCA) to ensure that the
norm_template is correctly normalized to Talairach space.
3. Longitudinally process all timepoints:
recon-all -long <tpNid> <templateid> -all
(1) Motion Corrections (‐motioncor)
(2) Non‐uniform Intensity Correction (‐nuintensitycor)
(3) Copy the talairach.xfm from the template (keeps edits)
(4) Intensity Normalization (‐normalization)
(5) Skull Strip (‐skullstrip)
(6) EM (expectation maximization) Registration (‐gcareg) to register to Talairach space
(7) CA Normalization using Gaussian Classifier Atlas
(8) CA Nonlinear Registration (‐careg) to align with GCA atlas
(9) CA Nonlinear Registration Inverse Transform (-careginv) to compute the inverse of the
nonlinear transform that aligns with the GCA atlas
(10) Remove Neck (‐rmneck)
(11) EM Registration (with skull but no neck) (-skull-lta) to compute the transform that aligns
the no-neck volume with the GCA volume possessing the skull
(12) CA Label (-calabel) to label subcortical structures, based on the GCA model
(13) Intensity Normalization 2 (‐normalization2) to perform a second (major) intensity
correction using only the brain volume as the input
(14) Mask Brain to remove skull for final surface generation (-maskbfs)
(15) White Matter Segmentation (‐segmentation)
(16) Cut/Fill (-fill) to create the subcortical mass from which the orig surface is created. The
midbrain is cut from the cerebrum, and the hemispheres are cut from each other
(17) Tessellate (-tessellate) to generate the white matter surface
(18) Orig Surface Smoothing 1 (‐smooth1) to smooth the jagged surface
(19) Surface Inflation 1 (‐inflate1) before topology fixing
(20) Map to Quasi‐Sphere (‐qsphere) to set up the automatic topology fixing
(21) Automatic Topology Fixer (‐fix) to remove topology defects so that surface has the same
topology as a sphere
(22) Final Surfaces (‐finalsurfs) to make sure all surfaces are registered across all time points
(23) Surface Volume (‐surfvolume) to create gray matter volume from white matter surface and
pial surface
(24) Orig Surface Smoothing 2 (‐smooth2) to smooth surface after topology fixing
(25) Inflate 2 (‐inflate2) to inflate surface after topology fixing
(26) ASeg Stats (‐segstats) to compute statistics on the segmented subcortical structures
(27) Spherical Inflation (‐sphere) to inflate the surface into a sphere while minimizing metric
distortion
(28) Ipsilateral Surface Registration (Spherical Morph) (-surfreg) to register the surface to the
spherical atlas
(29) Jacobian (‐jacobian_white) to compute how much the white surface was distorted in order
to register to the spherical atlas
(30) Average Curvature (‐avgcurv) to resample the average curvature from the atlas to that of
the subject
(31) Cortical Parcellation (‐cortparc) to assign a neuroanatomical label to each location on the
cortical surface
(32) Parcellation Statistics (‐parcstats) to create a summary table of cortical parcellation
statistics for each structure
(33) Cortical Parcellation 2 (-cortparc2) using a different label template
(34) Parcellation Statistics 2 (-parcstats2) to create a summary table of cortical parcellation
statistics for each structure with the different label template
(35) Cortical Ribbon Mask (-cortribbon) to create binary volume masks of the cortical ribbon, i.e.,
each voxel is either 1 or 0 depending upon whether it falls in the ribbon or not
(36) Add Parcellation to ASeg (‐aparc2aseg) to add information from the ribbon into the
volume parcellation
(37) Update WMparc (-wmparc) to add white matter parcellation information into the aseg and
compute statistics
4. Compare the results from step 3 by calculating the differences between <tp1id>.long.<templateid>
and <tp2id>.long.<templateid> (a parsing sketch follows).
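A hedged Python sketch of step 4, comparing subcortical volumes between the two longitudinally processed time points by parsing their aseg.stats files; the subject IDs and the SUBJECTS_DIR environment variable follow FreeSurfer conventions.

    import os

    def read_aseg_volumes(subjects_dir, subj):
        # Return {structure name: volume in mm^3} from a subject's aseg.stats.
        path = os.path.join(subjects_dir, subj, 'stats', 'aseg.stats')
        vols = {}
        with open(path) as f:
            for line in f:
                if line.startswith('#'):      # skip header/comment lines
                    continue
                fields = line.split()
                # In standard aseg.stats output, column 4 is Volume_mm3 and
                # column 5 is StructName (1-based column numbering).
                vols[fields[4]] = float(fields[3])
        return vols

    subjects_dir = os.environ['SUBJECTS_DIR']
    tp1 = read_aseg_volumes(subjects_dir, 'tp1.long.template')
    tp2 = read_aseg_volumes(subjects_dir, 'tp2.long.template')
    for name in sorted(tp1.keys() & tp2.keys()):
        print(f'{name}: {tp2[name] - tp1[name]:+.1f} mm^3')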