Intro fMRI 2010 01 Preprocessing
Overview of analysis
[Flowchart: Preprocessing → Single subject stats → Group stats → Publish…]
Initial diagnostics with tsdiffana
tsdiffana produces plots of:
• slice-by-slice variance
• scaled mean voxel intensity
• max / min / mean slice variance
Look for obvious distortions + artefacts.
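The diagnostics above can be sketched in a few lines of numpy. This is a toy illustration of the same quantities (scan-to-scan difference variance per slice, scaled mean volume intensity), not the tsdiffana code itself; the function name and array shapes are assumptions.

```python
import numpy as np

def ts_diagnostics(data):
    """Toy timeseries diagnostics in the spirit of tsdiffana.

    data: 4D array (x, y, z, time). Returns the per-slice variance of
    the scan-to-scan difference images and the mean voxel intensity of
    each volume, scaled by the grand mean.
    """
    diffs = np.diff(data, axis=3)          # difference between successive volumes
    slice_var = diffs.var(axis=(0, 1))     # slice-by-slice variance: (z, time-1)
    vol_means = data.mean(axis=(0, 1, 2))  # mean intensity per volume
    scaled_mean = vol_means / vol_means.mean()
    return slice_var, scaled_mean

# toy data: 8x8 in-plane, 4 slices, 10 time points
rng = np.random.default_rng(0)
data = rng.normal(100, 1, size=(8, 8, 4, 10))
slice_var, scaled_mean = ts_diagnostics(data)
```

A spike in `slice_var` for one slice at one time point is the kind of artefact these plots are meant to flag.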
Realignment – What & Why?
What?
Within modality coregistration – usually this means realigning each of the
images in a functional time series so that they’re all in the same orientation
Why?
Because people move their heads…
This causes problems in several ways:
• Voxel contents change over time (e.g. from white matter to grey
matter or vice versa), which can add considerable noise (unexplained
variance) to the analysis.
• Interactions between head movements and inhomogeneities in the
magnetic field – the magnetic field within the scanner isn’t perfectly
uniform and this can cause distortions which interact with head
position.
Realignment – How?
Rigid body transformation using 6 parameters: 3 translations (x, y, z)
and 3 rotations (pitch, roll, yaw).
Parameters are estimated by minimising the sum of squared intensity
differences between each image and a reference image (typically the first
or mean image in the series).
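The 6-parameter rigid-body transform can be written as a 4×4 matrix. The sketch below is one common convention (translation composed with rotations about x, y, z), not SPM's exact code; the function name is illustrative.

```python
import numpy as np

def rigid_body(tx, ty, tz, pitch, roll, yaw):
    """4x4 rigid-body matrix from 6 parameters.

    Translations in mm, rotations in radians, composed as T @ Rz @ Ry @ Rx
    (one possible ordering convention)."""
    cx, sx = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(roll), np.sin(roll)
    cz, sz = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0, 0], [0, cx, -sx, 0], [0, sx, cx, 0], [0, 0, 0, 1]])
    Ry = np.array([[cy, 0, sy, 0], [0, 1, 0, 0], [-sy, 0, cy, 0], [0, 0, 0, 1]])
    Rz = np.array([[cz, -sz, 0, 0], [sz, cz, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])
    T = np.eye(4)
    T[:3, 3] = [tx, ty, tz]
    return T @ Rz @ Ry @ Rx

M = rigid_body(1.0, 2.0, 3.0, 0.1, 0.2, 0.3)
# the rotation part is orthonormal, so shapes and volumes are preserved
```

Because the rotation block has determinant 1, a rigid-body transform can move and reorient the head but never stretch or shear it – which is exactly why it suits within-subject realignment.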
Realignment – Results
Two common solutions:
• Make sure subjects are comfortable to begin with – encourage them to
relax their neck and shoulders.
Undistortion – what & why
What?
Deals with similar problem as unwarping – adjust images to correct for
distortions caused by magnetic field inhomogeneities.
Why?
Unwarp does not actually remove the “static” distortions, it only estimates the
interactions between distortions and movement (i.e. the first derivative, or
the change of deformation with respect to movement). Unwarp will only
undistort to some “average” distortion.
Undistortion attempts to correct for static distortions and return the image to
something closer to the actual brain shape.
Undistortion – how
NB – We usually collect only one set of fieldmaps, which is specific to the
head position at acquisition.
Slice time correction – what & why
What?
Adjust the values in the image to make it appear that all voxels have been
acquired at the same time
Why?
Most functional sequences collect data in
discrete slices
Each slice is acquired at a different time
In an EPI sequence with 32 slices and a
slice acquisition time of 62.5 ms, the signal
in the last slice is acquired ~1.9 seconds
after the first slice
Problem if modelling rapid events (not
necessarily such an issue in block designs)
Slice time correction – how?
Create an interpolated time course for later slices
Shift each voxel's time course back in time
[Figure: a 10-slice acquisition plotted against time (in TRs); the time
course of a voxel in slice 1; the time course of a voxel in slice 5; the
interpolated time course in slice 5; and the estimated value at the
acquisition time of the first slice.]
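The interpolate-and-shift idea can be sketched for a single voxel. This toy uses linear interpolation for clarity (SPM's slice timing uses sinc interpolation); the TR, slice count and toy signal are assumptions.

```python
import numpy as np

tr = 2.0            # seconds per volume (assumed)
n_slices = 10
slice_idx = 4       # voxel lies in slice 5 (0-based index 4)
n_vols = 10

# this slice is acquired part-way through each TR
slice_offset = slice_idx * tr / n_slices
t_acquired = np.arange(n_vols) * tr + slice_offset   # actual sample times
signal = np.sin(2 * np.pi * 0.05 * t_acquired)       # toy BOLD-like time course

# resample the time course at the acquisition times of the FIRST slice,
# i.e. shift it back in time by slice_offset via interpolation
t_target = np.arange(n_vols) * tr
corrected = np.interp(t_target, t_acquired, signal)
```

After correction, every slice's time course is expressed on the same time grid, so a single event model can be applied to all voxels.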
Slice time correction – possible issues
Coregistration – what & why
What?
Cross modality registration – realigning images collected using different
acquisition sequences. Most commonly, registering a T1-weighted structural
image to T2*-weighted functional images.
Why?
Head movement again…
Precursor to spatial normalisation
Often better to normalise the structural image (higher spatial resolution,
fewer artefacts and distortions) and then apply the parameters to the
functional data.
So, we want the structural in the same space as the functional images.
Coregistration – how?
As with realignment, choose a cost function and search for the
transformation that optimises it. Because intensities differ across
modalities, similarity is assessed from the joint histogram of the two
images' intensities: well-registered images produce a sharp, clustered
joint histogram, and measures such as (normalised) mutual information
quantify this.
[Figure: joint histograms of example images X and Y; T1 vs T2* intensity
joint histogram showing distinct clusters for air, CSF, grey matter (GM)
and white matter (WM).]
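The joint-histogram idea can be sketched as follows. The `mutual_information` helper and the random toy images are illustrative assumptions, not SPM's implementation:

```python
import numpy as np

def mutual_information(x, y, bins=32):
    """Mutual information estimated from the joint histogram of two
    images' intensities - the kind of similarity measure used for
    cross-modality coregistration."""
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint intensity distribution
    px = pxy.sum(axis=1, keepdims=True)       # marginal of x
    py = pxy.sum(axis=0, keepdims=True)       # marginal of y
    nz = pxy > 0                              # avoid log(0)
    return (pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum()

rng = np.random.default_rng(1)
a = rng.normal(size=(32, 32))
b = a + 0.1 * rng.normal(size=(32, 32))   # closely related "image"
c = rng.normal(size=(32, 32))             # unrelated "image"
print(mutual_information(a, b) > mutual_information(a, c))  # True
```

Registration then becomes a search over transformation parameters for the pose that maximises this measure.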
Normalisation – what & why
What?
Registration between different brains. Transforming one brain so its shape
matches that of a different brain.
Why?
People have different shaped brains…
Allows group analyses since the data from multiple subjects is transformed
into the same space
Facilitates cross study comparisons since activation co-ordinates can be
reported in a standard space (rather than trying to identify landmarks in
each individual study)
Normalisation – different approaches
Landmark matching
• try to identify, then align homologous anatomical features in different
brains, e.g. major sulci.
• Time consuming and potentially subjective – manual identification of
features.
Intensity matching
• Minimise differences in voxel intensity between different brains
• More easily automated – like realignment and coregistration, can assign
some cost function based on differences in image intensity, then find
parameters that minimise this cost function.
Normalisation – how
SPM uses a procedure that attempts to
minimise the differences between an image
and a template space
Like realignment, start with affine (linear) transformations.
As well as the 3 translations and 3 rotations, also apply 3 zooms and
3 shears.
[Figure: illustrations of translation, rotation, zoom and shear.]
This matches the overall size and position of the images, but not
necessarily differences in shape.
[Figure: 6 images registered to the MNI template using only affine
transformations. MNI T1 (left) and T2 templates.]
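The extra affine parameters beyond the rigid-body 6 can be sketched as matrices too. This is an illustrative construction (function name and parameter values assumed), showing why zooms change brain size while shears change its shape:

```python
import numpy as np

def zoom_shear(zx, zy, zz, sxy, sxz, syz):
    """4x4 matrix for the 3 zooms and 3 shears that, together with the
    6 rigid-body parameters, make up a 12-parameter affine transform."""
    Z = np.diag([zx, zy, zz, 1.0])            # scaling along each axis
    S = np.array([[1, sxy, sxz, 0],           # shears mix the axes
                  [0, 1, syz, 0],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], float)
    return Z @ S

M = zoom_shear(1.1, 0.9, 1.0, 0.05, 0.0, 0.02)
# unlike a rigid-body transform, the determinant is no longer 1:
# zooms change volume (here 1.1 * 0.9 * 1.0 = 0.99)
print(np.linalg.det(M[:3, :3]))
```

Composing this with a rigid-body matrix gives the full affine stage that matches overall brain size and position before any nonlinear warping.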
Normalisation – how
Nonlinear deformations are then modelled as a weighted combination of
basis images:
deformation = ∑ᵢ wᵢ fᵢ
where the fᵢ are the basis images and the wᵢ are the estimated weights.
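A one-dimensional sketch of the weighted-basis idea (toy basis functions and weights assumed; SPM uses 3D discrete cosine bases):

```python
import numpy as np

# voxel positions along one axis, normalised to [0, 1]
x = np.linspace(0, 1, 100)

# 3 low-frequency cosine basis functions f_i(x)
basis = np.array([np.cos(np.pi * k * x) for k in range(1, 4)])

# estimated weights w_i (toy values)
w = np.array([2.0, -1.0, 0.5])

# deformation = sum_i w_i * f_i(x): a smooth displacement field
deformation = w @ basis
warped_positions = x + 0.01 * deformation
```

Because only the weights are estimated, a smooth warp over the whole brain is described by a small number of parameters.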
Common templates:
• Talairach and Tournoux, 1988 (detailed anatomical study of a single
subject…)
• Montreal Neurological Institute 152 (MNI152; averaged from T1 MRI
images of 152 subjects)
• Similar, but not identical
• SPM uses MNI152 template
• To report co-ordinates in Talairach space, have to convert using
something like mni2tal.m
Smoothing – what & why
What
Spatial averaging - replace the value at each voxel with a weighted
average of the values in surrounding voxels
Why
Increase signal to noise
Random noise tends to be reduced by the process of averaging, since it's
a mixture of high and low values (averaging n independent values reduces
the noise standard deviation by a factor of √n)
Smoothing – how
Apply a smoothing “kernel” to each voxel in turn, replacing the value in that
voxel with a weighted average of the values in surrounding voxels
The kernel is simply a function that defines how surrounding voxels
contribute to the weighted average
Which kernel?
Ideally, want a kernel that matches the spatial properties of the signal
– the "matched filter theorem"
In practice, usually use a 3D Gaussian
Shape defined by the Full Width at Half Maximum (FWHM) – the width of
the kernel at half its maximum height
Usually don’t know the spatial extent of the
signal
Can make some assumptions though – e.g. if
looking at specific visual areas a smaller kernel
may be optimal, whereas if looking at
prefrontal, a larger kernel may be best
In practice, 8-10mm is common
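A 1D sketch of Gaussian smoothing, showing the standard conversion from FWHM to the Gaussian sigma (FWHM = 2√(2 ln 2) σ ≈ 2.355 σ). The 8 mm kernel and 2 mm voxel size are assumed for illustration:

```python
import numpy as np

fwhm_mm = 8.0
voxel_mm = 2.0
# FWHM (in voxels) -> Gaussian sigma (in voxels)
sigma_vox = (fwhm_mm / voxel_mm) / (2.0 * np.sqrt(2.0 * np.log(2.0)))

# build a normalised 1D Gaussian kernel (weights sum to 1)
x = np.arange(-8, 9)
kernel = np.exp(-x**2 / (2 * sigma_vox**2))
kernel /= kernel.sum()

# smoothing = replace each value with a weighted average of its neighbours
signal = np.zeros(32)
signal[16] = 1.0                                  # a single "active" voxel
smoothed = np.convolve(signal, kernel, mode="same")
```

Smoothing a single active voxel spreads it into a Gaussian blob whose width at half its peak is 4 voxels, i.e. the requested 8 mm FWHM; the total signal is conserved because the weights sum to 1.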