Intro fMRI 2010 01 Preprocessing


fMRI Basics: Spatial pre-processing

With acknowledgements to Matthew Brett, Rik Henson, and the authors of Human Brain Function (2nd ed)
Types of Data

Most commonly collected types of image include:

MPRAGE – Anatomical data. T1 weighted, high spatial resolution (usually 1x1x1 mm). Optimised for contrast between white and grey matter. Acquisition time ~5 minutes.

EPI – Functional data. Usually a T2* weighted gradient echo sequence. Optimised for contrast between oxy- and deoxygenated blood. Fast acquisition (32 slices in ~2 seconds), reasonable resolution (3x3x3.75 mm).

Fieldmaps – Maps of magnetic field inhomogeneities within the scanner.
What’s in an image?

• Image itself is just a large matrix of data
• Divided into "voxels" (= volumetric pixels)
• Data is accompanied by a header that contains information about how to interpret the data:
  • Data type (integer, floating point etc)
  • Data scaling
  • Image dimensions
  • Voxel size
  • Voxel→mm transformation matrix
• Most commonly used formats here are Analyze and NIfTI
  • NIfTI images contain data and header in a single .nii file
  • Analyze images come in .img / .hdr pairs
• SPM uses NIfTI except for images generated as part of statistical analyses, where it still uses Analyze
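
As an illustration (not part of the original slides), the header fields listed above can be inspected with the Python nibabel library; the filename here is hypothetical:

```python
# Sketch: reading NIfTI header information with nibabel.
import nibabel as nib

img = nib.load("functional.nii")   # hypothetical filename
print(img.get_data_dtype())        # data type (e.g. int16, float32)
print(img.shape)                   # image dimensions
print(img.header.get_zooms())      # voxel size in mm
print(img.affine)                  # voxel -> mm transformation matrix
```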
Voxel space vs. world space

• Values at each voxel depend on where the acquisition grid is placed.
• Values in this "voxel space" are defined purely in terms of where they occur in the image.
• Values in "world space" are defined in meaningful units (mm) from a point of origin.
• Images with the same origin share a common frame of reference.
• Changing the transformation matrix changes the relationship between voxel co-ords and world co-ords.
• The matrix can be used to store transformations without having to resample (reslice) the image.

The voxel→mm matrix maps voxel coordinates to mm coordinates; in the slide's example the voxel at (34.3, 27.2, 5.3) maps to the mm origin:

$$
\begin{bmatrix} -3.0 & -0.2 & 0.1 & 108.8 \\ -0.3 & 2.8 & -1.3 & -61.7 \\ 0 & 1.0 & 3.8 & -45.5 \\ 0 & 0 & 0 & 1 \end{bmatrix}
\times
\begin{bmatrix} 34.3 \\ 27.2 \\ 5.3 \\ 1 \end{bmatrix}
=
\begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}
$$
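
A minimal sketch of applying such a matrix with numpy (values taken from the example above, which are rounded to one decimal place in the slide):

```python
# Sketch: mapping homogeneous voxel coordinates to world (mm) coordinates.
import numpy as np

M = np.array([[-3.0, -0.2,  0.1, 108.8],
              [-0.3,  2.8, -1.3, -61.7],
              [ 0.0,  1.0,  3.8, -45.5],
              [ 0.0,  0.0,  0.0,   1.0]])

voxel = np.array([34.3, 27.2, 5.3, 1.0])  # homogeneous voxel coordinates
mm = M @ voxel                             # world (mm) coordinates
print(mm[:3])                              # close to the origin (slide values are rounded)
```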
Overview of analysis

The analysis pipeline (shown as a flowchart in the original slides):
1. Images in DICOM format
2. Convert to the format used by SPM (NIfTI)
3. Initial image diagnostics (tsdiffana)
4. Within series processing: realignment, slice time correction, undistortion
5. Coregister EPI data to structural image
6. Normalise structural
7. Normalise EPI images
8. Smooth EPI images
9. Single subject stats → group stats → publish…

(A second version of the flowchart adds an MVPA analysis branch.)
Initial diagnostics with tsdiffana

tsdiffana produces mean and variance images, and diagnostic plots of:
• Scaled variance
• Slice by slice variance
• Scaled mean voxel intensity
• Max / min / mean slice variance

Look for obvious distortions and artefacts.
Realignment – What & Why?

What?
Within modality coregistration – usually this means realigning each of the images in a functional time series so that they're all in the same orientation.

Why?
Because people move their heads…
This causes problems in several ways:
• Voxel contents change over time (e.g. from white matter to grey matter or vice versa), which can add considerable noise (unexplained variance) to the analysis.
• Interactions between head movements and inhomogeneities in the magnetic field – the magnetic field within the scanner isn't perfectly uniform, and this can cause distortions which interact with head position.
Realignment – How?
Rigid body transformation using 6 parameters:
• Translation in X, Y, and Z
• Rotation around X (pitch), Y (roll), and Z (yaw)
Realignment

Find optimal values for these 6 parameters.
Optimal = values that give the minimum value for a particular cost function.
Cost function = sum of squared differences between consecutive images.
Successive approximation – start with one set of parameters and iteratively try different combinations in order to find the minimum sum of squared differences (the Gauss-Newton algorithm provides a systematic way of modifying the parameters at each iteration).

[Figure: Image 1 − Image 2 = difference image; squaring the difference gives the variance image.]
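
As an illustration (not SPM's actual implementation), the cost for a candidate set of parameters might be computed like this:

```python
# Sketch: the sum-of-squared-differences cost used during realignment,
# for two images held as numpy arrays of the same shape.
import numpy as np

def ssd(image1, image2):
    """Sum of squared differences between two images."""
    diff = image1.astype(float) - image2.astype(float)
    return np.sum(diff ** 2)
```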
Realignment – Results

Rigid body transformations parameterised by translations, pitch, roll, and yaw:

$$
\begin{bmatrix} 1 & 0 & 0 & x_{trans} \\ 0 & 1 & 0 & y_{trans} \\ 0 & 0 & 1 & z_{trans} \\ 0 & 0 & 0 & 1 \end{bmatrix}
\times
\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\Phi & \sin\Phi & 0 \\ 0 & -\sin\Phi & \cos\Phi & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
\times
\begin{bmatrix} \cos\Theta & 0 & \sin\Theta & 0 \\ 0 & 1 & 0 & 0 \\ -\sin\Theta & 0 & \cos\Theta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
\times
\begin{bmatrix} \cos\Omega & \sin\Omega & 0 & 0 \\ -\sin\Omega & \cos\Omega & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
$$

The transformation is saved in the vox→mm matrix stored in the image header.

The image can also be "resliced", i.e. resampled so that the transformation is permanently applied to the image. This is like shifting the "voxel space" frame of reference, and requires interpolating new voxel values.

The process also displays graphs of the parameters, giving an impression of movement throughout the experiment.
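
A sketch of composing the matrices above in numpy (function and parameter names are mine, not SPM's):

```python
# Sketch: building a rigid-body transformation matrix.
# Translations in mm; phi, theta, omega (pitch, roll, yaw) in radians.
import numpy as np

def rigid_body(x, y, z, phi, theta, omega):
    T = np.eye(4)
    T[:3, 3] = [x, y, z]                                  # translations
    pitch = np.array([[1, 0, 0, 0],
                      [0,  np.cos(phi), np.sin(phi), 0],
                      [0, -np.sin(phi), np.cos(phi), 0],
                      [0, 0, 0, 1]])
    roll = np.array([[ np.cos(theta), 0, np.sin(theta), 0],
                     [0, 1, 0, 0],
                     [-np.sin(theta), 0, np.cos(theta), 0],
                     [0, 0, 0, 1]])
    yaw = np.array([[ np.cos(omega), np.sin(omega), 0, 0],
                    [-np.sin(omega), np.cos(omega), 0, 0],
                    [0, 0, 1, 0],
                    [0, 0, 0, 1]])
    return T @ pitch @ roll @ yaw
```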
Realignment – Possible issues
Realignment solves the voxel correspondence problem – the same voxel now contains the same bit of the brain over the entire time series.

It doesn't solve all movement related problems though. In particular, interactions between movement and field inhomogeneity remain.

Inhomogeneities in the magnetic field affect both signal strength and spatial encoding of signals, causing dropouts and distortions.

These effects are partially caused by the head, and are dependent on the position of the head. Distortions can be specific to a particular head position, so you can get non-rigid body movements.

The signal in each voxel could be recorded from a different position within the scanner, and potentially a different field strength, at different points in time.
Realignment – Possible issues

2 common solutions:

1. Include the realignment parameters as covariates in the statistical model (see the sketch after this list)
   • Idea is to capture any movement related variance in the data.
   • Can be problematic if movement is correlated with effects of interest (esp. button pushes, verbal responses etc).
   • If the movement parameters are correlated with your experimental conditions, they can remove the effects of interest.
2. Unwarping
• Try to estimate the effects of interactions between field
inhomogeneity and movement and compensate for them
• Estimate rigid body parameters
• Observe remaining variance
• Estimate the “derivative fields” – how distortions at any point change
with movement
• Correct image to compensate for these
• Re-estimate movement parameters
• Iterate until minimal change
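
A minimal sketch of solution 1 (names and shapes are mine, not from the slides), assuming the six parameters have been loaded, e.g. from SPM's rp_*.txt file:

```python
# Sketch: adding realignment parameters as nuisance covariates to a
# GLM design matrix.
import numpy as np

def build_design_matrix(task_regressors, motion_params):
    """task_regressors: (n_scans, n_conditions); motion_params: (n_scans, 6)."""
    n_scans = task_regressors.shape[0]
    constant = np.ones((n_scans, 1))   # model the session mean
    return np.hstack([task_regressors, motion_params, constant])
```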
Realignment – Possible issues

Even after all this, movement artefacts still remain:
• There's no way of detecting rapid movements within a scan.
• Spin history effects (movement may make the effective TR longer / shorter for some slices).
• Dropout caused by movement artefacts.

The moral of the story? Stop people moving…

Make sure they're comfortable to begin with – encourage them to relax their neck and shoulders.

Discourage them from talking during breaks between sessions, and be careful using messages like "End of part 1" etc.

Reject any data with too much movement.

Undistortion – what & why

What?
Deals with a similar problem to unwarping – adjust images to correct for distortions caused by magnetic field inhomogeneities.

Why?
Unwarp does not actually remove the "static" distortions; it only estimates the interactions between distortions and movement (i.e. the first derivative, or the change of deformation with respect to movement). Unwarp will only undistort to some "average" distortion.

Undistortion attempts to correct for static distortions and return the image to something closer to the actual brain shape.
Undistortion – how

During scanning, collect a fieldmap – an image showing how the magnetic field varies within the scanner.

This can be converted to a voxel displacement map – a map of how voxel values are displaced due to field inhomogeneities.

This in turn can be used to calculate the original voxel values (a sketch follows below).

NB – Usually only collect one set of fieldmaps, which is specific to the head position at acquisition.

Static undistortion can be combined with unwarping though.
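
An illustrative sketch of the resampling step (not SPM's implementation; the sign convention for the displacements is an assumption here):

```python
# Sketch: undistorting one EPI slice given a voxel displacement map (VDM).
# Assumes displacements are along the phase-encode (y) axis, in voxel units.
import numpy as np
from scipy.ndimage import map_coordinates

def undistort_slice(distorted, vdm):
    """distorted, vdm: 2D arrays of the same shape."""
    ny, nx = distorted.shape
    yy, xx = np.mgrid[0:ny, 0:nx].astype(float)
    # Sample each output voxel from its displaced position in the input
    # (a positive vdm value means the signal was shifted along +y).
    coords = np.array([yy + vdm, xx])
    return map_coordinates(distorted, coords, order=3, mode="nearest")
```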


Slice time correction – what & why?

What?
Adjust the values in the image to make it appear that all voxels have been acquired at the same time.

Why?
Most functional sequences collect data in discrete slices, and each slice is acquired at a different time. In an EPI sequence with 32 slices and a slice acquisition time of 62.5 ms, the signal in the last slice is acquired ~1.9 seconds after the first slice. This is a problem if modelling rapid events (not necessarily such an issue in block designs).
Slice time correction – how?
Create an interpolated time course for later slices, then shift each voxel's time course back in time.

[Figure: a 10 slice acquisition scheme; the time course of a voxel in slice 1; the time course of a voxel in slice 5; the interpolated time course in slice 5; and the estimated value at the time of the first slice. Time axis in TRs.]
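
A minimal sketch of the idea for one voxel's time course (SPM itself uses a sinc / Fourier phase-shift interpolation; plain linear interpolation is used here for brevity):

```python
# Sketch: shifting a voxel's time course back to the time of the first slice.
import numpy as np

def slice_time_correct(timecourse, slice_delay, tr):
    """timecourse: signal sampled once per TR; slice_delay: acquisition
    time of this voxel's slice relative to the first slice (seconds)."""
    n = len(timecourse)
    acquired_at = np.arange(n) * tr + slice_delay  # actual sample times
    wanted_at = np.arange(n) * tr                  # times of the first slice
    # Values before the first sample are clamped to the first value.
    return np.interp(wanted_at, acquired_at, timecourse)
```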
Slice time correction – possible issues

Which slice to align to?
• Using the middle rather than the first slice means the maximum interpolation necessary is reduced, which may reduce interpolation artefacts.
• Possibly beneficial with long TRs.
• Be careful to modify event onset times in the statistical model though!
Coregistration – what & why?

What?
Cross modality registration – realigning images collected using different acquisition sequences. Most commonly, registering a T1 weighted structural image to T2* weighted functional images.

Why?
Head movement again…
It is also a precursor to spatial normalisation: it is often better to normalise the structural image (higher spatial resolution, fewer artefacts and distortions) and then apply the parameters to the functional data. So, we want the structural in the same space as the functional images.
Coregistration – how?

• Similar to realignment – find parameters for translations in X, Y, and Z, and rotations around X, Y, and Z.
• BUT – different acquisition sequences have different properties, e.g. CSF is bright in T2* functional images, dark in T1 structural images.
• Can't simply subtract images and minimise the squared difference.
• Have to use another cost function – "mutual information".
• How much does knowing the value of one variable (e.g. T1 intensity) tell us about the possible values of another variable (e.g. T2 intensity)? (A sketch follows below.)

[Figure: three example joint histograms of X and Y, illustrating no, high, and some mutual information.]
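
An illustrative sketch of the idea, computing mutual information from a joint histogram with numpy (not SPM's implementation):

```python
# Sketch: mutual information between two images from their joint histogram.
import numpy as np

def mutual_information(x, y, bins=32):
    """x, y: 1D arrays of corresponding voxel intensities."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()               # joint probability
    px = pxy.sum(axis=1, keepdims=True)     # marginal of x
    py = pxy.sum(axis=0, keepdims=True)     # marginal of y
    nonzero = pxy > 0
    return np.sum(pxy[nonzero] * np.log(pxy[nonzero] / (px @ py)[nonzero]))
```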


Coregistration – how?
[Figure: T1 and T2 images with their individual intensity histograms – peaks for air, CSF, grey matter (GM), and white matter (WM) – and joint histograms of T1 vs. T2 intensity before and after registration.]
Normalisation – what & why

What?
Registration between different brains. Transforming one brain so its shape
matches that of a different brain.

Why?
People have different shaped brains…
Allows group analyses since the data from multiple subjects is transformed
into the same space
Facilitates cross study comparisons since activation co-ordinates can be
reported in a standard space (rather than trying to identify landmarks in
each individual study)
Normalisation – different approaches

Landmark matching
• try to identify, then align homologous anatomical features in different
brains, e.g. major sulci.
• Time consuming and potentially subjective – manual identification of
features.

Intensity matching
• Minimise differences in voxel intensity between different brains
• More easily automated – like realignment and coregistration, can assign
some cost function based on differences in image intensity, then find
parameters that minimise this cost function.
Normalisation – how
SPM uses a procedure that attempts to minimise the differences between an image and a template.
Like realignment, start with affine (linear) transformations. As well as the 3 translations and 3 rotations, also apply 3 zooms and 3 shears.
This matches the overall size and position of the images, but not necessarily differences in shape.

[Figure: the effects of rotation, shear, translation, and zoom; 6 images registered to the MNI template using only affine transformations; the MNI T1 (left) and T2 templates.]
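
A sketch of the two extra components (the rigid-body part was sketched in the realignment section; parameter names are mine):

```python
# Sketch: zoom and shear matrices for the 12-parameter affine.
import numpy as np

def zoom_matrix(zx, zy, zz):
    return np.diag([zx, zy, zz, 1.0])

def shear_matrix(sxy, sxz, syz):
    S = np.eye(4)
    S[0, 1], S[0, 2], S[1, 2] = sxy, sxz, syz
    return S

# Full affine = rigid body (translations, rotations) x zooms x shears.
```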
Normalisation – how

Next, apply nonlinear transformations.

A quick digression into basis functions…
A complex function can be described as a linear combination of a set of simpler basis functions:

$$ f = \sum_i w_i f_i $$

[Figure: a complex curve built from weighted basis functions $w_1 f_1$, $w_2 f_2$, $w_3 f_3$, $w_4 f_4$.]


Normalisation – how
Nonlinear transformations are implemented by applying deformation fields. These are modelled using a linear combination of cosine basis images. This matches the size, position, and global shape of the template.

[Figure: 6 images registered to the MNI template using linear and nonlinear transformations; a deformation field shown as a weighted combination of cosine basis images.]
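
A minimal sketch of the idea, shown in 1D for simplicity (weights here are hypothetical):

```python
# Sketch: a deformation modelled as a weighted combination of
# low-frequency cosine (DCT) basis functions.
import numpy as np

def cosine_basis(n_points, n_basis):
    """Columns are DCT basis functions over n_points samples."""
    x = (np.arange(n_points) + 0.5) / n_points
    return np.column_stack([np.cos(np.pi * k * x) for k in range(n_basis)])

basis = cosine_basis(64, 4)                 # 64 voxels, 4 basis functions
weights = np.array([0.0, 2.0, -1.0, 0.5])   # hypothetical weights
deformation = basis @ weights               # displacement at each voxel
```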


Normalisation – how

SPM's algorithm simultaneously minimises:
• The sum of squared differences between the template and the object image
• The squared distance between the parameters and their expected values

The latter condition is referred to as "regularisation". It is essentially a way of incorporating prior knowledge about the range of values that parameters can take, in order to constrain the current estimates. This helps reduce unnecessary distortions (an example of overfitting due to the large number of available parameters).
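
A sketch of the combined objective (notation mine, not from the slides): with warp parameters $w$, template $T$, object image $I$ warped by $y_w$, and expected parameter values $w_0$,

$$ C(w) = \sum_x \big( T(x) - I(y_w(x)) \big)^2 + \lambda \, \lVert w - w_0 \rVert^2 $$

where $\lambda$ controls the strength of the regularisation.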
Normalisation – how

[Figure: the template; affine only – still some differences in shape; affine and nonlinear with regularisation – good match to overall shape, but some high spatial frequency differences; affine and nonlinear without regularisation – this can introduce overfitting and unnecessary warps.]
Normalisation – templates

Common templates:
• Talairach and Tournoux, 1988 (detailed anatomical study of a single
subject…)
• Montreal Neurological Institute 152 (MNI152; averaged from T1 MRI
images of 152 subjects)
• Similar, but not identical
• SPM uses MNI152 template
• To report co-ordinates in Talairach space, have to convert using
something like mni2tal.m
Smoothing – what & why

What?
Spatial averaging – replace the value at each voxel with a weighted average of the values in surrounding voxels.

Why?
To increase signal to noise: random noise tends to be reduced by the process of averaging (since it's a mixture of high and low values).
Smoothing – how

Apply a smoothing "kernel" to each voxel in turn, replacing the value in that voxel with a weighted average of the values in surrounding voxels.
The kernel is simply a function that defines how surrounding voxels contribute to the weighted average.

[Figure: original signal; signal plus noise; applying the kernel to each point in turn recovers the signal.]
Smoothing – how

Which kernel?
• Ideally, want a kernel that matches the spatial properties of the signal ("matched filter theorem").
• In practice, usually use a 3D Gaussian, whose shape is defined by its Full Width at Half Maximum height (FWHM).
• Usually don't know the spatial extent of the signal.
• Can make some assumptions though – e.g. if looking at specific visual areas a smaller kernel may be optimal, whereas if looking at prefrontal regions, a larger kernel may be best.
• In practice, 8-10 mm is common. (A sketch follows below.)
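
A minimal sketch with scipy, converting the FWHM quoted in fMRI papers to the sigma the library expects (FWHM = sigma × 2√(2 ln 2) ≈ 2.355 sigma):

```python
# Sketch: Gaussian smoothing of a 3D volume with an isotropic kernel.
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth(volume, fwhm_mm=8.0, voxel_size_mm=3.0):
    """Smooth a 3D array; fwhm_mm is converted to sigma in voxel units."""
    sigma_mm = fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # ~fwhm / 2.355
    return gaussian_filter(volume, sigma=sigma_mm / voxel_size_mm)
```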
