CT IMAGING
PRACTICAL PHYSICS,
ARTIFACTS, AND PITFALLS

Editor:
Alexander C. Mamourian MD
Professor of Radiology
Division of Neuroradiology
Department of Radiology
Perelman School of Medicine of the
University of Pennsylvania
Philadelphia, Pennsylvania

Contributors:
Harold Litt MD, PhD
Assoc. Professor of Radiology and Medicine
Chief, Cardiovascular Imaging
Department of Radiology
Perelman School of Medicine of the
University of Pennsylvania
Philadelphia, Pennsylvania

Nicholas Papanicolaou MD, FACR
Co-Chief, Body CT Section
Professor of Radiology
Department of Radiology
Perelman School of Medicine of the
University of Pennsylvania
Philadelphia, Pennsylvania

Supratik Moulik MD
Fellow, Cardiovascular Imaging
Department of Radiology
Perelman School of Medicine of the
University of Pennsylvania
Philadelphia, Pennsylvania

Josef P. Debbins PhD, PE, DABMP
Staff Scientist
Keller Center for Imaging Innovation
Department of Radiology
St. Joseph's Hospital and Medical Center
Phoenix, Arizona

Oxford University Press is a department of the University of Oxford. It furthers
the University’s objective of excellence
in research, scholarship, and education by publishing worldwide

Oxford New York


Auckland Cape Town Dar es Salaam Hong Kong Karachi
Kuala Lumpur Madrid Melbourne Mexico City Nairobi
New Delhi Shanghai Taipei Toronto

With offices in
Argentina Austria Brazil Chile Czech Republic France Greece
Guatemala Hungary Italy Japan Poland Portugal Singapore
South Korea Switzerland Thailand Turkey Ukraine Vietnam

© Oxford University Press 2013

Published in the United States of America by Oxford University Press


198 Madison Avenue, New York, New York 10016
www.oup.com

Oxford is a registered trademark of Oxford University Press in the UK and certain other countries

All rights reserved. No part of this publication may be reproduced,


stored in a retrieval system, or transmitted, in any form or by any means,
electronic, mechanical, photocopying, recording, or otherwise,
without the prior permission of Oxford University Press.

Library of Congress Cataloging-in-Publication Data


CT imaging : practical physics, artifacts, and pitfalls / editor,
Alexander C. Mamourian; contributors, Harold Litt ... [et al.].
p. ; cm.
Includes bibliographical references and index.
ISBN 978-0-19-978260-4(pbk. : alk. paper)
I. Mamourian, Alexander C. II. Litt, Harold I.
[DNLM: 1. Tomography, X-Ray Computed. 2. Cardiac Imaging Techniques.
3. Nervous System—radiography. 4. Radiation Dosage. 5. Radiation Protection.
6. Whole Body Imaging. WN 206]
LC Classification not assigned
616.07'5722—dc23
2012038160

1 3 5 7 9 8 6 4 2
Printed in the United States of America
on acid-free paper
CONTENTS
Introduction vii
Acknowledgements ix
Dedication xi

1 HISTORY AND PHYSICS OF CT IMAGING 1
  Alexander C. Mamourian

2 RADIATION SAFETY AND RISKS 35
  Alexander C. Mamourian and Josef P. Debbins

3 CARDIAC CT IMAGING TECHNIQUES 55
  Supratik Moulik and Harold Litt

4 CARDIAC CT ARTIFACTS AND PITFALLS 71
  Supratik Moulik and Harold Litt

5 NEURO CT ARTIFACTS 113
  Alexander C. Mamourian

6 NEURO CT PITFALLS 147
  Alexander C. Mamourian

7 BODY CT ARTIFACTS 197
  Nicholas Papanicolaou

8 BODY CT PITFALLS 215
  Nicholas Papanicolaou

9 TEST QUESTIONS 225
  Alexander C. Mamourian

Index 233
INTRODUCTION
I could say that computed tomography (CT) and my career started together, since the first units arrived
in most hospitals the same year that I entered my radiology residency. But while I knew the physics of
CT well at that time, over the next 30 years CT became increasingly complicated in a quiet sort of way.
While MR stole the spotlight during much of that time, studies that were formerly unthinkable, like CT
imaging of the heart and cerebral vasculature, have become routine in clinical practice. But these expand-
ing capabilities of CT have been made possible by increasingly sophisticated hardware and software.
And while most manufacturers provide a clever interface for their CT units that may lull some into
thinking that things are under control, the user must understand both the general principles of CT as
well as the specific capabilities of their machine because of the potential to harm patients with X-rays.
For example, it was reported not long ago that hundreds of patients received an excessive X-ray dose
during their CT brain perfusion exams. Although that was troubling enough, the unusually high dose
was eventually attributed in part to the well-meaning but improper use of software that is commonly
used to reduce patient X-ray dose, but only for specific applications that do not include perfusion.
This book was never intended to be the definitive text on the history, physics, and techniques of CT
scanning. Our goal was to offer a collection of useful advice taken from our experience about modern
CT imaging for an audience of radiology residents, fellows, and technologists. It was an honor and a
pleasure to work with my co-authors, an all-star cast of experts in this field, and it is our collective
hope you will find this book helpful in the same way that the owner's manual that comes with a new
car is helpful; not enough information to rebuild the engine, but what you need to reset the clock when
daylight saving rolls around or change the oil. Many experienced CT users will very likely find some
things useful here as well.
The review of CT hardware in Chapter 1 should get you off to a good start since the early scanners
were just simpler and for that reason easier to understand. The following chapters build on that foun-
dation. Chapter 2 provides a review of the language of X-ray dose and dose reduction, followed by a
comprehensive description of the advanced techniques used for cardiac CT in Chapter 3. Feel free at
any time to explore the cases in Chapters 4 through 8. Most of these include discussions of practical
physics appropriate to that particular artifact or pitfall. In the final chapter, you will find 10 questions
that will test your understanding of CT principles. Take it at the start or at the end to see how you
stand on this topic. While there is a rationale to the arrangement of the book, you may want to keep
it nearby and go to appropriate chapters for those questions that may arise about CT dose, protocols,
and artifacts in your daily practice.
If you get nothing else from reading this book, you should be sure to learn the language of CT dose
explained in Chapter 2. Understanding radiation dose specific to CT has become more important
than ever in this time of increasing patient awareness, CT utilization, and availability of new software
tools for dose reduction. We hope that this book will help you to create the best possible CT images,
at the lowest possible dose, for your patients.
ACKNOWLEDGMENTS
I want to thank Cheryl Boghosian and Neil Roth in New Hampshire, for their wonderful hospitality,
generous spirit, and faithful friendship over many years, and most recently for giving me the time and
space to finish this book. My sincere thanks also go to Andrea Seils at Oxford Press. Every writer
should be blessed with an editor of her caliber. I will be forever grateful to Dr. Robert Spetzler and
all the staff at the Barrow Neurological Institute for giving me the inspiration and the opportunity to
write at all.
DEDICATION
I dedicate this book to my parents, Marcus and Maritza, who have given unselfishly of themselves to
so many.
To Pamela, Ani, Molly, Elizabeth, and Marcus, I can find no words that can express my endless
affection and gratitude.
1 HISTORY AND PHYSICS OF
CT IMAGING
Alexander C. Mamourian

The discovery of X-rays over 100 years ago by Wilhelm Roentgen marks the stunning beginning of
the entire field of diagnostic medical imaging. While the impact of his discovery on the fields of phys-
ics and chemistry followed, the potential for medical uses of X-rays was so apparent from the start
that, within months of his first report, the first clinical image was taken an ocean away in Hanover,
New Hampshire. A photograph of that particular event serves as a reminder of how naïve early users
of X-ray were with regard to adverse effects of radiation (Figure 1.1). We can only hope that our
grandchildren will not look back at our utilization of CT in quite the same way.
Although plain X-ray images remain the standard for long bone fractures and preliminary chest
examinations, they proved to be of little value for the diagnosis of diseases involving the brain, pel-
vis, or abdomen. This is because conventional X-ray images represent the net attenuation of all the
tissue between the X-ray source and the film (Figures 1.2–1.4).
This inability to differentiate tissues of similar density on X-ray is due in part to the requirement for
the X-ray beam to be broad enough to cover all the anatomy at once. As a result of this large beam,
many of the X-rays that are captured on film have been diverted from their original path into other
directions, and these scattered X-rays limit the contrast between similar tissues. This problem was well
known to early imagers, and, prior to the invention of computed tomography (CT), a number of solu-
tions were proposed to accentuate tissue contrast on X-ray images. The most effective of these was a
device that linked the X-ray tube and film holder together, so that they would swing back and forth in
reciprocal directions on either side of the patient, around a single pivot point. This was effective to some

Figure 1.1 This photograph captures the spirit of early X-ray exams. Note the pocket watch used to time the exposure (left ) and the
absence of any type of radiation protection for the patient or observers. The glowing cathode ray tube (positioned over the arm of
the patient, who is sitting with his back to the photographer) was borrowed from the department of physics at Dartmouth College. As
rudimentary as this apparatus might appear, it was effective in demonstrating the patient’s wrist fracture. Image provided courtesy of
Dr. Peter Spiegel, Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire.

Figure 1.2B (schematic): an X-ray beam passes through two rows of blocks with attenuation values 1, 1, 20, 1, 1 and 5, 4, 5, 5, 5;
both rows produce the same net attenuation of 24 at the film.

Figure 1.2 While X-ray images (A) are useful for demonstrating contrast between bone, soft tissue, and air, they are not effective at
showing contrast between tissues of similar attenuation. In this image, the pancreas, liver, and kidneys cannot be identified separately
because they all blend with nearby tissues of similar density. That is in part because the flat X-ray image can only show the net attenu-
ation of all the tissues between the X-ray source and the film or detector. This is illustrated mathematically in B, where these two rows
of blocks of varying attenuation would nevertheless have the same net attenuation on a conventional X-ray image.
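The point of Figure 1.2B can be reproduced in a couple of lines of Python (a minimal sketch using the block values from the figure): a conventional radiograph records only the sum along each ray path, so two very different columns of tissue can be indistinguishable.

```python
# Two rows of tissue "blocks" from Figure 1.2B: very different contents,
# identical net attenuation along the path of the X-ray beam.
row_1 = [1, 1, 20, 1, 1]   # mostly low attenuation with one very dense block
row_2 = [5, 4, 5, 5, 5]    # uniform, moderate attenuation

# A plain radiograph can record only the net (summed) attenuation of each path.
print(sum(row_1))  # 24
print(sum(row_2))  # 24 -- the two rows are indistinguishable on the film
```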

degree because it created blurring of the tissues above and below the pivot plane (Figure 1.5), and this
technique became known simply as tomography. When I was a resident, we used several variations of
this technique for imaging of the kidneys and temporal bones to good effect since the tissues in the
plane of the pivot point were in relatively sharp focus, at least sharper than conventional X-rays.
Computed tomography proved to be much more than an incremental advance over simple X-ray
tomography, however. That is because it both improved tissue contrast and, for the first time, allowed
imagers to see the patient in cross-section. The remarkable sensitivity to tissue contrast offered by
CT was in some sense serendipitous, since it was the byproduct of the use of a very narrow X-ray beam
for data collection (Figure 1.6). This narrow beam, unlike the wide X-ray beam used for plain films,
significantly reduces scatter radiation. For physicians familiar with conventional X-ray images, those
early CT images were really just as remarkable as Roentgen’s original X-ray images.
The benefits offered by CT imaging to health care were formally acknowledged with the 1979
Nobel Prize for medicine going to Godfrey Hounsfield, just 6 years after his first report of it. The
prize was shared with Allan Cormack, in recognition of his contributions to the process of CT image
reconstruction. But this prestigious award was not necessary to bring public attention to this new
imaging device. At the time the Nobel was awarded, there were already over 1,000 CT units operat-
ing or on order worldwide.
Figures 1.3 and 1.4 Another significant limitation of plain film is that there is no indication of depth even when sufficient image
contrast is present. For example, on this single plain film of the skull it appears at first glance that this patient's head is full of metal
pins (1.3). This is because an X-ray image is just a two-dimensional representation of a three-dimensional object, and each point on
the image reflects the sum attenuation of everything that lies between the X-ray source and that point on the film. While you can easily
see that there are a large number of metal pins superimposed on the skull in this example, you cannot tell whether they are on top of
the skull, behind the skull, or inside the skull (perhaps from some terrible industrial accident). The computed tomography (CT) image
of this patient shows that they are, fortunately, hairpins that are outside the skull (Figure 1.4; arrows).

At the time of his discovery, Godfrey Hounsfield was employed by a British firm called EMI
(Electrical and Musical Industries) that had interests in both music and musical hardware. While
EMI is better known now for its association with both Elvis Presley and the Beatles, it was much
more than a small recording company with some good fortune in signing future stars. EMI manu-
factured a broad range of electrical hardware, from record players to giant radio transmitters, and
a fortuitous and unusual combination of broad interests in electronics with substantial financial
support offered by its music contracts apparently gave Hounsfield the latitude necessary for his
distinctly unmusical research into CT imaging. In his lab, he built a device intended to measure the
variations in attenuation across a phantom using a single gamma ray source and single detector.
Gamma rays are, of course, naturally occurring radiation, and so the first device he built did not use
an X-ray tube at all but a constrained radioactive element.
By measuring precisely how much the phantom attenuated the gamma rays in discrete steps from
side to side, and then repeating those measurements in small degrees of rotation around the object,
Hounsfield showed that it was possible to recreate the internal composition of a solid phantom
using exclusively external measurements. While CT is commonplace now, at the start this capabil-
ity to see inside opaque objects must have seemed analogous to Superman’s power to see through
solid walls. That large dataset collected by Hounsfield’s device was then converted into an image
using known mathematical calculations (Figures 1.7, 1.8) with the aid of a computer of that era.
Computed tomography was initially considered to be a variation of existing tomography, so it was called
“computed” tomography, or more accurately computed axial tomography aka CAT scanning. This acro-
nym was commonly a source of humor when confused with the pet (no pun intended), and eventually it
was shortened to just “CT.” Hounsfield was honored for the creation of this remarkable imaging tool by
having the standard unit of CT attenuation named a “Hounsfield unit,” which is abbreviated HU.
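The chapter does not state the formula, but the Hounsfield unit is conventionally defined as a linear rescaling of the measured attenuation coefficient relative to that of water, so that water measures 0 HU and air, which attenuates almost nothing, measures about -1000 HU:

```latex
% Standard definition of the Hounsfield unit (not given explicitly in the text)
\mathrm{HU} = 1000 \times \frac{\mu_{\mathrm{tissue}} - \mu_{\mathrm{water}}}{\mu_{\mathrm{water}}}
```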

Figure 1.5 This drawing from a patent illustration shows the complex mechanics of a tomography device. In this design, the X-ray
tube is under the patient table and the film above. The belt at the bottom drives the to-and-fro movement of the entire apparatus. From
AG Filler. The history, development and impact of computed imaging in neurological diagnosis and neurosurgery: CT, MRI, and DTI.
Doi:10.103/npre.2009.3267.5

The medical implications of his device were quite evident to Hounsfield from his earliest experi-
ments, and EMI was supportive of his research in this direction. As the invention moved into a
clinical imaging tool, the mathematical reconstruction used for initial experiments proved to be too
time-consuming using the computers available at that time. Faster reconstruction was essential for
clinical use and, in recognition of his research that contributed to the faster reconstruction speeds
for CT, Allan Cormack was also recognized with a share of the 1979 Nobel Prize.
In common with many scientific advances, Cormack’s investigations preceded the invention of CT
imaging by many years. It was twenty years prior to Hounsfield’s work, after the resignation of
the only other nuclear physicist in Cape Town, South Africa, that Cormack became responsible for
the supervision of the radiation therapy program at a nearby hospital. Without a dedicated medical
background, he brought a fresh perspective on his new responsibilities and was puzzled at the usual
therapy planning process used at that time. It presumed that the human body was homogeneous as far
as X-rays are concerned, when it clearly was not. He thought that if the tissue-specific X-ray attenua-
tion values for different tissues were known, it would eventually be of benefit not only for therapy but
also for diagnosis. He eventually published his work on this subject in 1963, nearly a decade prior to
Hounsfield’s first report of his CT device. In his Nobel acceptance lecture, Cormack reflected that,
immediately after the publication of his work, it received little attention except from a Swiss center for
avalanche prediction that hoped it would prove to be of value for their purposes. It did not.

Figure 1.6 This early CT of the brain allowed the imager to see the low attenuation CSF within the ventricles as well as the high
attenuation calcifications in the ventricular wall in this patient with tuberous sclerosis.

Axial Versus Helical Imaging

While early CT scanners were quite remarkable in their time, they were really quite slow as they went
about their businesslike “translate-rotate” method of data collection. For example, it took about
5 minutes to accumulate the data for two thick (>10 mm) slices of the brain at an 80 × 80 matrix.
While still remarkable at that time, these scanners were deemed inadequate for much else apart from
brain imaging.
Even with their limitations, early EMI CT scanners were very expensive, costing about $300,000
even in 1978, and that got the attention of many other manufacturers around the world. It
became a race among them to establish a foothold in this lucrative new market. As a result of this
concerted effort, CT scan times dropped rapidly as manufacturers offered faster and better units; as
a result, it was not long before EMI was left behind.
Those first-generation scanners were made obsolete by faster "second-generation" units that used
multiple X-ray sources and detectors. Not long afterward, these second-generation scanners were
surpassed by scanners using what we call “third-generation” design, which eliminated the “trans-
late” movement. Now the X-ray fan beam, along with its curved detector row (Figure 1.9), could
spin around the patient without stopping. That design still remains the preferred arrangement on
current scanners since it readily accommodates large X-ray tubes, both axial and helical imaging,
and wide detector arrays. Since they spin together, the large detector arrays nicely balance the large
X-ray tubes.

Figure 1.7 Hounsfield’s patent on CT included an illustration (upper left drawing labeled A) of the lines of data that were collected
in a translate-rotate pattern, shown here for only three different angles. From AG Filler. The history, development and impact of com-
puted imaging in neurological diagnosis and neurosurgery: CT, MRI, and DTI. Doi:10.103/npre.2009.3267.5

On the early CT units, the only technique of imaging available was what we now call axial mode
or step-and-shoot. The latter term better captures the rhythm of axial mode imaging since all the
data necessary for a single slice is collected (shoot) in a spin before the patient is moved (step) to the
next slice position. While axial mode has advantages in some circumstances and is still available
on scanners, it takes more time than helical scanning since the stepwise movement of the patient is
time-consuming relative to the time spent actually scanning.
On early scanners with only a single detector row, the act of decreasing slice thickness by half
would result in doubling the scan time. That is because scanning the same anatomy but with thinner
sections was just like walking but taking smaller steps. The process of acquiring single axial scans
had other limitations, and many were due to the relatively long scan time. For example, if there were
any patient motion during acquisition of those single scans, misregistration or steps would appear
between slices on reconstruction (Figure 1.10).
Figure 1.8 This illustrates just one pass of the collector and gamma ray source across a phantom containing water surrounding an
aluminum rod. In CT language, this simple motion was called "translate." After each pass across the object, the entire assembly would
rotate 1 degree and collect another projection; so, this motion of first-generation CT scanners was called "translate-rotate."
The line marked "mathematical" shows a numeric representation of the attenuation measurements collected by the detectors
that could be used for image reconstruction. This information can also be represented graphically, as seen in the line "projection."
The first CT images were made using an algebraic reconstruction, but later all CT scanners used the projections in a reconstruction
technique called back-projection, or more specifically filtered back-projection, because it proved to be faster than the purely algebraic
reconstruction using computers of that era.

This aversion to patient motion during axial CT scanning, imprinted on imagers for over a decade,
made the spiral CT technique all the more remarkable when it was introduced in 1990. Now, patient
motion became a requirement for CT scanning. This innovative approach to CT imaging is credited
to Willi Kalender, and the terms “spiral” and later “helical” were used to describe the path now
traced by the rotating X-ray beam onto the moving patient (Figure 1.11).
Helical imaging at first was limited by scanner hardware, and only a short section of anatomy at
a time could be covered in a scan because the wires that attached the X-ray tube to the gantry had
to be unwound. Eventually, CT hardware was improved to maximize the benefits of helical scan-
ning, and once continuous gantry rotation became possible, CT scan times dropped precipitously.
Continuous gantry spin was made possible by the use of slip-ring contacts that conduct power to
the tube and data from the detectors across the rotating gantry connection. But this was not a
uniquely CT invention, as slip rings were already commonplace on tank turrets and home TV
antennas (Figures 1.12, 1.13).
Figure 1.9 The arrangement of tube and detectors in a third-generation CT scanner. Unlike the "translate-rotate" approach, in this
design the tube and detectors move in a circle around the patient. While early versions of this design used a single row, current CT
scanners use the same design but incorporate multiple detector rows, each with hundreds of individual detectors.

Figure 1.10 The irregular contour of this skull (arrows) is due to patient motion during the acquisition of the axial scans used for the
reconstruction.

When we perform CT in the axial mode, the data for one slice goes on to image reconstruction as
a discrete packet of information. In helical mode, since the X-ray beam actually sweeps obliquely to
the moving patient, each of the axial slices must be created using data collected from more than one
of those rotations. The attenuation values for the direct axial slice, or from any other plane for that
matter, are estimated from the known data points that were measured during the helical scan. This
process of estimating the attenuation values in nearby tissue using the known, but only nearby, data,
is called interpolation. It is really very much like the method used to estimate the value of a house
before it is placed on the market. To provide a reliable estimate of a sale price, the appraiser does
not actually add up the value of the many components of a house to determine its market value. The
projected selling price is based almost entirely on the recent sale prices of comparable houses nearby.
For example, if there had been completed sales during the past year of the houses on either side of
your house, the estimated value of your house would be much more reliable than if you were selling
a custom-built ten-room mansion in upper Maine and the closest reference houses were in towns
many miles away. The same principle holds true for helical imaging. The closer the helical wraps are
together, the more accurate those estimated or interpolated attenuation values will be in tissues not
directly in the scan trajectory. This explains why the use of a low pitch, which allows interpolation
over shorter distances, provides better resolution. In cases where a pitch value less than one is used,
the overlapping of the helical sweeps allows the scanner to measure the attenuation of some tissues
more than once, and that also decreases noise, but at the expense of time and patient dose.

Figure 1.11 In axial mode (A), the CT scanner gantry spins around just once, in the plane perpendicular to the patient. In helical
mode (B), the gantry spins in the same plane but continuously while the patient table moves through its center. The combination of these
simultaneous motions (i.e., continuous rotation of the tube and the advancement of the patient) results in an oblique path of the X-ray
beam across the patient. This X-ray beam trajectory can be described as "helical," and this term is preferred instead of "spiral" since
that term implies a continuously changing diameter as well.
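The effect of pitch on this interpolation can be sketched in a few lines of Python, using illustrative attenuation values along the patient's long axis (the same numbers that appear later in Figure 1.24) and numpy's linear interpolation as a stand-in for the scanner's own method: the wider the spacing of the directly measured samples, the further the estimates can stray from the true values.

```python
import numpy as np

# Attenuation values along the patient's long axis (the numbers shown in Figure 1.24).
z_true = np.arange(13)
atten_true = np.array([9, 9, 7, 8, 8, 8, 7, 6, 5, 4, 4, 4, 3], dtype=float)

def worst_interpolation_error(sample_spacing):
    """Keep only the positions the helix actually crossed, linearly interpolate
    the rest, and report the largest error -- the essence of helical estimation."""
    z_measured = z_true[::sample_spacing]
    measured = atten_true[::sample_spacing]
    estimate = np.interp(z_true, z_measured, measured)
    return np.abs(estimate - atten_true).max()

print(worst_interpolation_error(2))   # closely spaced samples (low pitch): error 1.0
print(worst_interpolation_error(4))   # widely spaced samples (high pitch): error 1.5
```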

Multidetector CT: Beam Collimation Versus Detector Collimation

The final advance that will bring us up to date with modern CT scanners was the addition of multiple
detector rows to the helical scanner. It is worth acknowledging that the very first EMI scanners also
acquired more than one slice at a time, so the notion is not entirely new to CT, although the ratio-
nale for it changed with the different generations of scanners. On those very early translate-rotate
scanners, a single rotation around the patient might take 5 minutes, so the use of a pair of detectors
could significantly reduce total scan time. With the arrival of second and third generation designs,
however, the second detector row was dropped presumably to save cost and reduce the complexity
of reconstruction.

Figure 1.12 Hard for many to believe now, but there was a time when the TV signal was collected free using a fixed antenna attached
to the roof of a house. The quality of the TV image was of course related to the strength of the signal received and that meant, for many
rural households far from the transmitters, that decent TV signal reception required sensitive antennas. The best of these could be rotated
remotely from the living room, allowing the viewer to stand near the TV set, optimize the direction of the antenna, and watch the image as the
antenna turned. By using slip ring contacts on the shaft of the antenna (arrows), the antenna could be rotated in either direction without
worrying about later having to climb on the roof to unwrap the antenna wires. This was no small comfort on a cold Vermont winter night.

Twenty years after the EMI scanner, Elscint reintroduced the use of multiple detector row CT, but
the rationale at that time was to limit tube heating during helical scanning. After the arrival of slip ring
scanners, many sites were experiencing unwanted scanner shutdowns when performing wide coverage,
helical imaging and that was because continuous scanning would make the X-ray tubes of that time over-
heat. Once that occurred, it required a forced break from imaging to provide time for the X-ray tube to
cool off. This often occurred at inopportune moments, for example while imaging a patient after major
trauma, and there were few precedents since it had been only rarely encountered previously when using
CT scanners in the axial mode. This was because the time spent moving the patient between each rota-
tion of the gantry, albeit short, provided enough time for the X-ray tube to cool off. Elscint’s design was
intended to limit tube heating by decreasing the duration of the “tube on” time for the helical scans.
Manufacturers quickly found there were other significant benefits of multidetector scanning, even
after the tube heating problems were minimized by the introduction of X-ray tubes with substan-
tially more heat capacity. While early multidetector scanners could provide either faster scan times or
thinner slices, as the number of detector rows increased it became possible to provide both. Over the
course of the next decade, scanners would appear with 4, 8, 16, 64, 128, and most recently 320 rows
(Figure 1.14). Keep in mind that multidetector arrays come at a cost since each detector row still contains
nearly a thousand individual detector elements, and the use of metal dividers between rows to limit
scatter meant that these multi-detector arrays are heavy, difficult to build, and expensive.

Figure 1.13 A slip ring on a CT scanner (arrows). The contacts fixed on the large plate on the left ride on the circular conductive
metal rails to provide power and convey data while the entire gantry freely rotates.

Users need to be aware of exactly how the detector rows are arranged on their scanners since that
can vary among the different manufacturers, and there is almost no way to know their arrangement
intuitively. It is also important to recognize that some scanners provide fewer data channels than the
number of available detector rows. So, a manufacturer may offer a scanner called the “Framostat
40" with only 20 data channels. In that case, you will find that the scans can take longer than
expected when using the thinnest detector collimation because only half of the total detector rows
are active at the smallest detector collimation (Figure 1.15).
The advantage of offering choices for the activation of detector rows is that it gives the user the
options of using either the narrow center detector rows to provide the best detail or using all the
rows for rapid coverage of large anatomic regions. So, keep in mind that your choice of “detector
collimation” is not trivial since it determines not only the scan resolution but also the total number
of detector rows activated, and that has a significant effect on scan time.

CT Image Contrast

At the risk of stating the obvious, the shades of gray on a CT image are based on a linear scale of
attenuation values. Wherever the X-rays are significantly absorbed or deflected, i.e. attenuated, by
the tissues, very few X-rays will arrive at the detectors and those corresponding tissues will appear
white on the image. Wherever there is little or no attenuation of the X-ray beam, more X-rays will
arrive at the detectors and those tissues will be represented as black on the CT image. That is why air
appears black, bone appears white, and fat and brain are represented as shades of gray in between
(Figure 1.16). This direct correlation of just a single value of X-ray attenuation with gray-scale
display differs substantially from magnetic resonance (MR) images, where there are multiple sources
of information displayed on the image, and so a dark area on the image might be attributed to signal
loss from flow, low proton density, or even magnetic susceptibility effects, depending on the scan
technique and the anatomic location.

Figure 1.14 This fountain pen was placed on the plastic shield of this 320-detector-row scanner to provide some perspective on its
width. On this scanner, the detector array is sufficiently wide to cover the entire head in a single rotation of the gantry.
Although CT imaging seems simpler than MR in principle, a number of factors confound our
ability to assign the correct attenuation values to the imaged tissue and there are many illustra-
tions of this problem included in the case files. For example, a renal cyst may appear to have higher
attenuation on CT due to pseudo-enhancement (Chapter 8, pitfall 1), or CSF in the sella may be
mistakenly assigned the same attenuation value as fat due to beam hardening (Chapter 5, artifact
6). So, while CT image display seems to be more straightforward than MR imaging, you must fully
understand the many factors that can confound the accuracy of attenuation values displayed on a
CT scan.

Slice Thickness

While early scanners produced choppy images with visibly large pixels, since they used a matrix of
only 80 × 80, the in-plane resolution of CT images improved quickly. With each new genera-
tion of CT scanner, pixel size decreased to the current submillimeter standard size.
But CT image resolution is determined by voxel size and that is determined by both the pixel size
and the slice thickness.
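A quick calculation shows how pixel and voxel dimensions follow from the field of view, the reconstruction matrix, and the slice thickness (the 25 cm field of view below is only an assumed example, not a value from the text):

```python
# Hypothetical head CT geometry, chosen only for illustration.
field_of_view_mm = 250.0     # assumed 25 cm reconstruction field of view
matrix = 512                 # standard 512 x 512 reconstruction matrix
slice_thickness_mm = 5.0     # a typical "thick" axial reconstruction

pixel_mm = field_of_view_mm / matrix             # ~0.49 mm in-plane pixel size
print(pixel_mm, pixel_mm, slice_thickness_mm)    # voxel ~0.49 x 0.49 x 5.0 mm: anisotropic

# For isotropic (cubic) voxels, the slice thickness must shrink to the pixel size.
print(pixel_mm)   # a ~0.49 mm slice thickness would give cubic voxels here
```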

Figure 1.15 These two different scanners both have 64 detector rows on their detector arrays but provide different usable scan
widths depending on how they are activated. In the top example, the 64 detector rows are each 0.625 mm wide, evenly spaced, and
there is a data channel for each row. This arrangement can offer 0.625 mm detector collimation with a total usable scan width of 4 cm.
In the lower example, there are also 64 rows, but only the center 32 rows are 0.625 mm wide. The remaining 32 rows are all 1.5 mm
wide and arranged as a pair of 16 detector rows on the outside of the array. A scanner with this arrangement would offer only 2 cm of
coverage when using 0.625 mm detector collimation, and that is half that of the upper arrangement.
However, by using the center rows in pairs, they can function like an additional 16 rows of 1.5 mm, and with that arrangement the total
usable array becomes 48 detector rows of 1.5 mm. This would provide 7 cm of coverage with each rotation, nearly twice
that of the upper array. So one manufacturer might offer a scanner with the lower arrangement to provide a wider array width for
rapid body or lung imaging, with the option to do finer imaging, like brain CT angiography. However, a CTA using a detector collimation
of 0.625 mm with the lower array would take twice as long as the same scan using the upper array. You need to know how the detector
elements are arranged to correctly design scan protocols on your scanner for different imaging requirements.
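The trade-off described in Figure 1.15 comes down to simple arithmetic: coverage per rotation equals the number of active detector rows times the row width, and the number of rotations needed to cover a region scales inversely with that coverage. A short sketch using the figure's numbers (the 40 cm scan length is an assumed example, and pitch and overlap are ignored):

```python
def rotations_needed(scan_length_mm, active_rows, row_width_mm):
    """Rotations required to cover a scan length, ignoring pitch and overlap."""
    coverage_per_rotation_mm = active_rows * row_width_mm
    return scan_length_mm / coverage_per_rotation_mm

scan_length_mm = 400.0   # assumed 40 cm of body coverage

# Upper array in Figure 1.15: 64 rows of 0.625 mm -> 4 cm per rotation.
print(rotations_needed(scan_length_mm, 64, 0.625))   # 10 rotations

# Lower array at its finest collimation: only 32 rows of 0.625 mm -> 2 cm.
print(rotations_needed(scan_length_mm, 32, 0.625))   # 20 rotations, twice as long

# Lower array with rows combined into 48 effective 1.5 mm rows -> 7.2 cm.
print(rotations_needed(scan_length_mm, 48, 1.5))     # ~5.6 rotations
```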

Whenever images are created using thick slices, small structures may be obscured because each
voxel is represented by a single attenuation value, and that is determined by the average attenuation
of all the contents. This resembles the presidential primary process for states like Florida. There,
all the delegates are awarded to the overall winner, unlike in New Hampshire, where they are frac-
tionally awarded based on the candidate’s portion of the total vote. For example, if a single voxel
contains both fat (low attenuation) and calcification (high attenuation), the mean attenuation of that
voxel could be exactly the same as normal brain, making both the fat and calcification inapparent
on a CT scan. It is more common to find that a very small, dense calcification that occupies only a
fraction of a voxel will cause the entire voxel to have the attenuation of calcium and that will result
in an exaggeration of actual size of the calcification on a CT scan.
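The same averaging can be shown numerically; the Hounsfield values below are rough, illustrative assumptions rather than figures from the text:

```python
# Approximate attenuation values in Hounsfield units (illustrative assumptions).
fat_hu = -100.0
calcification_hu = 160.0
brain_hu = 30.0

# A voxel split evenly between fat and calcification reports only the mean of
# its contents, which here happens to match normal brain.
print(0.5 * fat_hu + 0.5 * calcification_hu)   # 30.0 -- fat and calcium both hidden

# A tiny, very dense calcification can instead drag the whole voxel upward,
# exaggerating its apparent size on the image.
print(0.9 * brain_hu + 0.1 * 1000.0)           # 127.0 -- the entire voxel looks calcified
```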
Figure 1.16 This patient was lying on an ice bag during the CT exam performed for neck pain. Notice that the ice blocks (arrow) are
darker than the surrounding water. By CT convention, this means that the ice attenuates the X-ray beam less than liquid water. Since
both the liquid water and solid ice have exactly the same molecular composition, this difference in attenuation must be the result of
the slight separation of water molecules as water changes state to crystalline ice. In addition to this high sensitivity of CT imaging to
differences in attenuation, it also provides sufficiently high resolution to show the air (note the dark spots) frozen within the ice.

On early single-slice CT scanners it was undesirable to decrease slice thickness for most imaging
tasks because that significantly increased the time required to complete the scan. However, when
using CT scanners with multiple detector rows, scan time is for all practical purposes independent
of slice thickness. For example, a scanner with 64 channels using submillimeter slice thickness can
provide faster scans over comparable anatomy than can a four-slice scanner using 5 mm slice thick-
ness. This capability of multidetector scanners to provide very thin slices without adding to
scan time has made high-quality multiplanar reconstructions commonplace.

Isotropic Voxels and Reconstructions

The ability to scan with very thin sections has proved to be among the most significant advances of
modern CT imaging. While early CT scanners were capable of providing good quality axial images
when viewed slice by slice, whenever they were reconstructed into another plane of display, the qual-
ity of these reconstructions was surprisingly poor because the slices were so thick. For example, using
a slice thickness of 1 cm meant that the depth of each voxel was more than 10 times larger than the
pixel size. These asymmetric voxels resulted in reconstructions with a striking “stair-step” appear-
ance that were of little diagnostic value apart from gross alignment. However, the ability to scan
using cubic or isotropic voxels in which the slice thickness is the same as the pixel size provides
reconstructions in any plane that are equivalent in quality to the images in the acquisition plane
(Figures 1.17, 1.18).


Figure 1.17 These drawings show the difference between (A) voxels created using thick CT detector collimation, called anisotropic,
and (B) those created using very thin detector collimation, called isotropic voxels. Isotropic, or cubic, voxels are created when the slice
thickness is nearly the same dimension as the length of one side of a pixel. For example, when using a 512 × 512 matrix for scan reconstruction,
the detector collimation needs to be less than 1 mm in order to provide cubic voxels. The advantage of creating isotropic voxels is that the
scan reconstructions in any plane (e.g., sagittal, coronal, or oblique) will be nearly equivalent in quality to images in the plane of acquisition.
Illustrations provided by Dr. Rihan Khan, University of Arizona, Department of Radiology.


Figure 1.18 Sagittal view made from standard 5mm reconstructions (A) and the 0.7mm original scan data (B).

While high-quality reconstructions are routine now for body and neuroimaging, it is important to
consider that when using the thinnest available detector collimation, the signal-to-noise ratio (SNR)
on each slice will be less than that available with either wide detector collimation or with thicker slices
reconstructed from narrow detector collimation data (Figure 1.19).
If the thin sections are to have the same SNR as thicker sections, the radiation dose for the scan
must be increased. In practice, however, this problem is mitigated because the thin sections are
rarely viewed primarily. By reconstructing the submillimeter data images in the desired plane of
section at 3–5 mm slice thickness, the overall SNR is significantly better than that of the thin source
images. The principle of "scan thin, view thick" is the basis of most brain imaging because narrow detector
collimation also minimizes beam hardening artifacts in the posterior fossa and, for helical imaging,
cone beam and partial volume artifacts (Figure 1.20). But, when considering X-ray dose in this
context, keep in mind that a small increase in dose can provide sufficient image quality for high-
quality reconstructions, and that will ultimately save patient dose if it eliminates the need for a
second scan. For example, reconstructing axial CT data of the paranasal sinuses into the coronal
plane eliminates the need for direct coronal scanning and thus reduces the total patient dose for the
scan by nearly 50%.

Figure 1.19 Notice that the noise visible in the 0.625 mm section (A) becomes less apparent after merging data from multiple detector
rows together into a thicker slice, here a 4.5 mm slice (B).

Figure 1.20 Helical imaging requires collecting data from either 180 or 360 degrees of tube rotation so that corresponding views
are available of any structure (note black structure A). However, off-center structures (note black structure B) may be imaged only
once because of the divergence of the X-ray beam necessary for CT scanners with wide detector arrays. This undersampling artifact is
called "partial volume," and it can result in blurring of the margins of that structure. This artifact should not be confused with volume
averaging (see Chapter 7).
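The benefit of "scan thin, view thick" follows from simple noise statistics: for roughly uncorrelated noise, averaging n thin sections reduces the noise standard deviation by about the square root of n, which is what Figure 1.19 shows. A short simulation with purely synthetic numbers (not scanner data) makes the point:

```python
import numpy as np

rng = np.random.default_rng(0)

# Eight 0.625 mm "slices" of a uniform object: signal 100 plus random noise.
thin_slices = 100.0 + rng.normal(0.0, 10.0, size=(8, 256, 256))

# Merge the thin sections into a single 5 mm slice by averaging.
thick_slice = thin_slices.mean(axis=0)

print(round(thin_slices[0].std(), 1))   # ~10.0: noise in one thin slice
print(round(thick_slice.std(), 1))      # ~3.5: roughly 10 / sqrt(8)
```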

Image Reconstruction and Detector Arrays

Hounsfield's first CT experiments used a pure algebraic reconstruction (Figure 1.21) to create images.
In fact, it would appear that his first device was basically designed to collect the numbers in a man-
ner best suited to solve the reconstruction formula.
Although effective, algebraic reconstruction proved impractical for two reasons. First, it is very
computationally demanding, and, second, it is impossible to use straight calculations to solve for the
unknowns in an equation when the known values are not quite correct. That is the case with CT
mathematical reconstructions since CT measurements include noise and a whole variety of artifacts.
While there has been renewed interest in pure algebraic reconstruction techniques now that comput-
ers are fast enough to make it feasible, most CT scanners still use a less demanding approach called
back-projection or more accurately filtered back-projection. The scan information can be thought
of as a series of projections rather than a set of numbers (as shown previously in Figure 1.8). This
technique, patented by Gabriel Frank in 1940, was originally proposed as an optical back-projection
technique 30 years before the discovery of CT (Figure 1.22).
To correct for the edge artifacts that are inherent with back-projection, an additional step is added
to improve the quality of the final CT images. This step is called filtering, although that term should
not be confused with the physical act of filtering the X-ray beam, which is used to eliminate low-energy
X-rays. There are many filters, also called "kernels" (a term that eliminates the confusion with metal
X-ray filters), that the user can choose for reconstructing CT images. These range from "soft" filters
that reduce noise at the expense of some image blurring to "sharp" filters used to display bone but
increase apparent noise. The process of filtering occurs after data acquisition but prior to image
display and cannot be modified by the viewer afterwards. This of course differs from the setting of
window and level used to view the reconstructed images (Figure 1.23).
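A toy filtered back-projection can be written in a couple of dozen lines of Python. This is only a sketch of the idea described above (forward projection by rotating the object and summing, a ramp "filter" applied in the frequency domain, then smearing each filtered profile back across the image); it is not how a clinical scanner implements it.

```python
import numpy as np
from scipy.ndimage import rotate

def forward_project(image, angles_deg):
    """Simulate the scanner: rotate the object and sum along columns to get
    one projection (profile) per angle."""
    return np.array([rotate(image, a, reshape=False, order=1).sum(axis=0)
                     for a in angles_deg])

def filtered_back_projection(sinogram, angles_deg):
    n_views, n_det = sinogram.shape
    # The reconstruction "filter" (kernel): a ramp applied in the frequency domain.
    ramp = np.abs(np.fft.fftfreq(n_det))
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))
    # Back-project: smear each filtered profile across the image plane and
    # rotate it back to the angle at which it was acquired.
    recon = np.zeros((n_det, n_det))
    for profile, a in zip(filtered, angles_deg):
        recon += rotate(np.tile(profile, (n_det, 1)), -a, reshape=False, order=1)
    return recon * np.pi / (2 * n_views)

# A simple phantom: a small dense block in an empty field of view.
phantom = np.zeros((64, 64))
phantom[24:40, 28:36] = 1.0

angles = np.arange(0.0, 180.0, 1.0)
reconstruction = filtered_back_projection(forward_project(phantom, angles), angles)
print(reconstruction[32, 32] > reconstruction[5, 5])   # True: dense inside, near zero outside
```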

      2    3    4   |  9
      1    ?    5   |  9
      7    2    1   | 10
     ----------------
     10    8   10      6  (diagonal)

Figure 1.21 This 3 × 3 matrix demonstrates simply how one can use the sum of all the rows, columns, and diagonals outside the
matrix to predict the value of the central, unknown, cell. In this simple example, the value of the blank cell in the center is of course
3. Early scanners used an 80 × 80 matrix that required hours of calculations using this algebraic approach. That approach was soon
replaced with back-projection reconstruction techniques largely because they are faster.
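The puzzle in Figure 1.21 can be solved mechanically, which is the essence of an algebraic reconstruction: every sum ("projection") measured from outside that passes through the unknown cell gives the same answer by simple subtraction. A minimal sketch of the figure's example:

```python
import numpy as np

# Figure 1.21: a 3 x 3 grid with the center cell unknown (NaN).
grid = np.array([[2.0, 3.0, 4.0],
                 [1.0, np.nan, 5.0],
                 [7.0, 2.0, 1.0]])

row_sums = [9, 9, 10]     # sums measured from outside, like CT projections
col_sums = [10, 8, 10]
diag_sum = 6              # main diagonal

# Every projection through the unknown cell yields the same value.
print(row_sums[1] - (grid[1, 0] + grid[1, 2]))   # 9 - (1 + 5) = 3
print(col_sums[1] - (grid[0, 1] + grid[2, 1]))   # 8 - (3 + 2) = 3
print(diag_sum - (grid[0, 0] + grid[2, 2]))      # 6 - (2 + 1) = 3
```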

Figure 1.22 In this drawing from Gabriel Frank's patent on back-projection, you can see that it was initially intended to be a visual
projection technique. Image B shows the light inside a cylinder that has collected the projections of the revolving object, line by line,
in A. CT now uses a mathematical, not optical, application of this concept for reconstruction. From AG Filler. The history, development
and impact of computed imaging in neurological diagnosis and neurosurgery: CT, MRI, and DTI. Doi:10.103/npre.2009.3267.5

Upon the introduction of helical scanning, a new method for CT reconstruction was necessary
to allow reconstruction of data acquired in a continuous fashion as the patient moved past the
detectors. This style of reconstruction incorporates the notion of estimation, or interpolation, of
attenuation values for those tissues that fall between those actually measured during the X-ray beam
sweep over the body (Figures 1.24, 1.25).
Other challenges had to be addressed with each advance in the complexity of CT technology. For exam-
ple, techniques needed to be developed for reconstruction of data collected simultaneously from
each channel of a large multidetector array in helical mode. This was not simply an issue of handling
larger datasets. As the number of detector rows increased, the X-ray beam increased in width in the
craniocaudal direction to cover the array. That is why the thick fan beam of CT is sometimes called
a “cone beam.” Since the beam arises from a small focal spot on the anode, the X-rays striking the
outer detector rows arrive at a much steeper angle compared with those in the center rows. As a
result, even for a uniform phantom, the X-rays arriving at the outer rows will have a longer path
than those in the center. The already complex reconstruction algorithms now had to accommodate
the differences in X-ray path lengths. As one might expect, these new methods for reconstruction
also introduced some new and unfamiliar artifacts.
The computational requirements for image reconstruction increased as the total number of detectors
used for data acquisition exploded. Considering that most scanners now have 700–1,000 sepa-
rate detectors in each detector row, one rotation of the gantry provides a stunning amount of data to
process. For example, one commercial dual-source scanner has over 77,000 separate detector elements
in its two arrays that are intended to continuously collect data during each subsecond rotation of the
gantry.

Figure 1.23 These four images illustrate the difference between image filtering and windowing. The image in A is processed with a
soft tissue filter and is displayed at a soft tissue window. The image in B shows the same dataset, now processed with a bone filter
but displayed with the same soft tissue window and level as in A. Notice how much more noise is apparent as a result of this
change in filter.
The image in C was also processed using a bone filter, but it is displayed with a bone window and level. Notice how much detail is now
evident in the skull bones. The image in D shows the scan data displayed with the same bone window and level, but reconstructed
using a soft tissue filter. Notice on this image how the bone edges appear much less sharp than in image C.
These paired images illustrate the balance between edge enhancement and noise that is determined by your choice of filter. Your
choice of filter, also called kernel, will indirectly influence the dose necessary to scan the patient since it is an important factor in your
perception of noise on the images.
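Unlike the reconstruction filter, window and level are applied at display time and can be changed freely. The mapping from Hounsfield units to display gray levels is a simple clamp and rescale; the window settings below are typical values assumed for illustration, not settings taken from the text.

```python
import numpy as np

def window_image(hu, level, width):
    """Map Hounsfield units to 0-255 display gray levels for a given
    window level (center) and window width."""
    low, high = level - width / 2.0, level + width / 2.0
    scaled = (np.clip(hu, low, high) - low) / (high - low)
    return (scaled * 255).astype(np.uint8)

hu_values = np.array([-1000, 0, 40, 300, 1500])   # air, water, brain-like, ..., dense bone

print(window_image(hu_values, level=40, width=80))      # narrow "soft tissue" style window
print(window_image(hu_values, level=500, width=2000))   # wide "bone" style window
```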

Cone Beam Imaging

A logical next step in the evolution of cross-sectional imaging would replace the complex multidetector
array with a single flat detector similar to the ones that have replaced image intensifiers used for con-
ventional angiography (Figure 1.26). While there are some similarities in configuration between a wide
multidetector array and a flat-panel detector, there are also some significant differences to consider.
(Values shown in Figure 1.24: 9 9 7 8 8 8 7 6 5 4 4 4 3)

Figure 1.24 When using helical mode for scanning, the X-ray beam trajectory is angled to the long axis of the patient, and this angle
increases as pitch increases. In order to assign attenuation values to the voxels that lie in between the actual beam path, the attenuation
values need to be estimated or “interpolated” from known data points. And, the further away those directly measured points reside, the
greater the degree of estimation. In this drawing the numbers that are not circled must be estimated based on known values determined
from the directly measured points that lie on the oblique lines (solid lines).

(Values shown in Figure 1.25: 9 8 7 7.5 8 7 6 5.5 5 2.5 0 1.5 3)

Figure 1.25 In this drawing there are more known values (circled), so less estimation of the values between the oblique lines is
necessary. Note that the values in between the solid circles are different from those in Figure 1.24. This illustrates why high pitch heli-
cal imaging, since the scan lines are farther apart, will have lower resolution.

Unlike conventional X-ray images, in which both direct and scattered X-rays contribute to the image,
early CT scanners used a relatively narrow X-ray beam that limited the contribution of scattered X-rays
to the final image. As the number of detector rows in the modern CT scanner's detector array increased, the
beam became wide in two directions, and its shape now resembles a cone rather than a fan (Figures 1.27,
1.28) since it must diverge from the anode in two directions, i.e., side to side and top to bottom.
To keep scattered X-rays from striking the detectors when using the wide fan beam in a usual
multidetector scanner, the detector arrays incorporate thin metal plates, called septa, between each
detector row. These septa absorb most of the scattered X-rays and are designed to allow only those
X-rays oriented perpendicular to the detector to contribute to the image. While the use of septa
improves image quality, they add weight and complexity to the array and also add to patient dose.
A CT scanner using a flat-plate detector must also have a wide beam in two dimensions to provide
even coverage of the flat panel. The terminology gets somewhat confusing, since the beam shape
used on a multidetector scanner can also be described as a cone beam, but many authors call any CT
device using a flat-panel detector instead of multiple row detectors a “cone beam scanner.” But, these
flat panel scanners, since the panel does not lend itself well to the use of septa common to multide-
tector scanners, must offer other methods to minimize the deleterious effect of scattered X-rays on
image contrast. The use of a grid, not unlike those used with conventional X-ray films, can improve
image quality, but its use again requires an increase in patient dose. For example, as much as 20%
of the total patient dose may be lost in the septa of a multidetector scanner and it is anticipated that
this percentage could be more when using a grid on a flat panel or cone beam scanner.

Figure 1.26 This image of an angiography unit during assembly demonstrates the flat-panel detector (arrows) at the top of the C arm
with its X-ray tube at the bottom.

Figure 1.27 Multidetector scanners use an X-ray beam pattern that resembles the blades of this kitchen tool, used to cut butter into
flour, since it also diverges in two directions.

Figure 1.28 The usual third-generation CT scanner design has the X-ray tube (top, left image) move around the patient accompanied
by the detectors (bottom, left image) that are rigidly attached opposite the tube on the gantry. Viewed from the side, the X-ray beam
on a single-detector scanner is very narrow from head to foot, like a paper fan (middle drawing). However, to accommodate the
multiple detector arrays on modern scanners, the X-ray beam must be wide from head to foot as well as from side to side (far right
drawing). This figure was provided by Josef Debbins PhD, Barrow Neurological Institute, Phoenix, Arizona.

These factors are considered in the term dose efficiency, and this measurement is the composite
of both the absorption efficiency and geometric efficiency of the scanner hardware. For example,
the early single-slice scanners had a very high geometric efficiency since almost all the X-rays in the
beam were collected by the single detector row. However, those early scanners had relatively low
absorption efficiency because of the materials then available for the detectors. This has improved so
that modern CT scanners offer a very high absorption efficiency, >90%, but a lower geometric effi-
ciency compared with single-slice scanners. This give and take explains the surprising fact that the
patient dose using a single-slice scanner in axial mode may be lower than the dose for an equivalent
CT scan using a modern multidetector scanner in helical mode.
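The give-and-take between the two components of dose efficiency can be illustrated with hypothetical numbers (the percentages below are assumptions chosen only to mirror the comparison in the text, not measured values):

```python
def dose_efficiency(absorption_efficiency, geometric_efficiency):
    """Composite dose efficiency: the fraction of delivered X-rays that
    actually contributes to the image."""
    return absorption_efficiency * geometric_efficiency

# Hypothetical early single-slice scanner: nearly all of the beam falls on the
# single detector row, but the old detector material absorbs poorly.
print(dose_efficiency(absorption_efficiency=0.60, geometric_efficiency=0.98))  # ~0.59

# Hypothetical modern multidetector scanner: >90% absorption, but septa and
# beam overscan waste part of the beam.
print(dose_efficiency(absorption_efficiency=0.92, geometric_efficiency=0.62))  # ~0.57
```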
So, if dose increases and contrast decreases using a flat panel for CT, why bother? One reason is
that flat panel scanners offer the potential for improved resolution compared with multidetector CT.
Another is that a flat panel detector weighs considerably less than a large detector array and that
offers the possibility of faster rotation times. But there is another limit on rotation time, rarely considered these days, called recovery time. The limit on gantry rotation speed is usually considered to be the physical difficulty of spinning a very heavy object at high speed. Another limiting factor, however, is the time necessary for the detectors to reset after each exposure to the X-ray beam. For example, there would be no point in spinning the gantry at four rotations a second if it required a full second for the detectors to return to their baseline state after each exposure. This reset time, also called afterglow, was a problem with older detector designs but is negligible on modern multidetector scanners. Flat-panel detectors, however, require more time for recovery, so even though the gantry can physically spin faster, it won't matter unless faster recovery times for the detector panel become possible.
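The arithmetic behind that four-rotations-a-second example is worth spelling out. If the gantry turns several times per second and each rotation is sampled at hundreds or thousands of projection angles, the detector has only a fraction of a millisecond to recover between views. The numbers below are assumptions chosen only to show the scale:

# Sketch with assumed numbers: the time available for detector recovery
# between successive projection views.

rotation_time_s = 0.25       # assumed: gantry spinning at four rotations per second
views_per_rotation = 1000    # assumed number of projection angles sampled per turn

time_per_view_s = rotation_time_s / views_per_rotation
print(f"about {time_per_view_s * 1e6:.0f} microseconds per view")  # ~250 microseconds

# If detector afterglow lasts longer than this, each view is contaminated by the
# previous one, and spinning the gantry faster buys nothing.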
Dose constraints and potentially lower contrast, along with complex reconstruction algorithms, have so far proved to be obstacles to the commercial development of cone beam CT. But
this design does offer some advantages, and it deserves our continued attention since it is likely that

many of these problems can be addressed with ongoing development of the technique. Cone beam CT is currently offered as an option on some angiography units, where it has proved useful for problem solving during complex interventions and for managing emergencies that arise during them.

Iterative Reconstruction

All current CT scanners use variations of back-projection for image reconstruction. Recently, however, many scanner manufacturers have begun offering mathematical, or algebraic, reconstruction, usually called iterative reconstruction (IR), for their scanners. There are two good reasons why. First, because of the increased utilization of CT, there has been an appropriate emphasis on reducing CT dose. Second, as a result of the relatively low cost of supercomputing capability, it is now feasible to perform algebraic reconstructions at acceptable speed and cost. Early indications suggest that dose reductions on the order of 50–75% are feasible for body imaging with IR, without significant compromise in image quality.
Many variations on this theme are now provided by vendors of CT equipment. Some versions even limit noise by modeling the specific sources of error in the imaging chain, sometimes referred to as the system optics. Others, rather than using a purely mathematical reconstruction, use hybrid techniques that start with traditional filtered back-projection and then apply a mathematical technique to reduce noise by comparing that reconstruction to the raw data in an iterative process.
The term “iterative reconstruction” describes a process of revising the image data in order to provide a “best fit” with the actual scan data. This is done in a continually updating, or iterative, process, much like the way one fills in a crossword puzzle (Figure 1.29). The reason most of us use a pencil for these puzzles is that we may find reason to reconsider our answer to an “across” clue once we figure out the “down” clue at the same location. Iterative reconstruction works in a similarly simple way: the software takes its best shot at creating the image, goes back to the raw data to see how well it did, adjusts a few things, and checks again to see whether that fits any better.
One reason this cannot be accomplished in a single, powerful calculation is that the raw data itself contains errors and noise. As a result, there is no exact solution to the calculations, and the most that can be hoped for is a “best fit” for that dataset. Think of it as a crossword puzzle in which, at several spots on the grid, no word can satisfy both the “across” and “down” clues.
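For readers who like to see the idea in code, here is a minimal sketch of that guess-compare-adjust loop, written as a toy algebraic reconstruction with made-up numbers. It is not any manufacturer's algorithm; real IR implementations model the scanner geometry, photon statistics, and system optics far more carefully.

import numpy as np

# Toy iterative reconstruction (a Landweber/SIRT-style update), illustrative only.
# A : assumed system matrix mapping the image to its projections
# p : measured projection data, which contains noise, so no exact solution exists
# x : the current best guess of the image

def iterative_reconstruction(A: np.ndarray, p: np.ndarray,
                             n_iter: int = 500, step: float = 0.05) -> np.ndarray:
    x = np.zeros(A.shape[1])              # start from an empty image
    for _ in range(n_iter):
        residual = p - A @ x              # compare the current guess with the raw data
        x = x + step * (A.T @ residual)   # adjust the image to reduce the mismatch
    return x

# Tiny demo with a made-up 4-pixel "image" and 6 noisy projection measurements.
rng = np.random.default_rng(0)
A = rng.random((6, 4))
true_image = np.array([1.0, 0.5, 0.0, 2.0])
p = A @ true_image + rng.normal(scale=0.01, size=6)
print(iterative_reconstruction(A, p))     # converges toward the true image

Because the simulated measurements contain noise, the loop settles on a best fit rather than an exact answer, which is the point of the crossword analogy above.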
While IR can be used to reduce dose or to improve image quality at the same dose, it requires special software and computer hardware, and it currently adds processing time. Nevertheless, it holds considerable promise for significant dose reduction and will likely be widely adopted in some fashion. This approach also offers new tools for minimizing streaks that arise from implanted metal.
Other, less expensive postprocessing options are available that do not refer back to the raw data in the same way. These should be considered carefully, since they present the risk of creating “pretty” images at the expense of smoothing over clinically important contrast. For example,
a postprocessing algorithm that eliminates noise in homogeneous areas of anatomy could potentially
obscure true but subtle differences in attenuation. But iterative reconstruction combined with large
decreases in dose will without doubt have its own limitations, and it will take some time to validate
all these new techniques in the clinical arena before they can be used with complete confidence.

Figure 1.29 In A, you could choose “Bogart” for #1 down: a six-letter word for a leading actor in the movie Casablanca, but you would have to revise it when you find that the first letter of the word must be “I” after you fill in #5 across: title of a film nominated for nine academy awards starring Yul Brynner and Deborah Kerr (B).

Gantry Angulation and Image Display

Most single-slice CT scanners included a mechanism to tilt the scanner gantry relative to the patient
table. This was used on a regular basis to optimize the plane of imaging for axial brain scanning or,
when combined with head tilt, to provide direct coronal images of the brain or sinuses. One substan-
tial benefit to angulation on early scanners was that, by using tilt, one could minimize the number of

slices necessary to cover the brain. And when imaging with CT was considered in terms of “minutes
per slice,” eliminating one slice was not trivial. As scanner speed improved, the primary function of angulation in brain imaging became dose reduction to the eyes, and it was generally recommended to exclude the eyes from the scan since they are susceptible to radiation injury.
Now, however, on most scanners in helical mode and those units with large multidetector arrays
or two sources, gantry tilt is not available for brain imaging. In spite of this change in hardware,
it is commonplace to continue to present head CT scans with the traditional angulation since it is
familiar to imagers and it makes comparison with prior CT scans easier.
Gantry tilt, combined with patient positioning, was once used to provide direct coronal imaging of the temporal bones and paranasal sinuses, but because most modern scanners offer near-isotropic voxels, direct coronal imaging is no longer necessary. Now even sagittal reconstructions, formerly unthinkable, are routine. In fact, isotropic voxel imaging has created an imaging environment that resembles MR, since even oblique reconstructions of diagnostic quality are now available on multidetector scanners in both axial and helical modes (Figure 1.30A and B).
The loss of gantry angulation has created two new problems, however. The radiation dose to the
eye is lowest on those scanners that offer gantry angulation if the user prescribes the scan angle and
range to exclude the orbits. On scanners that do not allow gantry angulation, however, the eyes are always included in the scan, and the imager may not be aware of this if the data are reconstructed into the traditional display angle.
So, while the lens is always included on head scans performed on new scanners without gantry
angulation, the measured dose to the eye during direct helical imaging with a modern multislice
scanner is still quite low. This represents another one of the compromises of CT imaging. As scan-
ners enlarged to incorporate multiple detector rows, the tilt option was lost but the potential for
increased dose was offset by more sophisticated automatic exposure control, beam filtering, and
diminished dose from overbeaming with more detector rows (see Chapter 2, Overbeaming). While
the use of automatic exposure control for brain imaging might otherwise seem of little value for a roughly spherical object like the head, it can be worthwhile because it provides additional dose reduction to the lens. Another option to reduce lens dose is the use of bismuth X-ray-attenuating eyecups, but this adds cost and time (see Chapter 2, Shielding).

Figure 1.30 A, B This high-quality coronal CT image (A) was reconstructed from the thin-section axial imaging data. Note the small defect in the bone of the sphenoid sinus (arrow) that corresponds to the site of a CSF leak noted on the coronal MR T2-weighted scan (B, arrow).
The second problem encountered with brain scans performed without gantry tilt is that the user
needs to be attentive to artifacts from hardware in the mouth, such as amalgam, crowns, and
implanted posts. While these were almost never an issue when gantry tilt was used, the metal artifacts arising from X-ray shadowing behind these very dense materials frequently project directly over the posterior fossa and, in some cases, significantly degrade the diagnostic value of the CT scan
(Figure 1.31). One option to minimize this artifact is to instruct cooperative patients to tuck their
chins during the scan. This recreates the traditional imaging angle without requiring gantry angula-
tion and should be helpful in limiting the metal artifacts from teeth and, if carefully done, it offers
the potential for reducing eye dose as well.
Medical practice is at times an odd mix of eager acceptance of new technology and rigid resis-
tance to change in almost every other way. With the arrival of scanners that lack the capability of gantry angulation, the only real benefit of viewing CT brain scans in the old fashion is that the orientation is familiar to imagers. Straight imaging would in many respects make it easier to compare CT scans with MR scans, since the latter are routinely displayed without angulation (Chapter 5, Artifact 9). But it seems likely that, as more centers move to isotropic imaging of the brain, head scans will eventually be presented in two or three orthogonal planes for review, similar to the way most body CT images are displayed now.


Figure 1.31 The axial CT scan (A) shows considerable artifact overlying the cranio-cervical junction without a clear source. The
coronal reconstruction (B) shows that the streaks arise from dental amalgam and project over the skull base in this case because no gantry tilt was available on this scanner.