CT IMAGING
PRACTICAL PHYSICS,
ARTIFACTS, AND PITFALLS
Editor:
Alexander C. Mamourian MD
Professor of Radiology
Division of Neuroradiology
Department of Radiology
Perelman School of Medicine of the
University of Pennsylvania
Philadelphia, Pennsylvania
Contributors:
Harold Litt MD, PhD
Assoc. Professor of Radiology and Medicine
Chief, Cardiovascular Imaging
Department of Radiology
Perelman School of Medicine of the University of Pennsylvania
Philadelphia, Pennsylvania

Nicholas Papanicolaou MD, FACR
Co-Chief, Body CT Section
Professor of Radiology
Department of Radiology
Perelman School of Medicine of the University of Pennsylvania
Philadelphia, Pennsylvania
Oxford University Press, 2013

Oxford University Press is a department of the University of Oxford. It furthers the University's objective of excellence in research, scholarship, and education by publishing worldwide.

With offices in Argentina, Austria, Brazil, Chile, Czech Republic, France, Greece, Guatemala, Hungary, Italy, Japan, Poland, Portugal, Singapore, South Korea, Switzerland, Thailand, Turkey, Ukraine, and Vietnam.

Oxford is a registered trademark of Oxford University Press in the UK and certain other countries.

Printed in the United States of America on acid-free paper.
CONTENTS

Introduction
Acknowledgements
Dedication
Index
INTRODUCTION
I could say that computed tomography (CT) and my career started together, since the first units arrived
in most hospitals the same year that I entered my radiology residency. But while I knew the physics of
CT well at that time, over the next 30 years CT became increasingly complicated in a quiet sort of way.
While MR stole the spotlight during much of that time, studies that were formerly unthinkable, like CT
imaging of the heart and cerebral vasculature, have become routine in clinical practice. But these expanding
capabilities of CT have been made possible by increasingly sophisticated hardware and software.
And while most manufacturers provide a clever interface for their CT units that may lull some into
thinking that things are under control, the user must understand both the general principles of CT as
well as the specific capabilities of their machine because of the potential to harm patients with X-rays.
For example, it was reported not long ago that hundreds of patients received an excessive X-ray dose
during their CT brain perfusion exams. Although that was troubling enough, the unusually high dose
was eventually attributed in part to the well-meaning but improper use of dose-reduction software
that is intended only for specific applications, which do not include perfusion.
This book was never intended to be the definitive text on the history, physics, and techniques of CT
scanning. Our goal was to offer a collection of useful advice taken from our experience about modern
CT imaging for an audience of radiology residents, fellows, and technologists. It was an honor and a
pleasure to work with my co-authors, an all-star cast of experts in this field, and it is our collective
hope you will find this book helpful in the same way that the owner's manual that comes with a new
car is helpful: not enough information to rebuild the engine, but what you need to reset the clock when
daylight saving rolls around or change the oil. Many experienced CT users will very likely find some
things useful here as well.
The review of CT hardware in Chapter 1 should get you off to a good start since the early scanners
were just simpler and for that reason easier to understand. The following chapters build on that foun-
dation. Chapter 2 provides a review of the language of X-ray dose and dose reduction, followed by a
comprehensive description of the advanced techniques used for cardiac CT in Chapter 3. Feel free at
any time to explore the cases in Chapters 4 through 8. Most of these include discussions of practical
physics appropriate to that particular artifact or pitfall. In the final chapter, you will find 10 questions
that will test your understanding of CT principles. Take it at the start or at the end to see where you
stand on this topic. While there is a rationale to the arrangement of the book, you may want to keep
it nearby and go to the appropriate chapters when questions arise about CT dose, protocols,
and artifacts in your daily practice.
If you get nothing else from reading this book, you should be sure to learn the language of CT dose
explained in Chapter 2. Understanding radiation dose specific to CT has become more important
than ever in this time of increasing patient awareness, CT utilization, and availability of new software
tools for dose reduction. We hope that this book will help you to create the best possible CT images,
at the lowest possible dose, for your patients.
ACKNOWLEDGMENTS
I want to thank Cheryl Boghosian and Neil Roth in New Hampshire, for their wonderful hospitality,
generous spirit, and faithful friendship over many years, and most recently for giving me the time and
space to finish this book. My sincere thanks also go to Andrea Seils at Oxford Press. Every writer
should be blessed with an editor of her caliber. I will be forever grateful to Dr. Robert Spetzler and
all the staff at the Barrow Neurological Institute for giving me the inspiration and the opportunity to
write at all.
This page intentionally left blank
DEDICATION
I dedicate this book to my parents, Marcus and Maritza, who have given unselfishly of themselves to
so many.
To Pamela, Ani, Molly, Elizabeth, and Marcus, I can find no words that can express my endless
affection and gratitude.
1 HISTORY AND PHYSICS OF
CT IMAGING
Alexander C. Mamourian
The discovery of X-rays over 100 years ago by Wilhelm Roentgen marks the stunning beginning of
the entire field of diagnostic medical imaging. While the impact of his discovery on the fields of phys-
ics and chemistry followed, the potential for medical uses of X-rays was so apparent from the start
that, within months of his first report, the first clinical image was taken an ocean away in Hanover,
New Hampshire. A photograph of that particular event serves as a reminder of how naïve early users
of X-ray were with regard to adverse effects of radiation (Figure 1.1). We can only hope that our
grandchildren will not look back at our utilization of CT in quite the same way.
Although plain X-ray images remain the standard for long bone fractures and preliminary chest
examinations, they proved to be of little value for the diagnosis of diseases involving the brain, pel-
vis, or abdomen. This is because conventional X-ray images represent the net attenuation of all the
tissue between the X-ray source and the film (Figures 1.2–1.4).
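The arithmetic behind this limitation is easy to demonstrate. Below is a minimal Python sketch (not from the book) using the same block values as Figure 1.2B: two very different stacks of tissue produce identical ray sums, so they are indistinguishable on film.

```python
# Net attenuation along a ray: the film records only the SUM of the
# attenuation of everything in the beam path.  Values from Figure 1.2B.
row_a = [1, 1, 20, 1, 1]   # mostly lucent tissue with one dense block
row_b = [5, 4, 5, 5, 5]    # uniform, moderately dense tissue

print(sum(row_a), sum(row_b))  # 24 and 24, so identical film density
```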
This inability to differentiate tissues of similar density on X-ray is due in part to the requirement for
the X-ray beam to be broad enough to cover all the anatomy at once. As a result of this large beam,
many of the X-rays that are captured on film have been diverted from their original path into other
directions, and these scattered X-rays limit the contrast between similar tissues. This problem was well
known to early imagers, and, prior to the invention of computed tomography (CT), a number of solu-
tions were proposed to accentuate tissue contrast on X-ray images. The most effective of these was a
device that linked the X-ray tube and film holder together, so that they would swing back and forth in
reciprocal directions on either side of the patient, around a single pivot point. This was effective to some
Figure 1.1 This photograph captures the spirit of early X-ray exams. Note the pocket watch used to time the exposure (left) and the
absence of any type of radiation protection for the patient or observers. The glowing cathode ray tube (positioned over the arm of
the patient, who is sitting with his back to the photographer) was borrowed from the department of physics at Dartmouth College. As
rudimentary as this apparatus might appear, it was effective in demonstrating the patient’s wrist fracture. Image provided courtesy of
Dr. Peter Spiegel, Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire.
[Figure 1.2 schematic: in panel B, two rows of tissue blocks (1 + 1 + 20 + 1 + 1 and 5 + 4 + 5 + 5 + 5) both produce the same net attenuation of 24 at the film.]
Figure 1.2 While X-ray images (A) are useful for demonstrating contrast between bone, soft tissue, and air, they are not effective at
showing contrast between tissues of similar attenuation. In this image, the pancreas, liver, and kidneys cannot be identified separately
because they all blend with nearby tissues of similar density. That is in part because the flat X-ray image can only show the net attenu-
ation of all the tissues between the X-ray source and the film or detector. This is illustrated mathematically in B, where these two rows
of blocks of varying attenuation would nevertheless have the same net attenuation on a conventional X-ray image.
degree because it created blurring of the tissues above and below the pivot plane (Figure 1.5), and this
technique became known simply as tomography. When I was a resident, we used several variations of
this technique for imaging of the kidneys and temporal bones to good effect since the tissues in the
plane of the pivot point were in relatively sharp focus, at least sharper than conventional X-rays.
Computed tomography proved to be much more than an incremental advance over simple X-ray
tomography, however. That is because it both improved tissue contrast and, for the first time, allowed
imagers to see the patient in cross-section. The remarkable sensitivity to tissue contrast offered by
CT was in some sense serendipitous, since it was the byproduct of the use of a very narrow X-ray beam
for data collection (Figure 1.6). This narrow beam, unlike the wide X-ray beam used for plain films,
significantly reduces scatter radiation. For physicians familiar with conventional X-ray images, those
early CT images were really just as remarkable as Roentgen's original X-ray images.
The benefits offered by CT imaging to health care were formally acknowledged with the 1979
Nobel Prize for medicine going to Godfrey Hounsfield, just 6 years after his first report of it. The
prize was shared with Allan Cormack, in recognition of his contributions to the process of CT image
reconstruction. But this prestigious award was not necessary to bring public attention to this new
imaging device. At the time the Nobel was awarded, there were already over 1,000 CT units operat-
ing or on order worldwide.
At the time of his discovery, Godfrey Hounsfield was employed by a British firm called EMI
(Electrical and Musical Industries) that had interests in both music and electrical hardware. While
EMI is better known now for its association with both Elvis Presley and the Beatles, it was much
more than a small recording company with some good fortune in signing future stars. EMI manu-
factured a broad range of electrical hardware, from record players to giant radio transmitters, and
Figure 1.3 and 1.4 Another significant limitation of plain film is that there is no indication of depth even when sufficient image
contrast is present. For example, on this single plain film of the skull it appears at first glance that this patient’s head is full of metal
pins (1.3). This is because an X-ray image is just a two-dimensional representation of a three-dimensional object, and each point on
the image reflects the sum attenuation of everything that lies between the X-ray source and that point on the film. While you can easily
see that there are a large number of metal pins superimposed on the skull in this example, you cannot tell whether they are on top of
the skull, behind the skull, or inside the skull (perhaps from some terrible industrial accident). The computed tomography (CT) image
of this patient shows that they are, fortunately, hairpins that are outside the skull (Figure 1.4; arrows).
a fortuitous and unusual combination of broad interests in electronics with substantial financial
support offered by its music contracts apparently gave Hounsfield the latitude necessary for his
distinctly unmusical research into CT imaging. In his lab, he built a device intended to measure the
variations in attenuation across a phantom using a single gamma ray source and single detector.
Gamma rays are, of course, naturally occurring radiation, and so the first device he built did not use
an X-ray tube at all but a constrained radioactive element.
By measuring precisely how much the phantom attenuated the gamma rays in discrete steps from
side to side, and then repeating those measurements in small degrees of rotation around the object,
Hounsfield showed that it was possible to recreate the internal composition of a solid phantom
using exclusively external measurements. While CT is commonplace now, at the start this capabil-
ity to see inside opaque objects must have seemed analogous to Superman’s power to see through
solid walls. That large dataset collected by Hounsfield’s device was then converted into an image
using known mathematical calculations (Figures 1.7, 1.8) with the aid of a computer of that era.
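As a rough illustration of that measurement scheme, the Python sketch below simulates translate-rotate data collection on a digital phantom. The phantom, step sizes, and use of scipy's rotate function (standing in for physically rotating the apparatus) are all invented for the example.

```python
import numpy as np
from scipy.ndimage import rotate

# Translate-rotate acquisition: at each angle, measure the ray sums
# ("translate"), then rotate the whole assembly and repeat ("rotate").
phantom = np.zeros((64, 64))
phantom[24:40, 24:40] = 1.0            # a dense block in a lucent background

projections = []
for angle in range(0, 180):            # 1-degree steps, as on the first scanners
    view = rotate(phantom, angle, reshape=False, order=1)
    projections.append(view.sum(axis=0))   # external measurements only

sinogram = np.array(projections)       # the raw data for image reconstruction
```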
Computed tomography was initially considered to be a variation of existing tomography, so it was called
"computed" tomography or, more accurately, computed axial tomography, a.k.a. CAT scanning. This acronym
was commonly a source of humor because of its feline namesake (no pun intended), and eventually it
was shortened to just "CT." Hounsfield was honored for the creation of this remarkable imaging tool by
having the standard unit of CT attenuation named a “Hounsfield unit,” which is abbreviated HU.
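The Hounsfield scale itself is a simple linear rescaling of measured attenuation, defined so that water is 0 HU and air is about -1000 HU. A short sketch follows; the attenuation coefficients are illustrative round numbers, not from the book.

```python
def hounsfield_units(mu, mu_water):
    """HU = 1000 * (mu - mu_water) / mu_water; water = 0 HU by definition."""
    return 1000.0 * (mu - mu_water) / mu_water

mu_water, mu_air, mu_bone = 0.19, 0.0002, 0.38   # linear attenuation, 1/cm
print(round(hounsfield_units(mu_air, mu_water)))   # -999, i.e., air
print(round(hounsfield_units(mu_bone, mu_water)))  # +1000, i.e., dense bone
```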
Figure 1.5 This drawing from a patent illustration shows the complex mechanics of a tomography device. In this design, the X-ray
tube is under the patient table and the film above. The belt at the bottom drives the to-and-fro movement of the entire apparatus. From
AG Filler. The history, development and impact of computed imaging in neurological diagnosis and neurosurgery: CT, MRI, and DTI.
doi:10.1038/npre.2009.3267.5
The medical implications of his device were quite evident to Hounsfield from his earliest experi-
ments, and EMI was supportive of his research in this direction. As the invention evolved into a
clinical imaging tool, the mathematical reconstruction used for the initial experiments proved to be too
time-consuming using the computers available at that time. Faster reconstruction was essential for
clinical use and, in recognition of his research that contributed to the faster reconstruction speeds
for CT, Allan Cormack was also recognized with a share of the 1979 Nobel Prize.
In common with many scientific advances, Cormack’s investigations preceded the invention of CT
imaging by many years. It was twenty years prior to Hounsfield's work, after the resignation of
the only other nuclear physicist in Cape Town, South Africa, that Cormack became responsible for
the supervision of the radiation therapy program at a nearby hospital. Without a dedicated medical
background, he brought a fresh perspective to his new responsibilities and was puzzled by the usual
therapy planning process used at that time. It presumed that the human body was homogeneous as far
as X-rays are concerned, when it clearly was not. He thought that if the tissue-specific X-ray attenua-
tion values for different tissues were known, it would eventually be of benefit not only for therapy but
also for diagnosis. He eventually published his work on this subject in 1963, nearly a decade prior to
Hounsfield’s first report of his CT device. In his Nobel acceptance lecture, Cormack reflected that,
immediately after the publication of his work, it received little attention except from a Swiss center for
avalanche prediction that hoped it would prove to be of value for their purposes. It did not.
Figure 1.6 This early CT of the brain allowed the imager to see the low attenuation CSF within the ventricles as well as the high
attenuation calcifications in the ventricular wall in this patient with tuberous sclerosis.
While early CT scanners were remarkable in their time, they were really quite slow as they went
about their businesslike "translate-rotate" method of data collection. For example, it took about
5 minutes to accumulate the data for two thick (>10 mm) slices of the brain at an 80 × 80 matrix.
Remarkable as that was at the time, these scanners were deemed inadequate for much else apart from
brain imaging.
Even with their limitations, early EMI CT scanners were very expensive, costing about $300,000
even in 1978, and that got the attention of many other manufacturers around the world. It
became a race among them to establish a foothold in this lucrative new market. As a result of this
concerted effort, CT scan times dropped rapidly as manufacturers offered faster and better units; as
a result, it was not long before EMI was left behind.
Those first-generation scanners were made obsolete by faster "second-generation" units that used
multiple X-ray sources and detectors. Not long afterward, these second-generation scanners were
surpassed by scanners using what we call “third-generation” design, which eliminated the “trans-
late” movement. Now the X-ray fan beam, along with its curved detector row (Figure 1.9), could
spin around the patient without stopping. That design still remains the preferred arrangement on
current scanners since it readily accommodates large X-ray tubes, both axial and helical imaging,
and wide detector arrays. Since they spin together, the large detector arrays nicely balance the large
X-ray tubes.
Figure 1.7 Hounsfield’s patent on CT included an illustration (upper left drawing labeled A) of the lines of data that were collected
in a translate-rotate pattern, shown here for only three different angles. From AG Filler. The history, development and impact of com-
puted imaging in neurological diagnosis and neurosurgery: CT, MRI, and DTI. Doi:10.103/npre.2009.3267.5
On the early CT units, the only technique of imaging available was what we now call axial mode
or step-and-shoot. The latter term better captures the rhythm of axial mode imaging since all the
data necessary for a single slice is collected (shoot) in a spin before the patient is moved (step) to the
next slice position. While axial mode has advantages in some circumstances and is still available
on scanners, it takes more time than helical scanning since the stepwise movement of the patient is
time-consuming relative to the time spent actually scanning.
On early scanners with only a single detector row, the act of decreasing slice thickness by half
would result in doubling the scan time. That is because scanning the same anatomy but with thinner
sections was just like walking but taking smaller steps. The process of acquiring single axial scans
had other limitations, and many were due to the relatively long scan time. For example, if there were
any patient motion during acquisition of those single scans, misregistration or steps would appear
between slices on reconstruction (Figure 1.10).
This aversion to patient motion during axial CT scanning, imprinted on imagers for over a decade,
made the spiral CT technique all the more remarkable when it was introduced in 1990. Now, patient
Figure 1.8 This illustrates just one pass of the collector and gamma ray source across a phantom containing water surrounding an
aluminum rod. In CT language, this simple motion was called "translate." After each pass across the object, the entire assembly would
rotate 1 degree and collect another projection; so, this motion of first-generation CT scanners was called “translate-rotate.”
The line below marked “mathematical” shows a numeric representation of the attenuation measurements collected by the detectors
that could be used for image reconstruction. This information can also be represented graphically, as seen in the line “projection.”
The first CT images were made using an algebraic reconstruction, but later all CT scanners used the projections in a reconstruction
technique called back-projection, or more specifically filtered back-projection, because it proved to be faster than the purely algebraic
reconstruction using computers of that era.
motion became a requirement for CT scanning. This innovative approach to CT imaging is credited
to Willi Kalender, and the terms “spiral” and later “helical” were used to describe the path now
traced by the rotating X-ray beam onto the moving patient (Figure 1.11).
Helical imaging at first was limited by scanner hardware, and only a short section of anatomy at
a time could be covered in a scan because the wires that attached the X-ray tube to the gantry had
to be unwound. Eventually, CT hardware was improved to maximize the benefits of helical scan-
ning, and once continuous gantry rotation became possible, CT scan times dropped precipitously.
Continuous gantry spin was made possible by the use of slip-ring contacts that conduct power to the
tube and data from the detectors. But this was not a uniquely CT invention, as slip rings were already
commonplace on tank turrets and home TV antennas (Figures 1.12, 1.13).
When we perform CT in the axial mode, the data for one slice goes on to image reconstruction as
a discrete packet of information. In helical mode, since the X-ray beam actually sweeps obliquely
across the moving patient, each of the axial slices must be created using data collected from more than one
of those rotations. The attenuation values for the direct axial slice, or from any other plane for that
Figure 1.9 The arrangement of tube and detectors in a third-generation CT scanner. Unlike the “translate-rotate” approach, in this
design, the tube and detectors move in a circle around the patient. While early versions of this design used a single row, current CT
scanners use the same design but incorporate multiple detector rows, each with hundreds of individual detector elements.
Figure 1.10 The irregular contour of this skull (arrows) is due to patient motion during the acquisition of the axial scans used for the
reconstruction.
matter, are estimated from the known data points that were measured during the helical scan. This
process of estimating the attenuation values in nearby tissue using the known, but only nearby, data,
is called interpolation. It is really very much like the method used to estimate the value of a house
before it is placed on the market. To provide a reliable estimate of a sale price, the appraiser does
not actually add up the value of the many components of a house to determine its market value. The
Figure 1.11 In axial mode (A), the CT scanner gantry spins around just once, in the plane perpendicular to the patient. In helical
mode (B), the gantry spins in the same plane but continuously while the patient table moves through its center. The combination of these
simultaneous motions (i.e., continuous rotation of the tube and advancement of the patient) results in an oblique path of the X-ray
beam across the patient. This X-ray beam trajectory can be described as "helical," and this term is preferred over "spiral" since
the latter implies a continuously changing diameter as well.
projected selling price is based almost entirely on the recent sale prices of comparable houses nearby.
For example, if there had been completed sales during the past year of the houses on either side of
your house, the estimated value of your house would be much more reliable than if you were selling
a custom-built ten-room mansion in upper Maine where the closest reference houses were in towns
many miles away. The same principle holds true for helical imaging. The closer the helical wraps are
together, the more accurate those estimated or interpolated attenuation values will be in tissues not
directly in the scan trajectory. This explains why the use of a low pitch, which allows interpolation
over shorter distances, provides better resolution. In cases where a pitch value less than one is used,
the overlapping of the helical sweeps allows the scanner to measure attenuation of some of the tissues
more than once and that also decreases noise but at the expense of time and patient dose.
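A toy calculation makes the appraisal analogy concrete. The sketch below uses simple linear interpolation and invented numbers (real scanners use more elaborate weighting schemes) to estimate one slice from two directly measured helical samples; the farther apart the samples, the less the estimate can reflect local detail.

```python
def interpolate_slice(z, z1, v1, z2, v2):
    """Linearly interpolate a value at position z from samples at z1 and z2."""
    w = (z - z1) / (z2 - z1)
    return (1 - w) * v1 + w * v2

# Low pitch: measured samples 2 mm apart closely bracket the slice at z = 10.
print(interpolate_slice(10.0, z1=9.0, v1=40.0, z2=11.0, v2=44.0))  # 42.0
# High pitch: samples 8 mm apart; any structure between them is smoothed over.
print(interpolate_slice(10.0, z1=6.0, v1=35.0, z2=14.0, v2=50.0))  # 42.5
```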
The final advance that will bring us up to date with modern CT scanners was the addition of multiple
detector rows to the helical scanner. It is worth acknowledging that the very first EMI scanners also
acquired more than one slice at a time, so the notion is not entirely new to CT, although the ratio-
nale for it changed with the different generations of scanners. On those very early translate-rotate
scanners, a single rotation around the patient might take 5 minutes, so the use of a pair of detectors
could significantly reduce total scan time. With the arrival of second and third generation designs,
however, the second detector row was dropped presumably to save cost and reduce the complexity
of reconstruction.
Figure 1.12 Hard for many to believe now, but there was a time when the TV signal was collected free using a fixed antenna attached
to the roof of a house. The quality of the TV image was of course related to the strength of the signal received and that meant, for many
rural households far from the transmitters, that decent TV signal reception required sensitive antennas. The best of these could be rotated
remotely from the living room, so that the viewer could stand near the TV set and watch the image while optimizing the direction of the
antenna. By using slip ring contacts on the shaft of the antenna (arrows), the antenna could be rotated in either direction without
worrying about later having to climb on the roof to unwrap the antenna wires. This was no small comfort on a cold Vermont winter night.
Twenty years after the EMI scanner, Elscint reintroduced the use of multiple detector row CT, but
the rationale at that time was to limit tube heating during helical scanning. After the arrival of slip ring
scanners, many sites were experiencing unwanted scanner shutdowns when performing wide coverage,
helical imaging and that was because continuous scanning would make the X-ray tubes of that time over-
heat. Once that occurred, it required a forced break from imaging to provide time for the X-ray tube to
cool off. This often occurred at inopportune moments, for example while imaging a patient after major
trauma, and there was little precedent since tube overheating had only rarely been encountered when using
CT scanners in the axial mode. This was because the time spent moving the patient between each rota-
tion of the gantry, albeit short, provided enough time for the X-ray tube to cool off. Elscint’s design was
intended to limit tube heating by decreasing the duration of the “tube on” time for the helical scans.
Manufacturers quickly found there were other significant benefits of multidetector scanning, even
after the tube heating problems were minimized by the introduction of X-ray tubes with substantially
more heat capacity. While early multidetector scanners could provide either faster scan times or
thinner slices, as the number of detector rows increased it became possible to provide both. Over the
course of the next decade, scanners would appear with 4, 8, 16, 64, 128, and most recently 320 rows
(Figure 1.14). Keep in mind that multidetector arrays come at a cost, since each detector row still contains
nearly a thousand individual detector elements, and the use of metal dividers between rows to limit
scatter means that these multidetector arrays are heavy, difficult to build, and expensive.
Figure 1.13 A slip ring on a CT scanner (arrows). The contacts fixed on the large plate on the left ride on circular conductive
metal rails that provide power and convey data while the entire gantry freely rotates.
Users need to be aware of exactly how the detector rows are arranged on their scanners since that
can vary among the different manufacturers, and there is almost no way to know their arrangement
intuitively. It is also important to recognize that some scanners provide fewer data channels than the
number of available detector rows. So, a manufacturer may offer a scanner called the "Framostat
40" with only 20 data channels. In that case, you will find that scans can take longer than
expected when using the thinnest detector collimation, because only half of the total detector rows
are active at the smallest detector collimation (Figure 1.15).
The advantage of offering choices for the activation of detector rows is that it gives the user the
options of using either the narrow center detector rows to provide the best detail or using all the
rows for rapid coverage of large anatomic regions. So, keep in mind that your choice of “detector
collimation” is not trivial since it determines not only the scan resolution but also the total number
of detector rows activated, and that has a significant effect on scan time.
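A back-of-envelope estimate shows how large the effect can be. This sketch reuses the hypothetical "Framostat 40" from the text (40 rows, 20 data channels); the scan length, row width, pitch, and rotation time are invented for the example.

```python
def scan_time_s(length_mm, row_mm, active_rows, pitch, rotation_s):
    # Table travel per rotation = active detector coverage times pitch.
    coverage_mm = row_mm * active_rows * pitch
    return (length_mm / coverage_mm) * rotation_s

# Thinnest collimation: only 20 of the 40 rows have data channels.
print(scan_time_s(400, 0.625, 20, 1.0, 0.5))   # 16.0 s
# Wider collimation: rows paired so the 20 channels read 1.25 mm rows.
print(scan_time_s(400, 1.25, 20, 1.0, 0.5))    # 8.0 s, half the time
```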
CT Image Contrast
At the risk of stating the obvious, the shades of gray on a CT image are based on a linear scale of
attenuation values. Wherever the X-rays are significantly absorbed or deflected, i.e., attenuated, by
the tissues, very few X-rays will arrive at the detectors and those corresponding tissues will appear
white on the image. Wherever there is little or no attenuation of the X-ray beam, more X-rays will
arrive at the detectors and those tissues will be represented as black on the CT image. That is why air
Figure 1.14 This fountain pen was placed on the plastic shield in this 320-detector row scanner to provide some perspective to its
width. Using this scanner, the detector array is sufficiently wide to cover the entire head in a single rotation of the gantry.
appears black, bone appears white, and fat and brain are represented as shades of gray in between
(Figure 1.16). This direct correlation of just the single value of X-ray attenuation with gray scale
display differs substantially from magnetic resonance (MR) images, where there are multiple sources
of information displayed on the image, and so a dark area on the image might be attributed to signal
loss from flow, low proton density, or even magnetic susceptibility effects depending on the scan
technique and the anatomic location.
Although CT imaging seems simpler than MR in principle, a number of factors confound our
ability to assign the correct attenuation values to the imaged tissue, and there are many illustrations
of this problem included in the case files. For example, a renal cyst may appear to have higher
attenuation on CT due to pseudo-enhancement (Chapter 8, pitfall 1), or CSF in the sella may be
mistakenly assigned the same attenuation value as fat due to beam hardening (Chapter 5, artifact
6). So, while CT image display seems to be more straightforward than MR imaging, you must fully
understand the many factors that can confound the accuracy of attenuation values displayed on a
CT scan.
Slice Thickness
While early scanners produced choppy images with visibly large pixels, since they used
a matrix of 80 × 80, the in-plane resolution of CT images improved quickly. With each new generation
of CT scanner, pixel size decreased to the current submillimeter standard.
But CT image resolution is determined by voxel size, and that is determined by both the pixel size
and the slice thickness.
Figure 1.15 These two different scanners both have 64 detector rows on their detector arrays but provide different usable scan
widths depending on how the rows are activated. In the top example, the 64 detector rows are each 0.625 mm wide and evenly spaced, and
there is a data channel for each row. This arrangement could offer 0.625 mm detector collimation with a total usable scan width of 4 cm.
In the lower example, there are also 64 rows, but only the center 32 rows are 0.625 mm wide. The remaining 32 rows are all 1.5 mm
wide and arranged as a pair of 16 detector rows on the outside of the array. A scanner with this arrangement would offer only 2 cm of
coverage when using 0.625 mm detector collimation, and that is half that of the upper arrangement.
However, using the center rows in pairs, they would function like an additional 16 1.5 mm rows, and with that arrangement the total
usable array width becomes 48 1.5-mm detector rows. This would provide 7 cm of coverage with each rotation, nearly twice
that of the upper array. So one manufacturer might offer a scanner with the lower arrangement to provide a wider array width for
rapid body or lung imaging, with the option to do finer imaging, like brain CT angiography. However, a CTA using a detector collimation
of 0.625 mm with the lower array would take twice as long as the same scan using the upper array. You need to know how the detector
elements are arranged to correctly design scan protocols on your scanner for different imaging requirements.
Whenever images are created using thick slices, small structures may be obscured because each
voxel is represented by a single attenuation value, and that is determined by the average attenuation
of all the contents. This resembles the presidential primary process for states like Florida. There,
all the delegates are awarded to the overall winner, unlike in New Hampshire, where they are frac-
tionally awarded based on the candidate’s portion of the total vote. For example, if a single voxel
contains both fat (low attenuation) and calcification (high attenuation), the mean attenuation of that
voxel could be exactly the same as normal brain, making both the fat and calcification inapparent
on a CT scan. It is more common to find that a very small, dense calcification that occupies only a
fraction of a voxel will cause the entire voxel to have the attenuation of calcium, and that will result
in an exaggeration of the actual size of the calcification on a CT scan.
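The arithmetic of volume averaging is worth seeing once with numbers. A minimal sketch using representative textbook HU values (illustrative, not from the text):

```python
fat, calcification, brain = -100.0, 170.0, 35.0   # representative HU values

# Half fat + half calcification averages to brain attenuation: both vanish.
print(0.5 * fat + 0.5 * calcification)   # 35.0 HU, same as normal brain

# A dense fleck (about 1000 HU) occupying 20% of a voxel drags the whole
# voxel up toward calcium, exaggerating its apparent size.
print(0.2 * 1000.0 + 0.8 * brain)        # 228.0 HU for the entire voxel
```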
On early single-slice CT scanners it was undesirable to decrease slice thickness for most imaging
tasks because that significantly increased the time required to complete the scan. However, when
Figure 1.16 This patient was lying on an ice bag during the CT exam performed for neck pain. Notice that the ice blocks (arrow) are
darker than the surrounding water. By CT convention, this means that the ice attenuates the X-ray beam less than liquid water. Since
both the liquid water and solid ice have exactly the same molecular composition, this difference in attenuation must be the result of
the slight separation of water molecules as water changes state to crystalline ice. In addition to this high sensitivity of CT imaging to
differences in attenuation, it also provides sufficiently high resolution to show the air (note the dark spots) frozen within the ice.
using CT scanners with multiple detector rows, scan time is for all practical purposes independent
of slice thickness. For example, a scanner with 64 channels using submillimeter slice thickness can
provide faster scans over comparable anatomy than can a four-slice scanner using 5mm slice thick-
ness. This capability of multidetector scanners to provide very thin slice thickness without adding to
scan time has made high quality multiplanar reconstructions commonplace.
The ability to scan with very thin sections has proved to be among the most significant advances of
modern CT imaging. While early CT scanners were capable of providing good quality axial images
when viewed slice by slice, whenever the data were reconstructed into another plane of display the
quality of these reconstructions was surprisingly poor because the slices were so thick. For example, using
a slice thickness of 1cm meant that the depth of each voxel was more than 10 times larger than the
pixel size. These asymmetric voxels resulted in reconstructions with a striking “stair-step” appear-
ance that were of little diagnostic value apart from gross alignment. However, the ability to scan
using cubic or isotropic voxels in which the slice thickness is the same as the pixel size provides
reconstructions in any plane that are equivalent in quality to the images in the acquisition plane
(Figures 1.17, 1.18).
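The voxel geometry is simple to compute. A sketch with illustrative numbers (a 25 cm display field of view and the usual 512 × 512 matrix):

```python
fov_mm, matrix = 250.0, 512
pixel_mm = fov_mm / matrix
print(round(pixel_mm, 2))          # ~0.49 mm in-plane pixel size

print(round(10.0 / pixel_mm))      # 1 cm slices: voxels ~20x deeper than wide
print(round(0.5 / pixel_mm, 1))    # 0.5 mm collimation: ~1.0, i.e., isotropic
```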
Figure 1.17 These drawings show the difference between (A) voxels created using thick CT detector collimation, called anisotropic,
compared with (B) those using very thin detector collimation, called isotropic voxels. Isotropic, or cubic, voxels are created when the
slice thickness is nearly the same dimension as the length of one side of a pixel. For example, when using a 512 × 512 matrix for scan reconstruction,
the detector collimation needs to be less than 1 mm in order to provide cubic voxels. The advantage of creating isotropic voxels is that the
scan reconstructions in any plane (e.g., sagittal, coronal, or oblique) will be nearly equivalent in quality to images in the plane of acquisition.
Illustrations provided by Dr. Rihan Khan, University of Arizona, Department of Radiology.
(A) (B)
Figure 1.18 Sagittal view made from standard 5mm reconstructions (A) and the 0.7mm original scan data (B).
While high-quality reconstructions are routine now for body and neuroimaging, it is important to
consider that when using the thinnest available detector collimation, the signal-to-noise ratio (SNR)
on each slice will be less than that available with either wide detector collimation or thicker slice
reconstruction from narrow detector collimation (Figure 1.19).
If the thin sections are to have the same SNR as thicker sections, the radiation dose for the scan
must be increased. In practice, however, this problem is mitigated because the thin sections are
rarely viewed primarily. By reconstructing the submillimeter data in the desired plane of
section at 3–5 mm slice thickness, the overall SNR is significantly better than that of the thin source images.
The principle of "scan thin, view thick" is the basis of most brain imaging because thin detector
Figure 1.19 Notice that the noise visible in the 0.625mm section (A) becomes less apparent after merging data from multiple detec-
tors together into a thicker slice, here as a 4.5 mm slice (B).
Figure 1.20 Helical imaging requires collecting data from either 180 or 360 degrees of tube rotation so that corresponding views
are available of any structure (note black structure A). However, off-center structures (note black structure B) may only be imaged
once because of the divergence of the X-ray beam necessary for CT scanners with wide detector arrays. This undersampling artifact is
called "partial volume," and it can result in blurring of the margins of that structure. This artifact should not be confused with volume
averaging (see Chapter 7).
collimation also minimizes beam hardening artifacts in the posterior fossa and, for helical imaging,
cone beam and partial volume artifacts (Figure 1.20). But, when considering X-ray dose in this
context, keep in mind that a small increase in dose can provide sufficient image quality for high
quality reconstructions and that will ultimately save patient dose if it eliminates the need for a
second scan. For example, reconstructing axial CT data of the paranasal sinuses into the coronal
plane eliminates the need for direct coronal scanning and thus reduces the total patient
dose for the exam by nearly 50%.
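The SNR arithmetic behind "scan thin, view thick" follows from photon counting: quantum noise scales roughly with the inverse square root of the photons collected per voxel. A rough model (a sketch, not a dose calculator):

```python
import math

def noise_relative_to(slice_mm, reference_mm=5.0):
    """Approximate noise penalty of a thin slice versus a reference slice."""
    return math.sqrt(reference_mm / slice_mm)

print(round(noise_relative_to(0.625), 2))   # ~2.83x noisier than a 5 mm slice
# Merging eight 0.625 mm sections into one 5 mm section averages the noise
# back down by sqrt(8) ~ 2.83, which is essentially what Figure 1.19 shows.
```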
Hounsfield's first CT experiments used a pure algebraic reconstruction (Figure 1.21) to create images.
In fact, it would appear that his first device was basically designed to collect the numbers in a man-
ner best suited to solve the reconstruction formula.
Although effective, algebraic reconstruction proved impractical for two reasons. First, it is very
computationally demanding, and, second, it is impossible to use straight calculations to solve for the
unknowns in an equation when the known values are not quite correct. That is the case with CT
mathematical reconstructions since CT measurements include noise and a whole variety of artifacts.
While there has been renewed interest in pure algebraic reconstruction techniques now that comput-
ers are fast enough to make it feasible, most CT scanners still use a less demanding approach called
back-projection or more accurately filtered back-projection. The scan information can be thought
of as a series of projections rather than a set of numbers (as shown previously in Figure 1.8). This
technique, patented by Gabriel Frank in 1940, was originally proposed as an optical back-projection
technique 30 years before the discovery of CT (Figure 1.22).
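For the curious, the core of the idea fits in a few lines. The sketch below implements plain (unfiltered) back-projection on the sinogram from the earlier acquisition sketch; a real reconstruction would first apply a ramp filter to each projection, which is what makes it "filtered" back-projection.

```python
import numpy as np
from scipy.ndimage import rotate

def backproject(sinogram):
    """Smear each projection back across the image plane and sum."""
    n_angles, width = sinogram.shape
    image = np.zeros((width, width))
    for i in range(n_angles):
        smear = np.tile(sinogram[i], (width, 1))   # constant along each ray
        image += rotate(smear, -i * 180.0 / n_angles, reshape=False, order=1)
    return image / n_angles   # blurry without the ramp filter, but recognizable
```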
To correct for the edge artifacts that are inherent with back-projection, an additional step is added
to improve the quality of the final CT images. This step is called filtering, although that term should
not be confused with the physical act of filtering the X-ray beam to eliminate low-energy
X-rays. There are many filters, also called "kernels" (a term that avoids confusion with metal
X-ray filters), that the user can choose for reconstructing CT images. These range from "soft" filters
that reduce noise at the expense of some image blurring to "sharp" filters used to display bone but
that increase apparent noise. The process of filtering occurs after data acquisition but prior to image
display and cannot be modified by the viewer afterwards. This of course differs from the setting of
window and level used to view the reconstructed images (Figure 1.23).
Upon the introduction of helical scanning, a new method for CT reconstruction was necessary
to allow reconstruction of data acquired in a continuous fashion as the patient moved past the gantry.
 2  3  4  | 9
 1  ?  5  | 9
 7  2  1  | 10
10  8 10  (column sums); 6 (diagonal sum)
Figure 1.21 This 3 × 3 matrix demonstrates simply how one can use the sum of all the rows, columns, and diagonals outside the
matrix to predict the value of the central, unknown, cell. In this simple example, the value of the blank cell in the center is of course
3. Early scanners used an 80 × 80 matrix that required hours of calculations using this algebraic
replaced with back-projection reconstruction techniques largely because they are faster.
Figure 1.22 In this drawing from Gabriel Frank's patent on back-projection, you can see that it was initially intended to be a visual
projection technique. Image B shows the light inside a cylinder that has collected the projections of the revolving object, line by line,
in A. CT now uses a mathematical, not optical, application of this concept for reconstruction. From AG Filler. The history, development
and impact of computed imaging in neurological diagnosis and neurosurgery: CT, MRI, and DTI. doi:10.1038/npre.2009.3267.5
Figure 1.23 These four images illustrate the difference between image filtering and windowing. The image in A is processed with a
soft tissue filter and is displayed at a soft tissue window. The image in B shows the same dataset, now processed with a bone filter
but displayed with the same soft tissue window and level as in A. Notice how much more noise is apparent as a result of this
change in filter.
The image in C was also processed using a bone filter, but it is displayed with a bone window and level. Notice how much detail is now
evident in the skull bones. The image in D shows the scan data displayed with the same bone window and level, but reconstructed
using a soft tissue filter. Notice on this image how the bone edges appear much less sharp than in image C.
These paired images illustrate the balance between edge enhancement and noise that is determined by your choice of filter. Your
choice of filter, also called kernel, will indirectly influence the dose necessary to scan the patient since it is an important factor in your
perception of noise on the images.
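Window and level, by contrast, are a pure display remapping that can be changed at the workstation after reconstruction. The standard mapping is easy to state in code; the HU values below are illustrative.

```python
import numpy as np

def apply_window(image_hu, level, width):
    """Map HU linearly to [0, 1] gray levels; clip values outside the window."""
    lo = level - width / 2.0
    return np.clip((image_hu - lo) / width, 0.0, 1.0)

hu = np.array([-1000.0, 0.0, 35.0, 300.0, 1000.0])  # air .. brain .. bone
print(apply_window(hu, level=40, width=80))     # soft tissue window
print(apply_window(hu, level=400, width=1800))  # bone window
```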
A logical next step in the evolution of cross-sectional imaging would replace the complex multidetector
array with a single flat detector similar to the ones that have replaced image intensifiers used for con-
ventional angiography (Figure 1.26). While there are some similarities in configuration between a wide
multidetector array and a flat-panel detector, there are also some significant differences to consider.
Figure 1.24 When using helical mode for scanning, the X-ray beam trajectory is angled relative to the long axis of the patient, and this angle
increases as pitch increases. In order to assign attenuation values to the voxels that lie between the actual beam paths, the attenuation
values need to be estimated or “interpolated” from known data points. And, the further away those directly measured points reside, the
greater the degree of estimation. In this drawing the numbers that are not circled must be estimated based on known values determined
from the directly measured points that lie on the oblique lines (solid lines).
Figure 1.25 In this drawing there are more known values (circled), so less estimation of the values between the oblique lines
is necessary. Note that the values in between the solid circles are different from those in Figure 1.24. This illustrates why high pitch
helical imaging, since the scan lines are farther apart, will have lower resolution.
Unlike conventional X-ray images, in which both direct and scattered X-rays contribute to the image,
early CT scanners used a relatively narrow X-ray beam that limited the contribution of scattered X-rays
to the final image. As the number of detector rows in the modern CT scanner's detector array increased,
the beam became wide in two directions; its shape now resembles a cone rather than a fan (Figures 1.27,
1.28), since it must diverge from the anode both side-to-side and top-to-bottom.
To prevent scattered X-rays from striking the detectors when using the wide fan beam of a typical
multidetector scanner, the detector arrays incorporate thin metal plates, called septa, between each
detector row. These septa absorb most of the scattered X-rays and are designed to allow only those
X-rays oriented perpendicular to the detector to contribute to the image. While the use of septa
improves image quality, they add weight and complexity to the array and also add to patient dose.
A CT scanner using a flat-plate detector must also have a wide beam in two dimensions to provide
even coverage of the flat panel. The terminology gets somewhat confusing, since the beam shape
used on a multidetector scanner can also be described as a cone beam, but many authors call any CT
device using a flat-panel detector instead of multiple detector rows a "cone beam scanner." But these
flat panel scanners, since the panel does not lend itself well to the use of the septa common to multidetector
scanners, must offer other methods to minimize the deleterious effect of scattered X-rays on
image contrast. The use of a grid, not unlike those used with conventional X-ray films, can improve
image quality but their use again requires an increase in patient dose. For example, as much as 20%
of the total patient dose may be lost in the septa of a multidetector scanner and it is anticipated that
this percentage could be more when using a grid on a flat panel or cone beam scanner.
Figure 1.26 This image of an angiography unit during assembly demonstrates the flat-panel detector (arrows) at the top of the C arm
with its X-ray tube at the bottom.
Figure 1.27 Multidetector scanners use an X-ray beam pattern that resembles the blades of this kitchen tool, used to cut butter into
flour, since it also diverges in two directions.
Figure 1.28 The usual third-generation CT scanner design has the X-ray tube (top, left image) move around the patient accompanied
by the detectors (bottom, left image) that are rigidly attached opposite the tube on the gantry. Viewed from the side, the X-ray beam
on a single-detector scanner is very narrow from head to foot, like a paper fan (middle drawing). However, to accommodate the
multiple detector arrays on modern scanners, the X-ray beam must be wide from head to foot as well as from side to side (far right
drawing). This figure provided by Josef Debbins PhD, Barrow Neurological Institute, Phoenix, Arizona.
These factors are considered in the term dose efficiency, and this measurement is the composite
of both the absorption efficiency and geometric efficiency of the scanner hardware. For example,
the early single-slice scanners had a very high geometric efficiency since almost all the X-rays in the
beam were collected by the single detector row. However, those early scanners had relatively low
absorption efficiency because of the materials then available for the detectors. This has improved so
that modern CT scanners offer a very high absorption efficiency, >90%, but a lower geometric effi-
ciency compared with single-slice scanners. This give-and-take explains the surprising fact that the
patient dose using a single-slice scanner in axial mode may be lower than the dose for an equivalent
CT scan using a modern multidetector scanner in helical mode.
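Expressed numerically, dose efficiency is just the product the text describes. The sketch below uses invented round numbers chosen only to illustrate how a high absorption efficiency can still be undercut by a lower geometric efficiency:

```python
def dose_efficiency(absorption, geometric):
    """Composite efficiency = absorption efficiency x geometric efficiency."""
    return absorption * geometric

single_slice = dose_efficiency(absorption=0.70, geometric=0.99)   # ~0.69
multidetector = dose_efficiency(absorption=0.92, geometric=0.70)  # ~0.64
print(single_slice, multidetector)  # the older design can come out ahead
```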
So, if dose increases and contrast decreases using a flat panel for CT, why bother? One reason is
that flat panel scanners offer the potential for improved resolution compared with multidetector CT.
Another is that a flat panel detector weighs considerably less than a large detector array and that
offers the possibility of faster rotation times. But there is another limiting factor to rotation time
that is rarely considered these days called recovery time. The limit to gantry rotation speed is usually
considered to be the physical limits of spinning a very heavy object at high speeds. But another limit-
ing factor is the time necessary for the detectors to reset after each exposure to the X-ray beam. For
example, there would be no point in spinning the gantry at four rotations a second if it required a
full second for the detectors to return to their baseline state after each exposure. This time necessary
for the detectors to reset, also called afterglow, was a problem with older detector design but is neg-
ligible on modern multidetector scanners. However, flat-panel scanners require more time for
recovery, so even though the gantry can physically spin faster, it won't matter unless faster recovery
times for the detector panel become possible.
Dose constraints and potentially lower contrast, along with complex reconstruction algorithms,
have proved to be obstacles to the commercial development of cone beam CT for the time being. But
this design does offer some advantages, and it deserves our continued attention since it is likely that
many of these problems can be addressed with ongoing development of this technique. Cone beam
CT is currently offered as an option on some angiography units and has proved to be useful in that
setting for problem solving and the management of emergencies during complex interventional
procedures.
Iterative Reconstruction
All current CT scanners use variations of back-projection for image reconstruction. Recently, many
scanner manufacturers began offering variations of mathematical or algebraic reconstruction, usu-
ally called iterative reconstruction (IR), for their scanners. There are two good reasons why. First,
because of the increased utilization of CT, there has been an appropriate emphasis placed on reduc-
ing CT dose. Second, as a result of the relatively low price of supercomputer capabilities, it is now
feasible to perform algebraic reconstructions at acceptable speeds and cost. Early indications suggest
that dose reductions on the order of 50–75% are feasible for body imaging using IR without
significant compromise in image quality.
Many variations on this theme are now provided by vendors of CT equipment. Some versions even
limit noise by accounting for the specific errors in the imaging chain, also called optics. Others,
rather than use a pure mathematical reconstruction, use hybrid techniques that start with the tradi-
tional filtered back-projection but then use a mathematical technique to reduce noise by comparing
that reconstruction to the raw data in an iterative process.
The term “iterative reconstruction” describes a process of revising the image data in order to provide a
“best fit” with the actual scan data. This is done in a continually updating, or iterative, process. I think
of this much like the way one fills in a crossword puzzle (Figure 1.29). The reason most of us use a pencil
to fill out these puzzles is because we may find opportunities to reconsider our response to an “across”
clue once we figure out the "down" clue in that same location. The iterative reconstruction process
works in this simple way: the software takes its best shot at creating the image, then goes back to the
raw data to see how well it did, adjusts a few things, and checks again to see if that fits any better.
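In code, that loop is only a few lines. The sketch below is a generic gradient-style iterative update (a Kaczmarz/ART relative), not any vendor's algorithm; A is an assumed system matrix mapping image pixels to ray sums, and the step size is illustrative.

```python
import numpy as np

def iterative_reconstruct(A, measured, n_iter=100, step=0.1):
    """Guess an image, compare its projections with the raw data, adjust, repeat."""
    guess = np.zeros(A.shape[1])
    for _ in range(n_iter):
        residual = measured - A @ guess     # "go back to the raw data"
        guess += step * (A.T @ residual)    # "adjust a few things"
    return guess                            # a best fit, never an exact solution
```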
One reason why this cannot be easily accomplished in a single, powerful calculation is that the raw
data itself contains errors and noise. As a result, there is no single solution for the calculations, and so
the most that can be hoped for is the creation of a “best fit” for that dataset. Think of it like a crossword
puzzle where, in several spots on the grid, no word can satisfy both the "across" and "down" clues.
While IR can be used to reduce dose or improve image quality at the same dose, it does require special
software and computer hardware, and currently it adds time for processing. Nevertheless, because it
holds considerable promise for significant dose reduction, it will likely be widely adopted in some fashion.
This approach also offers new tools for minimizing streaks that arise from implanted metal.
While some other, less expensive postprocessing options are available that do not refer back to
the raw data in the same way, these should be considered carefully since they present the risk of cre-
ating “pretty” images at the expense of smoothing over clinically important contrast. For example,
a postprocessing algorithm that eliminates noise in homogeneous areas of anatomy could potentially
obscure true but subtle differences in attenuation. But iterative reconstruction combined with large
decreases in dose will without doubt have its own limitations, and it will take some time to validate
all these new techniques in the clinical arena before they can be used with complete confidence.
[Figure 1.29 grids: in A, "BOGART" has been entered at #1 down, crossing the answers "GRETA" and "SOPHIA"; in B, filling in #5 across as "THE KING AND I" forces the first letter of #1 down to "I," and the revised answer becomes "INGRID."]
Figure 1.29 In A, you could choose "Bogart" for #1 down: a six-letter word for a leading actor in the movie Casablanca. But you
would have to revise it when you find that the first letter of the word must be "I" after you fill in #5 across: the title of a film,
nominated for nine Academy Awards, starring Yul Brynner and Deborah Kerr (B).
Most single-slice CT scanners included a mechanism to tilt the scanner gantry relative to the patient
table. This was used on a regular basis to optimize the plane of imaging for axial brain scanning or,
when combined with head tilt, to provide direct coronal images of the brain or sinuses. One substan-
tial benefit to angulation on early scanners was that, by using tilt, one could minimize the number of
slices necessary to cover the brain. And when imaging with CT was considered in terms of “minutes
per slice,” eliminating one slice was not trivial. As scanner speed improved, the primary function of
angulation in brain imaging became reducing dose to the eyes, and it was generally recommended to
exclude them from the scan since they are susceptible to radiation injury.
Now, however, on most scanners in helical mode and those units with large multidetector arrays
or two sources, gantry tilt is not available for brain imaging. In spite of this change in hardware,
it is commonplace to continue to present head CT scans with the traditional angulation since it is
familiar to imagers and it makes comparison with prior CT scans easier.
While gantry tilt combined with patient positioning was once used to provide direct coronal imaging
of the temporal bones and paranasal sinuses, most modern scanners offer near-isotropic voxel
imaging, so direct coronal acquisitions are really no longer necessary. Now, even reconstructions in sagittal
views that were formerly unthinkable are routine. In fact, isotropic voxel imaging has created an
imaging environment that resembles MR since even oblique reconstructions of diagnostic quality are
now available on multidetector scanners in both axial and helical modes (Figure 1.30A and B).
The loss of gantry angulation has created two new problems, however. The radiation dose to the
eye is lowest on those scanners that offer gantry angulation if the user prescribes the scan angle and
range to exclude the orbits. On scanners that do not allow gantry angulation, by contrast, the eyes are
always included in the scan, and the imager may not be aware of this if the
data are reconstructed into the traditional display angle.
So, while the lens is always included on head scans performed on new scanners without gantry
angulation, the measured dose to the eye during direct helical imaging with a modern multislice
scanner is still quite low. This represents another one of the compromises of CT imaging. As scan-
ners enlarged to incorporate multiple detector rows, the tilt option was lost but the potential for
increased dose was offset by more sophisticated automatic exposure control, beam filtering, and
diminished dose from overbeaming with more detector rows (see Chapter 2, Overbeaming). While
the use of automatic exposure control for brain imaging may not otherwise make sense for a roughly
spherical object, it can be worthwhile by providing greater dose reduction to the lens. Another
option to reduce lens dose is to use bismuth X-ray attenuating eyecups, but this adds cost and time
(see Chapter 2, Shielding).

Figure 1.30 A, B This high-quality coronal CT image (A) was reconstructed from the thin-section axial imaging data. Note the small defect
in the bone of the sphenoid sinus (arrow) that corresponds to the site of a CSF leak noted on the coronal T2-weighted MR scan (B, arrow).
The second problem encountered with brain scans performed without gantry tilt is that the user
needs to be attentive to artifacts from hardware in the mouth, such as amalgam, crowns, and
implanted posts. While these were almost never an issue when gantry tilt was used, the metal arti-
facts arising from X-ray shadowing behind these very dense materials frequently project directly
over the posterior fossa and, in some cases, significantly degrade the diagnostic value of the CT scan
(Figure 1.31). One option to minimize this artifact is to instruct cooperative patients to tuck their
chins during the scan. This recreates the traditional imaging angle without requiring gantry angula-
tion and should be helpful in limiting the metal artifacts from teeth and, if carefully done, it offers
the potential for reducing eye dose as well.
Medical practice is at times an odd mix of eager acceptance of new technology and rigid resis-
tance to change in almost every other way. With the arrival of scanners without the capability of
gantry angulation, the only real benefit now to viewing CT brain scans in the old fashion is that the
orientation is familiar to imagers. Straight imaging would in many respects make it easier to com-
pare CT scans with MR scans, since the latter are routinely displayed without angle (Chapter 5,
Artifact 9). But it seems likely that, as more centers move to isotropic imaging of the brain, head
scans will eventually be presented in two or three orthogonal planes for review similar to the way
most body CT images are displayed now.
Figure 1.31 The axial CT scan (A) shows considerable artifact overlying the craniocervical junction without a clear source. The
coronal reconstruction (B) shows that the streaks arise from dental amalgam and project over the skull base in this case
because no gantry tilt was available on this scanner.
PART II
In all that we have said hitherto on the subject of man from without,
we have taken a common-sense view of the material world. We have
not asked ourselves: what is matter? Is there such a thing, or is the
outside world composed of stuff of a different kind? And what light
does a correct theory of the physical world throw upon the process of
perception? These are questions which we must attempt to answer
in the following chapters. And in doing so the science upon which we
must depend is physics. Modern physics, however, is very abstract,
and by no means easy to explain in simple language. I shall do my
best, but the reader must not blame me too severely if, here and
there, he finds some slight difficulty or obscurity. The physical world,
both through the theory of relativity and through the most recent
doctrines as to the structure of the atom, has become very different
from the world of everyday life, and also from that of scientific
materialism of the eighteenth-century variety. No philosophy can
ignore the revolutionary changes in our physical ideas that the men
of science have found necessary; indeed it may be said that all
traditional philosophies have to be discarded, and we have to start
afresh with as little respect as possible for the systems of the past.
Our age has penetrated more deeply into the nature of things than
any earlier age, and it would be a false modesty to over-estimate
what can still be learned from the metaphysicians of the
seventeenth, eighteenth and nineteenth centuries.
What physics has to say about matter, and the physical world
generally, from the standpoint of the philosopher, comes under two
main heads: first, the structure of the atom; secondly, the theory of
relativity. The former was, until recently, the less revolutionary
philosophically, though the more revolutionary in physics. Until 1925,
theories of the structure of the atom were based upon the old
conception of matter as indestructible substance, although this was
already regarded as no more than a convenience. Now, owing
chiefly to two German physicists, Heisenberg and Schrödinger, the
last vestiges of the old solid atom have melted away, and matter has
become as ghostly as anything in a spiritualist seance. But before
tackling these newer views, it is necessary to understand the much
simpler theory which they have displaced. This theory does not,
except here and there, take account of the new doctrines on
fundamentals that have been introduced by Einstein, and it is much
easier to understand than relativity. It explains so much of the facts
that, whatever may happen, it must remain a stepping-stone to a
complete theory of the structure of the atom; indeed, the newer
theories have grown directly out of it, and could hardly have arisen in
any other way. We must therefore spend a little time in giving a bare
outline, which is the less to be regretted as the theory is in itself
fascinating.
The theory that matter consists of “atoms”, i.e. of little bits that
cannot be divided, is due to the Greeks, but with them it was only a
speculation. The evidence for what is called the atomic theory was
derived from chemistry, and the theory itself, in its nineteenth-century
form, was mainly due to Dalton. It was found that there were a
number of “elements”, and that other substances were compounds
of these elements. Compound substances were found to be
composed of “molecules”, each molecule being composed of
“atoms” of one substance combined with “atoms” of another or of the
same. A molecule of water consists of two atoms of hydrogen and
one atom of oxygen; they can be separated by electrolysis. It was
supposed, until radio-activity was discovered, that atoms were
indestructible and unchangeable. Substances which were not
compounds were called “elements”. The Russian chemist
Mendeleev discovered that the elements can be arranged in a series
by means of progressive changes in their properties; in his time,
there were gaps in this series, but most of them have since been
filled by the discovery of new elements. If all the gaps were filled,
there would be 92 elements; actually the number known is 87, or,
including three about which there is still some doubt, 90. The place
of an element in this series is called its “atomic number”. Hydrogen is
the first, and has the atomic number 1; helium is the second, and
has the atomic number 2; uranium is the last, and has the atomic
number 92. Perhaps in the stars there are elements with higher
atomic numbers, but so far none has been actually observed.
The discovery of radio-activity necessitated new views as to
“atoms”. It was found that an atom of one radio-active element can
break up into an atom of another element and an atom of helium,
and that there is also another way in which it can change. It was
found also that there can be different elements having the same
place in the series; these are called “isotopes”. For example, when
radium disintegrates it gives rise, in the end, to a kind of lead, but
this is somewhat different from the lead found in lead-mines. A great
many “elements” have been shown by Dr. F. W. Aston to be really
mixtures of isotopes, which can be sorted out by ingenious methods.
All this, but more especially the transmutation of elements in radio-
activity, led to the conclusion that what had been called “atoms” were
really complex structures, which could change into atoms of a
different sort by losing a part. After various attempts to imagine the
structure of an atom, physicists were led to accept the view of Sir
Ernest Rutherford, which was further developed by Niels Bohr.
In this theory, which, in spite of recent developments, remains
substantially correct, all matter is composed of two sorts of units,
electrons and protons. All electrons are exactly alike, and all protons
are exactly alike. All protons carry a certain amount of positive
electricity, and all electrons carry an equal amount of negative
electricity. But the mass of a proton is about 1835 times that of an
electron: it takes 1835 electrons to weigh as much as one proton.
Protons repel each other, and electrons repel each other, but an
electron and a proton attract each other. Every atom is a structure
consisting of electrons and protons. The hydrogen atom, which is the
simplest, consists of one proton with one electron going round it as a
planet goes round the sun. The electron may be lost, and the proton
left alone; the atom is then positively electrified. But when it has its
electron, it is, as a whole, electrically neutral, since the positive
electricity of the proton is exactly balanced by the negative electricity
of the electron.
The second element, helium, has already a much more
complicated structure. It has a nucleus, consisting of four protons,
and two electrons very close together, and in its normal state it has
two planetary electrons going round the nucleus. But it may lose
either or both of these, and it is then positively electrified.
All the later elements consist, like helium, of a nucleus
composed of protons and electrons, and a number of planetary
electrons going round the nucleus. There are more protons than
electrons in the nucleus, but the excess is balanced by the planetary
electrons when the atom is unelectrified. The number of protons in
the nucleus gives the “atomic weight” of the element: the excess of
protons over electrons in the nucleus gives the “atomic number”,
which is also the number of planetary electrons when the atom is
unelectrified. Uranium, the last element, has 238 protons and 146
electrons in the nucleus, and when unelectrified it has 92 planetary
electrons. The arrangement of the planetary electrons in atoms other
than hydrogen is not accurately known, but it is clear that, in some
sense, they form different rings, those in the outer rings being more
easily lost than those nearer the nucleus.
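As a worked check of this bookkeeping, here are the uranium figures just quoted, set out in modern notation (the arithmetic is standard and is supplied for clarity, not taken from the original text):

```latex
% Uranium, per the figures in the text:
% atomic weight = number of protons in the nucleus;
% atomic number = nuclear protons minus nuclear electrons,
%                 which equals the planetary electrons when unelectrified.
\[
  \text{atomic weight} = 238, \qquad
  \text{atomic number} = 238 - 146 = 92 .
\]
```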
I come now to what Bohr added to the theory of atoms as
developed by Rutherford. This was a most curious discovery,
introducing, in a new field, a certain type of discontinuity which was
already known to be exhibited by some other natural processes. No
adage had seemed more respectable in philosophy than “natura non
facit saltum”, Nature makes no jumps. But if there is one thing more
than another that the experience of a long life has taught me, it is
that Latin tags always express falsehoods; and so it has proved in
this case. Apparently Nature does make jumps, not only now and
then, but whenever a body emits light, as well as on certain other
occasions. The German physicist Planck was the first to
demonstrate the necessity of jumps. He was considering how bodies
radiate heat when they are warmer than their surroundings. Heat, as
has long been known, consists of vibrations, which are distinguished
by their “frequency”, i.e. by the number of vibrations per second.
Planck showed that, for vibrations having a given frequency, not all
amounts of energy are possible, but only those having to the
frequency a ratio which is a certain quantity h multiplied by 1 or 2 or
3 or some other whole number, in practice always a small whole
number. The quantity h is known as “Planck’s constant”; it has turned
out to be involved practically everywhere where measurement is
delicate enough to know whether it is involved or not. It is such a
small quantity that, except where measurement can reach a very
high degree of accuracy, the departure from continuity is not
appreciable.[7]

[7] The dimensions of h are those of “action”, i.e. energy multiplied by time, or moment of momentum, or mass multiplied by length multiplied by velocity. Its magnitude is about 6.55 × 10⁻²⁷ erg secs.
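In modern notation, the rule Planck demonstrated can be stated compactly; the formula below is the standard textbook form, supplied here for clarity rather than taken from the text itself:

```latex
% For a vibration of frequency \nu, only whole-number multiples
% of h times \nu are possible energies:
\[
  E_n = n\,h\,\nu, \qquad n = 1, 2, 3, \ldots
\]
% with Planck's constant as the footnote gives it,
% h \approx 6.55 \times 10^{-27} erg seconds
% (the modern value is closer to 6.63 \times 10^{-27}).
```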