Condensed-Matter and Materials Physics: Basic Research For Tomorrow's Technology (1999)
DETAILS
324 pages | 6 x 9 | PAPERBACK
ISBN 978-0-309-06349-4 | DOI 10.17226/6407
CONTRIBUTORS
Committee on Condensed-Matter and Materials Physics, National Research Council
This PDF is protected by copyright and owned by the National Academy of Sciences; unless otherwise
indicated, the National Academy of Sciences retains copyright to all materials in this PDF with all rights
reserved.
Condensed-Matter and Materials Physics
Basic Research for Tomorrow’s Technology
NOTICE: The project that is the subject of this report was approved by the Governing Board of the
National Research Council, whose members are drawn from the councils of the National Academy of
Sciences, the National Academy of Engineering, and the Institute of Medicine. The members of the
committee responsible for the report were chosen for their special competences and with regard for
appropriate balance.
The National Academy of Sciences is a private, nonprofit, self-perpetuating society of distin-
guished scholars engaged in scientific and engineering research, dedicated to the furtherance of
science and technology and to their use for the general welfare. Upon the authority of the charter
granted to it by the Congress in 1863, the Academy has a mandate that requires it to advise the federal
government on scientific and technical matters. Dr. Bruce Alberts is president of the National
Academy of Sciences.
The National Academy of Engineering was established in 1964, under the charter of the Na-
tional Academy of Sciences, as a parallel organization of outstanding engineers. It is autonomous in
its administration and in the selection of its members, sharing with the National Academy of Sciences
the responsibility for advising the federal government. The National Academy of Engineering also
sponsors engineering programs aimed at meeting national needs, encourages education and research,
and recognizes the superior achievements of engineers. Dr. William A. Wulf is president of the
National Academy of Engineering.
The Institute of Medicine was established in 1970 by the National Academy of Sciences to
secure the services of eminent members of appropriate professions in the examination of policy
matters pertaining to the health of the public. The Institute acts under the responsibility given to the
National Academy of Sciences by its congressional charter to be an adviser to the federal government
and, upon its own initiative, to identify issues of medical care, research, and education. Dr. Kenneth
I. Shine is president of the Institute of Medicine.
The National Research Council was organized by the National Academy of Sciences in 1916 to
associate the broad community of science and technology with the Academy’s purposes of furthering
knowledge and advising the federal government. Functioning in accordance with general policies
determined by the Academy, the Council has become the principal operating agency of both the
National Academy of Sciences and the National Academy of Engineering in providing services to the
government, the public, and the scientific and engineering communities. The Council is administered
jointly by both Academies and the Institute of Medicine. Dr. Bruce Alberts and Dr. William A. Wulf
are chairman and vice chairman, respectively, of the National Research Council.
This project was supported by the Department of Commerce under Contract No.
50SBNB5C8819, the Department of Energy under Contract No. DE-FG02-96-ER45613, and the
National Science Foundation under Grant No. DMR-9632837. Any opinions, findings, conclusions,
or recommendations expressed in this publication are those of the author(s) and do not necessarily
reflect the views of the organizations or agencies that provided support for the project.
Front cover: A scanning-tunneling microscope image that shows the wave nature of electrons con-
fined in a “quantum corral” of 48 individually positioned atoms. See page 233. (Courtesy of IBM
Research.)
Additional copies of this report are available from National Academy Press, 2101 Constitution
Avenue, N.W., Lockbox 285, Washington, D.C. 20055; (800) 624-6242 or (202) 334-3313 (in the
Washington metropolitan area); Internet, http://www.nap.edu; and
Board on Physics and Astronomy, National Research Council, HA 562, 2101 Constitution Avenue,
N.W., Washington, DC 20418
COMMITTEE ON CONDENSED-MATTER AND MATERIALS PHYSICS
Preface
In the spring of 1996, the National Research Council’s Board on Physics and
Astronomy established the Committee on Condensed-Matter and Materials Phys-
ics to prepare a scholarly assessment of the field as part of the new survey of
physics, Physics in a New Era, that is now in progress. This assessment has five
objectives.
ers in the field as well as leading policy makers from government, industry, and
universities. The committee met several times to plan its work, debate the issues,
and formulate its report. An early output of the study was the report The Physics
of Materials: How Science Improves Our Lives, a short, colorful, and easy-to-
read pamphlet illustrating how research in the field affects our daily lives. The
committee generated several progress reports and held public forums at materi-
als-related meetings of the American Physical Society and the Materials Re-
search Society. The committee also sought input from the general science and
engineering communities. We are particularly grateful to our colleagues in biol-
ogy, chemistry, and materials and electrical engineering for their support and
help in carrying out this study.
The committee would like to thank Donald C. Shapero, Daniel F. Morgan,
and Kevin D. Aylesworth from the Board on Physics and Astronomy for their
efforts throughout the course of this study. Special thanks also to Arthur
Bienenstock, who served on the committee until the fall of 1997, when he as-
sumed responsibilities at the Office of Science and Technology Policy. The
committee gratefully acknowledges the contributions of the following individuals
who provided material or particular advice that influenced its study: David
Abraham, Eric J. Amis, Bill Appleton, Meigan Aronson, David Aspnes, John
Axe, Arthur P. Baddorf, Samuel Bader, A. Balazs, N. Balsara, Troy Barbee, F.
Bates, Bertram Batlogg, Robert Behringer, Jerzy Bernholc, Arthur Bienenstock,
Jörg Bilgram, Howard Birnbaum, Stephen G. Bishop, Steve Block, Lynn A.
Boatner, Eberhardt Bodenschatz, Greg Boebinger, William Boettinger, Bill
Brinkman, R. Bubeck, David Cannell, Federico Capasso, G. Slade Cargill, John
Carruthers, Robert Cava, Robert Celotta, David Ceperley, Paul Chaikin, Albert
Chang, S.S. (Leroy) Chang, Eric Chason, Daniel Chemla, Shiyi Chen, S. Cheng,
B. Chmelka, Alfred Cho, John R. Clem, Daniel Colbert, Piers Coleman, George
Crabtree, George Craford, Harold Craighead, Roman Czujko, Elbio Dagatto,
Adriaan de Graaf, Satyen Deb, Patricia Dehmer, Cees Dekker, David DiVincenzo,
Russ Donnelly, Robert Doremus, J. Douglas, Mildred S. Dresselhaus, Bob
Dunlap, J. Dutcher, Bob Dynes, Robert Eisenstein, Chang-Beom Eom, Evan
Evans, Ferydoon Family, Matthew P.A. Fisher, Zachary Fisk, Paul Fleury, Mike
Fluss, Judy Franz, Jean Fréchet, Glenn Fredrickson, Hellmut Fritsche, William
Gallagher, E. Giannelis, Allen M. Goldman, Jerry Gollub, Matt Grayson, P.
Green, G. Grest, Peter Grüter, Richard Hake, Thomas Halsey, Donald Hamann,
Christopher Hanna, Bill Harris, Beverly Hartline, Kristl Hathaway, Lance
Haworth, Frances Hellman, George Hentschel, Jan Herbst, Pierre Hohenberg,
Susan Houde-Walter, Evelyn Hu, Robert Hull, David Huse, Eric Isaacs, Nikos
Jaeger, Adam B. Jaffe, Sungho Jin, David Johnson, James Jorgensen, Malvin H.
Kalos, A. Karin, Marc Kastner, Efthimios Kaxiras, Jeffrey Koberstein, Carl C.
Koch, Kei Koizumi, J. Kornfield, Mark Kryder, Max Lagally, David V. Lang,
Robert Laudise, G. Leal, Manfred Leiser, Ross Lemons, Joseph Levitzky, Peter
Levy, David Litster, T. Lodge, Gabrielle Long, Steven Louie, Michael
Acknowledgment of Reviewers
This report has been reviewed by individuals chosen for their diverse per-
spectives and technical expertise, in accordance with procedures approved by the
National Research Council’s (NRC’s) Report Review Committee. The purpose
of this independent review is to provide candid and critical comments that will
assist the authors and the NRC in making the published report as sound as
possible and to ensure that the report meets institutional standards for objectivity,
evidence, and responsiveness to the study charge. The contents of the review
comments and the draft manuscript remain confidential to protect the integrity of
the deliberative process. We wish to thank the following individuals for their
participation in the review of this report:
Although the individuals listed above have provided many constructive com-
ments and suggestions, the responsibility for the final content of this report rests
solely with the authoring committee and the NRC.
Contents
Executive Summary
Overview
Introduction
A New Era
The Science of Modern Technology
New Materials and Structures
Novel Quantum Phenomena
Nonequilibrium Physics
Complex Fluids and Macromolecular and Biological Systems
New Tools for Research: From the Benchtop to the National Laboratory
Findings and Recommendations
Research Infrastructure
Major Facilities
Partnerships
Education
Research Themes
Turbulence
Processing and Performance of Structural Materials: Metallurgical Microstructures
Processing and Performance of Structural Materials: Solid Mechanics
Brittle and Ductile Solids
Instabilities in Dynamic Fracture
Polymers and Adhesives
Friction
Granular Materials
Length Scales, Complexity, and Predictability
Further Prospects for the Future
Nonequilibrium Phenomena in the Quantum Domain
Nonequilibrium Phenomena in Biology
Future Directions and Research Priorities
Condensed-Matter and Materials Physics
Executive Summary
Overview
INTRODUCTION
Condensed-matter and materials physics is the branch of physics that studies
the properties of the large collections of atoms that compose both natural and
synthetic materials. The roots of condensed-matter and materials physics lie in
the discoveries of quantum mechanics in the early part of the twentieth century.
Because it deals with properties of matter at ordinary chemical and thermal
energy scales, condensed-matter and materials physics is the subfield of physics
that has the largest number of direct practical applications. It is also an intel-
lectually vital field that is currently producing many advances in fundamental
physics.
Fifty years ago the transistor emerged from this area of physics. High-
temperature superconductivity was discovered by condensed-matter physicists,
as were the fascinating low-temperature states of superfluid helium. Scientists in
this field have long-standing interests in electronic and optical properties of
solids and all aspects of magnetism and magnetic materials. They investigate the
properties of glasses, polymeric materials, and granular materials as well as
composites, in which diverse constituents are combined to produce entirely new
substances with novel properties.
Condensed-matter and materials physics has played a key role in the techno-
logical advances that have changed our lives so dramatically in the last 50 years.
Driven by discoveries in condensed-matter and materials physics, these advances
have brought us the integrated circuit, magnetic resonance imaging (MRI), low-
loss optical fibers, solid-state lasers, light-emitting diodes, magnetic recording
disks, and high-performance composite materials. These in turn have led to the
A NEW ERA
The world of condensed-matter and materials physics is entering a new era.
Extraordinary advances in instrumentation are providing access to the world of
atoms and molecules on an unprecedented scale. Powerful new experimental
tools, from national synchrotron and neutron facilities to bench-scale atomic-
probe microscopes, are opening up new windows for visualizing and manipulat-
ing materials on the atomic scale. Applications range from nanofabrication of
electronic devices to probing the secrets of superconductivity and protein folding.
These changes are far-reaching. Many research areas, previously inaccessible,
are yielding to new and unanticipated advances in atomic-scale synthesis, charac-
terization, and visualization.
Advances in computational power have made it possible, for the first time, to
simulate the behavior of complex materials systems and large assemblies of atoms.
As a result, numerical simulation is approaching parity with laboratory experiments
and analytic theory in many areas of condensed-matter and materials physics re-
search. Based on benchmarks provided by experimentation, and enlightened by a
proper consideration of theory, the new computational tools provide synergy to
accelerate the understanding of ever more-complex systems. Again, this is a quali-
tative change—with each new generation of computational power, opportunities
are emerging that could only be imagined a few years earlier.
The combined power of the new experimental tools and computational ad-
vances is having an enormous impact on condensed-matter and materials phys-
ics, particularly in those areas where the ability to span length scales from the
atomic to macroscopic is of fundamental importance, that is, where the properties
of atoms and molecules—especially quantum phenomena—become relevant to
large-scale phenomena. This new capability to span length scales is bringing the
world of atoms and molecules closer to the world of our experience, from the
mysteries of quantum mechanics, to the mechanical properties of materials, to the
self-assembly of biological systems. Many of these problems, which underlie
technological innovation and revolution, could not have been addressed on a
fundamental basis even a few years ago.
The developments described in this report present a condensed-matter and
materials physics profoundly different from what it has been at any other time in
history. The ability to control and manipulate atoms, to observe and simulate
collective phenomena, to treat complex materials systems, and to span length
scales from atoms to our everyday experience, provides opportunities that were
not even imagined a decade ago. These developments underlie current progress
in condensed-matter and materials physics and provide tremendous optimism for
the future vitality of the field. They also underlie a new unity in science. Ad-
vances in condensed-matter and materials physics increasingly interface with and
relate to nearly all areas of science and engineering, including atomic and mo-
lecular physics, particle physics, materials science, chemistry, biology, and com-
putational sciences. The next decade will bring extraordinary benefits from this
unity, especially as the new capabilities in condensed-matter and materials phys-
ics bridge the gap between physics and biology, revealing the molecular-physics
basis of biological phenomena.
scatter light in these fibers had to be understood in great detail before it was
possible to develop today’s low-loss fibers that enable a signal to travel 800 km
(500 miles) before it is attenuated. Modern optical fibers are so perfect that the
dominant loss mechanism is the scattering of light by density fluctuations frozen
in when the fiber is cooled to become a glass. Fiber-optic amplifiers have been
developed that greatly extend the reach of these systems. A great challenge is to
develop an inexpensive interface that will bring fiber information links to the
home user. Optics enables data storage in compact disks, in the new high-
capacity digital video disks (DVDs), and high-density storage systems now in
development. To continue this progress, materials scientists must develop com-
mercially manufacturable solid-state lasers that emit blue light. Optics is also
essential to display, printing, and copying technologies. Optics is playing an
increasingly important role in the telecommunications revolution. Further re-
search into the properties of materials for fiber-optic amplifiers, fast optical
switches, and many other optical technologies will be necessary for future ad-
vances in information technologies.
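A rough worked example shows how such loss figures translate into reach (the attenuation value is a typical one for modern silica fiber near 1.55 µm, used here for illustration rather than taken from this report). With loss quoted in decibels per kilometer, the received power falls off as

$$ P(L) = P_0 \, 10^{-\alpha L/10}, \qquad \alpha \approx 0.2\ \mathrm{dB/km}, $$

so a 100-km span costs 20 dB (a factor of 100 in power), and links of many hundreds of kilometers are practical only with the fiber-optic amplifiers described above.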
Magnetism has presented physics with some of its most challenging theoreti-
cal problems and also with some of its most important applications. Driven by the
need for progressively more data-storage capacity, the science of magnetism has
yielded new technology for the devices that read and write data on computer disk
drives. Using the recently discovered phenomenon of giant magnetoresistance,
technologists have found ways to fabricate read/write heads that have allowed
development of these devices to keep pace with rapid improvements in integrated
circuitry. Progress in magnetic materials has also yielded a new class of small
motors and new transformer core materials for power distribution. And magneto-
electronic devices have moved from the laboratory to applications with amazing
speed. Among the sensors based on these new technologies are superconducting
quantum interference devices (SQUIDs) that enable the detection of minuscule
magnetic fields emitted by the human brain and heart, and magnetic force micro-
scopes that can image magnetic properties with nearly atomic-scale resolution.
As integrated circuitry becomes ever smaller and closer to the realm of
quantum physics, new ways of constructing the logic functions that are the build-
ing blocks of these circuits may be discovered. It may even become possible to
develop a new form of logic circuitry that exploits the properties of the strange
world of quantum mechanics. Until then, a number of challenges face the manu-
facturers of information systems. The tiny aluminum (and recently, copper) lay-
ers that connect the logic devices and their insulating glassy sheaths may have to
be replaced with something better in order to continue increasing speed. Optical
interconnections may play a role. Will some new way of producing digital
switches, the building blocks of computer logic, emerge? Can optical technology
be developed so that the essential functions of the communications network (such
as switching, now carried out electronically) can all be carried out with optical
devices? Will the key information technologies be reducible to the atomic-size
scale? Can the complex physics and chemistry of current and future materials be
mastered to enable us to meet this grand challenge?
because of its potential application to read data from a new generation of ultra-
high-density magnetic storage disks.
Until recently, carbon was thought to exist in only two crystal structures—
diamond and graphite. The discovery of new crystalline structures, generically
called “buckminsterfullerenes,” was a great surprise. Variations of these struc-
tures can have amazing properties, including forms with a tensile strength 100
times that of steel.
The name “buckminsterfullerene” was chosen because the structure of the
first one discovered resembles that of a geodesic dome. Their properties depend
primarily on this special shape. The structure can be imagined by starting with a
two-dimensional hexagonal lattice of carbon atoms found in graphite. If penta-
gons are substituted for some of the hexagons, the surface develops positive
curvature and can be made to close on itself, forming a soccer-ball structure (a
“buckyball”) or various other possible shapes, including tubes. These tubes can
vary from metallic to semiconducting, depending on their geometry. By exposing
fullerene molecules (C60) to alkali or alkaline-earth metal vapors, organic super-
conductors can be prepared.
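The fact that pentagons, and a fixed number of them, are needed to close the sheet follows from Euler's polyhedron formula; the short count below is a standard argument consistent with the structures described above. For a closed cage of p pentagons and h hexagons in which three rings meet at every carbon atom,

$$ V - E + F = 2, \qquad F = p + h, \qquad E = \tfrac{1}{2}(5p + 6h), \qquad V = \tfrac{1}{3}(5p + 6h), $$

which forces p = 12 for any such cage, whatever the number of hexagons. Setting V = 60 then gives h = 20, the 12-pentagon, 20-hexagon soccer-ball structure of C60.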
These examples have all been startling breakthroughs. But amazing out-
comes can also result from the steady, evolutionary development of the proper-
ties of materials as some property is refined past previous technological barriers.
For example, steady improvements in the purity of semiconductor materials used
in high-frequency applications such as cellular phones eventually led to the fab-
rication of quantum-dot structures. These structures, fabricated on the quantum-
size domain, have energy states similar to those of atoms but with optical and
electronic properties that can be tailored for a wide variety of applications.
Several themes and challenges are apparent—the role of molecular geometry
and reduced dimensionality, the synthesis and processing and understanding of
more complex materials, tailoring the composition and structure of materials on
very small scales, and incorporation of new materials and structures in existing
technologies. Progress in these areas holds the promise of further startling break-
throughs, yielding materials with unexpected and useful properties and extending
the understanding of condensed-matter and materials physics.
TABLE O.1 Nobel Prizes Awarded for Research Related to Condensed-Matter and Materials Physics Since 1986

Year | Field | Citation | Laureates
1986 | Physics | For design of the first electron microscope (Ruska) and the scanning-tunneling microscope (Binnig and Rohrer) | Ernst Ruska, Gerd Binnig, and Heinrich Rohrer
1987 | Physics | For discovery of superconductivity in ceramic materials | Johannes Georg Bednorz and Karl A. Müller
1991 | Physics | For discovery of methods for studying order phenomena in complex forms of matter, particularly liquid crystals and polymers | Pierre-Gilles de Gennes
1994 | Physics | For development of neutron-scattering techniques for studies of condensed matter | Clifford G. Shull and Bertram N. Brockhouse
1996 | Chemistry | For the discovery of fullerenes | Harold Kroto, Robert Curl Jr., and Richard E. Smalley
1996 | Physics | For the discovery of superfluidity in helium-3 | David M. Lee, Douglas D. Osheroff, and Robert C. Richardson
1997 | Physics | For development of methods to cool and trap atoms with laser light | Steven Chu, Claude Cohen-Tannoudji, and William D. Phillips
1998 | Chemistry | For development of the density-functional theory (Kohn) and computational methods in quantum chemistry (Pople) | Walter Kohn and John A. Pople
1998 | Physics | For discovery of a new form of quantum fluid with fractionally charged excitations | Robert B. Laughlin, Horst L. Störmer, and Daniel C. Tsui
NONEQUILIBRIUM PHYSICS
Nonequilibrium physics is the study of systems that are out of balance with
their surroundings. They may be changing their states as they are heated or
cooled, deforming as a result of external stresses, or generating complex or even
chaotic patterns in response to forces imposed on them. Examples include water
flowing under pressure through a pipe, a solid breaking under stress, or a snow-
flake forming in the atmosphere.
Understanding nonequilibrium phenomena is of great practical importance
in such diverse areas as optimizing manufacturing technologies, designing
energy-efficient transportation, processing structural materials, or mitigating the
damage caused by earthquakes. At the same time, the theory of nonequilibrium
phenomena contains some of the most challenging and fundamental problems in
physics. A central theme in this field is that the physics of ordinary materials and
processes is a rich source of inspiration for basic research.
Because nonequilibrium physics touches on such a wide range of different
areas of science and technology, it is an important channel through which physics
makes contact with other disciplines. For example, its concepts help explain
Electron microscopes, which use beams of electrons to probe the sample, are
able to penetrate below the surface. They have much higher resolution than
optical microscopes because of the shortness of the electron wavelength. Instru-
ments with 1-Å resolution have been demonstrated. Development of electron
microscopes has occurred primarily in Europe and Japan. These instruments
show great promise for reconstructing three-dimensional structures of biologi-
cal interest as well as for studying the properties of amorphous and disordered
materials.
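The resolution advantage can be made quantitative with the de Broglie relation (a standard estimate, not a figure from this report). An electron accelerated through a potential V has wavelength

$$ \lambda = \frac{h}{\sqrt{2 m_e e V}} \approx \frac{1.23\ \mathrm{nm}}{\sqrt{V/\mathrm{volt}}}, $$

so a 100-kV instrument operates at roughly 0.04 Å, some five orders of magnitude below visible wavelengths; the demonstrated 1-Å resolution is therefore limited by lens aberrations rather than by the wavelength itself.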
We can be sure that improved brightness, spectroscopy, and resolution in
electron microscopes will allow more precise determination of structure and
composition in ultra-small volumes of materials, even when these are embedded
below the surface. This will have a continuing major impact on the microstruc-
tural study of all materials, for example, identifying interfacial structures, solving
important problems of support effects on small clusters, and understanding the
structural basis of adhesion and fracture in materials. These new capabilities will
require a significant reinvestment in infrastructure and increased investment in
instrumental research and development.
An equally important theme is that, to a growing degree, significant ad-
vances in a number of sciences depend on large national facilities. The United
States has been particularly strong in the development of synchrotron radiation
sources. These sources depend on the fact that when an electron is accelerated, it
gives off light. The most sophisticated (third-generation) synchrotron radiation
source in the United States is the Advanced Photon Source (APS) at Argonne
National Laboratory. The APS uses devices called “undulators” that wiggle an
electron beam by passing it through an array of powerful magnets to generate
tunable beams of very high intensity x-rays. The power and controllability of the
x-rays from the APS have made possible a new generation of experiments that
have resolved structures with unprecedented precision.
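The tunability of undulator radiation follows from the standard undulator equation; the sample parameters below are representative of a third-generation source, not APS specifications. On axis, the nth harmonic appears at

$$ \lambda_n = \frac{\lambda_u}{2 n \gamma^2}\left(1 + \frac{K^2}{2}\right), \qquad K = \frac{e B_0 \lambda_u}{2 \pi m_e c}, $$

where λu is the magnet period, γ is the electron beam's Lorentz factor, and the deflection parameter K is set by the peak magnetic field B0. For a 7-GeV beam (γ ≈ 1.4 × 10⁴), λu ≈ 3 cm, and K ≈ 1, the fundamental falls near 1 Å, hard x-rays of about 10 keV, tunable by changing the magnet gap and hence K.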
The technology now being developed for a fourth generation of light sources
may be eight orders of magnitude more powerful than even the APS and will
have pulse lengths less than a picosecond. These devices, called “free electron
lasers,” use undulators configured so that the radiation given off bathes the elec-
tron beam and stimulates further emission of radiation in a process closely analo-
gous to the operation of a laser. If past experience is a guide, the greater intensity
and coherence that these devices will one day offer will lead to new classes of
experiments and new insights into the structure of materials.
Synchrotron radiation is being used to conduct research in a number of areas.
Inelastic x-ray scattering has provided unique information about the dynamics of
fluids and glasses. Photoemission experiments have provided much information
about the electronic structure of solids, which is essential to understanding the
physical details of the operation of semiconductor devices and integrated circuits.
X-ray studies of disordered systems have given insight into inorganic glasses and
biological structures. Information can be obtained about chemical states and
One of the key factors that enabled these results was that both the physics
community and the sponsoring agencies (primarily the Department of Energy)
adopted a philosophy that synchrotron facilities are national resources that should
be designed and implemented to serve all the branches of science. That has meant
investing resources in making the machines and experimental facilities reliable,
predictable, and easily used by researchers unfamiliar with accelerator facilities.
It has meant providing the human infrastructure necessary to support such users.
It has meant husbanding the special institutional arrangements necessary to make
such users successful. Nonphysicist users are now in the majority at most syn-
chrotron facilities.
Neutron scattering is a technique particularly sensitive to spin states and
low-atomic-weight atoms. It is therefore particularly well suited for the study of
magnetism, high-Tc materials, polymers, and biological materials. The major
research facilities for neutron scattering in the United States have been the De-
partment of Energy’s high-flux reactors at Brookhaven National Laboratory and
at Oak Ridge National Laboratory, the Department of Commerce’s reactor at the
the U.S. high-field magnet research through the establishment of the National
High Magnetic Field Laboratory.
The establishment of third-generation synchrotron light sources at Lawrence
Berkeley National Laboratory and at Argonne National Laboratory and the deci-
sion to construct the new Spallation Neutron Source (SNS) have dominated the
decade as far as large national facilities are concerned. It is particularly crucial to
move forward with construction of the SNS to make the United States competi-
tive in neutron-scattering studies. Once the SNS is commissioned at Oak Ridge
National Laboratory, U.S. researchers can begin the process of recapturing the
lead in the use of neutrons to study structure and spins in superconductors, poly-
mers, and other materials.
We can confidently predict a rapid growth in knowledge and understanding
of biological materials and living organisms resulting from the exploitation of the
Advanced Photon Source by the biology community. That progress will be accel-
erated once the SNS becomes operational. The extensive deployment of various
microscopies in many laboratories can be expected to enable great strides in
understanding surface physics.
The rapid pace of development of research at various scales has depended
on accelerator science and research on the physics of smaller instrumentation.
This work has had to be parasitic on various enterprises, despite the fact that
advances in scientific equipment propel science forward in great leaps. How
much better could we exploit the leverage that instrumentation research has if
we were to recognize it as an important enterprise worthy of planned invest-
ment! The institutional frameworks for such investments clearly depend on
scale, but there are natural environments at each level. Expertise in tomorrow’s
beam physics, for example, partly resides at the major centers of high-energy
physics research, but development for low-energy applications will likely
occur elsewhere. The materials research science and engineering centers are
one set of obvious potential homes for intermediate-scale instrumentation
development.
Research Infrastructure
The United States has a strong foundation of research groups and small-scale
centers located in universities and government laboratories. Centers play an es-
sential role in a number of areas including microcharacterization, processing,
synthesis, and state-of-the-art instrumentation development. Research groups and
centers are a crucial reservoir of expertise. They also play an important institu-
tional role by providing a meeting ground for research and development person-
nel in industry and students and researchers in universities and government labo-
ratories. Centers bring together the problem-definition capabilities of industry
with the educational role of the universities and the research missions of govern-
ment laboratories. As a result, leading-edge research capabilities are applied to
important areas of microcharacterization, processing, synthesis, and instrumenta-
tion. Centers also bring a long-term commitment to applying intellectual excel-
lence to research problems and to developing expertise in the next generation of
researchers in these essential areas of study.
The role that small-scale centers now play has also been fostered in major
industrial research laboratories and by the research strategy of the Depart-
ment of Defense. But the burden now falls much more heavily on research groups
Major Facilities
The emergence of national synchrotron and neutron facilities has revolution-
ized our understanding of the atomic-scale structure and dynamics of materials.
The nation is fortunate to have world-class facilities for synchrotron research.
However, the situation is strikingly different for neutrons, where we find our-
selves with fewer facilities than those judged inadequate by national review
committees more than a decade ago. Many of the advances in structural biology,
polymers, magnetic materials, and superconductivity depend on access to state-
of-the-art neutron-scattering facilities. Without a new neutron source, the nation
cannot be competitive in these and other areas of enormous scientific and techno-
logical significance. This is an urgent and immediate need, and the committee
strongly recommends construction of the Spallation Neutron Source (SNS). Up-
grades at existing neutron-scattering facilities are also essential to sustaining
neutron-scattering research in the United States during SNS construction as well
as to strengthening the field and providing broad access to the user community.
Over the past decade there has been an explosion in the use of synchrotron
facilities. A great success of these facilities has been the rapid growth in their use
across the broad spectrum of science. At national synchrotron facilities biolo-
gists are attacking the structure of biological molecules, chemists are improving
drug designs, and environmental scientists are following the migration of envi-
Partnerships
Condensed-matter and materials physics is becoming increasingly interre-
lated with other fields of science and technology, with important links to many
disciplines including other branches of physics, chemistry, materials science,
biology, and engineering. At the same time, the field has advanced to the point
where it is often impractical and sometimes impossible to assemble in one place
all of the intellectual resources and specialized equipment for a given research
project. Continued progress in the field depends on establishing effective part-
nerships across disciplines and among universities, government laboratories, and
industry. These partnerships enable cross-disciplinary research, leverage re-
sources, and provide awareness of technological drivers and potential applica-
tions. The extraordinary scientific and technological success of the major indus-
trial laboratories over the past half-century resulted from their ability to integrate
long-term fundamental research, cross-disciplinary teams involving experimen-
talists and theorists, materials synthesis and processing, and a strategic intent.
Virtual elements of this fertile ground exist in potential partnerships among uni-
versities, government laboratories, and industry. Federal R&D agencies should
encourage partnerships that recreate this environment in appropriate subfields of
condensed-matter and materials physics.
Education
Intellectual capital is probably the single most important investment for
science and technology. Intellectual capital in condensed-matter and materials
physics occupies a special place in the national economy, underpinning many of
the technological advances that drive economic growth. The U.S. system of
graduate education, research universities, government and industrial laboratories,
and national facilities for condensed-matter and materials physics is a major
reason for rapid progress in research and technological applications. Maintaining
this progress requires continued commitment to strengthening these institutions.
In addition, condensed-matter and materials physics must play a crucial role in
engaging undergraduates in research and improving their understanding of sci-
ence and technology. Making investments to develop the human capital essential
for leadership in condensed-matter and materials physics and related technolo-
gies will pay rich dividends to the nation. Successful accomplishment of these
objectives will also help the larger field of physics to adjust to a new role in
which economic security becomes the dominant justification for national invest-
ments in research.
RESEARCH THEMES
Throughout this study the themes of new experimental and computational
capabilities, the ability to address problems of increasing complexity, and the
importance of relationships with other fields pervade the subdisciplines of
condensed-matter and materials physics (see Box O.1). These themes provide a
sense of vitality and optimism for the future of condensed-matter and materials
physics. Maintaining scientific excellence, a long-term perspective, and a world-
class environment for research are essential. Investing in facilities, encouraging
partnerships, integrating research and education, and encouraging discovery are
critical elements. But where is the field headed? Although it is often dangerous to
predict the future in science, the committee identified 10 areas that span and
underpin the subdiscipline-specific scientific priorities of condensed-matter and
materials physics as described in the body of this report. These areas, listed in
Box O.1, encompass the committee’s view of the high-level strategic priorities
that have emerged from the internal dynamics of the field and that are likely to
1
Electronic, Optical, and Magnetic Materials
and Phenomena:
The Science of Modern Technology
Important and unexpected discoveries have been made in all areas of con-
densed-matter and materials physics in the decade since the Brinkman report.1
Although these scientific discoveries are impressive, perhaps equally impressive
are technological advances during the same decade, advances made possible by
our ever-increasing understanding of the basic physics of materials along with
our increasing ability to tailor cost-effectively the composition and structure of
materials. Today’s technological revolution would be impossible without the
continuing increase in our scientific understanding of materials, phenomena, and
the processing and synthesis required for high-volume, low-cost manufacturing.
The technological impact of such advances is perhaps best illustrated in the areas
of condensed-matter and materials physics discussed in this chapter, which will
examine selected examples of electronic, magnetic, and optical materials and
phenomena that are key to the convergence of computing, communication, and
consumer electronics.
Technology based on electronic, optical, and magnetic materials is driving
the information age through revolutions in computing and communications. With
the miniaturization made possible by the invention of the transistor and the inte-
grated circuit, enormous computing and communication capabilities are becom-
ing readily available worldwide. These technological capabilities, which enabled
the information age, are fundamentally changing how we live, interact, and trans-
act business. Semiconductors provide an excellent demonstration of the strong
1National Research Council [W.F. Brinkman, study chair], Physics Through the 1990s, National
Academy Press, Washington, D.C. (1986).
[Figure 1.1 schematic: foundations of electronic, photonic, and magnetic phenomena (Magnetoresistance, 1856; Superconductivity, 1911; X-ray Diffraction, 1911; Quantum Mechanics, 1920s; Wave Nature of the Electron, 1927; Electronic States in Crystals, 1920s; Electron Microscopy, 1930s; BCS Theory of Superconductivity, 1957; Scanning Tunneling Microscopy, 1980s; Correlated Electron States, 1980s) feeding technologies in energy (photovoltaics, sensors, light-weight motors, high-performance transformers), transportation (automotive electronics, sensors, avionics, air-traffic control), entertainment (consumer electronics), and medicine (lasers, medical imaging, sensors).]
FIGURE 1.1 Incorporation of major scientific and technological advances into new
products can take decades and often follows unpredictable paths. Illustrated here are
some selected technologies supported by the foundations of electronic, photonic, and
magnetic phenomena and materials. These technologies have enabled breakthroughs in
virtually every sector of the economy. The two-way interplay between foundations and
technology is a major driving force in this field. The most recent fundamental advances
and technological discoveries have yet to realize their potential.
Although technological advances today are most often associated with the
information age or communications and the computing revolution, impressive
advances continue to be made across a broad spectrum of technologies and scien-
tific disciplines (see Box 1.1). For example, progress in condensed-matter and
materials physics has led to advances in biology, medicine, and biotechnology.
New tissue diagnostics based on diffusing light probes use understanding
borrowed directly from the physics of carrier transport in mesoscopic random
materials. The development of new optical microscopies, such as two-photon
confocal, optical coherence, and near-field optical microscopy, together with the
widespread use of optical tweezers, have started a revolution in the observation
and manipulation of submicrometer-sized objects in cell biology, in new forms of
spectroscopic endoscopy, and in gene sequencing techniques. The emergence of
high-power solid-state lasers and solid-state detectors and the widespread use of
fiber optics make new optical approaches for diagnostics, dentistry, and surgery
increasingly easy. A new form of magnetic resonance imaging enabled by semi-
conductor laser pumping of spin-polarized xenon gas has allowed the three-
dimensional mapping of lung function. The generation of femtosecond pulses of
light by the use of new solid-state lasers has begun another revolution in our
understanding of the subpicosecond dynamics of biological molecules on the
important frontiers of molecular signal processing and protein folding. Although
not covered in detail here, such advances in the use of optics in medicine and
biology are discussed in detail in another National Research Council report.2 In
addition, semiconductor and other solid-state lasers or enhanced solid-state de-
tector arrays, offshoots from condensed-matter physics, are enabling major ad-
vances in the fields of atomic and molecular physics, physical chemistry, high-
energy physics, and astrophysics. New optical materials and phenomena are also
responsible for a number of advances in the technologies associated with print-
ing, copying, video and data display, and lighting.
In the realm of magnetic materials, the loss of cobalt in the 1980s because of
political unrest in Zaire prompted an intense research effort to find cobalt-free
bulk magnetic materials. This led to major advances in creating magnetic struc-
tures from neodymium and iron, which had superior properties and lower cost
compared with cobalt alloys for electric motors and similar applications requiring
magnets with high permanent magnetization. These new magnets, which are
achieved through complex alloys and even more complex processing sequences,
are vastly expanding the industrial use of bulk magnetic materials.
Advances in magnetic materials and their applications are not limited to bulk
materials with high permanent magnetization and magnetic materials used in
information storage. Improvements in soft bulk magnetic materials play an im-
portant role in transformers used in the electric power distribution industry. In-
2National Research Council [C.V. Shank, study chair], Harnessing Light: Optical Science and
Engineering for the 21st Century, National Academy Press, Washington, D.C. (1998).
[Figure 1.1.1 plot: processing power (MIPS, log scale from 0.1 to 1,000) versus year, 1976 to 2000, for microprocessors (286, 386, 486, Pentium, Pentium Pro, Pentium II) and digital signal processors (DSP1, DSP16, DSP16A, DSP1620, DSP16110, DSPC6202).]
FIGURE 1.1.1 Computing power versus time in microprocessors. (Courtesy of
Intel.)
more than 20 million km per year—more than 2,000 km per hour, or around Mach
2. In addition, the rate of information transmission down a single fiber is increasing
exponentially by a factor of 100 every decade. Transmission at 2.5 terabits per
second has been demonstrated in the research laboratory, and the time lag be-
tween laboratory demonstration and commercial system deployment is about 5
years. The analog of Moore’s Law for fiber transmission capacity, which serves as
a technology roadmap for lightwave systems, is shown in Figure 1.1.3. Figure
1.1.4 summarizes the history of optical communications technology.
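Back-of-envelope arithmetic confirms the rates quoted above (the Mach conversion assumes a sea-level sound speed of about 1,225 km/h):

$$ \frac{2 \times 10^{7}\ \mathrm{km/yr}}{8,760\ \mathrm{h/yr}} \approx 2.3 \times 10^{3}\ \mathrm{km/h} \approx \mathrm{Mach}\ 1.9, $$

and a hundredfold capacity gain per decade is a factor of 10^{0.2} ≈ 1.58 per year, a doubling time of about 1.5 years, comparable to the transistor-count doubling of Moore's Law.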
Compound semiconductor diode lasers provide the laser photons that trans-
port information along the optical information highways. Semiconductor diode la-
sers are also at the heart of optical storage and compact disc technology.
In addition to their use in very high-performance microelectronics applications,
compound semiconductors have proven to be an extremely fertile field for advanc-
ing our understanding of fundamental physical phenomena. Exploiting decades of
basic research, we are now beginning to be able to understand and control all
aspects of compound semiconductor structures, from mechanical through elec-
tronic to optical, and to grow devices and structures with atomic layer control, in a
few specific materials systems. This capability allows the manufacture of high-
performance, high-reliability, compound semiconductor diode lasers that can be
modulated at gigahertz frequencies to send information over the fiber-optic net-
works. High-speed semiconductor-based detectors receive and decode this infor-
mation. These same materials provide the billions of light-emitting diodes sold
annually for displays, free-space or short-range high-speed communication, and
numerous other applications. In addition, very high-speed, low-power compound
semiconductor electronics play a major role in wireless communication, especially
for portable units and satellite systems.
[Figure 1.1.3 plot: fiber transmission capacity (Gb/s, log scale from 0.03 to 3,000) versus year, 1980 to 2000, for experimental systems (single-channel ETDM and OTDM; multi-channel WDM; WDM + OTDM; WDM + polarization multiplexing) and commercial systems (single-channel ETDM; multi-channel WDM).]
Another key enabler of the information revolution is low-cost, low-power, high-
density information storage that keeps pace with the exponential growth of com-
puting and communication capability.
wide use. Recently, the highest performance magnetic storage/readout devices
have begun to rely on giant magnetoresistance (GMR), a phenomenon that was
discovered by building on more than a century of research in magnetic materials.
Although Lord Kelvin discovered magnetoresistance in 1856, it was not until the
early 1990s that commercial products using this technology were introduced (see
Figure 1.1.5). In the past decade, our understanding of condensed matter and
materials converged with advances in our ability to deposit materials with atomic-
level control to produce the GMR heads that were introduced in workstations in
late 1997. It is hoped that with additional research and development, spin valve
and colossal magnetoresistance (CMR) technology may be understood and ap-
plied to workstations of the future. This increased understanding, provided in part
by our increased computational ability arising from the increasing power of silicon
integrated circuits, coupled with atomic-level control of materials, led to exponen-
tial growth in the storage density of magnetic materials analogous to Moore’s Law
for transistor density in silicon integrated circuits (see Figure 1.1.6).
[Figure 1.1.6 plot: areal storage density (Gbits/in², log scale from 0.01 to 100) versus year, 1985 to 2005, showing magnetoresistive heads superseded by spin-valve giant MR heads.]
FIGURE 1.1.6 “Moore’s Law” for magnetic storage: logarithm of storage density
versus time. (Courtesy of IBM Research.)
Silicon Technology
As noted in the introduction, semiconductor technology is the key enabler of
the information age. The science of materials as a specific discipline is a relatively
In the early 1940s, the basic machines that were later adapted for ion implan-
tation in the semiconductor business were used at Oak Ridge National Laboratory
for uranium isotope separation. This was a critical part of the Manhattan Project.
Ion beams were first used as part of semiconductor-device processing at Bell Lab-
oratories in 1952. Bell filed a comprehensive patent in 1954 covering the use of
ion implantation for doping semiconductors, but it was not until 1966 that implanta-
tion was actually used to manufacture commercial semiconductor devices.
Hughes Research Laboratory used the technique to form junctions in the man-
ufacturing of diodes. In 1970 Texas Instruments began using ion implantation in
integrated circuit manufacturing to set threshold voltages. Concurrent with these
developments in processing, several companies attempted to enter the implant-
tool manufacturing business with only moderate success, most successful among
them being Accelerators Incorporated. In 1971, however, a new company, Extri-
on, was formed to build commercial implanters specifically designed for integrated
circuit manufacturing. Extrion soon became the primary supplier of implant tools.
This led to the development of a whole new industry in America.
Today, ion implantation is used in several steps of the integrated circuit manu-
facturing process to control the concentration and depth distribution of dopants.
Ion implantation tool manufacturing, an almost exclusively U.S. industry, has grown
to a more than $1 billion per year business. Three U.S. companies (Applied Mate-
rials, Eaton, and Varian) supply virtually all the commercial ion-implant systems
worldwide.
Figure 1.3.2 shows concurrently scanned STM and BEEM images of a single dot.
The BEEM image clearly shows a reduction in BEEM current through the dot,
which indicates the presence of a localized conduction band offset at the dot.
FIGURE 1.3.2 STM and BEEM images of a single dot. [Reprinted with permis-
sion from M.E. Rubin, H.R. Blank, M.A. Chin, H. Kroemer, and V. Narayanamurti,
“Local conduction band offset of GaSb self-assembled quantum dots on GaAs,”
Applied Physics Letters 70, 1590 (1997). Copyright © 1997 American Institute of
Physics.]
dimensions that are smaller than 100 nm. Such structures have been used as gates
on submicron GaAs/AlGaAs devices (see Box 1.5), eliminating the 2DEG under
them. In this way, confinement both in the plane and perpendicular to the plane of
the 2DEG can be achieved. The simplest structure of this kind is a narrow constric-
tion in the 2DEG that exhibits a resistance quantized in units of h/e².
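For such a constriction (a quantum point contact), the textbook result, consistent with the h/e² scale just quoted, is that each occupied spin-degenerate one-dimensional subband contributes one conductance quantum:

$$ G = N \frac{2 e^2}{h}, \qquad R = \frac{1}{G} = \frac{h}{2 N e^2} \approx \frac{12.9\ \mathrm{k\Omega}}{N}, $$

where N is the number of occupied subbands (h/e² ≈ 25.8 kΩ), so the resistance descends through discrete plateaus as the constriction is widened and subbands are added one by one.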
Electron-beam lithography can be used to make nanometer-sized metal wires
and rings. This opened the field of mesoscopic physics: the study of systems that
are larger than atoms but small enough that they are not bulk materials. In such
mesoscopic systems, the wavelength of the carriers is comparable to the device
dimensions and to the mean free path for phase breaking, and statistical averaging
does not eliminate quantum mechanical phenomena. One dramatic phenomenon
of this kind is universal conductance fluctuations.
Most mesoscopic effects for systems in one- or two-dimensional confine-
ment are subtle. However, when electrons are confined in all three dimensions
the results can be dramatic. Structures in which electrons are confined in metals and
semiconductors, with tunnel junctions connecting the confined regions to the
leads (essentially “artificial atoms”), enhance the electron-electron correlations,
Quantum dots are semiconducting or metallic regions so small that the elec-
trons are confined in all three dimensions. Like an atom, a quantum dot contains
a finite number of charges and has discrete energy levels. The study of electron
transport through these minuscule conducting regions has revealed a variety of
fascinating phenomena including observable effects caused by individual elec-
trons. Current versus voltage measurements for a quantum dot show discrete
staircases where each successive plateau represents the addition of one electron
to the quantum dot (see Figure 1.4.1).
[Figure 1.4.1 plot: conductance (e²/πh) and resistance (kΩ) versus gate voltage, approximately -2 V to -0.6 V, for a gated quantum-dot device (schematic inset: gate geometry with W = 250 nm, L = 1 µm).]
FIGURE 1.4.1 Current versus voltage measurements for a quantum dot illustrat-
ing the discrete electronic states. Each successive plateau represents the addi-
tion of one electron to the quantum dot. (Courtesy of Massachusetts Institute of
Technology.)
resulting in the quantization of both charge and energy. The energy levels of
these artificial atoms can be measured by the voltage required to add an extra
electron. These artificial atoms are large enough to display behavior that is not
observed in natural atoms; for example, the superconducting energy gap in
mesoscopic Al structures is quantized.
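The voltage needed to add an electron is set by the charging energy of the confined region; the capacitances below are illustrative orders of magnitude rather than measured values from this report. For an island of total capacitance C,

$$ E_C = \frac{e^2}{2C}, $$

so C ≈ 1 fF gives E_C ≈ 0.08 meV, resolvable only at sub-kelvin temperatures, while C ≈ 1 aF gives E_C ≈ 80 meV, larger than k_B T ≈ 26 meV at room temperature. This is why shrinking an artificial atom raises the temperature at which single-electron effects survive.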
From the perspective of potential applications, single-electron transistors
(SETs) (see Box 1.6) can be realized in systems with three-dimensional confine-
ment provided by structures with two tunnel junctions and a gate. SETs turn on
and off again every time an electron is added. This device not only functions as
a transistor, but also provides insight into the physics of mesoscopic structures.
Using the sharp peaks associated with the addition of an electron, the equilibrium
ground-state energy of the droplet of electrons, as well as some low-lying excited
states of the droplet, can be measured. Furthermore, application of a magnetic
field reveals phase transitions between different states of the system. The mag-
netic field alters the balance between the confining potential, which favors a high
electron density, and the Coulomb interaction, which favors a low electron
density.
Based on recent successes in nanostructures, we can speculate about the
kinds of nanostructures likely to yield new physics and technology. Three differ-
ent physical effects in nanostructures that can be exploited for nanoelectronics
are illustrated in Figure 1.2. In resonant tunneling, the probability for charge
carriers to tunnel through barriers is greatly enhanced when the energy levels on
[Figure 1.5.1 schematic: double quantum well structure showing top QW, bottom QW, tunneling region, contacts, and back depletion gate (drain).]
FIGURE 1.5.1 Schematic illustration of the DELTT. The energy band diagram of
the double quantum well heterostructure is shown at left. [Reprinted with per-
mission from J.A. Simmons, M.A. Blount, J.S. Moon, S.K. Lyo, W.E. Baca, J.R.
Wendt, J.L. Reno, and M.J. Hafich, “Planar quantum transistor based on 2D-2D
tunneling in double quantum well heterostructures,” Journal of Applied Physics
84, 5626 (Nov. 15, 1998). Copyright © 1998 American Institute of Physics.]
Current in the DELTT flows only if both energy and momentum can be con-
served in a tunneling event. Because both layers are two-dimensional, this is
equivalent to the QW subbands being aligned. This can be achieved by applying
a source-drain bias, a control gate bias, or both. Figure 1.5.2 shows source-drain
I-V characteristics at several control-gate voltages for an AlGaAs/GaAs DELTT.
Both the height and position of the resonant current peak are clearly controlled by
the gate. Similarly good behavior has been obtained at 77 K, and bistable memo-
ries and digital logic gates have been demonstrated. Although obstacles remain,
the DELTT shows excellent promise as a practical, room-temperature quantum
transistor.
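The alignment condition can be written out explicitly; this is simply the kinematics implied by the description above. With the in-plane momentum ħk∥ conserved in the tunneling event, an electron in the top well can enter the bottom well only if

$$ E_1 + \frac{\hbar^2 k_\parallel^2}{2 m^*} = E_2 + \frac{\hbar^2 k_\parallel^2}{2 m^*} \quad\Longrightarrow\quad E_1 = E_2, $$

so the kinetic terms cancel and current flows only when the applied biases bring the two subband edges E1 and E2 into coincidence, producing the sharp resonant peak in Figure 1.5.2.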
[Figure 1.5.2 plot: source-drain I-V curves at T = 0.3 K for several top control gate voltages VTC.]
FIGURE 1.5.2 Source-drain current versus source-drain voltage for several val-
ues of top control gate voltage (VTC). Both the height and position of the reso-
nant tunneling current peak are controllable by the gate. [Reprinted with permis-
sion from J.A. Simmons, M.A. Blount, J.S. Moon, S.K. Lyo, W.E. Baca, J.R.
Wendt, J.L. Reno, and M.J. Hafich, “Planar quantum transistor based on 2D-2D
tunneling in double quantum well heterostructures,” Journal of Applied Physics
84, 5626 (Nov. 15, 1998). Copyright © 1998 American Institute of Physics.]
The analogy of a quantum dot to an artificial atom has been extended with the
demonstration that a quantum dot interacts with nearby metallic leads in much the
same way that a single magnetic impurity interacts with a surrounding metal—in
the phenomenon known as the Kondo effect. Kondo behavior was found recently
in a single-electron transistor, which consists of a semiconductor quantum dot
sandwiched between two metallic leads. This miniature device turns on and off as
individual electrons controlled by a nearby gate flow on and off the dot.
The theory of the Kondo effect was developed in the early 1960s to explain a
long-standing puzzle about the resistance of some metals: Why does the resis-
tance start to increase as the metal is cooled below a certain temperature? Ac-
cording to the picture that has emerged, the increased resistance comes from
magnetic impurities whose local magnetic moments couple antiferromagnetically
to those of the conduction electrons. The coupling becomes stronger and increas-
ingly impedes the flow of current as the temperature is decreased.
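In the standard textbook treatment, this competition produces the observed resistance minimum. Writing the resistivity as the sum of a phonon term, which falls as the metal is cooled, and the Kondo impurity term, which grows logarithmically,

```latex
\rho(T) \;\approx\; \rho_0 \;+\; a\,T^{5} \;-\; b\,c_{\mathrm{imp}}\ln T ,
```

where c_imp is the concentration of magnetic impurities, gives a minimum at T_min = (b c_imp / 5a)^{1/5}, so the minimum shifts only slowly with the amount of impurity, as observed.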
The concept of the Kondo effect is intriguing because it involves the pairing of
a localized electron with an electron in an extended state in the metal. Its manifes-
tation in a quantum dot is no less compelling. Although interactions between elec-
trons in quantum dots are known to be important, the Kondo phenomenon is a true
many-body effect requiring a coherent state resulting from the coupling of the lo-
calized electrons in the dot and a continuum of electron states outside the dot.
Experimenters have tried to see a manifestation of the Kondo effect in quantum
dots ever since its presence was predicted in the late 1980s, but succeeded only
recently. Kondo behavior for a single spin had been observed in resonant tunnel-
ing through a charge trap created unintentionally in a point contact. A collaborative
experiment involving the Massachusetts Institute of Technology (MIT) and the
Weizmann Institute in Israel has attracted additional interest because it shows the
Kondo effect in a way that will allow one to explore the phenomenon in a system
with many tunable parameters.
Kondo-like effects in quantum dots are observable only under a very narrow set
of conditions. To see the effects of coupling between the dot and the leads, one
needs to make the rate for tunneling of electrons between the dot and the leads as
high as possible. The higher this rate, the higher the temperature at which the
Kondo effect survives. However, if one makes the rate too high, the electrons on
the dot become completely delocalized. With a smaller dot, the electrons are more
localized to begin with, and a higher rate is possible.
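A commonly used estimate (due to Haldane) for a single level at energy ε₀ below the Fermi level, with on-site Coulomb energy U and tunnel-broadened level width Γ, is

```latex
k_B T_K \;\approx\; \frac{\sqrt{\Gamma U}}{2}\,
\exp\!\left[\frac{\pi\,\varepsilon_0(\varepsilon_0 + U)}{\Gamma U}\right],
\qquad \varepsilon_0 < 0 < \varepsilon_0 + U ,
```

which makes the trade-off explicit: T_K depends exponentially on Γ, so even a modest increase in the tunneling rate raises the accessible temperature dramatically, while a Γ approaching U destroys the charge localization altogether.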
To make a semiconductor quantum dot, one starts with a two-dimensional elec-
tron gas of electrons confined in a plane at the boundary between two semicon-
ducting materials. Additional semiconductor layers go on top of this boundary
region. At the top of the structure, one lays down electrical gates; the electrical
potentials created by these gates confine the electrons in the plane below the
gates to a very small region. Typically the quantum dots lie 100 nm below the
surface. The MIT-Weizmann team made a much smaller artificial atom by forming
the two-dimensional electron gas closer to the surface.
The conductance of a single electron transistor displays a peak when the sum
of the voltage (Vg) on one of the gates and of the voltage (Vds) between the two
leads on either side of the dot, each multiplied by the appropriate capacitance, is
large enough to add an electron to the dot. A gray-scale plot of the conductance
(see Figure 1.6.1) therefore consists of a series of bright diagonal bands, marking
the positions of the peaks, whose slopes are determined by the relative capaci-
tances. The highest peaks occur where the bands intersect on the Vds = 0 axis.
These maxima cluster in pairs along the Vds = 0 axis, with the intra-pair peak
separation smaller than the inter-pair separation. (One pair is shown in the figure.)
The two peaks correspond to the addition of a pair of electrons to the same spatial
state; one electron enters the state with spin up and the other with spin down. The
next electron must go into the next spatial state. Thus, in the region between the
paired peaks the artificial atom has an odd number of electrons.
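This even-odd pairing can be summarized in a small sketch. With all parameter values assumed purely for illustration, the gate-voltage spacing between successive conductance peaks alternates between the charging energy alone (when the added electron pairs into an already occupied orbital) and the charging energy plus the level spacing (when it must open a new orbital):

```python
# Illustrative Coulomb-blockade peak spacings with spin-paired orbitals.
# All parameter values are assumptions for illustration, not measured values.

E_C = 1.0e-3     # charging energy e^2/C, in eV (assumed)
delta = 0.3e-3   # single-particle level spacing, in eV (assumed)
alpha = 0.05     # gate lever arm C_g/C_total (assumed)

def addition_energy(n):
    """Energy to add the n-th electron: pairing into an occupied orbital
    costs E_C; opening a new orbital costs E_C + delta."""
    return E_C if n % 2 == 0 else E_C + delta

# Gate-voltage spacing between the peaks that add electrons n-1 and n:
for n in range(2, 6):
    spacing_mV = 1e3 * addition_energy(n) / alpha   # energies in eV -> volts
    print(f"peak spacing before electron {n}: {spacing_mV:.0f} mV")
```

With these assumed numbers the intra-pair spacing (20 mV) comes out smaller than the inter-pair spacing (26 mV), reproducing the clustering of peaks in pairs described above.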
The peak structure described so far is that expected for any artificial atom. One
tip-off in the data to the presence of the Kondo effect is the non-zero conductance
between the paired peaks, the bright, narrow vertical line along the Vds = 0 axis. In
this region the quantum dot has an unpaired electron, which is free to form a
singlet with the electrons in the leads. This singlet state couples electrons from the
FIGURE 1.6.1 Evidence for the Kondo effect in a single electron transistor.
(Courtesy of Massachusetts Institute of Technology and the Weizmann Institute.)
left lead to the unpaired electron on the dot and thence to the right lead, giving
conductance in a region where none is ordinarily expected. As predicted by theory,
this interpeak conductance increases as the temperature is decreased.
If the enhanced conductance that appeared between the two peaks were due
to the Kondo effect, it would require a symmetric interaction of the unpaired elec-
tron on the quantum dot with electrons in both leads. But if one applies a voltage
Vds across those two leads, separating the Fermi energy levels of the two reser-
voirs, that interaction is no longer symmetric, and the conductance must fall. An-
other signature of the Kondo effect is the disappearance of the enhanced conduc-
tance as the voltage between the leads is increased, leading to the narrowness of
the vertical line in the figure.
Finally, a magnetic field splits the two spin states of the unpaired electron, causing the conductance peaks to split as well, by an amount equal to 2gµBB. This signature is also observed.
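For scale, a rough number (using the bulk GaAs g-factor as an assumed value) shows how small this splitting is:

```python
# Zeeman splitting 2*g*mu_B*B of the Kondo peak; parameter values are
# illustrative (|g| = 0.44 is the textbook value for bulk GaAs).

mu_B = 57.88      # Bohr magneton, micro-eV per tesla
g = 0.44          # assumed |g|-factor
B = 1.0           # applied field, tesla

print(f"2 g mu_B B = {2 * g * mu_B * B:.0f} micro-eV at {B} T")   # ~51 micro-eV
```

A splitting of tens of microelectronvolts corresponds to a fraction of a kelvin, one more reason these experiments demand millikelvin temperatures.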
the two sides of the barrier are identical. Large changes in the tunneling current
are realized with small changes in the bias voltage across such structures. In
structures that confine electrons to regions with dimensions comparable to the
electron wavelength, quantum interference effects can be used to switch elec-
tronic currents. A conceptual approach to a transistor based on quantum interfer-
ence is shown in Figure 1.3a.
Quantum confinement structures can be created that serve as electron wave-
guides, conceptually similar to the waveguides encountered in optical structures.
Nanostructure switches based on guided-wave coupling can be created in quan-
tum confinement structures, illustrated in Figure 1.2b. In these switches, illus-
trated in Figure 1.3b, electrons in input channel 1 (IN1) can either exit through
output channel 1 (OUT1) or be switched to output channel 2 (OUT2) depending
on the gate bias voltage.
FIGURE 1.2 Illustration of physical effects realizable in nanostructures: (a) resonant
tunneling; (b) quantum confinement; and (c) Coulomb blockade. (Courtesy of Stanford
University.)
FIGURE 1.3 Schematic illustrations of nanoswitching concepts based on the physical
effects illustrated in Figure 1.2. (Courtesy of Stanford University.)
[Figure: single-charge-trap memory structure. A polysilicon gate sits above a polysilicon quantum dot (25 nm × 35 nm) that stores a single electron; the stored charge blocks conduction in the narrow silicon channel below, which rests on a buried oxide above the silicon substrate.]
Optical Communications
As mentioned in the introduction, fueled by the explosion of Internet use and the globalization of voice and data communications, lightwave communication system capacity and installations are growing exponentially, at a rate of about a factor of 10 every 6 years. The current global market for lightwave
systems is about $8 billion per year and is expected to grow to about $15 billion
per year by 2000. Optical telecommunication was introduced into the market in
1980; today, not only is optical fiber the medium of choice for long-distance
voice and data communications, but it is also rapidly growing to be a leading
player in the local area network (LAN) market. Optical fiber is predicted to have
revenues of about $20 billion in the year 2000 and dominate the analog cable
television and fixed wireless loop markets.
The first undersea optical cable was installed in 1988, with a capacity of
about 8,000 voice circuits per cable, at a cost of about $400 per circuit per year.
More than 300,000 km of undersea lightwave cable had been installed by the end
of 1996. Cable installed in 1996 cost less than $30 per year per voice channel and
had a capacity of 120,000 voice channels per cable (5 Gb/s per fiber).
The first major terrestrial lightwave system installed in the United States
linked Washington and New York with a capacity of 90 Mb/s per fiber in 1983.
A similar system linked New York and Boston in 1984. More than 100,000 km
of fiber had been installed in terrestrial systems, one-third of it in the United
States, by the end of 1996. The latest systems incorporate wavelength division
multiplexing (WDM) that uses many separate wavelength channels per fiber,
dispersion-shifted fiber, and optical amplifiers. Currently in deployment are 120-
Gb/s-per-fiber systems using 48 channels with 2.5 Gb/s per channel. In the next
2 years 400-Gb/s, 80-channel systems will be introduced into the market.
These advances in technology were made possible by advances in our under-
standing of materials and growth techniques that reduced the transmission losses
of silica-germania optical fiber from 400 dB/km in 1965 to about 0.15 dB/km by
the early 1990s. Such losses allow a signal to travel more than 130 km before its intensity decreases to about 1/100 of its original value; over 800 km (about the distance between Washington and Boston), the accumulated loss is 120 dB. These advances were accompanied by recent major
advances in InP-based electrically modulated single-wavelength semiconductor
diode lasers operating in the 1.3-µm and 1.5-µm wavelength regions, where the
lowest loss in silica fiber occurs; in fast avalanche photodiode detectors; in
erbium-doped fiber amplifiers and other fiber devices (see Box 1.8); and in high-
power semiconductor diode lasers used to pump the fiber devices.
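The attenuation numbers above follow from the standard decibel bookkeeping, in which the fraction of power transmitted over a length L at loss α is 10^(−αL/10). A short sketch:

```python
# Fiber attenuation: transmitted power fraction = 10^(-alpha * L / 10),
# with alpha in dB/km and L in km.

def transmitted_fraction(alpha_db_per_km, length_km):
    return 10 ** (-alpha_db_per_km * length_km / 10)

print(transmitted_fraction(400, 0.05))   # 1965 fiber: 50 m leaves ~1% of the light
print(transmitted_fraction(0.15, 133))   # modern fiber: ~1/100 after ~133 km
print(transmitted_fraction(0.15, 800))   # 120 dB over 800 km: a factor of ~1e-12
```

The contrast between the first and last lines shows why reducing the loss coefficient from 400 dB/km to 0.15 dB/km, a factor of nearly 3,000, transformed optical fiber from a laboratory curiosity into a long-haul transmission medium.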
Current digital optical telecommunications networks typically use the NRZ
(non-return-to-zero) format to transmit data in the linear amplitude regime. Fu-
ture systems could use nonlinear effects in fiber with high-power lasers to exploit
the properties of soliton transmission.
Solitons are wave packets that propagate without changing shape. They are
solutions to the electromagnetic wave propagation equation in fiber waveguides
that arise from the nonlinear effects of self-phase modulation and dispersion in
the group velocity. Solitons are dispersion-free and exhibit a pulse shape that
retains its waveform over long distances because the two nonlinear effects are
exactly counterbalanced. They were first proposed as a means of data transmis-
sion in optical fiber in the 1970s and observed in the research laboratory in 1980.
They offer extremely high bit-rate transmission (>100 GHz) at a single wave-
length. Extensive research on the use of other nonlinear effects in both fibers and
semiconductors and in artificially poled piezoelectric materials is under way to
enable future ultrahigh-speed all-optical processing devices.
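In a standard textbook form, the pulse envelope A(z,T) in the fiber obeys the nonlinear Schrödinger equation

```latex
i\,\frac{\partial A}{\partial z}
  \;-\; \frac{\beta_2}{2}\,\frac{\partial^2 A}{\partial T^2}
  \;+\; \gamma\,|A|^2 A \;=\; 0 ,
```

where β₂ is the group-velocity dispersion and γ the Kerr nonlinearity. For anomalous dispersion (β₂ < 0), the fundamental soliton A(z,T) = √P₀ sech(T/T₀) exp(iγP₀z/2) propagates with its shape unchanged whenever the peak power and width satisfy γP₀T₀² = |β₂|, the exact counterbalancing of self-phase modulation and group-velocity dispersion described above.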
Local area networks (LANs), optical data links, and optical signal processing
are emerging growth areas enabled by new technologies such as vertical-cavity
surface-emitting lasers, smart pixels and microelectromechanical systems. All of
these technologies were implemented as devices within the past decade. The
emergence of low-loss graded index multimode plastic optical fibers in the past 5
years could lead to a low-cost medium to deliver high bandwidth communica-
tions over short links from a single-mode glass fiber backbone to the desktop.
The advantage of extremely low-cost connectors and low-cost transceivers could
outweigh the high cost of fluorinated polymer materials compared with the cost
of glass fiber based LANs. The predicted annual market for optical data links is
$1 billion by the year 2000 and $3.3 billion by 2005, with approximately half in
computers and half in LANs.
The revolution in optical communications over the last decade began with the
invention of the erbium (Er)-doped optical-fiber amplifier in the late 1980s. With
the invention and implementation of a number of key optical-fiber devices, evolu-
tion of an all-optical network architecture has begun. Fiber gratings were first
made in 1975. The technological revolution in fiber devices was enabled by the discovery in 1993 that exposure to ultraviolet (UV) light induces an index-of-refraction change as large as 0.01 in the germania-doped cores of hydrogen-loaded silica fiber. This UV-induced, irreversible chemical change permits stable fiber Bragg gratings to be written easily into the cores of standard optical fiber.
These Bragg gratings serve as key building blocks for a large range of both active
and passive fiber devices such as filters, amplifiers, fiber lasers, dispersion com-
pensators, pump laser reflectors, demultiplexers, and equalizers.
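These gratings obey the Bragg condition λ_B = 2 n_eff Λ, which sets the period Λ of the UV-written index modulation. A quick calculation, with an assumed typical effective index for standard single-mode fiber:

```python
# Fiber Bragg grating period from the Bragg condition: lambda_B = 2 * n_eff * period.
# n_eff is an assumed typical value for standard single-mode fiber near 1.55 um.

n_eff = 1.447            # assumed effective index
lambda_B = 1.55e-6       # target Bragg wavelength, meters

period = lambda_B / (2 * n_eff)
print(f"grating period = {period * 1e9:.0f} nm")    # ~536 nm
```

A period of roughly half a micron is comfortably within reach of the UV interference patterns used to write the gratings, part of the reason the technique spread so quickly.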
The optical power in communications systems increased sharply with the intro-
duction of the Er-doped fiber amplifier. This amplifier is basically a single-pass
laser consisting of several meters of a spliced silica fiber doped with 1,000 ppm
Er3+ in the core with an input coupler for the pump light. Optical amplification can
be achieved through stimulated emission from the excited states of Er atoms in the
glass if a population inversion is created with pump light from a semiconductor
diode laser.
The optical properties of rare-earth impurities in a glass matrix were first stud-
ied in the 1960s. Rare-earths are ideally suited for use as an amplification and
lasing medium; they have strong optical transitions in the infrared and their proper-
ties are nearly independent of the host material. Er, pumped with 1480-nm or
980-nm light, is ideal for an amplifier in the 1.55-µm communications window.
Optical amplification has many advantages over electronic regeneration: am-
plification occurs over a relatively wide (80-nm) gain curve, ideal for dense wave-
length division multiplexing systems; amplification is transparent—i.e., indepen-
dent of modulation format and bit rate—and in principle the gain is bidirectional. It
also allows watts of optical power, which is important for higher data rates of wave-
length division multiplexing, increased passive split architectures where one source
is split into many channels, and extended repeater spacings. Systems with optical
amplification are far simpler to upgrade to higher bit-rate systems after initial instal-
lation because all optical repeaters are independent of bit rate. Er-fiber amplifiers
were first deployed in undersea communications systems in the mid-1990s.
Because silica fiber has more wavelength dispersion in the 1.55-µm region than at its dispersion minimum near 1.31 µm, additional dispersion-compensating fibers were installed at intervals in the system. New fiber designs have shifted the
dispersion minimum to 1.55 µm. New installations use this fiber to minimize the
effects of dispersion at high data rates. In the 1.31-µm optical communications
window used by most of the installed terrestrial base, amplifiers using praseo-
dymium (Pr) ions in fluoride glass hosts (because Pr in silica does not emit at this
wavelength) and Raman-shifted silica fiber amplifiers have been recently demon-
strated in the research lab.
Fiber lasers using rare-earth ion dopants and fiber Bragg gratings as cavity
mirrors were demonstrated in the early 1990s in both silica and fluoride fiber hosts.
A cascaded Raman-shifted laser was demonstrated in 1990 in standard silica-
germania fiber. Raman-shifted lasers eliminate the need for specialty fiber doped
FIGURE 1.8.1 Block diagram of a Raman amplifier at 1.3 µm. (Courtesy of Bell
Laboratories, Lucent Technologies.)
3, than a magnetic disk drive, a data transfer rate 4 to 10 times slower, and a cost
per gigabyte about 5 times higher. However, the platter of an optical drive is
typically removable and costs about a factor of 10 less than a typical magnetic
disk of equivalent capacity. Optical drives occupy niche markets of data back-up
and distribution rather than the far larger market of main drives in computers.
This latter market is dominated by magnetic drives (discussed below). Future
improvements in optical storage technology could potentially allow higher den-
sity optical storage with access speeds similar to those of magnetic drives, and
thus become a displacement technology for magnetic storage. Development is
under way in four areas for discontinuous improvements in the speed or data
density of optical drives.
One development to improve storage density involves focusing a laser beam
into the desired storage layer of a three-dimensional sandwich produced by stack-
ing semitransparent layers. Data storage density is thereby increased by as much
as a factor of 10.
Also under development are shorter wavelength lasers, permitting smaller
spot sizes. Extensive materials research is required to make low-cost high-
reliability blue-green lasers commercially available (see Box 1.9).
The third emerging development is near-field optical recording. Flying a specially designed optical head very close to the storage medium allows write and read spots smaller than the wavelength of the laser light. This direct outgrowth of
condensed-matter and materials physics and optical physics was initiated in 1928
when British scientist E.H. Synge proposed the physics of near-field optics for
microwaves. The first near-field visible optical microscope was not constructed
until the late 1980s, after the invention of the scanning-tunneling microscope
(STM) in 1981. The STM created the technology required for scanning a small
tip in a controlled manner a specified distance away from a surface with atomic-
scale resolution. Near-field technology has the potential for data storage density
one to two orders of magnitude higher than conventional optical and magnetic
storage projections to the year 2000. Another potential advantage of this technol-
ogy is the ability to use very low mass optical heads mounted directly onto sliders
that have been developed for magnetic storage to reduce seek times.
The fourth development on the horizon is holographic data storage. In holo-
graphic storage a page of binary data is stored as pixels of a monochrome image. It
is possible to record thousands of holograms in a spot of a storage medium with
resolution on the order of the wavelength of light. Because of the three-dimen-
sional capability, holographic storage promises a projected density two to three
orders of magnitude larger than conventional optical storage. In addition, it has
extremely high data rates because an entire image is transferred simultaneously.
Development of low-cost, reliable blue-green lasers and solid-state spatial light modulators, along with low-cost, robust, and reliable storage media for three-dimensional holograms, is needed to enable commercialization of the technology.
patterns easily compatible with plastic substrates is the vision driving scientific
investigation of semiconducting organics.
Application of organic materials in electrophotographic photoreceptors is
already commercially successful. In that case, large-area molecularly doped
polymer films with high-charge photogeneration yield have demonstrated ex-
tremely high contrast between photoactivated and dark conductivity. This prop-
erty, combined with the ability to rigorously exclude deep-charge transport traps,
has made organic materials superior for photocopying applications.
The new classes of semiconducting organic materials fall into two catego-
ries: discrete evaporable molecular systems and conjugated polymers. The former tend to be compositionally pure and readily suited to layering; the latter are more thermally stable and are usually amenable to solution processing. The
most mature application of these materials is light-emitting diodes; and commer-
cial low-resolution pixelated monochrome display products based on the
evaporable organics are currently available (see Figure 1.4). It is likely that
analogous products based on conjugated polymers will be available in 1999.
Material stability, device efficiency, and low operation voltage remain important
areas of research and engineering. Further progress in synthesis of systems
resistant to oxidation and electrochemical degradation, synthesis of stable elec-
tron transport materials for the polymeric devices, preparation of materials where
aggregation quenching of luminescence is negligible, and development of contact
modification treatments that improve injection efficiency remains a challenge for
organic chemists. Meanwhile, understanding injection and transport, which de-
termine current voltage characteristics, is crucial. Identification and passivation
of luminescence-quenching sites and study of the effects of high fields present in
light-emitting diodes remain important areas for basic research. A number of
FIGURE 1.4 Flexible light-emitting diode display based on evaporated organic materi-
als. (Courtesy of University of California at Santa Barbara.)
systems issues associated with electrical drive scheme, color fidelity, and pattern-
ing must also be resolved if organic emitters are to have wide application in
display technology.
The requirements for other promising applications of light-emitting organics
such as printing and lighting are substantially the same. A flurry of activity
directed at making electrically pumped organic lasers, however, has raised a
number of additional issues. The current injection requirements are significantly
larger than what is presently achievable, and improved transport will be required.
Also, design of resonator geometries suitable for retaining these large injection
currents, while avoiding absorption loss associated with metallic contacts, will be
required. A significant body of clever work involving distributed feedback,
photonic gap, and microdroplet resonators has begun to address this problem.
Hot carrier effects in the organics may also become quite important at the applied
electric fields necessary to drive lasers, and “interband” emission analogous to
that in inorganic semiconductors has been reported recently under conditions
near dielectric breakdown. The high density of excited states required for stimu-
lated emission gain has also spurred interesting new research on exciton-exciton
interactions and cooperative emission phenomena. These rely on optical excita-
tion of the organic films. It may turn out that the best way to make suitable solid-
state organic lasers is a hybrid design using passive organic gain media that are
optically pumped by electrically pumped inorganic semiconductor lasers such as
those based on InGaN. For these applications, traditional laser dyes doped into
inert polymers may be satisfactory.
Many of the scientific issues noted above are common to the application of
organic semiconductors to transistor applications and photovoltaics. These tran-
sistor applications are likely to be further in the future because performance
remains somewhat inferior to alternative technologies. Organic thin-film transis-
tors that perform within an order of magnitude of amorphous silicon have been
fabricated and may be promising as pixel-switching transistors in active-matrix
liquid-crystal displays. Charge injection and transport are critical to these appli-
cations. Interface chemistry and physics at contacts remain poorly understood;
control of these is essential to stable low-voltage operation. Exciting and prom-
ising results using dipole layers to reduce injection barriers have been reported.
Dipole layers function much as graded gap contacts do in traditional semiconduc-
tors. Charge transport has been studied extensively, and deep traps are com-
monly observed that both reduce mobility for transistor applications and raise
injection voltages resulting from space charge limitations on the current. Re-
search to identify and reduce trap sites is also important to make these applica-
tions viable.
A final general class of promising applications involving active organics in
electronics (see Box 1.10) relies on the conductivity of doped conjugated poly-
mers. Doped trans-polyacetylene, although unstable in ambient conditions, has exhibited conductivities higher than those of some metals. More stable systems, such as doped
polypyrrole, that are more easily processed and are more compatible with plastic
than traditional systems like indium tin oxide, show promise for application as
transparent contacts for displays.
We anticipate a variety of scientific opportunities, for which the ultimate
technological implications are as yet unclear, to emerge from studying electronic
polymers in the next decade. Some of these opportunities may come as a result of
photochemically modifying single domains or arrays of domains with a near-
field optical microscope, synthesizing materials with giant optical nonlinearities,
linking organic materials to inorganic quantum dot structures, examining the
phase diagrams of mixed polymeric systems, and designing functional polymers
that interface to biological systems. Most likely, even more exciting but unantici-
pated phenomena will arise from this young and vital area of research.
Although the vision for future consumer electronics as described in Box 1.10
is somewhat hyperbolic, the past decade has marked the synthesis of new active
organic materials and advances in the science underlying their electronic, optical,
and mechanical properties. Technologies based on these materials have the
promise that comes with the processability of organic materials—namely, the
potential to use inexpensive manufacturing methods like spray painting, dip coat-
As you walk into the train station, you notice the large multicolor advertisement
for the evening paper glowing on the electroluminescent schedule board. The
large-area light-emitting diodes are made from organic conducting electrodes and
luminescent polymers that have been spray painted onto the board. You want
your usual sections of the local paper and then a few from different papers, so you
slide your plastic profile card into the newspaper machine. The card holds a few thousand transistors made from organic materials, printed by an ink-jet printer modified to deposit organic charge transporters and electrodes. It con-
tains your medical records, frequent flier numbers, custom newspaper prefer-
ences, and a host of other information. It also serves as a cash card. The machine
asks you to put in your display. You unroll your pocket display and insert it into the
machine. Several organic lasers write your customized newspaper into a thin
patch of organic material about the diameter of a dime in the upper right corner of
your portable display that functions as an erasable compact disk. On the train, you
plug your display into the seat back. The reader is a moving organic laser whose
reflection from the writing on the disk is recorded by a photovoltaic cell, which is
also made from organic materials, and the display is a high-resolution luminescent
color display made from organic light-emitting diodes. The active switching matrix
for the display pixels is made from hybrid materials, inorganic charge transporters
that have been solubilized to be processable using organic chemistry. The display
was printed on a flexible polymer in a reel-to-reel manufacturing process. A tiny
dot of silicon circuitry containing a microprocessor to interpret the information read
from the disk and containing display drive circuitry was attached. You marvel that
the display cost less than your monthly rail pass.
standing of the physics of quantum microcavities and advances in III-V and II-VI
compound semiconductor growth and processing techniques. Simultaneously,
these new materials fabrication and processing techniques have led to beautiful
insights into the science of quantum optical structures, which will in turn enable
more advances in technology.
By changing the dimensionality of a material by growth or processing, one
can greatly alter the density of electronic states, as shown in Figure 1.5. A bulk
three-dimensional distribution of electrons in a metal or semiconductor has a
density of available electronic states that rises as the square root of the energy.
The nature and the energies of these electronic states are greatly affected by
“quantum confinement.” Enclosing a thin layer of lower band-gap material (with thickness comparable to the electron’s de Broglie wavelength, about 20 nm in GaAs) between two slabs of higher band-gap material yields a two-dimensional quantum well with sharp steps in the electronic density of states (Figure 1.6a). Confining the electrons into a one-dimensional quantum wire produces a
series of sharp peaks at the onset of each new quantum mode in the wire (Figure
1.6b). Confining the electrons into a zero-dimensional “quantum dot,” a box
comparable in size to the wavelength of the electron in all directions, produces a
series of even sharper spikes that correspond to a series of confined quantum
levels for the electrons in the box (Figure 1.6c). Therefore lowering the dimen-
sionality of a structure on the atomic scale of a few nanometers will cause very
large changes in the physics of electronic transport. The advances in simulations
of electronic and optical processes in semiconductors, along with programmable
molecular-beam epitaxy (MBE) techniques for fabrication of multiple quantum
wells with atomic-scale accuracy—(“quantum engineering” of structures), have
led to the invention of the quantum cascade laser, which operates between con-
fined levels of electrons in a series of tailored quantum wells (see Box 1.11),
giant “pseudomolecules” with large optical nonlinearities, and resonant tunneling
devices.
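The trends sketched in Figure 1.5 can be summarized compactly. For ideal structures with parabolic bands (a standard textbook idealization), the density of states in each dimensionality is

```latex
g_{3D}(E) \propto \sqrt{E}, \qquad
g_{2D}(E) = \frac{m^*}{\pi\hbar^2}\sum_n \theta(E - E_n), \qquad
g_{1D}(E) \propto \sum_n \frac{\theta(E - E_n)}{\sqrt{E - E_n}}, \qquad
g_{0D}(E) = 2\sum_n \delta(E - E_n),
```

where the E_n are the confinement energies and θ is the unit step function: the smooth square-root edge of the bulk sharpens into steps, then inverse-square-root peaks, and finally the discrete, atom-like lines of the quantum dot.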
The optical properties of a semiconductor are greatly affected by reducing
the dimensionality of its structure. For example, an exciton, an optical excitation
of the system near the band edge, comprises a bound state between a conduction
band electron and a valence band hole with wave functions similar to those of
Rydberg atoms. Either an electron or a hole, or both, can be bound in a lower
dimensional material by being confined in one, two, or three dimensions. This
can also greatly affect the excitonic levels and thus the optical properties near the
band gap. The dimensionality of the system is reduced if the size of a dimension
(for example, the thickness of a slab) is comparable to the diameter of the exciton,
which in GaAs is about 5 nm. In materials with strong coupling to light, the
optical and electronic plasma modes of the material interact to form coupled
exciton-polariton modes that are also greatly affected by the dimensionality of
the system. Theoretically, manipulating the density of states of the optical exci-
tations of a system can produce lasers with zero current threshold, in contrast to
FIGURE 1.5 Illustration of the effect of quantum confinement on the density of elec-
tronic states. [Reprinted from Scientific American 8(1), 26 (1997) with permission from
John Deecken (artist).]
FIGURE 1.6 Two-, one-, and zero-dimensional small optical devices. [Reprinted from
Scientific American 8(1), 27-29 (1997) with permission from (a) S.N.G. Chu, Lucent
Technologies; (b) Lucent Technologies; (c) James S. Harris, Stanford University.]
FIGURE 1.11.1 The active regions of a QC laser are alternated with electron
injectors from which electrons tunnel into the upper excited state of the laser
transition. (Courtesy of Bell Laboratories, Lucent Technologies.)
conventional lasers, in which gain is achieved only after sufficient excitation that the excitons overlap in space and ionize into an electron-hole plasma. Because excitons are bound states of two fermions, they should obey Bose statistics and can themselves exhibit Bose condensation into a macroscopic quantum ground state. The phase diagram of excitonic matter in semiconductors and its interaction with photons is a field of much current interest; the observation of lasing between sharp excitonic levels, or of lasing from Bose-condensed excitons, remains controversial, however.
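The Rydberg-atom analogy can be made quantitative with the usual hydrogenic scaling. Treating the electron-hole pair as a hydrogen atom with reduced mass µ embedded in a medium of relative dielectric constant ε_r gives the estimates

```latex
E_b \;\approx\; \frac{\mu}{m_0}\,\frac{1}{\varepsilon_r^{2}}\,(13.6\ \mathrm{eV}),
\qquad
a^{*} \;\approx\; \frac{m_0}{\mu}\,\varepsilon_r\, a_B,
\qquad a_B = 0.0529\ \mathrm{nm},
```

so the small effective masses and large dielectric constants of semiconductors shrink the binding energy to millielectronvolts and swell the orbit to nanometers, which is why layers only a few nanometers thick suffice to confine the exciton and reshape the optical spectrum near the band gap.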
Over the last decade four approaches for forming small optical devices have
been used:
double-T zero-dimensional quantum dots (Figure 1.6c); or, another approach, the
clever use of strain in pseudomorphic overgrowth was used to produce strained
dot arrays (Figure 1.3.1).
4. Solution chemistry, described in Chapter 5, was used to produce a mono-
disperse colloidal suspension of semiconductor quantum dots.
Selective area overgrowth has also been successfully used to tailor the quan-
tum well thickness and composition laterally to make integrated semiconductor
optical devices; examples include electro-optical modulators that consist of a
ridge waveguide semiconductor laser and an optical modulator adjacent to each
other on a single chip. The fabrication of novel optical nanodevices is in its
infancy; many advances in design and manufacturing (such as self-assembly) are
required to allow mass applications.
Enclosing a material between highly reflecting mirrors produces an optical
microcavity, which can be tailored to control the angular distribution of the
output light from the structure as well as the spectrum and spontaneous emission
from any emitters inside. This is the principle behind vertical-cavity surface-
emitting lasers, or VCSELS, which are made by enclosing an active gain medium
between two highly reflective dielectric stacks of mirrors grown vertically on a
substrate. Microcavities with very sharp resonances (very high Q) have been
achieved by making whispering gallery mode resonators out of semiconductors
or droplets of dye solution or polymer in which the index change between the
material and air produces a high-Q waveguide around the outside diameter of the
structure (Figure 1.7). In fact, optical nanocavities have been produced that have
Q as high as several thousand. Because of the enormous field enhancement in the
cavity with such Q, nonlinear effects should be observable with only a few
incident photons. The intense emission of light observed, and yet to be conclu-
sively understood, from porous silicon is to some extent both a confinement and
microcavity effect. Another exciting new field involves tailoring optical materials to achieve periodically alternating polarities of the ferroelectric polarization, on a length scale chosen to maximize the intensity of optical nonlinearities through efficient phase matching of coherent four-wave mixing.
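The significance of a high Q can be expressed as a photon storage time, τ = Q/ω. A rough number, with the Q and wavelength taken as illustrative assumptions:

```python
# Photon lifetime in a microcavity: tau = Q / omega = Q * lambda / (2 * pi * c).
# Q and wavelength below are illustrative assumptions.

import math

c = 2.998e8            # speed of light, m/s
Q = 5000               # assumed quality factor ("several thousand")
wavelength = 1.55e-6   # assumed wavelength, m

tau = Q * wavelength / (2 * math.pi * c)
print(f"photon lifetime ~ {tau * 1e12:.1f} ps")   # ~4.1 ps
```

Photons circulate for thousands of optical cycles before escaping, so the intracavity field builds up far beyond the incident field; this buildup is the origin of the few-photon nonlinear effects anticipated above.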
Extending the concept of optical microcavities into three dimensions leads to
the prediction of photonic band-gap materials, structures with periodic variations
of dielectric constant on a length scale comparable to the wavelength of light.
The idea is to design materials such that they can affect the properties of photons
in a manner similar to the way semiconducting crystals affect electrons. In a
semiconductor, the atomic lattice presents a periodic potential to an electron
propagating through the electronic crystal. The geometry of the lattice and the
strength of the potential are such that, owing to Bragg-like diffraction from the
atoms, a gap in energy for which an electron is forbidden to propagate in any
direction appears. In a photonic crystal, the periodic potential is caused by a
lattice of periodic dielectric media instead of atoms. If the dielectric contrast is
FIGURE 1.7 Very high-Q microcavity. (Courtesy of Bell Laboratories, Lucent Technol-
ogies.)
sufficient, Bragg scattering of light off the lattice can also produce a forbidden
band that extends over a certain energy range in which light cannot propagate in
any direction. However, a defect in the periodicity will introduce localized states
in the photonic band-gap in much the same way that localized states exist for
electrons within the semiconducting gap. The nature and shape of the localized
states will depend on the dimensionality of the defect: a two-dimensional slab or
a one-dimensional line will define mirrors and waveguides in the dielectric array,
and a zero-dimensional defect will define a microcavity. The design and manipu-
lation of these defects in the photonic band-gap material promises far more
control of photons.
As the technology for fabricating photonic lattices in the near-infrared (IR) and visible spectral regions advances, these lattices will offer a radically different means for controlling light. For example, Figure 1.8 shows a theoretical model of how
light propagates in a periodic square lattice of dielectric rods with a waveguide
produced by the intersection of two missing rows of rods. Remarkably, propaga-
tion is predicted to occur with no losses even though the bend in the waveguide is
on a length scale comparable to the wavelength of light! In ordinary dielectric
waveguides today, the bending losses caused by leakage from evanescent fields require very smooth bends with bending radii of about 10 cm; thus the waveguides are large, making manufacture and packaging of integrated optical structures
FIGURE 1.8 How light propagates in a periodic square lattice. (Courtesy of Massachu-
setts Institute of Technology.)
Technology Pull
The industry of magnetic recording, in all its forms, constitutes an enterprise
with annual revenues in excess of $100 billion. It consists primarily of magnetic disks, magnetic tape, and optical disks for digital data storage, together with various forms of magnetic recording for audio and video. This industry is expe-
riencing an overall compound annual growth rate in revenue of about 10 percent
per year. This growth rate is expected to continue for at least another decade,
with magnetic storage playing the dominant role. The United States “owns”
about 40 percent of the magnetic storage business, the largest single component
of which is hard disk drives (see Box 1.12). This component alone is a $30
billion a year business.
The accelerating interplay of the science and applications of magnetism is
well illustrated by the phenomenon of magnetoresistance. Lord Kelvin first
observed this effect in 1856. Beginning in the early 1980s, a decade of research
and development (at IBM) with this basic laboratory phenomenon perfected a
product of major commercial importance. The first magnetoresistive sensor used
in the recording head of a hard disk drive has an intricate structure in which data
is sensed by a 20 nm thick permalloy (a NiFe alloy) layer. The useful change in
resistance of this film as it passes in close proximity to a small magnetized region
of a magnetic disk is about 2 percent. The time from discovery of the phenomenon to a high-volume product was 135 years.
Recent research activities led to the discovery of a superior form of magneto-
resistance, called giant magnetoresistance (GMR). GMR requires the interaction
between at least two very thin ferromagnetic films and can register a resistance
change at room temperature of about 10 percent in the same magnetic field range
as permalloy. Moreover, as an interfacial phenomenon, its performance in so-
called spin-valve recording sensors improves with decreasing film thickness,
which also increases the storage density. In contrast to the case of magnetoresistance, only 10 years passed between the original discovery of GMR and the first product. In the exponentially growing global hard-drive
industry, GMR sensors will be needed to sustain this rate of improvement into the
third millennium.
A range of magnetic tape storage systems, with applications from audio and video to data storage, constitutes another third of the magnetic storage business.
Storage densities in this arena are experiencing a single-digit compound annual
growth rate with continuing cost reduction. Here, too, magnetoresistive heads
play an important role in the continued scaling to higher densities. Magnetic
particle tapes are still the industry standard, but thin film tapes have been intro-
duced and will undoubtedly dominate some time in the future.
In a completely different arena, bulk magnetic materials constitute a $4
billion global market, with the United States holding approximately 20 percent of
the market share. This market is projected to grow to more than $6 billion by the
year 2000. “Hard” bulk magnetic materials are essential constituents in a wide
variety of electric motor and generator technologies. For such applications, the
strength of the permanent magnetism, or so-called “maximum BH-product,” is
FIGURE 1.12.2 Hard disk drive capacity shipped in products per year, in petabytes. The compound annual growth rate of 95 percent is projected to continue for the foreseeable future. (Courtesy of IBM Research.)
1980s and dramatically illustrates the unpredictable effects of the interplay be-
tween politics, economics, research, and development. Samarium-cobalt was the
leading hard magnetic material in the late 1970s in spite of the high cost of
samarium. The cost of samarium-cobalt magnets ballooned fivefold when the
world’s principal source of cobalt disappeared during the 1978 national upheaval
in Zaire. Intense focused research established that samarium and cobalt could be
replaced by neodymium and iron, respectively. Not known in advance was that
the attainment of permanent magnetization in the new compositions hinges on
complicated processing sequences including, in one case, creation of amorphous
material by melt-spinning followed by crystallization through severe mechanical
treatment and subsequent annealing. Equally unexpected was that the product
would be stronger, both magnetically and mechanically, and less expensive. The
rapidly growing NdFeB segment of the bulk magnetic materials market is pro-
jected to reach $4 billion by the year 2005.
“Soft” bulk materials play key technological roles in radio frequency (rf) and
power distribution applications. At frequencies higher than 100 kHz, ferrites
remain the materials of choice. They are widely used in all manner of rf and
microwave elements such as antennas, filters, circulators, and insulators. Histori-
cally, advances in ferrite performance (higher permeability, higher frequency
[Figure: (BH)max, in MGOe, on a logarithmic scale from 0.1 to 100, plotted against year from 1880 to 1980. Successive classes of materials (steels, hard ferrites, Alnicos, samarium-cobalt, and neodymium-iron-boron) trace a roughly exponential rise.]
FIGURE 1.10 Chronological trend of (BH)max where the data represent initial demon-
stration in the laboratory. [Reviews of Modern Physics 63, 819 (1991).]
response, and lower losses) have been driven by niche military applications, with
commercialization following rapidly. At lower frequencies, soft magnetic mate-
rials are used extensively in transformers for the power-distribution industry.
Here, incremental improvements can have an enormous technological and eco-
nomic impact. Particularly important is magnetic metglass, an amorphous mate-
rial in ribbon form prepared by ultra-fast quenching. The absence of magneto-
crystalline anisotropy has the consequence that the magnetization vector can be
easily rotated. Therefore, a metglass has high magnetic permeability and low
losses. When used in the core of a power transformer, metglass is more expen-
sive than crystalline materials and increases capital costs; however, this increased
capital cost is often rapidly amortized by continuing savings from decreased
transformer loss in electric power transmission. Worldwide savings of several
billion dollars have been realized by the introduction of magnetic metglass.
Magnetostriction is another area of magnetism of considerable, and growing,
interest for both military and commercial applications. Most of the materials
development work in this area has been focused on rare-earth transition metal
alloys. Strains of up to 1 percent, along with considerable force actuation, have
been demonstrated in practical applied fields. Applications range from sonar
pulse generation to high-reliability replacement of hydraulic systems in aircraft
and even tanks.
A final area is magnetoelectronics (exclusive of magnetic storage), which
includes a variety of devices and associated assemblies. The largest component
of this industry today is sensors used for commercial, scientific, and military
applications. Such sensors range from Hall effect sensors, to superconducting
quantum interference devices (SQUIDs), flux gate magnetometers, search-coil
magnetometers, magnetoresistance (MR) and GMR sensors, to magnetic force
microscopes. Magnetoelectronics is characterized by a number of small but
stable niche markets. The worldwide market for SQUID instrumentation, for
example, is about $20 million per year; the market for magnetic force micro-
scopes is similar.
The potential sleeping giant in the field of magnetoelectronics is a growing
collection of novel devices and circuits that possibly can be integrated onto a
high-performance chip to perform some complex function. More realistically,
key elements may be integrated with high-performance semiconductor technol-
ogy to produce new generations of microchips with function, density, and/or
performance beyond that achievable with semiconductor technology alone. Non-
destructive read out memory chips in which bits are stored in small electrically
addressable magnets have been demonstrated at capacities of up to 256 kb (Fig-
ure 1.11). The nonvolatile, radiation-hard nature of this memory, together with the potential for scaling to much higher density and performance, especially as new magnetic elements are developed, shows considerable promise.
FIGURE 1.13.1 Structure of the Fe10 “ferric wheel” cluster, where the large solid
circles represent the iron atoms and the empty circles are, in order of decreasing
size, chlorine, oxygen, and carbon. The 10 Fe3+ ions, each with a magnetic
moment corresponding to the same angular momentum or spin, are bound to-
gether into a perfectly regular ring. High magnetic field experiments have shown
that the Fe3+ ions exhibit antiferromagnetic behavior; neighboring spins prefer to
be antiparallel. The spin structure of the molecule passes through a rich se-
quence of phase transitions resembling those in bulk layered antiferromagnets.
These experiments open the prospect of precisely controlling the structure, inter-
actions, and dynamics of nanomagnets. [Reprinted with permission from D.
Gatteschi, A. Caneschi, L. Pardi, and R. Sessoli, “Large clusters of metal ions:
The transition from molecular to bulk magnets,” Science 265, 1056 (1994). Copy-
right © 1994 American Association for the Advancement of Science.]
characterization gives the basic magnetic parameters of the particle: ionic mo-
ment (i.e., the local state of spin), exchange interaction (the coupling strength
between the spins), and anisotropy (the height of the energy barrier in the double-
well potential separating the “up” state from the “down” state). Such character-
izations demonstrate that the ferric wheel behaves very much as an atomic-scale
analog of a layered antiferromagnet. Less traditional characterizations are re-
quired to understand the ultimate stability of such molecular magnets, which is
determined by quantum tunneling. One particularly interesting example is the
observation of what might be called “quantized hysteresis” in the Mn(12) mol-
ecule: at low temperatures, this structure shows a propensity to switch from up to
down at a sequence of regularly spaced magnetic fields. Some evidence suggests
that these magnetic fields coincide with resonances between quantum levels of
the up and down wells, resulting in enhanced tunneling. Although some details
of the tunneling mechanism remain to be understood, this is a particularly simple
example of the stability (up versus down) of the moment of a molecular magnet
being ultimately limited by a purely quantum effect.
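The regular spacing of the switching fields has a simple interpretation in the commonly used effective-spin model. Taking an anisotropy Hamiltonian of the form

```latex
H \;=\; -D\,S_z^2 \;-\; g\mu_B B\,S_z ,
```

the levels m and m' on opposite sides of the barrier become degenerate at fields B_n ≈ nD/(gµ_B), n = 0, 1, 2, …; at each such field the up and down wells come into resonance and tunneling is enhanced. With the value D/k_B ≈ 0.6 K reported for the Mn(12) molecule and g ≈ 2, the spacing D/(gµ_B) is about 0.45 T, of the order of the regularly spaced steps observed.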
Of more profound significance than the observation of quantum tunneling
would be the observation of “quantum coherence” in these nanomagnets. The
phenomenon is closely analogous to the microscopic quantum coherence (MQC)
effect sought in small SQUIDs for years—the creation of a quantum state in a
controlled superposition of up and down states. The regular advance of the phase
of this superposition would result in the sinusoidal oscillation of the magnetic
domain between the up and down states. Observation of such coherence oscilla-
tions would foreshadow a significant change in the role that quantum mechanics
might play in the dynamics of magnetic domains. Although we might view
magnetic tunneling as a nuisance, destroying the stability of a bit stored in the
magnetic domain, coherence, if controllable, could be the resource needed to
realize the basic element of storage and processing in quantum computing. Signs
of magnetic quantum coherence have in fact been observed in a naturally occur-
ring magnetic nanoparticle.
In the past 10 years rapid progress has been made in the characterization and
understanding of magnetic multilayers, exchange coupling, and spin-dependent
transport through magnetic materials and interfaces. Results from an experiment representative of this exciting, groundbreaking work are shown in Figure 1.13.
This experiment measured the oscillatory exchange coupling between iron layers
separated by a chromium spacer of varying thickness. The chromium wedge was
grown epitaxially on the nearly perfect surface of an iron whisker crystal whose
magnetization is split into two opposite domains along the [001] direction. A thin
iron film was deposited on top of the chromium, and its magnetization was
measured using scanning electron microscopy with polarization analysis
(SEMPA). The SEMPA image, drawn on the wedge schematic, clearly shows
that the exchange coupling reverses direction with almost every single monolayer
change in chromium thickness. The oscillatory coupling period, which arises
FIGURE 1.13 Oscillatory exchange coupling in Fe/Cr/Fe. [Physical Review Letters 67,
140 (1991).]
[Figure: layer sequence of the spin-valve transistor: a Pt/Co/Cu/Co metallic base between a silicon emitter and a silicon collector, with emitter current Ie, collector current Ic, and base-collector voltage Vbc indicated.]
FIGURE 1.14 Schematic cross section of a prototype spin-valve transistor. [Physical
Review Letters 74, 5260 (1995).]
FIGURE 1.15 Magnetic tunnel junction: (a) structure; (b) magnetoresistance as a function of applied field H. (Courtesy of IBM Research.)
fraction is needed to obtain the position of the oxygen atoms within the unit cell
as a function of temperature, field, pressure, and doping. Electron microscopy is
needed to understand growth inclusions that form two-dimensional stacking
faults. Synchrotron sources enable advanced spectroscopies to identify the +3
and +4 valence states of Mn and their ratio. Diffuse x-ray scattering and quasi-
elastic neutron scattering are used to investigate the presence and dynamics of
polaronic distortions. A number of other techniques based on such effects as
spin-polarized photoemission, magnetic circular dichroism, and second harmonic
generation are becoming increasingly prevalent, while many others are in the
initial stages of demonstration.
We need to understand the impact of the issues above, and of related issues, on technologies such as magnetic recording and on the synthesis of new materials with improved properties, such as higher BH-products. Because of the resurgence of the science and applications of magnetism, it is important that we reestablish the teaching of magnetism as a priority across our universities, rather than at only the few institutions that presently teach it.
Much remains to be learned concerning the nature of spin-polarized trans-
port. Questions need to be answered about the role of structure and the relation-
ship of surface and interface structure to magneto-transport, the scattering mecha-
nisms at interfaces in GMR, and the physics of the temporal and spatial decay of
nonequilibrium magnetism. Also required is a detailed understanding of the
mechanism of spin injection, either directly or through tunneling barriers from
maturation of GMR and development of CMR and MTJs in the not too distant
future. Although these changes will have major impact on computing and com-
munications over the next few years, it is clear that extensive research will be
required to produce new concepts, as will new approaches to reduce research
concepts to practice, if these industries are to maintain their historical growth rate
over the long term.
Continued research is needed to advance the fundamental understanding of
materials and phenomena in all areas. For example, despite the extensive technological application and impact of magnetic materials, and despite more than a century of research in magnetic materials and phenomena, we lack a first-principles understanding of magnetism. By comparison, the technology underlying optical communication is very young. The past few years have seen enormous
scientific and technological advances in optical structures, devices, and systems.
New concepts such as photonic lattices, which are expected to have significant
technological impact, are emerging. We have every reason to believe that this
field will continue to advance rapidly with commensurate impact on communica-
tions and computing.
As device and feature sizes continue to shrink in integrated circuits, scaling
will encounter fundamental physical limits. The feature sizes at which these
limits will be encountered and their implications are not understood. Extensive
research is needed to develop interconnect technologies that go beyond normal
metal and dielectrics in the relatively near term. Longer term, technologies are
needed to replace today’s silicon field-effect transistors. One approach that bears
investigation is quantum state switching and logic as devices and structures move
further into the quantum mechanical regime.
A major future direction is nanostructures and artificially structured materi-
als, which was a general theme in all three areas. In all cases, artificially struc-
tured materials with properties not available in nature revealed unexpected new
scientific phenomena and led to important technological applications. As sizes
continue to decrease, new synthesis and processing technologies will be required.
A particularly promising area is that of self-assembled materials. We need to
expand the research into self-assembled materials to address such questions as
how to control self-assembled materials to create the desired one-, two-, and
three-dimensional structures.
As our scientific understanding increases and synthesis and processing tech-
nologies of organic materials systems mature, these materials are expected to increase in importance for optoelectronic and, perhaps, electronic applications.
Many of the recent technological advances are the result of strong interdiscipli-
nary efforts as research results from complementary fields are harvested at the
interface between the fields. This is expected to be the case for organic materials;
increased interdisciplinary efforts—for example, between condensed-matter and
materials physics, chemistry, and biology—offer the promise of equally impres-
sive advances in biotechnology.
Priorities
• Develop advanced synthesis and processing techniques, including those
for nanostructures and self-assembled one-, two-, and three-dimensional struc-
tures.
• Pursue quantum state logic.
• Exploit physics and materials science for low-cost manufacturing.
• Pursue the physics and chemistry of organic and other complex materials
for optical, electrical, and magnetic applications.
• Develop techniques to magnetically detect individual electron and nuclear
spins with atomic-scale resolution.
• Increase partnerships and cross-education/communications between in-
dustry, university, and government laboratories.
Our ability to make new materials and structures—both in bulk and in re-
duced dimensions or length scales—is inextricably linked to the advancement of
our understanding of fundamental phenomena in condensed-matter and materials
physics. This chapter describes some of the past decade’s advances in inorganic
materials and structures. Some of the advances and promising new areas in
organic materials are discussed in Chapter 5. As described in Box 2.1, an aston-
ishing array of new materials with unexpected properties has come over the
horizon. Improvements in synthesis and processing have led to dramatic im-
provements in the properties of established materials and our ability to exploit
these properties. As a result, we can now fabricate new combinations of materi-
als, features of reduced dimensions, and other characteristics that differ in signifi-
cant ways from previous possibilities. Some of these developments have pro-
vided fertile ground for condensed-matter and materials physicists to explore
novel fundamental phenomena; others show promise for finding applications
quickly; some have the potential to change our lives.
New materials underlie the science and technology described throughout this
report. Beyond condensed-matter and materials physics, they enable both sci-
ence and future technologies. In some cases, entirely new and unexpected phe-
nomena appear in a class of new materials. Layered cuprate high-temperature
superconductors are a new class of materials that has kept experimentalists and
theorists alike searching to understand the physical basis of high-temperature
superconductivity. New materials sometimes allow entirely new device concepts
to be realized or lead to a dramatic change in their scale, such as single-molecule
wires made of carbon nanotubes; and new forms of already known materials can
exhibit strikingly different properties.
There have been far too many new developments in the past 15 years or so to
document them all in detail, but all these developments have been made possible
by advances in two intertwined areas: complexity and processing. Many of the
new materials and structures are dramatically more complex, compositionally or
structurally, than those studied previously. In general, this trend has required
advances in processing to allow control of the increased complexity. In other
cases, the final product may not be much more complex than other well-known
materials or structures, but the processing itself may need to be altered to achieve
more control over the growth process in order to obtain the new material.
Advances giving rise to new materials and structures fall into three categories.
Some involve the synthesis of an entirely new compound or material. The ad-
vance may have been revolutionary, meaning that the properties of the new mate-
rial (or in some cases its existence) could not have been predicted. In other cases,
advances in processing have allowed fabrication of new or modified materials or
structures whose properties were suspected before the material was actually
made. This may allow a well-known compound to be remade in a new form with
different properties. Third, well-known materials are sometimes found to exhibit
new (in some cases unexpected) properties that appear when the ability to pro-
cess them is improved. The new property may be found in a known material
simply by looking at it from a new perspective, informed by insight gained
from another materials system.
The materials advances listed in Table 2.1.1 were driven by different motiva-
tions. Many addressed a technological need, such as the need to transfer or store
information. Others were driven by scientific curiosity. Although the driver can be
clearly identified in each case, the two sets are not mutually exclusive. Many
discoveries that result from pure scientific curiosity ultimately find their way into
products. For example, low-temperature superconductors are now used in mag-
nets for magnetic resonance imaging. Other discoveries, though originally moti-
vated by a technological need, give rise to very beautiful and fundamental insights.
For example, the fractional quantum Hall effect was first observed in high-mobility
semiconductor structures now used in high-frequency applications.
TABLE 2.1.1 Some New Inorganic Materials of the Past Fifteen Years
Advance                                Driver       Nature of Advance
New compounds/materials
  High-temperature superconductors     Science      Revolutionary
  Organic superconductors              Science      Revolutionary
  Rare-earth optical amplifier         Technology   Evolutionary
  Intermetallic materials              Technology   Evolutionary
  High-field magnets                   Technology   Evolutionary
  Organic electronic materials         Technology   Evolutionary
  Magnetooptical recording materials   Technology   Evolutionary
  Bulk amorphous metals                Technology   Evolutionary
The remainder of this chapter examines a few of the past decade’s most im-
pressive advances in materials and structures. The selections emphasize a number
of themes that have emerged in materials research. Some of the discoveries have
been completely unexpected. Others were predicted, although the experimentalists
did not always know of these predictions when they did their work. Our thinking
about new materials has changed fundamentally; we now consider dramatically
more complex possibilities in our search for new materials than we did a decade
ago. In some classes of materials that have been studied for many decades, we have
achieved a much deeper understanding of physical and chemical mechanisms that
govern their properties. This understanding in turn has led to improvements in the
properties of the materials, either through elimination of problems inherent in
existing materials by improved processing or by the introduction of new materials.
Even in a material as thoroughly studied as carbon, a myriad of new forms has been
discovered, exhibiting a wide range of properties. Shrinking the dimensions of
well-known materials such as semiconductors has led to properties dramatically
different from those of the bulk. New concepts in thin-film growth have led to
improved film properties by changing the growth and processing windows. Fi-
nally, there has been a change in the attitude toward strain in heteroepitaxial sys-
tems that allows strain to be used to tailor the morphology as well as the electrical
properties of the layers. The culmination of this effort is the use of strain to
induce self-assembly of quantum dots.
COMPLEX OXIDES
Surely one of the most surprising developments since the publication of the
Brinkman report1 has been the discovery of high-temperature superconductivity
in complex oxide materials, beginning in 1986 with the observation by Bednorz
and Müller of superconductivity near 30 K in La2-xBaxCuO4. This discovery was
rewarded with the 1987 Nobel Prize in Physics (see Table O.1). The field ex-
ploded with the discovery of superconductivity at temperatures in excess of the
boiling point of liquid nitrogen (77 K). The family of known high-temperature
superconducting materials now numbers near 100, with the highest supercon-
ducting transition temperature (Tc) above 130 K. High-temperature superconduc-
tivity has significantly altered the direction of condensed-matter and materials
physics in several ways. The excitement generated by this totally unexpected
discovery attracted researchers from throughout the field of condensed-matter
and materials physics and beyond to the study of these fascinating materials.
More recently, the principles that have been successful in the study of these
materials have proven valuable in the study of other areas of condensed-matter
and materials physics, most notably other sorts of oxides.
1National Research Council [W.F. Brinkman, study chair], Physics Through the 1990s, National
Academy Press, Washington, D.C. (1986).
FIGURE 2.3.1 A suggested phase diagram of vortex matter in the magnetic field-
temperature plane. Several vortex liquid and solid phases are illustrated, includ-
ing a liquid of entangled vortex lines, a perfect hexagonal lattice, a polymer glass
of entangled lines, and solid phases disordered by point pinning defects (vortex
glass) or by line pinning defects (Bose glass). The melting transition is first-order
from a lattice and proposed to be second-order or continuous from a glass. A
critical point may occur on the melting line, where the first-order character disap-
pears. The normal and vortex liquid states are separated by a fluctuation-
dominated crossover rather than by a true phase transition. (Courtesy of Argonne
National Laboratory.)
Vortex matter has emerged as a vital field, with its own developing issues and
international community of researchers. It extends traditional studies of atomic
matter in several ways. For example, vortex density is linear in the applied mag-
netic field, so it can easily be changed by an order of magnitude with the twist of a
dial. Experimental access to such a large density range is unheard of in atomic
matter. The interactions among vortices are well-known Lorentz forces, which can
be treated analytically or in simulation with no uncontrolled approximations. Ad-
vanced materials development has produced clean crystals with few pinning de-
fects, revealing intrinsic thermodynamic behavior and its evolution under controlled
disorder induced by electron or heavy ion irradiation. Finally, vortices can be set in
motion by the Lorentz force from an externally applied transport current, enabling
studies of driven phases, steady-state motion, and the new area of dynamic phase
transitions. This remarkably rich microcosm of condensed-matter physics owes its
existence to two materials developments: the landmark discovery of high-temper-
ature superconductors, which introduced large thermal energies into the vortex
phase diagram, and dramatic improvements in materials perfection, which enabled
experimental studies of the delicate thermodynamics of collective vortex behavior.
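The dial-twist tunability mentioned above follows from flux quantization, a standard result rather than anything specific to this report: each vortex carries one superconducting flux quantum, so the areal vortex density is

```latex
n_v = \frac{B}{\Phi_0}, \qquad \Phi_0 = \frac{h}{2e} \approx 2.07 \times 10^{-15}\ \mathrm{T\,m^2},
```

so sweeping the field from 0.1 T to 1 T sweeps the density from roughly 5 x 10^13 to 5 x 10^14 vortices per square meter, the order-of-magnitude change noted in the text.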
The controlled introduction of defects of a particular type into the material by,
for example, ion irradiation or judicious atomic substitution allows the
properties to be adjusted.
The superconducting oxides with Tc above 77 K all contain at least four
elements, two of which are copper and oxygen. Oxygen moves readily in these
materials, during both sample preparation and subsequent processing. Changing
the oxygen content by just a few percent can determine whether a material is a
superconductor or an insulator. It can also govern the symmetry and crystal
structure of the material, resulting in phase transformations during specimen
preparation that, to date, have been unavoidable. Precise control of the stoichi-
ometry of the metal constituents is also required to optimize the superconducting
properties, although the consequences of deviations from ideal stoichiometry are
not nearly as critical for the metals as for oxygen.
Current interest in the high-temperature superconducting materials centers
on two general areas: superconducting electronics and the carrying of large
currents. The electronics applications can be further subdivided into logic and
high-frequency applications. Electronics applications require thin films, generally
in combination with films of other materials. The fabrication of reproducible
tunnel junctions with useful properties for logic applications has been very chal-
lenging because of the incompatibility of high-temperature superconducting mate-
rials with most nonoxide barrier materials and the extremely short coherence length
of the superconductor. Quite a few metallic oxides with compatible crystal struc-
tures have been identified and studied as a result of considerable research into
suitable barrier materials. A promising area of application is in components for
communications, particularly in the gigahertz frequency domain. The major issues
are the surface resistance of the material and electrical nonlinearities at high fre-
quencies. Though there has been considerable progress in improving surface resis-
tance in the past few years, detailed understanding of the relationships between this
and other relevant properties and the structure of the materials is still emerging.
Technological applications demand large-area films that can be deposited
fast enough to be economically viable. There has been dramatic progress, with
high-quality films of YBa2Cu3O7-x (see Figure 2.2) now available on substrates
several hundred square centimeters in area.
Current-carrying applications require bulk material or thick films. Grain
boundaries, especially those with significant misorientation between grains, are
extremely detrimental to high critical currents because of both the extreme anisot-
ropy of the materials properties and the properties of the grain boundaries them-
selves. The most successful approach for bulk materials with properties of poten-
tial technological interest has been the use of drawn, multifilament wires, especially
in the bismuth system. The drawing induces alignment of the grains in the fila-
ments and increases the critical-current density. More recently, biaxial orientation
has been achieved in thick YBa2Cu3O7-x films deposited on metal substrates, either
coated with an aligned buffer layer fabricated by ion beam-assisted deposition or
with strong crystallographic alignment induced in the substrate by rolling.
It has proven very fruitful to apply the principles discovered and techniques
developed for high-temperature superconductivity to other classes of complex
oxides. In some cases, this research has been driven by the need for materials
with specific electronic or magnetic properties that are chemically and structur-
ally compatible with high-temperature superconductors. These materials are
typically needed as buffer or barrier layers. Compatible materials with other
properties could be needed in the future if high-temperature superconducting
devices are to be successfully integrated with devices having other functionality,
such as memory and optical devices. Perhaps the most impressive demonstration
of the application of lessons from high-temperature superconductivity has been
the recent interest in colossal magnetoresistance in LaMnO3-derived materials
(see Box 2.4).
ELECTROCERAMICS
Electroceramic materials have been studied and used for many decades be-
cause of their interesting and in some cases novel properties, such as ferroelec-
tricity, piezoelectricity, pyroelectricity, and electro-optic activity. Current interest
in the ferroelectrics centers on their potential in nonvolatile memories and high
dielectric constant capacitors. Micromachines, such as accelerometers, displace-
ment transducers, actuators, and so on, require piezoelectric materials. Room-
temperature infrared detectors make use of pyroelectric properties. Electro-optic
properties enable color filter devices, displays, image storage systems, and the
optical switches required in integrated optical systems.
Electroceramics can serve as “smart” materials, functioning as both sensors
and actuators (see Box 2.5). All smart materials have at least two phase transi-
tions (e.g., crystallographic and electronic), and their synthesis and processing
must be carefully controlled to regulate their excursions through phase space.
The complexity of these phenomena and the materials that display them has made
this an exciting area. There has been dramatic progress in the control of electro-
ceramic materials properties, in understanding the relationships between proper-
ties of interest and the underlying microstructural mechanisms that control them,
and in integrating various materials to give improved properties or even new
behavior.
Progress has been especially impressive in the ferroelectric materials. Ex-
tensive research has focused on understanding the mechanisms responsible for
the degradation of ferroelectric and high-permittivity perovskite thin films with
time, temperature, and external field stress. The three most important degrada-
tion phenomena are ferroelectric fatigue, ferroelectric aging, and resistance
degradation.
Ferroelectric fatigue, the loss of switchable polarization with repeated polar-
ization reversals, is caused by pinning of domain walls, which inhibits switching
of the domains. Elimination of fatigue is critical for nonvolatile memory applica-
tions. Recent results have shown that charge trapping at internal domain bound-
aries is the primary fatigue mechanism. Fatigue also induces changes in the
oxidation states of isolated impurity point defects, which are much more stable
than optically generated ones in unfatigued samples.
Fatigue can be largely eliminated in some ferroelectric systems [e.g., lead zirconate titanate (PZT)-based perovskites] through the use of conducting-oxide-buffered electrodes (see Figure 2.3).
[Bar chart: figure of merit (10^-15 MKS units), ranging from 0 to 60,000, for PZT and for composites with 0-3, 1-3, 3-1, 3-3, and 3-2 connectivities.]
FIGURE 2.3 Developments in new electrode materials for ferroelectric capacitors have
reduced the fatigue in these devices by more than 6 orders of magnitude. The upper
image shows a “conventional” capacitor structure of the ferroelectric material
Pb((Nb,Zr)Ti)O3 (PNZT) with unbuffered platinum electrodes, along with the fatigue of
the remanent polarization. The polarization decays to half its initial value after 10^5
cycles. The lower image shows a capacitor structure with the platinum electrodes buff-
ered by La0.5Sr0.5CoO3. This capacitor shows no fatigue even after 10^12 cycles. (Cour-
tesy of the University of Maryland.)
FIGURE 2.4 Scanning-force microscopy of topographic (a) and piezoresponse (b-f) im-
ages of a PZT film grown on a LSCO/TiN/Si substrate. The central grain was switched
completely from the polarization direction down (dark) to up (white). The switch back of
the central grain into the polarization down direction starts mainly at the grain boundaries
with the surrounding grains. [MRS Bulletin 23, 39 (1998).]
C60 was first detected in laser-vaporization experiments designed to simulate the chemistry in a red giant carbon star.
Finally, in 1990, a method was developed to produce macroscopic quantities to
allow the intense investigation of this and related compounds that have been a
focus of research in the present decade. As suggested in Figure 2.5, C60 turns out
to be just one of a veritable menagerie of three-dimensional closed carbon mol-
ecules: spheres, tubes, particles, and combinations thereof, with one or multiple
layers. The discovery of fullerenes by Curl, Kroto, and Smalley was recognized
with the 1996 Nobel Prize in chemistry (see Table O.1).
The remarkable geometry of these molecules is enabled by slight deviations
from the hexagonal bonding configuration found in graphite, driven by the
energetic cost of dangling bonds at the edges of graphite sheets. The
addition of twelve pentagons to the hexagonal array transforms the open graphite
structure into any of the observed closed molecules that have only positive curva-
ture. Heptagonal rings give rise to a saddle-shaped surface when buried among
hexagons.
Carbon nanotubes, which were originally grown as a by-product in the
fullerene-generating chamber, are quasi-one-dimensional structures with a simple
and well-understood atomic structure. A chemist might think of a carbon nano-
tube as a monoelemental polymer. [Figure 2.5 shows several forms of carbon:
graphite, diamond, C60, C70, and a (10, 10) tube.] The nanotube is an ideal
model for quasi-one-dimensional structures because its known atomic structure
makes computer simu-
lations more reliable. Nanotubes can be as much as several microns long, and
tube diameters range from one to a few tens of nanometers. A metal serves as a
catalyst for nanotube formation, preventing the growing tubular structure from
wrapping around and closing into a smaller fullerene cage. Nanotube growth is
believed to take place at the open ends of the tubes. During growth, the open
tubule end required to fabricate long, single-wall nanotubes can be maintained by
a high electric field, by the entropy opposing orderly cap termination, or by the
presence of a metal catalyst. The tube ends tend to close quickly when the growth
conditions become inappropriate, for example, when the temperature drops or
when the carbon atom flux is too low. As long as the tube end is open, carbon
atoms can be deposited on the tube-end peripheries, and the tubes continue to grow. When
pentagons are formed for some reason, the tubes will be capped. If the axial
growth rate is dominant over the radial one, the tubule will become a single-shell
tube. A comparable growth rate in both the axial and radial directions will form
spheroidal particles.
Characterization of carbon nanotubes has been slow compared to fullerene
research activity, partly because of the inability to synthesize macroscopic quan-
tities of the tubules and to refine them. Many nanotubes are in the form of a
multiple-shell structure of nested cylindrical tubes separated by about 0.34 nm,
which is the same as the d0002 lattice spacing of graphite. Cylindrical crystals are
often seen in biological protein crystals but rarely in inorganic materials. Recent
measurements on single-wall carbon nanotubes have shown that they do indeed
act as genuine quantum wires, confirming theoretical predictions, as shown in
Figure 2.6.
Electronic and mechanical properties of nanotubes deviate from those of a
bulk graphite crystal. Depending on tubule diameter and helicity, both of which
affect the band gap, the behavior can range from metallic to semiconducting.
Because it makes for a more symmetrical structure, less helicity leads to better
conductivity. This leads to true molecules that are also true metals, something
chemistry has never had before. Because of the quasi-one-dimensionality of
these nanotubes, conduction is quantized.
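The diameter and helicity dependence described above can be made concrete with a short sketch. The rule that a tube is metallic when (n − m) is divisible by 3, and the gap estimate E_g ≈ 2γ0·a_CC/d for semiconducting tubes, are standard tight-binding results; the parameter values below are typical literature numbers used for illustration, not values taken from this report.

```python
import math

A_LATTICE = 0.246   # graphene lattice constant, nm (typical value, assumed)
A_CC = 0.142        # carbon-carbon bond length, nm (assumed)
GAMMA0 = 2.7        # tight-binding hopping energy, eV (assumed)

def nanotube_character(n: int, m: int):
    """Classify an (n, m) tube and estimate its band gap in eV."""
    diameter = A_LATTICE * math.sqrt(n * n + n * m + m * m) / math.pi  # nm
    if (n - m) % 3 == 0:
        return diameter, "metallic", 0.0
    return diameter, "semiconducting", 2 * GAMMA0 * A_CC / diameter

for n, m in [(10, 10), (10, 0), (12, 0)]:
    d, kind, gap = nanotube_character(n, m)
    print(f"({n},{m}): d = {d:.2f} nm, {kind}, gap ~ {gap:.2f} eV")
```

For example, the (10, 10) tube shown in Figure 2.5 comes out metallic, while a (10, 0) tube comes out semiconducting with a gap near 1 eV.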
One unexpected phenomenon in nanotubes is the ability to fill them with a
material. Nonhexagonal carbon rings in the hexagonal network are responsible
for tubule morphologies and presumably local strain. After deposition of a small
amount of lead on tubule surfaces and heating, some of the metal clusters move to
heptagon sites. Nanotubes can be opened by mild oxidation at the reactive site at
the closed end. On heating, some of the lead is transported into the central hollow
in the tubule. The intercalated material is crystalline and not pure lead but lead
carbonate or oxide. This finding suggests that the tubule tips react selectively in
air at elevated temperature, but the rest of the tubules do not react. Strain induced
by including pentagons in the tubule tips may be responsible for the selective
reaction. Carbon onions have also been stuffed with metals and metal carbides.
[Figure 2.6: (a) micrographs of a single-wall carbon nanotube device with gate (Vgate) and bias (Vbias) electrodes (scale bars 3 µm and 50 nm); (b) current versus bias voltage (mV) at three gate voltages (A, B, C), with current versus gate voltage (mV) shown in the insets.]
NANOCLUSTERS
Ever since the early 1980s, when scientists began discovering the various
potentially advantageous properties of ultrasmall grains of material—nano-
clusters—there has been tremendous activity as researchers strive to create and
control new types of particles. Much of the recent research has been directed at
finding ways to make small clusters of uniform size with common optical, electri-
cal, and mechanical properties. These efforts have already begun to have com-
mercial payoffs, as in the case of ceramics and chemical catalysts that have
increased efficiency because of their high surface-to-volume ratio.
In any material, substantial variation of fundamental electrical and optical
properties with decreasing size will occur when the electronic energy-level spac-
ing exceeds the thermal energy. The variation is especially pronounced for semiconductors.
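A rough particle-in-a-box estimate, with illustrative numbers not taken from this report, shows why this crossover sets in at nanometer sizes:

```latex
\delta E \sim \frac{\hbar^2 \pi^2}{2 m^* L^2} \approx 0.15\ \mathrm{eV}
\qquad (m^* = 0.1\,m_e,\ L = 5\ \mathrm{nm}),
```

already several times the room-temperature thermal energy of about 0.026 eV.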
Because fullerenes act as electron acceptors, they can form various salts.
Those of the alkali and alkaline-earth metals have particularly interesting electronic
properties. Photoemission studies probing the occupied electronic states, as well
as inverse photoemission measurements probing the unoccupied electronic states,
have allowed direct monitoring of the nature of electron doping into the C60 levels.
The valence band of solid C60 is derived from a fivefold degenerate hu orbital. The
stability of this orbital makes it difficult to remove electrons from C60. On the other
hand, C60 is a good acceptor because of the threefold degenerate t1u and t1g
levels. Exposing C60 to alkali vapor results in electron filling of the t1u level. Be-
cause the Fermi energy is pinned to the top of the filled level, with increased filling,
the spectral manifold is shifted to lower energy. The threefold degeneracy of the
level means that half-filling corresponds to three electrons. K3C60 is a metal that
becomes superconducting at temperatures lower than 19 K. Further filling leads to
the compound A6C60 (where A is an alkali element). Because this material has a
fully filled t1u lowest unoccupied molecular orbital, it is insulating. The structures of
C60, K3C60, and Cs6C60 are shown on the right in Figure 2.6.1. The structures are
cubic but are represented in tetragonal form. The alkali metal atoms sit in the
tetrahedral and octahedral voids of the fcc-C60 structure. With alkaline-earth met-
als, the t1g orbital derived band is also partially filled. Thus Ca5C60, Sr6C60, and
Ba6C60 are also superconducting.
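The rigid-band counting in this box reduces to a tiny rule, sketched below for illustration; the 19 K figure and the band picture come from the text, while the function itself is, of course, a caricature.

```python
def a_x_c60_character(x: int) -> str:
    """Rigid-band caricature of alkali-doped fullerides A_xC60.

    The threefold-degenerate t1u band holds up to six electrons; a
    partially filled band is metallic, an empty or full one insulating.
    """
    if not 0 <= x <= 6:
        raise ValueError("the t1u band holds at most six electrons")
    if x in (0, 6):
        return "insulating (t1u band empty or full)"
    note = "; superconducts below ~19 K for K3C60" if x == 3 else ""
    return "metallic (partially filled t1u band)" + note

for x in (0, 3, 6):
    print(f"x = {x}: {a_x_c60_character(x)}")
```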
[Structure labels, right panel: x = 6, bcc; x = 3, fcc; x = 0, fcc.]
FIGURE 2.6.1 Normal (PES) and inverse (IPES) photoemission density of states
of C60 as a function of exposure to potassium vapor. The gradual filling of the
C60 t1u lowest unoccupied molecular orbital is clearly seen. The spectral mani-
fold shifts to lower energy with increasing exposure because of Fermi level pin-
ning. On the right, body-centered tetragonal representations show the structures
of Cs6C60 (x = 6), K3C60 (x = 3), and C60 (x = 0). [Left: Journal of the Physics and
Chemistry of Solids 53, 1433 (1992); Right: MRS Bulletin 19, 28 (November
1994).]
[Plot: optical density versus energy (1.8 to 3.6 eV) for CdSe nanocrystals with diameters of 21, 23, 27, 30, and 40 Å.]
FIGURE 2.7 Quantum confinement causes the optical spectra of CdSe nanocrystals to
sharpen and move to higher energy as the size of the particle shrinks. [MRS Bulletin 20,
23 (1995).]
[Figure 2.8, panels (a)-(e); scale bars 0.5 nm, 5 nm, 50 nm, 500 nm, 50 nm, and 100 nm.]
FIGURE 2.8 Gallery of quantum dot structures: (a) Positions of cadmium and sulfur
atoms in the molecular cluster Cd32S55, as determined by single-crystal x-ray diffraction.
This cluster is a small fragment of the bulk CdS zincblende lattice. The organic ligands
on the surface are omitted for clarity. (b1) and (b2) Transmission electron micrographs of
CdSe nanocrystals with hexagonal structure, as viewed down different crystallographic
axes. These nanocrystals were prepared colloidally and exhibit well-defined facets. The
surfaces are passivated with organic surfactants. (b3) and (b4) Transmission electron
micrographs of CdS/HgS/CdS quantum dot quantum wells. The faceted shapes show that
epitaxial growth for passivation is possible in colloidally grown nanocrystals. (c) Trans-
mission electron micrograph of a CdSe quantum dot superlattice. (d1) Scanning electron
micrograph of two coupled GaAs quantum dots about 500 nm in diameter. The strength
of the coupling can be adjusted by adjusting the gate voltage. (d2) Transmission electron
micrograph of coupled CdSe nanocrystal quantum dots 4 nm in diameter. These crystal-
lites are joined by an organic molecule. The coupling can be tuned by changing the linker
length. (e) Transmission electron micrograph of InAs quantum dots in a GaAs matrix,
prepared by molecular beam epitaxy. [Reprinted with permission from A.P. Alivisatos,
“Semiconductor clusters, nanocrystals, and quantum dots,” Science 271, 934 (1996).
Copyright © 1996 American Association for the Advancement of Science.]
The high level of interest in semiconductors of reduced dimensionality results
mainly from their large quantum-size effects. The band gap in
cadmium selenide can be tuned between 4.5 and 2.5 electron volts (eV) as the
size is varied from the molecular regime to the macroscopic crystal, and the
radiative lifetime for the lowest allowed optical excitation ranges from tens of
picoseconds to several nanoseconds. The energy above the band gap required to
add an excess charge decreases by 0.5 eV. The melting temperature increases
from 400 to 1600°C, and the pressure required to induce transformation from a
four-coordinate to a six-coordinate crystal structure also changes markedly with size.
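The size dependence sketched in this paragraph can be estimated with the standard effective-mass (Brus-type) model; every parameter below is a typical literature value used for illustration, not a number quoted in this report.

```python
import math

HBAR = 1.055e-34   # J s
E_CHG = 1.602e-19  # C
M_E = 9.109e-31    # kg
EPS0 = 8.854e-12   # F/m

# Typical CdSe parameters (assumed, illustrative)
EG_BULK = 1.74                 # bulk band gap, eV
ME_EFF, MH_EFF = 0.13, 0.45    # effective masses, in units of m_e
EPS_R = 10.0                   # dielectric constant

def cdse_gap(radius_nm: float) -> float:
    """Brus-type estimate of the nanocrystal gap (eV) versus radius."""
    r = radius_nm * 1e-9
    confinement = (HBAR**2 * math.pi**2 / (2 * r**2)) \
        * (1 / (ME_EFF * M_E) + 1 / (MH_EFF * M_E)) / E_CHG
    coulomb = 1.786 * E_CHG / (4 * math.pi * EPS0 * EPS_R * r)
    return EG_BULK + confinement - coulomb

for r in (1.0, 1.5, 2.0, 4.0):
    print(f"R = {r} nm: gap ~ {cdse_gap(r):.2f} eV")
```

The crude model reproduces the trend of the quoted 2.5 to 4.5 eV tuning range, opening the gap from the bulk value as the radius shrinks, though it overshoots for the very smallest clusters, where the effective-mass picture breaks down.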
FIGURE 2.9 Solid lines show optical absorption (ABS) and photoluminescence (PL)
spectra at 10 K for close-packed solids of CdSe quantum dots 3.85 nm (curve a) and 6.2
nm (curve b) in diameter. Dotted lines are photoluminescence of the same dots but in
dilute form dispersed in a frozen solution. [Reprinted with permission from C.B. Murray,
C.R. Kagan, and M.G. Bawendi, “Self-organization of CdSe nanocrystallites into three-
dimensional quantum-dot superlattices,” Science 270, 1336 (1995). Copyright © 1995
American Association for the Advancement of Science.]
In a quantum dot laser, confinement raises the energy at which an electron and a hole combine, resulting in a shorter wavelength. The smaller the dot, the greater
the frequency shift. Making a true quantum dot laser has proven difficult. It is
not straightforward to make the dots the same size, and the result has been that the
devices emit a range of light frequencies. Very recent work in this area has
yielded dots of more uniform size, with characteristics more indicative of true
laser activity.
The study of film growth has been increasingly characterized by the application
of surface science methods to understanding growth at the atomic level. Both
technology and the desire for fundamental knowledge at the atomic level are driv-
ing the search for atomic-level control of the fabrication processes for novel mate-
rials and new devices.
Growth of thin films from atoms deposited from the gas phase is intrinsically a
nonequilibrium phenomenon, governed by a competition between kinetics and
thermodynamics. Precise control of the growth and thus of the properties of thin
films becomes possible only through an understanding of this competition. Exper-
iment and theory have both made impressive strides in exploring the kinetic mech-
anisms of film growth, including adatom diffusion on terraces, along steps, and
around island corners; nucleation and dynamics of the stable nucleus; atom at-
tachment to and detachment from terraces and islands; and interlayer mass trans-
port. The synergism between experiment and theory has tremendously improved
our understanding of the kinetic aspects of growth.
The diffusion of an adatom on a flat surface or terrace is by far the most impor-
tant kinetic process in film growth. Despite the vital importance of surface diffu-
sion, accurate determination of the surface diffusion coefficient in a broad range of
environments has been a major challenge. Scanning-tunneling microscopy (STM)
has improved the situation considerably. STM can image a vastly broader range
of surfaces than can field ion microscopy, which has traditionally been used for
such studies. Atom-tracking STM has been especially valuable because it allows
an atom or cluster to be followed as it migrates. Information from such experi-
ments can then be fed into theories to provide deeper understanding of the mech-
anisms at play in adatom diffusion.
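Because terrace diffusion is thermally activated, its rate varies enormously over accessible temperatures. Here is a minimal sketch of the standard Arrhenius estimate; the 0.4 eV barrier, 0.3 nm hop length, and 10^13 Hz attempt frequency are typical orders of magnitude assumed for illustration, not values from this report.

```python
import math

KB_EV = 8.617e-5   # Boltzmann constant, eV/K

def diffusion_coefficient(ea_ev, t_kelvin, a_nm=0.3, nu0_hz=1e13):
    """Arrhenius estimate D = (a^2 * nu0 / 4) * exp(-Ea / kB T) on a surface."""
    hop_rate = nu0_hz * math.exp(-ea_ev / (KB_EV * t_kelvin))
    return (a_nm * 1e-9)**2 * hop_rate / 4   # m^2/s

for t in (150, 300, 600):
    print(f"T = {t} K: D ~ {diffusion_coefficient(0.4, t):.2e} m^2/s")
```

Doubling the temperature changes the estimated diffusion coefficient by many orders of magnitude, which is why atom-tracking measurements over a broad range of environments are so demanding.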
The availability of new probes of the initial stages of nucleation and growth has
meant that even well-studied systems have continued to yield new insights. Much
recent attention has focused on the possible pathway for nucleation of a silicon ad-
dimer, the stable nucleus for a wide range of growth conditions for homoepitaxy on
Si(100) (see Figure 2.7.1). A silicon adatom may have multiple diffusion pathways
on the surface before finding a partner, as all calculations have suggested. Recent
experiments have focused on determining the relative stability of different dimer
orientations and have been able to distinguish slight differences. Studies have
also focused on the preferred locations of dimers, where there are still significant
differences between experiment and theory. Experiments have revealed some
surprisingly large anisotropies in larger islands as they grow.
As islands grow, specific island shapes or morphologies develop. One class is
compact, whereas another is fractal-like, with rough island edges or highly aniso-
tropic shapes. Recent studies of two-dimensional island formation in metal-on-
metal epitaxy have identified several aspects of atom diffusion along island edges
that are important in controlling the formation of fractal islands. Fractal island
growth is very dependent on bonding geometry, having been reported only on
face-centered cubic (111) or hexagonal close-packed (0001) substrates, both of
which have approximately triangular lattice geometry. Growth on face-centered
cubic (100) surfaces with square lattice geometry has so far resulted only in com-
pact islands. This observation has required modification of the classic diffusion-
limited aggregation model.
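For reference, the classic diffusion-limited aggregation model mentioned above can be sketched in a few dozen lines. This is a minimal Witten-Sander version, written for clarity rather than speed, and it omits the edge-diffusion processes whose importance the text describes.

```python
import math
import random

# Minimal on-lattice diffusion-limited aggregation (DLA): random walkers
# stick irreversibly on first contact with the cluster (no edge diffusion).

random.seed(0)
occupied = {(0, 0)}          # seed particle at the origin
r_max = 1                    # current cluster radius

def launch(r):
    """Start a walker on a circle of radius r around the seed."""
    theta = random.uniform(0, 2 * math.pi)
    return (int(r * math.cos(theta)), int(r * math.sin(theta)))

def neighbors(site):
    x, y = site
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

for _ in range(400):                       # number of particles to aggregate
    walker = launch(r_max + 5)
    while True:
        if max(abs(walker[0]), abs(walker[1])) > r_max + 20:
            walker = launch(r_max + 5)     # wandered too far; relaunch
        elif any(n in occupied for n in neighbors(walker)):
            occupied.add(walker)           # stick on first contact
            r_max = max(r_max, abs(walker[0]), abs(walker[1]))
            break
        else:
            walker = random.choice(neighbors(walker))  # one diffusion step

print(len(occupied), "particles; cluster radius", r_max)
```

Because sticking is irreversible and growth is fed from outside, screening produces the ramified, fractal morphology; allowing rapid diffusion along island edges compacts the islands, which is the modification the metal-on-metal experiments motivated.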
FIGURE 2.10 Films deposited by glancing angle deposition (GLAD): (a) oblique evap-
orated flux at 85 degrees from the substrate normal produces a slanted, porous micro-
structure; (b) periodically alternating the oblique flux from angles of 85 degrees to –85
degrees produces a porous film composed of isolated “zigzags”; (c) rotating the substrate
about an axis normal to the wafer center while maintaining obliquely incident (85 degree)
flux produces isolated helical structures on the substrate. [Reprinted with permission
from K. Robbie and M.J. Brett, “Sculptured thin films and glancing angle deposition:
Growth mechanics and applications,” Journal of Vacuum Science and Technology A 15,
1460 (1997). Copyright © 1997 American Vacuum Society.]
FIGURE 2.11 The use of a surfactant dramatically alters the morphology of a growing
film. In this figure are medium energy ion scattering (MEIS) spectra for germanium films
on Si(111) at 470 °C. Both random (solid line) and channeling (dotted line) data are
shown. (a) Ten monolayers of germanium deposited with no gallium. Note the island
morphology. (b) Twenty monolayers of germanium deposited with one-third of a mono-
layer of gallium as a surfactant. Note the columnar morphology. (c) Twenty-eight mono-
layers of germanium with one monolayer of gallium surfactant. Note the smooth mor-
phology. [Reprinted with permission from J. Falta, M. Copel, F.K. LeGoues, and R.M.
Tromp, Applied Physics Letters 62, 2962 (1993). Copyright © 1997 American Institute of
Physics.]
Multilayers are artificially structured materials that are periodic in one dimen-
sion in composition or both composition and structure. These layered materials
are, if perfect, equivalent to single crystals in one dimension. Thus the multilayer
acts as a superlattice, diffracting longer-wavelength radiation in a manner directly
analogous to the diffraction of x-rays by crystals. This application of multilayer
structures as dispersion elements for soft x-rays and extreme ultraviolet radiation
was the impetus for the first attempts to synthesize multilayer materials. Many
factors determine the character of the multilayer response to an incident spectrum.
The important parameters are the substrate quality (roughness and figure), the
uniformity and thicknesses of the component layers, the x-ray optical constants of
the component elements, the number of layers in the structure, the interfacial width
between layers (i.e., interfacial abruptness in atomic position and composition),
and roughness at layer interfaces. Many of these factors depend in turn on the
synthesis process and the materials. Therefore, understanding of multilayer per-
formance depends on a knowledge of the relationships among synthesis process,
resultant microstructure, and properties for these engineered microstructure
materials.
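The dispersion follows the familiar Bragg condition, with the multilayer period d in place of the atomic lattice spacing; the numbers below are hypothetical, and refraction corrections, which matter at these wavelengths, are omitted:

```latex
m\lambda = 2d\sin\theta, \qquad
\lambda_{m=1} = 2\,(10\ \mathrm{nm})\,\sin 45^\circ \approx 14\ \mathrm{nm},
```

a soft-x-ray wavelength far beyond the reach of natural crystals, whose lattice spacings are an order of magnitude smaller.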
The individual layers of the optics have a specific set of properties related to
bulk forms of the materials. Primary issues include the compositions and struc-
tures of the layers, the x-ray optical properties of the layers, and uniformity of the
areal density of atoms in the layers (see Figure 2.8.1). Specific synthesis ques-
tions relate to the film nucleation and growth behavior because deposition of ma-
terial A onto a substrate or layer B may differ substantially from deposition of ma-
terial B onto a substrate or layer A. Interfaces within the multilayer must also be
controlled to an excruciating degree. They must be compositionally abrupt,
smooth, clean, and flat.
Recent work has shown that precise control of sputtering parameters during
multilayer deposition allows control of individual layer thicknesses to an accuracy
of better than ~0.01 nm, which greatly enhances reflectivity for both nickel-carbon
and tungsten-carbon multilayers. Sputter deposition of multilayers typically pro-
duces higher quality structures than thermal source techniques. This has been
attributed to ion bombardment by the sputter plasma resulting in smoother inter-
faces and higher reflectivities. Results of ion beam-assisted deposition support
this proposal. Thermal-evaporation-source synthesized rhodium-carbon multilay-
ers with and without argon ion bombardment (300 eV) at an incidence angle of 10
degrees show the effect. A gain of more than a factor of two in reflectivity was
found for the samples “polished” by the incident ion beam. This increased reflec-
tivity is attributed to a roughly 30 percent reduction in the roughness of the
carbon-rhodium interfaces by the ion bombardment. Combining these two improve-
ments in control is likely to facilitate fabrication of higher quality multilayer struc-
tures, particularly of smaller periods.
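The connection between interface smoothing and reflectivity can be estimated with the standard Debye-Waller-type attenuation factor; this is a textbook approximation, not an analysis taken from this report:

```latex
R \approx R_0 \exp\!\left[ -\left( \frac{4\pi\sigma\sin\theta}{\lambda} \right)^{2} \right],
```

where σ is the interface roughness. Reducing σ by 30 percent roughly halves the exponent, which is enough to approximately double R when 4πσ sinθ/λ is of order unity, consistent with the factor-of-two gain reported above.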
Multilayer structures may be optimized and engineered for specific spectral
ranges by analyzing candidate materials on the basis of their x-ray optical
constants and assessing their suitability for multilayer microstructure synthesis. As
an example, there are difficult spectral regions in which the lowest absorption
materials useful as spacer layers are either toxic, such as beryllium, or unstable,
such as lithium. Candidate materials such as magnesium are difficult to deposit as
[Plot: reflectivity (0.0 to 0.5) versus wavelength (170 to 220 Å); curves show NIST (Tario) and AMP-LLNL (Montcalm) measurements and a calculation.]
FIGURE 2.8.1 Transmission electron micrograph of a 6.9 nm period Mo2C/Si
multilayer x-ray mirror (top) and the experimental and calculated reflectivity as a
function of x-ray wavelength (bottom). The experimental reflectivity is 93.5 per-
cent of the calculated values. [Reprinted with permission from T.W. Barbee, Jr.,
and M.A. Wall, “Interface reaction characterization and interfacial effects in multi-
layers,” Proceedings of the SPIE 3113-20, 204 (1997). Copyright © 1997 SPIE.]
FIGURE 2.13 Atomic force microscope image of an initially planar 2-nm thick Si0.5Ge0.5
alloy layer on Si(001) after annealing to produce hut-shaped islands caused by strain in
the layer. [MRS Bulletin 21, 31 (1996).]
Many of the past decade's most important materials developments have been
complete surprises; this will almost certainly be true for the foreseeable
future as well.
Physics: Understanding
The materials and structures on the horizon offer rich possibilities for
condensed-matter and materials physicists. More perfect materials will enable us
to move toward developing a full understanding of the relationship between the
detailed structure of a material and its properties. The ability to control defects
will enable them to be studied themselves—how they interact with the material
they inhabit and even how judiciously assembled collections of them interact
with one another and with different defects. Advances of the past decade in
probing surfaces and interfaces on the atomic scale offer the possibility that a full
understanding of the initial stages of growth in systems more complex than
silicon may one day truly be possible. Control of the structure of materials on
various length scales simultaneously offers the opportunity to look for effects that
result from the interplay of structure on these different length scales.
Technology: Relevance
Advances in new materials and structures have dramatically improved our
lives in the past, and there is every reason to expect that new advances will have
comparably great impact in the years to come. For this to happen, sustained
research will be needed over many years. This research will need to have a
balance between fundamental investigations into the physical mechanisms at
play and research and engineering aimed at investigating the numerous questions
that must be answered before a material can enter the technological mainstream:
What can the material be used for? Is there a potential market of sufficient size to
pay for the needed research and development? Is the advance so revolutionary,
with improvements in customer capability so great, that it can found a new
industry? If the improvement is in an area already occupied by an existing
technology with significant infrastructure, can the material be integrated with the
existing technology? And if so, is the improvement worth the development cost?
Just as revolutionary advances in new materials and processes enabled the
transistor, the optical fiber, the solid-state laser, and many other technologies that
have improved our lives and strengthened the economy, new developments in
materials and structures hold out the promise of revolutionary breakthroughs in
the twenty-first century.
Research Priorities
• Tailor materials at the molecular level.
• Use more complex combinations of materials: polymers, organic mol-
ecules, biological molecules, etc.
• Develop new tools to synthesize, visualize, characterize, and manipulate
new materials and structures.
• Make increasingly complex materials and combinations with as much
control as is currently possible in the making of semiconductors.
• Increase our understanding of and the ability to use self-assembly and
biomimetic techniques to produce and process materials.
• Merge molecular chemistry and condensed-matter and materials physics
to understand and control fabrication and processing on multiple length-
scales.
• Integrate processing of new materials and structures with existing tech-
nologies.
When many particles are brought together, collective behavior can emerge with
completely unexpected properties. Living matter and life itself are perhaps the
most spectacular examples of emergent phenomena; no matter how much we learn
about individual atoms, life cannot be understood or explained in this purely
reductionist manner. One of the biggest surprises of the last decade was high-
temperature superconductivity. It is hard to imagine a less likely candidate for a
superconductor than an insulating ceramic compound with properties similar to
those of a china coffee cup. Yet when chemically doped to introduce charge carriers,
such compounds not only superconduct, they do so at record high temperatures.
The characteristic energy scale for individual atoms is 1 to 10 electron volts
(eV). However, as we look on larger length scales at collections of atoms,
characteristic energies become smaller and smaller, and excitations become more
and more collective. At low energies, the effective elementary degrees of free-
dom may be collective objects very different from individual electrons and at-
oms, and their effective interactions may be very different from the original
“bare” Coulomb interactions. These collective effects are the source of the
surprises that emerge.
It is instructive to compare this situation with that in high-energy elementary
particle physics. There we know the effective degrees of freedom and their
interactions at low energies—it is the world of atoms around us. The intellectual
challenge is to understand degrees of freedom at shorter and shorter length scales
and higher and higher energy scales. This is done by constructing high-energy
particle accelerators to act as microscopes with ever greater magnification, or by
studying extreme conditions in astrophysical systems and the early universe.
This approach is just the reverse of what is done in condensed-matter physics,
where we strive to understand collective effects at longer and longer length
scales. The analog of the particle accelerator is the refrigerator, which lowers
thermal energy scales and increases the distance particles travel between in-
elastic collisions. The analog of an extreme astrophysical system is a sample in
a dilution refrigerator. The intellectual challenge is the same in the two fields: to
find correct descriptions of the physics that work over a wide range of scales.
Fifty years ago understanding a novel quantum object known as a “hole” (see
Box 3.1) led to the invention of the transistor. In the past decade there has been
tremendous progress in the discovery and study of a variety of novel quantum
phenomena. This chapter presents brief descriptions of a few examples drawn
from superfluidity, superconductivity, Bose-Einstein condensation, quantum
magnetism, and the quantum Hall effect. It cannot cover many other fascinating
areas of development in the last decade, including significant advances in our
understanding of quantum critical phenomena, non-Fermi liquids, metal-insulator
and superconductor-insulator transitions in two dimensions, quantum chaos and
the role of interactions, coherence, and disorder in mesoscopic systems.
There has been particularly significant progress in this last area, both tech-
nologically and theoretically. For example, electron “wave guides” have been
constructed, and the quantization of their conductance in units of e2/h has been observed.
Fifty years ago at the time of the invention of the transistor, the hot topic in
condensed-matter physics was an exotic quantum object, the “hole,” whose pre-
dicted existence was one of the great early triumphs of quantum mechanics. The
ability to create and manipulate these “holes” is crucial to the operation of diodes,
transistors, photocells, light-emitting diodes, solid-state lasers, and computer
chips.
To understand the concept of the hole, consider the fact (illustrated schemati-
cally in Figure 3.1.1) that when atoms are assembled into a solid, the discrete
quantum energy levels of the individual atoms smear out into bands of quantum
levels. The Pauli exclusion principle tells us that each band state can hold no more
than one electron. In a semiconductor, the highest occupied band (the valence or
“bonding” band) is separated by a small but crucially important energy gap from
the lowest unoccupied band (the conduction or “anti-bonding” band).
The Pauli exclusion principle has the important consequence that a filled band
is inert. It is impossible to excite the system by moving an electron to a new state
within a band that is already entirely filled up, so the lowest-energy excitation
that can be made in a semiconductor is to lift an electron from the valence band
across the gap to the conduction band. It is easy to visualize the
electron in the conduction band as a particle that can move around, carry current
and so forth. Paradoxically, quantum mechanics also teaches us that the absence
of an electron in the otherwise-filled valence band should be viewed as a hole that
behaves like a kind of anti-particle (much like the positron, which is the anti-particle
of the electron in high-energy physics). Without the hole, the valence band is inert
and carries no charge or current. The electron that was removed had negative
charge and carried some particular current. Hence, we must assign the hole a
positive charge and the opposite current. Without quantum mechanics guarantee-
ing that a filled band is inert, this assignment would not be meaningful.
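The bookkeeping in the last paragraph can be compressed into one line. A filled band carries zero total current; removing the electron in state k0, which has charge −e and group velocity v_k0, therefore leaves

```latex
\mathbf{J} = \underbrace{\sum_{k\ \mathrm{filled}} (-e)\,\mathbf{v}_k}_{=\,0}
\;-\; (-e)\,\mathbf{v}_{k_0} \;=\; +e\,\mathbf{v}_{k_0},
```

precisely the current of a positive charge moving with velocity v_k0: the hole.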
Introduction of chemical dopants into semiconductors can produce an excess
of electrons (n-type material) or an excess of holes (p-type material). Remarkable
materials physics advances in purification and doping control of silicon now allow
routine inexpensive construction of the special types of junctions between p- and
n-type material that play such a crucial role in today’s solid-state technology. So
next time you turn on your computer, remember quantum mechanics is at work!
[Schematic labels: discrete atomic levels broadening into a valence band (containing a hole) and a conduction band.]
FIGURE 3.1.1 Energy bands in solids. [Reprinted with permission from S.M.
Girvin, “Exotic quantum order in low-dimensional systems,” Solid State Commu-
nications 107, 623 (1998). Copyright © 1998 Elsevier Science.]
FIGURE 3.1 (Left ) The long straight lines mark the boundaries of three substrate stron-
tium titanate films with different crystallographic orientations. The four circles are rings
of the high-temperature superconductor yttrium barium copper oxide, grown epitaxially
on the substrate. The substrate orients the dx2–y2 Cooper pair wave function as indicated
by the four-leaf clovers. At the point where the ring crosses from one substrate to the
next it has a grain-boundary Josephson junction. Cooper pairs moving from the lower
section to the upper left suffer an orientation change of less than 45 degrees and hence
have a positive overlap with their new state. The same is true for tunneling from the
upper left to the upper right. However, the orientation change from the upper right to the
lower section exceeds 45 degrees and hence gives a negative overlap. A Cooper pair
traveling around the central ring thus picks up a net minus sign (a phase shift of π), which
results in destructive interference that raises the energy. The other three rings remain
unfrustrated. The frustration of the central ring can be alleviated if a spontaneous current
begins to circulate, which produces a half quantum of magnetic flux, because the Aha-
ronov-Bohm effect would then introduce an additional ±π phase shift. This saves more
than enough energy to pay for the cost of producing the current. Thus, if the supercon-
ductor is d-wave, the system is unstable to producing half a flux quantum trapped in the
central ring. The bright spot in the central ring is an image based on a scanning-probe
measurement, using superconducting quantum interference devices, of the local magnetic
field. [Reprinted with permission from Barbara Levi, Physics Today 49, 19-22 (1996).
Copyright © 1996 American Institute of Physics.] (Right) The same data are shown but
in a three-dimensional representation. The total integrated flux in the central ring is very
close to half a flux quantum, providing proof that the system is frustrated and is almost
certainly d-wave. [Reprinted with permission from Nature 373, cover page (January 9,
1995). Copyright © 1995 Nature.]
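The caption's argument can be restated as a one-line fluxoid-quantization condition: single-valuedness of the pair wave function requires the total phase accumulated around the ring to be a multiple of 2π, so with the built-in π shift of the frustrated ring,

```latex
2\pi \frac{\Phi}{\Phi_0} + \pi = 2\pi n
\quad\Longrightarrow\quad
\Phi = \left( n - \tfrac{1}{2} \right) \Phi_0,
```

whose lowest-energy solutions are Φ = ±Φ0/2, the trapped half flux quantum seen in the scanning-probe image.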
FIGURE 3.2 One can interpret classical superconductor vortex lines as world lines of
quantum bosons moving in two space dimensions and one time dimension. In the Feyn-
man-path-integral formulation of quantum statistical mechanics, the trajectories of the
bosons must be followed over a time interval h/kBT. Hence the thickness of the supercon-
ducting sample determines the inverse “temperature” of these fake quantum bosons. No-
tice that the columnar defects constitute a potential for the bosons that is random in space
but constant in time. Thus we have the quantum mechanics of bosons in two dimensions
in the presence of a static random potential. There has been a great deal of interest in the
last decade in this “dirty boson” problem as a model for helium in porous media (or
adsorbed on surfaces) and for modeling the superconductor-insulator transition in metal-
lic films (viewing the Cooper pairs as bosons). There are two different phases at zero
temperature: at low densities or strong disorder, the bosons are localized in an insulating
“Bose glass” phase; at high densities or weaker disorder, the bosons are condensed into a
superfluid state. We can now map these pictures back onto the vortex system: The
“insulating Bose glass” phase is superconducting because the vortices are localized. The
“superfluid” state of the bosons means that the vortices are freely moving (even though
they are highly entangled). Thus the “vortex liquid” has dissipation and represents the
nonsuperconducting state. This “Bose glass” generalization of the vortex glass idea can
be pursued further to include analogs of the Mott-Hubbard Bose insulator, Mott variable
range hopping, and boson tunneling between localized states. Furthermore, our knowl-
edge of the dynamical structure factor for quantum bosons makes predictions for the
static structure factor for the vortex fluctuations that have been confirmed by small-angle
neutron scattering experiments.
pinning. The vortex glass phase does not exist in two dimensions, and it exists in
three dimensions only in the absence of magnetic screening (so that collective
effects are sufficiently strong).
The pinning efficiency for extended columnar defects is much better. These
can be constructed using the linear damage tracks produced by fast-moving heavy
ions from an accelerator. The statistical mechanics of fluctuating vortices in this
situation has an elegant interpretation by means of an analogy with the quantum
mechanics of a Bose liquid (see Figure 3.2).
This quantum boson analogy clearly demonstrates the existence of a phase
transition in which the vortices can become localized by columnar pins leading to
a state with truly zero resistivity in linear response.
The relatively small size of the Cooper pairs in high-temperature supercon-
ductors puts the superconducting transition in a new regime, closer to the Bose-
Einstein condensation limit.
The most extreme regime of Bose-Einstein condensation has recently been
achieved with the creation of condensates in gases of alkali metal atoms held in
atom traps and cooled to nanokelvin temperatures. These are analogous to
helium-4 in the sense that the particles are bosonic, but in this case the gas is
dilute—the spacing between particles is much larger than the scattering length—
and, hence, represents the nearly ideal case of pure Bose-Einstein condensation.
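"Dilute" has a quantitative meaning here: the gas parameter na^3, built from the particle density n and the s-wave scattering length a, is small. With numbers typical of the alkali-atom experiments (illustrative, not taken from this report),

```latex
n a^3 \approx \left( 10^{20}\ \mathrm{m^{-3}} \right) \left( 5\ \mathrm{nm} \right)^3 \approx 10^{-5} \ll 1,
```

whereas in liquid helium-4 the analogous parameter is of order unity.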
FIGURE 3.3 A laser produces coherent light, which we can think of as Bose condensate
of photons. Understanding what this means can help us understand Bose-Einstein con-
densates (BECs) and atom lasers. Laser light wave oscillations are very similar to the
waves produced by an ordinary radio transmitter, as shown in a. Because nothing fixes
the phase of an oscillator, the phase undergoes a slow random walk, but the amplitude
fluctuates very little. The random walk of the phase introduces a finite correlation time
and hence a small but finite spectral bandwidth. A thermal source of the same bandwidth
could be obtained by passing black-body radiation through a narrow filter. The resulting
wave would look something like that shown in b. The large amplitude fluctuations are a
result of interference among the different Fourier components, each of which has a ran-
dom amplitude and phase. If the intensity and bandwidths are the same, the autocorrela-
tion functions will be very similar. To distinguish coherent light from incoherent, we
therefore have to look at fluctuations. For a perfectly coherent state there are no fluctua-
tions: 〈ψ†ψ†ψψ〉 – 〈ψ†ψ〉² = 0. The quantum mechanical interpretation of this is that the
photons are Poisson distributed independently of each other, so that there is no bunching.
For a thermal source, the fields are gaussian random variables and Wick’s theorem tells
us that 〈ψ†ψ†ψψ〉 = 2!〈ψ†ψ〉², so that there are very large intensity fluctuations. The
quantum interpretation is that there is excess “noise” or “bunching” in the photon distri-
bution. In the atom trap, the potential energy of the short-range interacting bosons is
given by the probability of finding two bosons at the same place at the same time,
〈V〉 ∝ 〈ψ†ψ†ψψ〉. Experiments comparing the normal and condensed states at the same
density have shown that, just as expected, 〈V〉normal/〈V〉condensed = 2!. The atomic vapor is
unstable to decay into bound atom pairs, which fall (literally) out of the trap. Because
a third body is needed to carry off the binding energy, the decay rate obeys
〈ψ†ψ†ψ†ψψψ〉normal/〈ψ†ψ†ψ†ψψψ〉condensed = 3!. Remarkably, this third-order coherence
effect has also been observed experimentally. Thus an atom trap emitting a coherent
beam of alkali atoms (a “boser”) has all the coherence properties of a laser.
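The 2! and 3! factors quoted in the caption follow from Wick's theorem for gaussian fields and are easy to check numerically in the classical-field analogy. A minimal sketch, assuming unit-variance complex gaussian amplitudes for the thermal field:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10**6

# Thermal (chaotic) field: complex gaussian amplitude.
psi_th = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)
i_th = np.abs(psi_th)**2

# Coherent field: fixed amplitude, random phase.
psi_coh = np.exp(1j * rng.uniform(0, 2 * np.pi, size=n))
i_coh = np.abs(psi_coh)**2

def g(intensity, order):
    """Normalized equal-time intensity correlation <I^k> / <I>^k."""
    return np.mean(intensity**order) / np.mean(intensity)**order

print(g(i_th, 2), g(i_th, 3))    # thermal: about 2 and 6 (2! and 3!)
print(g(i_coh, 2), g(i_coh, 3))  # coherent: 1 and 1 (no bunching)
```

The thermal field gives normalized second- and third-order coherences of about 2 and 6, the coherent field 1 and 1, mirroring the ratios measured for the normal and condensed atomic clouds.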
Measurements have probed, for example, the potential energy stored in the
condensate and the momentum distribution function. In addition, it is possible
to suddenly remove a barrier between two
condensates and directly see the quantum interference fringes that result from
their overlap. In the future it will likely be possible to extend these results
to multicomponent condensates and study the separate response of each
component.
Atomic condensates have played a truly useful role in promoting cross-
disciplinary communication and productive interactions among the condensed-
matter, atomic-physics, and quantum-optics communities. The condensed-mat-
ter theory community is supplying expertise in two areas: many-body calculation
techniques and experience with the study of collective effects. It turns out that in
this low-density regime, straightforward and standard mean-field theory calcula-
tion methods appear to be quite accurate for low temperatures, so there appear to
be few theoretical challenges in this regard (except for questions of metastability
for systems with negative scattering lengths that have not yet been fully settled).
On the other hand, there remain quite a few challenges in understanding collec-
tive effects. These include the mechanism for damping of collective modes at
finite temperatures, two-fluid hydrodynamics, effects of the spin degrees of
freedom, the details of the different time regimes in the dynamics of condensate
formation, and how the systems carry angular momentum via vortices and multi-
pole shape distortions. There are connections with models of nuclei here.
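The workhorse behind these mean-field statements is the Gross-Pitaevskii equation. A minimal sketch of how its ground state is commonly obtained, written in harmonic-trap units with an illustrative coupling constant g (both choices are assumptions, not values from any experiment discussed here):

```python
import numpy as np

# Ground state of the 1D Gross-Pitaevskii equation in a harmonic trap,
# found by imaginary-time split-step evolution (hbar = m = omega = 1).
N = 512
x = np.linspace(-10, 10, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
V = 0.5 * x**2                        # trap potential
g, dt = 50.0, 1e-3                    # illustrative interaction strength

psi = np.exp(-x**2).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

for _ in range(20_000):
    psi *= np.exp(-0.5 * dt * (V + g * np.abs(psi)**2))   # half potential step
    psi = np.fft.ifft(np.exp(-0.5 * dt * k**2) * np.fft.fft(psi))  # kinetic step
    psi *= np.exp(-0.5 * dt * (V + g * np.abs(psi)**2))   # half potential step
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)  # imaginary time shrinks the norm

width = np.sqrt(np.sum(x**2 * np.abs(psi)**2) * dx)
print("condensate width (trap units):", width)
```

Repulsive interactions (g > 0) broaden the cloud well beyond the bare oscillator width, which is the Thomas-Fermi regime relevant to the trapped-atom experiments.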
Fermi systems are also of considerable interest. Because Pauli exclusion
limits the phase space available for scattering, the two-body collision rate drops
rapidly as the temperature is lowered, and it is difficult to cool and equilibrate a
Fermi system in isolation. (In a degenerate Fermi system the Fermi energy is
much larger than the temperature. Evaporation carries away highly energetic
particles, but this mostly results in lowering the Fermi energy, rather than cooling
the system.) However, sympathetic cooling in Bose-Fermi mixtures is possible
because the Bose-Fermi collision rate is not limited by the Pauli principle. The
fermions cool by losing energy to the bosons, which are in turn cooled by the
usual evaporative means. This opens up new possibilities similar to those studied
in superfluid 3He-4He mixtures but now in a very different regime.
A profound physical problem that the atomic BECs seem well suited to
address is that of the dynamics of a macroscopic quantum system approaching an
equilibrium state with long-range phase coherence. This physics is important
both in connection with the dynamics of the cooling process and for the develop-
ment of atom lasers (“bosers,” see Figure 3.3). Similar questions have been
addressed in the condensed-matter literature—for example, in connection with
the development of nematic order in liquid crystals quenched from high tempera-
tures. The theory of such ordering kinetics is reasonably well developed. How-
ever, the experimental systems studied so far are all well described by a classical,
overdamped dynamics; that is, their time evolution consists merely of a frictional
descent into the nearest local energy minimum. The theoretical analyses have
also all been for purely relaxational models. The dynamics of the atomic BEC
systems is clearly not in this regime, as the unitary time evolution of Schrödinger’s
equation is surely important in the development of macroscopic phase coherence.
Experimental and theoretical studies examining these fundamental issues are
beginning, and their rapid advancement offers exciting prospects for the future.
FIGURE 3.4 Spin fractionalization process in which a ∆S = 1 spin flip in a spin chain
breaks up into two domain walls each carrying spin-1/2. (a) Perfectly ordered antiferro-
magnetic configuration for a spin-1/2 chain. (b) The circled spin has been flipped.
(c) Adjacent pairs of spins mutually flip producing a pair of domain walls (indicated by
the dashed lines) separating perfectly ordered antiferromagnetic configurations. These
domain walls (“spinons”) are unconfined and free to move independently.
FIGURE 3.5 Spinon production observed by spin-flip inelastic neutron scattering. There
is much more phase space available for spinons than for ordinary spin waves. The curve
with period π in the upper panel is the single spinon dispersion curve. The curve with
larger amplitude and period 2π is the upper bound on the range of allowed energies for a
pair of spinons of total momentum Q. The parabolic curve in the upper panel is the
kinematically allowed momentum and energy transfer to the scattered neutrons. The
lower panel shows large peaks in the cross section at energies correctly predicted by the
picture in which a single flipped spin decays into a pair of independent spinon excitations.
Ordinary spin wave theory would predict zero intensity in the first and third peaks. [Phys-
ical Review Letters 70, 4003 (1993).]
FIGURE 3.6 Typical spin configuration in the ground state of the Affleck-Kennedy-
Lieb-Tasaki (AKLT) model. The symbols refer to the z component of the spin at each
site. Notice that, if the zeros are ignored, the state has perfect antiferromagnetic order.
Integer-spin chains, by contrast, have an apparently featureless “spin liquid” ground state with an excitation gap and spin
correlations that decay exponentially with distance even at zero temperature. The
spin degrees of freedom seem to have disappeared because of some “confine-
ment” mechanism.
The origin of the excitation gap is a novel “hidden” topological order not
visible in the ordinary spin-spin correlation function. This order is best under-
stood by studying the Affleck-Kennedy-Lieb-Tasaki (AKLT) model, which is
closely related to the Heisenberg model but whose ground state is exactly
soluble. Figure 3.6 shows a typical configuration (in the Sz basis) of the spins in
the ground state of the AKLT model. We see that an up spin can be followed by
an arbitrary number of zeros (“sideways spins”) but then must be followed by a
down spin. That is, if we removed all the zeros, there would be perfect antiferro-
magnetic order. This novel order is completely invisible to the ordinary, experi-
mentally measured, spin correlation function and can only be detected theoreti-
cally using a nonlocal “string order” correlation function that includes a factor of
–1 for each of the nonzero spins within the string of spins connecting two sites.
The most obvious experimental manifestation of this hidden topological order is
that it costs a finite amount of energy to break it; hence, the system has a spin
excitation gap.
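The string order can be made concrete with a toy ensemble modeled on Figure 3.6: configurations in which the nonzero spins alternate perfectly but are randomly diluted by zeros. This caricature is not the true AKLT ground state, but it shows why the ordinary correlator decays exponentially while the string correlator does not:

```python
import numpy as np
rng = np.random.default_rng(0)

def diluted_afm(n, p_zero=1/3):
    """Alternating +1/-1 spins, randomly diluted by zeros (cf. Figure 3.6)."""
    s, sign = [], 1
    for _ in range(n):
        if rng.random() < p_zero:
            s.append(0)
        else:
            s.append(sign)
            sign = -sign
    return np.array(s)

chains = [diluted_afm(120) for _ in range(20_000)]
i = 40                                    # reference site, away from the ends
for r in (2, 4, 8, 16):
    spin, string = [], []
    for s in chains:
        a, b = s[i], s[i + r]
        spin.append(a * b)                # ordinary <Sz_i Sz_j>
        k = np.count_nonzero(s[i + 1:i + r])
        string.append(-a * (-1)**k * b)   # a factor of -1 per nonzero spin
    print(f"r={r:2d}  spin-spin={np.mean(spin):+.3f}  string={np.mean(string):+.3f}")
# the spin-spin correlator decays exponentially with r, while the string
# correlator saturates near (2/3)**2 = 4/9, the value known for the AKLT state
```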
The origin of the excitation gap can also be understood by examining a
different graphic of the AKLT ground state shown in Figure 3.7a. Each S = 1 spin
on a site is visualized as being made up of a pair of spin-1/2 particles. These
spins are formed into singlet bonds with their neighbors to create a “valence bond
solid” as shown. Enforcing the rule that the state be symmetric under interchange
of the pair of spins on a site guarantees that they will form a triplet and correctly
represent the S = 1 on that site (see Figure 3.8).
Just as in the case of the spin-1/2 chain, it is possible to split a single spin flip
with ∆S = 1 into a pair of spinon excitations each carrying S = 1/2. This is
illustrated in Figure 3.7b. Notice now, however, that in order to avoid generating
more unpaired spins, the two sites containing the unpaired S = 1/2 spins are
connected by a “string” of alternating double bonds and missing bonds. There is
a finite “string tension,” meaning there is a finite energy cost per unit length to
produce this string. Thus, in contrast to the S = 1/2 chains, the spinons are
confined (much as quarks are) and the resulting excitation (“meson”) has a finite
minimum energy cost. It can be shown that this excitation breaks the topological
order discussed above. The situation for larger integer spins is similar, but the
FIGURE 3.7 (a) Valence bond solid picture of the same Affleck-Kennedy-Lieb-Tasaki
(AKLT) model ground state shown in Figure 3.6. The dots represent a site that contains a
single spin S = 1. This is viewed as being made up of a pair of spin-1/2 particles, each of
which forms a singlet bond (solid line) with one of its neighbors. (b) An excited state of
the AKLT model. Because sites are viewed as containing two spin-1/2 particles, there
must be either two bonds or one bond and an unpaired spin on each site. Thus there are
two spin-1/2 particles liberated here. The string of alternating double and zero bonds
costs a finite energy per unit length. This “string tension” confines the spin-1/2 particles
together much as quarks are confined.
gap must become exponentially small in order to match onto the gapless classical
limit at S = ∞.
A great deal of theoretical progress has been made in understanding the role
of disorder in one-dimensional quantum spin chains for both S = 1/2 and S = 1.
The S = 1/2 system develops into a random singlet phase in which there are strong
singlet bonds over short distances and arbitrarily weak bonds over arbitrarily long
distances. The S = 1 chain is quite different because it is initially gapped and,
therefore, stable against weak disorder. For moderately strong disorder, the gap
is destroyed, but the topological order remains in a “Griffiths” phase. Thus,
paradoxically, the spins that disappeared at low energies in the clean system can
be made to reappear by the addition of strong nonmagnetic impurity disorder.
This is illustrated in Figure 3.9, which shows a segment of an S = 1 chain cut off
from the rest of the chain by a nonmagnetic impurity at each end. The disruption
of the valence bond solid ground state liberates a nearly free spin-1/2 at each
end of the segment. This is manifested experimentally in the magnetic suscepti-
bility, which becomes algebraic rather than exponential at low temperatures.
There are many open questions still to be addressed. In two and three
dimensions there is a rich variety of highly frustrated lattices, such as the kagome
lattice, for which we still lack a complete understanding of the low-energy physics.
Debate continues as to whether high-temperature superconductivity occurs be-
cause of, or in spite of, antiferromagnetism in the insulating parent compounds.
Mixtures of itinerant electrons and local moments occur in heavy fermion, Kondo
lattice, and disordered systems near the metal-insulator transition. These con-
tinue to be dauntingly complex theoretical and experimental challenges. In addi-
tion, the entirely new classes of oxide systems now being synthesized will pro-
duce fascinating new realizations of ladders, chains, and planes of spins, which
will doubtless raise new theoretical and experimental challenges.
[Figure 3.8 graphic: a spin-1/2 chain with alternating bonds J1 and J2, and a plot of the
gap ∆ versus J2/J1, with the dimerization gap and the Haldane gap limits marked.]
FIGURE 3.8 There is a connection between the gap seen in S = 1 (and other integer)
chains and the gap seen in dimerized systems. (Upper) Consider the S = 1/2 chain with
alternating weak and strong bonds. In the limit J2 = 0, the exact ground state is a “valence
bond solid” (VBS) of singlets on the J1 bonds. For J2 nonzero, the bonds begin to
fluctuate, but the VBS state still captures the essential physics, and the gap survives
throughout the region J2 ≠ J1. (Lower) The gap persists even as the strength of the weak
bond passes through zero and changes sign to become ferromagnetic. As these bonds
become infinitely ferromagnetic, the associated pairs of spins become locked into triplet
states, and one recovers the S = 1 spin chain description. Because the gap never closes
during this process, the system does not undergo any phase transition as the dimerization
is varied adiabatically. It follows that the dimer system has the same type of topological
order, measured by the “string” correlation function, as the S = 1 system and that the
Haldane gap in an S = 1 system is a special limit of the dimerization gap in an S = 1/2
system. [Reprinted with permission from S.M. Girvin, “Exotic quantum order in low-
dimensional systems,” Solid State Communications 107, 623 (1998). Copyright © 1998
Elsevier Science.]
The integer quantum Hall effect (IQHE) owes its origin to an excitation gap
associated with the discrete kinetic energy levels (Landau levels) in a magnetic
field. The fractional quantum Hall effect (FQHE) has its origins in very different
physics of strong Coulomb correlations that produce a Mott-insulator-like ex-
citation gap. In some ways, however, this gap is more like that in a superconduc-
tor because it is not tied to a periodic lattice potential. This permits uniform
charge flow of the incompressible electron liquid and, hence, a quantized Hall
conductivity.
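The quantization itself involves only fundamental constants, σxy = νe²/h; a one-line check of the scales (a sketch, not tied to any particular sample):

```python
# Quantized Hall conductivities and resistances, sigma_xy = nu * e**2 / h.
e, h = 1.602177e-19, 6.626070e-34            # SI values
for nu in (1, 2, 1/3):
    print(f"nu = {nu}: sigma_xy = {nu * e**2 / h:.4e} S, "
          f"R_xy = {h / (nu * e**2):.1f} ohm")
# nu = 1 reproduces the von Klitzing resistance, about 25812.8 ohm
```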
The microscopic correlations leading to the excitation gap are captured in a
revolutionary wave function, developed by R.B. Laughlin, that describes an in-
compressible quantum liquid. The charged quasiparticle excitations in this sys-
tem are “anyons” carrying fractional statistics, intermediate between bosons and
fermions, and fractional charge. This sharp fractional charge, which despite its
bizarre nature has always been on solid theoretical ground, has recently been
observed directly in two different ways. The first is an equilibrium thermody-
namic measurement using an ultrasensitive electrometer built from quantum dots
(see Figure 3.10). The second is a dynamical measurement using exquisitely
sensitive detection of the shot noise for quasiparticles tunneling across a quantum
Hall device.
Quantum mechanics allows for the possibility of fractional average charge in
both a trivial way and a highly nontrivial way. As an example of the former,
consider a system of three protons, arranged in an equilateral triangle, and one
electron tunneling among their 1S atomic bound states. The electronic ground
state is a symmetric linear superposition of the quantum amplitudes for being in
each of the three different 1S orbitals. In this trivial case, the mean electron
number for a given orbital is 1/3. This is a result of statistical fluctuations,
however, because a measurement will yield electron number 0 two-thirds of the
time and electron number 1 one-third of the time. These fluctuations occur on a
very slow timescale and are associated with the fact that the electronic spectrum
consists of three very nearly degenerate states corresponding to the different
orthogonal combinations of the three atomic orbitals.
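In counting language, the trivial case gives a fractional mean but large, slow fluctuations; a two-line sketch:

```python
# Electron number n in one orbital of the three-proton example:
# n = 1 with probability 1/3 and n = 0 with probability 2/3.
p = 1/3
mean = p               # <n> = 1/3: a fractional *average*
var = p * (1 - p)      # <n**2> - <n>**2 = 2/9: the charge fluctuates strongly
print(f"<n> = {mean:.3f}, var(n) = {var:.3f}")
```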
The ν = 1/3 quantum Hall effect has charge-1/3 quasiparticles, but it is
profoundly different from the trivial scenario just described. An electron added
to a ν = 1/3 system breaks up into three charge-1/3 quasiparticles. If the locations
of the quasiparticles are pinned by, say, an impurity potential, the excitation gap
still remains robust and the resulting ground state is nondegenerate. This means
that a quasiparticle is not a place (like the proton above) where an extra electron
spends one-third of its time. The lack of degeneracy implies that the location of
the quasiparticle completely specifies the state of the system; that is, it implies
that these are fundamental elementary particles with charge 1/3. Because there is
a finite gap, this charge is a sharp quantum observable that does not fluctuate (for
frequencies below the gap scale).
To understand this better, imagine that you are a citizen of Flatland, living in a purely two-dimensional world.
FIGURE 3.10 (Left) Variation of the conductance for tunneling through a quantum dot
as a function of bias voltage on the dot. The oscillations indicate the discrete charging of
the dot under quasi-thermodynamic equilibrium conditions. The period of the oscillations
for the ν = 1/3 quantum Hall plateau is three times smaller than for the ν = 1 plateau,
indicating that the quasiparticle charge is one-third that of an electron. (Right) Absolute
shot noise for currents tunneling through a ν = 1/3 Hall fluid. The deviation from linear-
ity at low currents is the crossover from shot noise to thermal Nyquist noise. Both the
intensity of the noise and the location of the crossover from shot noise to Nyquist noise
are in quantitative agreement with the quasiparticle charge being e* = e/3. The dashed
line shows the predicted shot noise if the current were carried by objects with charge e
instead of e/3. [Right: Reprinted with permission from R. De-Picciotto, M. Reznikov, M.
Heiblum, V. Umansky, G. Bunin, and D. Mahalu, “Direct observation of a fractional
charge,” Nature 389, 163 (1997). Copyright © 1997 Nature.]
Composite Particles
It is a peculiarity of two dimensions that the ν = 1/3 vacuum represented by
the Laughlin wave function can be viewed in more than one way in terms of
composite particles. One way is to make a singular gauge transformation that
attaches three quanta of magnetic flux to each electron. This induces an
Aharonov-Bohm phase of 3π when two particles are interchanged. The physics
will therefore remain invariant if we change the particle statistics from fermion to
boson to cancel this phase change. At ν = 1/3 there are three flux quanta from the
externally applied uniform magnetic field for each particle. Thus, if we make a
mean-field approximation in which the flux quanta attached to the particles are
smeared out into a uniform field, they will precisely cancel the external field,
leaving a theory of composite bosons in zero (mean) magnetic field, as illustrated
schematically in Figure 3.11. The condensate wave function of these bosons
defines a hidden off-diagonal long-range order not visible in the ordinary correla-
FIGURE 3.11 Illustration of the condensation of composite bosons in the ν = 1/3 frac-
tional quantum Hall effect. The three flux quanta attached to the electrons convert them
into bosons moving in zero average field. Vortices in the condensate are the Laughlin
quasiparticles carrying charge ±1/3.
tion functions of the original electron variables. (There are deep analogies here
with the hidden string order in quantum spin chains discussed previously.)
The natural excitations in a bosonic condensate are (Goldstone mode)
phonons and vortices. The analogs here (“magnetophonons”) have recently been
observed directly in Raman scattering. Vortices in two dimensions normally cost
a logarithmically divergent amount of energy and are confined in neutral pairs at
low temperatures. In the FQHE, however, something peculiar happens. Because
each composite boson carries three flux quanta, binding one-third of a charge to
the vortex is equivalent to binding one quantum of flux to the vortex. Just as in a
type-II superconductor, this quantized flux screens out the currents at large dis-
tances and removes the divergence in the energy. Thus in this picture, the
Laughlin quasiparticles are topological defects, and the same mechanism that
gives them fractional charge also deconfines them. By analogy with type-II
superconductors, the magnetophonon acquires a mass gap, which was predicted
theoretically and has been observed experimentally.
An alternative picture of the ν = 1/3 vacuum can be developed by attaching
two rather than three flux quanta to each particle. These composite objects
remain fermions and see a mean magnetic field of one flux quantum per particle,
as illustrated schematically in Figure 3.12. Thus the ν = 1/3 FQHE is mapped
onto the IQHE at νeff = 1. The Laughlin quasiparticles become additional com-
posite fermions added to the next Landau level. In this formulation the off-
diagonal long-range order remains hidden, but there are two significant advan-
tages. First, accurate variational wave functions for various hierarchical quantum
Hall states at different rational filling fractions can be written down explicitly and
studied numerically with relative ease. Second, the special case of ν = 1/2 is
naturally described as composite fermions in zero mean magnetic field. The
characteristic Fermi surface wave vector 2kF of these composite fermions has
been observed in surface acoustic wave attenuation experiments. If the mean
field picture is taken literally, then moving slightly away from ν = 1/2 puts
the composite fermions in a weak magnetic field that should cause the quasipar-
ticles to follow curved trajectories. Remarkably, this too has been observed
experimentally.
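The mean-field bookkeeping in this picture is ordinary flux counting. A sketch with an illustrative electron density (the density is an assumption; the logic is the standard composite-fermion construction):

```python
# Composite fermions: attaching two flux quanta per electron gives an
# effective field B_eff = B - 2*n*phi0, with phi0 = h/e the flux quantum.
h, e = 6.626070e-34, 1.602177e-19
phi0 = h / e
n = 1.0e15                                 # electrons per m^2 (illustrative)
for nu in (1/3, 2/5, 1/2):
    B = n * phi0 / nu                      # external field at filling nu
    B_eff = B - 2 * n * phi0
    tag = "zero mean field" if B_eff == 0 else f"nu_eff = {n * phi0 / B_eff:.0f}"
    print(f"nu = {nu:.3f}: B = {B:5.2f} T, B_eff = {B_eff:5.2f} T  ({tag})")
# nu = 1/3 maps to nu_eff = 1, nu = 2/5 to nu_eff = 2, and nu = 1/2 to
# composite fermions in zero mean field, as described in the text
```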
It should be emphasized that experiments to date have all dealt with the
kinematics of the composite fermions and the associated length scales, not their
dynamics and the associated frequency scales. Hence, there is no unambiguous
evidence for long-lived Fermi liquid-like quasiparticles above a sharply defined
Fermi surface, as opposed to well-defined length scales at the Fermi surface.
There is great theoretical interest currently in trying to understand the nature of
fluctuations around the mean-field solution and their effect on the composite
fermions. Considerable progress has been made, but many questions still remain
to be definitively settled.
FIGURE 3.12 Illustration of the formation of composite fermions in the ν = 1/3 frac-
tional quantum Hall effect. The two flux quanta attached to the electrons turn them into
fermions moving in an average field corresponding to the ν = 1 integer quantum Hall
effect. A Laughlin quasiparticle is represented by a composite fermion in the next
Landau level.
Edge States
At low energies, the bulk of an FQHE system appears as a featureless vacuum
with an excitation gap; however, very unusual gapless modes exist at the edges.
These are shape distortions that preserve the area of the incompressible fluid. In
a certain sense, the quantized-edge density fluctuations can be viewed as a gas of
Laughlin quasiparticles liberated from the bulk gap.
Because these objects carry fractional charge and statistics in the bulk, they
do not form an ordinary Fermi liquid at the edge. Instead, they constitute a nearly
ideal realization of a chiral Luttinger liquid. The edge modes are chiral because
they propagate in only a single direction, controlled by the direction of E × B drift
in the edge-confinement potential. The density of states for tunneling an ordinary
electron into a Luttinger liquid vanishes with a power-law singularity at low
energies because of an orthogonality catastrophe that results from the fact that the
electron must break up into fractionally charged quasiparticles. Recent progress in sample fabrication, notably cleaved-edge overgrowth, has made it possible to observe this power-law suppression directly (see Figure 3.13).
[Figure 3.13 graphic: cleaved-edge overgrowth tunneling geometry, in which an n+ electrode is separated from the quantum-well two-dimensional electron gas (2DEG) by a thin barrier grown on the cleaved edge.]
FIGURE 3.14 Skyrmion spin texture in a quantum Hall ferromagnet. Note that the spins
are all up at infinity but down at the origin. At intermediate distances they have a vortex-
like configuration. Because of the quantized Hall conductivity, these objects carry quan-
tized charge. [Reprinted with permission from S.M. Girvin, “Exotic quantum order in
low-dimensional systems,” Solid State Communications 107, 623 (1998). Copyright ©
1998 Elsevier Science.]
[Figure 3.15 graphic: a GaAlAs/GaAs multiple-quantum-well structure (200-nm barriers, 20-nm wells) and a log-log plot of heat capacity (10⁻⁹ J/K) versus temperature (K) at tilt angle θ = 0°, showing a T⁻² tail and, in an inset, a sharp feature between 0.03 and 0.05 K.]
FIGURE 3.15 Even in a sample with many quantum wells, there are not enough nuclei
in the 20-nm-thick wells to do an ordinary nuclear magnetic resonance (NMR) experi-
ment. The technique of optically pumped NMR therefore had to be developed [Barrett et
al., Physical Review Letters 74, 5112 (1995); Tycko et al., Science 268, 1460 (1995)].
(Upper) Circularly polarized light above the gallium arsenide (GaAs) band gap but below
the band gap of gallium aluminum arsenide (GaAlAs) is absorbed only in the quantum
wells. The angular momentum of the photons is transferred to the orbital motion of
excited electrons and then, via the spin-orbit interaction, to the electron spins. Finally, the
hyperfine interaction transfers the polarization to the nuclei in the quantum wells. This
polarization enhancement allows the NMR signal to reach detectable levels for samples
with as few as 10 wells. NMR experiments clearly demonstrate the existence of skyrmi-
ons—collective excitations with charge ±e and large spin S. These objects enhance the
nuclear relaxation rate, bringing T1 down from many hours to about 20 seconds. (Lower)
With the nuclei in thermal equilibrium, the specific heat is enhanced by many orders of
magnitude, rising from picojoules to microjoules per kelvin. [Physical Review Letters 76,
4584 (1996) and 79, 1718 (1997).]
Although the quantum Hall effect is extremely well understood at this point,
there remain some important mysteries yet to be resolved. In the IQHE, the main
question continues to be the nature of the delocalization transition in which the
Hall conductivity jumps from one quantized value to the next. The general
scenario by which this happens is well understood; it is almost certain that there
exists a critical point near the center of each Landau level at which the localiza-
tion length diverges, and numerical estimates of the critical exponent agree well
with experiment—in selected samples, at least. It is not understood at present,
however, why deviations from the expected scaling behavior are so commonly
observed (although the answer probably has to do with macroscopic inhomogene-
ities). One problem only just beginning to be addressed is the possible relevance
of Coulomb interactions to the transition. In addition, despite valiant efforts,
there does not yet exist a simple quantum field theory for this transition from
which we can analytically compute the critical exponent. Finally, there remains
an interesting set of puzzles about what happens at weak magnetic fields as
Landau level mixing becomes strong and direct transitions apparently occur from
quantum Hall effect states to insulating states.
In general, the ordering that produces the hierarchy of fractionally quantized
states is very well understood. The most interesting remaining problem is to
understand the physics of the ν = 1/2 state, which in the composite fermion
picture is a Fermi liquid-like state with zero mean magnetic field. The nature and
effect of fluctuations around the mean field still need to be better understood.
The theory of quantum Hall edge states has successfully made detailed pre-
dictions of the observed temperature and voltage dependence of the tunneling
current-voltage characteristics of the ν = 1/3 fractional plateau edge. One unex-
pected experimental discovery, however, has been that the edge-tunneling den-
sity of states has a power-law form over a wide range of magnetic fields, not just
on the plateaus. Furthermore, as shown in Figure 3.13, the exponent of the power
law varies continuously and linearly with magnetic field and seems quite insensi-
tive to whether the bulk of the sample is in a quantized Hall state or not. Current
theory predicts that the exponent should be quantized, just as the Hall conductiv-
ity is, and should vary discontinuously with magnetic field.
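For the Laughlin fractions the quantized prediction takes a simple form. Assuming the standard chiral-Luttinger-liquid result that the tunneling density of states at a ν = 1/m edge varies as E^(m−1), a sketch of the predicted exponents:

```python
# Predicted zero-temperature tunneling I-V for electrons entering a
# nu = 1/m Laughlin edge: I ~ V**m (ohmic only at nu = 1).
for m in (1, 3, 5):
    print(f"nu = 1/{m}: I ~ V^{m}, dI/dV ~ V^{m - 1}")
```

The experimental surprise described above is that the measured exponent interpolates smoothly between such quantized values as the magnetic field is varied.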
Another significant question involves the sharp peak in the specific heat
shown in Figure 3.15. The large linear region in the plot is explained quantita-
tively by the Schottky anomaly of the nuclei in the quantum wells. The extra
peak is known to involve the additional nuclei in the barriers between the quan-
tum wells, but the mechanism that gives rise to a sharp feature is not understood
at present. It may be related in some way to the freezing of the skyrmions.
SUMMARY
The committee has detailed here just a small sampling of the wide variety of
novel quantum effects that have been explored successfully in the past decade.
Some of the major accomplishments and remaining opportunities are listed below:
• New tools and paradigms for studying the interplay between interactions
and disorder in quantum systems would shed light on phenomena like the re-
cently discovered metal-insulator transition in two-dimensional electron gases.
• Carbon nanotubes are likely to present a great opportunity for study of
novel electronic properties.
• Many of the remarkable quantum effects discovered in the last decade
have been observable only at relatively low temperatures. Can quantum energy
scales be boosted so that, for example, room-temperature mesoscopic and single-
electron devices can be constructed?
Nonequilibrium Physics
Nonequilibrium physics, with roots in metallurgy and fluid dynamics, emerged as a recognized physics specialty about a generation ago. In recent years, the same nonequilibrium con-
cepts being tested in the design of alloys are also being applied to galaxy forma-
tion in the cosmos, climatic changes on the Earth, and the growth of forms in
biological systems.
The Brinkman report¹ was remarkably prescient in its discussion of nonequi-
librium physics. Its authors recognized the growing importance of topics such as
pattern formation, chaotic behavior, turbulence, and fractal geometries. Under-
standably, they missed today’s emerging interests in friction, fracture, and granu-
lar materials and the speculations that some of these spatially extended chaotic
phenomena might be exhibiting previously unanticipated collective behaviors.
They also could not have predicted the invention of the scanning-probe micro-
scopes and optical tweezers that are only just now beginning to open the world of
biological phenomena to first-principles physical investigations.
In the last decade or so—the period since the Brinkman report—important
progress has been made in many of these areas. We now understand in much
more systematic ways how complex patterns emerge from simple ingredients in
hydrodynamic, metallurgical, and chemical systems. Notable progress has been
made in sorting out the mechanisms that control pattern formation, for example,
in convecting liquid crystals, on the surfaces of vibrating fluids, in chemical
reaction-diffusion systems, and in some biological phenomena such as cellular
aggregation and membrane morphology. New understanding of spiral waves in
active media has found application in the analysis of cardiac arrhythmia. We are
beginning to understand how complex systems—for example, those in which
fluid flow and chemical reactions are occurring simultaneously—sometimes be-
come intrinsically self-organized, sometimes exhibit large critical fluctuations,
sometimes become chaotic, and sometimes do all three of those things at the
same time.
Nonequilibrium physics has grown into a major enterprise, one that cannot
be described fully in this report. The committee has therefore selected a small
set of topics to illustrate the themes and issues it wishes to emphasize. The
first of these topics is pattern formation and turbulence in fluid flow. The next
two are in the areas of processing and performance of structural materials, spe-
cifically, microstructural pattern formation in solidification and a group of topics
in solid mechanics: friction, fracture, granular materials, and polymers and adhe-
sives. The final section includes some brief remarks about nonequilibrium phe-
nomena in biology and in the quantum domain. Each of these topics, in different
ways, illustrates the four themes listed below:
1. Much of the most important progress in recent years has consisted simply
of recognizing that fundamental questions remain unanswered in the physics of everyday phenomena.
¹National Research Council [W.F. Brinkman, study chair], Physics Through the 1990s, National
Academy Press, Washington, D.C. (1986).
Self-organized patterns appear in many settings: chemical waves in reaction-diffusion systems; rolls, hexagons, and plumes in thermal convection of fluid layers; turbulent
“spots” in the transition from the laminar state to turbulence in a boundary layer;
and large-scale circulation patterns in the atmosphere. It is likely that, at least to
some extent, living organisms are the result of this tendency for nonequilibrium
systems to self-organize. This pattern-forming tendency owes its existence to
nonlinearities that dominate the dynamics under conditions of strong forcing.
Although patterns found in nonequilibrium systems are varied in character
and combine an astonishing labyrinth of order and disorder, they do share some
common features. For example, the details of pattern formation are generally
sensitive to small perturbations. In small systems, boundary conditions deter-
mine the positions and orientations of patterns. The nonlinearities in pattern-
forming systems often produce intermittency; that is, such systems may undergo
irregular, large excursions away from their most probable states.
It is only natural, then, to think that these common features might imply a
deeper layer of truth, and that there might exist a general theory of nonequilib-
rium phenomena. Such a theory, if it exists, is still outside our reach; but we have
made substantial progress in developing special theories for some particularly
simple cases. Examples include liquid-crystal hydrodynamics, Rayleigh-Bénard
convection, Taylor-Couette flow, and fully developed turbulence in boundary
layers. The main advantage of studying simple-fluid systems, as opposed to
more complex-fluid systems such as those used in industrial processes, is that the
laws of motion for the simple fluids are well known. If there exist common
underlying principles, they will be most easily discovered in simpler systems.
Specific examples will continue to provide useful insights, and the methods of
analysis that they generate will find broad utility. What is unclear is whether a
deep general theory will emerge from the knowledge acquired by studying spe-
cial systems; whether nonequilibrium phenomena, like thermodynamic critical
phenomena, fall into a small number of universality classes; and whether a broad-
based understanding will eventually enable us to predict and control complex,
technologically important processes.
Pattern Formation
Consider the simple case of Rayleigh-Bénard convection. When a fluid,
initially at rest between two horizontal plates, is heated from below, it experi-
ences a temperature gradient. For small gradients, the heat transfer from the
bottom to the top occurs purely by conduction—that is, by molecular collisions.
When the gradient exceeds a certain threshold, however, the conductive state
becomes unstable and yields to convective states involving bulk motion of the
fluid. If the system is confined so that the fluctuations are correlated across the
entire system, or if the system is modulated externally, the convective dynamics
is largely temporal rather than spatial in character; one then observes a variety of
universal properties associated with temporal chaos of low-dimensional systems.
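The control parameter for this instability is the dimensionless Rayleigh number, Ra = gαΔTd³/νκ, and convection sets in near Ra ≈ 1708 for rigid top and bottom plates. A sketch with illustrative property values for water (the numbers are assumptions, chosen only to set the scale):

```python
# Onset of Rayleigh-Benard convection: Ra = g*alpha*dT*d**3/(nu*kappa) > Ra_c.
g = 9.81          # gravity, m/s^2
alpha = 2.1e-4    # thermal expansion coefficient of water, 1/K
nu = 1.0e-6       # kinematic viscosity, m^2/s
kappa = 1.4e-7    # thermal diffusivity, m^2/s
d = 5e-3          # layer depth, m
Ra_c = 1708.0     # critical value for rigid-rigid boundaries

dT_c = Ra_c * nu * kappa / (g * alpha * d**3)
print(f"convection onset at dT ~ {dT_c:.2f} K for a {d*1e3:.0f}-mm layer")
```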
Turbulence
More complexities develop as one increases the stress applied to a fluid
system at its boundaries, for example, by increasing the heat flux or the shear
rate. (“Increasing the stress” means increasing the Reynolds number.) Among
the complexities are the decay of the long-range order of the patterns, the devel-
opment of new length scales, and the appearance of a strong flux of energy across
the range of length scales (the “inertial range”) on which turbulent motion is
occurring. The scale range increases with the Reynolds number and is bounded,
on the one hand, by the characteristic size of the system (the “large scale”) and,
on the other, by the small “dissipation scale” at which viscous effects become
dominant. The flow is said to be fully turbulent when the scale range is large.
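In the Kolmogorov picture the dissipation scale η shrinks relative to the large scale L as η ~ L·Re^(−3/4), so the inertial range widens rapidly with Reynolds number; a sketch:

```python
# Width of the inertial range: eta ~ L * Re**(-3/4), so L/eta ~ Re**(3/4).
for Re in (1e3, 1e5, 1e7):
    ratio = Re ** 0.75
    # a fully resolved 3-D simulation needs roughly (L/eta)**3 grid points,
    # which is why direct numerical simulation is so computationally costly
    print(f"Re = {Re:.0e}: L/eta ~ {ratio:,.0f}, grid ~ {ratio**3:.1e}")
```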
Well-developed turbulence has some interesting and important features. A
tracer substance such as a dye, when injected into a turbulent flow, is mixed
efficiently and diffused at unusually high rates; isosurfaces of the dye concentra-
tion are fractal; the small scales are spatially intermittent and amenable to
multifractal description and modeling; correlations and fluctuations are anoma-
lously large; and externally imposed perturbations decay slowly. These features
are characteristic of phenomena far from equilibrium.
A quantitative theory of turbulence is likely to be valuable in the study of
other nonequilibrium phenomena. This is why turbulence merits some attention
and discussion here; indeed, until the 1960s, fluid turbulence was the clearest
example of a phenomenon in which a large range of length scales is simulta-
neously important (see Box 4.1). The successful application of scaling, univer-
sality, and renormalization group theory to thermodynamic critical phenomena
has altered this situation, but turbulence still offers one of the cleanest examples
of scaling behavior in nonequilibrium physics.
Physicists generally like to focus on “universal” aspects of the phenomena
they are studying. The conventional wisdom in turbulence theory is that small-
scale turbulence possesses universal properties that are independent of specific
large-scale flows. However, the notion of absolute universality, initiated bril-
liantly by Kolmogorov and others, is not strictly valid for turbulence, let alone for
all nonequilibrium systems. Universality may pertain, at best, only to certain
scaling exponents. The universality of scaling exponents is a compelling no-
tion—one that clearly invites comparisons with other nonequilibrium problems—
and principal questions regarding them are just beginning to be resolved. There
are lingering impediments. For example, at present there is no theory in turbu-
lence for effects of finite Reynolds number or finite shear. Despite advances in
modern experimental methods, properties of turbulence continue to be probed
only partially at high Reynolds numbers, and quantities of theoretical interest can
be measured only approximately. A major advance in this regard is the use of
powerful computers to solve the equations of motion explicitly and thus elucidate
spatio-temporal details of turbulent solutions. The Reynolds numbers of the
numerical solutions are approaching the range of interest for addressing impor-
tant issues.
An interesting question is whether the coherence of the small-scale motion,
in the form of elongated and anisotropic vortex structures, is consistent with the
universal scaling presumed to exist in fully developed turbulence. In an anisotro-
pic ferromagnet near its critical point, for example, the critical indices do not
depend on the magnitude of the anisotropy (although they are different for iso-
tropic and anisotropic cases). In turbulence, however, the critical indices may, in
some instances, depend on the magnitude of the anisotropy. The relation be-
tween scaling, which emphasizes the sameness of various scales, and structure,
which becomes better defined and topologically more anisotropic for larger am-
plitudes, is at present quite obscure.
In summary, the issues considered here are the changes occurring in a fluid
flow that is increasingly stressed at its boundary. The stresses may be applied by
mechanical, thermal, or other means. The changes include instabilities, bifurca-
tions, temporal chaos, pattern formation, phase modulations, defects, growth of
localized structures, interactions among dissimilar length scales and timescales,
universal and anomalous scaling, intermittency, anomalous transport, and the
like. These phenomena have strong similarities to those that are seen in other
nonequilibrium systems. If these similarities can be exploited intelligently, there
will be many new opportunities for understanding turbulence better. Conversely,
turbulence poses a rich variety of problems and has an array of tools of analysis
that should be useful to other branches of nonequilibrium physics.
Fluid turbulence is a difficult problem with a long history, but the pace of
progress has accelerated in recent years. Much of the recent progress is the result
of a powerful combination of modern experimental methods, computer simula-
tions, and analytical advances. The present picture of turbulence is generally
self-consistent despite lingering uncertainties, and recent advances have further
improved our qualitative and quantitative understanding. That the qualitative
understanding should impact practical and industrial problems is substantially an
article of faith; much remains to be bridged between the fundamental develop-
ments of recent years and practical problems of industrial relevance. To some
extent, this is a problem of bridging cultures. To a larger extent, however, this is
a reflection of the difficulties of strongly nonlinear problems that occur far from
equilibrium.
What matters in turbulence is the ability to quantify properly the mix of the
universal and system-specific aspects and to describe that mix economically.
Such an understanding will propel forward not merely the study of fluid turbu-
lence but the entire subject of nonequilibrium physics.
In dendritic solidification, the growth speed of the tip, its radius of curvature, and the spacing of the sidebranches behind the tip are all determined uniquely by the degree of undercooling—
that is, by the degree to which the liquid is colder than its freezing temperature.
The question is, How? (An equivalent problem is one in which the dendritic
behavior is controlled not by the temperature but by the degree of chemical
supersaturation.)
As described in Box 4.2, a rich understanding of the behavior of isolated
dendrites has been found in theories of morphological instability and the discov-
ery that very weak forces—crystalline anisotropy of surface energies, for ex-
ample, or even atomic-scale thermal fluctuations in some cases—can completely
control the patterns that emerge from these instabilities. These new conceptual
developments, however, still leave us very far from being able to predict metal-
lurgically relevant microstructures. Current simulations of casting, for example,
include heat flow and fluid convection in complex geometries but succeed in only
very rudimentary ways in coupling those effects to the formation of dendritic
microstructures.
Perhaps the most important theoretical challenge is a quantitative under-
standing of what is called the “mushy zone”—the region between the fully
formed solid and the molten fluid where the dendrites are forming and interact-
ing among themselves. Within this region, the thermal, chemical, and hydrody-
namic degrees of freedom of the system are all active. Even if each dendrite is
behaving according to the rules already discovered, it is doing so in an environ-
ment where the local growth conditions are determined by its neighboring
dendrites and their associated diffusion and flow fields. This behavior is al-
most certainly chaotic, and therefore most likely will have to be described in
probabilistic rather than deterministic terms. We know that the mushy zone has
its own collective instabilities that can produce fatal structural defects in the
solidified materials.
The situation in the real world is even more complicated. In many casting
processes, new dendrites nucleate at impurities throughout the molten fluid as it
cools. Thus these processes are highly sensitive to the purity of the materials.
Moreover, heterogeneous nucleation of this kind is extremely difficult to predict
or control, even under ideal conditions. Other complications arise from the fact
that, in welding, for example, the molten fluid itself is turbulent.
Can such behavior be modeled in a usefully predictive way? Can the rel-
evant dynamics be described with sufficient accuracy by some coarse-grained,
many-dendrite theory; or will there be such sensitivity to details and such a huge
variety of possibilities that this problem will forever be beyond our reach? And
even if we can make substantial progress, will we be able to translate our theoreti-
cal understanding into decision-making tools that will be applicable to real-life
manufacturing?
These questions regarding intrinsic limits of predictability are unavoidable.
Nevertheless, we should be able to do better than we can at present. We already
know enough about these systems to recognize that a coordinated experimental and theoretical program is warranted.
The theory of dendritic pattern selection has been checked in numerical studies that have probed its nontrivial
mathematical aspects. As a result, although we know that there must be other
cases (competing thermal and chemical effects, for example, or cases where the
anisotropy is large enough that it induces faceting), we now have reason to feel
confident that we understand at least some of the basic principles correctly.
Engineers have developed continuum fracture mechanics to the point that it allows them to compute stress distributions and failure criteria for highly complex solid
objects. With the help of modern computers, they are now able to predict with
confidence the mechanical properties of structural materials in a wide variety of
engineering applications.
Almost all of this progress, however, pertains to static—or very nearly
static—phenomena. Roughly speaking, conventional fracture mechanics has been
concerned primarily with predicting when materials will break and much less
with understanding what happens after failure occurs. The latter topic, “fracture
dynamics,” remains largely unexplored. It is fundamentally more challenging
than its static counterpart because it involves very deep issues in nonequilibrium
physics. Some of the most important outstanding questions are, How fast do
cracks move? What mechanisms limit their growth? How is energy dissipated in
fracture? What determines the various kinds of fracture patterns that we see in
nature?
One of the most important recent developments in fracture dynamics has
been the experimental demonstration that fast brittle cracks undergo material-
specific instabilities (see Figure 4.1). Fracture surfaces frequently are rough;
they may even be fractal. We now know that this roughness often occurs because
fast cracks are unstable with respect to bending away from their directions of
propagation or dissipating energy in the form of tip-splittings or sidebranches.
High-speed fracture is frequently a complex, chaotic, pattern-forming process.
But instability is not a universal phenomenon. Since our ancestors first made
stone tools and later learned how to “cut” diamonds, it has been clear that sharp,
smooth fracture surfaces can be made by producing cleavage cracks in glassy or
crystalline solids. Apparently the trajectories of those cracks are stable.
FIGURE 4.1 Unstable fracture in a polymeric glass. This figure illustrates an experiment
in which a crack was observed with high precision as it moved along the center line of a
long strip of the polymer. (a) The graph of crack speed versus crack length. We see that
the initial crack in the unstressed sample was about 2 cm long. The stress applied to
the sample was increased until the crack suddenly started moving at a speed of about 200
m/s. The crack then accelerated smoothly up to a critical speed of about 400 m/s, at
which point an instability occurred that shows up on the graph as a rapid and irregular
oscillation of the crack speed. (b) Photograph of the fracture surface left by the unstable
crack. The front face is the fracture surface itself, with visible roughness on the scale of
0.1 mm. The top and right faces show that the instability also generated sidebranching
cracks and subsurface damage as the main crack moved through the system. (Courtesy of
University of Texas, Austin.)
Friction
Another classic part of materials research that is enjoying a resurgence of
interest among physicists is the science of friction. This topic has much in
common with dynamic fracture and adhesion. Two interacting solid surfaces
sliding past each other look in many ways like a dynamic shear crack. Mecha-
nisms such as cohesion and decohesion, energy dissipation, elastic deformation,
and so on are all relevant. But friction is an even larger and more complex topic
than fracture because it occurs in such a wide variety of circumstances and,
apparently, with an equally wide variety of underlying physical mechanisms.
The conventional goal of research on this topic is to determine frictional
forces as functions of the relative state of motion of two solid surfaces and the
stress holding the surfaces in contact with each other. Real friction, however, is
far more interesting and complex than this conventional statement would make it
seem.
Much of the recent progress in this area has been based on novel techniques
for visualizing the microscopic processes that take place during friction-con-
trolled sliding. Several atomic-scale probe microscopies have been used, as well
as some relatively simple and direct methods for following the motions of larger
features such as contact points and asperities. Numerical simulations, especially
via molecular dynamics, are now beginning to provide very valuable insights;
and good use also is being made of analog systems for making accurate observa-
tions of slipping events. A recent example of the latter technique involves layers
of carefully characterized granular substances confined between sliding plates.
Friction problems fall very roughly into three different categories: friction
between molecularly flat crystalline surfaces, friction between deformable rough
surfaces, and, in a very general sense, lubricated friction—that is, friction con-
trolled by the dynamic behavior of substances constrained to move between the
surfaces that are sliding across one another. In the first of these categories, the
clean crystalline surfaces, it is possible to make plausible models that involve
only atomic-scale degrees of freedom. Although such models still must include
assumptions about irreversible behavior, they are relatively well posed and, in
some cases, they are now beginning to produce credible agreement between
theory and experiment.
The other two categories of friction problems are fundamentally more chal-
lenging because they involve two or more widely separated length scales and
timescales. They may also be of broader practical importance.
In dry friction between polycrystalline, noncrystalline, or otherwise imper-
fect surfaces, the actual area of contact is much smaller than the nominal area of
the surfaces. The behavior of the small contact regions is crucial in determining
frictional forces and dissipation rates, but there is as yet no clear understanding of
the physical mechanisms that occur there. The problem seems to have issues in
common with fracture; the behavior is governed by cohesion and decohesion at
atomic-scale contacts that are strongly coupled to larger-scale elastic and plastic
modes of deformation. One useful way of dealing with systems of this kind is to
describe them not just by the relative positions and speeds of the sliding surfaces
but also by “state variables” that might represent, for example, the density and
strength of the contacts, and that obey equations of motion of their own. Such
“rate and state dependent” friction laws have been developed especially by
seismologists.
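Such laws have a standard form, the Dieterich-Ruina rate-and-state law used in seismology. A sketch with illustrative parameter values (not fit to any material):

```python
import numpy as np

# Dieterich-Ruina rate-and-state friction (illustrative parameters):
#   mu(v, theta) = mu0 + a*ln(v/v0) + b*ln(v0*theta/Dc)
#   dtheta/dt    = 1 - v*theta/Dc      ("aging" of the contact population)
mu0, a, b = 0.60, 0.010, 0.015
v0, Dc = 1e-6, 1e-5                    # reference speed (m/s), slip scale (m)

theta, dt = Dc / v0, 1e-3              # start in steady state at v = v0
for _ in range(20_000):                # then step the speed up tenfold
    v = 10 * v0
    theta += dt * (1.0 - v * theta / Dc)

mu = mu0 + a * np.log(v / v0) + b * np.log(v0 * theta / Dc)
mu_ss = mu0 + (a - b) * np.log(10.0)   # analytic steady state at 10*v0
print(f"mu = {mu:.4f}, steady state = {mu_ss:.4f}")  # b > a: velocity weakening
```

The steady-state friction decreases with sliding speed when b > a, which is the velocity-weakening condition needed for the stick-slip instabilities discussed next.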
The ostensibly most complex problems in this field are those in which a
“lubricant”—that is, some extraneous substance—is present in the space sepa-
rating the sliding surfaces and transmits the frictional forces from one surface
to the other. In some of the most interesting recent experiments, the lubricant
is confined to a very small region, just a few molecular diameters across, and
thus its properties—especially under shear—may be quite different from those
of the same substance in bulk. Now the use of state variables is absolutely
essential. The lubricant may respond to changes in the shear rate by changing
its state, perhaps from liquid-like to solid-like, and such variations may occur
on many different space scales and timescales. The challenge is to identify
the essential degrees of freedom for these complex systems and to under-
stand the interrelations between the relevant microscopic and macroscopic
phenomena.
One of the most interesting and characteristic kinds of behavior seen in
friction experiments is stick-slip motion. In many circumstances, surfaces in
contact with one another will stick together until the applied shear stress reaches
some threshold, and then will slip past each other in accord with a rate-dependent
friction law until, under the influence of external forces perhaps, they come to
rest and restick. Familiar examples include squeaky door hinges and the motion
of a violin string driven by a bow.
It is easy to imagine how stick-slip motion can occur at a localized asperity,
that is, at a point where irregularities on opposite surfaces are attached to each
other via contact forces or molecular bonds. Slipping begins when the bond
breaks and stops when a new bond is established. On macroscopic scales, friction-
limited slipping may be the average of very many uncorrelated microscopic stick-
slip events. Macroscopic motions also may have a stick-slip character, as in the
case of the squeaky hinge. Such behavior occurs when the combined action
of dynamic friction and external loading induces some kind of mechanical
instability.
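A minimal spring-block caricature captures the cycle just described: a block dragged through a spring sticks until static friction is overcome, slips under weaker dynamic friction, and resticks. All parameters below are illustrative:

```python
# Spring-block stick-slip: a puller advances at speed V; the block sticks until
# the spring force exceeds F_s, then slides against dynamic friction F_d < F_s.
m, k, V = 1.0, 50.0, 0.1
F_s, F_d = 10.0, 6.0
x = v = xp = 0.0
dt, stuck, slips = 1e-4, True, 0
for _ in range(400_000):
    xp += V * dt
    if stuck:
        if k * (xp - x) > F_s:          # threshold reached: start slipping
            stuck = False
            slips += 1
    else:
        v += dt * (k * (xp - x) - F_d) / m
        x += v * dt
        if v <= 0.0:                    # block halts and resticks
            v, stuck = 0.0, True
print(f"{slips} slip events; final spring stretch = {xp - x:.3f}")
```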
One interdisciplinary research topic that combines many of these ingredi-
ents—stick-slip friction plus fracture—is earthquake dynamics. Earthquakes, by
definition, are stick-slip events. They are triggered when some piece of a fault is
brought to its slipping threshold by the tectonic forces in the Earth’s crust. They
have the additional features that they occur on large length scales and have an
extremely broad range of sizes, even on single fault segments. Both physicists
and seismologists have been interested recently in the discovery that models of
earthquake faults consisting simply of elastically coupled stick-slip slider blocks
are deterministically chaotic systems that exhibit some of the characteristic be-
havior of real faults. Of course, these models do not account for the geometric
complexity of real seismic phenomena; but the qualitative picture that they pro-
vide, in which large events occur intermittently as cascades of small events, is at
the least an intriguing caricature of many kinds of self-organized phenomena. It
might even prove to be useful in seismology.
Granular Materials
Granular substances such as sand provide an especially clear example of a
familiar class of materials whose properties have yet to be understood from a
fundamental scientific point of view. These materials have been studied empiri-
cally for centuries in civil engineering, geology, soil mechanics, etc., because
they are essential ingredients in a wide variety of natural phenomena and have
many practical applications. But we do not know how to answer some of the
most basic questions about their behavior.
There are several clear distinctions between granular materials and the other,
superficially comparable, many-body systems that are more familiar to physi-
cists. Because they have huge numbers of degrees of freedom, they clearly need
to be understood in statistical terms. However, individual grains of sand are
enormously more massive than atoms or even macromolecules; thus thermal
kinetic energy is irrelevant to them. On the other hand, these grains also have
infinitely many internal degrees of freedom; thus they may—or may not—be
highly inelastic in their interactions with each other or with other objects. They
also may—or may not—have irregular shapes; arrays of many grains may achieve
mechanical equilibrium in a wide variety of configurations and packings. It
seems, therefore, that the concept of entropy must be relevant. We shall need
some way of deciding which are the statistically most probable states under
various constraints. But is there any analog of temperature or internal energy?
What other quantities might be necessary for describing the states of these
substances?
The questions become even more interesting when we consider the analogs
of nonequilibrium properties for granular materials. What happens to sand when
it is made to vibrate? Or when it is exposed to shear stresses? In some circum-
stances it behaves like a solid; close-packed sand can support limited shear
stresses. In other circumstances—strong shaking in an earthquake, for example—
it flows like a liquid. In yet other circumstances, granular materials behave in
ways that we do not yet know how to characterize (see Figure 4.2). Their free
surfaces spontaneously form regular patterns when shaken in special ways; their
internal stresses organize themselves into chain-like structures under certain kinds
of loading; flow patterns sometimes look roughly like localized shear bands.
Granular materials are only the simplest examples of states of matter that are
unfamiliar and relatively unexplored from a fundamental point of view, yet ap-
pear in many ordinary circumstances. To change the granular system just a little,
we might consider cases in which the grains cohere to each other. If the coher-
ence is weak, such substances may behave like viscous fluids—wet sand or clay,
for example. If it is strong, then we have materials like concrete or sandstone
which, for the most part, behave like ordinary solids. They support shear stresses,
and they can be brittle or ductile in their failure modes. In both cases, however, the connection between grain-scale structure and macroscopic mechanical behavior remains poorly understood.
FIGURE 4.2 Localized standing wave in a vertically vibrated layer of 0.2 mm diameter
bronze balls. (Courtesy of Center for Nonlinear Dynamics, University of Texas, Austin.)
Biological Materials and Phenomena
Biology has historically been a more descriptive and less quantitative science
than physics. Until very recently, there has been little room in biology for what
physicists call “theory.” The complex phenomena being observed and inter-
preted by biologists are taking place in systems whose fundamental properties are
not understood in the way we understand, for example, the physics of solid xenon
or the mechanical properties of grains of sand. Physicists usually have not had
the information they need for developing quantitative theories of biological phe-
nomena or the tools they need for testing those theories.
As described in more detail elsewhere in this report, that situation is now
beginning to change. Laser tweezers, atomic-force microscopes, and the like are
permitting us to see what individual molecules are actually doing during biologi-
cal processes. It is now possible, for example, to measure forces between cellular
membranes, to watch those membranes change their shapes in response to vari-
ous kinds of stimuli, or to see how proteins are formed and transported from one
place to another within cells. From the wealth of information just now becoming
available, we are beginning to understand that large biological molecules often
function as machines, absorbing energy from their chemical environments, dissi-
pating energy, and doing biologically useful work—all in accord with the basic
principles of nonequilibrium physics.
There seems little doubt that, so far, we are seeing only a very small part of
the huge world of biological materials and biophysical phenomena. The near-
term challenges for physicists working in these areas will be to identify those
biological systems that are ripe for quantitative investigation, to develop the
instruments and techniques for data analysis that will be needed to characterize
those systems, and to induce quantitative and predictive theories that can serve as
guides for further experimentation. Ultimately, the goal is to acquire a deep,
detailed understanding of the most extraordinary of all nonequilibrium phenom-
ena: life itself.
the “mushy zone.” Another part of the difficulty is that there is relatively little
effort in this area in the United States, especially in industrial laboratories.
3. Recent developments in scientific instrumentation, especially atomic-
scale resolution in probe microscopy, plus extraordinary advances in computing
power, mean that long-standing problems in solid mechanics should now be
solvable. These are fundamentally challenging problems that involve non-
equilibrium statistical physics, nonlinear dynamics, and the like. They are also,
essentially without exception, directly relevant to modern technology. Among
those problems are the following:
a. The origin of dynamic instabilities in brittle fracture;
b. The fundamental distinction between brittleness and ductility in both
crystalline and amorphous solids;
c. The relation between molecular and mesoscopic structure and mechani-
cal properties, especially fracture toughness, in composite materials containing,
for example, varieties of polymeric constituents;
d. The relation between molecular and mesoscopic structure and the dy-
namics of friction in an extremely wide variety of situations, ranging from atomi-
cally flat surfaces interacting across molecularly thin layers of lubricants, to
tectonic plates interacting across earthquake faults; and
e. The relation between elementary interactions between grains and the
macroscopic mechanical behavior of granular materials.
4. In all probability, the next major frontier for research in nonequilibrium
physics will be in the area of biological materials and phenomena.
5. The same recent advances in scientific instrumentation and computing
power that portend both major advances and major surprises in nonequilibrium
materials research also force us to face fundamental issues in the physics of
complex systems. The problem of understanding the limits of predictability in
these systems must be addressed with every bit as much skill and objectivity as
the more familiar problems of understanding specific properties of specific sys-
tems. These issues lie, not just at the interface between different scientific disci-
plines, but also at the interface between science and public affairs.
5
Soft Condensed Matter:
Complex Fluids, Macromolecular Systems,
and Biological Systems
• Summer schools,
• “Bilingual” survey texts and tutorials,
• Continuing education,
• Industry-academic visitation and collaboration,
• Grant programs to encourage truly basic research, and
• Graduate training in chemistry, physics, and biology.
The world produces 5 × 10¹¹ (500,000,000,000) liters of milk each year. Some
of this is directly consumed. A major part is processed or used to supply industrial
components. The milk protein casein, for example, 170 million pounds of it per
year in the United States, is put into bakery products, medicines, adhesives, pa-
per, low-fat coffee whiteners, and synthetic whipped dessert toppings.
Milk is a fragile mixture of fat and proteins in water. The structure and compo-
sition of fat globules, casein micelles, globular proteins, and lipoprotein particles
have the malleability to allow them to be made into hundreds of butter, cream,
yogurt, and cheese products. Their natural “complex fluid” properties are the kinds
that physicists are now beginning to recognize.
In the dairy industry, processing is often guided by ingenious trial and error.
The condensed-matter physics of soft matter now has a chance to contribute
here. The gelation of fat globules and proteins, the distribution of gel networks,
and the size of nano-droplets of dispersions—which change the texture, taste,
and feel of food—are in fact physical properties amenable to systematic physical
investigation.
in fields that traditionally disdain each other. The complexities of materials are
themselves challenging enough to require no elaboration. New materials and
properties are now studied in physics departments; at the same time there is
increasing need for good physics in biology and engineering departments.
It is said that one of Isaac Newton's greatest achievements was to extract
from Johannes Kepler's notebooks the two Kepler laws that showed Newton the
way to the law of universal gravitation and the explanation of planetary motion. The
notebooks and their calculations were themselves inspired by Tycho Brahe’s
astronomical observations. There is an analogy here to modern soft-materials
research.
Biological systems present a set of successful molecular mechanisms that
create the living state. The path of trial and error to industrial success leaves a
valuable though diffuse trail of information. To pick out tractable essentials from
these data is a challenge that might lead to the discovery of what makes a system
live, today’s equivalent of Newton’s realization of gravity. The very mass of new
data creates its own challenges. Entire genomes of species, including our own,
are being mapped. Already one hears biologists speak of a “post-genomic era”
when new thinking will be needed to work with the new information and new
materials. Much of that thinking is expected to come from physicists.
Condensed-matter and materials physicists are used to thinking in terms of
emergent phenomena in large complex systems (see Chapter 3) and understand
that the simple paradigm that “structure determines function” can easily fail
because of collective phenomena. However, much of our experience is in rela-
COMPLEX FLUIDS
As though condensed fluids were not already sufficiently complex (see Box
5.2), condensed-matter physics has defined “complex fluids” in an effort to in-
vestigate the suspensions and solutions of large molecules. Here “large” begins
with the nanometer size of proteins and high polymers and extends to the micron-
plus dimensions of colloids, liquid crystals, and grains of sand. Particles of this
size organize themselves by steric collisions that create unexpected symmetries
and sensitivity to boundary surfaces. Their interactions are governed by electro-
static and solvation forces in forms not seen between smaller particles.
static, van der Waals, and hydration forces. The stability of a lamellar array
reflects the interplay of all these factors, together with the layer flexibility that
allows thermal undulation in the first place.
Similar reasoning holds for the packing of rod-like particles such as slightly
flexible linear polymers and some viral particles.
Pursuing a sudden opportunity to examine the huge number of new liquid-
crystal phases with natural and artificial materials in solution, theorists are
creating a new language of structure and symmetry. From the observed struc-
ture of these phases, experimentalists construct materials of controlled micro-
scopic structure, symmetry, density, and thermal conductivity. For example,
the fragile cubic lattice of a lipid-water microemulsion can be perfused by
water-soluble monomers that are then polymerized to create a hardened tortu-
ous network. These materials have extraordinarily high surface areas for their
volume and can be engineered to host chemical reactions that must progress on
surfaces.
The study of glasses is particularly useful here because they allow us to narrow
the energy range and hence the number of configurations of the liquid that need to
be considered. Metallic glasses are the prime experimental systems to test the
above paradigm because, unlike network-forming or organic glasses, the building
blocks of metallic glasses are single, spherically symmetrical atoms. All glasses
are intrinsically unstable; they are formed when the liquid goes out of equilibrium
when cooled below the glass transition temperature. As a result they can contin-
uously lower their free energy by a process of structural relaxation. Study of these
relaxation phenomena yields unique additional information about the structure and
defects of these glasses.
Polymeric microemulsions—i.e., polymer equivalents of microemul-
sions—have been identified. Thus it may now be possible to make inexpensive
blends out of immiscible polymers and create particles whose sizes are tens
of nanometers. To control particle size precisely, we must understand the kinet-
ics of formation of these microemulsions out of their highly viscous polymer
components.
Liquid crystals of lipid and water can hold DNA or drugs to deliver them
through the lipids of the membranes that protect biological cells. Strategies are
being developed to transfect cells by delivering alien DNA across this protective
barrier, while protecting the DNA so as to allow it to become part of the target
cell’s genome.
Specialized domains in the membranes that surround cells confine proteins,
forcing them into two-dimensional order. Physical theory and measurement of
lateral organization are already an important part of the search to explain the origin
and function of these regions.
FIGURE 5.1 The laser grip of optical tweezers makes it possible to position two micron-
sized spheres at a specified separation. When the laser is turned off, the spherical colloids
begin to diffuse. This diffusion is followed through computerized image analysis of the
spheres’ motions as seen through a light microscope. From the paths of the two colloids
after release, it is possible to infer the forces between the spheres. If they repel, the
spheres drift apart. Micron-sized spheres can be positioned at various locations to test the
consequence of proximity to surfaces. (Courtesy of the University of Chicago.)
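A highly simplified sketch of the kind of inference the caption describes appears below; it is not the analysis actually used in these experiments. It assumes a measured separation trajectory r(t) and a separately measured diffusion coefficient D, and converts the observed drift into a force through the Einstein relation (all numbers are hypothetical):

    import numpy as np

    def pair_force(r, t, kT, D):
        """Estimate the inter-sphere force from the separation r(t) of two
        released colloids. In the overdamped limit, the drift velocity dr/dt
        equals F/gamma, and the Einstein relation gamma = kT/D converts
        measured drift into force: F(r) ~ (kT/D) * dr/dt."""
        drift = np.gradient(r, t)     # numerical dr/dt along the trajectory
        return (kT / D) * drift       # positive force = repulsion

    # Hypothetical values for a micron-sized sphere in water.
    kT = 4.1e-21                      # J at room temperature
    D = 0.4e-12                       # diffusion coefficient, m^2/s (assumed)
    t = np.arange(0.0, 1.0, 1 / 30)   # video-rate sampling, s
    r = 2.0e-6 + 0.1e-6 * (1.0 - np.exp(-t))  # synthetic repulsive trajectory

    print(pair_force(r, t, kT, D)[:3])  # forces in newtons, ~1e-15 N here

In practice one averages over many release cycles, since a single pair of Brownian paths is dominated by noise.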
the structure of the solvent. All the usual difficulties that impede understanding
of highly structured liquids are amplified by the minute details of macromolecu-
lar structure.
Still, the empirical facts speak for themselves, telling us how to think more
logically about molecular organization. New forms of colloidal crystals and
suspensions can be designed using measured forces. Computer algorithms can be
Polyelectrolytes
Polymers whose properties are dominated by their electrostatic charge are
instructive because of their solution properties, their ability to control ion activi-
ties, and their propensity to form liquid crystals. Among biopolymers, DNA has
been the most intensely studied from the viewpoints of liquid-crystal physics as
well as its “solution” properties in the cell. Intermolecular forces have been
measured in detail, both for molecules in simple salt solution and for those
organized by natural condensing agents. There has been extensive work aimed at
modeling the electrostatic potential around DNA.
Among artificial materials, there are several electrically charged or polariz-
able polymers, both natural and synthetic, that form networks controlled by
applied electric fields. Some of these materials are block copolymers, neutral as
well as charged, whose long-range order can be seen to emerge from mesogenic
organizing centers in the molecules themselves. Enhanced stiffness in polyelec-
trolytes can be achieved by neutralization with oppositely charged aliphatic
surfactant molecules. Modest electric fields can be particularly effective in
creating organization in liquid-crystalline polymers. Large-scale organization
can also be induced by shear processing these materials because, like most
mesomorphic materials, they readily align in shear flow.
The interplay between the various degrees of flexibility and freedom and the
electrostatic forces within and between molecules creates many modes of packing
and stimulates theories of molecular organization. The need is for complete theo-
ries that include real force potentials rather than analytically convenient approxima-
tions. Precise determination of phase structure from x-ray and neutron diffraction,
combined with direct measurement of the work of assembly, will be essential.
At the moment, DNA is probably the friendliest polyelectrolyte, natural or
synthetic, used to create well-defined liquid crystals in water. Soon there should
be other made-to-order polymers in practically any single length. Polydispersity,
a major nuisance when testing theories of polymer assembly, is not an impedi-
ment with uniformly engineered DNA. It may even be that the size regulation so
useful for fundamental studies will also make DNA and other precisely prepared
synthetic model compounds similarly useful for practical applications. Because
they are highly soluble in water, polyelectrolytes might come to replace organic
polymers, which must be dissolved in environmentally unsuitable solvents.
Polysaccharides
Although the substance of enormous industries (see Box 5.3), polysaccha-
rides have been relatively unappreciated by most polymer chemists and physi-
cists. Cellulose, whose biomass exceeds that of any other natural polymer, was
not even mentioned in the National Research Council's 1994 survey of polymers.[1]
At present, modifications in chemistry and physical processing are creating new
research questions and many practical applications, from the design of new paper
currencies to the creation of industrial fibers to cosmetics to artificial food and
blood thickeners.
It has recently become possible to measure equations of state of several
polysaccharide systems, a development that should demand better theories of
polymer assembly. Considering the mass of polysaccharides in the world, their
economic and practical importance, and the excellent chemical and biochemi-
cal work already done, it is surprising that physical research has been so lim-
ited. Because polysaccharides are often polymers of repeating units, they would
seem to be an ideal material for physicists to study. There is a splendid oppor-
tunity here for instructive physics on materials far less complicated than the
more popular proteins. Their swelling properties, their viscous and elastic
capabilities, and their stability over a wide range of solution conditions and
temperatures are theoretically intriguing and technically enticing. There is
already the expectation of creating bacterial polyesters and biodegradable thermoplastics.
[1] Polymer Science and Engineering: The Shifting Research Frontiers, National Academy Press,
Washington, D.C. (1994).
The label on a bottle of salad dressing, a box of ice cream, a coffee whitener
reminds us of the various polysaccharides that we consume. Among these many
polymers of sugar are guar from seeds, carrageenan from seaweed, pectin from
fruits, and xanthan from the coats of microbes. These compounds are used as
thickeners and preservatives, often playing a role parallel to that played in natural
circumstances. Xanthan is so versatile, so stable in its physical properties in the
face of heating and mixing with salts that it is pumped into the ground to stimulate
the recovery of oil wells and into your stomach after giving food the right “mouth
feel.” The animal polysaccharide hyaluronic acid is a significant component in
cartilage and connective tissue; commercially extracted from animals, it is increas-
ingly used medically for the repair of joints and cartilage.
It is no surprise then that natural polymers, or slightly modified natural poly-
mers, are industrially popular. Several billion pounds of starches from plants are
used in the United States alone for processing paper and in sizing, binding, and
adhesive applications. Hundreds of millions of pounds of modified cellulose find
their way into foods as well as paper and construction materials.
Viscous and elastic properties make these polymers industrially valuable.
These are physical properties. Yet physics has paid surprisingly little attention to
polysaccharides and related polymers. With its new capabilities, soft-matter phys-
ics can be expected to recognize and to modify the behavior of these materials that
have traditionally enjoyed the attention mainly of chemists, colloid scientists, and
chemical engineers.
The natural polysaccharides that coat some bacteria are able to
direct the precipitation of minerals dissolved in the surrounding solutions;
heroic hopes for deep-sea mining might be coupled to learning how tiny bacte-
ria collect minerals.
Taken in the context of polymer studies in general, there is the possibility to
study materials that have already been selected by nature for their physical prop-
erties. Most industrial use has been guided by trial and error rather than by
combination with systematic physical theory and experiment. In the food indus-
try particularly there are huge potential benefits.
FIGURE 5.5.2 Single dendrimers (top) “packed” with drugs and (bottom) releas-
ing the drugs at specified binding sites. (Courtesy of the National Institute of
Standards and Technology.)
FIGURE 5.2 The crumpling of a membrane sheet. Perspective and side view of an
instantaneous configuration of a large tethered membrane composed of 4219 monomers.
[Reprinted with permission from F.F. Abraham and D.R. Nelson, “Diffraction from
polymerized membranes,” Science 249, 394 (1990). Copyright © 1990 American Asso-
ciation for the Advancement of Science.]
part of the body for local, directed, sustained drug release. Stable layers of
synthetic and natural polymers create substrates for directed cell culture.
Biological Connections
In contrast to synthetic polymers, which are composed of at most three
monomer types, normally arranged in random order, proteins are synthesized
according to a programmed sequence involving 20 kinds of monomer. They are
also produced to a precise length. Other classes of biopolymers, such as saccha-
rides and nucleic acids, are also very precisely specified compared with artificial
synthetics; even heterogeneity appears to be intentionally created. Polymer chem-
ists and molecular biologists have begun to collaborate to synthesize copolymers
with the same precise control of monomer sequence and chain length. The
challenge for polymer physicists, as precision-architecture polymers become
available, is to understand the link between architecture and polymer system
properties. Lured by the wide range of properties conveyed by biological macro-
molecules, from spider silk to the elastin of blood vessel walls, we expect to see
polymers with similarly remarkable properties emerge from these new syntheses.
Given the vast range of possibilities and the small initial quantities of each
polymer, it will be necessary to develop methods for rapid screening. Scanning-
probe microscopies have begun to be used to determine mechanical properties.
The aim is to be able to screen polymers for desired properties with the same
efficiency that is achieved for developing and producing biological polymers by
the natural checks of evolution and growth.
BIOLOGICAL SYSTEMS
Biological molecules are substances that have evolved to do highly specific
jobs on highly specific timescales. Physicists have had impressive success study-
ing the dynamics of macromolecules, particularly proteins. The mechanics of
single molecules can be measured and used to test theories of molecular confor-
mation. The measured energies of packing biopolymers inform us of the work
needed to package them into cells and viruses and challenge us to explain and
manipulate macroscopic properties.
It helps to distinguish incidental and essential physical properties of
biomolecules. For some materials, such as DNA, the cell works with physical
properties because it must. For others, such as RNAs, proteins, lipids, and
polysaccharides, molecular physical properties are themselves useful to the or-
ganism. There are bulk materials that provide structural stability to the cell or
organism. Happily, the language and concepts of lyotropic liquid-crystal physics
often meet the need to examine biological materials.
The committee enthusiastically agrees with that study and will not repeat its
advice here. Rather, this section will provide more examples for both basic and
applied research. It will suggest possibilities for a physics of biological materials
in which physical thinking will be essential to the understanding of how biologi-
cal substances are designed to work in their native habitat. There are many
biological phenomena that traditionally or currently have been studied produc-
tively through physical thinking. Some examples are as follows:
FIGURE 5.3 Quick clicks. High-power x-ray sources provide successive pictures of
a protein process. [Reprinted with permission from W.A. Eaton, E.R. Henry, and
J. Hofrichter, “Nanosecond crystallographic snapshots of protein structural changes,”
Science 274, 1631 (1996). Copyright © 1996 American Association for the Advance-
ment of Science.]
[3] Ken Dill, NSF Workshop on Interdisciplinary Macromolecular Science and Engineering (unpublished),
chaired by S.I. Stupp, University of Illinois at Urbana-Champaign, May 1997.
FIGURE 5.6.1 Using a bubble to tug apart large molecules. (Courtesy of Boston
University and the University of British Columbia.)
FIGURE 5.4 A molecular Coulter counter. A bilayer membrane between two chambers
contains a single ionic channel of ~1 nm diameter. If a small molecule moves into an
open channel, the event is seen as a reduction in electric current. The duration of this
reduced current measures the residence time of the molecule diffusing into and out of the
channel. [Reprinted with permission from S.M. Bezrukov et al., “Counting polymers
moving through a single ion channel,” Nature 370, 279 (1994). Copyright © 1994 Nature.]
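Extracting residence times from such a trace is a simple exercise in threshold detection. The sketch below (Python; the trace, sampling interval, and threshold are hypothetical, not taken from the cited experiment) measures each blockade as a contiguous run of samples below threshold:

    import numpy as np

    def residence_times(current, dt, threshold):
        """Extract dwell times from a current trace: a blockade is any
        contiguous run of samples below `threshold`, and its length times
        the sampling interval `dt` is one molecule's residence time."""
        blocked = current < threshold      # True while a molecule is in the pore
        edges = np.diff(blocked.astype(int))
        starts = np.where(edges == 1)[0] + 1
        ends = np.where(edges == -1)[0] + 1
        if blocked[0]:
            starts = np.insert(starts, 0, 0)
        if blocked[-1]:
            ends = np.append(ends, len(blocked))
        return (ends - starts) * dt

    # Synthetic trace (illustrative numbers only): 100 pA open-channel
    # current with two blockades to ~70 pA.
    dt = 1e-5                                    # 10 microsecond sampling
    trace = np.full(1000, 100.0)
    trace[200:260] = 70.0                        # 0.6 ms event
    trace[600:630] = 70.0                        # 0.3 ms event
    print(residence_times(trace, dt, threshold=85.0))  # -> [0.0006 0.0003]

The distribution of these dwell times, accumulated over many events, is what characterizes the polymer's diffusion into and out of the pore.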
Molecular Association
There is justified pride in modern polymer synthesis, by which stretches of
one or another kind of monomer allow polymers to associate in parts to multimo-
lecular arrays of specific symmetry, packing, and material properties. Yet this
kind of packing is rough compared to that of proteins or DNA, whose every
monomer has functional consequence. One mutation in one amino acid of an
antibody will qualitatively weaken its antigen-antibody binding strength and
specificity. A change in just one of six nucleotides will spoil a sequence for
recognition by a protein that controls gene expression. Strength and specificity
are what count when an antigen or a hormone binds to a cell-surface receptor at
the end of a molecule that reaches through the cell membrane into the cell itself,
where it can organize internal machinery just from the tension created by external
binding. Even as x-ray diffraction reveals the intricacies of the essential contacts,
we have no more than cartoon ideas of how energies are transmitted and applied.
Physical opportunities abound.
FIGURE 5.5 The right details: precision fit in the binding of an antigen to an antibody.
The antigen, a lysozyme protein, nestles into an antibody. A groove in the antibody
matches itself tightly against a ridge in the lysozyme antigen. This ridge is formed by two
arginines, at positions 45 and 68. Such is the precision of the match that mutation of
Arg68 to the chemically similar lysine reduces binding strength by a factor of 1000.
[Reprinted by permission from S. Chacko, E. Silverton, L. Kam-Morgan, S. Smith-Gill,
G.H. Cohen, and D.R. Davies, "Structure of an antibody-lysozyme complex. Unexpected
effects of a conservative mutation," Journal of Molecular Biology 245, 261-274 (1995).
Copyright © 1995 Academic Press.]
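The factor of 1000 quoted in the caption above has a simple energetic translation: a thousandfold change in binding constant corresponds to a binding free-energy change of kT ln(1000), roughly 4 kcal/mol, from a single conservative substitution:

    import math

    # Free-energy cost implied by a factor-of-1000 loss in binding strength:
    # delta_G = kT * ln(1000), with kT expressed per mole at 298 K.
    kT_kcal = 0.593                        # kT at 298 K, kcal/mol
    delta_G = kT_kcal * math.log(1000.0)
    print(f"delta_G ~ {delta_G:.1f} kcal/mol")   # ~4.1 kcal/mol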
Between the chemists’ syntheses and nature’s precise machining, there is the
possibility to work both ways: to use new tricks in synthesis to create molecules
with preferred properties, and to make changes in nature’s design—intentional
mutations of natural structures—to modify properties.
Great possibilities will be realized here when physicists trained to think
about molecular organization are also trained in the much easier crafts of synthe-
sis, mutation, modification, and manipulation. For example, biosensors are being
designed with biological materials for contact with the species to be detected and
electrodes with integrated circuitry for amplified response.
Priorities
Education
When physicists work with materials that were once the province of other
fields, and when scientists in those fields use what physicists have learned, they
discover that there are different ways of learning, thinking, and even speaking in
the different fields. It is easy to say that education in physics, chemistry, and
biology must be broad as well as deep. It is easy to argue that tomorrow’s
condensed-matter physicists should not fear to synthesize polymers or handle
proteins or express genes. Although such skills are easily learned, there are many
obstacles to such broad learning. Even if there were time in school, subject
matter changes too fast to rely only on what is learned in school.
Several strategies can be tried:
• Interdisciplinary workshops;
• Summer schools with laboratories, for scientists at all career stages;
• New courses for biologists in elementary physics and for physicists in wet
chemistry, biochemistry, and molecular biology;
• Introductory physics instruction that emphasizes soft systems; and
• Bilingual texts—e.g., in biology and physics—that teach the vocabulary
and basic phenomena of particular systems. (This may be a good time for another
review for physicists modeled on the landmark 1959 series in Reviews of Modern
Physics.[5])
Basic Research
Industrial and medical results will follow naturally, as they have inevitably
followed basic research in the past. Grant mechanisms can be established to
encourage the necessary interdisciplinary work.
• Special grant programs to compensate for the double jeopardy that goes
with the present system when research is judged both as biology and as physics;
• Fellowships developed and expanded for physicists to work in biological
laboratories—the NSF-NIH one-year visit program is one example; and
• Contact between university researchers and industrial scientists and be-
tween physicists, chemists, and biologists to foster collaborations, particularly
with the chemical, medical, and pharmaceutical industries.
The residue from trial and error in industrial research is an abundant source
of information for new physics. Biological systems are an inspiring source of
solved problems for doing physics in a new place. We can work to create
comfortable common ground for collaboration.
Undersupported research areas should be identified in which results will be
needed. For example, polyelectrolytes and biological polymers will be increas-
ingly used for products to displace environmentally unfriendly organic materials.
Research Facilities
For structure determination, neutron sources in particular are urgently needed.
Synchrotron x-ray, ion beam, transmission electron microscope, and surface probe
facilities are high on the list. Data processing is needed for the large amounts of
information being generated and the large computations that will be undertaken.
Overall
Intellectually, industrially, and medically, soft-material research has a poten-
tial that justifies funding increases like those being given to research in biology
and medicine.
Atomic Structure
Scanning-probe microscopes have made atomic resolution imaging of sur-
faces almost routine, with tremendous impact on surface science. We are finally
beginning to understand the important subject of thin-film growth, one atom at a
time, and can observe how atomic steps can prevent atom migration in one
direction compared with another, leading to undesirable roughness in deposited
films. Here, there is close interaction between experimental visualization and
computer modeling. A particularly exciting development in scanning-probe mi-
croscopy has been the imaging of chemical and biochemical molecules and the
possibility of monitoring chemical reactions. By choosing one molecule as the
tip of the atomic-force microscope (AFM), the forces between molecules can be
directly measured and chemical reactions sensed with unprecedented molecular
sensitivity. This has already led to new insights into the rheology of macromol-
ecules (see Chapter 5), and we can expect great advances in the near future,
especially in the biological sciences. For example, the use of “smart” tips would
allow recognition of molecules using specific receptors adhered to the tip.
The scanning-tunneling microscope (STM) views the local electronic struc-
ture, so careful image simulations must be made to deduce atomic structure. In
general, for structural studies on surfaces, the best results have been obtained by
a combination of direct STM imaging with diffraction—for example, by x-rays
or electrons. The highest directly interpretable spatial resolution for atomic
structure has been obtained with TEM (see Box 6.1); instruments capable of
resolving 1 Å have recently been demonstrated. The committee notes that, partly
because of the ~$50 million price tag for these instruments and partly because of
the damage accompanying the high accelerating voltages required, no such
instrument can be found in the United States. Researchers’ hopes are pinned on
lower accelerating voltage approaches to improved TEM resolution, such as
holographic reconstruction, focus variation, incoherent Z-contrast, and aberration
correction. However, it is troubling that work in these areas is predominantly
located in Europe and Japan; a notable exception is work on incoherent Z-contrast
imaging (see Box 6.1). A relatively recent study of trends in atomic resolution
[1] National Science Foundation Panel Report on Atomic Resolution Microscopy: Atomic Imaging
and Manipulation (AIM) for Advanced Materials, U.S. Government Printing Office, Washington,
D.C. (1993).
reconstruct objects in three dimensions at the atomic scale. This would be par-
ticularly exciting for amorphous and disordered materials; knowledge of their
atomic structure is limited to statistical averages from diffraction. Instruments to
enable this will require ~0.5 Å resolution combined with high specimen-tilt
capability (>45°). Such performance will be possible either with very high voltages or with
aberration correction.
Electronic Structure
For many research problems in condensed-matter and materials physics, it is
important to visualize the electronic structure on a near-atomic scale. STM
provides direct information about electronic states at surfaces but is often used
for purely structural analysis and has had tremendous impact on surface science.
Examples in the report include the germanium “huts” in Figure 2.13. In general,
probe microscopy combined with electron microscopy has revolutionized our
understanding of thin-film growth and epitaxy (see Chapter 2).
STM has been profitably used to examine surface electronic states and chemi-
cal reactions on the atomic level. Although detailed electronic structure calcula-
tions are needed to interpret STM images in terms of atomic positions, often the
electronic structure information is directly useful. For example, Box 6.2 gives an
example of direct STM imaging of the electronic states associated with individual
dopant atoms in semiconductors.
Electron energy-loss spectroscopy in TEM provides an important method to
obtain electronic structure from the interior of samples on a near-atomic level.
Improvements in the sensitivity of detection, using more monochromatic field-
emission electron sources and parallel detection, have led to important advances
in the last decade. For example, dopant segregation at semiconductor grain
boundaries has been identified.
Nanoproperties of Materials
One of the most significant developments of the last decade is the prolif-
eration of scanning-probe techniques for measuring the nanoproperties of ma-
terials. Figure 6.1 shows a large variety of signals that are now detectable.
Nanomechanical (force) measurements can be used to watch the behavior of
individual dislocations; optical measurements can visualize single lumines-
cent states; piezoelectric measurements can identify the effect of defects on
ferroelectrics, which have potential for high-density nonvolatile memory;
magnetic measurements can show the effect of single atoms on spin alignment
in atomic layers; ballistic electron transport can identify the electronic states
associated with isolated defects inside a film. We can expect these capabilities
to revolutionize our ability to characterize the physical properties of nanoscale
materials.
One critical issue, as semiconductor devices are scaled down in size for higher
density and speed, is the stochastic nature of the location of dopant atoms. These
atoms, which lend electronic carriers to the active semiconductor layers, are typi-
cally present in densities of only about 1 in a million. Until recent years, it was an
impossible dream to identify the exact location of these dopant atoms, but this has
recently proved possible with scanning-tunneling microscopy. Figure 6.2.1 shows
detection of the local electronic state generated by the impurity. When a semicon-
ductor structure is cleaved in vacuum, the individual impurity atoms near the sur-
face are clearly visible. The image (courtesy of Lawrence Berkeley Laboratory)
shows the position of Si dopants in GaAs as bright spots. Also present in the
image are Ga vacancies, which appear as dark spots.
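The practical consequence of "about 1 in a million" is easy to quantify. For an illustrative 100-nm cubic active region (an assumed size, not one given in the text), the expected number of dopants is small enough that Poisson fluctuations between nominally identical devices are substantial:

    import math

    # Expected dopant count and its statistical fluctuation in a small device.
    site_density = 5.0e22        # atoms per cm^3 in silicon
    doping_fraction = 1.0e-6     # "about 1 in a million"
    L_nm = 100.0                 # edge of a cubic active region, nm (assumed)

    volume_cm3 = (L_nm * 1e-7) ** 3
    N_mean = site_density * doping_fraction * volume_cm3
    sigma = math.sqrt(N_mean)    # Poisson standard deviation

    print(f"mean dopant count    : {N_mean:.0f}")        # ~50
    print(f"relative fluctuation : {sigma / N_mean:.0%}")  # ~14%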
FIGURE 6.1 Signals detectable with scanning-probe microscopes: tunneling current; light and sound; force (electrostatic, magnetic); and electron transport.
Atomic Manipulation
Whether intended or not, our atomic-scale characterization tools can change
the structures they are examining. This can be used to our advantage in manipu-
lating atoms on the atomic scale for making nanostructures. Figure 6.2 shows the
classic example of a ring of iron atoms assembled by the tip of a scanning-
tunneling microscope. The circular atomic corral shows the resonant quantum
states expected from simple theory. The imagination boggles at the possibilities
with related techniques. In principle, we can assemble arbitrary structures to test
our understanding of the physics of nanostructures and perhaps make useful
devices at unprecedented density. Two major issues will need to be addressed
before these methods can reach their full potential. First, even when we place
atoms where we choose, with few exceptions (such as the Fe atoms in Figure 6.2
at ultralow temperatures), they will not stay there. So, to assemble structures that
FIGURE 6.2 Atomic manipulation. The image shows the atomic scale capability for
patterning that is possible with the scanning-probe microscope. Atoms of Fe (high peaks)
were arranged in a circle on the surface of Cu and caused resonant electron states (the
ripples) to appear in the Cu surface. The structure is dubbed the “quantum corral.”
Related structures might one day be useful for electronic devices, where as many devices
as there are humans in the world could be assembled on an area the size of a pinhead
(1 mm²). (Courtesy IBM Research.)
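The "simple theory" behind the corral's resonances is a particle in a hard-walled circular box, whose energies are set by the zeros of Bessel functions. The sketch below evaluates the lowest few; the corral radius and the surface-state effective mass are assumed, illustrative values, not parameters quoted in this report:

    import numpy as np
    from scipy.special import jn_zeros

    # Resonances of a hard-walled circular corral:
    #   E_{m,n} = hbar^2 * j_{m,n}^2 / (2 * m_eff * R^2),
    # where j_{m,n} is the n-th zero of the Bessel function J_m.
    hbar = 1.054571817e-34              # J*s
    eV = 1.602176634e-19                # J
    m_eff = 0.38 * 9.1093837015e-31     # Cu(111) surface-state mass (assumed)
    R = 71.3e-10                        # corral radius, m (assumed)

    for m in range(3):                  # angular quantum number
        zeros = jn_zeros(m, 3)          # first three radial zeros of J_m
        E_meV = (hbar * zeros / R) ** 2 / (2 * m_eff) / eV * 1e3
        print(f"m={m}: E =", np.round(E_meV, 1), "meV")   # lowest ~11 meV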
Conclusions
Atomic visualization is a crucial part of condensed-matter and materials
physics. It is a thriving area in which advances usually driven by physics and
engineering have wide impact on science and technology. Many manufacturing
technologies depend on innovations enabled by atomic visualization equipment,
so research in the field has important economic value. We expect continued
developments, but attention must be paid to nurturing the development of appro-
priate instrumentation in close connection with scientific experiments. Depend-
ing on the nature of the visualization tool, the funding scope ranges from indi-
vidual investigator to small groups, to national centers of excellence in
instrumentation. From our success in probe microscopy, it appears we are stron-
ger at the individual-investigator level but weaker at the medium- and larger-scale
instrumentation development levels. A concern is that many new students are
attracted by computer visualization rather than experimental visualization. The
two methods are obviously complementary, and we are not yet near the point
where we can rely only on computer experiments. Thus funding must be main-
tained at a level sufficient to create opportunities that will attract high-quality
students into this field.
NEUTRON SCATTERING
The neutron is a particle with essentially the mass of the proton, a magnetic moment
because of its spin-1/2, and no electrical charge. It probes solids through the
magnetic dipolar interaction with the electron spins and via the strong interaction
with the atomic nuclei. These interactions are weak compared to those associated
with light or electrons. They are also extremely well known, which makes it
possible to use neutrons to identify spin and mass densities in solids with an
accuracy that in many cases is greater than with any other particle or electromag-
netic probe. The wavelengths of neutrons produced at their traditional source,
nuclear research reactors with moderator blankets of light or heavy water held near
room temperature, are on the order of inter-atomic spacings in ordinary solids. In
addition, their energies are on the order of the energies of many of the most
common collective excitations—such as lattice vibrations—in solids. To image
spin and mass densities, condensed-matter physicists usually aim neutrons moving
at a single velocity and in a single direction, that is, with well-specified momentum
and energy, at a sample and then measure the energy and momentum distribution of
the neutrons emerging from the sample. Such neutron-scattering experiments have
been important for the development of condensed-matter physics over the last half
century. Indeed, the impact of the technique has been such that C. Shull (Massa-
chusetts Institute of Technology) and B. Brockhouse (McMaster University) were
awarded the 1994 Nobel Prize in Physics for its development (see Table O.1). In
previous decades, neutron scattering provided key evidence for many important
phenomena ranging from antiferromagnetism, as originally posited by Néel, to
unique quantum oscillations (called rotons) in superfluid helium. But what has
happened in the last decade in the area of neutron scattering from solids and liquids,
and what is its potential for the coming decade?
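The happy coincidence of length and energy scales follows directly from the de Broglie relation. A short calculation (Python; the 300 K moderator temperature is the only input) shows that a thermalized neutron carries a wavelength of roughly an interatomic spacing and an energy of a few tens of meV, right at the scale of lattice vibrations:

    import math

    # Wavelength and energy of a neutron thermalized at room temperature.
    h = 6.62607015e-34         # Planck constant, J*s
    kB = 1.380649e-23          # Boltzmann constant, J/K
    m_n = 1.67492749804e-27    # neutron mass, kg
    T = 300.0                  # moderator temperature, K

    E = 1.5 * kB * T                 # mean thermal kinetic energy
    v = math.sqrt(2 * E / m_n)       # ~2700 m/s
    lam = h / (m_n * v)              # de Broglie wavelength

    print(f"energy     : {E / 1.602e-19 * 1e3:.0f} meV")   # ~39 meV
    print(f"wavelength : {lam * 1e10:.2f} Angstrom")       # ~1.45 Angstrom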
Overview
Three major developments of the last decade are (1) the emergence of neu-
tron scattering as an important probe for “soft” as well as “hard” condensed
matter, (2) the coming of age of accelerator-based pulsed neutron sources, and
(3) the revival of neutron reflectometry. The first development has expanded the
user base for neutron scattering far beyond solid-state physicists and chemists,
who had been essentially the only users of neutrons. The second development is
associated with a method for producing neutrons not from a self-sustaining fis-
sion reaction, but from the spallation—or evaporation—that occurs when ener-
getic protons strike a fixed target. As depicted in Figure 6.3, a spallation source
consists of a proton accelerator that produces short bursts of protons with ener-
gies generally higher than 0.5 GeV, a target station containing a heavy metal
target that emits neutrons in response to proton bombardment, and surrounding
moderators that slow the neutrons to the velocities appropriate for experiments.
Until the mid-1980s, the leading facility of this type was the Intense Pulsed
Neutron Source (IPNS) at the Argonne National Laboratory. In the last decade,
the clear leader by a very wide margin has been the ISIS facility in the United
Kingdom. Successful developments, especially at ISIS, have given the neutron-
scattering field growth prospects that it has not had since the original high-flux
nuclear reactor core designs of the 1960s. This follows because pulsed sources
are more naturally capable of taking advantage of the information and electronics
revolutions and because the cooling power required per unit of neutron flux is
almost one order of magnitude less than for nuclear reactors.
The revival of neutron reflectometry seems at first glance less momentous
than the emergence of neutron scattering as a soft condensed-matter probe or the
emergence of accelerator-based pulsed neutron sources. However, as so much of
modern condensed-matter physics and materials science revolves about surfaces
and interfaces, neutron scattering could hardly be considered a vital technique
FIGURE 6.3 Drawing of the planned Spallation Neutron Source at Oak Ridge National
Laboratory. The basic design features are similar to those for the Los Alamos Neutron
Scattering Center (LANSCE) and the planned European Spallation Source (ESS). The
linear accelerator takes protons to 1.33 GeV, while the accumulator ring groups them into
1 µsec bursts, occurring at a repetition rate of 60 Hz, which then impinge onto a liquid
mercury target. The neutrons emanate in corresponding bursts from the target and feed
scattering instruments with flight paths with lengths from 2 to 100 m. (Courtesy of Oak
Ridge National Laboratory.)
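The pulsed time structure is what the instruments exploit: all wavelengths leave the target in the same brief burst, and the arrival time t over a known flight path L then labels each detected neutron's wavelength through lambda = h t / (m_n L). A minimal worked example (the 10 m flight path and the arrival times are chosen for illustration, within the 2 to 100 m range quoted above):

    # Time-of-flight wavelength labeling at a pulsed source.
    h = 6.62607015e-34         # Planck constant, J*s
    m_n = 1.67492749804e-27    # neutron mass, kg
    L = 10.0                   # flight path, m (assumed)

    for t_ms in (1.0, 2.5, 5.0):          # arrival times after the burst
        lam = h * (t_ms * 1e-3) / (m_n * L)
        print(f"t = {t_ms} ms  ->  lambda = {lam * 1e10:.2f} Angstrom")
    # -> 0.40, 0.99, 1.98 Angstrom: one burst serves many wavelengths.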
cores rather than via the electromagnetic interaction to the atomic electrons,
neutrons can be equally sensitive to light (low-Z) and heavy (high-Z) atoms,
whereas x-rays always couple much more strongly to the heavy elements. Neu-
trons are especially sensitive to the lightest and arguably most important element
of all, hydrogen, and quite sensitive to its rival in importance, oxygen. In addi-
tion, it is possible to change atoms’ visibility to neutrons, without appreciably
changing the bonding or chemistry of a particular atom, by changing the isotope.
Thus, particular sites in a material can be labeled for investigation of their micro-
scopic coordinates and motion. Finally, the combination of various neutron
sources, as well the ability to tailor wavelength distributions even at a single
source, permits the examination of structures with characteristic length scales
from angstroms to microns. The weak coupling nature of the probe means that
even as the wavelengths of the neutrons used experimentally change over three
orders of magnitude, the scattering cross sections do not, and absorption and
resolution corrections remain simply calculable.
One of the most lively areas in condensed-matter science over the last decade
has been that of transition metal oxides, a field dramatically revived by the
discovery of high-temperature superconductivity in oxides of copper. The
materials are generally combinations of relatively heavy lanthanides, medium-
weight transition metals, and light oxygen atoms. With this set of constituents,
neutron scattering was ideally positioned to make an important contribution to
the structure determination. The technique did not disappoint. First, it was
demonstrated that the key structural elements common to all of the cuprate super-
conductors are nearly square planar arrays of copper and oxygen. The signifi-
cance of this simple finding is impossible to overstate. That copper oxygen
planes are the key feature of the high-temperature superconductors has been the
starting point for essentially all of our thinking about high-temperature supercon-
ductivity as well as searches for materials with better superconducting properties.
Beyond revealing the ubiquity of the copper oxygen planes, neutron diffraction
has revealed how the planes appear singly, in pairs, or even as triplets, sometimes
with and sometimes without copper oxide chains in intervening layers. The
picture of the intervening layers as reservoirs that provide charges for the copper
oxygen planes is largely the result of a combination of neutron diffraction and
classical measurements of bulk electrical properties such as resistivity.
Extensive work has shown close correlations between structural details and
superconducting properties. For example, in a mercury-based compound exhib-
iting an extraordinarily high Tc, which itself is very sensitive to pressure, neutron
diffraction showed dramatic changes in the atomic coordinates with applied pres-
sure (see Figure 6.4).
Even after 10 years of indispensable contributions to the understanding of
high-Tc superconductivity, neutron diffraction retains its unique, driving role in
this field. A recent illustration of this is the excitement generated by the discov-
ery that certain materials very closely related to the high-temperature supercon-
FIGURE 6.4 Mercury-based cuprates exhibit not only the highest transition tempera-
tures Tc for superconductivity, but also extraordinarily pressure-dependent Tc's. High-
resolution neutron diffraction at pulsed spallation sources has revealed the complex struc-
tures (at right) of these compounds. In addition, the penetrating power of the technique
has been exploited to examine the pressure dependence of the structure. There is an
astonishing 0.25 Å contraction of the marked copper-to-oxygen (Cu2-O3) distance as
pressure is applied to raise Tc from 138 to 160 K.
doing. The electrons of particular interest are the outer electrons because they
account for the chemical bonding and electrical and magnetic properties of a
solid. The neutron couples to these electrons through the magnetic dipole inter-
action; because its energy is typically much too small to excite the electrons from
the core where they form a closed shell with zero net orbital and spin-angular
momenta, the cores are invisible. The outer electrons are easiest to see with
neutrons when they live on a regular lattice and their spin orientations repeat
periodically. In this case, they produce diffraction spots entirely analogous to
those associated with atomic nuclei. As Shull showed in the 1950s, it is then
possible to do magnetic crystallography to image the spin arrangements in virtu-
ally any magnet. In the last decade, magnetic crystallography with neutrons has
continued to be among the most essential tools in condensed-matter physics.
Again, high-temperature superconductivity has been an area of accomplishment.
The crucial experiment showed, shortly after the discovery of the super-
conductivity, that the insulating and undoped parent compounds of
the superconductors are actually very simple antiferromagnets. In the decade
since this experiment, the superconductivity and magnetism of the cuprates have
been inextricably intertwined. As for the neutron diffraction experiments that
revealed the microscopic structures of the high-Tc compounds, the last decade’s
progress in high-temperature superconductivity would be unimaginable without
the early magnetic diffraction data on the parent compounds.
Magnetic diffraction has played a similar role in other subfields that have
been active in the last decade. For example, it established a definite link between
the magnetism and exotic superconductivity of certain actinide and rare-earth
intermetallics, also known as heavy fermion compounds. Also, a particularly
important and elegant set of experiments explored the coupling between mag-
netic layers through intervening nonmagnetic layers in thin-film multilayer struc-
tures grown by molecular-beam epitaxy. The structures show great promise as
“spin valves” for application to computer disk drive read heads. The optimiza-
tion of their performance requires complete knowledge of the atomic and spin
densities responsible for the desirable giant-magnetoresistance behavior. Using
polarized-neutron reflectivity, one can obtain a depth profile of the direction and
magnitude of the magnetic moment in these materials with 2- to 3-Å resolution.
Early polarized-neutron reflectivity studies confirmed that maximum giant mag-
netoresistance is correlated with an antiparallel alignment of the magnetic layers
across the nonmagnetic interlayers. More recent experiments revealed the com-
plex interplay between the magnetic structure and the physical characteristics.
with the electron spins—namely, through the magnetic dipole coupling between
the neutron spin and the magnetic fields. The relevant wave numbers and corre-
sponding apparatus are different, but the concepts remain the same. The most
famous mesoscopic field inhomogeneities in condensed-matter physics are those
associated with type-II superconductors. Here, the superconductor accommo-
dates an external field by admitting quantized vortices containing normal (metal-
lic) state cores embedded in a superconducting matrix. The vortices typically
arrange themselves to form a lattice with inter-vortex separations of order 100 to
1000 Å. One of the triumphs of neutron scattering in the 1960s was the verifica-
tion of the vortex lattice picture for conventional, low-temperature type-II super-
conductors. Given this early success, it should come as no surprise that as
unconventional superconductors such as actinide intermetallics and cuprates were
discovered in the 1980s and 1990s, neutrons were used to image their vortex
lattices. They provided key evidence for two of the most important new ideas
about superconductivity. The first idea is that real solids could actually display
superconductivity more akin to the superfluidity of helium-3 than to the super-
conductivity of ordinary solids like aluminum; the second is that collections of
vortices can have intricate phase diagrams much like those of complicated or-
ganic molecules in solution.
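The quoted 100 to 1000 Å separations follow from flux quantization alone: each vortex admits exactly one flux quantum Phi0 = h/2e, so the applied field fixes the lattice constant of the triangular vortex array, a = sqrt(2 Phi0 / (sqrt(3) B)). A short sketch (fields chosen for illustration):

    import math

    # Triangular (Abrikosov) vortex-lattice spacing versus applied field.
    Phi0 = 2.067833848e-15     # flux quantum h/2e, Wb

    for B in (0.1, 1.0, 10.0):  # applied field, tesla
        a = math.sqrt(2 * Phi0 / (math.sqrt(3) * B))
        print(f"B = {B:5.1f} T  ->  a = {a * 1e10:5.0f} Angstrom")
    # -> 1545, 489, 154 Angstrom, the range accessible to neutrons.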
and technical fields from biology to integrated circuit packaging. Figure 6.5
shows the shape change undergone by diblock copolymers adsorbed on a glass
substrate. As the conditions are changed, the copolymers undergo a transition
from a mushroom- to a brush-like shape, which correlates with a change in the
adhesive properties of the coated surface.
Dynamics
Nuclear and magnetic structure determinations represent the most common
and widely understood application of neutron scattering. However, since the
work of Brockhouse in the 1950s, the study of lattice vibrations and magnetic
fluctuations has also had an impact on condensed-matter physics and materials
science. As have neutron determinations of magnetic and nuclear structure,
FIGURE 6.5 One of the most important developments of the 1990s has been the revival
of neutron reflectometry. Formerly used as a tool for establishing absolute neutron-
scattering cross sections, it has become a major technique for surface and interface sci-
ence, with particularly significant accomplishments in the fields of soft matter and mag-
netoresistive films. The figure shows data for the “mushroom” to “brush” transition for
polymers attached to a substrate. Raw data are at right, the directly deduced density
profiles are in the middle, and the inferred morphology is shown at left. The radically
different reflectivity profiles at right attest to the ability of the technique to discriminate
between the different arrangements of the polymers at the surface. (Courtesy of Los
Alamos National Laboratory.)
neutron scattering from excitations in solids has strongly influenced our thinking
about transition metal oxides. For example, work on the cuprate superconductors
includes measurements of the phonon density of states, which can be used as
inputs into traditional calculations of the superconducting transition temperature.
The apparent failure of such calculations remains an important motivation to
search for a new theory for the superconductivity of the cuprates. Much more
recent experiments provide similarly complete magnetic excitation spectra, which
can now be used in analogous tests of “conventional” magnetic theories of high-
temperature superconductivity.
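A sense of why the traditional calculations fail can be had from the McMillan formula, the standard estimate of a phonon-mediated transition temperature. With a phonon scale typical of the oxides and even generous coupling strengths, the formula cannot approach the observed ~90 K (the parameter values below are typical illustrative choices, not fits to any particular cuprate):

    import math

    # McMillan estimate of a phonon-mediated Tc:
    #   Tc = (theta_D/1.45) * exp(-1.04*(1+lam) / (lam - mu*(1 + 0.62*lam)))
    theta_D = 400.0     # Debye temperature, K (typical oxide scale, assumed)
    mu_star = 0.1       # Coulomb pseudopotential (customary value)

    for lam in (0.5, 1.0, 1.5):   # electron-phonon coupling constant
        Tc = (theta_D / 1.45) * math.exp(
            -1.04 * (1 + lam) / (lam - mu_star * (1 + 0.62 * lam)))
        print(f"lambda = {lam:3.1f}  ->  Tc = {Tc:4.1f} K")
    # -> roughly 4, 23, and 38 K: far below the ~90 K of the cuprates.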
An interesting development has been the use of neutrons to probe the elec-
tronic gap function and pair-breaking excitations in the superconducting
state. Neutrons are unique for this application because they allow the only
superconducting spectroscopy that is a true bulk probe capable of examining
short-wavelength phenomena with high energy resolution.
The continuing work on the dynamics of ordered solids has coexisted with
a rapidly growing enterprise concerned with the dynamics of fluids and soft
matter. Important experiments include those that have verified one of the key
concepts in polymer science—that polymers in a melt move in snake-like fash-
ion within tubular structures formed by their neighbors. The experiments are
noteworthy not only for their scientific impact, but also because they required
the use of an instrument—the neutron spin echo spectrometer—that operates on
a principle unknown to the founders of inelastic neutron scattering, Fermi and
Brockhouse.
These other developments not only coexisted with the great materials dis-
coveries of the last decade, but were actually a prerequisite for the significant
contributions ultimately made by neutron scattering to the elucidation of these
discoveries. It is our judgment that further improvements in all five of the listed
categories are inevitable in the next decade. The inevitability follows from the
continued effects of accelerator-based pulsed neutron sources and instrumenta-
tion, and of advances driven by the microelectronics revolution on the entire
field, that has been hampered by the limits imposed by the modest incident fluxes
that even modern research reactors can provide. Advances in both accelerator-
based pulsed neutron research and microelectronics have made it possible to
multiplex many experiments on an enormous scale, for example, simultaneously
collecting 10⁶ usable pixels of information where the old reactor-based methods
would yield a single pixel. Thus, the field of neutron scattering has changed
qualitatively over the last decade, even though only one major new source (ISIS
in the United Kingdom) has been completed. The figure of merit for many
important experiments has been transformed from the reactor power to the infor-
mation rate. In the coming decade, we expect the useable information rates as
measured by the product of incident flux delivered by the beam optics and the
number of independent pixels to grow in tandem with the microelectronics revo-
lution (Figure 6.6). Beam optics are also on a growth curve driven by improve-
ments in thin film-technology and x-ray and light optics, and so are also likely to
improve. The continued growth in capabilities will make many new experiments
possible, as well as allow old measurements to be performed with greater preci-
sion. The new experiments might include measurements of vortex lattice dynam-
ics in type-II superconductors, investigations of the magnetic aspects of the quan-
tum Hall effects, characterization of fluid flow in small capillaries, and studies of
electromigration at silicon-metal interfaces. Of course, the most exciting experi-
ments will be those dealing with phenomena we are unaware of today.
Neutron experiments have continued to be popular even in the absence of a
new neutron source because of the neutron’s uniqueness as a probe of condensed
matter and because neutron experiments are so readily improved by ongoing
advances in microelectronics and thin-film technology. However, merely trans-
ferring technology developed for other uses to its antiquated neutron-scattering
centers will not allow the United States to recapture its lead in neutron science.
There is no substitute for constructing a new high-power spallation source with
many high-flux beam lines. In recognition of this, the government is supporting
[Plot: "pixel power" (MW, logarithmic scale, roughly 10² to 10⁷) for inelastic neutron instruments at reactors (HFBR, ILL, HFIR, and Brockhouse's original spectrometer) and at spallation sources (HET/ISIS, MARI/ISIS, MAPS/ISIS, SNS, ESS).]
FIGURE 6.6 The information acquisition rate (left) for single-crystal inelastic experi-
ments is the product of the flux at the sample (expressed in nuclear reactor equivalent
MW units) and the number of usable pixels within which the scattered and incident
neutrons fall. Brockhouse was co-recipient of the 1994 Nobel Prize for developing the
single-pixel triple-axis spectrometer (see Table O.1), which dominated inelastic neutron
scattering until around a decade ago. The development of pulsed spallation sources and
fast rotor chopper spectrometers has moved inelastic neutron scattering onto a growth
curve (Moore’s Law) driven largely by the electronic data-processing industry. The
neutron sources identified are the HFBR (High Flux Beam Reactor, Brookhaven National
Laboratory), HFIR (High Flux Isotope Reactor, Oak Ridge National Laboratory), and ILL
(Institut Laue-Langevin, France) reactors and the ISIS (Rutherford-Appleton Laboratory,
United Kingdom), SNS (proposed Spallation Neutron Source, Oak Ridge National Labo-
ratory), and ESS (European Spallation Source, currently unsited) accelerator-based facil-
ities. MAPS, HET, and MARI correspond to ISIS instruments at different stages of
development. [Physics World, 33 (December 1997).]
the construction of precisely such a source, the Oak Ridge Spallation Neutron
Source, whose completion will be the big event of the next decade for neutron
science.
SYNCHROTRON RADIATION
In the past 30 years, the use of infrared, ultraviolet, and x-ray synchrotron
radiation (SR) for condensed-matter and materials physics research, as well as
research in the other natural sciences, engineering, and technology, has blos-
somed. The pace and scientific range of SR utilization have increased even more
rapidly during the past decade because of source improvements, advanced instru-
mentation, and greater availability of beam time. Further impetus
was provided by the construction of new facilities with extreme performance. As
a consequence of these developments, approximately 4,000 scientists from
academia, industry, and government laboratories now use U.S. SR facilities.
In the 1960s and 1970s, research was initiated using SR produced by the
bending magnets at storage rings designed for high-energy physics. As shown in
Figure 6.7, such rings provided about four orders of magnitude greater brightness
than the best in-laboratory sources. In addition, the radiation covered a very
broad spectrum, in contrast to the line source x-ray tubes then available. These
features made a number of previously unfeasible experiments possible.
FIGURE 6.7 History of (8 keV) x-ray sources. Brilliance (or brightness) is defined as
source intensity per illuminated solid angle. (Courtesy of Argonne National Laboratory.)
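For orientation, the quantity plotted in such comparisons is conventionally normalized not only per solid angle but also per source area and spectral bandwidth; the standard working definition (our addition, following common synchrotron usage rather than any formula given in this report) is

```latex
\mathrm{brilliance} \;=\;
\frac{\text{photons per second}}
     {(\text{mm}^2\ \text{source area})\,
      (\text{mrad}^2\ \text{solid angle})\,
      (0.1\%\ \text{bandwidth})}.
```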
Because of the science and the large user communities resulting from the
first-generation sources, they were joined by second-generation, high-brilliance
rings designed specifically for SR research in the mid-1980s. The increased
brightness and the greater availability of these sources, as well as the increased
flux achieved by insertion devices at all sources, further expanded both the science
and the user community. (Insertion devices, known as wigglers and undulators,
are magnetic arrays that cause the charged particles to undergo quasi-sinusoidal
paths, producing far brighter radiation than can be achieved with bending mag-
nets at the same storage ring.)
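The distinction between the two kinds of insertion device can be made quantitative. In the standard treatment (added here for context; the report itself does not give the formula), an electron of Lorentz factor γ traversing a magnetic array of period λ_u and peak field B₀ radiates on axis at

```latex
\lambda_n = \frac{\lambda_u}{2 n \gamma^2}\left(1 + \frac{K^2}{2}\right),
\qquad
K = \frac{e B_0 \lambda_u}{2\pi m_e c},
```

where n is the harmonic number and K the deflection parameter. For K of order 1 or less (an undulator), radiation from successive periods interferes coherently and the power is concentrated into narrow harmonics; for K much greater than 1 (a wiggler), the output approaches an incoherent sum of bending-magnet spectra.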
During the past decade, third-generation rings [the Advanced Light Source
(ALS) and Advanced Photon Source (APS; shown in Figure 6.8) in the United
States, SPring-8 in Japan, and the European Synchrotron Radiation Facility in
France], with still higher brightness (by 4 to 5 orders of magnitude) and many straight sections
for insertion devices, have been constructed. At the same time, the first- and
second-generation rings have been modified so that their performances have
increased markedly. Such increases form the basis of revolutions in the research
that utilizes SR—a process that is likely to continue well into the next century
with new sources.
FIGURE 6.8 Overview of the recently completed Advanced Photon Source (APS) at
Argonne National Laboratory. Electrons circulate in the storage ring and emit brilliant x-
ray beams that are used to probe the structure of condensed matter at scales ranging from
the atomic to the macroscopic. (Courtesy of Argonne National Laboratory.)
Protein Crystallography
The goal of understanding life has evolved into a large interdisciplinary
effort that integrates information extending from experimental results at the
atomic and molecular levels to studies of organelle, cellular, and tissue organiza-
tion and function. Atomic-level information will increasingly provide the means
through which biological function, and malfunction that leads to disease, will be
understood. Macromolecular crystallography has provided the vast majority of
information about three-dimensional biological structure and will play an even
greater role in the future. Information relating structure to function has also led to
the development and successes of new approaches to drug discovery (often called
structure-based drug design).
The unique properties of SR—namely, its tunability and high brilliance—
have allowed it to play a seminal role in these advances. So important is SR to
protein crystallography that 73 percent of new structures published in Nature
and 60 percent of those published in Science in 1995 used synchrotron-based
data, and these percentages continue to grow. Some of the most important results
include the structure of the myosin head, which has led to a molecular-level
interpretation of muscle contraction; the structure of cytochrome oxidase, which
is the enzyme that carries out the final step in mammalian respiration; the struc-
ture of the enzyme nitrogenase responsible for production of most of the assimi-
lable (fixed) nitrogen in our biosphere; the structure of the ribozyme, which is a
catalytic form of RNA; numerous plant and animal virus structures (for an impor-
tant example, see Figure 6.9), as well as studies of their interaction with potential
antiviral drugs; and structures of a variety of enzymes, like topoisomerases,
involved in DNA transformations and regulation.
FIGURE 6.9 Structural information is central to the development of models and cures for
disease, and today is largely established using methods and large facilities originally
developed for the condensed-matter and materials physics community. The figure shows
the exterior envelope protein (upper right) of the AIDS (acquired immunodeficiency syn-
drome) virus together with a neutralizing antibody (left) and the human CD4 receptor
(lower right). The structures are deduced from x-ray diffraction data collected at the
National Synchrotron Light Source (NSLS) at Brookhaven National Laboratory.
The most exciting of the fundamental studies have been those of continuous phase
transformations. SR studies of transformations between the different phases of
monolayers adsorbed on flat, well-ordered substrates, as well as on reconstructed
surfaces, have enabled significant tests of exact results from the theory of two-
dimensional physics. Similarly, the step-bunching transition on single crystal
surfaces, originally predicted theoretically, has been studied experimentally with
SR. The surface-scattering methods have also been used for studies of liquid-
surface and amorphous thin-film structures. A particularly interesting result is
the observation of atomic layering near the surface of liquid metals.
Surface sensitive SR techniques have been applied increasingly to signifi-
cant technological problems. Several major projects have been aimed at under-
standing thin-film formation via vapor deposition and sputtering. Because the
normal surface-sensitive techniques cannot be used in these “high-pressure” situ-
ations, synchrotron-based surface-scattering studies now account for a large por-
tion of the existing in situ characterization of these processes. An important
extension of the surface-scattering technique is grazing-incidence fluorescence,
now being applied by several semiconductor manufacturing companies to micro-
contamination analysis of Si wafers.
Also of technological importance are the surface-sensitive electron-yield
techniques in the vacuum ultraviolet (VUV)/soft x-ray region, which measure
bond lengths of adsorbate/surface bonds and orientations of molecules adsorbed
on surfaces. These are now being used for practical applications such as deter-
mining the mechanisms governing the orientation of molecules of importance to
liquid-crystal displays. In addition, x-ray magnetic circular dichroism is having a
significant impact on the science of magnetic recording.
The number of applied problems involving surfaces, surface layers, and
interfaces is enormous. We anticipate enormous growth in experiments related to
corrosion, electrochemistry, tribology, environmental interfaces, and the like as
more beam lines are commissioned around the world. Electrochemistry deserves
special mention because already SR together with probe microscopies has trans-
formed this field from one primarily dependent on electrochemical measure-
ments and related modeling to the study of electrode processes at the molecular
level.
Microspectroscopy
The availability of the third-generation sources has made possible higher
resolution microspectroscopies. The higher brightness at ALS has enabled con-
struction of an improved scanning transmission x-ray microscope (STXM) as
well as a scanning photoelectron microscope (SPEM). The STXM is especially
useful for micro-composition and orientation measurements in multicomponent
polymers and organic systems. Spatially resolved x-ray photoelectron spectros-
copy is now being applied to a range of materials issues, such as examining the
chemical structure of Ti-Al alloys reacted with graphite and the chemical speciation
on bond pads of integrated circuits, in order to correlate chemical state with
phenomena like adhesion and chemical residues in vias.
At APS, an x-ray microprobe with a FWHM focal spot size of 0.33 µm and
a flux density exceeding 5 × 10¹⁰ photons/m²·s (0.01 percent bandwidth) has been
developed. Using a root specimen in its natural hydrated state, elemental sensi-
tivity significantly better than 10 ppb and a minimum elemental detection limit of
0.3 fg have been demonstrated. The x-ray microprobe is being used in a variety of
environmental and biological research projects. Figure 6.10 shows that it is also
being applied to the characterization of semiconductor device structures.
[FIGURE 6.10 X-ray microbeam (about 1 µm) diffraction study of an electro-absorption modulator/distributed-feedback (DFB) laser: InP(004) intensity profiles, plotted against ∆q/q₀ (percent), distinguish the modulator and laser sections of the device.]
Photoemission Spectroscopy
Angle-resolved photoemission spectroscopy (ARPES) at SR sources has
proven to be a unique tool when addressing the question of the electronic struc-
ture of solids, most notably the variation of the band energies with respect to
momentum.
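The kinematics behind the technique are worth stating. In the standard analysis (supplied here for orientation rather than quoted from this report), conservation of energy and of in-plane momentum for an electron photoemitted by a photon of energy hν from a solid with work function φ give

```latex
E_{\mathrm{kin}} = h\nu - \phi - |E_B|,
\qquad
k_{\parallel} = \frac{\sqrt{2 m E_{\mathrm{kin}}}}{\hbar}\,\sin\theta,
```

so that measuring the kinetic energy and emission angle θ of the outgoing electron determines the binding energy E_B as a function of the crystal momentum k_∥.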
Over the last decade, such experiments have played an important role in
advancing our understanding of high-temperature superconductors. The signifi-
cant improvements in beam intensity and energy resolution obtained from
undulators and new spectrometers have facilitated the discovery of a number of
fascinating features in the electronic structure of the high-Tc superconductors.
The most notable consequence is the beginning of a detailed view of how conven-
tional band theory breaks down for these materials. In addition, ARPES has
provided images of the unconventional superconducting gap functions of the
cuprates.
Magnetic Scattering
Although the coupling of x-rays to magnetic moments is considerably smaller
than that of neutrons or electrons, the extremely high-SR intensities have enabled
qualitatively new kinds of experiments complementary to those performed with
neutrons. In particular, the availability of radiation of tunable energy and polar-
ization has led to spectroscopies that promise the separation of the orbital and
spin magnetization densities in solids. SR has made very precise characteriza-
tions of the magnetic behavior of a variety of rare-earth, transition-metal, and
actinide systems possible. The naturally high resolution not only sharpens these
characterizations but also opens the way to qualitatively new measurements.
Infrared Investigations
One development that was not foreseen a decade ago was the rise of infrared
techniques using SR. For example, the vacuum ultraviolet ring at the National
Synchrotron Light Source (NSLS) at Brookhaven provides infrared light that is
10³ times brighter than typical thermal sources and highly stable. It also
produces more power than thermal sources in the far infrared and is a pulsed
source suitable for time-resolved spectroscopy with subnanosecond resolution.
This source has enabled infrared spectroscopy to be applied to problems such as
the dynamics of adsorbates on metals and semiconductors, and photoconductivity.
Infrared SR studies have also probed superconductivity and are playing a very
important role in the development and exploration of new magnetic materials.
Another very exciting area of “small-scale” instrumentation involves lever-
aging microfabrication technology, as driven by the microelectronics industry, to
do or enable physics experiments. There are several different aspects of this. The
first relates to the custom design of electronic circuitry specifically configured for
some special laboratory instrumentation function. What used to require the effort
of numerous people hand-wiring large numbers of components together to even-
tually produce a rack full of instrumentation can now frequently be reduced to an
application-specific integrated circuit (ASIC) designed to the need, along with a
few other high-function, but standard, integrated circuits.
A second aspect of leveraging microfabrication involves special-purpose
technology developed to fulfill some engineering need, but using it for physics
applications as well. A good example of this is low-Tc superconducting electron-
ics. For the past several years high-quality foundry service has been available for
producing prototype superconducting digital circuitry. This same foundry ser-
vice has been used to fabricate on-chip experiments to study the physics of
Josephson junctions, the behavior of arrays of superconducting devices, the per-
formance of high-frequency mixers and antennae for radio astronomy, and so on.
In addition, this technology can be used to fabricate all manner of integrated
SQUIDs including, for example, magnetometers with small pickup-loop struc-
tures to be used in scanning SQUID microscopy applications. This is a fabrica-
tion service available to everyone at a very modest cost.
A third aspect of leveraging is related to microelectromechanical systems
(MEMS). MEMS is a rapidly growing engineering field, closely linked to the
microelectronics industry, that has developed a wide variety of devices such as
micromotors, microactuators, and microflow-controllers. MEMS in the form of
microcantilever structures are at the heart of many scanning-probe implementa-
tions. MEMS technology is beginning to provide some exciting opportunities to
do physics in unconventional ways on very small quantities of matter. At present,
cantilever structures in one form or another are the basis for many such experi-
ments, but it is clear that MEMS can provide an ideal platform for a wide variety
of physics experimentation. We can expect to see a rapid expansion of “labora-
tory-on-a-chip” concepts and implementations in the near future.
In their pursuit to gain control of entities of the very smallest dimensions,
scientists are developing extremely sensitive sensors to detect and analyze very
weak physical and chemical effects involving minute amounts of material. The
basic sensing element used in one such study is a silicon microcantilever like that
used in an atomic-force microscope. This microcantilever bends in reaction to
the forces imposed on it by various phenomena under investigation. Several
methods can be applied to detect the motion and deformation of the cantilever
including optical and electrical techniques, the latter using piezoresistors.
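A back-of-the-envelope estimate (our illustration, with assumed dimensions typical of such cantilevers rather than values from this report) shows why these structures make such sensitive force detectors:

```python
# Estimate the stiffness, resonance, and thermally limited force noise of
# a rectangular silicon microcantilever. All dimensions and operating
# conditions below are assumed illustrative values.
import math

kB = 1.380649e-23        # Boltzmann constant, J/K

# Assumed silicon properties and cantilever geometry
E   = 1.69e11            # Young's modulus of Si, Pa
rho = 2329.0             # density of Si, kg/m^3
L, w, t = 100e-6, 10e-6, 1e-6   # length, width, thickness, m

k  = E * w * t**3 / (4 * L**3)                 # spring constant, N/m
f0 = 0.162 * (t / L**2) * math.sqrt(E / rho)   # fundamental resonance, Hz

# Thermally limited force noise for quality factor Q in bandwidth B
T, Q, B = 300.0, 10_000.0, 1.0                 # K, dimensionless, Hz
F_min = math.sqrt(4 * kB * T * k * B / (2 * math.pi * f0 * Q))

print(f"k  = {k:.3f} N/m")                     # ~0.4 N/m
print(f"f0 = {f0/1e3:.1f} kHz")                # ~140 kHz
print(f"thermal force noise ~ {F_min:.2e} N")  # ~1e-15 N (femtonewtons)
```

Even at room temperature, the thermal noise floor of such a lever is around a femtonewton in a 1 Hz bandwidth, which is why cantilever-based sensors can resolve the minute forces described above.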
Such tools will ultimately allow the fabrication of objects and devices that are
beyond our current imagination. Numerous advances in the fabrication of new
materials are discussed in this report.
Another path to the discovery of new phenomena has been through subject-
ing matter to unusual or extreme conditions of low temperature, pressure, and
magnetic field. Measurements under such unusual conditions have sometimes
led to dramatic surprises, with important results that could not have been antici-
pated in advance. Here, the committee looks at a few of the results and describes
the present state of the technology and future prospects. There are common
themes for research under extreme conditions: (1) The equilibrium or static limits
of minimum temperature, maximum pressure, or maximum magnetic field lie a
factor of 10 or more below the transient values that can be achieved. (2) The
instrumentation required for preparing specimens and performing measurements
has become increasingly sophisticated and frequently requires facilities available
only at large laboratories. (3) The miniaturization of specimens and apparatus is
becoming increasingly beneficial to each technology.
One long-standing effort has been the cooling of dilute mixtures of ³He in liquid
⁴He to very low temperatures. The goal has been to discover a superfluid pairing
transition similar to the one that occurs in pure liquid ³He. The paired state might
be quite different from that in the pure liquid. So far no one has succeeded in
cooling the dilute mixtures to temperatures less than 200 µK, largely because heat
transfer between the metal coolant and the dilute mixture is so difficult.
Significantly lower temperatures have been achieved for isolated systems
not in thermal equilibrium with surrounding matter. One class of experiments
has been the study of spontaneous nuclear magnetism in metals such as copper,
silver, and platinum. At the end of the magnetic cooling process, after the large
magnetic field used to polarize the nuclei is removed, the magnetic moments are
quite cold. The thermal equilibration times are quite long, sometimes more than
10⁸ seconds. Through clever determinations of the spin entropy, temperatures as
low as tens of picokelvin have been deduced. The method has been used to
examine a variety of unusual states of magnetic order. As Figure 6.11 illustrates,
some of the experiments have even been conducted in connection with neutron
diffraction at reactor facilities. The neutrons were used to image the nuclear
magnetic moments in the ordered state.
A spectacular example of the cooling of a metastable isolated system of
matter has been the studies of Bose-Einstein condensation in gases of sodium,
rubidium, and lithium. Modern optical techniques in conjunction with magnetic
traps and radio frequency fields have been used to cool dilute gases of these
atoms to sub-microkelvin temperatures. The hot atoms are kicked out of the
magnetic trap by the radio frequency electromagnetic fields. At the very low
temperatures, the atoms obeying Bose statistics can simultaneously occupy the
same state—the condition for Bose-Einstein condensation. Quantum interfer-
ence between clusters of atoms has been demonstrated. The effect is analogous
to interference between two sources of coherent light.
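The condensation temperature can be estimated from first principles. For a uniform ideal Bose gas of number density n and atomic mass m, the textbook result (included here for context; it is not derived in this report) is

```latex
k_B T_c = \frac{2\pi\hbar^2}{m}\left(\frac{n}{\zeta(3/2)}\right)^{2/3},
\qquad \zeta(3/2) \approx 2.612,
```

which for the dilute alkali gases used in these experiments (densities of order 10¹⁴ atoms per cubic centimeter) gives a condensation temperature of order a microkelvin, consistent with the sub-microkelvin temperatures quoted above.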
Static pressures attainable in the laboratory have grown to a little more than
1 Mbar. Quite a number of new dense phases of matter have been discovered.
Of particular interest has been the transition of normally completely insulating
materials, such as solid xenon and sulfur, into conducting metallic states.
As the atoms are squeezed closer together, the outer electrons become free.
The small size of the specimens presents a special challenge in instrumenta-
tion. The entire experiment has to be built in micron-sized volumes. Neverthe-
less, nanotechnology has been used to apply electrical leads and even magnetic
resonance coils to diamond anvil devices. The crystal structure of new high-
pressure phases of matter has been determined with x-rays from synchrotron
sources. The measurement of the pressure is also a difficult matter. Calculation
of the force per unit area is frequently insufficient because the stresses are not
uniformly distributed. Instead, a combination of calculated pressures and ex-
trapolation of material properties such as fluorescence frequencies must be care-
fully compared in many experiments to establish a reliable pressure scale.
A special goal of high-pressure research in recent years has been the search
for the elusive metallic state of solid hydrogen. It would be an especially
interesting discovery because hydrogen should be one of the easiest materials for
which to calculate an equation of state with fundamental theory. The pressure
predicted for the metallic transition in hydrogen is very close to the values cur-
rently being produced.
Beyond providing information about systems of fundamental interest to
condensed-matter physicists, high-pressure research is essential for understand-
ing the composition and properties of Earth’s interior. Recent experiments have
led to significant new findings on phase transformations associated with deep
earthquakes, for example.
Further progress in achieving higher pressures will probably be achieved
through use of stronger materials. For example, studies of tungsten and iron
suggest that they become even stronger at megabar pressures.
Transient pressures greater than 2 Mbar have been obtained in shock waves.
The maximum pressure lasts only a few nanoseconds. Nevertheless, most of the
existing high-pressure and high-temperature data have been obtained with the use
of gas guns, high explosives, and even nuclear detonations. The development of
high-intensity lasers provides a potentially attractive complement to these meth-
ods, particularly for equation of state studies at high energy densities. By focus-
ing a short-pulse, intense laser beam on a sample, a rapidly expanding plasma is
created, which, in turn, drives a shock wave into the sample; laser-induced shock-
wave experiments to obtain high-pressure data (in excess of a megabar) have
been carried out for more than a decade. However, concerns have existed regard-
ing the accuracy of the data owing to the lack of planarity of the shock front,
preheating of the material ahead of the shock front, difficulty in determining the
steadiness of the wave front because of the small sample size, and the absence of
absolute pressure and volume data. Recent improvements in beam smoothing
and other experimental developments have improved the quality of the propagat-
ing shock wave.
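Shock experiments yield equation-of-state points through the conservation laws across the shock front. The standard Rankine-Hugoniot jump conditions (stated here for completeness; the report does not write them out) connect the measured shock and particle velocities U_s and u_p to the state behind the front:

```latex
\rho_0 U_s = \rho\,(U_s - u_p), \qquad
P - P_0 = \rho_0 U_s u_p, \qquad
E - E_0 = \tfrac{1}{2}\,(P + P_0)\,(V_0 - V),
```

so that measuring U_s and u_p for a known initial state (ρ₀, P₀, E₀) fixes one point on the Hugoniot curve of the material.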
FIGURE 6.12 Higher fields are associated with higher energies, smaller length scales,
and more extreme technologies and environments. (Courtesy of Bell Laboratories, Lu-
cent Technologies.)
FIGURE 6.13 Magnet technology, as expressed in the maximum field reached in nonde-
structive experiments, has grown exponentially over the last century. We anticipate that
the incorporation of high-Tc superconductors will assure continued growth. (Courtesy of
Bell Laboratories, Lucent Technologies.)
Such transient experiments have been important in gaining information about the high-
field behavior of high-temperature superconductors, the optical properties of
matter, and the conducting properties of unusual metallic compounds.
[Plot: operations per second versus year, from roughly 10³ for electro-mechanical accounting machines, SEAC, and MANIAC, through the IBM 704 and 7030, CDC 6600 and 7600, and the Cray-1, Cray X-MP, and J-90 vector machines, to parallel and distributed systems (CM-1, CM-2, Intel Gamma, CM-200, Intel Delta, CM-5, Cray T3D, Origin 2000, and the ASCI systems above 10¹²); a second scale gives the turbulence Reynolds number Rλ, up to about 10⁴.]
FIGURE 6.14 The plot shows the growth of the number of operations per second from
1940 to 2010 for the fastest available "supercomputers." Objects of different shapes are
used to distinguish serial, vector, and parallel architectures. All machines through the
Cray-1 were single-processor designs. The line marked "three-dimensional Navier-Stokes tur-
were single-processor machines. The line marked “three-dimensional Navier-Stokes tur-
bulence” shows, in rough terms, the extent to which the increased computing power has
been harnessed to obtain turbulent solutions by solving three-dimensional Navier-Stokes
equations. Turbulence is used here as an example of one of the grand and difficult
problems needing large computing power. The computing power limits the size of the
spatial domain over which computations can be performed. The Reynolds number
(marked on the right as Rλ) is an indicator of this size. (Courtesy of Los Alamos National
Laboratory.)
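Why the turbulence line in Figure 6.14 climbs so slowly can be seen from Kolmogorov scaling. As a rough guide (our addition, not a statement from this report), resolving a three-dimensional flow down to the dissipation scale η requires

```latex
N \sim \left(\frac{L}{\eta}\right)^{3} \sim Re^{9/4}
```

grid points, because L/η grows as Re^{3/4}; each factor-of-ten gain in Reynolds number therefore costs more than two orders of magnitude in memory, and still more in arithmetic per time step.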
In the early days, memory was limited and expensive (a serious constraint for large
problems) and microprocessors were much slower than discrete-component de-
signs. Today memory density has risen enormously and prices have fallen dra-
matically. Microprocessors have risen several orders of magnitude in speed and
gone from 8- to 64-bit word lengths. After some tumultuous history involving
exploration of different parallel architectures, shared-memory parallel systems
combining many processors communicating via high-speed digital switches are
now rapidly developing and have largely replaced pure vector processors. Clock
speeds for microprocessors are now so high that memory access time is often far
and away the greatest limitation on overall speed. One of the great software
challenges now is to find algorithms that can take maximum advantage of parallel
architectures consisting of many fast processors coupled together.
In addition to hardware advances, the last decade has seen some revolution-
ary advances in algorithms for the study of materials and quantum many-body
systems. Improved algorithms are crucial to scientific computation because the
combinatorial explosion of computational cost with increasing number of de-
grees of freedom can never be tamed by raw speed alone. (Consider the daunting
fact that in a brute-force diagonalization of the lowly Hubbard model, each site
added multiplies the computational cost by a factor of approximately 64: each
site quadruples the dimension of the Hilbert space, and dense diagonalization
scales as the cube of that dimension, so the cost grows by 4³ = 64.)
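The explosion is easy to make vivid with a few lines of arithmetic (our illustration of the scaling argument above, not code from the report):

```python
# Toy illustration of the combinatorial explosion in brute-force exact
# diagonalization of the Hubbard model. Each lattice site contributes 4
# local states (empty, up, down, doubly occupied), and dense
# diagonalization costs O(dim**3) operations.

for n_sites in range(2, 12, 2):
    dim = 4**n_sites          # many-body Hilbert-space dimension
    cost = dim**3             # rough operation count, dense diagonalization
    print(f"{n_sites:2d} sites: dim = 4^{n_sites} = {dim:>9,d}, "
          f"cost ~ {cost:.1e} ops")

# Adding one site multiplies dim by 4 and hence the cost by 4**3 = 64,
# which is why better algorithms matter more than raw speed.
```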
In the last two decades computational condensed-matter and materials sci-
ence has moved from the initial exploratory stages (in which numerical studies
were often little more than curiosities) into the mainstream of activity. In some
areas today, such as the study of strongly correlated low-dimensional systems,
numerical methods are among the most prominent and successful methods of
attack. As new generations of students trained in this field have begun to
populate the community, numerical approaches have become much more common.
Nevertheless, it is fair to say that computational physics is still in its infancy.
Pushing the frontiers of computational physics and materials science is im-
portant in its own right but also important because training students in this area
provides industry and business with personnel who not only have expertise on the
latest hardware architectures but also bring with them physicists’ methods and
points of view in analyzing and solving complex problems.
Progress in Algorithms
In spite of its great enthusiasm, the committee offers a warning before pro-
ceeding. Specifically, numerical methods have become more and more powerful
over time, but they are not panaceas. Vast lists of numbers, no matter how
accurate, do not necessarily lead to better or deeper understanding of the under-
lying physics. It is impossible to do computational physics without first being a
good physicist. One needs a sense of the various scales relevant to the problem at
hand, an understanding of the best available analytical and perturbative ap-
proaches to the problem, and a thorough understanding of how to formulate the
interesting questions.
For fermions, the antisymmetry of the wave function produces both positive and
negative contributions to the Feynman path integral. This means that the weights
cannot be interpreted as probabilities that can be sampled by Monte Carlo
methods. The fixed-node
approximation attempts to get around this problem by specifying a particular
nodal structure of the wave function. This has yielded very useful results in some
cases in which the nodal structure is understood a priori. Some workers are now
moving beyond small atoms and molecules to simple solids and have obtained
good results for lattice constants, cohesive energies, and bulk moduli.
Fermion Monte Carlo path integral methods continue to be applied success-
fully to lattice models such as the Hubbard model, but again the sign problem is
a serious limitation. For example, it is still difficult to go to low-enough tempera-
tures to search for superconductivity, even in highly simplified models of high-Tc
materials.
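The severity of the sign problem can be illustrated with a toy reweighting calculation (ours; the distributions below are invented purely for illustration):

```python
# Toy numerical illustration of the fermion "sign problem": when Monte
# Carlo weights carry alternating signs, the signal-to-noise ratio of the
# estimator collapses as the average sign shrinks.
import numpy as np

rng = np.random.default_rng(0)
n_samples = 100_000
observable = rng.normal(loc=1.0, scale=1.0, size=n_samples)

for p_minus in (0.0, 0.4, 0.49):
    # signs: +1 with probability 1 - p_minus, -1 with probability p_minus
    signs = np.where(rng.random(n_samples) < p_minus, -1.0, 1.0)
    avg_sign = signs.mean()
    # reweighted estimator: <O> = <s * O> / <s>
    estimate = (signs * observable).mean() / avg_sign
    print(f"<sign> = {avg_sign:6.3f}  ->  estimate = {estimate:8.3f}")

# As <sign> -> 0 the denominator vanishes and the variance of the
# estimate diverges; exponentially many samples are needed to compensate.
```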
Bosons, which are much easier to treat numerically, also pose interest-
ing problems. “Dirty boson” models have been used to describe helium films
adsorbed on substrates and to treat the superconductor-insulator transition. With
this model one makes the approximation that Cooper pairs are bosons and as-
sumes (not necessarily justifiably) that there are no fermionic degrees of freedom
at zero temperature.
As raw speed increases, we will naturally learn new algorithms for relaxation
that exploit the extra processors.
The behavior of biological matter on the largest scales depends in detail on the dynamics and energetics not only
down to the protein level, but even down to the way in which each protein is
hydrated by its aqueous environment.
Quantum Computers
Theoretical analysis of the quantum computer, in which computation is per-
formed by the coherent manipulation of a pure quantum state, has advanced
extremely rapidly in recent years and indicates that such a device, if it could ever
be constructed, could solve some classes of computational problems now consid-
ered intractable. A quantum computer is a quantum mechanical system able to
evolve coherently in isolation from irreversible dephasing effects of the environ-
ment. The “program” is the Hamiltonian. The “input data” is the initial quantum
state into which the system is prepared. The “output result” is the final, time-
evolved state of the system. Because quantum mechanics allows a system to be
in a linear superposition of a large number of different states at the same time, a
quantum computer would be the ultimate “parallel” processor.
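The Hamiltonian-as-program picture can be made concrete with a one-qubit toy model (our sketch, using an assumed Rabi frequency; it is not drawn from this report):

```python
# Minimal illustration of the description above: the "program" is a
# Hamiltonian H, the "input data" is an initial state, and the "output
# result" is the time-evolved state. Here a single spin-1/2 evolves under
# H = (hbar * omega / 2) * sigma_x, taking |0> into a superposition.
import numpy as np
from scipy.linalg import expm

hbar = 1.0                                   # natural units
omega = np.pi                                # assumed Rabi frequency
sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])

H = 0.5 * hbar * omega * sigma_x             # the "program"
psi0 = np.array([1.0, 0.0], dtype=complex)   # the "input data": |0>

for t in (0.0, 0.5, 1.0):
    U = expm(-1j * H * t / hbar)             # coherent time evolution
    psi = U @ psi0                           # the "output result"
    print(f"t = {t:.1f}: P(|1>) = {abs(psi[1])**2:.3f}")

# At t = 0.5 the spin is in an equal superposition (P = 0.5); at t = 1.0
# it has flipped completely (P = 1.0): a one-qubit "computation."
```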
The basic requirement for quantum computation is the ability to isolate,
control, and measure the time evolution of an individual quantum system, such as
an atom. To achieve the goal of single-quantum sensitivity, condensed-matter
experimentalists are pursuing studies of systems ranging from few-electron quan-
tum dots to coherent squeezed photon states of lasers. When any of these reach
the desired single-quantum limit, experiments to probe the action of a quantum
computer will become feasible, drawing on techniques that have proven so vital
for imaging atoms and spins in materials ranging from high-temperature super-
conductors to polymers.
In previous decades, key events in condensed-matter and materials physics
have included the exploitation of new inventions and of investments in large
facilities, both serving as special-purpose tools for the field. The last decade is
unique in that the
major event relating to such tools is actually not directly connected with inven-
tions and facilities. Instead, it is the same phenomenon that has profoundly
transformed nearly all other aspects of our society—namely, the information
revolution. An obvious consequence of the information revolution for condensed-
matter and materials physics is the recent progress in computational materials
science. Less obvious but equally important is the ability to collect and manipu-
late progressively larger quantitative data sets and reliably execute increasingly
complex experimental protocols. For example, in neutron scattering, data-gather-
ing rates and, more crucially, the meaningful information content have risen in
tandem with the exponential growth of information technology.
What will happen in the next decade? Although we cannot predict inspired
invention, we anticipate progress with ever-shrinking and more-brilliant probe
beams and increasingly complete, sensitive, and quantitative data collection. One
result will be the imaging and manipulation of steadily smaller atomic land-
scapes. Another will be the analysis and successful modeling of complex mate-
rials with interesting properties in fields from biology to superconductivity.
The promised performance improvements with applications throughout ma-
terials science will come about only if balanced development of both large-scale
facilities and technology for small laboratories takes place. For example, deter-
mination of the crystal structures of complex ceramics and biological molecules
is likely to remain the province of neutron and synchrotron x-ray diffraction,
performed at large facilities, while defects at semiconductor surfaces will most
likely remain a topic for electron and scanning-probe microscopy, carried out in
individual investigators’ laboratories and small facilities. Thus, the cases for
large facilities and small-scale instruments are equally strong. Although the
larger items such as the neutron and photon sources appear much more expensive
than those that benefit a single investigator, recent European experience suggests
that the costs per unit of output do not depend very strongly on the scale of the
investment, provided of course that it is properly chosen, planned, and managed.
Information technology is also blurring the difference between large and small
facilities, as they all become nodes on the Internet. One important upshot will be
that the siting of large facilities as well as the large-versus-small facility debates
will largely cease to be of importance to scientists.
In addition to the construction of large facilities such as the SNS and APS,
healthy research in instrumentation science is crucial to the development of
improved tools for atomic visualization and manipulation. Although we have
impressive success stories to point to, such as the rise to dominance of the probe
microscopies, continued investment in instrumentation science cannot be taken
for granted.
Priorities
• Build the Spallation Neutron Source and upgrade existing neutron sources.
• Fully instrument and exploit the existing synchrotron light sources and do
R&D on the x-ray laser.
• Build state-of-the-art nanofabrication facilities staffed to run user pro-
grams for the benefit of not only the host institutions but also universities, gov-
ernment laboratories, and businesses that do not have such facilities.
• Recapitalize university laboratories with state-of-the-art materials fabri-
cation and characterization equipment.
• Build medium-scale centers devoted to single issues such as high mag-
netic fields or electron microscopy.
• Exploit the continuing explosion in information technology to visualize
and simulate materials.
Condensed-matter and materials physics emerged from World War II closely tied
to defense technology. The arms race, Sputnik, the energy crisis, and the informa-
tion revolution stimulated continued growth in the field over the subsequent
decades. For most of this period, there was sustained growth in the federal
investment in science, including condensed-matter and materials physics. This
federal role in fundamental research, originally articulated by Vannevar Bush at
the end of World War II in Science: The Endless Frontier,1 was substantially
justified on the basis of national defense.
In the late 1980s, the end of the Cold War, the emergence of the global
economy, and the growing federal deficit combined to shake the foundations of
the national R&D enterprise. In the absence of a major military threat, invest-
ments in the defense establishment were reduced, including support for R&D.
Overall federal R&D investments, which peaked at $80 billion (in 1997 dollars)
in 1987, declined 20 percent in the following decade (see Figure 7.1) as priorities
shifted away from defense, and the desire to reduce the deficit applied increased
pressure to the discretionary part of the federal budget. Federally supported basic
research, performed mostly at universities, fared much better, increasing by 30
percent between 1985 and 1995 (see Table 7.1). This increase was dominated by
increased investment in the life sciences; only modest gains were recorded for
physics. At the same time, competition in the global economy (which itself was
enabled by communications advances rooted largely in condensed-matter and
materials physics) forced industry to sharpen the focus of its R&D investments.
Industrial R&D turned away from long-term physical sciences and toward projects
with more immediate economic return, reducing fundamental research invest-
ments that have been essential to the development of new technologies.
A DECADE OF CHANGE
The transition to the global economy represents a significant opportunity for
condensed-matter and materials physics. Competitiveness in a fast-moving
economy is critically dependent on advances in materials for a broad range of
applications from information technology to transportation to health care.
Condensed-matter and materials physics has responded effectively over the past
decade, supporting continued innovation in electronic and optical materials, while
developing new thrusts in complex fluids, macromolecular systems, and biologi-
cal systems (collectively known as “soft materials”), and nonequilibrium pro-
cesses. At the same time, science has become increasingly international, and
U.S. leadership in many areas of science and technology, including condensed-
matter and materials physics, is being challenged. Continued progress in con-
densed-matter and materials physics is critical to sustained economic competi-
1Vannevar Bush, Science, the Endless Frontier: A Report to the President, U.S. Government Print-
ing Office, Washington, D.C. (1945), reprinted by the National Science Foundation, Washington,
D.C. (1960).
[FIGURE 7.1 Federal R&D expenditures (billions of 1997 dollars), 1970-1995, broken down by performer: industry, the federal government, universities, non-profits, and FFRDCs.*
*Federally Funded Research and Development Centers operated by universities, industry, and non-profits.]
FIGURE 7.1.1 Bell Laboratories at the time of the invention of the transistor.
(Courtesy of Bell Laboratories, Lucent Technologies.)
[Plot: federal funding for condensed-matter and materials physics (millions of dollars), 1986-1996, by category: DOE facilities,1 DOE,2 NSF,3 DOD,4 NASA,5 and NIST.6
1 Major facilities operations supported by the DOE Division of Materials Sciences.
2 DOE Division of Materials Sciences (research).
3 NSF Division of Materials Research (research and facilities).
4 Estimates from CMMP-related DOD physics and materials research.
5 Estimates from CMMP-related NASA microgravity and space science programs.
6 Estimates from CMMP-related research and facilities operation at NIST.]
FIGURE 7.3 Growth in the number of users at Department of Energy synchrotron facili-
ties, 1982-1997.
[Pie chart (partial data): materials sciences, 36 percent; chemical sciences, 12 percent.]
FIGURE 7.4 Use of national synchrotron facilities by scientific discipline shows that
more than half of the 4000 users in 1997 worked in fields other than condensed-matter
and materials physics.
Care must be taken that interdisciplinary research proposals are not lost in the
competition with other proposals that more neatly fit the boundaries of estab-
lished disciplines.
Partnerships across disciplines and among universities, government laborato-
ries, and industry are becoming increasingly important in bringing together the
resources and diverse skills needed to continue advancing knowledge in condensed-
matter and materials physics. For many leading-edge research projects, it is neither
practical nor cost-effective to assemble the required capital and intellectual re-
sources at a single location. Teams form and dissolve as research directions change,
and the diversity of institutions and performers ensures that a wide range of projects
and approaches can be accommodated. This is a fundamental strength of the U.S.
R&D system. Modern communications, an outgrowth of condensed-matter and
materials physics, is essential to these partnerships.
Another significant change in the practice of condensed-matter and materials
physics has been the emergence of major national research facilities. These
facilities, which include synchrotrons, neutron sources, and microcharacterization
centers, have had an extraordinary impact on the ability of researchers to investi-
gate ever-smaller, lower cross-section, more dilute, and more complex systems.
Accordingly, there has been a spectacular increase in the use of these facilities.
These powerful tools transcend condensed-matter and materials physics to serve
large user communities from other disciplines, including biology, which now
consumes more than 25 percent of the beam time at national synchrotron facili-
ties. As a result, condensed-matter and materials physics is having a significant
impact on many fields with which it had little connection just a decade ago.
Institutional change is never comfortable, and it is a continuing challenge to
U.S. science. Research organizations are being expected to improve organi-
zational effectiveness and resource utilization, create new partnerships, and serve
customers better. Customers, ranging from corporate manufacturing arms to
sponsors to facility users, are increasingly involved and demanding. All sectors
of condensed-matter and materials physics have undergone profound change in
recent years. Industrial laboratories have been downsized and redirected. Government labo-
ratories struggled with substantial reductions in resources, increased regulation,
and mission and operational reform. Research universities came under increas-
ing pressure to reduce overhead, cut costs, and become more responsive to the
public and to industry. All of these changes have potentially positive outcomes,
and condensed-matter and materials physics is particularly well positioned to
contribute effectively in this new environment. However, great care will be
required to navigate these changes while preserving the research infrastructure of
the nation for the long term.
The outputs of research (ideas and discoveries) are difficult to measure, and the
outcomes (the advance of knowledge or the introduction of new products) are
difficult to quantify or relate to specific programs. Consequently, proxy indicators
related to the desired outputs or outcomes are developed for research activities.
These proxies might include papers, prizes, and patents to take the place of ideas
and discoveries in measuring research outputs, and citations and productivity
growth to take the place of advances in knowledge in measuring research outcomes.
Study                              Estimated Return (percent)a

Industry-Level Studies
Terleckyj (1980)                   NSb
Griliches-Lichtenberg (1984a)      4
Patel-Soete (1988)c                6
Mohnen-Nadiri-Prucha (1986)        11
Terleckyj (1974)                   15
Wolff-Nadiri (1987)                15
Sveikauskas (1981)                 16
Bernstein-Nadiri (1988)            19
Link (1978)                        19
Griliches (1980)                   21
Bernstein-Nadiri (1991)            22
Scherer (1982, 1984)               36

a For studies for which Nadiri (1993) reports a range of possible returns, the midpoint of that range is
provided in this table.
b Not significantly different from zero in a statistical sense. This result, however, may be a reflection
of limitations in the quantity of data used in the study.
c Economy-level study (all industries grouped together).
SOURCE: M.I. Nadiri, "Innovations and Technological Spillovers," Working Paper No. 4423,
National Bureau of Economic Research, Cambridge, Mass. (1993).
Human Capital
Many economists attribute current economic growth to investments in hu-
man capital, the capacity to generate new ideas that organize and rearrange exist-
ing resources to achieve productivity gains. Examples range from new ways of
processing steel and polymers, to the soaring performance of electronic and
optical systems, to the growth in software and computer applications. These
advances share common characteristics of innovation and integration of knowl-
edge—the economics of ideas. Human capital, enabled by investments in educa-
tional and research institutions, drives economic growth by providing the new
ideas that allow escape from a traditional economic future limited by scarcity of
resources and the law of diminishing returns.
Unlike physical resources, which are limited in a finite world, the potential
of human capital is nearly limitless. But it is not free. A commitment to educa-
tion, to research, and to the free exchange of information and ideas is essential. In
the modern global economy, world leadership is impossible without leadership in
human capital.
Infrastructure
Laboratories, instrumentation, and facilities for performing state-of-the-art
condensed-matter and materials physics are becoming increasingly expensive to
develop and operate. At the same time, more universities are competing effec-
tively for federal research dollars. It is becoming increasingly apparent that the
needed infrastructure cannot be duplicated at even a few dozen universities, let
alone the more than 180 institutions nationwide that grant physics Ph.D.s. It is
estimated that nearly half of university laboratories in the physical sciences re-
quire refurbishment in order to be used effectively. Government institutions,
including the DOE laboratories, are also burdened with an aging infrastructure.
At the same time, there has been a significant increase in the availability of
modern research infrastructure at major national and regional research facilities
and centers. These facilities provide needed infrastructure on a shared basis. In
addition, there is substantial research infrastructure at government laboratories
(beyond the major facilities) that is contributing to alleviating this problem. The
number of guest researchers from universities and industry at DOE national
laboratories has skyrocketed in the past 15 years, and more could be accommo-
dated with modest investments. An integrated solution, combining revitalization
of university laboratories with modernization and increased community utiliza-
tion of government laboratories, seems to provide the most cost-effective option
to serve the infrastructure needs of the condensed-matter and materials physics
community (see Box 8.1).
Older facilities such as the Synchrotron Ultraviolet Radiation Facility II (SURF II)
at the National Institute of Standards and Technology remain highly active and
productive.
In contrast, construction of the Advanced Neutron Source, which was to
have been a reactor source at Oak Ridge National Laboratory, was canceled in
1995. In addition, the High Flux Beam Reactor at Brookhaven National Labora-
tory is currently not operating, and there is opposition to restarting it. On a
positive note, the neutron-scattering facilities at the National Institute of Stan-
dards and Technology have been recently upgraded and the High Flux Isotope
Reactor (HFIR) at Oak Ridge is being upgraded. Nevertheless, the neutron-
scattering field now depends on an array of facilities that is even smaller than
what was already found inadequate by national review committees in the 1980s
and early 1990s.
As a consequence of these concerns, the DOE's Basic Energy Sciences
Advisory Committee (BESAC) recently undertook reviews of its existing and
proposed neutron and synchrotron radiation facilities. BESAC considered the
neutron situation at a meeting in Washington, D.C., on February 5-6, 1996.
Drawing on reports from several national panels, BESAC made a series of
recommendations for neutron-scattering facilities.
A subsequent panel was charged to assess the scientific case for synchrotron
radiation over the next decade, determine the size and nature of the user
community both globally and by facility, and assess the operation of the
facilities, including their plans and vision for the future. The panel was also asked
to make detailed recommendations under various budget scenarios and to con-
sider the consequences of closing one or more of the BES synchrotron facilities.
In its report to BESAC, the panel concluded unanimously that “. . . shutdown
of any one of the four DOE/BES synchrotron light sources over the next decade
would do significant harm to the nation’s science research capabilities and would
considerably weaken our international competitive position in this field.” The
panel recommended the following actions (in priority order):
1. Continue operation of the three hard x-ray sources (APS, NSLS, and
SSRL) for their large user communities, with a modest investment for general
user support and for R&D on a fourth-generation x-ray source. (Recommended
expenditures at both NSLS and SSRL were $3 million per year above the FY
1998 DOE-requested levels.)
2. Develop new beam lines at APS and modernize existing facility beam
lines at NSLS. (Recommended expenditures were $8 million per year at APS and
$3 million per year at NSLS.)
3. Fund ALS at the FY 1998 DOE-requested level of $35 million.
The panel also recommended funding proposed upgrades to the NSLS and
SSRL facilities at an estimated cost of $27 million per year over 3 years. These
upgrades should be carried out under a special initiative separate from the normal
budgeting process. For example, BES might seek partnerships with other divi-
sions within DOE and with other agencies such as the National Institutes of
Health (NIH) or could request a budget add-on. This recommendation was
intermediate in priority between the second and third priorities above.
The committee recommends support for operations and upgrades at existing
synchrotron facilities, including modest investments for user support. State and
regional initiatives, built on connections between R&D activity and regional
economic development, are also becoming important resources for research support.
Within condensed-matter and materials physics, there are many approaches
to the conduct of research, ranging from individual investigators to large multi-
disciplinary teams and from bench-scale experiments to studies at major national
facilities. There is also a diversity of federal sponsors for condensed-matter and
materials research, led by DOE, NSF, and the defense agencies. No single
approach can span the diversity of research problems, and an effective national
research program requires balance among a variety of performers and approaches.
Achieving this balance requires an appreciation of the R&D roles of industry,
universities, and government laboratories and of how to establish relationships
among performers that encourage research synergy, funding leverage, and scien-
tific productivity. The diversity of performers, institutions, and funding sources
is a fundamental strength of condensed-matter and materials physics, essential to
progress in a field that embraces both fundamental and applications-oriented
research and spans both small and big science.
It is essential that the system of research and education adapt and become more
flexible so that it can better serve the future needs of industry and the nation.
R&D interactions with industry involve both universities and government
laboratories. For large companies with in-house R&D capabilities, access to
unique skills or facilities at universities or government laboratories drives the
interaction. These interactions often involve a financial commitment by the
company to the partner organization. For smaller companies, many of which have
no R&D capabilities of their own, interactions with universities and government
laboratories may be the only way to assemble the necessary R&D resources to
address a technical barrier. The success of cooperative research interactions
depends critically on pursuing projects that contribute to the core missions of all
involved organizations. An urgent need is the development of workable intellec-
tual property arrangements, particularly between industry and universities (see
Box 8.3).
Industry interactions with universities and government laboratories help pro-
vide a strategic context for condensed-matter and materials physics research.
This is extremely important in a field that has such a direct impact on the economy
and for which there are insufficient resources to explore every opportunity.
Choices have to be made, and interactions with industry provide useful input as to
what may be important. As a first step, physics departments should become more
involved in the industrial liaison programs at their universities, and government
laboratories should engage in cost-shared research in their competency areas with
industry to provide a window on technology. These interactions should not drive
condensed-matter and materials physics research at universities and government
laboratories, but they can provide a context for appreciating the broader implica-
tions of the research.
Meeting these challenges will require sustained efforts to integrate research and
education. To make this possible, we must work
proactively on many fronts. Universities and departments must be at the forefront
of this effort and can greatly increase the attraction of physics in basic ways.
Discovery
Encouraging discovery is critical to the strategic success of condensed-
matter and materials physics. Incremental progress is not sufficient to maintain
leadership in science or technology. Although discovery cannot be predicted, it
often occurs when researchers explore the boundaries between fields and when
advances in instrumentation make possible new measurements. Both can be en-
couraged within the federal R&D system. Funding must be made available for
research at the interfaces between disciplines. For example, the new field of
molecular mechanics falls between structural biology and macromolecular phys-
ics. New mechanisms must be developed to encourage and evaluate interdiscipli-
nary proposals, which are often lost in a peer-review process organized according
to traditional disciplines. A multiplicity of funding sources is also essential to
ensure that bold, new ideas are given an opportunity to succeed. Increased
flexibility for agency program managers to take risks in funding decisions should
also be encouraged. New facilities and instrumentation create new opportunities
in condensed-matter and materials physics, and continued support for facilities
and for broad access to them must be emphasized. Finally, the strategic context
of the research should be understood, particularly in condensed-matter and mate-
rials physics, where the coupling to technology is so strong. The strategic context
of a research area encompasses the related technological issues and opportunities.
A broad appreciation of strategic context is important both in planning research
and in recognizing significant potential research developments. This apprecia-
tion is most effectively acquired through interactions with industry through re-
search partnerships, personnel exchanges, and consulting arrangements.
Scientific Themes
Chapters 1 through 6 of this report identify the key scientific questions that
are expected to drive the subfields of condensed-matter and materials physics for
the next decade. Specific areas of emphasis for future condensed-matter and
materials physics research are suggested. In this section, the committee addresses
• In many areas, materials synthesis and processing has become the rate-
limiting step to continued progress. The United States has lagged in the develop-
ment of materials-synthesis and processing capabilities despite strong recom-
mendations from the National Research Council report, Materials Science and
Engineering for the 1990s: Maintaining Competitiveness in the Age of Materials.1
Access to facilities for nanofabrication and crystal growth is needed, as well as
increased emphasis on processing research.
• The increasing power of computers foreshadows a shift from empiricism
toward increased predictability in materials development. Although in its in-
fancy, this shift presents significant challenges to and opportunities for materials
theory and computational physics. The prospects for accelerating progress in
condensed-matter and materials physics through simulation of complex systems
are truly revolutionary.
1Materials Science and Engineering for the 1990s: Maintaining Competitiveness in the Age of
Materials, National Academy Press, Washington, D.C. (1989).
Today, researchers in electronics, optics, computing, and many other areas probe
the secrets of materials and materials-related phenomena. This is a new era, as
vast new arenas ranging from subtle
quantum phenomena, to macromolecular science, to the realm of complex mate-
rials become increasingly accessible to fundamental study. It is a time of excep-
tional opportunity to perform pioneering research at the technological frontier—
a frontier enabled by advances in condensed-matter and materials physics.