Introduction To Quantum Chromodynamics
John Campbell
Theoretical Physics Department, Fermilab, Batavia, Illinois, USA
Joey Huston
Department of Physics and Astronomy, Michigan State University, East Lansing,
Michigan, USA
Frank Krauss
Institute for Particle Physics Phenomenology, Durham University, Durham, UK
Great Clarendon Street, Oxford, OX2 6DP,
United Kingdom
Oxford University Press is a department of the University of Oxford.
It furthers the University’s objective of excellence in research, scholarship,
and education by publishing worldwide. Oxford is a registered trade mark of
Oxford University Press in the UK and in certain other countries
© John Campbell, Joey Huston, and Frank Krauss 2018
The moral rights of the authors have been asserted
First Edition published in 2018
Impression: 1
No part of this publication may be reproduced, stored in a retrieval system, or transmitted,
in any form or by any means, other than as expressly permitted by law and by licence.
This licence does not extend to the use of any trademarks in any adaptations of the content
This is an open access publication. Except where otherwise noted, this work is distributed under and
subject to the terms of a Creative Commons Attribution 4.0 International License (CC BY 4.0)
a copy of which is available at https://creativecommons.org/licenses/by/4.0/.
This title has been made available on an Open Access basis
through the sponsorship of SCOAP3.
Contents

1 Introduction
1.1 The physics of the LHC era
1.2 About this book
2 Hard Scattering Formalism
2.1 Physical picture of hadronic interactions
2.2 Developing the formalism: W boson production at fixed order
2.3 Beyond fixed order: W boson production to all orders
2.4 Summary
3 QCD at Fixed Order: Technology
3.1 Orders in perturbation theory
3.2 Technology of leading-order calculations
3.3 Technology of next-to-leading-order calculations
3.4 Beyond next-to-leading order in QCD
3.5 Summary
4 QCD at Fixed Order: Processes
4.1 Production of jets
4.2 Production of photons and jets
4.3 Production of V+jets
4.4 Diboson production
4.5 Top-pair production
4.6 Single-top production
4.7 Rare processes
4.8 Higgs bosons at hadron colliders
4.9 Summary
5 QCD to All Orders
5.1 The QCD radiation pattern and some implications
5.2 Analytic resummation techniques
5.3 Parton shower simulations
5.4 Matching parton showers and fixed-order calculations
5.5 Multijet merging of parton showers and matrix elements
5.6 NNLO and parton showers
6 Parton Distribution Functions
6.1 PDF evolution: the DGLAP equation revisited
6.2 Fitting parton distribution functions
6.3 PDF uncertainties
6.4 Resulting parton distribution functions
Acknowledgements
We are greatly indebted to a large number of people, who inspired us to do particle
physics, who did their best to teach us something, who collaborated with us, and who,
by and large, shaped our view of QCD at the LHC and other collider experiments:
we truly are standing on the shoulders of giants.
We also owe a debt of gratitude to our friends and colleagues from CTEQ and
MCnet who put up with our antics while further helping our understanding of science
in countless night-time discussions during graduate schools and other meetings. We
would like to thank Stefano Catani, Daniel de Florian, Keith Hamilton, Stefan Höche,
Simon Plätzer, Stefan Prestel, and Marek Schönherr for patiently answering our many
questions concerning some of the more specialized aspects in this book. You have been
a huge help! Any inaccuracy or error is not a reflection of your explanations but of
our limited understanding of them.
We are extremely grateful to Josh Isaacson, Pavel Nadolsky, Petar Petrov, and
Alessandro Tricoli for carefully reading parts of the book while it was being written,
pointing out conceptual shortcomings and misunderstandings from our side, the typical
errors with factors of two, unfortunate formulations, and much more. However,
for all mistakes left in the book, the buck stops with us. A list of updates, clarifications,
and corrections is maintained at the following website:
http://www.ippp.dur.ac.uk/BlackBook
We would like to thank the following for useful conversations and for providing
figures for the book: Simon Badger, Johannes Bellm, Keith Ellis, Steve Ellis, Rick Field,
Jun Gao, Nigel Glover, Stefan Höche, Silvan Kuttimalai, Gionata Luisoni, Matthew
Mondragon, Ivan Pogrebnyak, Stefan Prestel, Marek Schönherr, Ciaran Williams, Jan
Winter, and Un-ki Yang.
The genesis of this book was a review article co-written by two of us [320] and
we thank both our co-author, James Stirling, and the IOP for their collaboration and
support. Some parts of this book benefitted tremendously from us being allowed to
test them on unsuspecting students in lectures during regular courses, at graduate
schools, or summer institutes, and on our colleagues during talks at conferences and
workshops. Thank you, for your patience with us and your feedback!
Many of the plots in this book have been created using the wonderful tools apfel-
web [331], JaxoDraw [249], RIVET [291], MATPLOTLIB [638], xmGRACE, and XFIG.
On that occasion we would also like to thank Andy Buckley, Holger Schulz and David
Grellscheid for their continuous support and ingenious help with some of the finer
issues with RIVET and LATEX.
We are also grateful for the great support from our publisher, and in particular from the
team taking care of this manuscript: Sonke Adlung, Harriet Konishi and Hemalatha
Thirunavukkarasu.
Finally, of course we would like to thank our families for putting up with us while
we assembled this manuscript. Surely, we were not always the prettiest to watch
or the easiest people to have around.
1 Introduction
distances and largest energies tested in a laboratory so far. With this discovery a
50-year-old prediction concerning the character of nature has been proven.
The question now is not whether the Higgs boson exists but instead what are
its properties? Is the Higgs boson perhaps a portal to some new phenomena, new
particles, or even new dynamics? There are some hints from theory and cosmology
that the discovery of the Higgs boson is not the final leg of the journey.
periments is that they have no coupling to ordinary matter through gauge interactions
but instead couple through the Higgs boson.
These examples indicate that the SM, as beautiful as it is, will definitely not provide
the ultimate answer to the questions concerning the fundamental building blocks of the
world around us and how they interact at the shortest distances. The SM will have to be
extended by a theory encompassing at least enhanced CP violation, dark matter, and
dark energy. Any such extension is already severely constrained by the overwhelming
success of the gauge principle: the gauge sector of the SM has been scrutinized to
incredibly high precision, passing every test up to now with flying colours. See for
example [179] for a recent review, combining data from $e^+e^-$ and hadron collider
experiments. The Higgs boson has been found only recently, and it is evident that this
discovery and its implications will continue to shape our understanding of the micro-
world around us. The discovery itself, and even more so the mass of the new particle
and our first, imprecise measurements of its properties, already rule out or place severe
constraints on many new physics models going beyond the well-established SM [515].
Right now, we are merely at the beginning of an extensive programme of precision
tests in the Higgs sector of the SM or the theory that may reveal itself beyond it. It
can be anticipated that at the end of the LHC era, either the SM will have prevailed
completely, with new physics effects and their manifestation as new particles possibly
beyond direct human reach, or alternatively, we will have forged a new, even more
beautiful model of particle physics.
Fig. 1.1 A 3D layout of the LHC, showing the location of the four major
experiments. Reprinted with permission from CERN.
It seems paradoxical that the largest devices are needed to probe the smallest distance
scales. The ATLAS detector, for example, is 46 m long, 25 m in diameter and weighs
7000 tonnes. The CMS detector, although smaller than ATLAS at 15 m in diameter
and 21.5 m in length, is twice as massive, at 14,000 tonnes. This can be compared
to the CDF detector at the TEVATRON, which was only 12 m × 12 m × 12 m (and 5000
tonnes). The key to the size and complexity of the LHC detectors is the need to
measure the four-vectors of the large number of particles present in LHC events, whose
momenta can extend to the TeV range. The large particle multiplicity requires very
fine segmentation; the ATLAS detector, for example, has 160 million channels to read
out, half of which are in the pixel detector. The large energies/momenta require, in
addition to fine segmentation, large magnetic fields and tracking volumes and thick
calorimetry.
Both ATLAS and CMS are what are known as general-purpose 4π detectors, meaning
that they attempt to cover as much of the solid angle around the collision point as
possible.2

2 The main limitation for the solid-angle coverage is in the forward/backward directions, where the
instrumentation is cut off by the presence of the beam pipe.
1.1.3.3 Challenges
To use a popular analogy, sampling the physics at the LHC is similar to trying to drink
from a fire hose. Over 1 billion proton-proton collisions occur each second, but the
limit of practical data storage is on the order of hundreds of events per second only.
Thus, the experimental triggers have to provide a reduction capability of a factor of the
order of $10^7$, while still recording bread-and-butter signatures such as W and Z boson
production. This requires a high level of sophistication for the on-detector hardware
triggers and access to large computing resources for the higher-level triggering.

Fig. 1.2 A layout of the ATLAS detector, showing the major detector
components, from en.wikipedia.org/wiki/ATLAS experiment. Original
image from CERN. Reprinted with permission from CERN.

Timing is also an important issue. The ATLAS detector is 25 m in diameter. With a bunch-
crossing time of 25 ns, this means that as new interactions are occurring in one bunch
crossing, the particles from the previous bunch crossing are still passing through the
detector. Each crossing produces 25 interactions. Experimental analyses thus face both
in-time pileup and out-of-time pileup. The latter can be largely controlled through the
readout electronics (modulo substantial variations in the population of the individual
bunches), while the former requires sophisticated treatment in the physics analyses.
The dynamic ranges at the LHC are larger than at the TEVATRON. Leptons from
W boson decays on the order of tens of GeV are still important, but so are multi-TeV
leptons. Precise calibration and the maintenance of linearity are both crucial. To some
extent, the TEVATRON has served as a boot camp, providing a learning experience for
physics at the LHC, albeit at lower energies and intensities. Coming later, the LHC has
benefited from advances in electronics, in computing, and perhaps most importantly,
in physics analysis tools. The latter comprise both tools for theoretical predictions at
higher orders in perturbative QCD and tools for the simulation of LHC final states.
Despite the difficulties, the LHC has had great success during its initial running,
culminating in the discovery of the Higgs boson, but, alas, not in the discovery of new
physics. The results obtained so far comprise a small fraction of the total data taking
planned for the LHC. New physics may be found with this much larger data sample,
but discovering it may require precise knowledge of SM physics, including QCD.
readers are referred to Appendix B.1, and for a more pedagogical introduction to these
issues to a wealth of outstanding textbooks on various levels, including the books by
Peskin and Schroeder [803], Halzen and Martin [606], Ramond [822], Field [525], and
others. For a review of QCD at collider experiments, the reader is referred to the
excellent books by Ellis, Stirling, and Webber [504] and by Dissertori, Knowles, and
Schmelling [467]. Of course, for a real understanding of various aspects it is hard to
beat the original literature, and readers are encouraged to use the references in this
book as a starting point for their journey through particle physics.
This book aims to provide an intuitive approach to how to apply the framework
of perturbation theory in the context of the strong interaction towards predictions at
the LHC and ultimately towards an understanding of the signals and backgrounds at
the LHC. Thus, even without the background discussed at the beginning of this section,
this book should be useful for anyone wishing for a better understanding of QCD at
the LHC.
The ideas for this book have been developed over various lecture series given by
the authors, as graduate-level lectures or at advanced schools on high-energy physics.
The authors hope that this book turns out to be useful in supporting the self-study
of young researchers in particle physics at the beginning of their career as well as
more advanced researchers as a resource for their actual research and as material for
a graduate course on high-energy physics.
1.2.1 Contents
Chapter 2 provides a first overview of the content of this book and aims at putting
various techniques and ideas into some coherent perspective. First of all, a physical
picture underlying hadronic interactions, and especially scattering reactions at hadron
colliders, is developed. To arrive at this picture, the ideas underlying the all-important
factorization formalism are introduced which, in the end, allows the use of perturbative
concepts in the discussion of the strong interaction at high energies and the calculation
of cross-sections and other related observables. These concepts are then used in a
specific example, namely the inclusive production of W bosons at hadron colliders.
There, their production cross-section is calculated at leading and at next-to-leading
order in the strong coupling constant, thereby reminding the reader of the ingredients
of such calculations and fixing the notation and conventions used in this book. This
part also includes a first discussion of observables relevant for the phenomenology of
strong interactions at hadron colliders. In addition, some generic features and issues
related to such fixed-order calculations are sketched. In a second part, the perturbative
concepts already employed in the fixed-order calculations are extended to also include
dominant terms to all orders through the resummation formalism. Generic features of
analytical resummation are introduced there and some first practical applications for
W production at hadron colliders are briefly discussed. As a somewhat alternative use
of resummation techniques, jet production in electron–positron annihilations and in
hadronic collisions is also discussed and, especially in the latter, some characteristic
patterns are developed.
The next chapter, Chapter 3, is fairly technical, as it comprises a presentation of
most of the sometimes fairly sophisticated technology that is being used in order to
evaluate cross-sections at leading and next-to-leading order in the perturbative expan-
sion of QCD. It also includes a brief discussion of emerging techniques for even higher
order corrections in QCD. In addition, the interplay between QCD and electroweak
corrections is touched upon in this chapter. Starting with a discussion of generic fea-
tures, such as a meaningful definition of perturbative orders for various calculations,
the corresponding technology is introduced, representing the current state of the art.
As simple illustrative examples for the methods employed in such calculations, again
inclusive W boson production and its production in association with a jet are em-
ployed. The calculations are worked out in some detail at both leading and
next-to-leading order in the perturbative expansion in the strong coupling.
The overall picture and phenomena encountered in hadron–hadron collisions, de-
veloped in Chapter 2, are discussed in the context of specific processes in Chapter 4.
The processes discussed here range from the commonplace (e.g. jet production) to
some of the most rare (e.g. production of Higgs bosons). In each case the underlying
theoretical description of the process is given, typically at next-to-leading-order
precision. Special emphasis is placed on highlighting phenomenologically relevant ob-
servables and issues that arise in the theoretical calculations. The chapter closes with
a summary of what is achievable with current technology and an outlook of what may
become important and relevant in the future lifetime of the LHC experiments.
Following the logic outlined in Chapter 2, in Chapter 5 the discussion of fixed-order
technology is extended to the resummation of dominant terms, connected to large log-
arithms, to all orders. After reviewing in more detail standard analytic resummation
techniques, and discussing their systematic improvement to greater precision by the
inclusion of higher-order terms, the connection to other schemes is highlighted. In the
second part of this chapter, numerical resummation as encoded in parton showers is dis-
cussed in some detail. The physical picture underlying their construction is introduced,
some straightforward improvements by introducing some generic higher-order terms are
presented and different implementations are discussed. Since the parton showers are
at the heart of modern event simulation, bridging the gap between fixed-order per-
turbation theory at high scales and phenomenological models for hadronization and
the like at low scales, their improvement has been the focus of active research in
the past decade. Therefore, some space is devoted to the discussion of how the simple
parton shower picture is systematically augmented with fixed-order precision from the
corresponding matrix elements in several schemes.
In Chapter 6, an important ingredient for the success of the factorization formal-
ism underlying the perturbative results in the previous two chapters is discussed in
more detail, namely the parton distribution functions. Having briefly introduced them,
mostly at leading order, in Chapter 2, and presented some simple properties, in this
chapter the focus shifts to their scaling behaviour at various orders and how this can
be employed to extract them from experimental data. Various collaborations perform
such fits with slightly different methodologies and slightly different biases in how data
are selected and treated, leading to a variety of different resulting parton distribu-
tions. They are compared for some standard candles in this chapter as well, with a
special emphasis on how the intrinsic uncertainties in experimental data and the more
theoretical fitting procedure translate into systematic errors.
any sequence the reader or teacher finds most beneficial. The third part, Chapters 8 and
9, where core experimental findings are confronted with theoretical predictions, again
is independent of the second part, although for a better understanding of theoretical
subtleties it may be advantageous to be acquainted with certain aspects there.
Finally, a list of updates, clarifications and corrections to this book is maintained
at the following website:
http://www.ippp.dur.ac.uk/BlackBook
2 Hard Scattering Formalism
centre-of-mass energy of the colliding beams, thus effectively reducing the amount
of energy carried away from the leptons through electromagnetic radiation. However,
most of the time, especially when their combined initial invariant mass is above the
mass of a resonance, such as the Z boson, the leptons will react with actual energies
that are reduced with respect to their full available energy. This effect is sometimes
called “radiative return”. The corresponding energy loss is due to the emission of
photons from the incident leptons, a process denoted as QED initial-state radiation
(ISR).
While in QED the treatment of ISR potentially is a tedious but essentially straight-
forward exercise, tractable with perturbation theory, in QCD the problem is much more
involved and fundamentally different. This is because in QCD, the colliding particles
cannot be interpreted as the fundamental quanta of the theory but rather as bound
states, hadrons such as protons, which cannot be quantitatively understood and de-
scribed through the language of perturbation theory. This conceptual gap necessitates
the construction of a framework to provide direct and systematically improvable con-
tact between the proven language of perturbative calculations of corresponding cross-
sections and the non-perturbative structure of the hadronic bound states. This section
is devoted to developing an intuitive picture of how this factorization framework
actually works, by dwelling on the limited analogy with the emissions of secondary
quanta in QED and the differences between electromagnetic and strong interactions
when considering such collisions in greater detail.
\[
  {\rm d}n_\gamma \,=\, e_e^2\,\frac{\alpha}{\pi}\cdot\frac{{\rm d}\omega}{\omega}\cdot\frac{{\rm d}b_\perp^2}{b_\perp^2}\,.
  \qquad (2.1)
\]
Fig. 2.1 Electrical and magnetic fields in blue and green of a lepton at
rest (v = 0, left panel) and with a velocity v ≈ c (middle panel). The
equivalent photons are depicted on the right panel.
Here ω is the energy of the equivalent photon, and the constant of proportionality
is obtained by integrating over the transverse plane, parameterized by the impact
parameter $b_\perp$, and by the electromagnetic coupling constant. The latter gets modified by
the relative charge of the electron, $e_e$, which of course equals $-1$. There is a maximal
energy available for these equivalent photons; naively it is given by ωmax = E, the
energy of the lepton.
In a more quantum field-theoretical way of thinking about this, the impact pa-
rameter is replaced with the transverse momentum, b⊥ ←→ k⊥ , through a Fourier
transform, and the equivalent photons are considered to be part of the lepton’s wave
function. Their spectrum then reads
\[
  {\rm d}n_\gamma \,=\, \frac{\alpha}{\pi}\cdot\frac{{\rm d}\omega}{\omega}\cdot\frac{{\rm d}k_\perp^2}{k_\perp^2}\,.
  \qquad (2.2)
\]
In such a picture, the physical lepton is given by a superposition of states with varying
photon multiplicity,
\[
  |e\rangle_{\rm phys} \,=\, |e\rangle + |e\gamma\rangle + |e\gamma\gamma\rangle + \ldots\,,
  \qquad (2.3)
\]
where the photons have different energies and transverse momenta. Due to momentum
conservation, they are typically off their mass shell. This limits the lifetime of the
quantum fluctuations like $|e\gamma\rangle$ or $|e\gamma\gamma\rangle$.
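As a rough numerical illustration of Eq. (2.2), consider the following sketch (not taken from the book; the lepton energy and the cut-offs on $\omega$ and $k_\perp^2$ are arbitrary choices for illustration). It integrates the equivalent-photon spectrum between such cut-offs, where both integrations are logarithmic:

```python
import math

ALPHA = 1.0 / 137.036   # fine-structure constant (low-scale value)

def mean_photon_number(omega_min, omega_max, kt2_min, kt2_max):
    """Integrate the equivalent-photon spectrum of Eq. (2.2),
    dn = (alpha/pi) (domega/omega) (dkT^2/kT^2),
    between the given cut-offs; both integrals are logarithmic."""
    return (ALPHA / math.pi) * math.log(omega_max / omega_min) \
                             * math.log(kt2_max / kt2_min)

# Illustrative cut-offs (in GeV and GeV^2) for a 100 GeV lepton:
print(mean_photon_number(omega_min=1e-3, omega_max=100.0,
                         kt2_min=1e-6, kt2_max=1e4))   # roughly 0.6 photons
```

The double-logarithmic growth with the available ranges in $\omega$ and $k_\perp^2$ is the same pattern that will reappear below in the discussion of QCD radiation.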
To see how this works in somewhat more detail, consider the case of the $|e\gamma\rangle$ Fock
state. Assuming massless on-shell electrons in the initial and final state, but allowing
the photons to go off-shell, the kinematics of a splitting e(P ) → e(p) + γ(k) can be
written as
\[
  P \,=\, p + k \;\longrightarrow\; 0 \,=\, 2\,p\cdot k + k^2\,.
  \qquad (2.4)
\]
\[
  k^2 \,\approx\, -k_\perp^2 \,=\, -\vec{k}_\perp^{\,2}
  \qquad (2.5)
\]
in the case where x is small, i.e., for the bulk of the photons. This also implies that
in this limit the momentum component of the photon parallel to the electron is about
$k_\parallel \approx \omega$, the energy of the photon. The fact that the emitted photons (or the electrons
or both) are off-shell is equally true for massive electrons, and it limits the lifetime of
the (eγ)-component of the wave function through the uncertainty principle. With the
photon momentum given by $k^\mu \approx (\omega, \vec{k}_\perp, \omega)$, the energy shift necessary to move it on
its mass shell is given by $\delta\omega \approx k_\perp^2/(2\omega)$, yielding
\[
  \tau_\gamma \,\sim\, \frac{1}{\delta\omega} \,\approx\, \frac{2\omega}{k_\perp^2} \,=\, \frac{2xE}{k_\perp^2}
  \qquad (2.6)
\]
as the lifetime of the fluctuation. This implies that such fluctuations live longer,
as the energy of the photon increases and the relative transverse momentum of the
photon and the electron decrease. Note that in this book natural units are being used,
so effectively $\hbar = c = 1$.
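To connect the estimate of Eq. (2.6) with everyday units, a small sketch (illustrative numbers only, not from the book) evaluates the lifetime in natural units and converts it with $\hbar \approx 6.58\times 10^{-25}$ GeV s and $\hbar c \approx 0.197$ GeV fm:

```python
HBAR_GEV_S = 6.582e-25    # hbar in GeV*s
HBARC_GEV_FM = 0.19733    # hbar*c in GeV*fm

def fluctuation_lifetime(x, energy_gev, kt_gev):
    """Lifetime of an |e gamma> fluctuation, Eq. (2.6): tau ~ 2 x E / kT^2,
    quoted in natural units (1/GeV) and converted to seconds and fm."""
    tau_natural = 2.0 * x * energy_gev / kt_gev**2    # in 1/GeV
    return tau_natural, tau_natural * HBAR_GEV_S, tau_natural * HBARC_GEV_FM

# a soft, fairly collinear photon off a 100 GeV lepton (illustrative values):
print(fluctuation_lifetime(x=0.01, energy_gev=100.0, kt_gev=0.1))
```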
In addition, of course, the photons can split into a fermion–anti-fermion pair, as yet
another quantum fluctuation, which in turn may emit further photons. This compli-
cates the wave-function picture even more. However, altogether, the distance from the
original lepton and the lifetime of all these virtual particles, the quantum fluctuations
which they manifest, are given by the amount they are off their mass shell and by the
amount of energy they carry.
The radiation that actually gives rise to the aforementioned secondary particles can
now be described, to all orders, by equations that are known as evolution equations,
since they relate the probability to find quanta with certain kinematics $x$ and $k_\perp^2$ to
similar probabilities at other scales, through emission of secondaries. In general, they
can be obtained in different approximations related to different kinematical situations,
which can be intimately related to different factorization schemes. In the collinear
factorization scheme, which will be employed in most of the remainder of this
book, the energy and especially transverse momentum of the secondary particles are
considered to be small with respect to the energy of the original lepton, and therefore
the kinematical effect merely amounts to a successive reduction of the original lepton
energy. In this scheme, the probability densities evolve with the logarithm of the
transverse momentum, cf. Eq. (2.2), as
\[
\begin{aligned}
  \frac{{\rm d}\ell(x,k_\perp^2)}{{\rm d}\log k_\perp^2} &=
  \frac{\alpha(k_\perp^2)}{2\pi}\int\limits_x^1 \frac{{\rm d}\xi}{\xi}\,
  P_{\ell\ell}\!\left(\frac{x}{\xi},\,\alpha(k_\perp^2)\right)\ell(\xi,k_\perp^2) \\
  \frac{{\rm d}\gamma(x,k_\perp^2)}{{\rm d}\log k_\perp^2} &=
  \frac{\alpha(k_\perp^2)}{2\pi}\int\limits_x^1 \frac{{\rm d}\xi}{\xi}\,
  P_{\gamma\ell}\!\left(\frac{x}{\xi},\,\alpha(k_\perp^2)\right)\ell(\xi,k_\perp^2)\,,
\end{aligned}
\qquad (2.8)
\]
where the effect of photons splitting into virtual lepton–anti-lepton pairs has been
omitted. The first line of these equations exhibits how the probability density of leptons
with smaller energy fraction x is driven by leptons with a larger energy fraction x/ξ >
x, which can be interpreted as the leptons on the right hand side of the equation losing
an energy fraction 1 − ξ in the emission of a photon. In the kinematical approximation
employed here, terms like $P_{\ell\ell}(x/\xi, \alpha)$, the splitting kernels, can be understood as
reduced matrix elements for the emission of one photon off the lepton. In general, Pba
denotes such kernels for a transition of a particle of type a to a particle of type b, while
emitting a particle of type c, which has not been made explicit. These kernels have been
taken at leading order, as manifest by the explicit order in α in front, but of course
they can also be evaluated to higher orders in a perturbative expansion in powers of
the coupling with corresponding coefficient functions.
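To make the structure of Eq. (2.8) more concrete, the sketch below evaluates the right-hand side of its second (photon) line for a toy lepton density and uses it for a single explicit Euler step in $\log k_\perp^2$. The toy density and the step size are arbitrary assumptions made purely for illustration, and the kernel is the $P_{\gamma\ell}$ of Eq. (2.9) with the charge factor set to one:

```python
import math
from scipy.integrate import quad

ALPHA = 1.0 / 137.036

def P_gamma_lepton(z):
    """LO lepton->photon splitting kernel of Eq. (2.9), charge factor set to 1."""
    return (1.0 + (1.0 - z)**2) / z

def lepton_density(xi):
    """Toy lepton density; a purely illustrative guess, not anything from the book."""
    return (1.0 - xi)**3 / xi

def dgamma_dlogkt2(x):
    """RHS of the photon line of Eq. (2.8): (alpha/2pi) * int dxi/xi P(x/xi) l(xi)."""
    integrand = lambda xi: P_gamma_lepton(x / xi) * lepton_density(xi) / xi
    val, _ = quad(integrand, x, 1.0)
    return ALPHA / (2.0 * math.pi) * val

# the increment of the photon density for a step of 0.1 in log kT^2
x = 0.01
print(dgamma_dlogkt2(x), 0.1 * dgamma_dlogkt2(x))
```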
Closer inspection reveals that this set of equations is nothing but the celebrated
Dokshitser–Gribov–Lipatov–Altarelli–Parisi (DGLAP) equation, specified for
QED. In this equation, the splitting kernels at leading order are independent of the
coupling constant and read
\[
\begin{aligned}
  P_{\ell\ell}(z) &= e_q^2\left[\frac{1+z^2}{(1-z)_+} + \frac{3}{2}\,\delta(1-z)\right] \\
  P_{\gamma\ell}(z) &= e_q^2\;\frac{1+(1-z)^2}{z}\,.
\end{aligned}
\qquad (2.9)
\]
Here, the notation of “+”–functions has been employed, which will crop up again
at various places, especially in conjunction with splitting kernels such as the ones
discussed here. They are defined through their integral together with a test function
g(z) such that
\[
  \int\limits_0^1 {\rm d}z\,[f(z)]_+\, g(z) \,=\, \int\limits_0^1 {\rm d}z\,f(z)\,\left[g(z) - g(1)\right]\,.
  \qquad (2.10)
\]
For further details, the reader is referred to Appendix A.1.2. To gain a more intuitive
understanding, consider this prescription to work in such a way that the pole for
$z \to 1$ in $P_{\ell\ell}$ is excluded in the +-function and reinserted through the second part of
the splitting kernel, proportional to the δ-function. It will be seen later how such terms
become necessary to ensure the correct physical behaviour of the splitting functions,
such as satisfying momentum conservation.
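The defining property of Eq. (2.10) is easy to check numerically. The following sketch (an illustration, not code from the book) applies it to $f(z) = 1/(1-z)$ with the smooth test function $g(z) = z^2$, for which the integral can also be done by hand and equals $-3/2$:

```python
from scipy.integrate import quad

def plus_dist_integral(f, g):
    """<[f]_+, g> as defined in Eq. (2.10): int_0^1 dz f(z) * (g(z) - g(1))."""
    val, _ = quad(lambda z: f(z) * (g(z) - g(1.0)), 0.0, 1.0)
    return val

f = lambda z: 1.0 / (1.0 - z)   # the distribution 1/(1-z)_+
g = lambda z: z**2              # a smooth test function

# integrable despite the z->1 pole of f: the subtraction g(z)-g(1) tames it
print(plus_dist_integral(f, g))  # analytic result: -3/2
```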
However, these equations will be revisited and discussed in more detail in later
chapters of the book. To follow the reasoning in this more introductory chapter it
should suffice to mention that the 1/ω pole present in the naive classical picture has
its counterpart in the 1/(1 − z) or 1/z terms appearing in the splitting function.
1 Here, and throughout the book, kinematical quantities $q$ related to the colliding partons are
supplemented with a hat, $\hat{q}$, while those related to the (beam-)particles are left without it, as $q$.
A similar picture also emerges when charged particles are produced in the final state.
As an example, consider the case of, say, muon pair production in electron-positron
annihilations, e+ e− → µ+ µ− . Classically, the production of the muons can be under-
stood as their instantaneous acceleration after emerging from a finite energy density
related to the previous annihilation of the electron and positron into electromagnetic
fields, the intermediate photon in quantum field theory. In classical electrodynamics
such an acceleration of charged particles triggers the radiation of additional photons
off the charges, known as Bremsstrahlung.2 Interpreting the muon pair as a cur-
rent going from a velocity $\vec{v}^{\,\prime}$ to $\vec{v}$ at the origin of the coordinate system, the double
differential classical radiation spectrum $I$ in the direction $\vec{n}(\Omega)$ with polarization $\vec{\epsilon}$
reads
\[
  \frac{{\rm d}^2 I}{{\rm d}\omega\,{\rm d}\Omega} \,=\,
  \frac{e^2}{4\pi^2}\,
  \left|\vec{\epsilon}^{\;*}\cdot\left(\frac{\vec{v}}{1-\vec{v}\cdot\vec{n}}
  - \frac{\vec{v}^{\,\prime}}{1-\vec{v}^{\,\prime}\cdot\vec{n}}\right)\right|^2\,,
  \qquad (2.11)
\]
in the dominant region of small energies, where the squared term is known as the
radiation function W [645]. For massless particles travelling at the speed of light,
the radiation function can be rewritten as
\[
  W(\vec{\beta}^{\,\prime},\vec{\beta}) \,=\,
  \frac{1-\cos\theta_{vv'}}{(1-\cos\theta_{nv})\,(1-\cos\theta_{nv'})}\,.
  \qquad (2.12)
\]
Cast in its covariant form and interpreted as a photon spectrum and tacitly inserting
α = e2 /(4π), the radiation spectrum becomes
\[
  {\rm d}N \,=\, \frac{\alpha}{\pi}
  \left|\epsilon^*_\mu\left(\frac{p^\mu}{p\cdot k}-\frac{p^{\prime\mu}}{p^{\prime}\cdot k}\right)\right|^2
  \frac{{\rm d}^3 k}{(2\pi)^3\, 2k_0}
  \qquad (2.13)
\]
for the number of photons $N$ emitted in the process. Here, $k$ and $\epsilon$ denote the photon’s
four-momentum and polarization four-vector, while $p$ and $p'$ denote the four-momenta
of the muons. The form above,
\[
  W(p, p'; k, \epsilon) \,=\, \epsilon^*_\mu\left(\frac{p^\mu}{p\cdot k} - \frac{p^{\prime\mu}}{p^{\prime}\cdot k}\right)\,,
  \qquad (2.14)
\]
is also known as the eikonal form, and it is identical with the result of a full-fledged
calculation in quantum field theory, as follows.
For the case at hand, consider the photon emission part off the muons, which are
assumed to be massless. The muons are produced through a vertex factor called Γ. The
leading-order Feynman diagrams are depicted in Fig. 2.2. The corresponding matrix
element for $X \to \mu^-(p)\,\mu^+(-p')\,\gamma(k)$ is given by
\[
\begin{aligned}
  \mathcal{M}_{X\to\mu^+\mu^-\gamma} &=
  e\,\bar{u}_{\mu^-}(p)\left[\gamma^\mu\,\frac{\slashed{p}+\slashed{k}}{(p+k)^2}\,\Gamma
  \;-\;\Gamma\,\frac{\slashed{p}^{\,\prime}-\slashed{k}}{(p'-k)^2}\,\gamma^\mu\right]
  u_{\mu^+}(p')\,\epsilon^*_\mu(k) \\
  &=
  e\,\bar{u}_{\mu^-}(p)\left[\frac{2p^\mu + k^\mu - \tfrac{1}{2}[\gamma^\mu,\slashed{k}]}{2\,p\cdot k}\,\Gamma
  \;-\;\Gamma\,\frac{2p^{\prime\mu} - k^\mu + \tfrac{1}{2}[\gamma^\mu,\slashed{k}]}{2\,p'\cdot k}\right]
  u_{\mu^+}(p')\,\epsilon^*_\mu(k)\,,
\end{aligned}
\qquad (2.15)
\]

2 In the quantum picture, equivalently, this translates into the radiation of Bremsstrahlung photons.

Fig. 2.2 Feynman diagrams for the emission of one photon by a muon
pair at the lowest perturbative order.
In other words, in the limit of soft photon emission, the emission term completely
factorizes from the production of the system emitting the photon and is just given by
the eikonal term $W(p, p'; k, \epsilon)$.
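A quick way to get a feeling for the eikonal term of Eq. (2.14) is to evaluate its polarization-summed square, which for massless emitters reduces to $2\,p\cdot p'/[(p\cdot k)(p'\cdot k)]$, for a few photon energies. The kinematics below (back-to-back 50 GeV muons, a fixed emission angle) are illustrative choices only:

```python
import math

def dot(a, b):
    """Minkowski product with (+,-,-,-) metric; four-vectors are (E, px, py, pz)."""
    return a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3]

def eikonal_sq(p, pprime, k):
    """Polarisation-summed square of the eikonal factor of Eq. (2.14),
    sum_pol |W|^2 = 2 p.p' / ((p.k)(p'.k)) for massless emitters."""
    return 2.0 * dot(p, pprime) / (dot(p, k) * dot(pprime, k))

E = 50.0                                  # muon energy in GeV (illustrative)
p      = (E, 0.0, 0.0,  E)                # mu- along +z
pprime = (E, 0.0, 0.0, -E)                # mu+ along -z

for omega in (10.0, 1.0, 0.1):            # soft limit: grows like 1/omega^2
    theta = 0.3
    k = (omega, omega*math.sin(theta), 0.0, omega*math.cos(theta))
    print(omega, eikonal_sq(p, pprime, k))
```

The growth like $1/\omega^2$ at fixed angle reflects the soft divergence discussed below; letting the angle shrink towards one of the muon directions instead exposes the collinear one.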
It is worth noting that soft photon emission is independent of the spin of the
emitting particles and other internal properties, and by its very nature it is a classical
phenomenon. This is the essence of Low’s theorem [734]. As a consequence, soft
photons can be thought of as essentially not carrying any quantum numbers, and,
therefore, the emission of soft photons will not lift any veto – for example, due to
C-parity or angular momentum – for a transition to happen. Such transitions, in the
case at hand, for instance, from one helicity to another, necessitate the emission of
a hard photon, described by the terms ∝ k. In contrast to the soft, classical terms,
these essentially quantum-mechanical contributions do not exhibit any soft divergence
dω/ω but rather behave like dω · ω.
The simple classical picture of soft photon emission in principle allows a description
of the pattern of photon radiation from the muon pair, by iterating emissions through
the eikonal term. This implicitly assumes that the individual emissions are indepen-
dent of one another. The key point here is the introduction of the notion of photon
resolution. The reason for this is the observation that the eikonal factor diverges for
$k \to 0$, the soft divergence, or for $k$ parallel to $p$ or $p'$, the collinear divergence. This
is nothing but the well-known infrared-catastrophe of QED, a pattern that occurs for
every theory with massless spin-1 bosons such as QED or QCD. Essentially it can be
explained by the fact that it makes no physical sense to ask how many photons are
emitted by a particle, without specifying how the photons are measured. In practice,
photons can be too soft for a detector to respond; at the same time, if they are too
parallel with the emitting particle they will end up in the same detector cell, which
must have a finite size, and thus will not be distinguished. Therefore, the phase space
of the photons must be constrained to detectable photons to make any sense. This
is achieved by appropriate cuts, for instance by demanding a minimal energy and a
minimal angle with respect to the emitting particle.
There are now in principle two ways of how the full photon radiation pattern can
be described: in a more direct approach, the integral of the eikonal factor over the
constrained photon phase space could be used as the relevant term in a Poissonian
distribution of the number of photons being emitted, the independence of the indi-
vidual emissions guaranteeing the Poissonian character of this distribution. For each
photon then, the phase space could be individually fixed. Alternatively, the emissions
could be ordered in, say, the energy of the emitted photons, or, as a somewhat pre-
ferred choice, the relative transverse momentum with respect to the emitter. This
would allow a redefinition of the radiation pattern through a probabilistic picture,
driven by the Sudakov form factor. In order to see how this works in more detail,
cf. Section 2.3 and Chapter 5.
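In the first, more direct approach mentioned above, the mean of the Poissonian is the integral of the emission spectrum over the resolved region of phase space. A minimal sketch of this idea (not from the book), using the double-logarithmic approximation of the spectrum in Eq. (2.2) and purely illustrative resolution cuts, could look as follows:

```python
import math
import numpy as np

ALPHA = 1.0 / 137.036

def mean_resolved_photons(E, omega_min, kt_min):
    """Mean number of resolved photons: the spectrum of Eq. (2.2) integrated
    over omega > omega_min and kT > kT_min (with kT, omega < E),
    approximated here by the leading double logarithm."""
    return (ALPHA / math.pi) * math.log(E / omega_min) * math.log((E / kt_min)**2)

rng = np.random.default_rng(seed=2)
nbar = mean_resolved_photons(E=100.0, omega_min=1.0, kt_min=0.5)

# independent emissions -> Poisson-distributed photon multiplicity
counts = rng.poisson(lam=nbar, size=100000)
print(nbar, counts.mean(), np.bincount(counts)[:5])
```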
In any case, it is worth noting that the eikonal picture here, condensed in Eq. (2.17),
can be directly translated to the equivalent quanta picture above and the slightly more
sophisticated version encoded in the DGLAP evolution equations, Eq. (2.8).
Fig. 2.3 The leading quantum corrections to the running of the QCD
coupling constant αs : gluon self energy at one loop order (ghost diagrams
are ignored).
Including such corrections, and denoting all couplings — QED and QCD — collectively
with $\alpha = g^2/(4\pi)$, they vary with the scale $\mu_R$, also known as the (renormalization)
scale, as
\[
  \mu_R^2\,\frac{\partial\alpha(\mu_R^2)}{\partial\mu_R^2} \,=\, \beta(\alpha)\,,
  \qquad (2.18)
\]
where the $\beta$-function $\beta(\alpha)$ can be expanded in a perturbative series and reads
\[
  -\beta(\alpha) \,=\, \sum_{n=0}^{\infty} b_n\,\alpha^{2+n}
  \,=\, \frac{\beta_0}{4\pi}\,\alpha_s^2 + \frac{\beta_1}{(4\pi)^2}\,\alpha_s^3 + \ldots\,.
  \qquad (2.19)
\]
Here the customary relations $\alpha_s = g_s^2/(4\pi)$ and, similarly, $\alpha = e^2/(4\pi)$ have been
assumed.
In SU (N ) gauge theory, the first coefficients βi of the perturbative expansion of
the β-function read
\[
\begin{aligned}
  \beta_0 &= \frac{11}{3}\,C_A - \frac{4}{3}\,T_R\,n_f \\
  \beta_1 &= \frac{34}{3}\,C_A^2 - \frac{20}{3}\,C_A T_R\,n_f - 4\,C_F T_R\,n_f \\
  \beta_2 &= \frac{2857}{54}\,C_A^3 + 2\,C_F^2 T_R\,n_f - \frac{205}{9}\,C_F C_A T_R\,n_f
            - \frac{1415}{27}\,C_A^2 T_R\,n_f \\
          &\quad + \frac{44}{9}\,C_F T_R^2\,n_f^2 + \frac{158}{27}\,C_A T_R^2\,n_f^2\,,
\end{aligned}
\qquad (2.20)
\]
where the number of active fermions is given by nf , and where the Casimir operators
of the gauge group in its fundamental and adjoint representations are CF and CA .
It is worth stressing here that the first two coefficients, $\beta_0$ and $\beta_1$, are renormalization
scheme-independent, while all further contributions, starting with $\beta_2$, depend on the
renormalization scheme; the result given here for β2 is the one in the MS scheme.
For a very brief review, cf. Appendix B.1.
In QCD, they are given by
\[
  C_F \,\equiv\, C_q \,=\, \frac{N_c^2-1}{2N_c}
  \qquad\text{and}\qquad
  C_A \,\equiv\, C_g \,=\, N_c
  \qquad (2.21)
\]
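A short sketch (not from the book), using exact fractions, evaluates the colour factors of Eq. (2.21) and the first two coefficients of Eq. (2.20) for QCD; the choice of $n_f$ values is illustrative:

```python
from fractions import Fraction as F

def qcd_casimirs(nc):
    """Colour factors of Eq. (2.21) for SU(N_c)."""
    cf = F(nc * nc - 1, 2 * nc)
    ca = F(nc)
    return cf, ca

def beta0(ca, tr, nf):
    return F(11, 3) * ca - F(4, 3) * tr * nf

def beta1(cf, ca, tr, nf):
    return F(34, 3) * ca**2 - F(20, 3) * ca * tr * nf - 4 * cf * tr * nf

CF, CA = qcd_casimirs(3)
TR = F(1, 2)
for nf in (4, 5, 6):
    print(nf, beta0(CA, TR, nf), beta1(CF, CA, TR, nf))
# beta0 stays positive for any nf <= 16, the sign behind asymptotic freedom
```

For any number of active flavours up to 16 the leading coefficient $\beta_0$ stays positive, which is the sign responsible for the asymptotic freedom discussed below.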
\[
  \alpha(\mu_R^2) \,\equiv\, \frac{g^2(\mu_R^2)}{4\pi}
  \,=\, \frac{\alpha(Q^2)}{1 + \alpha(Q^2)\,\frac{\beta_0}{4\pi}\,\log\frac{\mu_R^2}{Q^2}}\,.
  \qquad (2.23)
\]
For the case of QCD, it has become customary to write the solution as
\[
  \alpha_s(\mu_R^2) \,\equiv\, \frac{g_s^2(\mu_R^2)}{4\pi}
  \,=\, \frac{1}{\frac{\beta_0}{4\pi}\,\log\frac{\mu_R^2}{\Lambda_{\rm QCD}^2}}\,.
  \qquad (2.24)
\]
Here, $\Lambda_{\rm QCD} \approx 250$ MeV has been introduced as the QCD scale. Conversely, $\alpha_s$ could be
fixed experimentally, for instance through $\alpha_s(m_Z^2) \approx 0.118$ [799], taken from the PDG.
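The one-loop solutions of Eqs. (2.23) and (2.24) are simple enough to be coded directly. The sketch below (an illustration, not code from the book) evolves $\alpha_s$ from the quoted reference value $\alpha_s(m_Z^2) \approx 0.118$ with $n_f = 5$ and also extracts the $\Lambda_{\rm QCD}$ implied at this accuracy; note that the numerical value of $\Lambda_{\rm QCD}$ depends strongly on the loop order and the number of active flavours, so the crude one-loop number differs from the ballpark figure quoted above.

```python
import math

def beta0(nf, ca=3.0, tr=0.5):
    return 11.0 / 3.0 * ca - 4.0 / 3.0 * tr * nf

def alphas_one_loop(mu2, alphas_ref=0.118, mu2_ref=91.1876**2, nf=5):
    """One-loop running of Eq. (2.23), evolved from alpha_s(m_Z^2) ~ 0.118."""
    b0 = beta0(nf)
    return alphas_ref / (1.0 + alphas_ref * b0 / (4.0 * math.pi)
                         * math.log(mu2 / mu2_ref))

def lambda_qcd(alphas_ref=0.118, mu2_ref=91.1876**2, nf=5):
    """Lambda_QCD implied by Eq. (2.24) at this (one-loop, nf=5) accuracy."""
    b0 = beta0(nf)
    return math.sqrt(mu2_ref) * math.exp(-2.0 * math.pi / (b0 * alphas_ref))

for mu in (10.0, 91.1876, 1000.0):        # scales in GeV
    print(mu, alphas_one_loop(mu**2))
print(lambda_qcd())                        # ~0.09 GeV at this crude accuracy
```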
The difference with respect to QED, at this level, can be traced back to the differ-
ence in the β-function, which for QED to first order reads
\[
  \beta_0 \,=\, -\,\frac{2\,n_f}{3}\,.
  \qquad (2.25)
\]
This difference in the overall sign, stemming from the gluon loops which of course
are absent in the QED case, implies that, in striking contrast to the electromagnetic
coupling, the strong coupling increases with increasing distance or decreasing scale and
decreases with decreasing distance or increasing scale, a property known as asymp-
totic freedom. In physical terms this phenomenon can be interpreted as the response
of the vacuum to the presence of an electromagnetic or strong charge. In quantum field
theory, the vacuum is not really an empty space, but it consists of a superposition of
short-lived quantum fluctuations with total quantum numbers 0. In the case of QED,
such fluctuations consist, to leading perturbative order, of fermion–anti-fermion pairs.
In the presence of a fixed charged particle, say an electron, these pairs will not be
completely unpolarized. Instead they will orient themselves in such a way that the
positively charged fermions are closer to the electron, and the negatively charged vir-
tual particles are further away. Trying to measure the charge of the electron by probing
it with a photon thus introduces a scale dependence: a low-Q2 photon will only be able
to “resolve” a comparatively large cloud of charged virtual particles, partially screening
the charge of the electron. This can be seen as a close analogy to a charge in a dielectric
material in classical electrodynamics, where the internal polarization fields partially
counteract the electric field related to the charge. Increasing the virtual mass Q2 of
the photon will allow it to resolve smaller distances and thus probe the electron with
diminished influence of the screening cloud. The same mechanism, of course, also ac-
counts for the case of QCD. Taking here a quark as a fixed charge, by and large,
the virtual quark–anti-quark pairs will orient themselves in such a way that the anti-
quarks are closer to the quark. Similar to the case of QED, this effectively will lead to
a screening of the fixed charge inside the cloud of short-lived virtual charged particles.
However, in contrast to QED, the carriers of the interaction, the gluons, also carry
colour charges and thus contribute to the screening effects. Their contribution comes with
a sign opposite to the fermionic contribution; in classical electrodynamics such a be-
haviour would correspond to the pathological case of a dielectric constant smaller
than unity.
The effect of this running coupling of the strong interaction has been con-
firmed experimentally, as shown in the pictorial summary of theory and experimental
data in Fig. 2.4. For small scales, i.e. for $\mu_R \to \Lambda_{\rm QCD}$, the running coupling diverges,
signalling the breakdown of perturbation theory and any ideas of treating quarks as
quasi-free particles. This is also known as the Landau pole of QCD.

Fig. 2.4 The running of the strong coupling constant: experimental data
confront theory, with the theoretical uncertainty shown as the blue band
(Reprinted with permission from PDG [229]).
where the colour factors Cq,g = CF,A again are the Casimir operators of the
fundamental (CF ) and adjoint (CA ) representation of the gauge group underlying
QCD, SU (3). From the relevant equation, Eq. (2.21), it can be seen that in the large-
Nc limit, Nc → ∞, the colour charge of the quark becomes Nc /2, half as large as
the gluon colour charge, reflecting the fact that gluons carry two colour indices, while
quarks carry only one. They are therefore, in this limit, twice as likely to emit a gluon
than the quarks. Note here that it is quite often useful to consider the large-Nc limit,
as corrections to this limit are typically suppressed by 1/Nc2 . Parametrically this is a
correction of the order of 10%, but often, and in particular for sufficiently inclusive
quantities, the effects are significantly smaller and can thus be neglected.
It should be stressed, though, that due to the Landau pole the perturbative lan-
guage must fail at transverse momenta around or below the Landau pole, with the
result that the bound-state structure of the hadrons is more complicated than three
quarks somehow bound together. Coming back to the qualitative picture, where hard
interactions with constituents of the complex Fock state describing an incident fermion
break its coherence, it is clear that something similar will also occur in the case of
QCD. There are, however, a number of differences to this simple picture. First of all,
the incident particle will be a hadron, a bound state in itself, with a difficult structure
that cannot be calculated from first principles. Ignoring this complication, assume that
at scales sufficiently high above the Landau pole, where a perturbative description be-
comes sensible, the hadron is made of a number of valence partons. Apart from some
colour exchange binding them together, these objects will start emitting softer quanta,
in analogy to the Fock state picture in QED, which are also known as sea partons. The
coherence of the hadrons enforces the recombination of these secondary quanta, and
a picture like the one depicted in the left panel of Fig. 2.5 emerges. In the right panel
of the same figure the effect of a hard probe, a photon interacting with the partonic
ensemble is shown: one of the partons — the one the photon interacted with — is
“kicked” out of the hadron. As a consequence the colour field is missing a quantum
number, and the remaining partons cannot recombine anymore. The coherence of the
hadron breaks down and a more complex final state emerges through a reconfiguration
of the coloured partons.
Fig. 2.5 Hard interactions break the coherence of QCD initial states. In the
left panel, coherence of the stable hadrons enforces the recombination of
emitted partons, in the right panel the interaction with a hard photon
destroys this coherence by “kicking” out the parton it couples to.
partons will occur, again governed approximately by Eq. (2.26). Similar to the case
of QED, also in QCD, the lifetime of the sea partons is proportional to $\omega/k_\perp^2$ and
their distance from the emitting valence quark is given by $1/k_\perp$. The coherence of
the quantum states, encoded as negative virtual mass, also guarantees, like in QED,
that no partons can evade their hadron. In a hard collision, again, this coherence
may break down if a parton is “kicked out” of the radiation pattern, and the partons
that stay behind may go on-shell, thus becoming physically meaningful objects with
observable consequences. Of course, also in QCD, the more modern way to look at this
employs the notion of initial-state radiation, described by evolution equations such as
the simplified ones encountered already in the naive discussion of QED, see Eq. (2.8).
\[
  x_B \,=\, -\frac{q^2}{2\,P\cdot q} \,=\, \frac{q_z^2}{2\,P_z q_z}
  \qquad (2.28)
\]
implies that qz = 2xB Pz . A sketch of the Breit-frame in these parameters is depicted
in Fig. 2.6. The time during which the interaction between photon and proton takes
place is given by the (longitudinal) wavelength of the photon, which of course is the
component being hit by the proton. Hence,
\[
  \tau_{\rm int} \,\sim\, \lambda_z \,\sim\, \frac{1}{q_z}\,.
  \qquad (2.29)
\]
In order for the photon to see the partons inside the hadron, they must have wave-
lengths at least as large as the photon, which is purely longitudinal; at the same time
for the partons to see the photon, the photon’s wavelength must at least be as large
as theirs. Therefore, their respective longitudinal wavelengths must be the same and,
consequently, also their momenta must be of the same size: pz = qz . In the approxi-
mation of quasi-collinear partons then the momentum-fraction x of the struck parton
with respect to the incident proton must be of about the same order as xB . The life-
time of these partons, as discussed in Eq. (2.6), is given by $\tau_{\rm parton} \sim p_z/p_\perp^2$, which is
larger than the interaction time $\tau_{\rm int} \sim 1/p_z$, provided that $p_z \gg p_\perp$.
To summarize the discussion up to now: the parton picture of interactions devel-
oped so far is a sensible construction. Demanding that the interaction time of the
parton with the photon is much smaller than the lifetime of the quantum fluctuation
that actually is the parton, yields a condition on the parton kinematics: the trans-
verse momentum of the parton must be much smaller than its longitudinal momentum,
$p_z^2 \gg p_\perp^2$. In this setup the point-like photons measure the number of partons with
similar longitudinal momentum in an area of size 1/Q2 , or phrased in slightly dif-
ferent terms, they probe partons with momentum fraction x ≈ xB at scale Q2 . The
cross-section of this process must then be proportional to the sum of probabilities to
find partons with these kinematics that can interact with the photon, i.e., ignoring
higher-order diagrams:
\[
  \sigma_{ep} \,\sim\, \sum_q e_q^2\, f_{q/p}(x, Q^2)\,,
  \qquad (2.30)
\]
if the photon that is responsible for the interaction has a (negative) virtual mass
squared with absolute value Q2 and a longitudinal momentum given by x.
Furthermore, if this picture holds true — if $p_z^2 \gg p_\perp^2$ — then the photon–parton
collision can be treated as the collision of two independent quasi-free particles; the
struck parton does not feel the presence of the surrounding partons forming the pro-
ton, since the interaction time is smaller than the lifetime of this parton, i.e., the
characteristic time after which the strong colour fields force the parton to recombine
with the other partons. In such a framework the probabilities to find partons with
a given momentum fraction $x$ at a scale $Q^2$ inside a proton are process-independent,
so the DIS setup with exchanged photons can be replaced by other processes with
similar kinematics, such as Drell–Yan production, jet production, etc. Thus, encoding
these probabilities of finding a parton p in a hadron h as process-independent parton
distribution functions (PDFs) fp/h (x, Q2 ) is sensible. This is the physical basis of
the factorization formula presented in the next section.
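As a purely illustrative toy version of Eq. (2.30), the charge-weighted sum over flavours could be evaluated as follows; the parton densities below are invented shapes, not a real PDF set, and any $Q^2$ dependence is ignored:

```python
def toy_pdf(flavour, x):
    """Purely illustrative toy parton densities (not a real PDF set):
    crude valence-like shapes for u and d, a soft sea for ubar/dbar."""
    shapes = {
        "u":    2.0 * x**-0.5 * (1.0 - x)**3,
        "d":    1.0 * x**-0.5 * (1.0 - x)**4,
        "ubar": 0.2 * x**-1.1 * (1.0 - x)**7,
        "dbar": 0.2 * x**-1.1 * (1.0 - x)**7,
    }
    return shapes[flavour]

CHARGES = {"u": 2.0/3.0, "d": -1.0/3.0, "ubar": -2.0/3.0, "dbar": 1.0/3.0}

def charge_weighted_sum(x):
    """The combination sum_q e_q^2 f_{q/p}(x) entering Eq. (2.30);
    the Q^2 dependence of a real PDF is ignored in this toy."""
    return sum(CHARGES[q]**2 * toy_pdf(q, x) for q in CHARGES)

for x in (0.01, 0.1, 0.5):
    print(x, charge_weighted_sum(x))
```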
The PDFs cannot, at the moment, be calculated from first principles, since they are
truly non-perturbative objects. Of course, assuming the validity of the factorization
picture, they can be measured in different processes and at different scales. These can
be related to each other through the DGLAP equations, cf. Eq. (2.8) for the case of
QED, which for QCD read,
\[
  \frac{\partial}{\partial\log Q^2}
  \begin{pmatrix} f_{q/h}(x,Q^2) \\ f_{g/h}(x,Q^2) \end{pmatrix}
  \,=\,
  \frac{\alpha_s(Q^2)}{2\pi}\int\limits_x^1\frac{{\rm d}z}{z}
  \begin{pmatrix} P_{qq}\!\left(\frac{x}{z}\right) & P_{qg}\!\left(\frac{x}{z}\right) \\
                  P_{gq}\!\left(\frac{x}{z}\right) & P_{gg}\!\left(\frac{x}{z}\right) \end{pmatrix}
  \begin{pmatrix} f_{q/h}(z,Q^2) \\ f_{g/h}(z,Q^2) \end{pmatrix}\,,
  \qquad (2.31)
\]
where the sum over the $2n_f$ different quark flavours and anti-flavours is implicit and
will be further detailed in Chapter 6. Schematically, the convolution in Eq. (2.31) could
be written as
\[
  \frac{\partial}{\partial\log Q^2}
  \begin{pmatrix} f_{q/h}(Q^2) \\ f_{g/h}(Q^2) \end{pmatrix}
  \,=\,
  \frac{\alpha_s(Q^2)}{2\pi}
  \begin{pmatrix} P_{qq} & P_{qg} \\ P_{gq} & P_{gg} \end{pmatrix}
  \otimes
  \begin{pmatrix} f_{q/h}(Q^2) \\ f_{g/h}(Q^2) \end{pmatrix}\,.
  \qquad (2.32)
\]
The kernels of the evolution equation, the splitting functions, are given, again, at
leading order,3 by
3 For further reference, the splitting functions are decomposed here into a splitting kernel $\mathcal{P}$ and
the anomalous dimensions of quarks or gluons, $\gamma_{q,g}$, where applicable.
\[
\begin{aligned}
  P_{qq}^{(1)}(x) &= C_F\left[\frac{1+x^2}{(1-x)_+} + \frac{3}{2}\,\delta(1-x)\right]
   \,=\, \left[\mathcal{P}_{qq}^{(1)}(x)\right]_+ + \gamma_q^{(1)}\,\delta(1-x) \\
  P_{qg}^{(1)}(x) &= T_R\left[x^2 + (1-x)^2\right] \,=\, \mathcal{P}_{qg}^{(1)}(x) \\
  P_{gq}^{(1)}(x) &= C_F\,\frac{1+(1-x)^2}{x} \,=\, \mathcal{P}_{gq}^{(1)}(x) \\
  P_{gg}^{(1)}(x) &= 2C_A\left[\frac{x}{(1-x)_+} + \frac{1-x}{x} + x(1-x)\right]
   + \frac{11C_A - 4n_f T_R}{6}\,\delta(1-x) \\
   &= \left[\mathcal{P}_{gg}^{(1)}(x)\right]_+ + \gamma_g^{(1)}\,\delta(1-x)\,.
\end{aligned}
\qquad (2.33)
\]
Here x is the splitting parameter, essentially governing the light-cone momentum frac-
tion of the offspring with respect to its emitter. It is worth noting that the indices
of both $P_{ba}$ and of $\mathcal{P}_{ba}$ are related to a parton splitting process $a \to bc$, where the
type $c$ of the third parton is fixed by $a$ and $b$. The $P^{(o)}(x)$ are also called regularized
splitting functions to order $o$.
A pictorial way to construct the anomalous dimensions is to analyse the splitting
functions and to realize that they can be written as the sum of a part that diverges
for z → 1 and some finite remainders which are linked to the δ-function from the +
prescription. This link is given by sum rules, namely
\[
\begin{aligned}
  \sum_i\,\int\limits_0^1 {\rm d}z\; P_{ij}(z) &= 0 \\
  \int\limits_0^1 {\rm d}z\; P_{qq}(z) &= 0 \\
  \int\limits_0^1 {\rm d}z\; z\,P_{gg}(z) &= 0\,.
\end{aligned}
\]
Here the first sum rule expresses the condition that the splitting functions can be in-
terpreted as probability densities for a splitting to take place, while the second and
third one ensure flavour and momentum conservation.
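Conditions of this kind are straightforward to verify numerically once the plus prescription of Eq. (2.10) and the δ-terms are made explicit. As a minimal sketch (not code from the book), the following checks two representative cases for the kernels of Eq. (2.33): the flavour condition $\int_0^1 {\rm d}z\, P_{qq}(z) = 0$ and the momentum condition for a quark line, written here in the combined form $\int_0^1 {\rm d}z\, z\,[P_{qq}(z) + P_{gq}(z)] = 0$:

```python
from scipy.integrate import quad

CF = 4.0 / 3.0   # quark colour charge, Eq. (2.21) with N_c = 3

def int_pqq(g):
    """int_0^1 dz g(z) P_qq(z) for the regularised P_qq of Eq. (2.33); the
    plus prescription is handled via Eq. (2.10), the delta term added by hand."""
    plus, _ = quad(lambda z: ((1.0 + z * z) * g(z) - 2.0 * g(1.0)) / (1.0 - z),
                   0.0, 1.0)
    return CF * (plus + 1.5 * g(1.0))

def int_pgq(g):
    """int_0^1 dz g(z) P_gq(z); finite as long as g(z) tames the 1/z behaviour."""
    val, _ = quad(lambda z: g(z) * (1.0 + (1.0 - z) ** 2) / z, 0.0, 1.0)
    return CF * val

# flavour conservation: int dz P_qq(z) = 0
print(int_pqq(lambda z: 1.0))
# momentum conservation, combined quark form: int dz z [P_qq(z) + P_gq(z)] = 0
print(int_pqq(lambda z: z) + int_pgq(lambda z: z))
```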
However, it should be stressed here that, strictly speaking, the picture above,
called collinear factorization, which factorizes cross-sections of processes involving
hadrons in the initial state into process-independent PDFs times the matrix element
with quasi-free partons in the initial state, has been proved only for the cases of deep-
inelastic scattering and Drell–Yan production [143, 409, 410]. The fact that the same
ideas and the same formalism are also employed for other processes is in fact justified
only by the success of this description rather than by a strict mathematical proof. In
fact, recent work hints at the existence of rather subtle, factorization-breaking effects
at higher orders [343], which are beyond the scope of this book.
Fig. 2.7 Some Feynman diagrams for the emission of multiple gluons by
a quark pair contrasted with corresponding leading colour diagrams.
emissions of photons by photons are absent in the QED case, in QCD the gluons carry
colour and thus can also act as gluon emitters. Then, after each gluon emission the
structure of the eikonals will change, as the emitted gluons introduce new directions
of the colour field. In the limit of infinite colours, also known as the large-Nc limit,
this leads to the notion of colour dipoles emitting the gluons. The emergent more
complicated radiation pattern can be represented by some colour-flow diagrams of the
type depicted in Fig. 2.7.
The leading behaviour of the QCD emission process follows a pattern given by
\[
\begin{aligned}
  {\rm d}w^{q\to qg} &= C_F\,\frac{\alpha_s(k_\perp^2)}{2\pi}\,
  \frac{{\rm d}k_\perp^2}{k_\perp^2}\,\frac{{\rm d}\omega}{\omega}
  \left[1+\left(1-\frac{\omega}{E}\right)^2\right] \\
  &= C_F\,\frac{\alpha_s(k_\perp^2)}{2\pi}\,\frac{{\rm d}k_\perp^2}{k_\perp^2}\,{\rm d}z\;\frac{1+z^2}{1-z}
  \,=\, C_F\,\frac{\alpha_s(k_\perp^2)}{2\pi}\,\frac{{\rm d}k_\perp^2}{k_\perp^2}\,{\rm d}z\;P_{qg}^{(1)}(z)\,.
\end{aligned}
\qquad (2.34)
\]
for gluon emission off a quark with energy E. In the soft limit, for vanishing gluon en-
ergies ω = E(1 − z), this reproduces the dipole form in Eq. (2.26), which qualitatively
is defined by a logarithmic distribution in both the gluon energy ω and its transverse
momentum $k_\perp$ with respect to the quark direction. They are often denoted as transverse
logarithms, related to the collinear divergence or mass singularity, for the
logarithms related to the $1/k_\perp^2$ term, and as longitudinal logarithms, related to the
soft divergence or infrared singularity, stemming from the $1/\omega$ part of Eq. (2.34).4
4 This automatically also leads to a qualification of parton emissions: for the production of a
“jet”, a parton needs to be emitted at relatively large angles and energies, $k_\perp, \omega \sim E$, leading to
${\rm d}w^{q\to qg} \sim C_F\,\alpha_s(k_\perp^2)/(2\pi)$. Conversely, emissions not giving rise to additional jets are characterized
\[
  k \,\approx\, k_\parallel \,\approx\, k_\perp \,\approx\, R^{-1}
  \qquad (2.36)
\]
\[
  t^{({\rm had})} \,\approx\, \frac{k_\parallel}{k_\perp^2} \,\approx\, \frac{ER}{m}\,,
  \qquad (2.38)
\]
where m is a typical hadronic mass scale; the result is the same as in the more classical
treatment above.
At the same time it is important to see how long it actually takes for a gluon to
form in the emission off, say, a quark. Naively, this formation time can be estimated
from the invariant mass of the quark–gluon pair and its energy as
\[
  t^{({\rm form})} \,\approx\, \frac{1}{m_{qg}}\,\frac{E}{m_{qg}}
  \,\approx\, \frac{E}{k E\,\theta_{qg}^2}
  \,\approx\, \frac{k}{k_\perp^2}
  \,\approx\, \frac{k_\parallel}{k_\perp^2}\,.
  \qquad (2.39)
\]
Here, the starting point is given by a combination of uncertainty principle and Lorentz
time dilation effects, and in some intermediate steps a small opening angle of the
resulting pair has been assumed.
Comparing this with the hadronization time and demanding — reasonably — that
gluons be formed before they hadronize, yields a constraint on the emission kinematics:
\[
  \frac{k_\parallel}{k_\perp^2} \,\approx\, t^{({\rm form})} \,\le\, t^{({\rm had})} \,\approx\, k_\parallel R^2
  \;\longrightarrow\;
  k_\perp \,\ge\, \frac{1}{R} \,=\, \mathcal{O}\left({\rm few}\;\Lambda_{\rm QCD}\right)\,.
  \qquad (2.40)
\]
Extending this to the case of heavy quarks Q, such as top quarks, it is worth noting
that in their case the lifetime given by
\[
  \tau_Q \,\sim\, \left(\frac{m_W}{m_Q}\right)^3 \frac{E}{m_q}
  \,\ll\, \frac{E}{m_q} \,\sim\, t^{({\rm had})}
  \qquad (2.41)
\]
is smaller than their hadronization time; they behave as truly free quarks.
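For the top quark this hierarchy can be put into rough numbers; the width and the hadronic scale used below are ballpark values chosen for illustration, not precise inputs:

```python
HBAR_GEV_S = 6.582e-25    # hbar in GeV*s

GAMMA_TOP = 1.4           # SM top-quark width, roughly, in GeV
LAMBDA_QCD = 0.25         # hadronic scale in GeV, as quoted above

tau_top = HBAR_GEV_S / GAMMA_TOP      # ~5e-25 s
t_had   = HBAR_GEV_S / LAMBDA_QCD     # ~3e-24 s, a crude 1/Lambda_QCD estimate

print(tau_top, t_had, t_had / tau_top)  # the top decays well before hadronizing
```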
2.1.4.4 Hadronization
At this low scale, QCD enters a new regime, where the strong interaction becomes so
strong that the partons start feeling the effect of being bound together in hadronic
states. As in the case of the initial state, the transition between the parton and the
hadron description is beyond our current quantitative understanding. This, of course,
is where the parameterization provided by the fragmentation functions, obtained from
measurements, becomes important. There is, however, a significant difference with
respect to the PDFs in the initial state. In the final state there may well be more
than one hadron emerging from a parton; consequently the fragmentation functions
describe the transition of the parton into the corresponding hadron plus any other
hadrons. They are therefore mainly useful for describing the inclusive production of
this one given hadron in a collision, ignoring all other hadrons which may eventually
emerge as well. To reach a more inclusive picture, other methods are thus necessary,
which will be introduced and discussed at later stages of this chapter and in Chapter 7.
There, the relevant features of the event — the emergence and decay of heavy states
such as gauge or Higgs bosons or top quarks or the occurrence of a number of hard
QCD jets — can systematically be included. This is, at leading order and for small
final state multiplicities, typically achieved by the well-known textbook methods of
constructing the relevant Feynman diagrams, summing and squaring them by employ-
ing completeness relations, convolution with the PDFs, and by finally integrating over
the relevant phase space of the final state and over the momentum fractions of the
initial-state partons with respect to the incoming hadrons. The last step is done with
numerical methods, Monte Carlo integration, due to the non-analytical
structure of the PDFs. For increasing multiplicities of final state particles, still at lead-
ing order, the number of Feynman diagrams exhibits a faster than factorial growth,
rendering the textbook methods prohibitively time-consuming, and numerical meth-
ods must be employed throughout. For a summary of such methods, the reader is
referred to Section 3.2.1. In order to improve the accuracy of such fixed order calcula-
tions, of course higher perturbative orders must be included, which typically leads to
additional problems due to the occurrence of ultraviolet and infrared divergences. For
a more in-depth discussion of the overall mechanism of such fixed-order calculations
the reader is referred to Section 2.2, and to Chapter 3, while in Chapter 4 specific
issues related to different processes will be detailed. The role of the PDFs, on the other
hand, and how they are obtained from data, will be dwelt on in Chapter 6.
generators, the resummation of these logarithms and the resulting jet structure in
typical QCD events is achieved numerically by invoking parton showers, cf. the second
half of Chapter 5.
Taken together, the logarithmically enhanced radiation pattern may have some
sizable impact on the structure of the events, as it may very well change the kinematical
distribution of particles in the final state. In addition, it should not be forgotten here
that the experimentally observable objects are hadrons rather than the quanta of
QCD, quarks and gluons, which introduces yet another layer of complication when
discussing the full structure of hard collision events.
For the decays of hadrons with heavy quarks, quite often only those final states are
known which involve just a few final-state particles; these can then be taken, again,
from the PDG. There, kinematical distributions typically are taken from suitably
chosen matrix elements and often involve additional form factors, both obtained from
Heavy Quark Effective Theory (HQET) [497, 640, 745, 784, 813, 847, 848]. For
high-multiplicity final states (which in the case of, say, the B-mesons amount to about
50% of the total decay rate), often the decay is treated by going back to the parton
level. In these cases, the parton-level matrix elements are supplemented with parton
showers and hadronization. For a more detailed description, the reader is referred to
Chapter 7.
An additional complication in collisions with hadronic initial states arises from the fact that hadrons are extended objects composed of a multitude of partons. In the standard textbook formalism, this is reflected by employing PDFs to ac-
count for the transition of the incident hadrons into the partons which then experience
a hard scattering. However, this formalism does not account for the possibility that
more than one parton pair coming from the two hadrons may interact. This effect,
also known as double-parton scattering or multi-parton interactions, in fact
is beyond standard factorization; in the absence of a first-principles approach it thus
must be modelled. To date, even the simplest models, based on a factorized ansatz that merely multiplies the partonic cross-sections and includes a trivial symmetry factor, appear to be in agreement with data. This suggests that correlation effects in going from one- to two-particle PDFs, as well as non-trivial final-state interactions, are not dominant and can be treated as higher-order corrections to this simple picture.
Apart from such hard double or multiple interactions there are other, softer addi-
tional interactions when hadrons collide — some of them will just fall into the category
of multiple-parton interactions at lower scales, where no hard objects like jets with
some tens of GeV in transverse momentum are produced. In addition, there is another
contributor to the overall particle multiplicity, namely the remnants of the incident
hadrons. After one or more partons have been extracted from them, the parton en-
semble forming them typically cannot combine back into a single colourless object but
instead must hadronize into a set of hadrons. Quite often, however, the two beam
remnants are connected in colour space, which must be taken into account. In addition, although the partons by and large move parallel
to the beam, i.e., to the hadron they form, there is no reason to assume that they do
not have some transverse momentum of the order of ΛQCD up to some few GeV. This
intrinsic transverse momentum of the beam-remnant partons guarantees that the
hadrons stemming from their hadronization do not necessarily vanish along the beam
pipe, but can instead reach the detector due to their finite transverse momentum.
For a further discussion of current models, the reader is referred to Chapter 7;
some comparisons to data are presented in Chapters 8 and 9.
2.1.5.5 Pile-up
Another important effect in hadron collisions, especially at the LHC, is the interaction
of multiple pairs of protons in the same bunch crossing, the so-called pile-up. This
is driven by the instantaneous luminosity of the collider experiment in question, and
while at maximal luminosity at the TEVATRON there were about 5 proton–anti-proton
interactions per bunch crossing, at the LHC about 25 pairs of protons are expected
to interact in each bunch crossing at design luminosity and the centre-of-mass energy
Ec.m. = 14 TeV. For a possible SLHC, this number would go up to about 250 proton–
proton collisions per bunch crossing. In addition to that, the size of the detector allows
for the possibility that the particles produced in more than one bunch crossing interact
with different parts of the detector at the same time, an effect known as temporal
pile-up. At the LHC, with the anticipated maximal frequency of 40 MHz, there will
be about three such generations of particles at the same time in the detector.
2.1.6.1 Leptons
While naively this discussion may seem a bit awkward, it is important to understand
why this is a relevant issue. To gain some understanding, consider first an example in
electrodynamics, namely the production of Z bosons and their subsequent decay to
a lepton pair. Without QED final-state radiation, and in an ideal world with perfect
detectors, the invariant mass of the lepton pair would roughly follow the Breit–Wigner
form expected from resonance production, of course modulated by PDF effects. Their combined four-momentum would, due to energy conservation, be identical to that of the intermediate boson, whose transverse momentum and rapidity could in turn be reconstructed directly. This, however, is not how reality presents itself. Instead the leptons will emit photons that follow, at leading order, the eikonal pattern already discussed in Section 2.1.1, resulting in a logarithmic enhancement of soft and collinear emissions. Consequently, the lepton four-momenta are reduced and, even for ideal detectors, the reconstructed kinematics would typically differ from those in the absence
of such QED FSR effects. There are various strategies to deal with this apparent prob-
lem. Of course it is possible to take such effects into account by including them in the
calculation. A popular way of achieving this is employed in some modern Monte Carlo
simulations. It is based on the algorithm of Yennie, Frautschi, and Suura [901] and
allows for a resummation of leading logarithms in an energy cut-off and an angular
cut-off, which can be systematically improved by fixed-order calculations. Since this
algorithm also provides a transparent way of guaranteeing four-momentum conserva-
tion,5 it is particularly well-suited to describing the kinematic effects on leptons due to
photon emissions in both the initial and the final state. Emissions below the cut-offs
5 For some specific realizations of such infrared-safe phase space mappings between states before
and after QED radiation, see for instance [838].
are considered to be too soft or too collinear to yield any visible effect; conversely, such
photons can practically be taken as recombining with the original lepton. However the
singularities related to their emission cancel the corresponding soft and collinear di-
vergences in the virtual contributions, to all orders. This is yet another manifestation
of the KLN and BN theorems, cf. Section 2.2.5.
On the other hand, the production of leptons, especially in hadronic collisions, may
essentially proceed through two mechanisms. Direct production, the case discussed up
to now, is where the leptons directly stem from the hard interactions. However, lep-
tons may also emerge in the decay of hadrons and, in particular, in the weak decay of
heavy hadrons containing charm or bottom quarks. In fact the leptons there play an
important role in the identification of such objects, with the finite lifetime of weakly-
decaying heavy particles resulting in displaced vertices measurably different from
the primary one. However this production channel may also mimic direct production
and its characteristics. Typically the production cross-sections of heavy flavours con-
voluted with corresponding decay branching ratios are orders of magnitude larger than
the cross-section for direct lepton production, implying that this is an issue that must
be dealt with carefully. A straightforward solution relies on two considerations. First,
weak decays of heavy hadrons producing the leptons also yield other, lighter hadrons.
In addition, more often than not, the heavy flavour is part of a larger system containing
more hadrons — a jet — that will be discussed further shortly. Especially for leptons
with a transverse momentum larger than the mass of typical heavy hadrons, these
two effects mean that the lepton more often than not is part of a roughly collimated
bunch of other particles, mostly hadrons. As a consequence, demanding leptons to be
isolated in rapidity and transverse angle from any hadronic activity will significantly
reduce the impact on an analysis of leptons produced in hadronic decays. This isola-
tion requirement is typically realized experimentally by demanding that the sum of
the hadronic energy or the transverse momentum of other charged tracks in a radius
Rcrit around the lepton be smaller than a critical value. Here, the distance ∆Rij
between two objects i and j is given by
$$\Delta R_{ij} = \sqrt{\Delta\eta_{ij}^2 + \Delta\phi_{ij}^2} = \sqrt{(\eta_j-\eta_i)^2 + (\phi_j-\phi_i)^2}\,. \qquad (2.42)$$
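As a simple illustration of how such an isolation requirement can be implemented, the following Python sketch (not an experiment's actual selection code; the input format and the numerical cuts are assumptions) evaluates the ∆R of Eq. (2.42) and applies a fixed-cone isolation criterion to a lepton.

```python
# Minimal sketch: Delta R of Eq. (2.42) and a simple cone-isolation test requiring
# the scalar sum of other track transverse momenta within R_crit to stay below a
# threshold. Objects are assumed to be given as (pt, eta, phi) tuples [GeV, -, rad].
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Delta R as in Eq. (2.42), with the azimuthal difference wrapped to (-pi, pi]."""
    dphi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def is_isolated(lepton, tracks, r_crit=0.4, max_sum_pt=2.0):
    """True if the summed pT of nearby tracks inside r_crit is below max_sum_pt [GeV]."""
    pt_l, eta_l, phi_l = lepton
    sum_pt = sum(pt for (pt, eta, phi) in tracks
                 if 0.0 < delta_r(eta_l, phi_l, eta, phi) < r_crit)
    return sum_pt < max_sum_pt

# usage: a hard lepton surrounded by a little soft hadronic activity
lepton = (35.0, 0.1, 1.2)
tracks = [(0.6, 0.15, 1.25), (0.8, -1.0, 2.9), (1.1, 0.2, 1.1)]
print(is_isolated(lepton, tracks))   # True for these toy numbers
```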
Fig. 2.10 The effect of photon radiation on the average lepton energy in
W boson decay, as a function of the angular separation between the photon
and the lepton.
As illustrated in Fig. 2.10, photon radiation reduces the energy of the lepton (measured in the rest frame of the decaying W boson) by a fraction that depends on the angular separation of the photon and the lepton (∆Rγℓ). This fraction is larger for electrons than for muons due to the collinear enhancement, which scales with the lepton mass (mℓ) as log(mW/mℓ). For nearly-collinear photons, in the
region ∆Rγ` < 0.1, the fraction of energy radiated away is around 2.5% for electrons
and 0.7% for muons. Emission at wider angles is responsible for approximately 1% of
additional depletion in both cases.
In an experimental lepton reconstruction algorithm, photon radiation is absorbed
inside a cone of small radius around the lepton, typically ∆R < 0.1 using the definition
in Eq. (2.42). The resulting lepton four-vector, that has been corrected for collinear
photon radiation effects by such an algorithm, is designated as a dressed lepton.
There is still a residual correction of around 1% (for both electrons and muons) to
account for wide angle photon emission. Often, data and theory comparisons are made
at the dressed lepton level, if the theoretical calculation derives from a Monte Carlo
program, which is capable of including photon radiation effects. For comparison to
fixed-order calculations, the data typically is corrected for QED radiation to the Born
level.
2.1.6.2 Photons
Similar considerations also hold true when considering the production of photons.
Once again, the dominant contributions stem from secondary mechanisms of photon
production rather than direct production in the hard interaction. Such secondary
photons can be emitted as final-state radiation off quarks or, more often, originate
from the decay of hadrons. In the latter case the major source of concern is not tied
to the production and decay of heavy flavours, which typically gives rise to other,
secondary hadrons in addition to charged leptons, but the decays of neutral, mostly light mesons in processes such as π⁰ → γγ or η^(′) → γγ. These particles are
created in abundance in hadronic final states and, since both they and their decay
products are neutral, they can only be seen calorimetrically. Because they are so light,
their decay products more often than not end up in the same calorimeter cells when
looking for high–p⊥ objects, which makes it fairly tricky to disentangle them from
single photons. This is particularly relevant for signals containing primary photons,
which are directly produced in the hard process and thereby have large backgrounds
from such secondary production channels. This adds another level of complication with
respect to the lepton case, but the overall solution remains the same.
Again, in order to disentangle direct (or prompt) photons from those emerging
in the fragmentation or decay of strongly interacting particles, isolation criteria are
introduced. In past years, it has become customary to use the Frixione isolation
criterion [540] in theoretical calculations. This criterion is defined by a cone with
opening angle δ0 around the photon in η–φ space, a critical exponent n and a scaling
factor εγ .8 Photons are considered isolated from the hadronic environment, if the
accumulated hadronic transverse energy inside any cone with size δ < δ0 , weighted by
the distance from the photon, is smaller than the photon’s transverse energy multiplied
with a cone-size-dependent weight:
$$\sum_{i\,\in\,\mathrm{hadrons}} E_{\perp,i}\,\Theta(\delta - \Delta R_{i\gamma}) \;\le\; \varepsilon_\gamma\, E_{\perp,\gamma}\left(\frac{1-\cos\delta}{1-\cos\delta_0}\right)^{\!n} \qquad \forall\ \delta < \delta_0\,. \qquad (2.43)$$
It is worth noting that the modifying factor on the right-hand side of the equation
above enjoys the property that
$$\lim_{\delta\to 0}\left(\frac{1-\cos\delta}{1-\cos\delta_0}\right)^{\!n} = 0\,. \qquad (2.44)$$
This guarantees that in the strictly collinear limit any hadronic emission will lead
to the photon being considered as not isolated. At the same time, the scaling with
energies on both sides ensures that the case of partons splitting into partons in some
final-state radiation process does not hamper the isolation criterion too badly – it will
remain sufficiently infrared-safe.
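A sketch of how the smooth-cone condition can be checked in practice is given below; it suffices to test Eq. (2.43) at the radius of each hadron inside the outer cone, where the accumulated transverse energy jumps. The code is purely illustrative, not a collaboration implementation: the input format is an assumption and the parameter values follow the customary choices of footnote 8.

```python
# Minimal sketch of the smooth-cone (Frixione) criterion, Eq. (2.43): for every
# inner cone delta < delta0 around the photon, the accumulated hadronic E_T must
# stay below eps * E_T,gamma * ((1-cos delta)/(1-cos delta0))^n.
# Objects are assumed to be (et, eta, phi) tuples.
import math

def delta_r(eta1, phi1, eta2, phi2):
    dphi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def frixione_isolated(photon, hadrons, delta0=0.5, eps=1.0, n=1.0):
    et_gam, eta_gam, phi_gam = photon
    # hadrons inside the outer cone, sorted by distance to the photon
    inside = sorted((delta_r(eta_gam, phi_gam, eta, phi), et)
                    for (et, eta, phi) in hadrons
                    if delta_r(eta_gam, phi_gam, eta, phi) < delta0)
    sum_et = 0.0
    for delta, et in inside:
        sum_et += et
        # test the condition at the radius of each hadron, where sum_et jumps
        limit = eps * et_gam * ((1.0 - math.cos(delta)) / (1.0 - math.cos(delta0)))**n
        if sum_et > limit:
            return False
    return True

photon = (60.0, 0.0, 0.0)
hadrons = [(1.0, 0.05, 0.03), (3.0, 0.3, -0.2), (25.0, 1.5, 2.0)]
print(frixione_isolated(photon, hadrons))   # False: the 1 GeV hadron sits too close
```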
From the theoretical perspective, the use of the Frixione prescription greatly sim-
plifies any calculation. It removes the need for including non-perturbative photon
fragmentation contributions that describe the parton to photon transition, cf. Sec-
tion 2.1.4. Since these fragmentation contributions are a purely collinear phenomenon,
they are explicitly removed when using the Frixione approach. However, experimen-
8 Customary choices for the parameters of this algorithm are, for instance, εγ = 1, n = 1 and δ0 ≈ 0.5, mimicking the typical cone-size of jets, see the section about jets, 2.1.6.3.
talists have typically not used the Frixione approach for photon isolation either at
the TEVATRON or at the LHC, not least because the condition in Eq. (2.43) cannot be
directly implemented for a detector with finite granularity. Instead an isolation cone
is defined, typically of radius R=0.4, and any transverse momentum contained in the
isolation cone (not assigned to the photon) is required to be less than either a fixed
value, or a percentage of the photon transverse momentum. The isolation definition is designed to remove most of the jet fragmentation background to photons, while retaining true, isolated photons with high efficiency. While the primary purpose of the isolation algorithm is to remove the jet backgrounds, it also effectively removes much of the photon production from fragmentation processes.
Another procedure, also known as a democratic approach, is based on treating
photons as if they were jets and applying a jet clustering algorithm on all outgoing
particles. In this method a photon is considered isolated if it forms a jet and if the hadronic content of that jet contributes less than a critical value to the jet's overall energy or transverse
momentum. In the next section, such jet finding and clustering algorithms will be
discussed in more detail.
Ultimately both algorithms can be used in parton-level calculations and in hadron–
level simulations or measurements, facilitating a direct comparison between the two.
In reality the energy in the isolation region around any photon candidate is dominated
by underlying event energy, and at high luminosities, by pile-up. There are techniques
to effectively subtract this energy that is either completely (in the case of pile-up),
or partially (in the case of the underlying event energy) uncorrelated with the hard
scatter, see [586]. However, the stochastic uncertainties in the subtraction tend to over-
whelm any fragmentation energy that may be in the cone. Given these uncertainties
it is therefore not unreasonable to use Frixione isolation in theoretical calculations for
comparison to data on which typical experimental isolation cuts have been applied.
put: considering the same observable at a higher perturbative order typically involves
additional parton emissions, which actually may be soft and/or collinear with respect
to already present partons. These additional and potentially unresolved emissions must
not lead to unwanted artifacts such as ambiguities in the actual number of jets observed
or their position in phase space. If this requirement is satisfied, the jet algorithm is
called infrared-safe.
In this section, the focus will be on jet definitions and algorithms applied at the
perturbative level, i.e., using partons. Some of the difficulties that result when hadrons
are clustered to jets instead will be discussed in Chapter 8.
At leading order, each jet is modelled by a single quark or gluon. As already noted
in Section 2.2, in general this leads to infrared singularities corresponding to kinematic
configurations in which two partons are collinear, or a gluon is soft. It is only after
the application of a jet algorithm, which ensures that all partons are both sufficiently
hard and well-separated, that any sensible theoretical prediction can be made.
In general, jet algorithms can be considered as constructed from two ingredients.
First, all objects belonging to a jet must be identified, for which there are essentially
two categories of algorithm. One is based on purely geometric considerations and pro-
ceeds by identifying a jet axis and assigning all objects within a radius R0 around this
axis to the jet. In contrast, sequential algorithms proceed by combining and clustering
pairs of particles in turn until only hard quasi-particles identified with jets are left.
Second, the momenta of the jet constituents must be combined to yield the overall jet
momentum, which in modern algorithms is realized by recombination schemes acting
sequentially on four-momenta. These two stages in principle are independent, but in
sequential clustering algorithms they are, to a certain degree, intertwined.
It is crucial to describe this construction of jets from their constituents in terms of
an appropriate set of kinematic variables. The standard set consists of the transverse
momentum (p⊥ ), rapidity (y), azimuthal angle (φ), and invariant mass (m) of the jet.9
This is a very simple algorithm but, like many nice ideas, it is rather too simplistic
and it cannot be used to describe the full wealth of features predicted in QCD without
introducing further complications. A first problem arises when trying to find more
than one jet. Depending on the parameters of the collision and the cone algorithm, it
is not unlikely that in a multijet environment jets may overlap, leading to particles
that may in principle contribute to more than one jet. Such assignments would most
certainly lead to unwanted features like non-conservation of momentum and therefore
must be avoided. A simple way of doing this is by ensuring that, for instance, the
harder jet includes the particles in the overlap zone. So, this problem can to some
degree be solved algorithmically.
More complications arise when considering seeded cone algorithms. As an example
at the parton level, consider the situation depicted in Fig. 2.11, where two hard partons
i and j have a relative distance R0 < Rij < 2R0 . As long as these are the only two
partons in the vicinity, they will form two separate jets, since Rij > R0; however, the addition of a soft gluon may completely change the picture. Using it as a potential seed, it is entirely possible that both partons have a distance smaller than R0 from it and are thus sucked into one combined jet, which has a larger total energy and momentum and will therefore be accepted as a candidate. So, in essence, the presence of additional
radiation at higher orders allows new seed directions about which jets can be formed.
The appearance of such new jet axes, especially if it is due to the presence of arbitrarily
soft radiation, will cause real and virtual corrections to fall into bins of different jet
multiplicity. This will quite often hamper the mutual cancellation of the associated soft
singularities and in such cases will invalidate the perturbative calculation. In general,
this kind of breakdown is termed a problem with infrared-safety. The consequence of
this is that although simple cone algorithms are perfectly feasible at the hadron level
they have no relevant theoretical interpretation in terms of perturbation theory. This
renders the comparison of theory and experiment, using these algorithms, seriously
flawed at best.
A remedy to this problem of the simple cone algorithm above is to introduce addi-
tional seed directions throughout; this was the philosophy underlying the introduction
of the “Midpoint cone” algorithm. In this algorithm, extra seeds are placed between
every pair of stable cones having a separation of less than 2R0 , twice the size of the
clustering cones. However, a careful analysis reveals that such solutions only post-
pone the infrared-safety problem to some higher order in perturbation theory, which
of course means that they are not very satisfying. Considering all possible directions of
jet cones would eliminate any infrared-safety problems to all orders; but while this is
feasible for a low-multiplicity parton final state, it becomes computationally expensive
for true experimental data, or indeed parton shower predictions, tamed only by the
granularity of real-world detectors. In summary, this problem of infrared-safety ren-
ders “seeded” cone-algorithms in principle problematic for any meaningful comparison
between theoretical calculations and experimental data. In practice, however, a careful
analysis of the midpoint and SISCone algorithms (see below), as applied to jet physics
at the TEVATRON, shows only marginal differences between the two. This ultimately
motivates the use of TEVATRON data with jets defined through the midpoint algorithm
which was the standard there.
A practical solution maintaining the idea of perfectly cone-shaped jets was found by abandoning the idea of seeds and replacing them with innovative geometrical methods to reduce the computational complexity [836], resulting in the Seedless Infrared-Safe Cone (SISCone) algorithm. This algorithm suffers none of the limitations of
its forebears but retains a relatively intuitive physical picture. However, in the first
years of data-taking at the LHC another class of jet finders, the kT -algorithms became
the standard tool.
2.1.6.5 kT algorithms
The kT family of algorithms [304, 345–347, 480, 511, 898] defines jets through a pro-
cedure that follows an idea rather different from the one employed in simple cone
algorithms. The latter use a predefined, regular shape — the cone — to capture the
relevant objects in the event, either partons, hadrons, or calorimeter entries. The kT
algorithm instead uses these basic objects as inputs from which to build jets in a
recursive, iterative way. When these kT -algorithms were originally proposed in [345–
347, 511], they were constructed in such a way as to follow the natural pattern of QCD
radiation. Such a procedure may of course lead to jets with fairly irregular shapes. With the introduction of a physically less well-motivated clustering sequence in the form of the anti-kT algorithm [304], this possible obstacle was overcome and the kT-type jet algorithms became established.
To cluster the objects in kT -algorithms, it is necessary to introduce a generalized
distance measure in momentum space, namely
$$d_{iB} = (p_{\perp i})^{2p} \quad \text{for each object } i\,,\qquad d_{ij} = \min\left\{(p_{\perp i})^{2p},\,(p_{\perp j})^{2p}\right\}\frac{R_{ij}}{R_0} \quad \text{for each pair of objects } i \text{ and } j\,. \qquad (2.45)$$
The clustering now proceeds iteratively, where in each step the object(s) i (and j)
with the smallest d are clustered, either with the beam, if diB is the smallest, or with
each other, if dij is the smallest. This is repeated until all distances d are larger than
a critical value dcut , which in turn defines the relative transverse momentum two jets
have with respect to the beam or to each other. The generalized form of the algorithm
discussed here is thus controlled by two parameters, one denoting the cone-like size of
the jets (R0 ), and one specifying the power to which the transverse momenta entering
these expressions are raised (p). The R0 parameter here introduces a useful distinction in how relative transverse momenta impact on the jet definition: for typical values of R0 < 1 it implies in particular that the relative transverse momentum between two jets, which is given by
$$p_\perp^2 = \min\left\{p_{\perp i},\,p_{\perp j}\right\}^2 R_{ij}\,, \qquad (2.46)$$
must be larger than their transverse momenta with respect to the beam, which ef-
fectively translates into a different treatment of initial and final state QCD radiation
when analysing the structure of the overall radiation pattern, where the former is more
susceptible to relative transverse momenta with respect to the beam and the latter is
more sensitive to the relative transverse momentum of two final state objects.
The original kT algorithm [346, 511] corresponds to the choice p = 1 in the
above criteria, and in this realization jets are clustered together in order of increasing
transerse momenta. The Cambridge–Aachen (CA) algorithm [480, 898] uses p = 0 and
therefore clusters in a sequence of proximity in η-φ space.11 In both specific imple-
mentations jets tend to have a fairly irregular shape, which renders any subtraction
of hadronic activity due to pile-up or to the underlying event a cumbersome task. In
contrast, the variant in which p = −1, known as the anti-kT algorithm [304], results in
fairly regular-shaped jets. Because of this feature, it has by now become the preferred choice in experimental analyses: it combines infrared safety with fairly conical jets.
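The following Python sketch illustrates the clustering logic in a brute-force way. It is not the implementation used in practice (real applications rely on FastJet and its optimized geometric methods), the pair distance is written here in the common form with (∆R_ij/R_0)², which may differ in normalization from Eq. (2.45), and a simple p⊥-weighted recombination stands in for full four-momentum addition; the sketch is meant only to make the roles of the parameters p and R_0 explicit.

```python
# Minimal sketch (assumptions flagged above): inclusive generalized-kT clustering
# with p = 1 (kT), 0 (Cambridge-Aachen) or -1 (anti-kT). O(N^3) brute force.
import math

def cluster(particles, R0=0.4, p=-1):
    """particles: list of (pt, eta, phi). Returns the jets as (pt, eta, phi) tuples."""
    objs = [list(o) for o in particles]
    jets = []
    while objs:
        # smallest beam distance d_iB = pt^(2p)
        best_iB = min((o[0] ** (2 * p), i) for i, o in enumerate(objs))
        # smallest pair distance d_ij = min(pt_i^2p, pt_j^2p) * (DeltaR_ij / R0)^2
        best_ij = None
        for i in range(len(objs)):
            for j in range(i + 1, len(objs)):
                dphi = (objs[i][2] - objs[j][2] + math.pi) % (2 * math.pi) - math.pi
                dr2 = (objs[i][1] - objs[j][1]) ** 2 + dphi ** 2
                dij = min(objs[i][0] ** (2 * p), objs[j][0] ** (2 * p)) * dr2 / R0 ** 2
                if best_ij is None or dij < best_ij[0]:
                    best_ij = (dij, i, j)
        if best_ij is None or best_iB[0] <= best_ij[0]:
            # closest to the beam: promote the object to a jet
            jets.append(tuple(objs.pop(best_iB[1])))
        else:
            # recombine i and j (pt-weighted averages as a stand-in for 4-momentum sums)
            _, i, j = best_ij
            pi, pj = objs[i], objs[j]
            pt = pi[0] + pj[0]
            merged = [pt,
                      (pi[0] * pi[1] + pj[0] * pj[1]) / pt,
                      (pi[0] * pi[2] + pj[0] * pj[2]) / pt]
            for k in sorted((i, j), reverse=True):
                objs.pop(k)
            objs.append(merged)
    return jets

print(cluster([(50.0, 0.0, 0.0), (20.0, 0.2, 0.1), (30.0, 2.0, 2.5)], R0=0.4, p=-1))
```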
A variety of jet sizes are commonly used with the anti-kT jet algorithm at the
LHC, but unfortunately differ between ATLAS (0.4, 0.6) and CMS (0.5, 0.7). The tools
now available, though, both for jet calibration and jet reconstruction, should allow for
physics analyses to be carried out with multiple jet sizes and multiple jet algorithms.
Fig. 2.12 The phase space for the two partons that can form a jet in a NLO calculation, parametrized by z = pT,2/pT,1 (pT,1 ≥ pT,2) and d = √((y1 − y2)² + (φ1 − φ2)²). Reprinted with permission from Ref. [508].
will not be clustered into the same jet by either algorithm, while partons in Region
II would nominally be clustered into the same jet with a cone jet algorithm, but not
with a kT jet algorithm. Thus, a cone jet effectively has a larger catchment area than
a kT jet with the same size parameter. In practice, with real data or Monte Carlo
events, Region II is truncated for cone algorithms. This will be developed further in
Chapter 8.
Inclusive measurements involving jets at the TEVATRON tended to involve relatively
large jet sizes, typically with jet radius R = 0.7, such that most of the energy of the
jet is included within the jet radius [61, 86, 119, 122]. For complex final states, such
as W + n jets, or tt̄ production, it has been useful to use smaller jet sizes, in order to
resolve the n-jet structure of the final state [59, 60, 81, 85, 88, 96, 99, 112].
Each jet size comes with its own benefits and drawbacks, both theoretically and ex-
perimentally. A smaller jet size reduces the impact of pile-up and the underlying event,
but fragmentation effects are inversely proportional to the jet size, thus becoming more
important as the jet size decreases. In addition, as R decreases, terms proportional to
log R start becoming important, requiring a resummation of such terms (not present
in fixed-order calculations) for a precise prediction.
was mainly driven by the work of Butterworth and collaborators [297], who showed that it may be possible to use such techniques to find a Higgs boson decaying to b-quarks at the LHC in gauge-boson associated production, thereby reviving an important search and measurement channel in Higgs boson physics. One of the key
observations is that the angular separation of the decay products depends on the Higgs
boson transverse momentum in a simple way.
To see this, one can write the Higgs boson four-momentum as p = p_i + p_j, with p² = m_H², and i and j the two decay products. Choosing a frame in which one of the decay products is directed along the x-axis, the momenta p_i and p_j are
$$p_i^\mu = p_{i\perp}\left(\cosh\eta,\,\cos\phi,\,\sin\phi,\,\sinh\eta\right)\,,\qquad p_j^\mu = p_{j\perp}\left(1,\,1,\,0,\,0\right)\,. \qquad (2.47)$$
In this frame, i has rapidity η and azimuthal angle φ while j has zero rapidity and azimuthal angle. Therefore the angular separation of the decay products is given by
$$\Delta R_{ij} = \sqrt{\eta^2 + \phi^2}\,,\qquad (p_i + p_j)^2 = 2\,p_{i\perp}\,p_{j\perp}\left(\cosh\eta - \cos\phi\right)\,. \qquad (2.48)$$
If both η and φ are not too large, i.e. i and j are collimated, a series expansion yields
$$(p_i + p_j)^2 = p_{i\perp}\,p_{j\perp}\left(\eta^2 + \phi^2\right) + \mathcal{O}(\eta^4,\phi^4) = p_{i\perp}\,p_{j\perp}\left(\Delta R_{ij}\right)^2 + \mathcal{O}(\eta^4,\phi^4)\,. \qquad (2.49)$$
In the limit in which i and j are highly collimated one can replace the four-momenta of the decay products by the Higgs boson four-momentum scaled by an appropriate energy fraction, i.e. p_i = zp and p_j = (1 − z)p. From Eq. (2.49) it is then clear that
$$\Delta R_{ij} \approx \frac{m_H}{\sqrt{z(1-z)}\;p_\perp}\,,$$
with p⊥ the transverse momentum of the Higgs boson, so that the decay products become more and more collimated as the Higgs boson boost increases.
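Plugging in illustrative numbers makes the point: for a Higgs boson of mass 125 GeV produced at a transverse momentum of 500 GeV and decaying symmetrically, the estimate above gives a separation of about 0.5, comparable to typical jet sizes.

```python
# Quick numerical check of the collimation estimate above (illustrative numbers only):
# m_H = 125 GeV, p_T = 500 GeV, symmetric decay z = 0.5  ->  Delta R ~ 0.5.
import math

m_H, pt, z = 125.0, 500.0, 0.5
delta_r = m_H / (math.sqrt(z * (1.0 - z)) * pt)
print(round(delta_r, 2))   # -> 0.5
```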
$$= \sum_{a,b}\int\limits_0^1 dx_a\,dx_b\;f_{a/h_1}(x_a,\mu_F)\,f_{b/h_2}(x_b,\mu_F)\;\frac{1}{2\hat s}\int d\Phi_n\,\bigl|\mathcal{M}_{ab\to n}\bigr|^2(\Phi_n;\mu_F,\mu_R)$$
$$= \frac{1}{2s}\sum_{a,b}\int\limits_0^1\frac{dx_a\,dx_b}{x_a\,x_b}\;f_{a/h_1}(x_a,\mu_F)\,f_{b/h_2}(x_b,\mu_F)\int d\Phi_n\,\bigl|\mathcal{M}_{ab\to n}\bigr|^2(\Phi_n;\mu_F,\mu_R)\,. \qquad (2.52)$$
ŝ = xa xb s . (2.53)
of the massless partons a and b, and the integral of the partonic transition ampli-
tude squared, |Mab→n |2 (Φn ; µF , µR ), over the available n-parton phase-space
element, dΦn . 14 This phase-space element is given by
$$d\Phi_n = \prod_{i=1}^{n}\left[\frac{d^4p_i}{(2\pi)^4}\,(2\pi)\,\delta(p_i^2 - m_i^2)\,\Theta(p_i^{(0)})\right](2\pi)^4\,\delta^4\!\left(p_a + p_b - \sum_{i=1}^{n} p_i\right)\,. \qquad (2.55)$$
12 For some reminder of basic scattering kinematics, the reader is referred to Appendix A.3. There,
the light-cone decomposition of momenta will be discussed.
13 Systems with total four-momentum squared Q2 < 0 are called space-like, those with Q2 > 0 are
time-like, and those with Q2 = 0 are called light-like.
14 Here and in the following, quantities at parton level are denoted by a circumflex (for example,
σ̂), while quantities at hadron level are left without it.
with all other PDFs being exactly zero. This would guarantee the flavour sum rules, given by
$$\int\limits_0^1 dx\,\left[f_{u/p}(x,\mu^2) - f_{\bar u/p}(x,\mu^2)\right] = 2\,,$$
$$\int\limits_0^1 dx\,\left[f_{d/p}(x,\mu^2) - f_{\bar d/p}(x,\mu^2)\right] = 1\,,$$
$$\int\limits_0^1 dx\,\left[f_{q/p}(x,\mu^2) - f_{\bar q/p}(x,\mu^2)\right] = 0 \quad\text{for } q\in\{s,c,b\}\,, \qquad (2.57)$$
together with the momentum sum rule,
$$\int\limits_0^1 dx\;x\sum_i f_{i/h}(x,\mu^2) = 1 \qquad \forall\,\mu^2\ \text{and for all hadrons}\ h\,, \qquad (2.58)$$
where i runs over all partons, {u, ū, d, d̄, . . . , g}.
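These sum rules are easy to verify numerically. The short Python sketch below uses an invented, smeared valence parametrization (all exponents and normalizations are illustrative only, not a fit): the counting rules of Eq. (2.57) hold by construction, while the momentum integral of Eq. (2.58) comes out well below one, illustrating that once the valence distributions are spread out the remaining momentum must be carried by sea quarks and gluons.

```python
# Minimal sketch: numerical check of the flavour sum rules, Eq. (2.57), and of the
# valence contribution to the momentum sum rule, Eq. (2.58), for TOY densities
# f_{u_v}(x) = 2 N x^{-1/2}(1-x)^3 and f_{d_v}(x) = N x^{-1/2}(1-x)^3.
def integrate(g, n=100_000):
    """Integrate g(x) over [0, 1] with the substitution x = u^2 (midpoint rule),
    which removes the integrable x^{-1/2} endpoint singularity."""
    du, total = 1.0 / n, 0.0
    for i in range(n):
        u = (i + 0.5) * du
        total += g(u * u) * 2.0 * u * du
    return total

shape = lambda x: x**(-0.5) * (1.0 - x)**3
N = 1.0 / integrate(shape)            # normalize a single valence quark to unit number

u_v = lambda x: 2.0 * N * shape(x)    # f_u - f_ubar
d_v = lambda x: 1.0 * N * shape(x)    # f_d - f_dbar

print("number sum rules :", round(integrate(u_v), 3), round(integrate(d_v), 3))    # -> 2.0, 1.0
print("valence momentum :", round(integrate(lambda x: x * (u_v(x) + d_v(x))), 3))  # ~ 0.33 < 1
```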
Switching on elastic interactions between the quarks, but still neglecting any emis-
sion of additional quanta, would not change this picture dramatically. The only effect
of such a form of interaction, which can be thought of as some kind of “rubber bands”
holding the quarks together, would be to smear out the sharp peak at x = 1/3 and
replace it with a probability density such that its expectation value would be 1/3:
$$\left\langle x\,\big|\,f_{u,d/p}(x,\mu^2)\right\rangle = \frac{1}{3}\,. \qquad (2.59)$$
2.2.2.2 QCD effects and scaling violations
A drastic change, however, occurs, when the emission of quanta without their imme-
diate reabsorption is considered, along the lines of what was discussed in Section 2.1.
As has been shown, these additional partons, which are summarily denoted as “sea”
or “sea-partons”, would have a finite lifetime that increases with decreasing momen-
tum fraction x. This leads to these sea partons typically having larger probability
distributions at small rather than at large values of x. In fact, it is worth analysing
the kinematics of parton emission as described by the QCD analogue of the splitting
functions given in Eq. (2.33).
They will lead to gluon emissions off valence quarks typically taking place for values
of z close to 1, i.e. for small momentum fractions (1 − z) of the gluon with respect to
the quark. As sea quarks as well as additional gluons emitted by these gluons inherit
their kinematics, the perturbative production of sea partons favours them to have
small values of x. Consequently, the combination of lifetime arguments and the form
of the splitting functions induces non-zero sea PDFs, which typically steeply increase
for decreasing x. Naively, and to a fairly good approximation, the PDFs follow a power law in x at small x, with an exponent λ such that x f(x, µ²) ∼ x^(−λ), where λ ≈ 1 for gluons and sea quarks and λ ≈ −1/2 for valence quarks. This behaviour
is contrasted with the cases of no or elastic, “rubber-band”-type interactions in a
sketchy way in Fig. 2.13. Apart from the increase for small x due to the sea partons, it is
worth noting that also the distribution of valence partons, which for elastic interactions
Fig. 2.13 Sketch of the up-quark PDF for different interactions: valence
quarks only, without and with elastic interactions and full QCD.
is centred around x ≈ 1/3, shifts towards smaller x values of about 0.1 and broadens out. This of course is due to the valence quarks losing energy through the emission of sea partons. In fact, while the PDF displayed in Fig. 2.13 for a u-quark, from a CT10 NLO set [551], is taken at the relatively low scale of µF = 10 GeV, this slight "valence bump" at x ≈ 0.1 persists also at higher scales.
This picture of secondary parton radiation is indeed very similar to the case of photons emitted by an electron. There, the photons play the role of the sea, and allowing them to split into electron–positron pairs supplements this sea with additional fermions.
Putting this together yields a picture of the proton in the x-Q2 plane as sketched
in Fig. 2.14. The behaviour of the PDFs is entirely calculable with perturbative meth-
ods, and is in fact given by the Dokshitser–Gribov–Lipatov–Altarelli–Parisi
(DGLAP) equation for QED, cf. Eq. (2.8), with the starting condition that at the scale of the electron mass the electron is pointlike, f_{e/e}(x, m_e²) = δ(1 − x), with no photon or positron content.
In contrast to this fully known and entirely perturbative QED case, the case of QCD is
more complicated, owing to the essentially non-perturbative infrared structure of the
theory. There, the perturbative regime is bound from below; typically it is assumed
that perturbative QCD breaks down for scales of the order of 1 − 2 GeV and below.
This implies that the starting conditions for a DGLAP evolution must be taken from
data, a subject that will be the focus of a more detailed discussion in Chapter 6. As
already stated in Section 2.1.3, the scale evolution of the PDFs is given by the DGLAP
equation, Eq. (2.31), where q denotes all quark flavours q and their anti-quarks.
A consequence of this scaling behaviour is that for increasing scales µ the sea and gluon distributions grow, in particular at small x.
Fig. 2.14 Sketch of the proton in the x-Q2 plane. At large x (≈ 1/3), the
number of partons in the proton remains roughly the same, given by the
valence partons. In contrast, at smaller x and larger Q2 significant scaling
of the parton number takes place. The “transverse size” of the resolved
partons roughly scales with 1/Q.
with the Weinberg angle θW and the Fermi constant GF. Numerically, the electromagnetic coupling in the Thomson limit and at the Z-pole is roughly
$$\alpha(\mu) = \frac{e^2(\mu)}{4\pi} \approx \begin{cases} \dfrac{1}{137} & \text{for } \mu \to 0\,,\\[2mm] \dfrac{1}{128} & \text{for } \mu = m_Z\,, \end{cases} \qquad (2.63)$$
and the squared sine of the Weinberg angle is roughly sin²θW ≈ 0.23.
As a first step, consider the matrix element for the production of an on-shell W +
boson. At leading order, i.e., in the approximation of tree amplitudes only, there is
only one diagram, and the corresponding matrix element reads
$$\mathcal{M}_{u\bar d\to W^+} = -\,\frac{i\,V_{ud}\,g_W\,\delta_{ij}}{\sqrt{2}}\;\bar d_i(p_2)\,\gamma^\mu\,\frac{1-\gamma_5}{2}\,u_j(p_1)\;W_\mu\,, \qquad (2.66)$$
where the spinor arguments and subscripts indicate their momenta and colour indices
and Vud is the relevant element of the Cabibbo–Kobayashi–Maskawa matrix.
This amplitude leads to the summed and squared expression
$$\overline{\sum}\,\bigl|\mathcal{M}_{u\bar d\to W^+}\bigr|^2 = \frac{3}{9\cdot 4}\,\frac{|V_{ud}|^2 g_W^2}{2}\;\mathrm{Tr}\!\left[\slashed p_2\,\gamma^\mu\,\slashed p_1\,\gamma^\nu\,\frac{1-\gamma_5}{2}\right]\left(-g_{\mu\nu} + \frac{Q_\mu Q_\nu}{m_W^2}\right) = \frac{|V_{ud}|^2 g_W^2}{12}\,Q^2 = \frac{|V_{ud}|^2 g_W^2}{12}\,m_W^2\,, \qquad (2.67)$$
where Q = p1 + p2 and the squared invariant mass of the boson, ŝ = (p1 + p2)² = 2(p1 · p2) = Q² = m_W², have been introduced. The factor 3 stems from the sum over three possible
quark-line colours, the 1/9 takes care of taking the average over all possible colour
configurations of the quark and the anti-quark, and the factor 1/4 reflects the average
over the incoming quark spins.
The two diagrams displayed in Fig. 2.15 relate to the two different charge states W +
and W − . At leading order each of them is the only relevant one for each of the charge
channels. The matrix element for the process u d̄ → ν_ℓ ℓ̄, W⁺ production and decay, reads
The terms in the first line correspond to the respective left-handed fermion currents,
made manifest by the short-hand notation
$$\gamma_\mu^L = \gamma_\mu\,\frac{1-\gamma_5}{2}\,, \qquad (2.69)$$
while the second line represents the W propagator connecting them.
A similar expression can be found for the case of W − production, by suitably
permuting the labels of the fermion spinors. Squaring yields
$$\overline{\sum}\,\bigl|\mathcal{M}_{u\bar d\to\nu_\ell\bar\ell}\bigr|^2 = \frac{3}{9\cdot 4}\,\frac{|V_{ud}|^2 g_W^4}{4}\;\mathrm{Tr}\!\left[\slashed p_{\bar d}\,\gamma^\mu\,\slashed p_u\,\gamma^\rho\,\frac{1-\gamma_5}{2}\right]\mathrm{Tr}\!\left[\slashed p_{\nu_\ell}\,\gamma^\nu\,\slashed p_{\bar\ell}\,\gamma^\sigma\,\frac{1-\gamma_5}{2}\right]$$
$$\times\;\frac{\left(g_{\mu\nu} - \dfrac{Q_\mu Q_\nu}{m_W^2}\right)\left(g_{\rho\sigma} - \dfrac{Q_\rho Q_\sigma}{m_W^2}\right)}{(Q^2 - m_W^2)^2 + m_W^2\Gamma_W^2} = \frac{|V_{ud}|^2 g_W^4}{12}\;\frac{\hat t^2}{(Q^2 - m_W^2)^2 + m_W^2\Gamma_W^2}\,, \qquad (2.70)$$
where the average over the initial quarks’ spins and colours and the sum over the
lepton spins in the final state is implicit. Here, the Mandelstam variable t̂ is given by
$$\hat t = -2\,p_u\cdot p_\ell \;\xrightarrow{\ \mathrm{c.m.s.}\ }\; -\frac{\hat s}{2}\left(1 - \cos\theta^*\right)\,, \qquad (2.73)$$
where θ∗ is the polar angle of the lepton with respect to the incoming u-type quark.
This allows one to rewrite the leading-order partonic cross-section as
$$\hat\sigma^{(\rm LO)} = \frac{1}{2\hat s}\int\frac{d^2\Omega^*_\ell}{32\pi^2}\;\overline{\bigl|\mathcal{M}\bigr|^2_{u\bar d\to\nu_\ell\bar\ell}} = \frac{g_W^4\,|V_{ud}|^2}{12\cdot 2\hat s}\int\limits_{-1}^{1}\frac{2\pi\,d\cos\theta^*}{4\cdot 32\pi^2}\;\frac{\hat s^2\,(1-\cos\theta^*)^2}{\left(\hat s - m_W^2\right)^2 + m_W^2\Gamma_W^2} = \frac{g_W^4\,|V_{ud}|^2}{576\pi}\;\frac{\hat s}{\left(\hat s - m_W^2\right)^2 + m_W^2\Gamma_W^2}\,, \qquad (2.74)$$
and thus
$$\sigma^{(\rm LO)}_{h_1h_2\to\nu_\ell\bar\ell} = \frac{g_W^4\,|V_{ud}|^2}{576\pi}\int dy_W\,d\hat s\;\frac{1}{\left[\hat s - m_W^2\right]^2 + m_W^2\Gamma_W^2}\;\sum_{u,\bar d}\,x_u f_{u/h_1}(x_u,\mu_F)\;x_{\bar d} f_{\bar d/h_2}(x_{\bar d},\mu_F)\,, \qquad (2.75)$$
where the integral over the energy fractions has been written as
$$dx_u\,dx_{\bar d} = dy_W\,\frac{d\hat s}{s}\,. \qquad (2.76)$$
This is because the x_{u,d̄} can be related to the centre-of-mass energy squared and the centre-of-mass rapidity through
$$\hat s = x_u x_{\bar d}\, s\,, \qquad y_{\rm c.m.} = \frac{1}{2}\log\frac{x_u}{x_{\bar d}}\,. \qquad (2.77)$$
In other words, the cross-section is written as an integral over the invariant mass
squared and rapidity of the produced system.
Going a step further allows one to calculate a differential cross-section with respect to the lepton rapidity. The trick here is to relate the lepton rapidity ŷ_ℓ̄ in the partonic c.m. frame to its rapidity y_ℓ̄ in the hadronic c.m. frame. This is fairly straightforward,
since rapidities are additive along the same boost axis and therefore
$$y_{\bar\ell} = \hat y_{\bar\ell} + y_W\,. \qquad (2.78)$$
For massless particles, such as the leptons in the process here, rapidities coincide exactly with pseudorapidities; therefore, in the partonic c.m. frame,
$$\hat y_{\bar\ell} = \log\cot\frac{\theta^*}{2} = \frac{1}{2}\log\frac{1+\cos\theta^*}{1-\cos\theta^*}\,, \qquad (2.79)$$
or
$$\sin\theta^* = \frac{1}{\cosh\hat y_{\bar\ell}}\,. \qquad (2.80)$$
This means that
$$d\cos\theta^* = \sin^2\theta^*\,d\hat y_{\bar\ell} = \sin^2\theta^*\,dy_{\bar\ell}\,. \qquad (2.81)$$
Finally, therefore,
$$\frac{d\sigma_{h_1h_2\to\nu_\ell\bar\ell}}{dy_{\bar\ell}} = \int dx_u\,dx_{\bar d}\;f_{u/h_1}(x_u,\mu_F)\,f_{\bar d/h_2}(x_{\bar d},\mu_F)\;\sin^2\theta^*\;\frac{d\hat\sigma_{u\bar d\to\nu_\ell\bar\ell}}{d\cos\theta^*}\,. \qquad (2.82)$$
The result in Eq. (2.75) can be further simplified by noting that the propagator —
a Breit–Wigner form — suppresses values of ŝ away from m2W . This observation is
manifest in a simplification known as the narrow-width approximation (NWA).
In this approximation, the propagator factor for an internal particle X with mass MX
and width ΓX is replaced according to
$$\frac{d\hat s}{\left(\hat s - M_X^2\right)^2 + M_X^2\Gamma_X^2} \;\longrightarrow\; d\hat s\;\frac{\pi}{M_X\Gamma_X}\,\delta\!\left(\hat s - M_X^2\right)\,, \qquad (2.83)$$
where the overall factor outside the δ-function ensures that the replacement does not
change the value of the integral. Applying such an approximation will result in a sharp
mass distribution of the decay products of the propagator particle. Depending on the actual measurement and, of course, on the values of the internal particle's mass and width, this may yield unphysical results. As a rule of thumb, this may be the case if kinematic observables of the decay products are measured with a relative accuracy better than ΓX/MX. In the NWA,
$$\sigma^{(\rm LO)}_{h_1h_2\to\nu_\ell\bar\ell} = \frac{g_W^4\,|V_{ud}|^2\,m_W}{576\,s\,\Gamma_W}\int\limits_{-y_{\rm max}}^{y_{\rm max}}\!dy_W\,\sum_{u,\bar d}\,f_{u/h_1}\!\left(\frac{m_W\,e^{y_W}}{\sqrt s},\,\mu_F\right)\,f_{\bar d/h_2}\!\left(\frac{m_W\,e^{-y_W}}{\sqrt s},\,\mu_F\right)\,, \qquad (2.84)$$
where the W rapidity yW is constrained by x_u x_d̄ s = m_W² and therefore
$$|y_W| \le y_{\rm max} = \frac{1}{2}\log\frac{s}{m_W^2}\,. \qquad (2.85)$$
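To give an impression of how Eq. (2.84) is used, the following Python sketch evaluates it for a single u d̄ luminosity with invented toy densities and rough electroweak inputs; the PDF shapes, normalizations and the restriction to one flavour combination are all placeholders, so the number it prints is illustrative only and not a prediction.

```python
# Minimal sketch: evaluating the narrow-width-approximation formula, Eq. (2.84),
# with TOY parton densities for a single u dbar channel at sqrt(s) = 8 TeV.
import math

SQRT_S = 8000.0                      # assumed hadronic centre-of-mass energy [GeV]
M_W, GAMMA_W = 80.4, 2.085           # W mass and width [GeV]
ALPHA, SIN2_TW, V_UD = 1.0 / 128.0, 0.23, 0.974
G_W2 = 4.0 * math.pi * ALPHA / SIN2_TW          # g_W^2 = e^2 / sin^2(theta_W)

def f_u(x):      # toy valence-like u density, f(x) = [x f(x)] / x
    return 2.0 * math.sqrt(x) * (1.0 - x)**3 / x

def f_dbar(x):   # toy sea-like dbar density
    return 0.1 * x**(-0.2) * (1.0 - x)**7 / x

y_max = 0.5 * math.log(SQRT_S**2 / M_W**2)      # Eq. (2.85)
n_steps = 4000
dy = 2.0 * y_max / n_steps
lumi = 0.0
for i in range(n_steps):
    y = -y_max + (i + 0.5) * dy
    x_u = M_W / SQRT_S * math.exp(+y)
    x_d = M_W / SQRT_S * math.exp(-y)
    lumi += f_u(x_u) * f_dbar(x_d) * dy

prefactor = G_W2**2 * V_UD**2 * M_W / (576.0 * SQRT_S**2 * GAMMA_W)   # Eq. (2.84)
sigma_nb = prefactor * lumi * 0.389379e6        # 1 GeV^-2 = 0.389379e6 nb
print(f"toy LO sigma(u dbar -> W+) ~ {sigma_nb:.2f} nb  (illustrative only)")
```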
Quite often, this approximation is pushed further by also ignoring the Lorentz structures in the numerator of the propagator, thus effectively forfeiting any knowledge of possible correlations between initial- and final-state particles. In such a case, the matrix element squared for a typical 2 → 2 s-channel
process ab → X → cd with an intermediate particle X (like the one studied here,
W -production and decay) becomes proportional to the respective branching ratios:
Alternatively, one could ignore the decay of the intermediate particle altogether and focus on its on-shell production. Employing this approximation for the W boson for the moment, its matrix element squared at leading order is given by
$$\bigl|\mathcal M\bigr|^2_{u\bar d\to W^+} = \frac{g_W^2\,|V_{ud}|^2\,m_W^2}{12}\,, \qquad (2.87)$$
resulting in the parton-level cross-section
$$\hat\sigma^{(\rm LO)}_{u\bar d\to W^+} = \frac{1}{2\hat s}\int\frac{d^4p_W}{(2\pi)^4}\,(2\pi)^4\,\delta^4(p_u + p_{\bar d} - p_W)\,(2\pi)\,\delta(p_W^2 - m_W^2)\,\bigl|\mathcal M\bigr|^2_{u\bar d\to W^+}$$
$$= \frac{\pi\,\delta(\hat s - m_W^2)}{\hat s}\,\bigl|\mathcal M\bigr|^2_{u\bar d\to W^+} = \frac{\pi\,\delta(\hat s - m_W^2)}{\hat s}\;\frac{g_W^2\,|V_{ud}|^2\,m_W^2}{12} \;\longrightarrow\; \frac{4\pi^2\,\alpha\,|V_{ud}|^2}{12\,\sin^2\theta_W\,m_W^2}\,, \qquad (2.88)$$
where in the last step the implicit integration over ŝ has been carried out with the help of the δ function, and where gW = e/sin θW has been employed. The production
cross-section in hadronic collisions thus becomes
$$\sigma^{(\rm LO)}_{h_1h_2\to W^+} = \int\limits_0^1 dx_u\,dx_{\bar d}\,\sum_{u,\bar d}\,f_{u/h_1}(x_u,\mu_F)\,f_{\bar d/h_2}(x_{\bar d},\mu_F)\;\hat\sigma^{(\rm LO)}_{u\bar d\to W^+}$$
$$= \frac{\pi g_W^2\,|V_{ud}|^2\,m_W^2}{12}\int\frac{d\hat s}{\hat s^2}\,\delta(\hat s - m_W^2)\int\limits_{-y_{\rm max}}^{y_{\rm max}}\!dy_W\,\sum_{u,\bar d}\,x_u f_{u/h_1}(x_u,\mu_F)\,x_{\bar d} f_{\bar d/h_2}(x_{\bar d},\mu_F)\bigg|_{x_u x_{\bar d}\,s\,=\,m_W^2}$$
$$= \frac{\pi g_W^2\,|V_{ud}|^2}{12\,s}\int\limits_{-y_{\rm max}}^{y_{\rm max}}\!dy_W\,\sum_{u,\bar d}\,f_{u/h_1}(x_u,\mu_F)\,f_{\bar d/h_2}(x_{\bar d},\mu_F)\,, \qquad (2.89)$$
where E = √s is the hadronic centre-of-mass energy and the limits of the rapidity integral, ±y_max = ±½ log(s/m_W²), follow from momentum conservation, ŝ = x_u x_d̄ s = m_W², and the relation of the x_{1,2} with the rapidity of the produced system y_c.m., cf. Eqs. (2.76)
and (2.77). For future reference it is also useful to introduce
$$\sigma^{(\rm LO)}_{u\bar d\to W^+} = \frac{\pi g_W^2\,|V_{ud}|^2}{12\,s}\,, \qquad (2.90)$$
so that
$$\sigma^{(\rm LO)}_{h_1h_2\to W^+} = \sigma^{(\rm LO)}_{u\bar d\to W^+}(s)\int\limits_{-y_{\rm max}}^{y_{\rm max}}\!dy_W\,\sum_{u,\bar d}\,f_{u/h_1}(x_u,\mu_F)\,f_{\bar d/h_2}(x_{\bar d},\mu_F)\,. \qquad (2.91)$$
The result above, Eq. (2.91), implies that the rapidity distribution of the W boson
is entirely defined by the PDFs, and, at leading order, by the quark PDFs only. While
the sea is more or less flavour symmetric, the valence contribution is not. In protons
(anti-protons) there are twice as many valence u quarks (ū anti-quarks) as d quarks (d̄ anti-quarks). This has the following implications: in proton–anti-proton collisions, like at the Fermilab TEVATRON, both species of W bosons, W⁺ and W⁻, have exactly the same production cross-section. However, W⁺ bosons are more likely to fly in the direction of the proton, the forward direction, while W⁻ bosons are more likely to follow the direction of the anti-proton, the backward direction. This is
because they are more likely to obtain a strong “kick” into the respective direction by
an up-type rather than a down-type valence quark. In proton–proton collisions, like at the CERN LHC, this picture no longer holds. There, the cross-section for W⁺ production is larger than that for W⁻ production — in fact, if in both cases a valence quark always had to be involved, the two would differ by a factor of two. This is not the case, however, since there is a sizable contribution from the sea quarks
and, at higher orders, from incident gluons. Also, their rapidity distributions are not a
reflection of each other around central rapidity any longer, but each of them of course
is symmetric under reflections around y = 0. In addition, the W⁺ bosons tend to be produced at slightly larger absolute rapidities, due to the higher probability of obtaining a strong “kick” from an incident valence quark, while the W⁻ bosons are more central. This behaviour
and the comparison of it at the TEVATRON and the LHC at different c.m.-energies is
exhibited in Fig. 2.16. It is worth noting that the shape at symmetric proton–proton
collisions also changes quite dramatically, as the c.m.-energy of the colliding hadrons
increases from 8 to 100 TeV. This is due to the fact that with increasing energies the momentum fractions x required of the quarks decrease, so that the contributions of the sea quarks become larger and larger. The valence quarks, when annihilating anti-quarks from the sea, lead to a pronounced boost of the W boson and thus to a depletion in the region of central rapidities. This region is filled by the more symmetric annihilation of pairs of sea partons. At 8 and 14 TeV this leads to the plateau shape of the distribution, a plateau which of course widens in rapidity with the energy of the colliding hadrons. At 100 TeV, however, the sea contribution takes over, shaping a mound at central rapidities.
This behaviour is completely driven by the PDFs and the interplay of valence and sea quarks.
Eq. (2.73), which defines the form of the differential cross-section, it becomes clear
that the positively charged leptons prefer to travel anti-parallel to the incident up-
type quark. At the TEVATRON, this means that positively charged leptons preferably
move in the direction of the anti-proton, and the negatively charged leptons prefer-
ably move in the direction of the proton, i.e., in both cases, the leptons tend to move
against the direction of the W boson they come from, partially compensating their
initial boost.
In proton–proton collisions, the picture then looks a bit more confusing. Naively,
one would expect to have more W + than W − bosons, and typically they would stretch
out to larger rapidities, leading to the asymmetry always being positive and actually
increasing with increasing rapidity. However, the W bosons have a maximal rapidity,
about ymax ≈ 4.5 at a 8 TeV LHC, while the leptons, being effectively massless, can
reach rapidities well beyond this point. This implies that at some point the typical
relative rapidity the leptons have with respect to the original W boson becomes the
dominating factor, and, as discussed, this is where the negatively charged leptons
will become more abundant than the positively charged ones. This means that the
asymmetry will turn negative. The fact that most of the bosons are not at maximal
rapidity (which is possible only if one of the momentum fractions, typically x_u, equals 1) and the fact
that the positively charged leptons are typically oriented against the direction of the
W + boson, translate into this point being significantly more central than the maximal
rapidity available to the W bosons. This behaviour is exhibited in Fig. 2.17, where the
lepton asymmetry at leading order is displayed.
Fig. 2.17 The lepton asymmetry, defined in Eq. (2.92), at the TEVATRON,
at the LHC at c.m.-energies of 8 and 14 TeV, and at a future hadron collider
with a c.m.-energy of 100 TeV. The calculation has been performed at
leading order with the CT10 PDF [713].
With this in mind, the real contributions can be decomposed into three sets of
diagrams, each with two interfering amplitudes: one set where a gluon is emitted into the final state, i.e. the sub-process u d̄ → gW⁺[→ ν_ℓ ℓ̄], and two sub-processes where an initial gluon gives rise to either a down-type quark or an up-type anti-quark, ug → dW⁺[→ ν_ℓ ℓ̄] and d̄g → ūW⁺[→ ν_ℓ ℓ̄], respectively. These three sets of sub-processes do not interfere, since their initial and final states are composed of particles that, in principle, could be distinguished.
Ignoring, for simplicity, the decay of the W -boson, the resulting amplitudes are
given by
$$\mathcal{M}_{u\bar d\to gW^+} = \frac{i\,g_s\,g_W\,V_{ud}}{\sqrt 2}\;\bar v_{d,i}\left[\gamma_\nu\,T^a_{ij}\,\frac{\slashed p_{\bar d} - \slashed p_g}{(p_{\bar d} - p_g)^2}\,\gamma^L_\mu + \gamma^L_\mu\,\frac{\slashed p_u - \slashed p_g}{(p_u - p_g)^2}\,\gamma_\nu\,T^a_{ij}\right]u_{u,j}\;W^\mu\,g^{\nu,a}\,, \qquad (2.93)$$
Fig. 2.18 Real contributions to the NLO correction of the production and
leptonic decay of a W + boson. Here u and d stand for arbitrary up- and
down-type quarks, respectively.
$$\mathcal{M}_{ug\to dW^+} = \frac{i\,g_s\,g_W\,V_{ud}}{\sqrt 2}\;\bar u_{d,i}\left[\gamma_\nu\,T^a_{ij}\,\frac{\slashed p_g - \slashed p_d}{(p_g - p_d)^2}\,\gamma^L_\mu + \gamma^L_\mu\,\frac{\slashed p_u + \slashed p_g}{(p_u + p_g)^2}\,\gamma_\nu\,T^a_{ij}\right]u_{u,j}\;W^\mu\,g^{*\,\nu,a} \qquad (2.94)$$
and
$$\mathcal{M}_{\bar dg\to\bar uW^+} = \frac{i\,g_s\,g_W\,V_{ud}}{\sqrt 2}\;\bar v_{d,i}\left[\gamma_\nu\,T^a_{ij}\,\frac{\slashed p_g + \slashed p_{\bar d}}{(p_g + p_{\bar d})^2}\,\gamma^L_\mu + \gamma^L_\mu\,\frac{\slashed p_g - \slashed p_{\bar u}}{(p_g - p_{\bar u})^2}\,\gamma_\nu\,T^a_{ij}\right]v_{u,j}\;W^\mu\,g^{*\,\nu,a}\,. \qquad (2.95)$$
Here, all colour indices and structures have been made explicit, by adding the funda-
mental (or triplet) colour indices i and j of the quarks and the adjoint (or octet) colour
index a of the gluon, as well as the colour matrix T^a_{ij} appearing in the quark–quark–gluon vertex. Looking at these expressions, it becomes apparent that the diagrams
contain potentially divergent structures. The divergences emerge in those cases, where
the additional parton in the final state becomes soft or collinear. The propagator of
the intermediate parton line reads
$$\frac{1}{(p_q - p_g)^2} = -\,\frac{1}{2\,E_q E_g\,(1 - \cos\theta)}\,, \qquad (2.96)$$
which diverges as the energy of the outgoing parton, E_g, or its opening angle θ with respect to the emitter approaches zero. This type of divergent structure is also known as an infrared divergence.
$$\hat s = (p_a + p_b)^2 = (p_1 + p_2)^2\,,\qquad \hat t = (p_a - p_1)^2 = (p_b - p_2)^2\,,\qquad \hat u = (p_a - p_2)^2 = (p_b - p_1)^2\,, \qquad (2.99)$$
where the incoming partons are labelled with a and b, and the outgoing particles are
labelled with 1 and 2. These variables satisfy ŝ + t̂ + û = m_W² for the process at hand. Collinear singularities correspond to t̂ or û approaching zero individually, while a vanishing gluon energy corresponds to the soft divergence, in which case both t̂ and û go to zero. The divergent regions of the
gluon emission phase space can be avoided by suitable cuts. A convenient choice here
is to demand that the gluon has a minimal transverse momentum, which would be
interpreted as the minimal transverse momentum of the corresponding jet.
Similar reasoning also applies to the case where the gluon appears in the initial
state, which exhibits a divergence with û → 0. There the case ŝ → 0 is prohibited by
the finite and sufficiently large mass of the W boson. This also prohibits the limit of the
gluon energy going to 0, with the result that this process is less divergent than the one
with the gluon in the final state. However, its remaining, purely collinear divergence,
can also be avoided by a cut on the transverse momentum of the extra parton.
Assuming massless incoming and outgoing partons, the outgoing momenta can be
written as
$$p^\mu_W = \left(m_{\perp W}\cosh y_W,\; p_\perp\cos\phi,\; p_\perp\sin\phi,\; m_{\perp W}\sinh y_W\right)\,,\qquad p^\mu_{q,g} = \left(p_\perp\cosh y_{q,g},\; -p_\perp\cos\phi,\; -p_\perp\sin\phi,\; p_\perp\sinh y_{q,g}\right)\,, \qquad (2.102)$$
and therefore, with $x_\perp = p_\perp/\sqrt s$,
$$\hat s = x_1 x_2\, s\,,\qquad \hat t = -2\,p_1\cdot p_{q,g} = -x_1\,x_\perp\, s\, e^{-y_{q,g}}\,,\qquad \hat u = -2\,p_2\cdot p_{q,g} = -x_2\,x_\perp\, s\, e^{+y_{q,g}}\,. \qquad (2.105)$$
The kinematic part of the gluon-emission matrix element squared thus becomes
$$\frac{\hat t^2 + \hat u^2 + 2 m_W^2\,\hat s}{\hat t\,\hat u} = \frac{x_\perp^2\left(x_1^2\,e^{-2y_g} + x_2^2\,e^{2y_g}\right) + 2\,x_1 x_2\,x_M^2}{x_1\,x_2\,x_\perp^2}\,, \qquad (2.106)$$
where $x_M = m_W/\sqrt s$. This shows that for $p_\perp^2 \to 0$ the matrix element diverges like $2m_W^2/p_\perp^2$, a logarithmic divergence. This becomes apparent in Fig. 2.19, which displays the p⊥ distribution of W⁺ bosons produced hadronically in association with a single jet at the TEVATRON and the LHC, calculated at leading order.
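The divergent behaviour can also be made explicit numerically. The sketch below evaluates the kinematic factor of Eq. (2.106) from the invariants of Eq. (2.105) for a W boson recoiling against a single parton at fixed, assumed rapidities and decreasing transverse momentum, and compares it to the expected 2m_W²/p_⊥² growth; all numerical inputs are illustrative.

```python
# Minimal sketch: the small-p_T behaviour of the factor in Eq. (2.106).
import math

SQRT_S, M_W = 8000.0, 80.4
Y_W, Y_G = 0.0, 0.5     # assumed rapidities of the W boson and of the recoiling parton

for pt in (40.0, 20.0, 10.0, 5.0, 2.5):
    mt_w = math.hypot(M_W, pt)                       # transverse mass of the W
    # momentum fractions from energy and longitudinal momentum conservation
    x1 = (mt_w * math.exp(+Y_W) + pt * math.exp(+Y_G)) / SQRT_S
    x2 = (mt_w * math.exp(-Y_W) + pt * math.exp(-Y_G)) / SQRT_S
    xt = pt / SQRT_S
    shat = x1 * x2 * SQRT_S**2                       # Eq. (2.105)
    that = -x1 * xt * SQRT_S**2 * math.exp(-Y_G)
    uhat = -x2 * xt * SQRT_S**2 * math.exp(+Y_G)
    factor = (that**2 + uhat**2 + 2.0 * M_W**2 * shat) / (that * uhat)
    print(f"pT = {pt:5.1f} GeV : factor = {factor:9.1f},  2 m_W^2/pT^2 = {2*M_W**2/pt**2:9.1f}")
```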
To properly calculate differential cross-sections like the one depicted in Fig. 2.19, the phase-space element has to be expressed in terms of useful quantities such as the rapidity of the gauge boson and its transverse momentum. To this end, the phase-space element for the outgoing particles can be written as
$$\frac{d^4p_W}{(2\pi)^4}\,\frac{d^4q}{(2\pi)^4}\;(2\pi)^4\,\delta^4(p_u + p_{\bar d} - p_W - q)\;(2\pi)\,\delta(p_W^2 - m_W^2)\;(2\pi)\,\delta(q^2)$$
$$= \frac{m_{\perp W}\,dm_{\perp W}\,dy_W\,d^2Q_\perp}{(2\pi)^2}\;\delta(m_{\perp W}^2 - Q_\perp^2 - m_W^2)\;\delta(\hat s + \hat t + \hat u - m_W^2) = \frac{dy_W\,dQ_\perp^2}{4\pi}\;\delta(\hat s + \hat t + \hat u - m_W^2)\,. \qquad (2.107)$$
In the transformation of the δ functions, the fact has been used that
$$\delta(q^2) = \delta\bigl((p_u + p_{\bar d} - p_W)^2\bigr) = \delta\bigl(\hat s + m_W^2 - 2(p_u + p_{\bar d})\cdot p_W\bigr) = \delta\bigl(\hat s + \hat t + \hat u - m_W^2\bigr)\,. \qquad (2.108)$$
Ignoring all information concerning the additional parton by basically integrating over
its phase space, this can now be rewritten as the double differential cross-section for
the production of a gauge boson,
$$\frac{d\sigma_{AB\to Wg}}{dQ_\perp^2\,dy_W} = \int\limits_0^1 dx_A\,dx_B\;\mathcal{L}_{u\bar d}(x_A,x_B,\mu_F)\;\frac{1}{4\pi}\,\bigl|\mathcal M\bigr|^2_{u\bar d\to gW^+}\;\delta(\hat s + \hat t + \hat u - m_W^2)$$
$$= \int\limits_{\tilde x_A}^1\int\limits_{\tilde x_B}^1 \frac{dx_A\,dx_B}{x_A\,x_B}\;f_{u/A}(x_A,\mu_F)\,f_{\bar d/B}(x_B,\mu_F)\;\delta(\hat s + \hat t + \hat u - m_W^2)\left[\sigma^{(\rm LO)}_{u\bar d\to W^+}(s)\;\frac{1}{Q_\perp^2}\;\frac{\alpha_s\,C_F}{2\pi}\;\frac{\hat t^2 + \hat u^2 + 2m_W^2\,\hat s}{\hat s}\right]\,. \qquad (2.111)$$
The lower limits of the integration over the Bjorken parameters x_A and x_B in Eq. (2.111), x̃_A and x̃_B, are fixed by the kinematics of the W boson:
$$m_W^2 = \tilde x_A\,\tilde x_B\,s \qquad\text{and}\qquad \tilde x_{A,B} = \frac{m_W}{\sqrt s}\,e^{\pm y}\,. \qquad (2.114)$$
A closer look at Eq. (2.111) exposes the divergent structures related to the emission of a gluon. First of all, there is the term 1/Q²⊥, giving rise to a logarithmic
divergence of the form dQ2⊥ /Q2⊥ . In addition, and less obvious, there is the implicit
dependence on the Bjorken parameters xA and xB of all kinematic quantities, which
will lead to further divergences, which are partially related to the evolution of the
PDFs. They will be treated in the next section, by identifying and suitably absorbing
them.
$$\frac{d^4p}{(2\pi)^4}\,(2\pi)\,\delta(p^2 - m^2)\,\Theta(E) \;\longrightarrow\; \frac{d^Dp}{(2\pi)^D}\,(2\pi)\,\delta(p^2 - m^2)\,\Theta(E)\,, \qquad (2.115)$$
where D = 4 − 2ε.
Closer inspection reveals that the ultraviolet divergences cancel exactly with the renormalization of the external particles, essentially obtained from self-energy diagrams. This ultraviolet cancellation is testimony to the fact that the u d̄ current is conserved. Therefore, the only remaining divergences are infrared, and the result for the virtual matrix element multiplying the Born-level matrix element reads¹⁵
$$2\,\mathcal{M}^{(1)*}_{u\bar d\to W^+}\,\mathcal{M}^{(0)}_{u\bar d\to W^+} = \bigl|\mathcal{M}^{(0)}_{u\bar d\to W^+}\bigr|^2\;C_F\,\frac{\alpha_s}{2\pi}\;c_\Gamma\left(\frac{\mu^2}{Q^2}\right)^{\!\varepsilon}\left[-\frac{2}{\varepsilon^2} - \frac{3}{\varepsilon} - 8 + \pi^2\right]\,, \qquad (2.117)$$
with $Q^2 = (p_u + p_{\bar d})^2 = m_W^2$ and $c_\Gamma = (4\pi)^\varepsilon/\Gamma(1-\varepsilon)$. For a detailed discussion of how this
result may be obtained, the reader is referred to the following chapter, Section 3.3.1.
For the real correction diagrams, the result emerges from a D-dimensional integration
over the phase space of the emitted parton, as discussed in the previous section. In
the case of the additional gluon in the final state, with its momentum denoted by k,
this yields (again, cf. Section 3.3.2 for details of this calculation)
$$\mu^{4-D}\!\int\!\frac{d^Dk}{(2\pi)^D}\;\bigl|\mathcal{M}^{(0)}_{u\bar d\to W^+g}\bigr|^2 = \bigl|\mathcal{M}^{(0)}_{u\bar d\to W^+}\bigr|^2\;C_F\,\frac{\alpha_s}{2\pi}\;c_\Gamma\left(\frac{\mu^2}{Q^2}\right)^{\!\varepsilon}$$
$$\times\left[\left(\frac{2}{\varepsilon^2} + \frac{3}{\varepsilon} + \frac{\pi^2}{3}\right)\delta(1-z) + 4\left[\frac{1}{1-z}\,\log\frac{(1-z)^2}{z}\right]_+ - 2\,(1+z)\,\log\frac{(1-z)^2}{z} - \frac{2}{\varepsilon}\,\frac{P^{(1)}_{qq}(z)}{C_F}\right]\,. \qquad (2.118)$$
The poles multiplying δ(1 − z) in this real correction exactly cancel the divergences in the virtual result. The additional divergence in front of the splitting function P_qq^(1) is in fact universal, i.e., process-independent. This process-independent
term can thus be absorbed into the renormalization of the PDF, see Section 3.3. This,
however, is a scheme-dependent procedure. In total then, a finite result is obtained.
The method of directly calculating the phase space of the real emission part in
D dimensions followed here of course becomes prohibitively complicated for processes
with an increasing number of external particles. For such situations better algorithms
have been developed, among them what is by now known as infrared subtraction
algorithms such as Catani–Seymour dipole subtraction [344, 353] or the method
by Frixione, Kunszt, and Signer [539, 542]. The former will be further discussed in
Section 3.3.2.
15 More correctly, this is the result obtained in conventional dimensional regularization, a regular-
ization scheme that is defined in Section 3.3.1.
Note that the result still needs to be convoluted with the PDFs. The last term in the square bracket above, proportional to the splitting function and containing a term of log(µ_F²/Q²), ensures that there is a compensating factor if the PDFs are evaluated away from their “natural” scale (if µ_F² ≠ Q²); for the process at hand, Q² is the only scale available. This corresponds to the “running” of the PDF, mediated by the corresponding splitting function and resulting in the logarithm of the scale ratio. If the process at leading order already contained n factors of αs, a similar running of the strong coupling, mediated by n first-order terms of the β-function, would appear as well, this time of course with logarithms of the ratio of µ_R² and Q².
Another contribution stems from the gluon-initiated channel,
$$\hat\sigma^{(\rm NLO)}_{ug\to dW^+} = \hat\sigma^{(\rm LO)}_{u\bar d\to W^+}\cdot\frac{\alpha_s(\mu_R)}{2\pi}\,T_R\left\{P^{(1)}_{qg}(z)\left[\log\frac{(1-z)^2}{z} - \log\frac{\mu_F^2}{m_W^2}\right] + \frac{1}{2}\,(1-z)(1+7z)\right\}\,. \qquad (2.120)$$
2.2.6 Scales
2.2.6.1 Sketching the issue
In order to estimate theoretical uncertainties, it has become customary to vary the renormalization and factorization scales by a common factor up and down. Typically this factor is chosen to be two, and in most cases both scales are changed in parallel, i.e., both are at the same time multiplied by either 2 or 1/2. There is some dispute on whether 2 is a sufficiently large factor to capture all uncertainties and on whether or not the scales should indeed be varied in parallel. There is, however, an additional caveat, which
is more related to the actual choice of scale. Usually, the default scale µ = µF =
µR is determined by identifying it as the “characteristic scale” of the process under
consideration, either given by some intermediate particle’s mass or some function of
the final-state momenta.
In such an arrangement, and at leading order, the factorization scale is typically
interpreted as the scale up to which softer partons and, correspondingly, the emission
of additional partons are ignored in the actual matrix element. Such emissions are
rather subjected to a more inclusive treatment in the evolution of the parton content
of the hadrons from hadronic scales to the actual harder, i.e. perturbative, scale of the
process. When considering more complicated processes, this simple picture is likely to
change drastically. As an example, consider the case of W production in conjunction
with additional partons, defined as jets at leading order. Very broadly speaking, one
may distinguish two extreme cases of how the kinematics of this process works out.
There will be a region, where the W boson is accompanied by softer partons; here,
the parton emission can be thought of as QCD correction to an electroweak process.
This is a case, where one may still think of using mW or similar as the relevant scale
for µF , and, maybe, even for µR . On the other hand, there will also be a region in
phase space, where the partons are produced at transverse momenta, which are much
larger than the W mass. In such a configuration, one may interpret the W emission
as a weak correction to jet production, intrinsically a QCD process. Here, m_W, p_\perp^W,
or similar scales characterising the process would not be a great choice. In fact,
recent work [225] suggests that choices that interpolate between these two regimes,
such as H_T/2, are much better suited. H_T is defined as the scalar sum of the
transverse momenta of all jets and leptons, plus the missing transverse energy,
\[
H_T = \sum_{j\in{\rm jets}} p_{\perp,j} + \sum_{l\in\ell} p_{\perp,l} + \slashed{E}_\perp\, . \qquad(2.121)
\]
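As an illustration of this scale choice, the following is a minimal sketch of how H_T/2 could be evaluated from reconstructed event objects; the input lists, the numerical values, and the `Particle` container are hypothetical stand-ins for whatever event record is actually used.

```python
import math
from dataclasses import dataclass

@dataclass
class Particle:
    px: float
    py: float

    @property
    def pt(self) -> float:
        # transverse momentum from the x/y components
        return math.hypot(self.px, self.py)

def ht_over_two(jets, leptons, met_pt):
    """Scalar sum of jet and lepton transverse momenta plus missing
    transverse energy, divided by two, cf. Eq. (2.121)."""
    ht = sum(j.pt for j in jets) + sum(l.pt for l in leptons) + met_pt
    return 0.5 * ht

# toy W + 2 jets configuration (momenta in GeV, purely illustrative)
jets = [Particle(120.0, 30.0), Particle(-80.0, -45.0)]
leptons = [Particle(35.0, 20.0)]
met_pt = 38.0
print(f"mu = H_T/2 = {ht_over_two(jets, leptons, met_pt):.1f} GeV")
```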
What is clear, though, is that the scale dependence essentially stems from an
inability to calculate cross-sections correctly, i.e., to all orders. Instead, in all calcula-
tions, the perturbative series is truncated at some fixed order, which leaves a residual
logarithmic dependence on the renormalization and factorization scales. With the per-
turbative series thought to be asymptotic, it is usually asserted that this dependence
diminishes with each additional order in the calculation. While this seems to be
by and large correct, cases like W+3 jet production discussed in [225] indicate that
with apparently bad choices of scale, p_\perp^W in this case, this decreasing dependence is not
always realized at next-to-leading order. However, apart from such pathological cases,
higher-order calculations indeed always diminish the scale dependence. It is fair to
state that, in order to have any reasonable and reliable estimate of the corresponding
scale uncertainty, the inclusion of at least one additional perturbative order, i.e., a
next-to-leading order calculation, is mandatory.
Consider, as a concrete illustration, the transverse momentum spectrum in inclusive jet
production initiated by a quark–anti-quark pair (cf. Fig. 2.21), which at leading order
takes the schematic form
\[
\frac{d\sigma^{(\rm LO)}}{dp_\perp} = f_{q/p}(\mu_F)\, f_{\bar q/\bar p}(\mu_F) \otimes \alpha_s^2(\mu_R)\,\hat\sigma^{(0)}\, , \qquad(2.122)
\]
Fig. 2.21 Some leading-order diagrams for inclusive jet production from
a quark–anti-quark initial state.
where σ̂ (0) represents the lowest order partonic cross-section. Including the next-to-
leading order corrections, this can be written as
\[
\frac{d\sigma^{(\rm NLO)}}{dp_\perp} = f_{q/p}(\mu_F)\, f_{\bar q/\bar p}(\mu_F)
\otimes\left[\alpha_s^2(\mu_R)\,\hat\sigma^{(0)}
+ \alpha_s^3(\mu_R)\left(\hat\sigma^{(1)}
+ 2 b_0 \log\frac{\mu_R}{p_\perp}\,\hat\sigma^{(0)}
- 2 P_{qq}\log\frac{\mu_F}{p_\perp}\,\hat\sigma^{(0)}\right)\right], \qquad(2.123)
\]
The dependence on the renormalization scale can be assessed with the help of the running of the strong coupling,
\[
\frac{\partial\alpha_s(\mu_R)}{\partial\log\mu_R} = -b_0\,\alpha_s^2(\mu_R) - b_1\,\alpha_s^3(\mu_R) + \mathcal{O}\!\left(\alpha_s^4\right), \qquad(2.124)
\]
cf. Eq. (2.18), where the two leading coefficients in the β-function, b0 and b1 , are given
in Eq. (2.20). Looking at the terms in Eq. (2.123) it becomes apparent that the µ_R
dependence of the first term in the square bracket, the leading-order contribution, and
that of the term proportional to 2b_0 in the round bracket cancel, such that the remaining
dependence on µ_R is contained in terms of O(α_s^4).
In a similar way, the factorization scale dependence can be calculated using the
non-singlet DGLAP equation. This time, the partial derivative of each parton distribution
function, multiplied by the first term in Eq. (2.123), cancels with the final term. Thus,
once again, the only remaining terms are of O(α_s^4).
This is a generic feature of a next-to-leading order calculation. Any observable
predicted to O(α_s^n) is independent of the choice of either renormalization or factorization
scale, up to the next higher order in the strong coupling, O(α_s^{n+1}). Of course, this
is only a formal statement and the numerical importance of such higher-order terms
may be large. As a concrete example, consider jet production at the TEVATRON Run I.
In that case the numerical values of the partonic cross-sections entering Eq. (2.123)
are σ̂^{(0)} = 24.4 and σ̂^{(1)} = 101.5. Equipped with these values the LO and NLO scale
dependence can be calculated, as shown in Fig. 2.22, adapted from Ref. [585]. In this
case the factorization scale has been kept fixed at µF = p⊥ and only the dependence
on the renormalization scale is shown. The figure indicates the expected result, which
is that the renormalization scale dependence is reduced over a wide range of values for
µR when going from LO to NLO.
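A minimal numerical sketch of this behaviour, using the partonic coefficients quoted above, is given below. The one-loop running of α_s, the value α_s(M_Z) ≈ 0.118, the choice n_f = 5, and the reference p_⊥ are assumptions made purely for illustration and are not taken from the text.

```python
import math

# assumed inputs (illustrative only): one-loop running with nf = 5
ALPHAS_MZ, MZ, NF = 0.118, 91.1876, 5
b0 = (33.0 - 2.0 * NF) / (6.0 * math.pi)   # convention matching Eq. (2.124)

def alpha_s(mu):
    """One-loop solution of d(alpha_s)/d(log mu) = -b0 alpha_s^2."""
    return ALPHAS_MZ / (1.0 + b0 * ALPHAS_MZ * math.log(mu / MZ))

# partonic coefficients quoted in the text for Tevatron Run I jets
sigma0, sigma1 = 24.4, 101.5
pt = 100.0  # assumed jet transverse momentum in GeV

def dsigma_lo(muR):
    return alpha_s(muR) ** 2 * sigma0

def dsigma_nlo(muR):
    # Eq. (2.123) with the factorization scale held fixed at muF = pT,
    # so only the explicit muR logarithm is kept
    a = alpha_s(muR)
    return a**2 * sigma0 + a**3 * (sigma1 + 2.0 * b0 * math.log(muR / pt) * sigma0)

for x in (0.5, 1.0, 2.0):
    mu = x * pt
    print(f"muR = {x} pT : LO = {dsigma_lo(mu):7.3f}   NLO = {dsigma_nlo(mu):7.3f}")
# The spread of the NLO column over the scale variation is visibly smaller
# than that of the LO column, in line with Fig. 2.22.
```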
While the reasoning above, culminating in Fig. 2.22, is a fairly accurate represen-
tative of the situation found at NLO, the exact details depend upon the kinematics
of the process under study and on choices such as the running of αs and the PDFs
used. It is worth noting, though, that due to the actual structure of NLO corrections,
as exemplified in Eq. (2.123), there will normally be a peak in the NLO curve, around
which the scale dependence is minimized. The scale at which this peak occurs is often
favoured as a choice specific for the process, its kinematics, and additional cuts such
as the jet definition. For example, for inclusive jet production at the TEVATRON, using
a cone size of R = 0.7, a central scale of
\[
\mu_F = \mu_R = p_\perp^{\rm jet}/2 \qquad(2.126)
\]
is usually chosen. This is near the peak of the NLO cross-section for a large set of dif-
ferent observables, cf. the specific case shown in Fig. 2.22. Adjusting the scale choice to
be in the region of the peak is often referred to as the “principle of minimal sensitiv-
ity” [864]. It is worth keeping in mind that such a choice is also usually near the scale
at which the LO and NLO curves cross, i.e. for the value of the scale where the NLO
Fig. 2.23 The scale dependence of the cross-section for W bb̄ production
at the 14 TeV LHC (left). A real radiation diagram leading to the large
corrections is also shown (right).
corrections do not significantly change the LO cross-section. Setting the scale by this
means assumes “fastest apparent convergence” (FAC) of the perturbative series [600].
Finally, a rather different motivation comes from the consideration of a "physical"
scale for the process. As examples, take p_\perp^{jet} for the inclusive jet production process or
the W mass in the case of inclusive W production. These typical methods for choosing
the scale do not, in general, agree, leading to somewhat different results depending
on the actual choice and thus to a corresponding theoretical uncertainty. If, on the other
hand, the scales or the respective results do agree (quite often this seems fairly
accidental at first sight), this may be viewed as a sign of a very well-behaved
perturbative expansion.
A word of caution is due at this point. Although the improved scale dependence
sketched out here is typical, it is worth reiterating that this is by no means guaranteed.
In addition to effects due to apparently bad scale choices which do not appreciate the
full intricacy of the kinematics, like the case hinted at above, there is also another
source of potential pitfalls. They are related to cross sections, in particular at the LHC,
which at leading order are driven by quark–anti-quark initial states. Since the LHC
produces an abundance of gluons, real radiation diagrams containing one or maybe
even two gluons in the initial state can give rise to very large NLO corrections. Since
they enter for the first time at NLO, strictly speaking they are some kind of additional
leading-order contribution appearing at higher orders. As such they typically also
give rise to a sizable additional scale dependence. A well-known example is shown in
Fig. 2.23 for the case of W bb̄ production at the 14 TeV LHC. In the absence of gluons
the NLO calculation has the canonical behaviour; in their presence the rate is not
well-controlled due to diagrams such as the one shown in the same figure.
Consider the transverse momentum spectrum of W bosons produced in hadronic
collisions. At LO in collinear factorization, there is no distribution and p_\perp^{(W)} = 0, since
there is no final-state parton to compensate the recoil introduced by a finite p_\perp. So,
strictly speaking, the p_\perp-distribution of the W boson at hadron colliders is, at leading
order, an observable at O(α_s). This, however, is already the next-to-leading order
level for the total cross-section. Not surprisingly, then, the theoretical uncertainty,
i.e., the scale uncertainty, of the total cross-section is smaller than the induced shape
uncertainty due to scale variations of the p⊥ –distribution of the W . To make things
even more confusing, note that, in contrast, the rapidity distribution of the W is
O(α_s^0) = 1 at leading order, since it is driven by the PDFs, which are of course present
at leading order. At the same time, the production cross-section for a Higgs boson in
gluon fusion, gg → H, is O(α_s^2) already at leading order, because the coupling of the Higgs
boson to the gluons proceeds through a quark loop. Even integrating out the heavy
quark does not change this picture: the emerging effective vertex is also proportional
to α_s. In summary, this means that the correct assignment of the perturbative order
as LO, NLO, and so on, is not a matter of merely counting powers of α_s. Instead it is a
process-dependent and, even more confusingly, observable-dependent characterization.
For typical LHC kinematics, the gluon PDF may become significantly larger than the
corresponding quark PDFs. In the case discussed here, this leads to NLO K-factors of
the order of K^{\rm NLO}_Z ≈ 1.3, compared to the significantly smaller
K^{\rm NLO}_{e^+e^-\to{\rm hadrons}} ≈ 1.03.
This example is just the very innocent tip of an iceberg of a number of other
processes, where the opening of additional channels leads to K factors which are
drastically different from one.
Nevertheless, there is yet another class of processes which obtain large NLO cor-
rections, even without opening additional channels. The prime example for this is the
gluon-induced Higgs production, gg → H, where the NLO K-factor is about a factor
of two. The origin of the large K-factor is explained by two effects, that can be seen
by consideration of the virtual amplitude for this process. The one-loop amplitude for
gg → H, working in the large top-mass effective theory, is
\[
\mathcal{M}^{(1)}_{gg\to H} = \mathcal{M}^{(0)}_{gg\to H}\times C_A\, c_\Gamma\,
\frac{\alpha_s}{4\pi}\left(\frac{\mu^2}{Q^2}\right)^{\!\varepsilon}
\left(-\frac{2}{\varepsilon^2}+\frac{11}{3}+\pi^2\right), \qquad(2.129)
\]
where the dimensional reduction scheme has been employed. Due to the analytic
continuation that must be performed in order to evaluate the virtual amplitude for
time-like momentum transfer Q^2, this formula contains a factor of −π^2/2 for every
factor of 1/ε^2 present. This expression is to be contrasted with the relevant part of the virtual
amplitude for W production in the same scheme given in Eq. (3.64),
\[
\mathcal{M}^{(1)}_{u\bar d\to W^+} = \mathcal{M}^{(0)}_{u\bar d\to W^+}\times C_F\, c_\Gamma\,
\frac{\alpha_s}{4\pi}\left(\frac{\mu^2}{Q^2}\right)^{\!\varepsilon}
\left(-\frac{2}{\varepsilon^2}-\frac{3}{\varepsilon}-7+\pi^2\right). \qquad(2.130)
\]
In the formula above the non-pole term is (π 2 − 7), with the π 2 factor resulting
from the analytic continuation mostly cancelled by the numerical constant. However
in Eq. (2.129) there is no such cancellation and the factor of π 2 remains an important
contribution. Moreover, as can be seen from these two formulae, this effect is amplified
by the overall colour factor of CA for the Higgs case compared to CF for Drell–Yan.
Taken together, this explains why the K-factor for Higgs production is much larger
than for the Drell-Yan process. Since the π 2 terms are so important numerically,
renormalization group techniques have recently been used to resum them to all orders
and thus provide improved predictions for the Higgs boson cross-section [130].
transverse momentum. This additional parton, together with the original one, can
produce a summed transverse momentum below the cut, and therefore the W + boson
recoiling against the partonic system will populate the transverse momentum region
below the 25 GeV cut. This has two consequences. Since the region below 25 GeV
originates solely from real radiation events, it should only be trusted as much as a
LO calculation. In addition, the region around the kinematic boundary at 25 GeV is
not well-described. The values of the histogram bins there are simply artifacts of the
calculation and are not reliable. Smoothing out this sort of problem, and providing
hadron-level predictions that can be directly compared with experimental results, is
the domain of the parton shower and resummation.
Another example for this has been discussed in [830], where the term “giant K-
factors” has also been coined. Following previous work, in particular [200, 205, 321, 521]
for the case of vector bosons accompanied by jets, but also, for the case of vector boson
pairs, [252, 311], it has been observed that there are substantial NLO corrections for
observables at large scales of the order of the vector boson masses and beyond, which
actually fall far outside the respective bands obtained from a simple scale variation
by a factor of two applied on the leading-order result. As an example, Fig. 2.25 shows
the NLO correction for the p⊥ -distribution of the additional jet produced in V + j at
the LHC. In the publications dealing with this problem, this apparently tremendous
K factor^{16} has been related to new configurations, like the ones exhibited in Fig. 2.26.
Essentially, the idea there is that for jets at very large transverse momenta compensating
each other, an additional, relatively softer vector boson could be interpreted as
a real electroweak correction to a process that essentially is a QCD process. One of
the strategies for addressing this sort of issue, presented in Ref. [830], will be discussed
further in Section 3.4.2.

^{16} In some loose sense, the term K factor introduced for total cross-sections could also be applied
to the ratio of higher-order and lower-order results for distributions, with the effect that the K factor
then becomes "local".
The relative size of the NLO corrections for a given process depends on the sum of
the Casimir colour factors for the initial state minus the Casimir colour factor for the
biggest colour representation possible for the final state. Again, this is a rule of thumb
and not a rigorous statement. It is also not yet clear whether this argument can be
extended to calculations at NNLO.
Fig. 2.27 Diagrams for the process e− e+ → `+ `− (upper panel) and for
e− e+ → `+ `− γ (lower panel) in QED. The diagrams relating to the Born
term in the upper panel and to the photon emission off an initial line, like
the one in the left lower panel, are relevant for the following discussion.
Diagrams corresponding to photon emission off a final-state leg will be
suppressed by fixing the `+ `− invariant mass.
where Q^2 ≡ M^2 denotes the invariant mass of the virtual photon and ŝ, t̂, and û are
the usual Mandelstam variables, satisfying
\[
\hat s + \hat t + \hat u = M^2 = Q^2 \qquad(2.133)
\]
or
\[
Q^2 - \hat t - \hat u = \hat s\, . \qquad(2.134)
\]
Using
\[
\frac{d\hat\sigma}{d\hat t} = \frac{|\mathcal{M}|^2}{16\pi\hat s^2} \qquad(2.135)
\]
one therefore finds
\[
\frac{d\hat\sigma_{e^-e^+\to\gamma^*\gamma}}{d\hat t}
= \frac{2\pi\alpha^2}{\hat s^2}\,\frac{\hat t^2 + \hat u^2 + 2 Q^2\hat s}{\hat t\,\hat u} \qquad(2.136)
\]
for the differential cross-section. In the following the limit of t̂, û → 0 or, equivalently,
ŝ ≈ Q2 will be considered. Factoring out the cross-section for the production of a
virtual photon,
\[
\hat\sigma^{(\rm LO)}_{e^-e^+\to\gamma^*} = \frac{4\pi^2\alpha}{Q^2} \approx \frac{4\pi^2\alpha}{\hat s}\, , \qquad(2.137)
\]
yields
\[
\frac{d\hat\sigma_{e^-e^+\to\gamma^*\gamma}}{d\hat t}
= \hat\sigma^{(\rm LO)}_{e^-e^+\to\gamma^*}\cdot\frac{\alpha}{2\pi\hat s}\,
\frac{\hat t^2 + \hat u^2 + 2 Q^2\hat s}{\hat t\,\hat u}\, . \qquad(2.138)
\]
This ultimately also allows the replacement of this cross-section for the production of
a virtual photon with the cross-section for producing a lepton pair instead,
\[
\hat\sigma^{(\rm LO)}_{e^-e^+\to\ell^-\ell^+} = \frac{4\pi\alpha^2}{3 Q^2}\, , \qquad(2.139)
\]
to arrive at the double-differential cross-section
\[
\frac{d\hat\sigma_{e^-e^+\to\ell^-\ell^+\gamma}}{d\hat t\, dQ^2}
= \hat\sigma^{(\rm LO)}_{e^-e^+\to\ell^-\ell^+}\cdot\frac{\alpha}{2\pi\hat s Q^2}\,
\frac{\hat t^2 + \hat u^2 + 2 Q^2\hat s}{\hat t\,\hat u}\, . \qquad(2.140)
\]
The limit of small transverse momenta of the virtual photon, or, equivalently, the
lepton pair, Q⊥ → 0, is related to t̂ → 0, û → 0, or both t̂ and û approaching
zero. Due to the symmetry of the cross-section in Eq. (2.140) under the exchange
t̂ ↔ û it is sufficient to only consider the case where one of the two Mandelstam
variables goes to zero, say t̂ → 0, with the other one being less singular, but potentially
approaching zero as well. Assuming that t̂ is the relevant Mandelstam variable allows
the replacements
To capture the potential divergence for û → 0, one must integrate over û. This is
equivalent to integrating over Q2 , which in turn exposes the logarithmic divergence
\[
Q^2 \le \hat s - Q_\perp^2\, . \qquad(2.142)
\]
Keeping in mind that the region Q_\perp^2 \ll \hat s is being analysed therefore results in
\[
\frac{d\hat\sigma_{e^-e^+\to\ell^-\ell^+\gamma}}{dQ_\perp^2}
= \hat\sigma^{(\rm LO)}_{e^-e^+\to\ell^-\ell^+}\cdot\frac{\alpha}{2\pi\hat s}
\int\limits^{\hat s - Q_\perp^2}\frac{dQ^2}{Q^2}\,\frac{\hat s^2 + Q^4}{\hat s - Q^2}
= \hat\sigma_0\,\frac{\alpha}{\pi}\,\frac{1}{Q_\perp^2}\left[\log\frac{\hat s}{Q_\perp^2} + \mathcal{O}(1)\right]
\;\longrightarrow\;
d\hat\sigma_R \approx \hat\sigma_0\,\frac{\alpha}{\pi}\,\frac{dQ_\perp^2}{Q_\perp^2}\,\log\frac{\hat s}{Q_\perp^2}\, .
\qquad(2.143)
\]
with p⊥ some arbitrary but finite cut-off on the maximally allowed transverse momen-
tum of the lepton pair. This integral will diverge due to the collinear divergence that
is manifest in the 1/Q2⊥ term in the real emission term σ̂R . So, naively, one would
now expect that this cross-section diverges. In fact, however, this is not the case; as
discussed in Section 2.2.5 the Bloch–Nordsieck and Kinoshita–Lee–Nauenberg theo-
rems guarantee that this divergence will be cancelled exactly by another divergence
in the corresponding virtual contribution, where again the final state fermions will be
ignored. This results in a finite overall cross-section at order α [257, 678, 724]. In other
words, adding in σ̂V ∝ δ(Q2⊥ ) and normalizing to σ̂0 results in
\[
\Sigma^{(1)}(\hat s) = \frac{1}{\hat\sigma_0}\int\limits_{0}^{\hat s} dQ_\perp^2\,
\frac{d(\hat\sigma_R + \hat\sigma_V)}{dQ_\perp^2} = 1 + \mathcal{O}(\alpha)\, , \qquad(2.145)
\]
with the coefficient of the order-α contribution being free of any potentially large
logarithms. Decomposing the integral as
\[
\frac{1}{\hat\sigma_0}\int\limits_{0}^{\hat s} dQ_\perp^2\,\frac{d(\hat\sigma_R + \hat\sigma_V)}{dQ_\perp^2}
= \frac{1}{\hat\sigma_0}\left[\,\int\limits_{0}^{p_\perp^2} dQ_\perp^2\,\frac{d(\hat\sigma_R + \hat\sigma_V)}{dQ_\perp^2}
+ \int\limits_{p_\perp^2}^{\hat s} dQ_\perp^2\,\frac{d\hat\sigma_R}{dQ_\perp^2}\right], \qquad(2.146)
\]
where the fact that the virtual contribution σ̂V is concentrated in the region of Q2⊥ = 0
has been accounted for, allows one to rewrite
Fig. 2.28 Photon emission off a fermion line. Here, the thick blob repre-
sents the rest of the process, while the fermion line corresponds to one of
the incoming electrons.
\[
\Sigma^{(1)}(p_\perp^2) = \frac{1}{\hat\sigma_0}\int\limits_{0}^{p_\perp^2} dQ_\perp^2\,
\frac{d(\hat\sigma_R + \hat\sigma_V)}{dQ_\perp^2}
\approx 1 - \frac{1}{\hat\sigma_0}\int\limits_{p_\perp^2}^{\hat s} dQ_\perp^2\,\frac{d\hat\sigma_R}{dQ_\perp^2}
\approx 1 - \frac{\alpha}{2\pi}\,\log^2\frac{\hat s}{p_\perp^2}\, .
\]
In the approximations up to now all terms that would yield constant or single logarith-
mic contributions have been ignored — therefore the result obtained in Eq. (2.147) is
correct in the double leading logarithmic approximation (DLLA), also known
as the Dokshitzer–Dyakonov–Troyan approximation (DDT), which was first
developed in [476].
The emission of a photon with momentum k and polarization ε_µ(k) off an incoming
electron line, cf. Fig. 2.28, modifies the amplitude by a factor
\[
e\,\epsilon_\mu(k)\,\bar u(p)\,\gamma^\mu\,\frac{\slashed p + \slashed k}{(p+k)^2}\,\ldots
\;\approx\; e\,\bar u(p)\,\frac{p\cdot\epsilon}{p\cdot k}\,\ldots\, , \qquad(2.147)
\]
where terms have been ignored that are finite in the soft limit k → 0. This result
emerges from using both the anti-commutator of the Dirac matrices and, subsequently,
the equation of motion for the spinor.
In the same way, the two photon contributions can be written as a factor
\[
\frac{e^2}{2!}\,\frac{p\cdot\epsilon_1}{p\cdot k_1}\,\frac{p\cdot\epsilon_2}{p\cdot k_2}\,\ldots\, , \qquad(2.148)
\]
where the factor of 1/(2!) stems from the proper symmetrization of the two identical
particles, the two photons, in the final state. From there, it is straightforward to see
the form of the n-photon contribution,
\[
\frac{e^n}{n!}\,\frac{p\cdot\epsilon_1}{p\cdot k_1}\,\frac{p\cdot\epsilon_2}{p\cdot k_2}\cdots
\frac{p\cdot\epsilon_n}{p\cdot k_n}\,\ldots\, . \qquad(2.149)
\]
The corresponding n-photon contribution to the scaled cross-section reads
\[
\Sigma^{(n)}(p_\perp^2) = \frac{1}{n!}\left[-\int\limits_{p_\perp^2}^{\hat s}
d\log Q_\perp^2\;\frac{\alpha}{\pi}\,\log\frac{\hat s}{Q_\perp^2}\right]^{n}
= \frac{1}{n!}\left[-\frac{\alpha}{2\pi}\,\log^2\frac{\hat s}{p_\perp^2}\right]^{n}, \qquad(2.150)
\]
taking into account the leading logarithms only. This implies that
\[
\Sigma(Q_\perp^2) = \sum_{n=0}^{\infty}\frac{1}{n!}\left[-\frac{\alpha}{2\pi}\,\log^2\frac{\hat s}{Q_\perp^2}\right]^{n}
= \exp\left[-\frac{\alpha}{2\pi}\,\log^2\frac{\hat s}{Q_\perp^2}\right] . \qquad(2.151)
\]
Inspection reveals that in this expression the resummation of the double leading log-
arithms has indeed tamed the divergence for Q2⊥ → 0, and in fact the cross-section
tends to zero in this limit. It can be shown that this very nice suppression in fact is
too strong and sub-leading logarithms will ameliorate this situation.
Σ(Q2⊥ ) is called the Sudakov form factor, and it encodes the probability for
not emitting a photon with a transverse momentum larger than Q⊥ . This probability
tends to zero for Q⊥ → 0; in other words it is impossible not to emit photons with
arbitrary small transverse momentum.
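The suppression encoded in Eq. (2.151) is easy to check numerically. The following is a minimal sketch evaluating the double-leading-logarithmic form factor for an assumed, purely illustrative value of the coupling and of ŝ; neither number is taken from the text.

```python
import math

alpha = 1.0 / 137.0      # assumed QED coupling, illustrative
s_hat = 91.0 ** 2        # assumed hard scale in GeV^2, illustrative

def sudakov_dlla(qt2):
    """Eq. (2.151): probability of emitting no photon with transverse momentum above sqrt(qt2)."""
    return math.exp(-alpha / (2.0 * math.pi) * math.log(s_hat / qt2) ** 2)

for qt in (30.0, 10.0, 1.0, 1e-2, 1e-4, 1e-6):
    print(f"Q_perp = {qt:10.4g} GeV   Sigma = {sudakov_dlla(qt**2):.4f}")
# Sigma -> 1 as Q_perp approaches the hard scale and tends to zero as
# Q_perp -> 0: it becomes impossible not to radiate arbitrarily soft photons.
```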
Up to now, the individual photon emissions have explicitly been treated as indepen-
dent, uncorrelated processes, which is not true. After all, the transverse momentum
Q⊥ of the lepton pair is given by the sum of all individual emissions,
\[
\vec Q_\perp = -\sum_{i=0}^{n}\vec k_{\perp,i}\, . \qquad(2.153)
\]
This constraint can be cast in the form of a δ-function, which in turn can be expressed
through a Fourier transform to impact parameter space. The impact parameter is
conjugate to the transverse momentum,
\[
\delta^2\!\left(\vec Q_\perp + \sum_{i=0}^{n}\vec k_{\perp,i}\right)
= \int\frac{d^2 b_\perp}{(2\pi)^2}\,
\exp\!\left[\,i\,\vec b_\perp\cdot\left(\vec Q_\perp + \sum_{i=0}^{n}\vec k_{\perp,i}\right)\right] . \qquad(2.154)
\]
with y and Q⊥ the rapidity and transverse momentum of the singlet. The factor π
on the right-hand side of the equation stems from the integration over the azimuthal
angle of the produced system X. The leading-order cross-section for the production of
the singlet X would of course be identified with the W-production cross-section.
Eq. (2.160) already exhibits the choice of scale at which the PDFs are evaluated,
namely 1/b_\perp, as seen in the expression below for \tilde W_{ij},
\[
\tilde W_{ij}(b; Q, x_A, x_B) =
f_{i/A}\!\left(x_A, \frac{1}{b_\perp}\right) f_{j/B}\!\left(x_B, \frac{1}{b_\perp}\right)
\exp\left[-\int\limits_{1/b_\perp^2}^{Q^2}\frac{dk_\perp^2}{k_\perp^2}\,
A(k_\perp^2)\,\log\frac{Q^2}{k_\perp^2}\right]. \qquad(2.162)
\]
The sum runs over all relevant parton flavours i and j contributing to the production
of X. The energy fractions with respect to the incoming hadrons A and B, xA and
xB , are fixed by the rapidity of the singlet system,
\[
x_{A,B} = \frac{M_X}{\sqrt s}\, e^{\pm y}\, , \qquad(2.163)
\]
which is also the solution for the zeroes of the residual δ-function in Eq. (2.107).
Comparing terms with previous equations also shows that the exponential in the
resummation part \tilde W_{ij} is nothing but the Sudakov form factor in k_\perp space. Thus,
the term A(k_\perp^2), at the logarithmic order considered up to now, i.e. double leading
logarithms or the DDT approximation,^{18} and ignoring the potentially dangerous region of
large b_\perp, is given by
\[
A(k_\perp^2) = C_F\,\frac{\alpha_s(k_\perp^2)}{\pi}\, . \qquad(2.165)
\]
^{18} In fact, in the original DDT result, the Sudakov form factor was given in a form also including
some universal sub-leading logarithms (the term 3/2):
\[
\Sigma_{\rm DDT}(q_T, Q) = \exp\left[-\int\limits_{q_T^2}^{Q^2}\frac{dk_\perp^2}{k_\perp^2}\,
C_F\,\frac{\alpha_s(k_\perp^2)}{\pi}\left(\log\frac{Q^2}{k_\perp^2}-\frac{3}{2}\right)\right], \qquad(2.164)
\]
With this in mind, the master equation for Q_\perp resummation in the CSS formalism
discussed up to now reads
\[
\frac{d\sigma_{AB\to X}}{dy\, dQ_\perp^2} = \sum_{ij}\pi\,\hat\sigma^{(\rm LO)}_{ij\to X}
\left[\,\int\frac{d^2 b_\perp}{(2\pi)^2}\,\exp(i\,\vec b_\perp\cdot\vec Q_\perp)\,
\tilde W_{ij}(b; Q, x_A, x_B)
+ Y_{ij\to X}(Q_\perp; Q, x_A, x_B)\right], \qquad(2.166)
\]
where W̃ij and Yij→X will be given in Eqs. (2.167) and (2.170), respectively.
For the resummation bit, increasing the precision amounts to extending the func-
tion W̃ij such that
\[
\begin{aligned}
\tilde W_{ij}(b; Q, x_A, x_B) = \sum_{ab}&
\int\limits_{x_A}^{1}\frac{d\xi_A}{\xi_A}\int\limits_{x_B}^{1}\frac{d\xi_B}{\xi_B}\;
f_{a/A}\!\left(\xi_A, \frac{1}{b_\perp}\right) f_{b/B}\!\left(\xi_B, \frac{1}{b_\perp}\right)\\
&\times C_{ia}\!\left(\frac{x_A}{\xi_A}, b_\perp; \mu\right)
C_{jb}\!\left(\frac{x_B}{\xi_B}, b_\perp; \mu\right)
H_{ab}\!\left(\frac{x_A}{\xi_A}, \frac{x_B}{\xi_B}; \mu\right)\\
&\times\exp\left[-\int\limits_{1/b_\perp^2}^{Q^2}\frac{dk_\perp^2}{k_\perp^2}
\left(A(k_\perp^2)\,\log\frac{Q^2}{k_\perp^2} + B(k_\perp^2)\right)\right].
\end{aligned}\qquad(2.167)
\]
All terms A, B, C, and H can be expanded in a perturbative series, which will define
the formal accuracy of the result in terms of the logarithmic order (LL, NLL, . . . )
and the fixed-order (LO, NLO, . . . ) accuracy. Note that the origin of the terms A and
B can also be traced to the splitting functions, see also Sections 2.3.3 and 5.2. This
expansion yields, for example,
\[
A(\mu) = \sum_{N}\left(\frac{\alpha_s(\mu)}{2\pi}\right)^{N} A^{(N)} \qquad(2.168)
\]
and similarly for the other terms. For instance, by direct comparison with the previous
result, A^{(1)} = 2C_F. In contrast to A^{(1)}, however, the result for B^{(1)} depends on the
lower limit of the k_\perp^2 integration. Choosing it as (2e^{-\gamma_E}/b_\perp)^2 = (b_0/b_\perp)^2 leads
to B^{(1)} = -3C_F/2, while in [410] it was given by B^{(1)} = 2C_F\log\!\left(e^{\gamma_E - 3/4}/2\right).
The expansion of the C and H terms starts at N = 0, with
\[
C^{(0)}_{ia}(z) = \delta_{ia}\,\delta(1-z)\, , \qquad
H^{(0)}_{ab}(z_A, z_B; \mu) = \delta_{ia}\,\delta_{jb}\,\delta(1-z_A)\,\delta(1-z_B)\, , \qquad(2.169)
\]
reflected in the corresponding colour factor, CF . For other processes, such as, e.g., the
production of a Higgs boson in gluon fusion, gg → H, these terms would be propor-
tional to CA , a factor of two larger, apart from sub-leading colour corrections. This
reflects of course the fact that, trivially, a gluon has two colour degrees of freedom,
while a quark has only one such degree of freedom. A more detailed discussion of
the technical steps summarized here, and an application of this formalism to a vari-
ety of processes, will be presented in Chapter 5. There, other different resummation
techniques will also be briefly discussed.
The finite remainders Y can be expanded in a similar series,
\[
Y_{ij\to X}(Q_\perp; Q, x_A, x_B) =
\int\limits_{x_A}^{1}\frac{d\xi_A}{\xi_A}\int\limits_{x_B}^{1}\frac{d\xi_B}{\xi_B}\,
f_{i/A}(\xi_A,\mu)\, f_{j/B}(\xi_B,\mu)
\left\{\sum_{N}\left(\frac{\alpha_s}{2\pi}\right)^{N}
R^{(N)}_{ij\to X}\!\left(Q, \frac{x_A}{\xi_A}, \frac{x_B}{\xi_B}\right)\right\}, \qquad(2.170)
\]
with the first non-trivial terms R^{(1)}_{ij\to X} different from zero listed in the relevant sections
of Chapter 5.
In principle, scales in the hard remainder could be chosen differently from the
choices made in the resummation term; this has been made explicit by introducing
separate factorization and renormalization scales µF and µR . In addition, the hard
scale Q can be identified through Q = MX ≡ mW and the Mandelstam variables
which will emerge in the functions Rij→X can be expressed through the other kinematic
parameters as
\[
\hat s = \frac{Q^2}{\xi_A\,\xi_B}
\qquad\text{and}\qquad
\hat t,\hat u = \left(1 - \frac{\sqrt{1 + Q_\perp^2/Q^2}}{\xi_{B,A}}\right) Q^2\, . \qquad(2.171)
\]
This is typically cured by modifying the Sudakov form factor in impact parameter
space. A naive way of achieving this is by adding a soft form factor through a function
ρ, multiplying the Sudakov form factor for all values of b⊥ . A typical parameterization
of this function ρ would look like
\[
\rho(b_\perp) = \exp\left(-\frac{b_\perp^2}{4A}\right), \qquad(2.172)
\]
where A is usually identified with the average intrinsic transverse momentum of the
incident partons due to Fermi motion inside the nucleon or similar effects. The effect
of such a modification is negligible for small b_\perp, but it effectively amounts to a
dampening for large b_\perp or, equivalently, small k_\perp.
Of course, there are more methods to tame the divergent behaviour of QCD around
the Landau pole, which will not be further discussed here.
the annihilation of electron–positron pairs into hadrons. This additional example will
further highlight the versatility of the resummation approach, and it will also motivate
why parton shower Monte Carlo event generators do a fairly decent job in describing
the bulk of existing data.
As a first step the Sudakov form factor, encoding the resummation of multiple
emissions, needs to be re-examined. This will lead to an alternative explanation for
the specific form of the terms A^{(1)} and B^{(1)}. To see how this works, consider gluon
emission off a quark in the approximation where the gluon is collinear with the quark.
Then, the respective matrix element is well approximated by the splitting function
P_{qq}(z). This splitting function would emerge when also keeping terms of order
k^\mu/(p\cdot k) in Eq. (2.147). Correspondingly, the emission factor \nu^{(\rm QCD)} in Eq. (2.159) becomes
\[
\nu^{(\rm QCD)}(k_\perp) = C_F\,\frac{\alpha_s}{\pi}\,\frac{1}{k_\perp}\,\log\frac{Q^2}{k_\perp^2}
\;\longrightarrow\;
\nu^{(\rm QCD)}_{q\to qg}(k_\perp; z) = C_F\,\frac{\alpha_s}{\pi}\,\frac{1}{k_\perp}\,P_{qq}(z)\, , \qquad(2.175)
\]
with
\[
P_{qq}(z) = \frac{1+z^2}{1-z}\, . \qquad(2.176)
\]
The integration over k_\perp must also be supplemented with one over z. This integration
needs further manipulation because of the divergent behaviour of the splitting functions
for z → 1 or z → 0, which must be regularized. While this is typically achieved through
the "+"-prescription, cf. Eq. (2.33), another way to guarantee finite integrals will be
pursued here.

In the application of the resummation formalism to the description of jet production,
the emitted partons, the gluons, must be resolved, for example by demanding
that they have a minimal transverse momentum Q_0 with respect to the emitting quark,
k_\perp > Q_0. Momentum conservation ensures that the momentum fraction they carry
away from the quark must be non-zero, and the residual momentum fraction of the
quark therefore must be smaller than 1 by an amount ε = k_\perp^2/Q^2, if Q is the
scale of the quark momentum. This allows one to drop the "+"-prescription in the splitting
function and, correspondingly, the δ-function compensating for it. From now on, this
form of modification will be indicated by the corresponding replacement of the splitting
functions P_{ij} with their regularized counterparts.
Thus the Sudakov form factor can be rewritten for this specific case as
\[
S(Q, Q_0) = \exp\left[-\int\limits_{Q_0^2}^{Q^2}\frac{dk_\perp^2}{k_\perp^2}\,
\frac{\alpha_s(k_\perp^2)}{2\pi}\int\limits_{0}^{1-\varepsilon} dz\, P_{qq}(z)\right] \qquad(2.177)
\]
Here, for the sake of the following discussion, the integrated splitting function Γ
has been introduced. From these results, the coefficients A^{(1)} = C_F and
B^{(1)} = -\frac{3}{2}C_F are readily read off. Of course, they are identical to the coefficients
in the standard resummation procedure outlined in Section 2.3.2.
How can this be interpreted? In Section 5.3 it will be argued that the Sudakov
form factor is nothing but a probability for a given particle not to radiate a
secondary particle between the two scales Q2 and Q20 . This can be motivated by
realizing that
(a) its kernel is related to the emission of a particle;
(b) its form as an exponential of a negative definite argument guarantees that it takes
values between 0 and 1, as required for a probability.

The integrated splitting functions for quarks and gluons take the form
\[
\Gamma_{q,g}(Q^2, q^2) = A^{(1)}_{q,g}\,\log\frac{Q^2}{q^2} + B^{(1)}_{q,g}\, . \qquad(2.181)
\]
Note that in the case of gluons the corresponding integrated splitting function consists
of two parts, the g → gg and g → q q̄ splittings. This leads to Sudakov form factors
given by
\[
\Delta_{q,g}(Q^2, q^2) = \exp\left[-\frac{\alpha_s}{2\pi}\left(A^{(1)}_{q,g}\,\log^2\frac{Q^2}{q^2}
+ B^{(1)}_{q,g}\,\log\frac{Q^2}{q^2}\right)\right] \qquad(2.182)
\]
for a fixed α_s, or
\[
\Delta_{q,g}(Q^2, q^2) = \exp\left[-\frac{\alpha_s}{2\pi}\left(A^{(1)}_{q,g}\,\log^2\frac{\alpha_s(Q^2)}{\alpha_s(q^2)}
+ B^{(1)}_{q,g}\,\log\frac{\alpha_s(Q^2)}{\alpha_s(q^2)}\right)\right] \qquad(2.183)
\]
for a running coupling.
In order to obtain the three-jet rate, it is important to realize that the probability
density dP_{\rm rad}(q_\perp^2)/dq_\perp^2 to have a radiation off a quark at q_\perp is given by
\[
\frac{dP_{\rm rad}(q_\perp^2)}{dq_\perp^2}
= -\,\frac{d\Delta_q(Q^2, q_\perp^2)}{dq_\perp^2}
= C_F\,\frac{\alpha_s(q_\perp^2)}{\pi}\,\frac{1}{q_\perp^2}\,
\Gamma_q(Q^2, q_\perp^2)\,\Delta_q(Q^2, q_\perp^2)\,
\Theta(Q^2 - q_\perp^2)\,\Theta(q_\perp^2 - Q_{\rm cut}^2)\, . \qquad(2.185)
\]
Here, the Θ-functions guarantee that q_\perp^2 \in [Q_{\rm cut}^2, Q^2].
This allows one to write the three-jet rate as
\[
R_3(Q_{\rm cut}) = 2\,\Delta_q(Q^2, Q_{\rm cut}^2)
\int\limits_{Q_{\rm cut}^2}^{Q^2}\frac{dq_\perp^2}{q_\perp^2}
\left(\frac{\alpha_s(q_\perp^2)}{\pi}\, C_F\,\Gamma_q(Q^2, q_\perp^2)\,
\frac{\Delta_q(Q^2, Q_{\rm cut}^2)}{\Delta_q(Q^2, q_\perp^2)}\right)
\Delta_q(q_\perp^2, Q_{\rm cut}^2)\,\Delta_g(q_\perp^2, Q_{\rm cut}^2)\, , \qquad(2.186)
\]
where the term in the round brackets accounts for the emission of the gluon off one of
the two quark lines, and the two additional Sudakov form factors ensure that the two
offspring of this splitting do not radiate further. The ratio of Sudakov form factors
can be interpreted as the probability for the intermediate quark line not to experience
any radiation resolvable above Q_{\rm cut} between Q^2 and q_\perp^2.
Similar expressions can also be constructed for higher jet multiplicities. A conve-
nient way to do this is by the introduction of generating functionals, cf. [345] for more
details.
1. Staircase scaling, which is characterized by the ratios of successive exclusive jet
cross-sections,
\[
R_{(n+1)/n} = \frac{\sigma_{n+1}}{\sigma_n} \equiv R
\qquad\text{or}\qquad
R_{(n+1)/n} = \frac{R_{n+1}}{R_n} = R
\qquad\text{with}\qquad R_n = \frac{\sigma_n}{\sigma_{\rm tot}}\, , \qquad(2.187)
\]
being constant, and where R depends on the core process and the requirements
on the jets only, but not on the multiplicity of jets. This pattern is also known as
Berends scaling [223], and it has been observed for example by UA2 [140], at
the TEVATRON [107], and the LHC [20, 364].
2. Poisson scaling, where the exclusive jet rates R_n behave like
\[
R_n = \frac{\bar n^{\,n}\, e^{-\bar n}}{n!}
\qquad\text{or}\qquad
R_{(n+1)/n} = \frac{\bar n}{n+1}\, . \qquad(2.188)
\]
Such patterns emerge when individual events, in the case at hand jet formation
through hard parton radiation, repeat themselves and are independent from each
other.
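The two patterns are easy to contrast numerically. The sketch below, with arbitrarily chosen values of R and n̄ used purely for illustration, tabulates the jet-rate ratios of Eqs. (2.187) and (2.188).

```python
import math

R_stair = 0.2   # assumed constant ratio for staircase scaling (illustrative)
n_bar = 1.5     # assumed Poisson mean number of jets (illustrative)

def staircase_rate(n):
    """Exclusive n-jet rate for staircase scaling, R_n ~ R^n up to normalization."""
    return R_stair ** n

def poisson_rate(n):
    """Exclusive n-jet rate for Poisson scaling, Eq. (2.188)."""
    return n_bar ** n * math.exp(-n_bar) / math.factorial(n)

print(" n   R_(n+1)/n staircase   R_(n+1)/n Poisson")
for n in range(0, 5):
    r_stair = staircase_rate(n + 1) / staircase_rate(n)   # constant = R
    r_pois = poisson_rate(n + 1) / poisson_rate(n)        # = n_bar / (n + 1)
    print(f"{n:2d}   {r_stair:18.3f}   {r_pois:18.3f}")
# Staircase scaling gives a constant ratio, while for Poisson scaling the
# ratio falls like n_bar/(n+1) with increasing jet multiplicity.
```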
In order to see how this pattern emerges, consider the subsequent emission of two
gluons off a primary quark line, given by
\[
\sigma^{(1)}_{q\to qgg} \propto \frac{1}{2}
\int\limits_{Q_0^2}^{Q^2} dt\;\Gamma_{q\to qg}(Q^2, t)\,\Delta_g(Q^2, t)
\int\limits_{Q_0^2}^{Q^2} dt'\;\Gamma_{q\to qg}(Q^2, t')\,\Delta_g(Q^2, t')\, , \qquad(2.189)
\]
where Q0 represents the jet resolution scale and Q is the hard scale of the process,
and the factor of 1/2 accounts for the ordering of emissions. The Sudakov form factors
guarantee that the gluons do not experience any further splitting, and they form jets.
Similarly, one could have a second contribution, where the second gluon emerges from
a secondary gluon splitting,
\[
\sigma^{(2)}_{q\to qgg} \propto
\int\limits_{Q_0^2}^{Q^2} dt\;\Gamma_{q\to qg}(Q^2, t)\,\Delta_g(Q^2, t)
\int\limits_{Q_0^2}^{t} dt'\;\Gamma_{g\to gg}(t, t')\,\Delta_g(t, t')\, . \qquad(2.190)
\]
indicating that the pattern of subsequent, primary emissions off the quarks is
enhanced with respect to the secondary gluon splittings. This is the limit of in-
dependent emissions, therefore exhibiting Poisson scaling.
2. \frac{\alpha_s}{\pi}\log^2\frac{Q}{Q_0} \ll 1. In this limit
\[
\sigma^{(1)}_{q\to qgg} \propto \frac{\alpha_s^2}{4(2\pi)^2}\,\log^4\frac{Q}{Q_0}
+ \mathcal{O}\!\left(\alpha_s^3\log^6\frac{Q}{Q_0}\right)
\propto \sigma^{(2)}_{q\to qgg}\, , \qquad(2.192)
\]
and primary and secondary emissions off the quark or gluon, respectively, con-
tribute in roughly equal, democratic measure, with relatively small emission prob-
ability. This modifies the Poisson scaling. In QED such a non-Abelian contribution
from secondary emissions is absent, and as a consequence Poisson scaling is not
modified in QED.
At this point it should be clear that in order to observe Poisson scaling, the individual
emissions must be decoupled, to be treated as independent. This can be enforced by
selecting events with a strong hierarchy, for example by demanding that the hardest
jet has very large transverse momentum, while all other jets can be much softer. In
such configurations, the large scale ratio of the hard jet to the softer ones guarantees
large logarithms. Increasing the number of jets leads to more scale ratios with vanish-
ing logarithms, and this also introduces a jet emission phase space that is increasingly
constrained through momentum conservation. As an overall effect, the absence of log-
arithmic enhancement means that the emissions cannot be treated as independent
any more, negating the condition for Poisson scaling, and staircase scaling patterns
emerge.
2.4 Summary
In this chapter the technology underlying every discussion of high-energy phenomena
at hadron colliders based on first principles, perturbative methods, has been intro-
duced. In order to calculate cross-sections, the idea of factorization must be invoked.
This idea is at the heart of perturbative QCD at hadron colliders. First of all, factor-
ization guarantees that the partons, the constituents of the protons, can be treated as
quasi-free particles. This is the case if the characteristic time-scales related to the pro-
cess probing them are sufficiently smaller than the typical response time of the strong
field. Only then, when the partons can be treated as quasi-free, can they be quantized
like any other field in quantum field theory. This allows a perturbative expansion,
typically represented by Feynman diagrams, which is a systematic treatment based on
the Lagrangian of QCD. At the same time, the validity of factorization allows a de-
termination of the parton distribution functions (PDFs) in a process-independent way
which can be used to evaluate cross-sections for all other processes. At leading order
the PDF fa/h (x, µF ) describes the probability to find a parton a in hadron h with a
momentum fraction x at the factorization scale µF . Since they can be related to the
bound-state structure of the respective hadron they fulfil a number of sum rules, which
in turn act as theoretical constraints. While the PDFs are non-perturbative objects,
their evolution with the factorization scale is governed by the perturbative DGLAP
equations. This evolution in turn can be employed to understand multiple softer emis-
sions. In particular, initial-state radiation can be interpreted as the breakdown of the
coherence of the multi-particle Fock state of the incident particles, where the quantum
fluctuations populating the Fock state are governed by the evolution equation.
At the same time, final-state radiation leads to a proliferation of particles in the
final state by repeated emissions of secondary quanta. Again, the behaviour of the ra-
process with one extra parton together with the W in the final state merely yields
the leading-order expression for this observable. In order to increase the precision one
could, again, invoke higher-order corrections. However the validity of a normal NLO
calculation would be limited to non-negligible values of the W transverse momentum.
This is because, already at leading order, the differential cross-section with respect
to the transverse momentum of the W diverges for small transverse momenta. This
divergence appears again at any fixed order of perturbation theory. However, a careful
analysis of the situation indicates that a finite result can be obtained for small values
of the W transverse momentum by resumming the soft and/or collinear limits of extra
parton emission to all orders.
3
QCD at Fixed Order: Technology
In this chapter, the perturbative description at fixed order of processes at the LHC
and other hadron collider experiments, outlined in Chapter 2, will be further discussed.
In particular, theoretical issues related to calculations for more complex final states
will be addressed. First, the terminology used in this book to denote the accuracy of
a given calculation is described in Section 3.1.
Section 3.2 is then devoted to a discussion of the technology used in fixed-order
calculations, in particular for the calculation of multi-particle final states at leading
order (LO).
This is followed by a discussion of techniques employed in next-to-leading-order
calculations in Section 3.3, containing a presentation of subtraction methods for the
treatment of the infrared divergences, which facilitate their mutual cancellation between
virtual and real corrections, and of advanced methods to evaluate the former for
multi-particle final states.
In the final part of this chapter, Section 3.4, some ideas are presented on how even
higher orders in perturbation theory can be treated.
diagram also predicts a non-trivial rapidity distribution. On the other hand, since
the partons are collinear with the incoming protons, this diagram will not produce
any transverse momentum distribution of the W boson. Instead it will generate such
distributions only for its decay products. In order to have a leading-order expression
for the p_\perp distribution of the W boson, real-emission diagrams such as the one in the
middle panel must be evaluated. There the boson can recoil against the additional
parton. As seen in the previous chapter, for p_\perp^{(W)} → 0 the expression related to this
diagram diverges, a problem that actually persists for each fixed-order calculation and
which can only be resolved through the use of resummation. However, this diagram
not only yields a leading-order result for p_\perp^{(W)} but it is also part of the next-to-leading-
order calculation of the total cross-section or of observables such as the boson rapidity
or the kinematical distributions of the W boson’s decay products. This situation is
summarized in Table 3.1.
Recalling the form of the leading-order cross-section,
\[
\begin{aligned}
\sigma^{(\rm LO)} &= \sum_{a,b}\int\limits_0^1 dx_a\, dx_b\;
f_{a/h_1}(x_a,\mu_F)\, f_{b/h_2}(x_b,\mu_F)\int d\hat\sigma_{ab\to n}(\mu_F,\mu_R)\\
&= \sum_{a,b}\int\limits_0^1 dx_a\, dx_b\int d\Phi_n\;
f_{a/h_1}(x_a,\mu_F)\, f_{b/h_2}(x_b,\mu_F)\,\frac{1}{2\hat s}\,
\bigl|\mathcal{M}_{ab\to n}\bigr|^2(\Phi_n;\mu_F,\mu_R)\, ,
\end{aligned}\qquad(3.1)
\]
it is fairly obvious that already for this simplest result two major obstacles must be
faced.
First of all, the squared matrix element |Mab→n |2 must be evaluated. Since the
number of diagrams increases very quickly, typically faster than factorial, the tradi-
tional methods encountered so far become impractical. Squaring the amplitudes, using
the completeness relations to arrive at traces which may be evaluated analytically is
just too complex an operation. As a consequence, in modern techniques the focus is
on evaluating individual amplitudes as functions of their internal and external de-
grees of freedom, which means that every amplitude becomes just a complex number
and in turn renders their summation and squaring a straightforward exercise. Modern
algorithms to achieve this are introduced in Section 3.2.1.
Second, the complicated structure of the phase space resulting in an integral in
(3n − 2)-dimensions, possibly supplemented with complicated cuts, renders this high-
dimensional integration impossible to be evaluated analytically, even if the PDFs were
tractable in this way. As this is not the case, mainly numerical methods must take
over, both in the evaluation of the matrix elements and in the phase-space integration.
For the latter, essentially only Monte Carlo integration techniques are viable, since
for them the error estimate scales like 1/√N with the number N of integrand evalua-
tions, independent of the number of dimensions.^1 A general discussion of Monte Carlo
techniques can be found in Section 3.2.2.
Sampling in such a way over the phase-space degrees of freedom to conveniently
obtain a numerical estimate for the cross-section — the actual result — begs the
question, how far sampling can also be extended to other degrees of freedom such as
particle spins, colours, or similar. In other words, for the matrix element evaluation,
a choice between summation and sampling over quantum degrees of freedom must
be made. Since the computation times for summation and sampling naively differ by
the s-th power of the number of possible states per particle, with s the number of
external legs, usually ≈ 2^s for the possible helicity assignments and ≈ 3^s ... 8^s for the
possible colour assignments, dependent on whether

^{1} For traditional integration methods such as trapezoid quadratures or similar, the number of
dimensions enters such that usually the error scales like N^{-k/n}, where n is the number of integral
dimensions and k > 0 depends on the method of choice.
the particles in question are quarks or gluons, this may have a significant impact on
the overall evaluation time, especially if there are strong correlations also with the
phase space, which would thus favour a simultaneous sampling over external (phase
space) and internal (spins, colours, etc.) degrees of freedom. By suitably eliminating
common sub-expressions in the matrix elements, however, these naive factors can often
be reduced quite considerably, which renders this choice strongly dependent on the
process and the particle multiplicity in question.
^{2} As an analogy, one may think about allowed and suppressed electromagnetic transitions of excited
atoms, which are identified with electric (E) and magnetic (B) transitions. In the Gordon representation,
the latter scale with a term proportional to the particle mass.
evaluated once for both diagrams depicted and for similar ones. Using such recycling
recursively will reduce the number of complex multiplications like the ones in the
spinor products and thus increase the efficiency of the calculation, especially when such
common sub-amplitudes are identified in the construction of the helicity amplitudes
and factored out from the beginning.
In order to identify and isolate such sub-amplitudes, it is important to realize that
tensor structures in Dirac space in the numerators of propagators can be re-expressed
through spinors as follows:
\[
\slashed p + m = \frac{1}{2}\sum_{\lambda}
\left[\left(1 + \frac{m}{\sqrt{p^2}}\right) u(p,\lambda)\,\bar u(p,\lambda)
+ \left(1 - \frac{m}{\sqrt{p^2}}\right) v(p,\lambda)\,\bar v(p,\lambda)\right]. \qquad(3.2)
\]
A similar relation also holds true for the propagator numerators for vector particles,
with the added complication of gauge choices for gauge bosons. Typically to fix the
gauge, a light-like gauge vector q µ is introduced, essentially fixing the axis of an axial
gauge and resulting in
\[
-g_{\mu\nu} + \frac{q_\mu p_\nu + q_\nu p_\mu}{p\cdot q}
= \sum_{\lambda=\pm}\epsilon_\mu(p,\lambda)\,\epsilon^*_\nu(p,\lambda)\, . \qquad(3.3)
\]
As further discussed in Appendix A.2, this allows us to rewrite the polarization vectors
as spinor products such that the full amplitude can be recast in the form of spinor
products.
straightforward to implement. There are different ways to represent spinors and their
products, as discussed in more detail in Appendix A.2.
One frequently used spinor representation is based on two-component spinors,
also known as Weyl spinors, which indeed form a basic representation of the Lorentz
group. To see, in short, how this works, it should be realized that the Lorentz group
is generated by the usual six infinitesimal boost and rotation transformations along
and around the x, y, and z axes, respectively. While individually the commutators of
these generators form a somewhat nonintuitive algebra, they can be rearranged into a
set of six linearly independent linear combinations which decompose into two SU (2)
algebras. In other words, the Lorentz group is a (locally isomorphic) product of two
independent SU (2) groups. These groups, in turn have their first non-trivial represen-
tation through two-component objects, or Weyl spinors, associated with left-handed
and right-handed spinors. The components of these two kinds of Weyl spinors are
indicated by undotted and dotted indices a and ȧ, each of which ranges only from
1 to 2, a, ȧ ∈ {1, 2}.
For instance, for a massless four-vector k,
\[
\zeta_a(k) = \begin{pmatrix}\sqrt{k^+}\\ \sqrt{k^-}\, e^{i\phi_k}\end{pmatrix}, \qquad(3.4)
\]
where the latter relation holds true because trivially one could identify ηȧ = ηa∗ and
where the fact that the individual spinor components must be Grassmann numbers,
anti-commuting, is encapsulated in the "spinor" metric ε_{ab} given by
\[
\epsilon_{ab} = \epsilon^{ab} = \epsilon_{\dot a\dot b} = \epsilon^{\dot a\dot b}
= \begin{pmatrix} 0 & 1\\ -1 & 0\end{pmatrix}. \qquad(3.6)
\]
Note that it has been customary to use the momentum label of the spinors in the two
scalar products and to identify whether it is over the dotted or undotted spinors only
by the shape of the bracket, a convenient short-hand notation. Using them, regular
massive Dirac fermion spinors known from the usual textbook methods can be written
as two such Weyl spinors, arranged in a bi-spinor; this, however, introduces a “gauge”-
like degree of freedom into their definition.
where σ^0 = 1 is the two-dimensional unit matrix and the σ^i (i ∈ {1, 2, 3}) are
the Pauli matrices. By introducing a light-like gauge vector q, polarization vectors for
external massless particles with four-momentum p^µ can be written as
\[
\epsilon^{\mu}_{\pm}(p, q) = \pm\frac{1}{\sqrt 2}\,
\frac{\langle q^\mp|\sigma^\mu|p^\mp\rangle}{\langle q^\mp\, p^\pm\rangle}\, . \qquad(3.8)
\]
For polarization vectors for massive external gauge bosons, cf. Appendix A.2.
It is also interesting to note that regular scalar products of four-vectors can be
rewritten in the form of spinor products, for instance
It is relations such as the one above which render this spinor representation a versatile
and powerful tool to rewrite Feynman amplitudes in a form that lends itself to the
automation of scattering amplitude calculations through numerical methods. For more
details, cf. Appendix A.2.
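As a concrete, if minimal, illustration of such spinor techniques, the sketch below builds angle spinor products for massless momenta from the components used in Eq. (3.4) and checks the relation between |⟨ij⟩|² and the ordinary scalar product 2 p_i · p_j. The phase conventions and the test momenta are assumptions of this sketch, not necessarily those of Appendix A.2.

```python
import cmath, math

def angle_spinor(p):
    """Two-component spinor for a massless momentum p = (E, px, py, pz),
    built from k+- = E +- pz and the azimuthal phase, cf. Eq. (3.4)."""
    E, px, py, pz = p
    kp, km = E + pz, E - pz
    phase = (px + 1j * py) / math.sqrt(px * px + py * py) if kp * km > 1e-12 else 1.0
    return (cmath.sqrt(kp), cmath.sqrt(km) * phase)

def spA(pi, pj):
    """Angle product <i j> (antisymmetric under i <-> j)."""
    a, b = angle_spinor(pi), angle_spinor(pj)
    return a[0] * b[1] - a[1] * b[0]

def dot(p, q):
    return p[0] * q[0] - p[1] * q[1] - p[2] * q[2] - p[3] * q[3]

# two massless test momenta (E, px, py, pz), purely illustrative
p1 = (50.0, 30.0, 20.0, math.sqrt(50.0**2 - 30.0**2 - 20.0**2))
p2 = (40.0, -10.0, 25.0, -math.sqrt(40.0**2 - 10.0**2 - 25.0**2))

print(abs(spA(p1, p2)) ** 2)   # |<12>|^2
print(2.0 * dot(p1, p2))       # equals 2 p1.p2 for real massless momenta
```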
\[
T^a_{i\bar j}\, T^b_{\bar j i} = \delta^{ab}\, . \qquad(3.10)
\]
The self-coupling of gluons, mediated by the structure constants f^{abc} of the group's
adjoint representation, can be rewritten through the T^a_{i\bar j} as
Ultimately, the colour algebra allows the decomposition of every n-gluon amplitude,
which includes the most complicated colour structures, according to
\[
\mathcal{A}(1, 2, \ldots, n) = \sum_{\sigma\in S_{n-1}}
{\rm Tr}\left[T^{a_1} T^{a_{\sigma_2}}\cdots T^{a_{\sigma_n}}\right]
A(1, \sigma_2, \ldots, \sigma_n)\, , \qquad(3.12)
\]
where σ denotes all (n−1)! permutations of the indices 2 . . . n, and the colour-stripped
or colour-ordered amplitude is denoted by A(1, σ2 , . . . , σn ), which only depends on
the momenta and helicities of the external particles. Of course, similar decompositions
also emerge when some of the particles are quarks or even colour-neutral objects,
which typically leads to a simplification of the overall result in terms of colour. As
a welcome by-product, the colour-ordered amplitudes A(1, σ_2, ..., σ_n) correspond to
planar graphs only, when it comes to their QCD parts; only a small set of Feynman
diagrams contributes to each of them, which renders them simpler to calculate. Some
of their most remarkable properties are encoded in the Kleiss–Kuijf relations [681],
which yield linear relations among such amplitudes. One of the consequences of these
relations is that the maximal number of independent colour-ordered amplitudes is
(n − 2)!, showing that colour ordering (CO) does not yield a minimal
set of such amplitudes. Such a set can be achieved by using the adjoint representation
instead, cf. [452, 453] for details.
Another decomposition of coloured amplitudes, known as colour dressing, has
been proposed in [651, 739]. Based on actual colour flows, it treats the SU(N_c) gauge
fields, the gluons, as N_c × N_c matrices, thus making the matrix character of the gluon
field, G_{\mu, i\bar j} = G^a_\mu T^a_{i\bar j}, in the fundamental representation more explicit than
denoting it with an index a in the adjoint representation. Considering a term
T^a_{i\bar j} T^a_{k\bar l}, corresponding to a gluon exchange, motivates why this may be helpful
for numerical implementations:
\[
T^a_{i\bar j}\, T^a_{k\bar l} = \delta_{i\bar l}\,\delta_{k\bar j}
- \frac{1}{N_c}\,\delta_{i\bar j}\,\delta_{k\bar l}\, , \qquad(3.13)
\]
with the two terms on the right-hand side corresponding to the two colour-flow
configurations of the diagrammatic identity.
This sketch also explains why this is based on colour flows — both terms correspond
to connecting indices of fundamental SU (Nc ) objects, with a sum over independent
colours that yields exactly the (Nc2 − 1) degrees of freedom present in the gluon field.
This also means that every QCD vertex can be written as a sum of δ functions in
colour space, connecting the quark and gluon colour attached to it in all allowed
combinations. This allows for a fairly straightforward implementation in terms of a
computer code, and replacing the potentially cumbersome colour algebra with factors
of one or zero further accelerates the evaluation of the amplitudes.
This idea of recycling identical parts of the amplitude is brought to perfection by using
recursion relations from the beginning. The basic idea here is to create one-particle
off-shell parts of the amplitude recursively in the spirit of the Dyson-Schwinger
equations [493, 840] known from text-books on quantum field theory, where they
are used to construct one-particle off-shell Greens functions. This very idea has been
put into effect in different realizations, directly in the HELAC code [308, 486], or in
some variations such as the ALPHA algorithm [326, 327], on which ALPGEN [743] and
O’MEGA [770] are based, or as Berends–Giele relations [219–222, 681], implemented
in COMIX [582].
The idea in all of them is to construct generalized currents Jα (π) for a set π of
external particles on their mass shell plus one internal one, which then of course may
be off-shell. In this construction, α denotes the combined quantum numbers of this
internal particle, including spin, colour, etc.. As a somewhat special realization, the
external particles may also be considered as currents; then the quantum numbers α are
directly identified with the ones of this particle. From these special cases, recursively
more and more complex currents are built, by applying
X X
Jα (π) = Pα (π) [S(π1 , π2 ) Vαα1 α2 Jα1 (π1 )Jα2 (π2 )]
α1 α2
P2 (π) Vα
X X
+ [S(π1 , π2 , π3 ) Vαα1 α2 α3 Jα1 (π1 )Jα2 (π2 )Jα3 (π3 )] .
α1 α2 α3
P3 (π) Vα
(3.14)
In this equation, Pα (π) denotes the propagator denominator, which of course depends
on the properties α of the propagating particle and on its momentum given by the
momenta in π. The terms S(π1 , π2 ) and S(π1 , π2 , π3 ) take care of symmetry factors,
which emerge in the partitions \mathcal{P}_2(\pi): \pi \to \pi_1 \oplus \pi_2 and
\mathcal{P}_3(\pi): \pi \to \pi_1 \oplus \pi_2 \oplus \pi_3 of the original set. Finally, the
V_\alpha signify the three- and four-leg vertices connecting the particles \alpha_i of the
sub-currents with the emerging new particle \alpha. This allows us to write the total
amplitude \mathcal{A}(\pi) for one specific configuration \pi as
\[
\mathcal{A}(\pi) = J_{\alpha_\rho}(\rho)\,\frac{1}{P_{\bar\alpha_\rho}(\pi|\rho)}\,
J_{\bar\alpha_\rho}(\pi|\rho)\, , \qquad(3.15)
\]
where π|ρ denotes the residual subset of π after the subset ρ has been taken off.
Overall conservation of quantum numbers also guarantees that the combined quantum
number απ|ρ of this conjugate subset is the conjugate of the set ρ, ᾱρ . It is worth noting,
however, that it is computationally advantageous to map the four-vertices onto vertices
with three legs only.
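To make the recursive construction concrete, here is a deliberately stripped-down sketch of a Berends–Giele-type recursion for a colour-ordered, massless scalar theory with a single cubic vertex. The coupling, the vertex structure, and the overall normalization are assumptions made purely to illustrate how off-shell currents are built and cached; the full gauge-theory case of Eq. (3.14) carries the additional spin, colour, and four-point structures discussed in the text.

```python
from functools import lru_cache

G = 1.0  # assumed dimensionless cubic coupling, illustrative only

def dot(p, q):
    return p[0]*q[0] - p[1]*q[1] - p[2]*q[2] - p[3]*q[3]

def add(p, q):
    return tuple(a + b for a, b in zip(p, q))

def current(momenta):
    """One-particle off-shell current J(i..j) for adjacent external scalars.
    J(single leg) = 1; longer currents are built from all adjacent two-way
    splittings of the ordered set, joined by the cubic vertex and the
    propagator of the off-shell leg, in the spirit of Eq. (3.14)."""
    return _current_cached(tuple(momenta))

@lru_cache(maxsize=None)
def _current_cached(momenta):
    n = len(momenta)
    if n == 1:
        return 1.0
    P = momenta[0]
    for p in momenta[1:]:
        P = add(P, p)
    prop = 1.0 / dot(P, P)          # scalar propagator of the off-shell leg
    total = 0.0
    for k in range(1, n):           # all adjacent two-way splittings
        total += G * _current_cached(momenta[:k]) * _current_cached(momenta[k:])
    return prop * total

# toy ordered set of external massless momenta (E, px, py, pz)
p = [(1.0, 0.0, 0.0, 1.0), (1.0, 0.0, 0.0, -1.0),
     (1.0, 1.0, 0.0, 0.0), (1.0, -1.0, 0.0, 0.0)]
print(current(p[:3]))   # current with three on-shell legs and one off-shell leg
```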
For some specific cases it is possible to solve the recursion equation in Eq. (3.14) in
closed form [220, 692]. For instance, a current with n external colour-ordered like-sign
gluons is given by
\[
J^\mu(1^+, 2^+, \ldots, n^+) = g_s^{n-2}\,
\frac{\langle q^-|\gamma^\mu\slashed P_{1,n}|q^+\rangle}
{\sqrt 2\;\langle q\,1\rangle\langle 1\,2\rangle\langle 2\,3\rangle\cdots\langle (n-1)\,n\rangle\langle n\,q\rangle}\, , \qquad(3.16)
\]
where the four-momentum P^\mu_{i,j} is constructed from the outgoing momenta through
\[
P^\mu_{i,j} = \sum_{l=i}^{j} p^\mu_l\, , \qquad(3.17)
\]
and where q^\mu is the gauge vector. This current can be used to prove the form of the
maximally helicity violating (MHV) amplitudes,
\[
\begin{aligned}
A(1^+, 2^+, \ldots, i^-, \ldots, j^-, \ldots, n^+) &= i\, g_s^{n-2}\,
\frac{\langle ij\rangle^4}{\langle 1\,2\rangle\langle 2\,3\rangle\cdots\langle (n-1)\,n\rangle\langle n\,1\rangle}\, ,\\
A(1^-, 2^-, \ldots, i^+, \ldots, j^+, \ldots, n^-) &= i\, g_s^{n-2}\,
\frac{[ij]^4}{[1\,2][2\,3]\cdots[(n-1)\,n][n\,1]}\, .
\end{aligned}\qquad(3.18)
\]
Note that the amplitudes with all helicities identical vanish due to the conservation of
angular momentum.
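A nice feature of Eq. (3.18) is that it can be evaluated numerically in a few lines once angle spinor products are available. The helper below recomputes them so that the block is self-contained; the momenta, the choice of negative-helicity legs, and the phase conventions are again purely illustrative assumptions.

```python
import cmath, math

def angle_spinor(p):
    E, px, py, pz = p
    kp, km = E + pz, E - pz
    phase = (px + 1j * py) / math.sqrt(px * px + py * py) if kp * km > 1e-12 else 1.0
    return (cmath.sqrt(kp), cmath.sqrt(km) * phase)

def spA(pi, pj):
    a, b = angle_spinor(pi), angle_spinor(pj)
    return a[0] * b[1] - a[1] * b[0]

def mhv_amplitude(momenta, i, j, gs=1.0):
    """Colour-ordered MHV amplitude of Eq. (3.18), with gluons i and j
    carrying negative helicity and all others positive (0-based labels)."""
    n = len(momenta)
    denom = 1.0 + 0.0j
    for k in range(n):
        denom *= spA(momenta[k], momenta[(k + 1) % n])
    return 1j * gs ** (n - 2) * spA(momenta[i], momenta[j]) ** 4 / denom

# four illustrative massless momenta (E, px, py, pz); they are not chosen to
# conserve momentum and serve only to exercise the formula numerically
p = [(45.0, 20.0, 10.0, math.sqrt(45.0**2 - 20.0**2 - 10.0**2)),
     (38.0, -15.0, 22.0, -math.sqrt(38.0**2 - 15.0**2 - 22.0**2)),
     (52.0, 5.0, -30.0, math.sqrt(52.0**2 - 5.0**2 - 30.0**2)),
     (60.0, -12.0, 8.0, -math.sqrt(60.0**2 - 12.0**2 - 8.0**2))]
print(abs(mhv_amplitude(p, 0, 1)))
```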
The study of more general properties of scattering amplitudes, especially in the context
of the super-renormalizable N = 4 Super-Yang-Mills theory, has experienced a some-
what surprising renaissance in the early 2000s, leading to remarkable progress in the
understanding of perturbative QCD from twistor-inspired methods [897]. Building on
a correspondence with some well-understood type of string theory, it could be shown
that colour-ordered tree-level amplitudes can be related to curves in twistor space,
giving rise to the Cachazo–Svrček–Witten (CSW) vertex rules [307]. Even loop
amplitudes were shown to follow the same principles and rules [306]. These rules state
that arbitrary colour-ordered scattering amplitudes can be constructed from MHV
amplitudes [306, 307, 897]. They serve as generalized MHV vertices and are connected
by scalar propagators, resulting in a full n-gluon amplitude being built from (n − l)
same-sign helicity gluons (and l gluons with helicity of the opposite sign) arranged in
(l − 1) of these vertices.
As an example, consider Fig. 3.3, displaying the construction of the colour-ordered
six-gluon amplitude A(1− , 2− , 3− , 4+ , 5+ , 6+ ) from MHV vertices. As a consequence
of this construction, for any n-gluon amplitude MHV vertices for up to n particles
may contribute, implying that the number of such vertices that are needed for a
cross-section calculation grows steadily with the number of external legs. This problem
has been addressed in [211], reformulating the CSW rules in a fully recursive fashion.
A further refinement of the CSW rules has been worked out by Britto, Cachazo,
Feng, and Witten in [286, 287], stating that any colour-ordered tree-level
amplitude can be constructed from two on-shell amplitudes with a scalar off-shell
propagator in between them. This yields the BCF recursion relations, which can be
summarized as
\[
A_n(1, 2, \ldots, n) = \sum_{k=2}^{n-2}
A_{k+1}\!\left(\hat 1, 2, \ldots, k, -\hat p^{\,-h}_{1,k}\right)
\frac{1}{p^2_{1,k}}\,
A_{n-k+1}\!\left(\hat p^{\,h}_{1,k}, k+1, \ldots, \hat n\right), \qquad(3.19)
\]
where the momentum of the propagator is given by the sum of external momenta,
Fig. 3.3 The six MHV graphs for the construction of the six-gluon am-
plitude A(1− , 2− , 3− , 4+ , 5+ , 6+ ) in the CSW formalism.
\[
p^\mu_{1,k} = \sum_{i=1}^{k} p^\mu_i\, , \qquad(3.20)
\]
Here the λ and λ̃ denote the co- and contra-variant components of the spinors for
light-like momenta,
\[
p^i_{a\dot b} = \lambda^i_a\,\tilde\lambda^i_{\dot b}\, , \qquad(3.22)
\]
and the shift parameter is given by
\[
z = \frac{p^2_{1,k}}{\langle n|p_{1,k}|1]}\, . \qquad(3.23)
\]
In contrast to the CSW rules this implies that the sub-amplitudes contain on-
shell particles only, which makes them easier to calculate. They are, however, not
gauge-invariant objects due to the need of a gauge vector to define the opposite gluon
helicities, which enter through the shift parameter z. It is interesting to note in this
context that the shifted momenta p̂1 and p̂n are complex valued but still on-shell and
light-like, adding an interesting twist to the calculation. Finally, it is important to
stress that of course these formalisms have also been extended to include quarks or
the case of QED interactions.
Here ~x denotes a point in the D-dimensional finite phase space with volume V , and
the ~xi are uniformly randomly distributed in it. In Monte Carlo integration the exact
value of the integral I is estimated as an average over N test points in the volume.
The central limit theorem guarantees that with infinitely many calls N → ∞ the
estimator ⟨I⟩ → I. The name of the game in Monte Carlo integration is to control this
convergence through an error estimate ⟨E⟩ that will scale like 1/√N. A convenient
error estimate is the variance, written in useful form as
\[
\langle E(f)\rangle_x = \sqrt{\frac{1}{N}\sum_{i=1}^{N} f^2(\vec x_i)
- \left(\frac{1}{N}\sum_{i=1}^{N} f(\vec x_i)\right)^{2}}
= \sqrt{\langle f^2\rangle_x - \langle f\rangle_x^2}\, . \qquad(3.25)
\]
Various methods have been discussed in the literature to accelerate error reduction.
The most prominent ones are known as importance sampling and stratified sam-
pling.
3 For a summary of Monte Carlo integration methods, see for example the review [648].
Technology of leading-order calculations 111
where the ρi are distributed according to the phase-space density g(~x). If f and g
are sufficiently similar, their ratio f /g will fluctuate less than f alone, leading to a
smaller variance and therefore an accelerated convergence. The limiting factor in this
is obvious: a function g must be found, which has to be inverted in order to generate
the points ρ. Furthermore, to keep things simple, the g(~x) must be non-negative in V
and integrate to unity Z
dD xg(~x) = 1, (3.27)
V
The overall variance will be minimized by making the individual variances in each bin
of equal size. This can be achieved by introducing a different a priori probability ab for
picking a bin b to generate a phase-space point in it, and by updating them regularly
after a sufficiently large number of function calls. Bins with larger variance after such
an optimization step, with more fluctuations, will obtain a larger ab and therefore an
increased number of points sampling the phase space in this bin; conversely bins with
smaller variance, less fluctuations of f (~x), will have a smaller ab and fewer sampling
points. Ultimately, the best theoretical solution is to have a priori weights ab which
behave like the variance in each bin. Therefore updating them by multiplying them
with the variance or similar will accelerate the convergence.
where the Θ(Ei ) account for the projection on physical, positive energy solutions for
the outgoing particles.
For massless particles, it is fairly straightforward to calculate the volume of the
final state phase space. As all ingredients in the N -particle phase space of Eq. (3.30)
are formulated in Lorentz-invariant quatities, in the following P µ = (Ecms , ~0) can
be safely assumed. Following [685], first an “unconstrained” phase-space volume Φ̃N
is introduced, based on unconstrained momenta qi not fulfilling overall momentum
conservation like the constrained pi :
" # N
Z Z YN 4 Z∞
d qi 1
dΦ̃N = 4
(2π)δ(qi2 ) Θ(qi0 ) f (qi0 ) = 2
dxxf (x)
(2π) (2π)
i=1 0 (3.31)
f (x)→e−x
1
−→ .
(2π)2N
Here, f (qi0 ) denotes an arbitrary regulator function that keeps the overall volume finite
— for the choice made here the result is displayed.4 In a second step a transforma-
tion between unconstrained momenta qiµ and constrained momenta pµi is found as a
combiniation of a scaling operation parameterized by x and a Lorentz boost given by
~b,
γqi0 + ~b · ~qi
pµi = x H~µ (qi ) = x (~b · ~qi )~b . (3.32)
b ~ 0
~qi + bqi +
1+γ
Consequently, the inverse transformation is given by
1 µ
qiµ = H (pi ) (3.33)
x −~b
p
With M = Q2 the invariant mass of the unconstrained system, the boost parameters
~b and γ and the scaling parameter x read
~ Q0
~b = − Q , γ= , and x =
E
. (3.34)
M M M
4 Integrating over the spatial components of the massless momenta is trivial: the δ function guar-
antees that |~qi | = qi0 and therefore
d 3 qi (qi0 )2 d2 Ω q0
Z Z
2
(2π)δ(qi ) = = i2 .
(2π)4 16π 3 qi0 4π
Technology of leading-order calculations 113
Inserting these transformations into the unconstrained phase space and after a few
manipulations detailed in [685], the two phase-space volumes are related with each
other through
Z Z ( N
! N
!
dx d3~b E 4 X X
dΦ̃N = (2π)4 δ E − Ei δ 3 p~i
(2π)4 x2N +1 γ i=1 i=1
YN 4 )
d pi (2π)δ(p2i )Θ(Ei ) 1 0
f H (pi ) (3.35)
i=1
(2π)4 x −~b
Z ( N )
dx d3~b E4 Y 1 0
= f H (pi ) dΦN .
(2π)4 x2N +1 γ i=1 x −~b
YN
1 0 γE
f H (pi ) = exp − , (3.36)
i=1
x −~b x
such that only the integration over ~b and x remains, which results in
Z Z∞ 4 γE
d3~b dx E exp − x E 4−2N Γ 23 Γ n − 1 Γ 2n
SN = = . (3.37)
(2π)3 (2π) γx2N +1 (2π)3 Γ n + 21
0
They are transformed into the constrained momenta pµi through the transformations
detailed here. This is the RAMBO algorithm invented in [685], where also the general-
ization to massive momenta has been worked out.
cf. Eq. (3.18). This particularly simple form is also recovered in other processes; the
inclusion of quarks typically just leads to the absence of some of the 1/ŝ factors,
therefore rendering the amplitudes somewhat less complicated to integrate.
A first method that has been optimized for the integration of such amplitudes has
been introduced in [487] and goes by the name of SARGE. It builds on sequentially
filling the phase space through emissions that follow the basic antenna density
1 4 (pi pj ) jk
dAkij = d pk δp2k Θ(p0k ) ik
g ξij g ξij , (3.41)
π (pi pk )(pj pk )
where the
jk (pj pk )
ξij = (3.42)
(pi pj )
are the arguments of the regulator functions
1 −1
g(ξ) = Θ(ξ − ξm )Θ(ξm − ξ), (3.43)
2 log ξm
which ensure that the divergences for (pi pk ) → 0 and (pj pk ) → 0 are avoided and
are chosen such that dA integrates to unity. The cut-off value ξm depends on the
actual cuts being applied on the outgoing momenta. Demanding that for each pair of
momenta i and j, (pi + pj )2 ≥ s0 , the expression for ξm is,
ŝ (n + 1)(n + 2)
ξm = − , (3.44)
s0 2
where ŝ is the energy squared of the partonic system in the centre-of-mass frame.
To generate a momentum pk according to the basic antenna structure, Eq. (3.41),
SARGE proceeds along the following steps. The initial momenta pi and pj are boosted
ik jk
into their centre-of-mass frame. Then, two numbers ξij and ξij are generated with a
probability density of g(ξ)/ξ. They allow the calculation of the energy of particle k,
p0k , and its polar angle θ with respect to pi . The azimuthal angle is chosen uniformly in
[0, 2π], which then enables the construction of pk in the rest frame of i and j. Boosting
pk back into their lab frame finishes the generation of one emitted momentum. It is
worth noting that this algorithm does not respect four-momentum conservation, since
recoil effects on the emitters i and j are not captured here.
Technology of leading-order calculations 115
This algorithm is now iterated to yield all outgoing momenta along the following
lines. Generating two outgoing momenta q1 and qn in the centre-of-mass frame of
the partonic collision triggers the consecutive generation of the (n − 2) other momenta
through the basic antennae dA21n dA32n . . . dAn−1
(n−2)n already discussed. Using the same
boost and scaling transformations introduced above for the RAMBO algorithm [685]
yields the final momenta pi , which indeed, by construction, satisfy four-momentum
conservation.
This democratic algorithm can be further improved by intelligent symmetrization
over colour orderings (outgoing momenta) and better maps, cf. the orginal publica-
tion [487]. Furthermore, going from a symmetric way of antenna generation to a more
hierarchical way, as implemented in the HAAG algorithm introduced in [880], leads to
a further refinement and, consequently, an accelerated convergence of the integration.
These more specific algorithms are good examples of how improvements in Monte
Carlo integration can be achieved by employing knowledge about the characteristics
of the underlying process. In the following this intuitive picture will be substantiated
a bit more through a more detailed discussion of optimization procedures in the lit-
erature and the introduction of the by now widely employed method for Monte Carlo
phase space integration in particle physics.
In contrast to democratic approaches with fairly symmetric final states there are of
course also processes for which the opposite is true: the final-state particles emerge
from a very specific set of Feynman diagrams which could be interpreted as a se-
quence of production processes of usually resonant particles followed by their decay
into lighter particles. A good example of this would be the production of a tt̄ pair
and its subsequent decay into two b quarks and two W bosons, which in turn decay
further. In such a case democratic and, in particular, isotropic mappings will not be
very efficient, since the particles are just not distributed independently in phase space
but will enjoy strong correlations due to the intermediate resonances. Knowing this
and understanding the underlying phase-space structure lends itself to the textbook
Monte Carlo method of importance sampling for efficient phase-space generation.
In the example of tt̄ production and decay this could be achieved for instance along
the following steps. Assuming first a useful distribution of the tt̄ pair in terms of its
centre-of-mass energy and rapidity, which for an e− e+ collider could naively be fixed,
while for hadron colliders PDF effects would have to be taken into account. Then, in
the centre-of-mass frame of the pair, the top and the anti-top would be isotropically
distributed, with back-to-back kinematics, and with their invariant masses individually
given by a Breit–Wigner distribution. Each of the top decays, again, could be treated
in their respective rest frame, with the invariant masses of the W ’s again given through
a Breit–Wigner form. Of course, this process would be finally repeated also for the
W s.
This implies a hierarchical structure of Breit-Wigner distributed invariant masses
of the intermediate resonant particles (or a similar distribution for the overall centre-
of-mass energy of the total system) followed by their binary decays, which in the
116 QCD at Fixed Order: Technology
respective rest frames reduces to choosing one pair (θ, φ) fixing the orientation in the
solid angle of the back-to-back binary decay kinematics.
1 X
g(~x) = P aj gj (~x). (3.45)
aj j
j
While in principle the integral does not depend on the aj , its estimator and, more
importantly, the error estimator does. This can easily be understood by writing the
expressions above again as integral. In I the mapping function g cancels, but this is
not the case in the first term of E:
Z Z
f (~x)
I = dDx g(~x) = dDx f (~x)
g(~x)
V V
Z 2 Z
f (~x) f 2 (~x)
E = dDx g(~x) − I2 dDx − I 2. (3.48)
g(~x) g(~x)
V V
From here it would be trivial to rediscover stratified sampling by using mappings gj (x)
which are unity inside the bin j and zero outside.
This, however, is not how multi-channel sampling is being used for the phase space
integration in scattering processes in particle physics. There, the form of the transi-
tion matrix element is known. Using, for instance, Feynman diagrams to construct it,
Technology of next-to-leading-order calculations 117
Table 3.2 Publicly available tools for leading–order cross-section calculations at hadron
colliders. For the matrix element part of the evaluation different methods are being used:
off-shell recursion relations in different algorithms, denoted by “off-shell”, Feynman diagram
based helicity amplitudes, denoted by “hel. amps”, and the traditional method based on
squaring the amplitudes and using completeness relations for external particles and traces over
the Dirac algebra, denoted by “traces”. Related to this, different methods for the treatment
of coloured particles are also listed, in particular the direct evaluation of the colour algebra
(“explicit”) and the method of colour dressing (CD) or colour connections (CC) supplemented
with a sampling over colour and helicities. For the phase-space integration, various versions
of automated multi-channel sampling methods are being used (“multi”), in addition to some
more process-specific solutions.
ME: |Mab→n |2 PS: dΦN
colours & helicities
ALPGEN [743] off-shell [326, 327] process-specific
explicit
AMEGIC++ [696] hel. amps [197, 684, 741] automated multi [682]
explicit
COMIX [582] off-shell [220] recursive multi [299]
CD [651, 739] & sampling
COMPHEP [266]/ traces specific single-channel
CALCHEP [210] explicit
HELAC/ off-shell [651] automated multi [795]
PHEGAS [308] CC [869] & sampling
MADGRAPH [150] hel. amps [773] automated SDE multi [740]
explicit
O’MEGA/ off-shell [327, 328] specific [789]
WHIZARD [677, 770] explicit
XZ Z
1
(NLO) (NLO)
σ = dxa dxb fa/h1 (xa , µF )fb/h2 (xb , µF ) dσ̂ab→n (µF , µR )
a,b 0
Z Z
= dΦB Bn (ΦB ; µF , µR ) + Vn (ΦB ; µF , µR ) + dΦR Rn (ΦR ; µF , µR ),
(3.49)
where the various contributions — Born term B, virtual correction term V, and real
correction term R — are given by suitably helicity-summed or averaged matrix el-
ements. Denoting the perturbative order of the Born-level contribution with b, and
indicating the orders of the matrix element M(b) accordingly, therefore
Technology of next-to-leading-order calculations 119
X̄ 2
Bn (ΦB ; µF , µR ) = (b)
Mn (ΦB , h; µF , µR )
h
X̄
∗(b+1)
Vn (ΦB ; µF , µR ) = 2 Re M(b)
n (Φ B , h; µF , µR )Mn (Φ B , h; µF , µR ) (3.50)
h
X̄ (b+1) 2
Rn (ΦR ; µF , µR ) = 2
Mn+1 (ΦR , h; µF , µR ) .
h
At this point it is important to stress that the Born-level and virtual matrix elements
are multiplied to yield the overall virtual correction. Thereby, the latter matrix ele-
ment emerges from the former by adding a closed loop without changing the external
particles of the process ab → n. Both matrix elements share the same phase space,
essentially an n-body final-state phase space, ΦB , see below. In contrast the real cor-
rection emerges as the square of matrix elements with one additional outgoing particle;
this may lead to replacing an incoming particle type a or b with a0 or b0 . For instance,
an incoming quark a may be replaced by a gluon, which splits into an outgoing anti-
quark and a quark a such that for the process in question ab → n is replaced by
gb → n + ā. With this change also the PDFs must change accordingly.
Turning to the phase-space elements, the expressions for the Born-level and real
correction phase space are given by
1
dΦB = dxa dxb fa/h1 (xa , µF )fb/h2 (xb , µF ) dΦn
2ŝab
(3.51)
1
dΦR = dxa0 dxb0 fa0 /h1 (xa0 , µF )fb0 /h2 (xb0 , µF ) dΦn+1
2ŝa0 b0
with
n
!
Y d4 pi X
dΦn = 4
(2π)δ(p2i − m2i ) (2π)4 δ 4 pa + pb − pi Θ(Ei ), (3.52)
i=1
(2π) i
not the case for other, traditional methods such as a regularization through a cut-off
or through the Pauli–Villars method. In dimensional regularization the ultraviolet
divergences manifest themselves as simple poles 1/ε. Having thus “quantified” the de-
gree of divergence the theory is renormalized through a suitable, scheme-dependent
redefinition of the Lagrangian, adding suitable counterterms. Reviews of this formal-
ism may be found in most of the many excellent textbooks on quantum field theory.
However, some generic methods to evaluate the new structures introduced by the loop
integration are discussed in Section 3.3.1.
However, in addition to ultraviolet divergences infrared divergences appear, both
in the virtual and the real correction. In both cases they are related to emissions,
either in the loop in the virtual correction or in the additional particle radiated in the
real correction, where the additional particle has zero energy or is parallel to another
particle. This has already been discussed in Section 2.2.5, where it was also pointed out
that due to the Bloch–Nordsieck (BN) and Kinoshita–Lee–Nauenberg (KLN)
theorems [257, 678, 724] these divergences need to cancel each other in physically
meaningful observables.
Again, in order to deal with the infrared divergences, they have to be regularized,
with dimensional regularization as the method of choice, as in the case of ultraviolet
divergences and for the same reasons. This time, however, the divergences manifest
themselves as double or single poles, 1/ε2 or 1/ε, respectively. As already implied,
these poles do not need to be renormalized since they will cancel in the total result.
There is a practical problem in the cancellation, though. Inspecting Eq. (3.49),
these poles show up in two different parts of the calculation, and in two in principle
independent integrations: for the infrared divergences in the virtual contribution, they
appear in the integral over the n-body phase space associated with Born configura-
tions, while for their counterpart in the real contribution, they of course appear in the
(n+1)-body final state. Even if these integrations could be performed analytically, they
are cumbersome to trace in D dimensions. This will be exemplified in Section 3.3.2,
where NLO corrections to W production are discussed. In the more general case, as al-
ready noted the phase-space integration has to be performed numerically, with Monte
Carlo methods. In this case it is impossible to integrate in D dimensions, and other
methods have to be found to isolate the divergences before the integration can be
successfully achieved. Typically this is by now achieved through subtraction meth-
ods, introduced in Section 3.3.2. A general subtraction method, Catani–Seymour or
dipole subtraction [344, 353], is discussed in Section 3.3.3.
The formula for the virtual amplitude is, cf. Eq. (2.116),
Z νρ
(1) 2 2ε dDk g p/ + k/ µL p/u − k/
Mud→W
¯ + = g g
W s F C µ v̄(p d ) γν d γ γρ (3.53)
(2π)D k 2 (pd + k)2 (pu − k)2
p/ + k/ p/d µL µL p /u p/u − k/ +
+γν d γ ρ 2 γ + γ γ ν γ ρ u(p )
u µ (W ) .
(pd + k)2 pd p2u (pu − k)2
The contribution of the vertex correction, represented by the first term in square
brackets, is in fact the only lengthy calculation that needs to be performed. This term
can be written as
Z
(vertex) 2 4−D dDk V µ µ (W + )
Mud→W¯ + = gW gs CF µ , (3.54)
(2π)D k 2 (pd + k)2 (pu − k)2
This expression is readily simplified with a little γ-matrix algebra to the form
h i
V µ = v̄(pd ) −2 (p/u − k/) γ µR (p/d + k/) + 2εaCDR (p/d + k/) γ µR (p/u − k/) u(pu ). (3.56)
This equation introduces a constant, aCDR , that specifies the variant of the regular-
ization scheme to be used in the calculation. In conventional dimensional regu-
larization all quantities are continued into D dimensions such that γν γ ν = D = 4−2ε.
This corresponds to the choice aCDR = 1. In dimensional reduction, only the
loop momentum is continued into D dimensions, and then γν γ ν = 4 implying that
aCDR = 0.
In order to evaluate the loop integral, it is simplest to combine the loop propagators
in Eq. (3.54) by introducing Feynman parameters, here x, y, and z.5 Applying the
identity Eq. (A.27) to the case at hand yields,
Z 1
1 2δ(1 − x − y − z)
= dx dy dz 3
(pd + k) (pu − k)2 k 2
2
0 [x(pd + k)2 + y(pu − k)2 + zk 2 ]
Z 1 Z 1−x
2
= dx dy 3
0 0 [k 2 + 2k · (xpd − ypu )]
where one of the Feynman parameters has been eliminated immediately using the
δ function. In a second step, the denominator has been simplified with the on-shell
conditions for pu and pd . It is now useful to shift the variable of integration from k
to ` with the relation, k = ` − xpd + ypu . After this shift the denominator takes the
simple form (`2 + Q2 xy)3 , where Q2 = 2pd · pu as usual.
At first glance, performing the same shift on the current V µ leads to a proliferation
of terms. However, since the denominator is an even function of `, any odd powers of `
may be dropped in the numerator since they will vanish upon integration. Moreover,
the integral of the term `α `β must be proportional to g αβ and contracting with gαβ
fixes the overall constant. It is therefore sufficient to replace
`α `β → g αβ `2 /D, (3.57)
under the integral. Finally, the equations of motion for the spinors, v̄(pd )p/d = p/u u(pu ) =
0 greatly simplify many of the resulting expressions.
After this simplification the integral takes the form
Z Z 1 Z 1−x Z
dDk Vµ dD` 4N v̄(pd ) γ µL u(pu )
= dx dy
(2π)D k 2 (pd + k)2 (pu − k)2 0 0 (2π)D (`2 + Q2 xy)3
(3.58)
where,
N = Q2 (1 − x)(1 − y) − εaCDR xy − `2 (1 − aCDR ε)(1 − 2/D). (3.59)
Hence,
Z
dD` 4N iΓ(1 + ε)
= (−Q2 )−ε
(2π)D (`2 + Q2 xy)3 (4π)D/2
n 1 o
× 2 (1 − x)(1 − y) − εaCDR xy x−1−ε y −1−ε − 2(1 − aCDR ε)(1 − ε) x−ε y −ε
ε
The integral over y can now be performed in a straightforward manner. The remaining
x integral is immediately in the form of a beta function that can in turn be expressed
in terms of gamma functions, cf. Eq. (A.6). After manipulating these and dropping
terms of order ε the final result for the integral is
Technology of next-to-leading-order calculations 123
Z
dDk Vµ
(2π) k (pd + k)2 (pu − k)2
D 2
i(−Q2 )−ε Γ(1 + ε)Γ2 (1 − ε) 2 3
= v̄(pd ) γ µL u(pu ) + + 7 + aCDR
(4π)2−ε Γ(1 − 2ε) ε2 ε
ε
4π 1 1 2 3
= − i v̄(pd ) γ µL u(pu ) − 2 − − − 7 − a CDR
.
Q 16π 2 Γ(1 − ε) ε2 ε
In order to arrive at the last line it is necessary to use the relation in Eq. (A.4) to
simplify the combination of gamma functions that naturally occurs.
Restoring the overall factors present in Eq. (3.54) and extracting the leading order
amplitude, the result for the vertex correction contribution to the amplitude is
ε
(vertex) (0) αs 4πµ2 1 2 3 CDR
Mud→W
¯ + = M ¯ + C F − − − − 7 − a . (3.61)
ud→W 4π Q2 Γ(1 − ε) ε2 ε
Fig. 3.4 A pentagon diagram entering the loop amplitude for Z + 2 jet
production (left) and the corresponding e+ e− → 4 jets diagram from which
it can be obtained by crossing (right).
However, since this integral has dimension D − 4 but contains no dimensionful quanti-
ties with which to express the result, it must vanish. As a result, self-energy corrections
on massless external lines are zero within dimensional regularization.
where J σ represents the external vector boson current and V µνρ is the triple gluon
vertex factor,
Inspecting this expression, the amplitude can be described in terms of a basis set of
loop integrals, categorized according to the number of loop-momentum factors that ap-
pear in the numerator. This number is referred to as the rank of the tensor integral
and integrals without any additional numerator factors are usually called scalar inte-
grals. One way of evaluating this contribution, for a long time the standard method,
is to separately tabulate integrals of each rank. Contracting the Lorentz structures in
the numerator with external momenta and polarizations produces the matrix element
in Eq. (3.71) Inspection of Eq. (3.71) reveals that, for the case at hand, tensor integrals
of up to rank 3 are required,
Z α α β α β γ
dD ` 1, ` , ` ` , ` ` `
. (3.73)
(2π) ` (` + p1 )2 (` + p12 )2 (` + p123 )2
D 2
For the simplest — the scalar — integrals, numerical libraries for their evaluation are
now widely available for up to 4-point functions [499, 879, 881]. In fact these libraries
are sufficient for all one-loop calculations since the more complicated scalar integrals,
pentagons and beyond, can be written in terms of linear combinations of scalar box
integrals. However the integrals with additional tensor structure are more complicated
to evaluate. A number of systematic methods for reducing them to the form of scalar
integrals do exist, the most widely used of which is the Passarino and Veltman
reduction method [798]. The basic method can be illustrated by considering the
integral Z
dD ` `µ
Iµ = . (3.74)
(2π)D `2 (` + p1 )2 (` + p12 )2 (` + p123 )2
Since the integral I µ can only depend on the momenta that appear in the denominator,
the Lorentz structure can be decomposed as
Now, for example, contracting each side of Eq. (3.74) with p3 µ results in
Z
1 dD ` (` + p123 )2 − `2 − p2123 − (` + p12 )2 − `2 − p212
I · p3 =
2 (2π)D `2 (` + p1 )2 (` + p12 )2 (` + p123 )2
Z (
1 dD ` 1 1
= − 2 (3.77)
2 (2π)D `2 (` + p1 )2 (` + p12 )2 ` (` + p1 )2 (` + p123 )2
)
2p12 · p3
− 2 .
` (` + p1 )2 (` + p12 )2 (` + p123 )2
The complication in this approach stems from the inversion of the matrix relation
in Eq. (3.76). The determinant of this matrix that is introduced in solving for the
Di , the Gram determinant, is an artefact of the computation introduced by the
expansion in terms of scalar integrals. The matrix element itself contains no singularity
in the limit in which these determinants vanish, yet one inverse power of a determinant
is introduced for each loop momentum factor present in the numerator of the original
integral. This redundancy leads to expressions for individual Feynman diagrams that
can not only be very lengthy but also be affected by significant numerical cancellations
between terms.
It is possible to reformulate the reduction in a number of ways in order to alleviate
problems such as Gram determinant singularities. To see how one such solution works,
consider a triangle integral with p21 =
6 0 and p22 6= 0. The basic scalar integral is,
Z
dD ` `µ
C0 (p1 , p2 ) = (3.79)
(2π) ` (` + p1 )2 (` + p12 )2
D 2
and the rank 1 tensor integral C µ can be decomposed in similar fashion to the box
example above,
Z
dD ` `µ
Cµ = = C1 pµ1 + C2 pµ2 . (3.80)
(2π) ` (` + p1 )2 (` + p12 )2
D 2
The matrix equation for the reduction reads, cf. Eq. (3.76),
2
C · p1 p1 p1 · p2 C1
= , (3.81)
C · p2 p1 · p2 p22 C2
128 QCD at Fixed Order: Technology
and the explicit expressions for the quantities that appear on the left-hand side are,
1
C · p1 = B0 (p12 ) − B0 (p2 ) − p21 C0 (p1 , p2 ) , (3.82)
2
1
C · p2 = B0 (p1 ) − B0 (p12 ) + (p21 − p212 )C0 (p1 , p2 ) , (3.83)
2
in terms of the scalar bubble integral B0 (q) which is defined by
Z
dD ` 1
B0 (q) = . (3.84)
(2π) ` (` + q)2
D 2
where ∆ = p21 p22 − (p1 · p2 )2 is the Gram determinant. Rather than following this path
to the solution, note that multiplying the top and bottom rows of Eq. (3.81) by p22
and (−p1 · p2 ) respectively, and adding, yields the equation,
(C · p1 ) p22 − (C · p2 ) p1 · p2 = p21 p22 − (p1 · p2 )2 C1 = ∆ C1 . (3.86)
From the explicit solutions for C · p1 and C · p2 given in Eq. (3.82) this equation
becomes,
1 2 2
∆ C1 = − p1 p2 + p1 · p2 (p21 − p212 ) C0 (p1 , p2 ) + {scalar bubbles} , (3.87)
2
where, for brevity, the exact form of the linear combination of scalar bubble integrals
has been suppressed. After rearranging this yields an expression for the scalar triangle
integral,
2 h i
C0 (p1 , p2 ) = ∆ C 1 + {scalar bubbles} . (3.88)
p1 · p2 (p212 − p21 ) − p21 p22
This equation indicates that, in the limit that the Gram determinant vanishes, the
scalar triangle integral can be written as a sum of bubble integrals. Note that the
same conclusion could have been reached more directly by noting that, in this limit,
p1 and p2 are collinear and that the correct expansion of C µ should therefore be,
However, the advantage of Eq. (3.88) is that it provides the O(∆) correction to this
relation, if the quantity C1 is known. The same pattern is reproduced at higher-tensor
ranks so that, for instance,
Technology of next-to-leading-order calculations 129
2 h X i
C1 = ∆ αij Cij + {up to rank 1 bubbles, C0 } , (3.90)
p1 · p2 (p212 2 2 2
− p1 ) − p1 p2 i,j
where Cij are rank 2 reduction coefficients and so on.6 This means that if the rank r
bubble integral is already determined, the rank r triangle integral can be determined
up to corrections of order ∆. This suggests an iterative approach to the problem, that
begins by using Eq. (3.88) with ∆ = 0 to determine the first approximation to C0 .
With this in hand, Eq. (3.90) can be used with ∆ = 0 to determine C1 for the first time.
At this point Eq. (3.88) can be used once more to refine the value of C0 , which is now
accurate to order ∆. This procedure involves denominators such as the one shown in
Eq. (3.88), which in general do not vanish at the same time as the Gram determinant.
However, it may be that the coefficients appearing in this iterative scheme are such
that the procedure does not converge. Alternative reduction methods to handle such
exceptional cases, together with explicit algorithms for reductions of up to six-point
integrals, are given in Ref. [456].
Despite the existence of libraries implementing the sort of rescue procedures de-
scribed above, this approach still becomes more cumbersome as the number of particles
in the final state increases, simply due to the rising number of Feynman diagrams that
must be computed. Nevertheless, such methods have been pushed to their limits in the
build-up to the LHC era, providing NLO predictions for final states such as bb̄bb̄ [253],
W + W − bb̄ [459] and tt̄bb̄ [242, 284].
δkµk 2 k3
1 k2 k3
δkk11kµk 3
2 k3
δkk11kk22kµ3
v1µ = , v2µ = , v3µ = , (3.92)
∆ ∆ ∆
where k1 = p1 , k2 = p1 + p2 and k3 = p1 + p2 + p3 are the momenta that naturally
appear in propagator factors of loop diagrams. The Kronecker delta appearing here is
6 To show this identity requires some further work to demonstrate that the coefficient C
00 can be
written in terms of bubble integrals and a scalar triangle, plus rank 2 triangle coefficients of order ∆.
This is indeed the case and, for instance, one can deduce the relation,
4∆C22 + 4(D − 1)p21 C00 + p41 C0 (p1 , p2 ) = {bubbles} . (3.91)
130 QCD at Fixed Order: Technology
∆ = δkµνρ
1 k2 k3
k1µ k2ν k3ρ ≡ δkk11kk22kk33 . (3.94)
vi · kj = δij . (3.95)
µk1 k2 k3
nµ = √ , (3.96)
∆
where the epsilon tensor ensures orthogonality
ni · kj = ni · vj = 0, (3.97)
and the normalization is such that n2 = 1. The properties of these vectors make it
straightforward to see that a four-dimensional loop momentum can be expanded as
3
X
`µ = (` · ki )viµ + (` · n)nµ . (3.98)
i=1
This is a particularly useful form for the following reason: the diagram shown in
Fig. 3.5 contains four propagators d0 , . . . , d3 , given by (assuming massless quarks)
d0 = `2 , di = (` + ki )2 , for i = 1, 2, 3. (3.99)
The dot products that appear in the loop momentum decomposition can thus be
re-expressed as
1 1
` · ki = (di − d0 ) − ki2 . (3.100)
2 2
The most complicated term in the evaluation of this diagram contains three powers of
the loop momentum in the numerator and can be treated by realizing that
Technology of next-to-leading-order calculations 131
3
!
µ ν ρ 1 µ ν X 2 ρ ρ
` ` ` = ` ` (di − d0 − ki )vi + (` · n)n
2 i=1
X 3
1
= `µ (dj − d0 − kj2 )vjν + (` · n)nν
4 j=1
3
!
X ρ
× − ki2 vi + (` · n)nρ + triangles
i=1
3
! 3
1 X X
= − km vm + (` · n)nµ −
2 µ
kj2 vjν + (` · n)nν
8 m=1 j=1
3
!
X
× − ki2 viρ + (` · n)nρ + triangles, bubbles
i=1
≡ δ0µνρ + µνρ
δ1 (` · n) + δ2µνρ (` · n)2 + δ3µνρ (` · n)3 + lower points.
In other words, the integral can be manipulated into a very particular form. It can
be represented by coefficients of a scalar box integral (δ0 ) and tensor integrals that
are still of rank 3, but where the loop momentum is contracted with the transverse
vector, n. The remaining terms are all integrals in which at least one propagator has
been cancelled.
One crucial simplification remains. Contracting the loop momentum in Eq. (3.98)
with itself yields the relation
3
X
2
` = (` · ki )(` · kj )vi · vj + (` · n)2
i,j=1
(3.101)
3
2 1 X
⇒ (` · n) = d0 − (di − d0 − ki2 )(dj − d0 − kj2 )vi · vj
4 i,j=1
This means that a factor of (` · n)2 in Eq. (3.101) may be replaced by a constant, up to
terms in which a denominator has been cancelled and that the results thus represent
lower-point integrals. Redefining δ0 and δ1 suitably to absorb the additional constant
terms, the rank 3 integral can thus be parameterized most simply by
After integration, the `-dependent term cannot contribute to the result since the inte-
gral will only produce contributions of the form ki · n = 0, thus vanishing by definition.
Therefore the rank 3 box integral has been reduced to a scalar box integral, together
with lower-point integrals of rank at most 2.
A similar line of reasoning can now be followed for the lower point triangle and
bubble, integrals. In these cases the parameterization of the loop momentum becomes
more complicated since additional transverse directions are required in order to span
132 QCD at Fixed Order: Technology
the four-dimensional space of loop momenta. However, the basic line of reasoning
follows and, in similar fashion, these integrals can also be reduced to their scalar coun-
terparts. This allows the entire diagram to be reduced to a simple sum of coefficients
multiplied by basic scalar integrals at the integrand level
(...)
In this equation the n-point scalar integral is denoted by In , where the box prop-
agators that have been cancelled are labelled in the superscript. This decomposition
is at the heart of the approach to computing one-loop amplitudes that was originally
formulated by Ossola, Pittau, and Papadopoulos [792], now commonly referred to as
the OPP method.
It is convenient to rewrite this decomposition one more time, with the cancelled
propagators in the lower-point integrals explicit,
1 X (i) X (i,j)
D= c4 + c3 di + c2 di dj . (3.104)
d0 d1 d2 d3 i i,j
If a particular loop momentum could be chosen such that di = 0 for all i, then all the
terms on the right-hand side of Eq. (3.104) would vanish except for c4 . Therefore the
coefficient c4 can be determined by evaluating the diagram for this special value of
the loop momentum. The form of the loop momentum that satisfies these additional
constraints is already determined by Eq. (3.98), Eq. (3.100), and Eq. (3.101). Setting
all the propagators to zero in these expressions, the loop momentum reads
12
3 3
1X 1 X
`µ = − ki2 viµ + k 2 k 2 vi · vj nµ . (3.105)
2 i=1
2 i,j=1 i j
Evaluating the expression for the diagram, D, at this particular value of `µ yields
the box integral coefficient c4 . With this coefficient determined, Eq. (3.104) can be
rewritten as
c4 1 X (i)
X (i,j)
D− = c3 di + c2 di dj (3.106)
d0 d1 d2 d3 d0 d1 d2 d3 i i,j
to clarify the strategy for determining the triangle coefficients. Another parameter-
ization of the loop momentum is required, which differs in important respects from
Eq. (3.98). Importantly, for the triangle case, there are only two independent momenta
so that spanning the physical space requires two transverse directions n1 and n2 ,
2
X
`µ = (` · ki )viµ + (` · n1 )nµ1 + (` · n2 )nµ2 , (3.107)
i=1
Technology of next-to-leading-order calculations 133
where v1 and v2 are now redefined appropriately. Despite this difference, the essential
strategy remains the same: use an appropriate parameterization of the loop momentum
that allows the reduction of tensor integrals to the scalar case, then demand that it
also satisfy di = 0 for the three propagators that appear in the integral. With this
choice, choosing for instance d1 = d2 = d3 = 0 in Eq. (3.106) singles out the coefficient
(0)
c3 . Repeating this procedure, at the level of bubble and tadpole coefficients, where
appropriate, eventually leads to values for all of the integral coefficients appearing in
Eq. (3.104). For further details, the interested reader is referred to Refs. [501, 792].
At this point two important points must be stressed. The first is that by setting
various combinations of propagators to zero, the integrands actually reduce to com-
binations of on-shell tree-level amplitudes. Combining this approach with the efficient
methods for computing such amplitudes discussed in the previous section has yielded
powerful new tools for computing NLO corrections in an automated fashion, far be-
yond the limits of purely analytic calculations. These techniques are equally adept at
handling massive particles propagating in the loop and the total particle multiplicity
is limited purely by computing power. Examples of such numerical codes are listed
below.
The second point is that the discussion presented above neglects an important
complication. The discussion was rooted in four dimensions, which is sufficient to
determine all of the contributions proportional to scalar integrals. However, when
working consistently in d = 4 − 2ε dimensions, the algorithm presented so far must
be extended. Explicitly, the triangle loop momentum decomposition is now described
not only by the transverse vectors n1 and n2 but also by the unit vector in (−2ε)
dimensions, nε ,
2
X
`µ = (` · ki )viµ + (` · n1 )nµ1 + (` · n2 )nµ2 + (` · nε )nµε . (3.108)
i=1
While at first glance this seems like a rather small change, it results in an expression
for the decomposition of a general tensor integral that is more complicated than the
form given previously. In particular, the relation analogous to Eq. (3.101) is,
3
1 X
(` · n1 )2 + (` · n2 )2 + (` · nε )2 = d0 − (di − d0 − ki2 )(dj − d0 − kj2 )vi · vj (3.109)
4 i,j=1
so that the reduction of a triangle integral of rank at least 2 leads to integrals with
(` · nε )2 numerators. The contribution of such integrals is straightforward to evaluate
by Passarino–Veltman reduction,
Z h i
(` · nε )2
dD ` = nµε nνε gµν C00 = (−2ε)C00 (3.110)
d0 d1 d2
where most of the components of the integral vanish by orthogonality, but the term
proportional to the metric tensor, C00 , survives (cf. the equation defining the box inte-
gral counterpart of C00 , Eq. (3.78)). Since C00 contains an ultraviolet 1/ε singularity,
134 QCD at Fixed Order: Technology
this integral thus gives a non-zero contribution. These additional terms are called
rational parts and additional work is required to determine their contributions. Al-
though this is an important aspect of the procedure, it is not conceptually different
from the strategy outlined here and does not require much more computational effort.
The idea underlying the OPENLOOPS algorithm [338] is to combine tensor integral
reduction and OPP methods to generate one-loop amplitudes based on a recursive
construction of Feynman diagrams. To see how this works in more detail, consider the
colour-stripped n-point one-loop diagram
Z
dDq N (In ; q)
A(D) = , (3.111)
D0 D1 . . . Dn−1
In = {i1 , i2 , . . . , in } (3.112)
β
Here, Xγδ and wδ are the vertices and subtrees that enter the tree algorithm. The
former are similar or identical to the vertices Vα used in the off-shell recursion rela-
tions, cf. Eq. (3.14), while the latter are different. In contrast to the currents in the
off-shell recursion relations that capture all possible combinations of given subsets of
external particles, the wδ are recursively constructed from Feynman (sub-)amplitudes.
Pictorially,
j
i = t
@
@
k
or
β
β
Xγδ (i, j, k) wγ (j) wδ (k)
w (i) = , (3.117)
p2i − m2i + i
with polarization vectors or spinors being the wave functions representating external
vector bosons or fermions. Expanding7 the vertex functions in orders of q µ ,
β β β
Xγδ = Yγδ + q ν Zν; γδ , (3.118)
similar to Eq. (3.114), allows to decompose the recursion for the numerator terms in
Eq. (3.116) to be written as
h i
β β
Nµβ1 µ2 ...µr ;α (In ) = Yγδ Nµγ1 µ2 ...µr ;α (In−1 ) + Zν; γ δ
γδ Nµ2 ...µr ;α (In−1 ) + w (in ).
(3.119)
The number of coefficents for this kind of decomposition grows polynomially, with a
degree determined by the tensorial rank r of the original numerator term. Recycling
subtrees, symmetrization over open-loop tensorial indices µ1 , µ1 , . . . , µr , “pinching”
7 In the equation below, only Feynman rules from gauge theory have been assumed, limiting the
expansion to first order in the four-momentum; including also effective theories such as the Higgs
Effective Theory with its more complicated vertex structure renders the inclusion of second-order
terms necessary.
136 QCD at Fixed Order: Technology
propagators, and thereby factoring out common factors In further enhances the ef-
ficiency of this algorithm. Once the coefficients of the decomposition are known, the
polynomial in Eq. (3.114) can be evaluated multiple times at very low CPU cost,
thereby yielding a massive acceleration of the overall calculation. As a further benefit,
the OPENLOOPS algorithm can be used in conjunction with both OPP and traditional
tensor reduction methods or any admixture, which enhances it even more. As a con-
sequence, after its initial implementation in the OPENLOOPS package with a first non-
trivial application in [337], the algorithm has also been adopted by the MADGRAPH
collaboration [148].
In Section 3.3.1, methods to calculate the virtual correction V in the expression for
cross-sections at next-to-leading order, cf. Eq. (3.49), have been discussed. As already
stated, this term exhibits two kinds of divergences: ultraviolet and infrared ones. While
the former are confined inside the virtual correction and can be dealt with there
through regularization and renormalization, the latter are a bit more tricky. This is
due to the fact that these divergences, the infrared ones, must cancel between virtual
and real contributions, due to the Bloch–Nordsieck (BN) and Kinoshita–Lee–-
Nauenberg (KLN) theorems [257, 678, 724]. This cancellation has already been
observed in the previous chapter, for the example of inclusive W production at hadron
colliders, cf. Section 2.2.5. Direct calculation of the virtual contribution in this example
resulted in the terms exhibited in Eq. (2.117), while the direct evaluation of the real
correction term, including an integration over the two-body final-state phase space
resulted in Eq. (2.118). It is important to stress here that the exact cancellation of
the infrared divergences in this case was only recovered upon integration over the
respective phase space of the final-state particles.
With more complicated processes, this way of directly calculating the matrix ele-
ments and performing phase-space integrals to cancel the divergences very quickly ex-
hausts its applicability. This can be traced back to the fact that, in general, final-state
phase-space integrals for final states with more than three particles cannot analytically
be evaluated, neither in four nor in the even more complicated case of D dimensions.
For such cases numerical methods — Monte Carlo integration techniques — must be
invoked, which by construction only work in an integer number of dimensions, another
approach to deal with infrared divergences and their mutual cancellation is manda-
tory. By now the method of choice is infrared subtraction. The underlying idea is to
isolate the divergent structures through suitable terms such that Eq. (3.49) becomes
Technology of next-to-leading-order calculations 137
XZ Z
1
(NLO) (NLO)
σ = dxa dxb fa/h1 (xa , µF )fb/h2 (xb , µF ) dσ̂ab→n (µF , µR )
a,b 0
Z Z
= dΦB Bn (ΦB ; µF , µR ) + Vn (ΦB ; µF , µR ) + dΦR Rn (ΦR ; µF , µR )
Z
= dΦB Bn (ΦB ; µF , µR ) + Vn (ΦB ; µF , µR ) + In(S) (ΦB ; µF , µR )
Z
+ dΦR Rn (ΦR ; µF , µR ) − Sn (ΦR ; µF , µR ) ,
(3.120)
where the real subtraction term Sn , which lives in the (n + 1)-particle phase space
(S)
of the real correction, and the integrated subtraction term In , which lives in the
(n)-particle phase space of the Born and virtual contributions, cancel each other in
the integrals
Z Z
(S)
0 ≡ dΦB In (ΦB ; µF , µR ) − dΦR Sn (ΦR ; µF , µR ). (3.121)
The catch in this method is that due to the universal structure of infrared
divergences in gauge theories it is possible to construct subtraction terms S in a
process-independent way, guaranteeing that the difference (R − Subtraction) is finite
in every single phase-space point in ΦR . Even more, these terms can be constructed as
the product of some Born-level configurations times some individual terms, accounting
for the emission of an additional particle. There terms can be integrated over the
phase space of the additional particle in D dimensions, yielding infrared divergences
that manifest as terms proportional to 1/ε2 and 1/ε. Before, however, turning to
the explicit construction of these terms in a specific formalism, Catani–Seymour
subtraction [353], the underlying idea will be elucidated in a toy model and applied
to the by-now familiar example of W production at hadron colliders.
Vn X
∗(V)
Vn = = |M(B)
n Mn |, (3.122)
ε
Rn )(x) X (R)
Rn (x) = = |Mn+1 (x)|2 ,
x
where both the Born and the virtual contributions are constant in the toy model and
the latter has already been regularized in D dimensions and its ultraviolet divergences
have been regularized and renormalized. The remaining infrared divergences then lead
to the pole in 1/, which has been made explicit in the virtual contribution — here,
138 QCD at Fixed Order: Technology
in the toy model, Vn indeed is infrared finite and ultraviolet renormalized. The real-
emission part in the toy model depends on a one-dimensional phase-space parameter
x ∈ [0, 1] and it diverges for x → 0, producing a single pole 1/, as again made explicit,
while the function Rn (x) is regular in the full interval.
Written in these quantities the NLO cross-section is given by
Z1
σ (NLO) = [Bn + Vn ] FnJ + J
dx Rn (x)Fn+1 (x)
0
(3.123)
Z1
Vn J dx J
= Bn + Fn + Rn (x)Fn+1 (x),
ε x
0
where F J is a (jet) criterion applied in order to ensure that the Born-level cross-
section and the corresponding kinematics are free of singularities. The concept of
jets has already been introduced in more detail in Section 2.1.6. Here, it suffices to
remember that in the language of QCD the application of a jet criterion implies that
the m outgoing particles are all sufficiently energetic and well separated from each
other such that the Born cross-section is free of infrared singularities. In fact, this jet
criterion could be replaced with any observable that ensures that the Born-level part
is well behaved.
Typically, when adding virtual corrections, no new phase-space regions become
available because the incoming and outgoing particles are still the same as at the
Born level. Therefore, the function F can be applied without much ado also to the
virtual part. This is not true when adding additional real radiation. In fact, infrared
divergences appear; in the toy model they are represented by the 1/x-poles. In order to
deal with them the observable F J must be defined in such a way that soft or collinear
emissions — those with x → 0 in the toy model — do not affect it. This catches
the essence of the intuitive definition of an observable being infrared-safe: it must be
defined in such a way that further soft and/or collinear emissions do not affect it.
Mathematically speaking this means that
J J
lim Fn+1 (x) = Fn+1 (0) = FnJ . (3.124)
x→0
It is important to stress, though, that without infrared safe observables the full concept
of higher-order calculations becomes entirely meaningless.
The Bloch–Nordsieck and Kinoshita–Lee–Nauenberg theorems [257, 678, 724] now
state that for infrared-safe quantities the infrared divergences in the real and virtual
contributions cancel, which implies that
In the framework of NLO calculations two systematically different methods have been
developed to isolate the pole in the real-emission contributions, namely phase-space
slicing [576] and subtraction [352, 353, 542]. In both cases the real-emission contribu-
Technology of next-to-leading-order calculations 139
tion will be written in D dimensions — in the toy model here this amounts to replacing
the x−1 pole by x−1− . Then the ideas underlying both methods are as follows:
1. Phase-space slicing
The idea [573] is to introduce an arbitrary cut-off δ and to rewrite the NLO part
of the cross-section as
Z1
(1) Vn J dx J
σ = F + Rn (x)Fn+1 (x)
ε n x1+
0
(3.126)
Zδ Z1
Vn J dx J dx J
= F + Rn (x)Fn+1 (x) + Rn (x)Fn+1 (x).
ε n x1+ x1+
0 δ
Zδ Z1
(1) Vn J dx dx
σ = F + Rn (0)FnJ + J
Rn (x)Fn+1 (x) + O (ε)
ε n x1+ x1+
0 δ
Z1
Vn J dx
= 1 − δ − F + J
Rn (x)Fn+1 (x) + O (ε) (3.127)
ε n x1+
δ
Z1
dx
= log δ · Vn FnJ + J
Rn (x)Fn+1 (x) + O (ε) ,
x1+
δ
where δ − = 1− log δ+O ε2 has been used. This procedure has a nice additional
feature, that the answer should be independent of δ. However, there is a tension
between retaining a good singular approximation (which is obtained by choosing
a very small δ) and still avoiding corresponding large logarithmic cancellations,
which leads to a preference of larger δ. An illustration of this check is shown in
Fig. 3.6, taken from a calculation of W bb̄ production at NLO [520]. This ambiguity
in choosing an optimal value of δ has made this method in practice going somewhat
out of fashion, since manual interference and careful monitoring of the numerical
stability of the final results are mandatory.
2. Subtraction methods
This problem is alleviated by subtraction methods, where a zero is introduced
into the result by adding and subtracting a term
Z1
dx
Rn (0) FnJ (3.128)
x1+
0
Z1
(1) Vn J dx J
σ = F + Rn (x)Fn+1 (x)
ε n x1+
0
Z1
Vn J dx
= F + Rn (0)FnJ
ε n x1+
0
(3.129)
Z1 Z1
dx dx
− Rn (0)FnJ + J
Rn (x)Fn+1 (x)
x1+ x1+
0 0
Z1
Vn J dx
= F [1 − 1] + Rn (x)FnJ (x) − Vn FnJ .
ε n x 1+
0
This subtraction is possible because the infrared structure of the toy model is
entirely known and fixed by Eq. (3.125). In real applications the case is not quite
as trivial, although the infrared structure still is fixed by the fact that in the soft
and collinear limits of parton emission the respective real-emission contributions
factorize into a process-dependent Born part and a process-independent parton
splitting part, where the spin dependence is given by the Altarelli–Parisi splitting
kernels already encountered for the case of QED in Eq. (2.9).
Technology of next-to-leading-order calculations 141
Translating the toy model of the previous section to the language employed in the
rest of the book, in general a cross-section at leading and at next-to-leading order is
written as
Z
σ (LO) = dΦB Bn (ΦB ; µF , µR )
Z
σ (NLO) = dΦB Bn (ΦB ; µF , µR ) + Vn (ΦB ; µF , µR ) + In(S) (ΦB ; µF , µR )
Z
+ dΦR Rn (ΦR ; µF , µR ) − Sn (ΦR ; µF , µR ) ,
(3.130)
cf. Eq. (3.120). As before, ΦB and ΦR denote the Born and real-emission phase space,
(S)
respectively, and Bn , Vn , In , Rn , and Sn , are the matrix elements for the Born,
renormalized virtual, integrated subtraction, real-emission, and real subtraction con-
tributions.
The integrated subtraction and the real subtraction contributions actually corre-
spond to the terms
Z1
J dx
±Rm (0) Fm , (3.131)
x1+ε
0
J
which are combined with the virtual correction Vm Fm /ε and with the real correction
term in the toy model. By now, there are different, well-established methods to con-
struct such subtraction terms in a process-independent way. Note that for hadronic
initial states Ṽ has been defined such as to include the collinear mass-factorization
counter-terms related to divergences that are absorbed into the definition of the PDFs
for the incoming partons, essentially the terms in the last line of Eq. (2.118). All
integrands include parton luminosity, symmetry, and flux factors.
To see how the subtraction method works in more detail, it is instructive to consider
the real radiation contributions that enter the NLO corrections to W production.
The starting point is the real matrix element squared for the process ud¯ → gW +
given in Eq. (2.101). As already observed, it is divergent in the limits t̂ → 0 and û → 0.
The key to a general approach for handling these infrared singularities lies in the ability
to deal with the collinear singularities associated with each of these limits separately.
This can immediately be seen by analysing the kinematic dependence, isolating the
divergent terms and then using partial fractioning,
2
t̂2 + û2 + 2m2W ŝ t̂ + û + 2m2W ŝ
= −2 (3.132)
t̂û t̂û
142 QCD at Fixed Order: Technology
2
1 1 t̂ + û + 2m2W ŝ 1 1 2m2 ŝ
= + −2= + m2W − ŝ + 2 W − 2.
t̂ û t̂ + û t̂ û mW − ŝ
In the last line, overall momentum conservation has been employed, embodied by the
identity ŝ + t̂ + û = m2W . Furthermore, the terms in square brackets in the final line
can be written in terms of the single dimensionless quantity
leading to
2 2
(LO) 2πCF αs (µR ) (LO) 2 t̂ + û2 + 2m2W ŝ
Mud→gW
¯ + = Mud→W
¯ + · (3.134)
m2W t̂û
2πCF αs (µR ) (LO) 2 1 1 2 2x
= Mud→W
¯ + · + − +x+1 − 2
x t̂ û 1−x mW
for the amplitude squared of the real-emission contribution. Isolating the divergent
terms ∝ 1/t̂ or 1/û, the equation above can be cast into the form
2
(LO) 1 (LO) 2
Mud→gW
¯ + = Mud→W
¯ + · D(t̂, x) + D(û, x) + R(x) . (3.135)
x
The infrared-singular terms are expressed in terms of the function D(t̂, x), where
1 2
D(t̂, x) = 8παs CF − −1−x , (3.136)
t̂ 1 − x
and the non-singular remainder reads
2x
R(x) = 8παs CF − 2 . (3.137)
mW
The sum of the two singular terms therefore forms the subtraction term,
1 (LO) 2
S(ΦR ) = Mud→W
¯ + D( t̂, x) + D(û, x) . (3.138)
x
It is worth noting that they are identical to the dipole subtraction terms used in
the seminal Catani–Seymour paper [353], but they have been simply derived from the
original singular matrix element. The general form of the subtraction terms in Catani–
Seymour subtraction will be discussed in the next section. This dipole term is also
closely related to the standard Altarelli–Parisi splitting function for a quark into a
quark and a gluon. There is one dipole for each of the t̂ and û singularities, but due to
the simplicity of the process and its symmetry with respect to the initial states their
forms are exactly identical.
To compute the corresponding integrated subtraction term, I (S) in Eq. (3.130),
first the phase space must be appropriately factorized to allow an analytic integration
Technology of next-to-leading-order calculations 143
over the one-particle phase space of the emitted gluon. The phase space is given
by
dDpW 2 2 dDpg
dΦW g = (2π)δ(p W − m W ) (2π)δ(p2g )(2π)D δ D (pa + pb − pW − pg )
(2π)D (2π)D
dD−1 pg
= (2π)2−D δ (pa + pb − pg )2 − m2W (3.139)
2E
where, in the last line, E represents the energy of the gluon. It is straightforward to
evaluate this in the c.m. frame. The result reads
−ε
(2π)2ε−2 t̂û t̂ + û
dΦW g = √ d − √ dt̂ dΩ1−2ε δ ŝ + t̂ + û − m2W , (3.140)
2 ŝ ŝ 2 ŝ
ŝ + t̂ + û m2W t̂
x = = , v = − , (3.142)
ŝ ŝ ŝ
where the former has been simplified using the Mandelstam relation ŝ + t̂ + û = m2W
from Eq. (2.100). The phase-space integral can be expressed through them as
ŝ1−ε dΩ1−2ε −ε
dΦW g = dx dv v −ε (1 − x − v) 2π δ xŝ − m2W . (3.143)
16π 2 (2π)1−2ε
The factors in square brackets can now be recognized as the final-state phase space for
W production, at the reduced c.o.m. energy squared, xŝ. The phase space is thus an
integral over x of the convolution of this reduced phase space with the dipole phase
space defined by
ŝ1−ε dΩ1−2ε −ε
dφ(x, v, ŝ) = dv v −ε (1 − x − v) , (3.144)
16π 2 (2π)1−2ε
where the integral over v ranges from 0 to (1 − x). In this case, since the only partonic
content in the leading order process is in the initial state, this corresponds to an
initial emitter and initial spectator, anticipating the language of the Catani–Seymour
approach that is used in the next section.
The dipole phase space derived above can be used to integrate the single dipole
term of Eq. (3.136). Restoring the correct overall dimensions with a factor of µ2ε ,
Z
µ2ε D(t̂, x) dφ(x, v, ŝ)
144 QCD at Fixed Order: Technology
ε Z 1−x
αs CF µ2 −ε 1 2
= cΓ xε dv v −ε (1 − x − v) − 1 − x . (3.145)
2π m2W 0 v 1−x
Here, the convenient constant cΓ of Eq. (3.62) has been used. The v integration can
be evaluated by substituting the variable v → (1 − x)y and then using the result
in Eq. (A.6):
Z 1−x
−ε 1 1 Γ2 (1 − ε)
dv v −ε (1 − x − v) = − (1 − x)−2ε . (3.146)
0 v ε Γ(1 − 2ε)
where terms of order ε2 or higher have been dropped. The remaining terms are non-
singular in the limit that x → 1 and so do not require the introduction of further
+-distributions. Thus the integrand becomes
1 ε −2ε 2
− x (1 − x) −1−x
ε 1−x
(
(1)
1 Pqq x 3 2 (1 − x)2
= − − δ(1 − x) − ε log
ε CF 2ε 1−x x +
2 2
2
1 ε π (1 − x)
− 1+ δ(1 − x) + ε(1 + x) log
ε 3 x
2
(1)
1 3 π 1 Pqq (x)
= 2
+ + δ(1 − x) −
ε 2ε 3 ε CF
2 (1 − x)2 (1 − x)2
+ log − (1 + x) log . (3.148)
1−x x + x
(1)
In this equation the regularized splitting function, to first order in αs , Pqq (x),
appears. It has been given in Eq. (2.33).
Using the relation between gamma functions given in Eq. (A.5) ultimately yields
Technology of next-to-leading-order calculations 145
Z ε
αs CF µ2 1 3 π2
µ2ε D(t̂, x) dφ(x, v, ŝ) = cΓ + + δ(1 − x)
2π m2W ε2 2ε 6
#
2 (1 − x)2 (1 − x)2
+ log − (1 + x) log
1−x x + x
2 ε
αs µ 1 (1)
− 2 cΓ Pqq (x).
2π mW ε
(3.149)
There are two such contributions that take identical forms originating from the D(t̂, x)
and D(û, x) terms in Eq. (3.135). The sum of the two is the form of the integrated
dipole term represented by I (S) in Eq. (3.130). Note that, since the starting expres-
sion in Eq. (3.136) was defined in four dimensions, this calculation has implicitly been
performed in the dimensional reduction scheme. Working in conventional dimensional
regularization would have introduced an extra term −ε(1 − x) in that equation that
would eventually manifest itself as an additional +(1 − x) inside the square brackets
in Eq. (3.149)
With the explicit expression for this contribution at hand it is possible to see
exactly how the factorization of singularities into the parton distribution functions
occurs. The relevant singular contribution in Eq. (3.149) is given by
2 ε
αs µF 1 (1)
cΓ − Pqq (x) , (3.150)
2π m2W ε
since the remaining singularities proportional to δ(1 − x) cancel with those obtained
from the virtual diagrams. The NLO quark PDF fq/h (y) can then be defined in terms
(B)
of the bare PDF fq/h (y), which does not exhibit any scaling violations, as follows
Z
(B) αs cΓ 1 dx (1)
(B) y
fq/h (y) = fq/h (y) + − Pqq (x)fq/h , (3.151)
2π ε y x x
in order to absorb this singularity. The factor of $1/x$ present in this equation can be
traced back to Eq. (3.135), while the dependence on $f^{(B)}_{q/h}(y/x)$ is a consequence of the
fact that $x\hat{s} = m_W^2$. In addition to the pure $1/\epsilon$ singularity, this definition also includes
constant terms that are obtained by expanding out the universal factor $c_\Gamma$.
Note that it is only the pole term in Eq. (3.151) that must be absorbed in order
to arrive at a finite result while the constant terms are a matter of choice. The choice
of additional, finite constant terms that are absorbed defines the factorization scheme
that is used for the PDFs. The definition in Eq. (3.151) corresponds to the modified
minimal subtraction, or the MS-scheme, which is the preferred definition for all
modern PDF sets. This scheme dependence must also account for the particular reg-
ularization scheme used in the calculation. The redefinition in Eq. (3.151) corresponds
to conventional dimensional regularization. In the dimensional reduction scheme one
must make the replacement,
\[
\frac{1}{\epsilon}\, P^{(1)}_{qq}(x) \;\longrightarrow\;
\frac{1}{\epsilon}\, P^{(1)}_{qq}(x) + C_F\,(1-x) - \frac{C_F}{2}\,\delta(1-x) .
\tag{3.152}
\]
Hence, after this factorization has been accounted for, the contribution of a single
dipole term in Eq. (3.149) can be written as,
\[
\begin{aligned}
\mu^{2\epsilon}\int D(\hat{t},x)\, d\phi(x,v,\hat{s})
&= \frac{\alpha_s C_F}{2\pi}\, c_\Gamma \left(\frac{\mu^2}{m_W^2}\right)^{\epsilon}
\Bigg[ \left(\frac{1}{\epsilon^2} + \frac{3}{2\epsilon} + \frac{\pi^2}{6}
- \frac{1-a_{\rm CDR}}{2}\right)\delta(1-x) \\
&\hspace{2.5cm}
+ 1 - x
+ \left[\frac{2}{1-x}\log\frac{(1-x)^2}{x}\right]_+
- (1+x)\log\frac{(1-x)^2}{x} \Bigg] \\
&\quad - \frac{\alpha_s}{2\pi}\, \log\!\left(\frac{\mu^2}{m_W^2}\right) P^{(1)}_{qq}(x) .
\end{aligned}
\tag{3.153}
\]
In this equation the regularization scheme dependence has been captured by the con-
stant aCDR , previously introduced in Section 3.3.1. Notice that, as must be the case,
once both dipole contributions are accounted for the dependence on aCDR in this
equation exactly cancels that from the virtual corrections, given in Eq. (3.64).
In fact this result is also very close to the one for the full real corrections previously
quoted in Eq. (2.118). The two differ by a single term which is given by the integral
of the non-singular remainder in Eq. (3.137),
\[
\mu^{2\epsilon}\int R(x)\, d\phi(x,v,\hat{s})
= \frac{\alpha_s C_F}{2\pi}\, c_\Gamma \left(\frac{\mu^2}{m_W^2}\right)^{\epsilon}
\left[-2(1-x)\right] .
\tag{3.154}
\]
partons i and j into an emitter {ij} with momentum p̃ij , while parton k accounts for
the recoil and changes its momentum from pk to p̃k . This not only allows all particles
to be kept on their mass-shell all the time, p2k = p̃2k = m2k and p̃2ij = m2ij , but it also
directly leads to a factorization not only of the matrix elements, but of phase space.
The latter fact is important to allow a calculation of Born-level matrix elements with
one less particle. The catch in this construction now is that these extra emission bits,
{ij} + k → i + j + k can be integrated analytically in D dimensions, thereby allowing
them to be subtracted in differential form from the real-emission matrix element and
added back to the virtual contribution in integrated form.
In order to put both types of contributions together, it is sensible to decompose
the eikonal cross-section term W(p1 , p2 ; k) with its symmetry between both emitting
partons, with light-like momenta p1 and p2 , into two individual terms with a definite
emitter and spectator assignment. Including colour factors, they schematically read
\[
\begin{aligned}
\mathcal{W}(p_1, p_2; k)
&= -\frac{\mathbf{T}_1\cdot\mathbf{T}_2}{2}
\left(\frac{p_1^\mu}{p_1 k} - \frac{p_2^\mu}{p_2 k}\right)^{\!2}
= \mathbf{T}_1\cdot\mathbf{T}_2\, \frac{p_1 p_2}{(p_1 k)(p_2 k)} \\
&= \mathbf{T}_1\cdot\mathbf{T}_2
\left[\frac{p_1 p_2}{(p_1 k)(p_1 k + p_2 k)}
+ \frac{p_1 p_2}{(p_2 k)(p_1 k + p_2 k)}\right] \\
&= \tilde{D}_{1k;2} + \tilde{D}_{2k;1} ,
\end{aligned}
\tag{3.155}
\]
where the two dipole terms D̃ik;j have been introduced, with i denoting the emitter
or splitter, k being the emitted particle, and j denoting the spectator. Note that the
colour generators Ti,j related to the particles i and j act as matrices on the full colour
structure of the Born-level cross-section the eikonal is being attached to in order to
facilitate the extra emission.
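The partial fractioning of the symmetric eikonal factor into the two emitter–spectator terms of Eq. (3.155) is a purely algebraic identity. The minimal sympy sketch below checks it, treating the three invariants $p_1\!\cdot\! k$, $p_2\!\cdot\! k$ and $p_1\!\cdot\! p_2$ as independent positive symbols; the symbol names are only illustrative.

```python
# Check the partial fractioning used in Eq. (3.155).
import sympy as sp

p1k, p2k, p1p2 = sp.symbols('p1k p2k p1p2', positive=True)

eikonal     = p1p2 / (p1k * p2k)                 # (p1.p2)/((p1.k)(p2.k))
dipole_1k2  = p1p2 / (p1k * (p1k + p2k))         # emitter 1, spectator 2
dipole_2k1  = p1p2 / (p2k * (p1k + p2k))         # emitter 2, spectator 1

print(sp.simplify(eikonal - (dipole_1k2 + dipole_2k1)))   # -> 0
```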
Analysing these terms shows that each of them diverges in both the soft limit of
the energy of the emitted parton approaching zero, $\omega_k \to 0$, and the collinear limit
of its momentum becoming parallel to the momentum of the splitter, $k^\mu \parallel p_i^\mu$. At
the same time, the individual terms do not diverge for the emitted momentum k
going parallel to the spectator momentum. This helps to disentangle soft and collinear
divergences. It also helps to analyse singularities related to the eikonal — essentially
the soft and soft-collinear ones — in colour space, including leading and sub-leading
colour contributions and to add the hard collinear divergences, which are encoded in
splitting functions and are a leading colour effect. This means that the eikonal-based
dipoles above have been generalized in such a way that they also contain the collinear
bits encoded in suitable splitting kernels. Full dipole subtraction terms D therefore are
introduced that emerge from the combination of Born-level matrix elements squared,
including all symmetry and PDF factors, with terms similar to the D̃ from Eq. (3.155).
The catch here is that this factorization of the full subtraction matrix element into
Born-level parts and individual emission terms also includes a factorization of the
phase space such that for these terms
ΦR = ΦB ⊗ Φ1 . (3.156)
Here, to simplify the notation, the sum over dipole terms D̃ij;k (Φ1 ) has been replaced
with the product of a dipole operator, D(Φ1 ), with the summation implicit.
This structure is reflected by a sum of the corresponding integrated terms, to be
added back to the virtual contribution,
\[
\begin{aligned}
I^{(S)}(\Phi_B,\epsilon)
&= \sum_{\rm dipoles} I^{(D)}(p_a, p_b;\, p_1, p_2, \ldots, p_{i-1}, p_{i+1}, \ldots, p_{n+1}) \\
&= \sum_{ij,k} B_{ij;k}(\Phi_B)\otimes I^{(D)}_{ij;k}(\Phi_B)
\;\longrightarrow\; B(\Phi_B)\otimes D(\Phi_B) .
\end{aligned}
\tag{3.158}
\]
Here, one final-state particle has been integrated out. This stresses that there are only
$n$ particles in the final state of these integrated terms, which therefore have Born-level
kinematics.
The following discussion will proceed closely along the lines of the seminal paper
by Catani and Seymour (CS) [353]; a closer relation with their paper will be worked
out in Appendix C.2, where individual equations in CS will be directly linked to the
expressions below.
For example, for the case of a dipole with both splitter ij and spectator k being final-
state particles, these differential dipole subtraction terms, denoted as Dij;k , schemat-
ically read
\[
D_{ij;k}(p_a, p_b;\, p_1, p_2, \ldots, p_n)
= B(p_a, p_b;\, p_1, p_2, \ldots, \tilde{p}_{ij}, \ldots, \tilde{p}_k, \ldots, p_n)
\otimes \tilde{D}_{ij;k}(p_i, p_j, p_k) ,
\tag{3.159}
\]
where
\[
\tilde{p}_{ij} = p_i + p_j - \frac{y_{ij,k}}{1-y_{ij,k}}\, p_k ,
\qquad
\tilde{p}_k = \frac{1}{1-y_{ij,k}}\, p_k ,
\tag{3.160}
\]
so that the spectator parton keeps its direction but its momentum is stretched by a
factor 1/(1 − yij,k ), where the dimensionless quantity yij,k is given by
\[
y_{ij,k} = \frac{p_i p_j}{p_i p_j + p_j p_k + p_k p_i} .
\tag{3.161}
\]
The splitting functions depend on both yij,k and the splitting parameter z̃i ,
\[
\tilde{z}_i = \frac{p_i p_k}{(p_i + p_j)\, p_k}
= \frac{p_i \tilde{p}_k}{\tilde{p}_{ij}\, \tilde{p}_k}
\qquad\text{and}\qquad
\tilde{z}_j = 1 - \tilde{z}_i .
\tag{3.162}
\]
With these parameters, the typical form of the divergences can be written as
\[
\frac{p_i p_j + p_j p_k + p_k p_i}{(p_i + p_k)\, p_j}
= \frac{1}{1 - \tilde{z}_i\,(1-y_{ij,k})} .
\tag{3.163}
\]
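The FF kinematic map of Eqs. (3.160)–(3.162) is easy to implement and test numerically. The following minimal Python sketch uses arbitrary massless four-momenta (not taken from any particular process) and checks that the mapped momenta are on-shell, that total momentum is conserved, and that $\tilde{z}_i$ can be computed equivalently from the original or the mapped momenta.

```python
# Minimal check of the Catani-Seymour FF momentum mapping, Eqs. (3.160)-(3.162).
import numpy as np

def mdot(a, b):
    """Minkowski product with metric (+,-,-,-)."""
    return a[0] * b[0] - np.dot(a[1:], b[1:])

def massless(E, theta, phi):
    return np.array([E, E*np.sin(theta)*np.cos(phi),
                        E*np.sin(theta)*np.sin(phi), E*np.cos(theta)])

pi, pj, pk = massless(30., 0.4, 0.1), massless(20., 1.3, 2.0), massless(50., 2.2, 4.0)

y  = mdot(pi, pj) / (mdot(pi, pj) + mdot(pj, pk) + mdot(pk, pi))   # Eq. (3.161)
zi = mdot(pi, pk) / mdot(pi + pj, pk)                              # Eq. (3.162)

pt_ij = pi + pj - y / (1.0 - y) * pk                               # Eq. (3.160)
pt_k  = pk / (1.0 - y)

print("y, z_i                  :", y, zi)
print("ptilde_ij^2, ptilde_k^2 :", mdot(pt_ij, pt_ij), mdot(pt_k, pt_k))  # ~ 0
print("momentum conserved      :", np.allclose(pt_ij + pt_k, pi + pj + pk))
print("z_i from mapped momenta :", mdot(pi, pt_k) / mdot(pt_ij, pt_k))
```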
\[
\langle s |\, V_{q_i g_j;k}\, | s' \rangle
= 8\pi \mu_R^{2\epsilon}\, C_F\, \alpha_s(\mu_R)
\left[\frac{2}{1 - \tilde{z}_i\,(1-y_{ij,k})} - (1+\tilde{z}_i) - \epsilon\,(1-\tilde{z}_i)\right]
\delta_{ss'} .
\tag{3.168}
\]
The corresponding integrated dipole term I (D) (ε) is given by the integral of the
dipole over the emission phase space of parton j, namely
\[
I^{(D)}_{ij;k}(\epsilon)
= -\frac{\alpha_s(\mu_R^2)}{2\pi\,\Gamma(1-\epsilon)}
\left(\frac{4\pi\mu_R^2}{2\tilde{p}_{ij}\tilde{p}_k}\right)^{\epsilon}
\frac{\mathbf{T}_{ij}\cdot\mathbf{T}_k}{\mathbf{T}_{ij}^2}\,
\mathcal{V}_{ij}(\epsilon) ,
\tag{3.169}
\]
where the Vij (ε) can be constructed from the following expressions:
\[
\mathcal{V}_{ij}(\epsilon)
= \mathbf{T}_i^2\left(\frac{1}{\epsilon^2} - \frac{\pi^2}{3}\right)
+ \gamma_i\left(\frac{1}{\epsilon} + 1\right) + K_i ,
\]
\[
K_{q,g} =
\begin{cases}
\left(\dfrac{7}{2} - \dfrac{\pi^2}{6}\right) C_F & \text{for } i = q \\[2ex]
\left(\dfrac{67}{18} - \dfrac{\pi^2}{6}\right) C_A - \dfrac{10}{9}\, T_R\, n_f & \text{for } i = g
\end{cases}
\qquad
\gamma_{q,g} =
\begin{cases}
\dfrac{3}{2}\, C_F & \text{for } i = q \\[2ex]
\dfrac{11}{6}\, C_A - \dfrac{2}{3}\, n_f T_R & \text{for } i = g .
\end{cases}
\tag{3.170}
\]
The anomalous dimensions $\gamma^{(1)}_{q,g}$ have first been encountered as the terms proportional
to δ(1 − z) in the construction of the DGLAP splitting kernels, Eq. (2.33). The term
Kg actually will return in another context, namely in resummation, where it will be
denoted as K and relate to a generic, flavour-independent higher-order correction to
the emission of a soft gluon, cf. Eq. (5.65) in Section 5.2.1. These terms effectively also
parameterize the contribution of collinear logarithms in Q⊥ resummation, see also
Eq. (5.66).
As before, for the differential dipole terms, an integrated dipole operator I(ΦB ; ε)
is being constructed such that, again,
\[
I^{(S)}(\Phi_B)
= \sum_{ij;k} B_{ij;k}(\Phi_B)\otimes I^{(D)}_{ij;k}(\epsilon)
\;\longrightarrow\; B(\Phi_B)\otimes I(\Phi_B;\epsilon) .
\tag{3.171}
\]
\[
y_{ij,k} = \frac{2\, p_i p_j}{Q^2} ,
\tag{3.172}
\]
This is a fairly trivial manifestation of the requirement of infrared safety for cuts on
kinematic configurations in physical processes.
The three-parton matrix element for q q̄g production of course corresponds to the
real correction at NLO to the process and it is given by
\[
d\Phi_R = d\Phi_B\, \frac{Q^2}{16\pi^2}\, dx_1\, dx_2\,
\Theta(1-x_1)\,\Theta(1-x_2)\,\Theta(x_1+x_2-1) .
\tag{3.176}
\]
The dipole subtraction terms are easily constructed. The splitting function is of
course the qg splitting function from Eq. (3.168). Inserting it into the Born-level matrix
element is fairly straightforward due to the Kronecker δ in the spins, and the only
somewhat tricky bit is the colour factor coming with it. Taking a closer look, the
relevant term actually reads
\[
-\frac{1}{2 p_i p_j}\,
\frac{\mathbf{T}_{ij}\cdot\mathbf{T}_k}{\mathbf{T}_{ij}^2}
\otimes V_{ij,k} ,
\tag{3.177}
\]
where the Tij and Tk are the colour matrices related to the splitter and spectator
that are inserted into the matrix element. In the dipole term here, Tij = Tq = T13
and Tk = Tq̄ = T2 . By virtue of the fact that the overall amplitude must be a colour
singlet, for each splitter ij the sum over all spectators k of the term Tk will result in
−Tij . In the case here, this implies that
\[
-\frac{1}{2 p_i p_j}\,
\frac{\mathbf{T}_{ij}\cdot\mathbf{T}_k}{\mathbf{T}_{ij}^2}
= \frac{1}{2 p_i p_j} .
\tag{3.178}
\]
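The colour relation underlying Eq. (3.178) can be made concrete with a small numerical check: for a two-parton ($q\bar{q}$) colour-singlet amplitude, colour conservation gives $\mathbf{T}_q = -\mathbf{T}_{\bar{q}}$ and hence $\mathbf{T}_q\cdot\mathbf{T}_{\bar{q}}/\mathbf{T}_q^2 = -1$, with $\mathbf{T}_q^2 = C_F$. The sketch below computes $C_F$ from the explicit SU(3) generators $t^a = \lambda^a/2$ built from the standard Gell-Mann matrices.

```python
# Compute C_F = sum_a t^a t^a from explicit SU(3) generators and check the
# colour insertion T_q . T_qbar / T_q^2 = -1 used in Eq. (3.178).
import numpy as np

lam = np.zeros((8, 3, 3), dtype=complex)
lam[0][0, 1] = lam[0][1, 0] = 1
lam[1][0, 1], lam[1][1, 0] = -1j, 1j
lam[2][0, 0], lam[2][1, 1] = 1, -1
lam[3][0, 2] = lam[3][2, 0] = 1
lam[4][0, 2], lam[4][2, 0] = -1j, 1j
lam[5][1, 2] = lam[5][2, 1] = 1
lam[6][1, 2], lam[6][2, 1] = -1j, 1j
lam[7] = np.diag([1, 1, -2]) / np.sqrt(3)

t = lam / 2.0
casimir = sum(t[a] @ t[a] for a in range(8))      # = C_F * identity
CF = casimir[0, 0].real
print("C_F =", CF)                                # 4/3
print("T_q . T_qbar / T_q^2 =", -CF / CF)         # -1
```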
The dipole term relating to the gluon p3 being emitted from quark p1 therefore is
given by
\[
\begin{aligned}
D^{(\epsilon=0)}_{13;2}(p_1, p_2, p_3)
&= B(\tilde{p}_{13}, \tilde{p}_2) \otimes
\left[-\frac{1}{2 p_1 p_3}\,
\frac{\mathbf{T}_{13}\cdot\mathbf{T}_2}{\mathbf{T}_{13}^2}\,
V^{(\epsilon=0)}_{q_1 g_3,\bar{q}_2}\right] \\
&= B(\tilde{p}_{13}, \tilde{p}_2) \otimes
\frac{8\pi C_F\,\alpha_s(\mu_R)}{2 p_1 p_3}
\left[\frac{2}{1-\tilde{z}_1\,(1-y_{13,2})} - (1+\tilde{z}_1)\right] .
\end{aligned}
\tag{3.179}
\]
The Born-level momenta are connected to the real-emission phase space through
\[
\tilde{p}_2^\mu = \frac{1}{x_2}\, p_2^\mu
\qquad\text{and}\qquad
\tilde{p}_{13}^\mu = Q^\mu - \tilde{p}_2^\mu = Q^\mu - \frac{1}{x_2}\, p_2^\mu ,
\tag{3.180}
\]
where the quantities y13,2 and z̃1 in the general expression for the mapping in Eq. (3.160)
can be expressed through the xi in the specific case here as
\[
y_{13,2} = 1 - x_2
\qquad\text{and}\qquad
\tilde{z}_1 = \frac{1-x_3}{x_2} .
\tag{3.181}
\]
Inserting this and using that xi = 2 − xj − xk ,
and a similar term for $D_{23;1}$ with the replacement $1 \leftrightarrow 2$. Together, these two terms
constitute the subtraction term; combining them with the real-emission contribution gives
\[
\begin{aligned}
d\sigma^{(R)} - d\sigma^{(S)}
= \frac{C_F\,\alpha_s(\mu_R)}{2\pi}\int_0^1 dx_1\, dx_2\,
\Bigg\{ & d\Phi_B\, B\, \frac{x_1^2+x_2^2}{(1-x_1)(1-x_2)} \\
& - d\Phi_B\, B(\tilde{p}_{13},\tilde{p}_2)\,
\frac{1}{1-x_2}\left[\frac{2}{2-x_1-x_2} - 1 - \tilde{z}_1\right] \\
& - d\Phi_B\, B(\tilde{p}_{23},\tilde{p}_1)\,
\frac{1}{1-x_1}\left[\frac{2}{2-x_1-x_2} - 1 - \tilde{z}_2\right]
\Bigg\} ,
\end{aligned}
\tag{3.184}
\]
with $\tilde{z}_1 = 1 - (1-x_1)/x_2$ and $\tilde{z}_2 = 1 - (1-x_2)/x_1$.
Casting the three-parton matrix element into a form similar to the subtraction terms,
\[
\frac{x_1^2+x_2^2}{(1-x_1)(1-x_2)}
= \frac{1}{1-x_2}\left[\frac{2}{2-x_1-x_2} - 1 - x_1\right]
+ \{x_1 \leftrightarrow x_2\} ,
\tag{3.185}
\]
the difference of real-emission and subtraction terms can be integrated over $x_1$ and $x_2$, with the result
\[
d\sigma^{(R-S)} = -\,\frac{C_F\,\alpha_s(\mu_R)}{4\pi}\, d\Phi_B\, B .
\tag{3.186}
\]
The subtracted one-loop correction consists of the genuine virtual contribution and
the integrated subtraction term, in this case
\[
\begin{aligned}
d\sigma^{(V+I)} &= d\Phi_B \left[\mathcal{V}(\Phi_B) + I^{(S)}(\Phi_B;\epsilon)\right] \\
&= d\Phi_B \left[\mathcal{V}(\Phi_B) + B(\Phi_B)\otimes I(\Phi_B;\epsilon)\right] .
\end{aligned}
\tag{3.187}
\]
The individual integrated dipole term $I^{(D)}_{qg;\bar{q}}$ and its barred counterpart, which together constitute $I(\Phi_B;\epsilon)$, are given by
\[
\begin{aligned}
I^{(D)}_{qg;\bar{q}}(\Phi_B,\epsilon)
&= -\frac{\mathbf{T}_q\cdot\mathbf{T}_{\bar{q}}}{\mathbf{T}_q^2}\,
\frac{\alpha_s(\mu_R)}{2\pi\,\Gamma(1-\epsilon)}
\left(\frac{4\pi\mu_R^2}{\hat{s}}\right)^{\epsilon}
\mathcal{V}_{qg}(\epsilon) \\
&= \frac{C_F\,\alpha_s(\mu_R)}{2\pi\,\Gamma(1-\epsilon)}
\left(\frac{4\pi\mu_R^2}{\hat{s}}\right)^{\epsilon}
\left[\frac{1}{\epsilon^2} + \frac{3}{2\epsilon} + 5 - \frac{\pi^2}{2}\right] .
\end{aligned}
\tag{3.188}
\]
Again the fact has been used that the overall amplitude must be a colour singlet,
implying that, as before, the colour matrix Tq̄ = −Tq and therefore Tq · Tq̄ = −CF .
Together these yield the overall result for the subtracted virtual contribution to the
cross-section, namely
\[
\begin{aligned}
d\sigma^{(V+I)} &= d\Phi_B\, B(\Phi_B)\,
\frac{C_F\,\alpha_s(\mu_R)}{2\pi\,\Gamma(1-\epsilon)}
\left(\frac{4\pi\mu_R^2}{Q^2}\right)^{\epsilon}
\left[-\frac{2}{\epsilon^2} - \frac{3}{\epsilon} - 8 + \pi^2 + \mathcal{O}(\epsilon)\right] \\
&\quad + d\Phi_B\, B(\Phi_B)\,
\frac{C_F\,\alpha_s(\mu_R)}{2\pi\,\Gamma(1-\epsilon)}
\left(\frac{4\pi\mu_R^2}{Q^2}\right)^{\epsilon}
\left[\frac{2}{\epsilon^2} + \frac{3}{\epsilon} + 10 - \pi^2 + \mathcal{O}(\epsilon)\right] \\
&= d\Phi_B\, B(\Phi_B)\, \frac{C_F\,\alpha_s(\mu_R)}{\pi} .
\end{aligned}
\tag{3.189}
\]
As anticipated, the overall result for the subtracted virtual contribution also is finite,
rendering both individual parts of the next-to-leading order correction separately finite
and therefore allowing their safe numerical integration.
However, in combination with the real-emission part above this yields the well-
known result
\[
\sigma^{\rm (NLO)}_{e^+e^-\to q\bar{q}}
= \sigma^{\rm (LO)}_{e^+e^-\to q\bar{q}}
\left[1 + \frac{3}{4}\,\frac{C_F\,\alpha_s(\mu_R)}{\pi}\right] .
\tag{3.190}
\]
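As a short numerical illustration of Eq. (3.190): with $C_F = 4/3$ the bracket reduces to the familiar $1 + \alpha_s/\pi$, i.e. a correction of a few per cent. The value of $\alpha_s$ used below is only indicative.

```python
# Size of the NLO correction in Eq. (3.190) for an indicative alpha_s.
import math

CF, alpha_s = 4.0 / 3.0, 0.118
k_factor = 1.0 + 0.75 * CF * alpha_s / math.pi
print(f"sigma_NLO / sigma_LO = {k_factor:.4f}")   # about 1.038
```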
As a consequence of this decomposition into splitters and spectators, there are four
types of dipole structures in Catani–Seymour subtraction, namely all combinations of
initial and final splitters with initial and final spectators. In the original CS paper,
initial state particles are indicated by moving their label from Eq. (3.155) from sub-
script — reserved for final state particles — to superscript. This is also exhibited in
Fig. 3.7.
The full differential subtraction cross-section, including emissions from splitter–
spectator pairs in all combinations of initial and final state particles, written in the
notation of this book, is given by
In this equation, the first line in the square bracket refers to emitters in the final
state with the first term referring to a spectator in the final state, whereas the second
term relates to a spectator in the initial state. The pattern repeats itself in the last
line, with the only difference being the splitters in the initial state. Of course, in all
cases kinematic maps similar to the one in Eq. (3.160) will be invoked to construct
the corresponding Born-level kinematics. For the details of this construction in the
various cases, cf. Appendix C.2
Turning to the actual construction of the subtraction terms, the terms relating to
final-state emitters in the first line have very similar structures, namely
\[
D_{ij;k}(p_a, p_b;\, \{p_i\})
= B(p_a, p_b;\, \{\ldots, \tilde{p}_{ij}, \ldots, \tilde{p}_k, \ldots\})
\otimes \left[-\frac{1}{2 p_i p_j}\,
\frac{\mathbf{T}_{ij}\cdot\mathbf{T}_k}{\mathbf{T}_{ij}^2}\,
V_{ij,k}\right]
\tag{3.192}
\]
for the final–state splitter final–state spectator (FF) case, cf. Eq. (3.179). For the FI
case,
\[
D^{a}_{ij}(p_a, p_b;\, \{p_i\})
= B(\tilde{p}_a, p_b;\, \{\ldots, \tilde{p}_{ij}, \ldots\})
\otimes \left[-\frac{1}{2 p_i p_j}\,\frac{1}{x_{ij,a}}\,
\frac{\mathbf{T}_{ij}\cdot\mathbf{T}_a}{\mathbf{T}_{ij}^2}\,
V^{a}_{ij}\right] ,
\tag{3.193}
\]
indicating that the momentum of the spectator parton a in the initial state will be
altered. In both cases, FF and FI, only the relevant momenta have been shown as ar-
guments of the Born-part B. It is also understood that due to the implicit cancellation
of IR divergences in the difference of real-emission and subtraction terms, the dipoles
V ultimately will be evaluated in D = 4 dimensions, with ε = 0. The kinematic
maps for the two cases, FF and FI, including in particular the extra factor $x_{ij,a}$
in the propagator-like structure in front of the dipole, are detailed in Appendices C.1.1 and
C.1.2. It is worth noting that, due to the initial-state parton compensating for the
recoil of the emission process in the FI case, the Born-level matrix element, including
the PDFs, must be evaluated with different initial-state kinematics, as indicated by
$\tilde{p}_a$ in its arguments, and of course also flux and symmetry factors must be adjusted
correspondingly. This, however, is a fairly trivial manipulation.
This simple picture changes somewhat when initial-state singularities emerge due
to initial-state particles being the emitters. These cases are treated by the subtraction
terms in the last line of Eq. (3.191). The reason for this added complication is that,
first of all, the factorization of the phase space in the soft and especially in the collinear
limit must be checked, which has been done in great detail and instructively in the
original CS paper [353]. In addition, now the altered initial-state parton momenta
entering the Born term translate not only into the need to evaluate the PDF for
this parton at a different x, with a potentially changed flavour, as above, but the
emission off the initial-state parton also introduced new divergences which must be
absorbed into the definition of the PDF. Essentially, this is because the particles in
the initial state are fixed to move along a preferred direction, the beam axis. This
additional constraint, which is not present for final-state emitters, necessitates special
treatment and leads to the emergence of additional terms. The different ways in which this is
done, and which finite terms are absorbed together with a collinear divergence, give rise
to different factorization schemes, all of which of course are variants of the collinear
factorization. This implies that scheme-dependent terms may arise, which must be
properly accounted for. With this in mind,
\[
D^{aj}_{k}(p_a, p_b;\, \{p_i\})
= B(\tilde{p}_{aj}, p_b;\, \{\ldots, \tilde{p}_k, \ldots\})
\otimes \left[-\frac{1}{2 p_a p_j}\,\frac{1}{x_{jk,a}}\,
\frac{\mathbf{T}_{aj}\cdot\mathbf{T}_k}{\mathbf{T}_{aj}^2}\,
V^{aj}_{k}\right]
\tag{3.194}
\]
for the IF case, and
\[
D^{aj;b}(p_a, p_b;\, \{p_i\})
= B(\tilde{p}_{aj}, \tilde{p}_b;\, \{\ldots, \tilde{p}_k, \ldots\})
\otimes \left[-\frac{1}{2 p_a p_j}\,\frac{1}{x_{j,ab}}\,
\frac{\mathbf{T}_{aj}\cdot\mathbf{T}_b}{\mathbf{T}_{aj}^2}\,
V^{aj,b}\right] ,
\tag{3.195}
\]
for the II case. Details of the phase space mappings and the appropriate splitting
functions Vkaj and V aj,b can be found in Appendices C.1.3 and C.1.4, respectively. It
is worth stressing here that by far and large the kinematic mappings are organized
such that spectator partons keep their direction. This of course is not feasible any more
when the splitter parton is in the initial state: in the case of IF splittings then the
final state spectator must compensate the transverse momentum transfer; in the case
of II splittings this is of course not possible. In this case, the transverse momentum
compensation is achieved by moving the complete final state.
Turning to the integrated subtraction terms, to be added back and combined with
the virtual parts of the NLO calculation, one has to remember that they are essentially
given by the integral over the real-emission phase space of the differential ones. In
principle, this is fairly straightforward and results in
where the integrated dipole term is constructed from terms with final- or initial-state
splitters and spectators in intuitive notation as
where the sum is over all pairs of particles {ij} and k. Note that here the “splitter”
parton has been denoted by {ij} in order to make the connection to the splitter–
spectator notation in the differential dipole terms more manifest. The terms V{ij} (ε)
for the different flavours are given in Eq. (3.170). Naively, of course, terms with an identical
form, apart from trivial replacements in momenta and colours, also emerge for the
other integrated dipoles.
This, however, is a bit too simple to be entirely correct. In cases with initial state
partons a complication emerges, which originates from the fact that the kinematics
mappings imply a change of the incoming particle’s momentum, irrespective of whether
it is a splitter or spectator parton. Schematically, for the Born-level n-particle phase
space and the matrix element this will be accounted for by
\[
\begin{aligned}
d\Phi_B &\;\xrightarrow{\;{\rm FF}\,\to\,{\rm FI,\,IF,\,II}\;}\;
d\xi\; d\Phi_B(\xi) \\
B(\Phi_B) &\;\xrightarrow{\;{\rm FF}\,\to\,{\rm FI,\,IF,\,II}\;}\;
B(\Phi_B, \xi) .
\end{aligned}
\tag{3.199}
\]
For the integrated dipole this translates into an additional integration over the re-
coil parameter ξ, in the interval [0, 1], which will ultimately lead to the emergence
of various functions of ξ to be folded into the resulting expression. In principle, ξ
parameterizes how much the momentum of the incident parton acting as splitter or
spectator changes, and it will therefore also change the four-momentum balance ac-
cordingly, encoded in a δ function in the phase space element dΦB . In addition, if the
parton in question actually is treated as the splitter then this change in momentum
will also impact on the argument of the PDF, replacing the corresponding x with x/ξ.
As already indicated in Eq. (3.149) in Section 3.3.2, there are also additional terms
that must be considered when emissions off initial state partons are present. These
terms are related to further collinear divergences originating from the fact that the
matrix elements must be convoluted with PDFs, which themselves exhibit divergent
structures in their scale evolution. A fixed-order part of this evolution actually emerges
when considering initial-state singularities alone in the matrix elements and therefore
must be treated through a suitable collinear subtraction. It is no surprise that these
terms conversely exhibit a dependence on the details of the factorization scheme, man-
ifesting themselves in some finite terms. In the MS scheme used throughout this book
these terms reduce to zero, but in other schemes such as the DIS scheme this is not
the case. The corresponding collinear counterterm can be written as
\[
d\sigma^{(C)}_a = \sum_{a'} \int_0^1 d\xi\; d\Phi_B(\xi)\, B(\Phi_B,\xi)
\left[-\frac{1}{\epsilon}
\left(\frac{4\pi\mu_R^2}{\mu_F^2}\right)^{\epsilon}
P^{(1)}_{a'a}(\xi) + K^{aa'}_{\rm (F.S.)}(\xi)\right] .
\tag{3.200}
\]
Here, $P^{(1)}_{a'a}(\xi)$ are the Altarelli–Parisi splitting kernels to first order; their four-dimensional version is given in Eq. (2.33). The finite terms $K^{aa'}_{\rm (F.S.)}(\xi)$ are related to the choice of factorization scheme. As already indicated, for the MS scheme, they are given by
\[
K^{aa'}_{\rm (F.S.)}(\xi)\Big|_{\rm MS} = 0 .
\tag{3.201}
\]
Therefore, after absorbing the pole in the PDF, residual terms are left stemming from
their expansion in ε.
The form of this subtraction term could in fact have been anticipated without
any calculation: it merely connects the PDFs at their scale $\mu_F$ with the parton
density at the scale µ where the rest of the process and in particular the singularities
are evaluated. Not surprisingly the implicit change of parton density must be accounted
for, in a form similar to the DGLAP equation, which is why the respective splitting
kernels emerge. If the process has one initial-state parton only, merely one of these
terms has to be considered; in the presence of two initial-state partons, two of these
terms with suitable choices of flavours, etc. , have to be added to the virtual part.
In addition, and as already indicated, there are also terms coming from the initial-
state parton in the integral over the emission phase space in the dipole terms, like
the ones in the second line of the integrated dipole in Eq. (3.149). It has become
customary to construct the integrated dipole terms for initial-state partons through
specific collinear subtraction terms, extending the one in Eq. (3.200). These terms are
added to the simple dipole functions constituting the integrated subtraction terms,8
\[
+ \sum_{b'} \int_0^1 d\xi_b\; d\Phi_B(\xi_b)\,
B_{ab'}(p_a, \xi_b p_b) \otimes
\left[\mathbf{K}^{bb'}(\xi_b) + \mathbf{P}^{bb'}(\xi_b p_b, \xi_b;\, \mu_F^2)\right]
\tag{3.202}
\]
8 It is worth noting that similar reasoning in fact also applies for processes involving specified
hadrons in the final state and the corresponding fragmentation functions: also in this case collinear
subtraction terms must be added which stem from the evolution of the fragmentation functions
through secondary emissions.
where the dependence of the Born cross-sections on the incoming flavours and momenta
has been made explicit. The integrated dipole terms are given by Eq. (3.198) and
similar. They can be combined to yield
\[
\begin{aligned}
I(\epsilon) \equiv\; & I(p_a, p_b;\, p_1, \ldots, p_m;\, \epsilon) \\
=\; & -\frac{\alpha_s(\mu_R)}{2\pi\,\Gamma(1-\epsilon)}
\left\{
\sum_i \frac{\mathcal{V}_i(\epsilon)}{\mathbf{T}_i^2}
\left[
\sum_{k\neq i} \mathbf{T}_i\cdot\mathbf{T}_k
\left(\frac{4\pi\mu_R^2}{2 p_i p_k}\right)^{\!\epsilon}
+ \sum_{c\in\{a,b\}} \mathbf{T}_i\cdot\mathbf{T}_c
\left(\frac{4\pi\mu_R^2}{2 p_i p_c}\right)^{\!\epsilon}
\right]
\right. \\
& \left.\;\;
+ \sum_{c\in\{a,b\}} \frac{\mathcal{V}_c(\epsilon)}{\mathbf{T}_c^2}
\left[
\sum_{i} \mathbf{T}_c\cdot\mathbf{T}_i
\left(\frac{4\pi\mu_R^2}{2 p_c p_i}\right)^{\!\epsilon}
+ \sum_{d\neq c} \mathbf{T}_c\cdot\mathbf{T}_d
\left(\frac{4\pi\mu_R^2}{2 p_c p_d}\right)^{\!\epsilon}
\right]
\right\} .
\end{aligned}
\tag{3.203}
\]
The terms in the first square bracket relate to dipoles with final-state emitters i —
the first sum is over final-state spectators k and the second sum is over the two initial
state spectators c — while the second square bracket relates to dipoles with initial-
state emitters c, where, again, the first sum is over final-state spectators i and the
second term is for the other initial-state particle $d \neq c$ being the spectator. In all
cases, the V are given by Eq. (3.170).
The operators in the collinear terms, for instance $\mathbf{K}^{aa'}(\xi_a)$ and $\mathbf{P}^{aa'}(\xi_a p_a, \xi_a;\, \mu_F^2)$,
emerge from the considerations above and are given by
\[
\begin{aligned}
\mathbf{K}^{aa'}(\xi_a) = \frac{\alpha_s(\mu_R)}{2\pi}
\Bigg\{ & K^{aa'}(\xi_a)
+ \delta^{aa'} \sum_{\{ij\}}
\gamma^{(1)}_{\{ij\}}\,
\frac{\mathbf{T}_{\{ij\}}\cdot\mathbf{T}_{a'}}{\mathbf{T}^2_{\{ij\}}}
\left[\left(\frac{1}{1-\xi_a}\right)_{\!+} + \delta(1-\xi_a)\right] \\
& - \frac{\mathbf{T}_b\cdot\mathbf{T}_{a'}}{\mathbf{T}^2_{a'}}\,
\tilde{K}^{aa'}(\xi_a)
- K^{aa'}_{\rm F.S.}(\xi_a)
\Bigg\}
\end{aligned}
\tag{3.204}
\]
and
\[
\mathbf{P}^{aa'}(\xi_a p_a, \xi_a;\, \mu_F^2)
= \frac{\alpha_s(\mu_R)}{2\pi}\, P^{(1)}_{aa'}(\xi_a)
\left[\sum_{\{ij\}}
\frac{\mathbf{T}_{\{ij\}}\cdot\mathbf{T}_{a'}}{\mathbf{T}^2_{a'}}
\log\frac{\mu_F^2}{2\xi_a\, p_a p_{\{ij\}}}
+ \frac{\mathbf{T}_b\cdot\mathbf{T}_{a'}}{\mathbf{T}^2_{a'}}
\log\frac{\mu_F^2}{2\xi_a\, p_a p_b}\right] ,
\tag{3.205}
\]
cf. Eq. (C.45). The $\gamma^{(1)}_{\{ij\}}$ in the first line of Eq. (3.204) are the anomalous dimensions
to first order of the DGLAP splitting functions introduced in Eq. (2.33). The functions
$K$ and $\tilde{K}$ can be found in Appendix C.2. As already stated, the term $K_{\rm F.S.}$ related to
the choice of factorization scheme vanishes for the choice of the minimal subtraction
scheme, the one made in this book, $K_{\rm MS}(\xi) = 0$. The first-order splitting kernel $P^{(1)}_{aa'}$
occurring in the $\mathbf{P}$-operator has been given in Eq. (2.33).
Note the occurrence of the sums over final-state particles $\{ij\}$ in both the K and
the P operator. While in the former case these sums emerge from dipoles where the
initial-state particle a acts as spectator and the final-state particle {ij} is the splitter,
as indicated by the colour factors, these roles are reversed in the latter case. There, in
addition, a term emerges in the P operator which is due to the initial-state particle a
being the splitter and the other initial-state particle b being the spectator. Of course,
if the corresponding dipoles do not emerge, due to the lack of coloured particles in
either initial or final state, the respective terms are just dropped.
Using the Mandelstam identity Eq. (2.100) for this process, the recoil parameter $x_{g,u\bar{d}}$
in the dipole of Eq. (3.195) is given by
This is identical to the x of Eq. (3.133), in the first discussion of subtraction with
dipole terms from a suitable ad-hoc definition. In Catani–Seymour subtraction the
splitting kernel V qg,q in four dimensions, i.e. ε = 0, reads
\[
V^{ug,\bar{d}} = 8\pi C_F\,\alpha_s(\mu_R)
\left[\frac{2}{1-x} - (1+x)\right] .
\tag{3.211}
\]
Therefore,
\[
d\sigma^{(S)} = d\Phi_R\,
\big|M^{\rm (LO)}_{u\bar{d}\to W^+}\big|^2\,
\frac{8\pi C_F\,\alpha_s(\mu_R)}{x}\,\frac{1}{4}
\left(\frac{1}{\hat{t}}+\frac{1}{\hat{u}}\right)
\left(-\frac{2}{1-x}+(1+x)\right) .
\tag{3.212}
\]
This result is exactly the sum of the terms D(t̂, x) and D(û, x) from Eq. (3.136). Thus,
for the subtracted real correction contribution one finally arrives at
\[
\begin{aligned}
d\sigma^{(R-S)} &= d\Phi_R\,
\big|M^{\rm (LO)}_{u\bar{d}\to W^+}\big|^2\,
\frac{2\pi C_F\,\alpha_s(\mu_R)}{x}
\Bigg\{\left[\left(\frac{1}{\hat{t}}+\frac{1}{\hat{u}}\right)
\left(-\frac{2}{1-x}+(1+x)\right) - \frac{2x}{m_W^2}\right] \\
&\hspace{4cm}
- \left(\frac{1}{\hat{t}}+\frac{1}{\hat{u}}\right)
\left(-\frac{2}{1-x}+(1+x)\right)\Bigg\} \\
&= -\,\frac{4\pi C_F\,\alpha_s(\mu_R)}{m_W^2}\;
d\Phi_R\,\big|M^{\rm (LO)}_{u\bar{d}\to W^+}\big|^2 .
\end{aligned}
\tag{3.213}
\]
\[
\begin{aligned}
I_{u\bar{d}\to W}(\epsilon) &= I_{u\bar{d}\to W}(p_u, p_{\bar{d}};\, \epsilon) \\
&= -\frac{\alpha_s(\mu_R)}{2\pi\,\Gamma(1-\epsilon)}
\left(\frac{4\pi\mu_R^2}{2 p_u p_{\bar{d}}}\right)^{\epsilon}
\left[\frac{\mathbf{T}_u\cdot\mathbf{T}_{\bar{d}}}{\mathbf{T}_u^2}\,
\mathcal{V}_u(\epsilon)
+ \frac{\mathbf{T}_{\bar{d}}\cdot\mathbf{T}_u}{\mathbf{T}_{\bar{d}}^2}\,
\mathcal{V}_{\bar{d}}(\epsilon)\right] \\
&= \frac{C_F\,\alpha_s(\mu_R)}{\pi}\, c_\Gamma
\left(\frac{\mu_R^2}{m_W^2}\right)^{\epsilon}
\left[\frac{1}{\epsilon^2} + \frac{3}{2\epsilon} + 5
- \frac{1-a_{\rm CDR}}{2} - \frac{\pi^2}{2}\right] ,
\end{aligned}
\tag{3.214}
\]
where it has been realized that, at Born-level, the centre-of-mass energy of the annihi-
lating quark pair equals the W mass, consequently allowing the identification ŝ ≡ m2W .
Finally, the collinear subtraction terms from Eq. (3.202) have to be added. Speci-
fying to the ud¯ → W g case only, so ignoring gluon-initiated processes, they are given
by
\[
\begin{aligned}
d\sigma^{(C)} &= \int_0^1 d\xi_u\; d\Phi_B(\xi_u)\,
B_{u\bar{d}\to W}(\xi_u p_u, p_{\bar{d}}) \otimes
\left[\mathbf{K}^{qq}(\xi_u) + \mathbf{P}^{qq}(\xi_u p_u, \xi_u;\, \mu_F^2)\right] \\
&\quad + \int_0^1 d\xi_{\bar{d}}\; d\Phi_B(\xi_{\bar{d}})\,
B_{u\bar{d}\to W}(p_u, \xi_{\bar{d}} p_{\bar{d}}) \otimes
\left[\mathbf{K}^{qq}(\xi_{\bar{d}}) + \mathbf{P}^{qq}(\xi_{\bar{d}} p_{\bar{d}}, \xi_{\bar{d}};\, \mu_F^2)\right] .
\end{aligned}
\]
The K and P operators can be obtained from Eqs. (3.204) and (3.205) with the
contributing functions for the case at hand given by
\[
\begin{aligned}
K^{qq}(\xi) &= C_F \left\{
\left[\frac{2}{1-\xi}\log\frac{1-\xi}{\xi}\right]_+
- (1+\xi)\log\frac{1-\xi}{\xi} + (1-\xi)
- \delta(1-\xi)\left(5 - \pi^2\right)\right\} \\
\tilde{K}^{qq}(\xi) &= C_F \left\{
\left[\frac{2}{1-\xi}\log(1-\xi)\right]_+
- (1+\xi)\log(1-\xi)
- \delta(1-\xi)\,\frac{\pi^2}{3}\right\} .
\end{aligned}
\tag{3.215}
\]
Not surprisingly, the sum over all particles contained inside these terms collapses to
the other initial-state particle only. Concentrating on the first term for the time being,
and making the dependence on the PDFs explicit, therefore,
\[
\begin{aligned}
d\sigma^{(C)}_u &= \int_0^1 dx_u\, dx_{\bar{d}} \int_{x_u}^1 d\xi_u\;
f_{u/h_1}\!\left(\frac{x_u}{\xi_u}, \mu_F^2\right)
f_{\bar{d}/h_2}\!\left(x_{\bar{d}}, \mu_F^2\right)
\frac{\pi\,\delta(x_u x_{\bar{d}} s - m_W^2)}{m_W^2} \\
&\qquad\times
\big|M^{\rm (LO)}_{u\bar{d}\to W^+}\big|^2\,
\frac{\alpha_s(\mu_R)}{2\pi}
\left[K^{qq}(\xi_u) + \tilde{K}^{qq}(\xi_u)
- P^{(1)}_{qq}(\xi_u)\log\frac{\mu_F^2}{m_W^2}\right] \\
&= \int_0^1 dx_u\, dx_{\bar{d}} \int_{x_u}^1 d\xi_u\;
f_{u/h_1}\!\left(\frac{x_u}{\xi_u}, \mu_F^2\right)
f_{\bar{d}/h_2}\!\left(x_{\bar{d}}, \mu_F^2\right)
\frac{\pi\,\delta(x_u x_{\bar{d}} s - m_W^2)}{m_W^2} \\
&\qquad\times
\big|M^{\rm (LO)}_{u\bar{d}\to W^+}\big|^2\,
\frac{\alpha_s(\mu_R)\,C_F}{2\pi}
\Bigg\{
\left[\frac{2}{1-\xi_u}\left(\log\frac{1-\xi_u}{\xi_u}
+ \log(1-\xi_u) - \log\frac{\mu_F^2}{m_W^2}\right)\right]_+ \\
&\qquad\qquad
- (1+\xi_u)\left(\log\frac{1-\xi_u}{\xi_u}
+ \log(1-\xi_u) - \log\frac{\mu_F^2}{m_W^2}\right)
+ (1-\xi_u) \\
&\qquad\qquad
- \delta(1-\xi_u)\left(5 - \frac{2\pi^2}{3}
+ \frac{3}{2}\log\frac{\mu_F^2}{m_W^2}\right)
\Bigg\} .
\end{aligned}
\tag{3.216}
\]
Here, as before, the colour insertion term Tu ·Td¯/T2u = −1 has already been evaluated
and replaced.
Together with the virtual contribution and the subtraction term $I_{u\bar{d}\to W}(\epsilon)$ this
results in the following contribution to the overall cross-section:
\[
\begin{aligned}
d\sigma^{(V+I+C)}_{u\bar{d}\to W^+}(p_u, p_{\bar{d}})
&= \frac{C_F\,\alpha_s(\mu_R)}{2\pi}\,
\big|M^{\rm (LO)}_{u\bar{d}\to W^+}\big|^2
\int_0^1 dx_u\, dx_{\bar{d}}\;
\frac{\delta(x_u x_{\bar{d}} s - m_W^2)}{m_W^2} \\
&\quad\times
\int_{x_u}^1 d\xi_u \int_{x_{\bar{d}}}^1 d\xi_{\bar{d}}\;
f_{u/h_1}\!\left(\frac{x_u}{\xi_u}, \mu_F^2\right)
f_{\bar{d}/h_2}\!\left(\frac{x_{\bar{d}}}{\xi_{\bar{d}}}, \mu_F^2\right) \\
&\quad\times
\Bigg\{
c_\Gamma\left(\frac{\mu_R^2}{m_W^2}\right)^{\epsilon}
\left[-\frac{2}{\epsilon^2} - \frac{3}{\epsilon}
- 7 - a_{\rm CDR} + \pi^2\right]
\delta(1-\xi_u)\,\delta(1-\xi_{\bar{d}}) \\
&\qquad
+ \Bigg[ c_\Gamma\left(\frac{\mu_R^2}{m_W^2}\right)^{\epsilon}
\left(\frac{2}{\epsilon^2} + \frac{3}{\epsilon}
+ 9 + a_{\rm CDR} - \pi^2\right)
+ 2\left(\frac{2\pi^2}{3} - 5
- \frac{3}{2}\log\frac{\mu_F^2}{m_W^2}\right)
\Bigg]\,\delta(1-\xi_u)\,\delta(1-\xi_{\bar{d}}) \\
&\qquad
+ \Bigg[
\left[\frac{2}{1-\xi_u}\left(\log\frac{(1-\xi_u)^2}{\xi_u}
- \log\frac{\mu_F^2}{m_W^2}\right)\right]_+
- (1+\xi_u)\left(\log\frac{(1-\xi_u)^2}{\xi_u}
- \log\frac{\mu_F^2}{m_W^2}\right)
+ (1-\xi_u)\Bigg]\,\delta(1-\xi_{\bar{d}}) \\
&\qquad
+ \Bigg[
\left[\frac{2}{1-\xi_{\bar{d}}}\left(\log\frac{(1-\xi_{\bar{d}})^2}{\xi_{\bar{d}}}
- \log\frac{\mu_F^2}{m_W^2}\right)\right]_+
- (1+\xi_{\bar{d}})\left(\log\frac{(1-\xi_{\bar{d}})^2}{\xi_{\bar{d}}}
- \log\frac{\mu_F^2}{m_W^2}\right)
+ (1-\xi_{\bar{d}})\Bigg]\,\delta(1-\xi_u)
\Bigg\} .
\end{aligned}
\tag{3.217}
\]
The double and single poles $1/\epsilon^2$ and $1/\epsilon$ cancel, which allows the prefactor to be
replaced by unity,
\[
c_\Gamma\left(\frac{\mu_R^2}{m_W^2}\right)^{\epsilon} \;\longrightarrow\; 1 .
\tag{3.218}
\]
Therefore the total contribution to the cross-section from these terms is,
\[
\begin{aligned}
d\sigma^{(V+I+C)}_{u\bar{d}\to W^+}(p_u, p_{\bar{d}})
&= \frac{C_F\,\alpha_s(\mu_R)}{2\pi}\,
\big|M^{\rm (LO)}_{u\bar{d}\to W^+}\big|^2
\int_0^1 dx_u\, dx_{\bar{d}}\;
\frac{\delta(x_u x_{\bar{d}} s - m_W^2)}{m_W^2} \\
&\quad\times
\int_{x_u}^1 d\xi_u \int_{x_{\bar{d}}}^1 d\xi_{\bar{d}}\;
f_{u/h_1}\!\left(\frac{x_u}{\xi_u}, \mu_F^2\right)
f_{\bar{d}/h_2}\!\left(\frac{x_{\bar{d}}}{\xi_{\bar{d}}}, \mu_F^2\right) \\
&\quad\times
\Bigg\{
\left[\frac{4\pi^2}{3} - 8 + 3\log\frac{m_W^2}{\mu_F^2}\right]
\delta(1-\xi_u)\,\delta(1-\xi_{\bar{d}}) \\
&\qquad
+ \Bigg[
\left[\frac{2}{1-\xi_u}\log\frac{(1-\xi_u)^2\, m_W^2}{\xi_u\,\mu_F^2}\right]_+
- (1+\xi_u)\log\frac{(1-\xi_u)^2\, m_W^2}{\xi_u\,\mu_F^2}
+ (1-\xi_u)\Bigg]\,\delta(1-\xi_{\bar{d}}) \\
&\qquad
+ \Bigg[
\left[\frac{2}{1-\xi_{\bar{d}}}\log\frac{(1-\xi_{\bar{d}})^2\, m_W^2}{\xi_{\bar{d}}\,\mu_F^2}\right]_+
- (1+\xi_{\bar{d}})\log\frac{(1-\xi_{\bar{d}})^2\, m_W^2}{\xi_{\bar{d}}\,\mu_F^2}
+ (1-\xi_{\bar{d}})\Bigg]\,\delta(1-\xi_u)
\Bigg\} .
\end{aligned}
\tag{3.219}
\]
Not surprisingly, this agrees with results that have previously been constructed in Sec-
tions 2.2.5 and 3.3.2. Here, however, the full result is quoted, including the convolution
with the PDFs.
In the previous sections, the Catani–Seymour dipole subtraction has been introduced
as a specific implementation of the more general idea of infrared subtraction. This
method adds and subtracts terms to the virtual and real corrections in such a way
that the resulting subtracted contributions are individually infrared-finite. Ultimately,
this allows their phase-space integration with Monte Carlo techniques, which is neces-
sary due to the complex structure of the high-dimensional integration region in such
calculations.
In other words, in
\[
\begin{aligned}
\sigma^{\rm (NLO)}
&= \int d\Phi_B \left[
B_n(\Phi_B;\, \mu_F, \mu_R)
+ V_n(\Phi_B;\, \mu_F, \mu_R)
+ I^{(S)}_n(\Phi_B;\, \mu_F, \mu_R)\right] \\
&\quad + \int d\Phi_R \left[
R_n(\Phi_R;\, \mu_F, \mu_R)
- S_n(\Phi_R;\, \mu_F, \mu_R)\right] ,
\end{aligned}
\tag{3.220}
\]
the two integrals over dΦB and dΦR are individually finite and can therefore be inte-
grated with the fairly evolved and efficient methods discussed in Section 3.2.2.
There are two caveats, which have a purely technical reason. First of all, the
collinear terms encountered in the dσ (C) contribution of Eq. (3.202) feature “+” func-
tions. They are introduced in order to regulate possible divergences in the integration
in the limit where the variable approaches 1. Looking at their definition in Eq. (2.10),
\[
\int_0^1 dz\, [f(z)]_+\, g(z) = \int_0^1 dz\, f(z)\left[g(z) - g(1)\right] ,
\tag{3.221}
\]
where g(z) is a regular function, this necessitates the inclusion of such terms — typ-
ically the respective Born-level cross-sections — in the integration. This, however, is
not difficult per se but merely a slight annoyance in actual implementations. The only
tricky issue there consists in the fact that for z → 1 the divergence in f (z) is countered
by the simultaneous vanishing of [g(z) − g(1)], which numerically is not always perfect
due to the limited accuracy of numerical methods. One way to account for this is to set
a numerically very small limit for the regulator [g(z) − g(1)] below which it is replaced
by exactly zero, thereby setting the full integrand to zero.9
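The numerical treatment just described is easy to illustrate. The following minimal Monte Carlo sketch evaluates a plus-distribution integral in the form of Eq. (3.221), with a tiny regulator below which the ill-conditioned product is set to zero; the kernel and the stand-in "Born weight" are purely illustrative choices, not taken from a real calculation.

```python
# Monte Carlo evaluation of int_0^1 dz [1/(1-z)]_+ g(z) with g(z) = z^2,
# whose exact value is -3/2, using the small-regulator prescription.
import numpy as np

rng = np.random.default_rng(42)

def f(z):            # divergent kernel
    return 1.0 / (1.0 - z)

def g(z):            # stand-in for the Born-level weight
    return z**2

z = rng.random(500_000)                     # uniform points in [0, 1)
diff = g(z) - g(1.0)
# regulator: below the threshold the 0 * infinity product is replaced by zero
integrand = np.where(np.abs(diff) < 1e-12, 0.0, f(z) * diff)
print(f"MC estimate = {integrand.mean():.4f}   (exact: -1.5)")
```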
A second problem emerges in the subtracted real contribution, for similar reasons.
By construction [R(ΦR ) − S(ΦR )] approaches zero in singular regions of the phase
space ΦR . This cancellation, however, is a cancellation of the type ∞ − ∞, which
notoriously is tricky to achieve numerically. It has therefore become customary to
set the difference above to exactly zero, when two four-momenta in ΦR become very
collinear or one of the momenta becomes very soft. In reality this means that scalar
products of the four-momenta are being checked, and if they fall below some very small
cut-off value α, [R(ΦR ) − S(ΦR )] will be replaced by zero. The name of the game then
is to optimize the choice of α in such a way that the overall result is numerically stable.
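A toy version of this cut-off makes the procedure concrete. In the sketch below the real-emission and subtraction weights are illustrative stand-ins, chosen only to share the same leading $1/y$ behaviour in a scaled invariant $y$; phase-space points with $y$ below the technical cut are discarded.

```python
# Toy illustration of the technical cut on the subtracted real contribution.
import numpy as np

def real_weight(y):          # stand-in for R, divergent as y -> 0
    return (1.0 + (1.0 - y)**2) / y

def subtraction(y):          # stand-in for S, same leading divergence
    return 2.0 / y

def subtracted(y, alpha_cut=1e-10):
    if y < alpha_cut:        # too close to the singular region: drop the point
        return 0.0
    return real_weight(y) - subtraction(y)   # finite difference, here y - 2

rng = np.random.default_rng(1)
y = rng.random(200_000)
print("average subtracted weight:", np.mean([subtracted(yi) for yi in y]))  # ~ -1.5
```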
Both these issues have been discussed and tested quite exhaustively in a number
of automated implementations [536, 537, 584, 617].
some αdip , which parameterizes the phase space. For instance, for FF dipoles, the
“constrained” dipoles are given by [781]
\[
D'_{ij;k} = D_{ij;k}\,\Theta(\alpha - y_{ij;k}) ,
\tag{3.222}
\]
where yij;k has been defined in Eq. (3.161) and parameterizes the soft and collinear
limits of the splitting, as can be seen in Eq. (3.166). This idea has been extended
to other dipole configurations, namely to II, IF, and FI dipoles in [777], see also
Appendix C.2. Of course, the integrated contributions, the I, K, and P operators,
inherit this dependence, leading to some α dependent additional terms.
This parameter-dependent identification of singular regions and the ensuing decom-
position into subtracted and unsubtracted phase-space regions lead to a potentially
large saving in the number of terms. In addition, it also allows a very convenient check,
as the overall results for cross-sections must be parameter-independent.
Table 3.3 Tools for calculating virtual corrections: there are different types of relevant tools,
including those that list scalar (and other) basic integrals (“integrals”), reduce integrands
(“reduction”), provide libraries of virtual corrections or the full cross-sections (“libraries”),
or generate one-loop amplitudes automatically (“generators”). They use different reduction
technologies, such as Passarino–Veltman (“PV”), Ossola–Pittau–Papadopoulos (“OPP”), or
OPENLOOPS (“OL”) reduction. Some of them are set up to directly produce full calculations
(“full”) including all contributions.
Tool               type                           technology           dependencies on other codes
LOOPTOOLS [605]    integrals
ONELOOP [879]      integrals
QCDLOOP [499]      integrals
COLLIER [457]      reduction
CUTTOOLS [793]     reduction                      OPP
FORMCALC [605]     reduction                      PV
NINJA [802]        reduction                      Laurent expansion
SAMURAI [756]      reduction
BLACKHAT [226]     library (amplitudes)           OPP (unitarity)
MCFM [309]         library (full calculation)     PV & OPP
NJET [181]         library (amplitudes)           OPP
GOSAM [420]        generator (amplitudes)         OPP                  SAMURAI + NINJA +
MADLOOP [625]      generator (full calculation)   OL+OPP               CUTTOOLS +
OPENLOOPS [338]    generator (amplitudes)         OL+OPP               COLLIER + CUTTOOLS +
HELAC–NLO [241]    generator (full calculation)   OPP                  CUTTOOLS +
Table 3.4 Tools for automated infrared subtraction: there are different types of relevant
tools, those including the full underlying Born-level amplitudes and tools for phase-space
integration in the construction of the subtraction terms and those tools which only provide
the differential and integrated subtraction terms to be added on top of matrix elements
provided from outside.
Tool                    type               technology         dependencies on other codes
AMEGIC++ [584]          full: ME+PS        CS dipoles         SHERPA
AUTODIPOLE [617]        subtraction only   CS dipoles         MADGRAPH
COMIX [582]             full: ME+PS        CS dipoles         SHERPA
MADDIPOLE [537, 538]    full: ME+PS        CS dipoles         MADGRAPH
MADFKS [536]            full: ME+PS        FKS subtraction    MADGRAPH
MATCHBOX [808]          subtraction only   CS subtraction
Catani–Seymour subtraction scheme discussed in Section 3.3.3. For this type of con-
tribution, one event corresponds to the original emission, while the rest correspond to
the subtraction terms. The dipole terms, evaluated in n-parton phase space, are then
added back in the integrated subtraction events. For complex final states, there can
be many Catani–Seymour subtraction terms and thus the need for many subtracted
real-emission events in the ROOT ntuple. In fact, the subtracted real events take the
majority of the disk space required for storing the ROOT-ntuples. The statistical un-
certainty for events from the Born, virtual, and integrated subtraction ntuples can
be calculated in the standard Monte Carlo way. As the real-emission and the corre-
sponding subtraction configurations for a given event are strongly anti-correlated, the
anti-correlation must be taken into account in order not to over-estimate the statistical
error. It is possible to use a number of different jet algorithms and sizes, as long as
the appropriate subtracted real counter-events are present. Typically, a wide range of
jet sizes, R ∈ [0.2, 1], can be allowed with only a small overhead (number of additional
subtracted real events needed).
As mentioned above, the partons in each event can be re-clustered on the fly to
form cross-sections for different jet algorithms, for example using FASTJET. The ap-
propriate weight information is stored for each event, allowing the matrix element for
that event to be reweighted for different PDFs (and the appropriate value of αs (mZ )
for that PDF), and for different values of the renormalization and factorization scales.
Thus, the PDF, αs (mZ ) and scale uncertainties can be automatically calculated in
one run over the ntuples. For the Born and subtracted real events, this reweighting
is relatively straightforward. The virtual events have an additional dependence on
the renormalization scale resulting from the one-loop amplitudes. The integrated sub-
traction events also depend on the factorization scale and the PDFs as a result of
off-diagonal splittings.
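The idea behind this on-the-fly reweighting can be sketched schematically. In the toy code below each stored event carries, besides its nominal weight, the coefficients of the explicit $\log\mu_R^2$ and $\log\mu_F^2$ terms; these field names and the simple linear-in-logarithms form are hypothetical illustrations of the kind of information such ntuple formats provide, not the actual BLACKHAT+SHERPA layout, and a genuine PDF reweighting would in addition require re-evaluating the PDFs themselves.

```python
# Schematic scale reweighting of stored NLO events (illustrative layout only).
import math
from dataclasses import dataclass

@dataclass
class NtupleEvent:
    w0: float        # weight at the nominal scales mu_R = mu_F = mu_0
    w_logR: float    # coefficient of log(mu_R^2 / mu_0^2) (virtual part)
    w_logF: float    # coefficient of log(mu_F^2 / mu_0^2) (I/K/P terms)

def reweight(evt, muR_over_mu0, muF_over_mu0):
    lR = 2.0 * math.log(muR_over_mu0)
    lF = 2.0 * math.log(muF_over_mu0)
    return evt.w0 + evt.w_logR * lR + evt.w_logF * lF

events = [NtupleEvent(1.2, -0.15, 0.05), NtupleEvent(0.8, -0.10, 0.03)]  # toy numbers
for scale in (0.5, 1.0, 2.0):             # the usual factor-of-two variations
    total = sum(reweight(e, scale, scale) for e in events)
    print(f"mu = {scale} * mu_0 : sum of weights = {total:+.3f}")
```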
There is a standard format for storing information from NLO calculations in ROOT
ntuples that was originally developed by the BLACKHAT +SHERPA collaboration [233],
but has now been adopted for use in a number of NLO calculations from other groups
as well [182, 420]. Thus, an analysis code developed for the evaluation of ROOT ntuple
events for one process, can easily be adapted for use with any other NLO process.
Consider the production of Higgs (+ ≥ 3) jets through gg fusion at NLO as an ex-
ample. This calculation has been performed by the GOSAM collaboration [420] and the
output has been made available in the BLACKHAT +SHERPA ROOT ntuple format.11
This is one of the most difficult NLO calculations carried out to date. For cross-section
predictions with reasonable statistical uncertainties at 13 TeV, approximately 70 GB
of disk storage is needed for the Born ntuples, 2.5 GB for the virtual ntuples, 130
GB for the integrated subtraction ntuples and 2.2 TB for the subtracted real ntuples.
As discussed earlier, the subtracted real events by far require the most storage space.
Although the total disk space required is large, each ntuple file is restricted to a size
of a few GB, allowing for the option of parallel processing.
In Fig. 3.8 (left), the cross-section for the production of a third jet for gg → H+ ≥ 3
jets at 13 TeV is shown as a function of the third jet transverse momentum. Jets are
clustered using the anti-kT (D = 0.4) jet algorithm and must have a minimum trans-
verse momentum of 30 GeV and a maximum absolute rapidity of 4.4. The prediction
in the ntuple uses the CT10 NLO PDFs and the calculation is carried out at a central
scale of µR = µF = HT /2. The reweighting information in the ntuples was used to
produce the PDF uncertainty band (using the CT10 Hessian error set) and the scale
uncertainty band (varying the renormalization and factorization scales independently
up and down by a factor of two, while keeping the difference between the two scales
within a factor of two). The scale uncertainty dominates except at high pT where the
PDF uncertainty becomes comparable.
Fig. 3.8 (right) shows the jet mass distribution for the third jet, calculated from
the same ntuples. The scale dependence is significantly larger than for the jet pT
distribution since the jet mass, in this context, is a leading-order quantity, non-zero
only when an additional gluon is emitted. While the high jet mass behaviour shown
in the figure is reasonable, the low mass region lacks the Sudakov suppression that
would be present in a resummation calculation or parton shower Monte Carlo. Similar
distributions can be calculated for the jet mass distributions for the leading and second
leading jets, with the same caveat.
11 One of the authors (JH) is grateful to the GOSAM collaboration for providing and assisting in the
use of these ntuples.
Fig. 3.8 The third jet transverse momentum distribution (left) and third
jet mass distribution (right), for Higgs + ≥ 3 jets, calculated at NLO using
the GOSAM ROOT ntuples. The predictions are shown with scale and PDF
uncertainties.
In the past this type of contribution had been the focus of the most attention
since the evaluation of two-loop amplitudes is highly non-trivial.
(b) This contribution corresponds to the square of the one-loop three-parton matrix
elements,
\[
\big|A^{\rm 1\text{-}loop}(Z q\bar{q}g)\big|^2 .
\tag{3.224}
\]
Note that these are the same amplitudes that appear in the NLO calculation,
although in that case they enter as an interference with the tree-level amplitude.
(c) The third contribution also contains one-loop matrix elements, this time with four
partons, and enters interfered with the corresponding tree-level amplitude,
\[
{\rm Re}\left[A^{\rm 1\text{-}loop}(Z q\bar{q}gg) \times A^{\rm tree}(Z q\bar{q}gg)^*\right] .
\tag{3.225}
\]
Notice that exactly this contribution would be present in the NLO calculation of
Z + 2 jet production. The difference in this case is that one of the partons may
be unresolved, leading to additional soft and collinear singularities.
(d) The final contribution involves only tree-level matrix elements that contain five
partons,
\[
\big|A^{\rm tree}(Z q\bar{q}ggg)\big|^2 .
\tag{3.226}
\]
In this case two partons may be unresolved, giving rise to singularities of a form not
encountered at NLO. Isolating these has provided the toughest chal-
lenge to completing such NNLO calculations.
Any NNLO calculation will therefore be substantially more complicated than its
NLO counterpart and involve the introduction of new techniques. Unlike at NLO,
at present there is no automatic procedure for generating predictions at this level
and calculations are currently being performed on a case-by-case basis. It is therefore
important to understand the benefits of having a NNLO calculation at hand in each
case. Of course, by extending the perturbative calculation by an additional order, one
expects that the quality of the prediction should improve. Certainly more effects can
be accounted for at this order in perturbation theory. For instance, contribution (d)
allows a single jet to be composed of three partons, a situation that is impossible at
NLO. Such configurations are more sensitive to details of the jet algorithm that may
be reflected in real data. In addition, as already discussed in Section 2.2.6, the scale
production through gluon fusion [566] and Higgs+jet [392] and Z+jet production [564].
This approach is most similar to the subtraction methods discussed at NLO above,
with counter-terms constructed that can render individual contributions finite but
that can also be analytically integrated to explicitly collect singularities. For example,
returning to the example above, the schematic form of the NNLO contribution to the
cross-section for the parton-level process i + j → Z+parton would be
\[
d\hat{\sigma}^{\rm NNLO}_{ij}
= \int_{\Phi_1}\left[d\hat{\sigma}^{(a)}_{ij} + d\hat{\sigma}^{(b)}_{ij}
- d\hat{\sigma}^{C_1}_{ij}\right]
+ \int_{\Phi_2}\left[d\hat{\sigma}^{(c)}_{ij} - d\hat{\sigma}^{C_2}_{ij}\right]
+ \int_{\Phi_3}\left[d\hat{\sigma}^{(d)}_{ij} - d\hat{\sigma}^{C_3}_{ij}\right] .
\tag{3.227}
\]
In these equations the superscripts label the different contributions indicated in Fig. 3.9.
Each contribution is integrated over the appropriate phase space for n final-state par-
ticles, $\Phi_n$. The counter-terms for each contribution are indicated by $\hat{\sigma}^{C_k}_{ij}$. In this for-
mulation the counter-term C3 , for instance, removes all singularities resulting from
single- and double-unresolved limits of the contribution from the matrix elements (d).
Of course, not all calculations at NNLO require the full machinery discussed here.
In particular, for the simplest 2 → 1 and 2 → 2 processes, particularly if one is only
interested in total cross-sections and not exclusive properties of the final-state particles,
the calculations can be performed more simply. In fact, for the most important such
cases NNLO results have been available for some time. The total inclusive cross-section
for the Drell–Yan process, production of a lepton pair by a W or Z in a hadronic
collision, has long been known to NNLO accuracy [607]. Similarly, the inclusive Higgs
boson cross-section, which is a one-scale problem in the limit of large top mass, was
also first computed at NNLO some time ago [158, 615].
To conclude, the frontier of NNLO calculations is currently evolving very rapidly.
There are many competing methods for performing the calculations and undoubtedly
these techniques will continue to be honed. At present, 2 → 2 reactions can be com-
fortably handled, even with coloured and massive objects in the final state. These
pioneering calculations will lead to an even greater availability of NNLO predictions
in the near future. An important final note is that, for a consistent description of
the entire hard process at NNLO, all of the calculations discussed above rely on the
availability of parton densities evolved at the same order. Such PDF sets are indeed
available, as will be discussed in detail in Chapter 6, thanks to the calculation of the
QCD three-loop splitting functions [767, 883].
Fig. 3.11 Comparison between the approximate NNLO result from Loop-
Sim (blue) and the exact result from DYNNLO (red). The corresponding
NLO result is shown in green. Reprinted with permission from Ref. [830].
that correspond to the real radiation contributions entering at NNLO, the method
finds a corresponding Born configuration and sequence of subsequent emissions. This
step can be performed using a sequential-recombination jet algorithm, for which the
Cambridge–Aachen algorithm is preferred. By considering all possible ways in which
emitted particles can be combined with emitters, LoopSim determines exactly the
singular (or logarithmic) contributions of loop diagrams, which unitarize the corre-
sponding singular terms in the real radiation diagrams.12 This approximate NNLO
prediction is then finite and differs from the full NNLO calculation only by constant
terms. Indeed, the expected precision of the method for an observable A is,
\[
\frac{d\sigma^{\rm NNLO}_{\rm LoopSim}}{dA}
= \frac{d\sigma^{\rm NNLO}}{dA}
\left[1 + \mathcal{O}\!\left(\frac{\alpha_s^2}{K^{\rm NNLO}(A)}\right)\right] ,
\tag{3.228}
\]
where $d\sigma^{\rm NNLO}/dA = K^{\rm NNLO}(A)\, d\sigma^{\rm LO}/dA$ defines the “local” NNLO $K$-factor as a
function of $A$. As $K^{\rm NNLO}(A)$ increases, the quality of the approximation is expected
to dramatically improve. In cases where this can be explicitly tested, such as the Drell–
Yan process, this expectation is confirmed, as shown in Fig. 3.11. Comparisons of the
predictions of this method with LHC data representing more complicated final states
will be discussed in Chapter 9.
12 As such, this method bears an interesting similarity to multijet merging methods for matrix
elements and parton showers that will be discussed in Chapter 5. At present this connection has not
been fully explored.
where $z = m_H^2/\hat{s}$, so that $(1-z)$ is the variable that parameterizes the distance from
threshold. The results for the first term in the expansion of $\eta^{(3)}_{ij}(z)$ are presented in
Ref. [155], neglecting terms of order (1 − z). The same technique is of course also
applicable to the Drell–Yan process, where a similar level of approximation has also
already been applied [129]. The phenomenological impact of these results is ambiguous,
due to the fact that there can be substantial differences resulting from equivalent
parameterizations of the neglected terms. Nevertheless, the same methodology has very
recently been extended to compute the sub-leading terms to an arbitrary level [157].
This effectively results in a full N$^3$LO calculation, paving the way for a similar level
of precision for a range of inclusive cross-sections.
so that it does not diverge. As a result the eikonal approximation results in the EW
Sudakov factor,
\[
\hat{\sigma}_{\rm EW\ real} \sim \frac{\alpha_w}{4\pi}\,
\log^2\!\left(\frac{s}{m_W^2}\right)\hat{\sigma}_0 ,
\tag{3.230}
\]
where s is the hard scale at which the process is being probed, cf. the QCD equivalent
in Chapter 2. There is a corresponding virtual contribution, generated by one-loop
diagrams in which a W or Z boson is exchanged internally, and just as in the QCD
case this enters with the opposite sign,
\[
\hat{\sigma}_{\rm EW\ virtual} \sim -\frac{\alpha_w}{4\pi}\,
\log^2\!\left(\frac{s}{m_W^2}\right)\hat{\sigma}_0 .
\tag{3.231}
\]
However, there is a crucial difference between the QCD and electroweak cases. The Su-
dakov factor is associated with a combination of isospin generators and, for a fixed ini-
tial state, there is a mismatch when the forms appearing in Eq. (3.230) and Eq. (3.231)
are promoted to exact relations [866]. This violation of the Bloch–Nordsieck theorem is
due to electroweak symmetry breaking and the fact that the initial states present in a
given process are not averaged over, but weighted by different PDFs. In addition, and
perhaps even more importantly for the interpretation of collider data, it is of course
straightforward to isolate the effects of EW radiation in data — unlike the case of QCD
radiation. The lack of infrared divergences means that it is customary that events that
are identified as containing additional W or Z bosons form a separate data sample.
Of course, there are regions where W and Z radiation escapes detection and some
effect from these contributions can partially cancel the virtual corrections, depending
on the particular process and the experimental setup [206]. However, the net effect of
real EW radiation is typically rather small and therefore the virtual corrections have
been the subject of the most intense theoretical scrutiny.
Note that the form of the Sudakov correction in Eq. (3.231), that is enhanced by
two powers of the logarithm, multiplies the leading-order amplitude. This means that
it is easy to estimate the size of the leading relative EW corrections for the simplest
processes,
\[
\delta^{\rm EW} = \frac{\hat{\sigma}^{\rm EW\ virtual}}{\hat{\sigma}_0}
= -\frac{\alpha_w}{4\pi}\,({\rm constant})\,
\log^2\!\left(\frac{s}{m_W^2}\right) .
\tag{3.232}
\]
This approximation is rather crude since it assumes that all relevant kinematic scales
are large and can be approximated by a single value s, for instance for a 2 → 2 process
s ≈ |t| ≈ |u|. The constant that appears in Eq. (3.232) is a combination of isospin
Casimirs and depends on the identities of the particles participating in the reaction. It
can be written in terms of electroweak parameters such as the weak mixing angle and is
usually of order unity. To illustrate the size of the correction that can be anticipated,
Fig. 3.12 shows the value of $\delta^{\rm EW}$ obtained from Eq. (3.232) as a function of $\sqrt{s}$,
with the constant set to 1. In reality this constant varies for each sub-process and the
large negative contribution is partially mitigated by collinear (single) logarithms, but
Fig. 3.12 gives a good guide to the size of the corrections that can be expected. With
corrections of order 10% or larger for scales of 1 TeV and beyond, in regions of large
Fig. 3.13 Sample box diagrams entering the NLO EW corrections to the
process q q̄ → tt̄.
include a number of self-energy, vertex, and box diagram contributions. One of the
contributing box diagram contributions is shown in panel (a) of Fig. 3.13. It consists of
the tree-level O(αs ) process interfered with the O(αs αw ) one-loop correction involving
an internal Z-boson in the loop. However, at the same order one must also consider the
interference of the diagrams shown in panel (b): the tree-level O(αw ) process and the
one-loop O(αs2 ) box. In this way the weak and strong amplitudes become entangled
when computing EW corrections if a given final state may be obtained at tree-level
through both strong and weak interactions. In fact the inclusion of all the diagrams in
Fig. 3.13 results in an infrared-divergent contribution that must be cancelled by the
radiation of real gluons. These diagrams correspond to dressing the tree-level ones in
Fig. 3.13 with a gluon, to obtain $\mathcal{O}(\alpha_s\sqrt{\alpha_s})$ and $\mathcal{O}(\alpha_w\sqrt{\alpha_s})$ amplitudes, and perform-
ing the interference. Singularities can be extracted and cancelled against the virtual
contributions using the same methods, such as dipole subtraction, that are applied in
the pure QCD case [704].
Although the motivation for including EW corrections has been presented in terms
of Sudakov logarithms, for the simplest processes it is possible to compute the correc-
tions exactly, to include all sub-leading effects. In this way EW corrections have been
obtained and investigated for most 2 → 2 processes, including the production of dijets,
vector bosons and a jet, top pairs, and dibosons. A recent review [765] summarizes
the most important results and contains references to the original calculations. Alter-
natively, one can use the known factorization properties of the Sudakov logarithms to
obtain an approximate form of the EW corrections [461], a strategy that can be used
to provide the corrections for more complex final states [395].
3.5 Summary
This chapter has discussed in some detail the essential elements involved in making
perturbative predictions for hadron colliders. There has been rapid progress in this area
in the years leading up to data-taking at the LHC, as the theoretical predictions have
had to evolve to match the expected breadth and precision afforded by such a machine.
The latest tools are able to provide NLO corrections for configurations involving many
jets and NNLO predictions and beyond for the most important processes. In many
cases even the effect of electroweak corrections can be included.
Poised at the brink of yet another substantial jump in machine capability — not
only an increase in energy, but also an unprecedented amount of data resulting from
the increased luminosity — it is imperative that perturbative predictions continue to
improve at a similar rate. The preparation of the “Les Houches wishlist” [294] has
provided a forum for discussing the most useful calculations that could be performed,
in order to ensure that such progress is achieved. By now, all of the NLO perturbative
QCD calculations contained in the original list have been completed, primarily due
to the emergence of the unitarity techniques discussed in Section 3.3.1. The latest
iteration of the list [296] reflects the high-precision calculations that are expected
to be required during the lifetime of the LHC and typically demands the
inclusion of both NNLO QCD and NLO EW effects. As an example, Table 3.5 shows
the Higgs-related calculations for which a strong need is anticipated in the future.
For each calculation presented in the table, Ref. [296] discusses the motivation and
degree of need, which is particularly important given the challenging nature of many
of the demands. Note that such discussions are certainly not limited to the case of
Higgs boson cross-sections and Ref. [296] contains wishlists for other Standard Model
processes.
As an example of the motivation, consider the Higgs + 2 jets final state. This
channel is crucial in order to understand the Higgs boson coupling to vector bosons,
through the vector boson fusion (VBF) channel. For the VBF channel the NNLO QCD
corrections are known in a fully differential form, in the double-DIS approximation,
and EW effects are known to NLO. However, the search for this production mode
suffers from a background consisting of Higgs production through gluon fusion, when
two additional hard jets are also radiated. Currently this channel is known only to
NLO QCD in the infinite top mass approximation, and only to LO QCD when the
full dependence on the top quark mass is retained. If both the VBF and gluon fusion
Higgs + 2 jets cross-sections are known to NNLO QCD and NLO EW accuracy, then
with 300 fb$^{-1}$ of data it may be possible to measure the $HWW$ coupling strength to
the order of 5%.
Having discussed the frontier of fixed-order, parton-level treatments, the following
chapter will describe in more detail the application of these predictions to a wide
range of hadron collider processes. A variety of alternative approaches that go beyond
the ones presented so far will be discussed in Chapter 5. These represent all-orders
treatments that either address regions where the calculations presented here break
down, or which in addition allow predictions to be made at the hadron level for a
direct comparison with data.
Table 3.5 The “Les Houches wishlist” for processes involving the Higgs boson, taken from
Ref. [296].
Process      Comments

H            State of the Art:
               dσ @ NNLO QCD (expansion in 1/mt)
               full mt/mb dependence @ NLO QCD and @ NLO EW
               NNLO+PS, in the mt → ∞ limit
             Desired:
               dσ @ NNNLO QCD (infinite-mt limit)
               full mt/mb dependence @ NNLO QCD and @ NNLO QCD+EW
               NNLO+PS with finite top quark mass effects

H + j        State of the Art:
               dσ @ NNLO QCD (g only)
               and finite-quark-mass effects @ LO QCD and LO EW
             Desired:
               dσ @ NNLO QCD (infinite-mt limit)
               and finite-quark-mass effects @ NLO QCD and NLO EW

H + 2j       State of the Art:
               σtot(VBF) @ NNLO(DIS) QCD and dσ(VBF) @ NLO EW
               dσ(gg) @ NLO QCD (infinite-mt limit)
               and finite-quark-mass effects @ LO QCD
             Desired:
               dσ(VBF) @ NNLO QCD + NLO EW
               dσ(gg) @ NNLO QCD (infinite-mt limit)
               and finite-quark-mass effects @ NLO QCD and NLO EW

H + V        State of the Art:
               dσ @ NNLO QCD and dσ @ NLO EW
               σtot(gg) @ NLO QCD (infinite-mt limit)
             Desired:
               with H → bb̄ @ same accuracy
               dσ(gg) @ NLO QCD with full mt/mb dependence

tH and t̄H   State of the Art:
               dσ(stable top) @ LO QCD
             Desired:
               dσ(top decays) @ NLO QCD and NLO EW

tt̄H         State of the Art:
               dσ(stable tops) @ NLO QCD
             Desired:
               dσ(top decays) @ NLO QCD and NLO EW

gg → HH      State of the Art:
               dσ @ NLO QCD (leading mt dependence)
               dσ @ NNLO QCD (infinite-mt limit)
             Desired:
               dσ @ NLO QCD with full mt/mb dependence
4 QCD at Fixed Order: Processes
σ_{2-jet} = (1/(2s)) Σ_{a,b,c,d} ∫₀¹ (dxa/xa) (dxb/xb) f_{a/h1}(xa, µF) f_{b/h2}(xb, µF) ∫ dΦn |M_{ab→cd}|² ,   (4.1)
where the sum runs over all permissible combinations of quarks, anti-quarks, and glu-
ons. Four such parton-level processes that must be included at leading order in the
strong coupling, which shall be discussed in more detail shortly, are shown in Fig. 4.1.
These interactions should produce strongly interacting particles in opposite hemi-
spheres, with zero net transverse momentum. The rate for this process is extremely
high at the LHC, with cross-sections for typical jet cuts, pT (jet) > 50 GeV, in the
tens of microbarn range, cf. Fig. 4.2. The cross-section for two jets with transverse
momenta of 1 TeV or higher is still at the level of a nanobarn. Such energetic jets
are of course much easier to produce at a possible future hadron collider: 1 TeV jets
at a 100 TeV collider are just as common as 100 GeV jets at a 14 TeV LHC, with a
cross-section of a few microbarns.
This process represents an important probe of QCD in a number of ways. For
instance, by measuring cross-sections as a function of jet transverse momenta it is
possible to assess the running of the strong coupling. The large cross-section enables
measurements of this process to be made for a wide range of transverse momenta and
rapidities that, due to the very simple kinematics involved, can easily be translated
into information about the PDFs (as will be discussed further in Chapter 6). Besides
these areas where the two-jet process itself is the process of interest, jet production
can easily be a source of background events for many analyses, for instance when one
of the jets is misidentified as a lepton or a photon, or when mis-measurement of one
of the jets results in substantial missing transverse momentum. Since the cross-section
is so large, even a very small fake rate can lead to significant background rates. It is
therefore essential to understand this process in some detail.
In the simple picture outlined so far the theoretical definition of the dijet process is
clear, a 2 → 2 scattering involving only quarks and gluons. For the sake of illustration
consider the four sub-processes, the reactions qq′ → qq′, qg → qg, q q̄ → gg, and
gg → gg that are depicted in Fig. 4.1. The matrix elements for these processes, summed
over final-state colour and spins and averaged over the colour and spins in the initial
state, are,
|M_{qq′→qq′}|² = (1/(4N²)) m^(0)_{qq′→qq′} = (V/(2N²)) (ŝ² + û²)/t̂² ,   (4.2)

|M_{qg→qg}|² = (1/(4NV)) m^(0)_{qg→qg} = (1/(2N²)) [ −V/(ûŝ) + 2N²/t̂² ] (ŝ² + û²) ,   (4.3)

|M_{qq̄→gg}|² = (1/(4N²)) m^(0)_{qq̄→gg} = (V/(2N³)) [ V/(ût̂) − 2N²/ŝ² ] (t̂² + û²) ,   (4.4)

|M_{gg→gg}|² = (1/(4V²)) m^(0)_{gg→gg} = (2N²/V) [ 3 − ût̂/ŝ² − ŝû/t̂² − ŝt̂/û² ] ,   (4.5)
which also implicitly defines the non-averaged leading-order matrix elements m(0) . The
matrix elements are written in terms of the usual invariant quantities ŝ = (p1 + p2 )2 ,
t̂ = (p1 − p3 )2 and û = (p2 − p3 )2 . Note that, as is clear from Fig. 4.1, the diagrams
entering the calculations of the reactions qg → qg and q q̄ → gg are identical. The
processes differ only by the identities of the partons that are present in the initial state.
Therefore the matrix elements are expected to be related by a crossing symmetry,
in this case the exchange of p2 and −p3 , where the minus sign accounts for the fact
that the particles are exchanged between the initial and final states. By noting that
this corresponds to the exchange ŝ ↔ t̂, the crossing relation can be straightforwardly
verified by comparing Eqs. (4.3) and (4.4), up to an overall factor related to colour
averaging in the initial state and a minus sign from crossing.
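As a quick illustration of these expressions and of the crossing relation just described, the following short Python sketch (not part of the original text) evaluates Eqs. (4.2)–(4.4) at an arbitrary massless phase-space point, with the coupling factor g⁴ stripped off, and checks that the qg → qg and qq̄ → gg results are related by ŝ ↔ t̂ up to the ratio of initial-state averaging factors and a sign.

# Minimal numerical sketch (not from the text): evaluate the leading-order squared
# matrix elements of Eqs. (4.2)-(4.4), with the coupling factor g^4 stripped off,
# and check the crossing relation s <-> t between qg -> qg and q qbar -> gg.
N = 3               # number of colours
V = N**2 - 1        # V = N^2 - 1

def M2_qqprime(s, t, u):
    """qq' -> qq', Eq. (4.2)."""
    return V / (2 * N**2) * (s**2 + u**2) / t**2

def M2_qg(s, t, u):
    """qg -> qg, Eq. (4.3)."""
    return 1.0 / (2 * N**2) * (-V / (u * s) + 2 * N**2 / t**2) * (s**2 + u**2)

def M2_qqbar_gg(s, t, u):
    """q qbar -> gg, Eq. (4.4)."""
    return V / (2 * N**3) * (V / (u * t) - 2 * N**2 / s**2) * (t**2 + u**2)

s, t = 1.0, -0.3
u = -s - t          # massless 2 -> 2 kinematics: s + t + u = 0

print("qq'  -> qq' :", M2_qqprime(s, t, u))
print("qg   -> qg  :", M2_qg(s, t, u))
print("qqbar-> gg  :", M2_qqbar_gg(s, t, u))

# Crossing check: evaluating the q qbar -> gg expression at the crossed (s <-> t)
# point (unphysical for that channel, but the relation is purely algebraic) should
# reproduce qg -> qg up to the ratio of averaging factors and a minus sign.
print("ratio:", M2_qg(s, t, u) / M2_qqbar_gg(t, s, u), " expected -N/V =", -N / V)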
To turn these matrix elements into a cross-section they must be combined with
the appropriate two-particle phase space. As shown explicitly in Appendix A.3, the
phase space for two massless particles can be written directly in terms of the transverse
momentum and rapidity of one of them as
dΦ2 = ( p⊥ dp⊥ dη dφ / (2(2π)³) ) (2π) δ( (p1 + p2 − p3)² ) ,   (4.6)
where it is convenient to work in the lab frame, in which the four-momentum of the
first jet can be written as,
p3 = p⊥ (cosh η, cos φ, sin φ, sinh η)   (4.7)
(see Appendix A.3). After performing the trivial integration over φ and using the p⊥
integration to remove the δ function, the phase space takes the simple form,
dΦ2 = (1/(4π)) (p⊥²/ŝ) dη .   (4.8)
This is the phase-space element for a jet of a given transverse momentum (p⊥ ), with
the remaining degree of freedom parameterized by its rapidity (η). An even more useful
variable is one that reflects more directly the kinematic configuration of both jets. To
that end it is convenient to write the momentum of the other jet as
p4 = p⊥ (cosh η′, −cos φ, −sin φ, sinh η′)   (4.9)
and to introduce the variable
χ = exp |η − η′| .   (4.10)
Since χ depends only on the rapidity difference of the two jets, it takes the same value
in the laboratory and partonic centre-of-mass frames. In terms of the centre-of-mass
scattering angle θ,
t̂ = −(ŝ/2)(1 − cos θ) = −ŝ/(χ + 1) ,    û = −(ŝ/2)(1 + cos θ) = −ŝχ/(χ + 1) .   (4.11)
From these it is clear that χ and θ are related by
χ = (1 + cos θ)/(1 − cos θ) .   (4.12)
Performing the change of variables dη → dχ/χ in Eq. (4.8) yields a particularly simple
parameterization of the phase space,
dΦ2 = (1/(4π)) dχ/(χ + 1)² .   (4.13)
Finally, combining the final-state phase space in Eq. (4.13) with the matrix ele-
ments of Eqs. (4.2)–(4.5) recast in terms of χ yields the relevant quantities for each partonic
channel,

dΦ2 |M_{qq̄′→qq̄′}|² = (1/(4π)) (V/(2N²)) dχ [ 1 + (χ/(χ + 1))² ] ,   (4.14)

dΦ2 |M_{qg→qg}|² = (1/(4π)) (1/(2N²)) dχ ( V [ 1/(χ(χ + 1)) + χ/(χ + 1)³ ] + 2N² [ 1 + (χ/(χ + 1))² ] ) ,   (4.15)

dΦ2 |M_{qq̄→gg}|² = (1/(4π)) (V/(2N³)) dχ/(χ + 1)² [ V (χ + 1/χ) − 2N² (1 + χ²)/(χ + 1)² ] ,   (4.16)

dΦ2 |M_{gg→gg}|² = (1/(4π)) (2N²/V) dχ (1 + χ + χ²)³/( χ²(χ + 1)⁴ ) .   (4.17)
With the exception of the gg → gg reaction, these expressions depend rather weakly
on χ in the physical region, χ ≥ 1. As a result, a measurement of dσ/dχ for jet
production is quite insensitive to the details of the parton distribution functions that
have yet to be folded in. Indeed, this weak dependence on χ is none other than the
statement that these processes are dominated by the t-channel exchange of a spin-1
gluon, analogous to Rutherford scattering.
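To make the flatness quantitative, the short sketch below (an illustration added here, not from the original text) tabulates the quantity 4π dΦ2|M|²/dχ of Fig. 4.3, using the expressions of Eqs. (4.14)–(4.17) with the coupling factor stripped off, for a few values of χ.

# Illustrative sketch: tabulate 4*pi*dPhi2*|M|^2/dchi for the expressions of
# Eqs. (4.14)-(4.17) (coupling stripped off), to exhibit the weak chi dependence
# of all channels except gg -> gg.
N = 3
V = N**2 - 1

def qqbarp(chi):   # q qbar' -> q qbar', Eq. (4.14)
    return V / (2 * N**2) * (1 + (chi / (chi + 1))**2)

def qg(chi):       # qg -> qg, Eq. (4.15)
    return 1 / (2 * N**2) * (V * (1 / (chi * (chi + 1)) + chi / (chi + 1)**3)
                             + 2 * N**2 * (1 + (chi / (chi + 1))**2))

def qqbar_gg(chi): # q qbar -> gg, Eq. (4.16)
    return V / (2 * N**3) / (chi + 1)**2 * (V * (chi + 1 / chi)
                                            - 2 * N**2 * (1 + chi**2) / (chi + 1)**2)

def gg(chi):       # gg -> gg, Eq. (4.17)
    return 2 * N**2 / V * (1 + chi + chi**2)**3 / (chi**2 * (chi + 1)**4)

for chi in [1, 2, 5, 10, 20]:
    print(f"chi={chi:4d}  qqbar'={qqbarp(chi):6.3f}  qg={qg(chi):6.3f}  "
          f"qqbar->gg={qqbar_gg(chi):6.3f}  gg={gg(chi):6.3f}")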
Indeed, this observable can be used to search for evidence of contact interactions
that would indicate quark substructure. For instance, in the presence of an additional
4-quark contact term [394, 718],
L_contact = (2π/Λ²) (ψ̄L γ^µ ψL)(ψ̄L γµ ψL) ,   (4.18)
Fig. 4.3 The quantity 4π dΦ2 |Mab→cd |2 /dχ as a function of χ. This quan-
tity is plotted for qg → qg (upper dotted), q q̄ → q q̄ (dashed), q q̄ → gg
(lower dotted), gg → gg (dot-dashed), and q q̄ → q q̄ in the presence of a
contact interaction with ŝ = 0.5 TeV, Λ = 1 TeV, and αs = 0.1 (solid).
For a two-jet final state with transverse momentum p⊥ and rapidities η and η′, produced in
a collision involving two partons with momentum fractions x1 and x2 of the parent hadrons,
momentum conservation yields the relations,
(√s/2)(x1 + x2) = p⊥ (cosh η + cosh η′) ,   (4.20)
(√s/2)(x1 − x2) = p⊥ (sinh η + sinh η′) .   (4.21)
These equations are easily solved for x1 and x2,
x1 = (p⊥/√s)(e^η + e^η′) ,    x2 = (p⊥/√s)(e^−η + e^−η′) .   (4.22)
Hence different ranges of transverse momenta and rapidities represent probes of par-
ticular regions of x1 and x2 . In order to probe large momentum fractions one can
investigate the behaviour of jets with large pT or η. Since a jet has a certain minimum
pT , in order to access small momentum fractions it is most useful to examine jets
of moderate transverse momenta but that lie at large rapidities. These features are
exploited in global fits of PDFs to hadron collider jet data, as will be discussed further
in Chapter 6.
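A minimal sketch of Eq. (4.22), added here for illustration: it returns the momentum fractions probed by a dijet configuration of given transverse momentum and rapidities. The collider energy of √s = 13 TeV is an assumption made only for this example.

# Sketch of Eq. (4.22): momentum fractions probed by a dijet event with
# transverse momentum pt and rapidities eta, etap (illustrative values only).
import math

def momentum_fractions(pt, eta, etap, sqrt_s):
    x1 = pt / sqrt_s * (math.exp(eta) + math.exp(etap))
    x2 = pt / sqrt_s * (math.exp(-eta) + math.exp(-etap))
    return x1, x2

sqrt_s = 13000.0  # GeV, assumed collider energy for illustration
for pt, eta, etap in [(50.0, 0.0, 0.0), (50.0, 3.0, 3.5), (1000.0, 0.0, 0.5)]:
    x1, x2 = momentum_fractions(pt, eta, etap, sqrt_s)
    print(f"pt={pt:6.0f} GeV  eta={eta:4.1f} eta'={etap:4.1f}  "
          f"x1={x1:.4f}  x2={x2:.4f}")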
s2 − u2 2 2 2 2
−4V CF 2π + 2l (t) + l (s) + l (u) − 2l(s)l(t) − 2l(t)l(u)
t2
s2 − u2
2 2 2 2
+Nc V 3π + 3l (t) + 2l (s) + l (u) − 4l(s)l(t) − 2l(t)l(u)
t2
u − s
+4V CF l(u) − l(s) − 2l(t) − l(s) − l(u)
t )
s u
+Nc − l(t) − l(u) + l(t) − l(s) . (4.24)
2t t
This equation is written in terms of the colour factor V = Nc2 − 1 and the logarithmic
function,
l(x) = log( −x/Q² ) ,   (4.25)
where Q2 > 0 is an arbitrary momentum scale. Note that this means the function
develops an imaginary part for x > 0. Since only the real part should be kept in
Eq. (4.24), this only results in contributions from terms of the form,
l²(t) = log²( −t/Q² ) → log²( t/Q² ) − π²   if t > 0 .   (4.26)
The result in Eq. (4.24) also demonstrates that, unlike the simplest Drell–Yan case
considered in Chapter 3, in general the virtual matrix element exhibits a rich kinematic
structure. Only the singular terms are proportional to the lowest-order matrix element,
while the finite remainder depends on s, t, and u in a more complicated way.
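The logarithm l(x) and the prescription for its real part are simple to encode. The following sketch assumes Q² = 1 and one particular ±iπ branch for x > 0; the real part of l²(x) does not depend on that choice, cf. Eq. (4.26).

# Sketch of the logarithm l(x) of Eq. (4.25) and of the real part kept in Eq. (4.26).
# Assumptions: Q^2 = 1 (any positive scale) and the branch log(-x) = log(x) - i*pi
# for x > 0.
import math

Q2 = 1.0

def l(x):
    """l(x) = log(-x/Q^2), as a complex number."""
    if x < 0:
        return complex(math.log(-x / Q2), 0.0)
    return complex(math.log(x / Q2), -math.pi)

def re_l2(x):
    """Real part of l(x)^2."""
    return (l(x) ** 2).real

t = 2.5   # an invariant with t > 0
print("Re l^2(t)          :", re_l2(t))
print("log^2(t/Q^2) - pi^2:", math.log(t / Q2) ** 2 - math.pi ** 2)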
The other partonic contributions can be written in a similar fashion. The virtual
contribution to the process q q̄ → gg is,
("
(v) 2 3 2 11 11 2
mqq̄→gg = cΓ CF − 2 − − 7 + Nc − 2 − + l(−µ )
ε ε ε 3ε 3
#
4 4 (0)
+nf TR − l(−µ2 ) mqq̄→gg
3ε 3
l(s) 2 2V t2 + u2 2
2t + u
2
+ 2Nc V + 2 − 4V
ε Nc ut s2
2
2
4N V u 2u t 2t2
+ c l(t) − 2 + l(u) − 2
ε t s u s
)
4V u t
− + (l(t) + l(u)) + f1 (s, t, u) + f1 (s, u, t), (4.27)
ε t u
which makes explicit the form of the singular terms. The auxiliary function f1 contains
only finite contributions and is defined by
(
f1 (s, t, u) = 4Nc V
l(t)l(u) t2 + u2 1 s2 1 1 t2 + u2 t2 + u2 Nc t2 + u2
+ l2 (s) + + − −
Nc 2tu 4N 3 tu 4Nc 2 tu s2 4 s2
2
5 V 1 1 1 t + u2 V t 2 + u2
+l(s) − − 3 − Nc + 3 −
8 Nc 2Nc N Nc 2tu 4Nc s2
2 c2 2
2 1 1 3(t + u ) 1 t + u2 t2 + u2
+π + 3 + + Nc −
8Nc Nc 8tu 2 8tu 2s2
2 2
1 1 t +u
+ Nc + −
Nc 8 4s2
2 s u 1 1 t u 1 u s
+l (t) Nc − − + − + 3 −
4t s 4 Nc 2u 4s Nc 4t 2u
2
t + u2 3t 5u 1 1 u 2s s 1 3s 1
+l(t) Nc + − − − + + − 3 +
s2 4s 4t 4 Nc 4s u 2t Nc 4t 4
2 )
t + u2 u 1 u t 1 s u
+l(s)l(t) Nc 2
− + − + 3 − . (4.28)
s 2t Nc 2s u Nc u 2t
In this case it is clear that the singular contributions are not all proportional to the
leading-order matrix element, m^(0)_{qq̄→gg}. Only the soft divergence, represented by the
1/ε² term, multiplies this factor. The collinear divergence, given by terms proportional
to 1/ε, is more complicated due to the fact that the collinear factorization of the matrix
elements is modified by colour connections between the gluons. This is a general feature
of calculations for more complex final states.
Finally, the result for the all-gluon process gg → gg is
m^(v)_{gg→gg} = cΓ { [ −4Nc/ε² − 22Nc/(3ε) + 8nf TR/(3ε) − 67Nc/9 + 20nf TR/9 + Nc π²
        + (11Nc/3) l(−µ²) − (4nf TR/3) l(−µ²) ] m^(0)_{gg→gg}
    + (16V Nc³/ε) [ l(s) ( 3 − 2tu/s² + (t⁴ + u⁴)/(t²u²) )
        + l(t) ( 3 − 2us/t² + (u⁴ + s⁴)/(u²s²) )
        + l(u) ( 3 − 2st/u² + (s⁴ + t⁴)/(s²t²) ) ] }
    + 4V Nc² [ f2(s, t, u) + f2(t, u, s) + f2(u, s, t) ]   (4.29)
where f2 is defined by
(
2(t2 + u2 ) 2 4s(t3 + u3 )
f2 (s, t, u) = Nc l (s) + − 6 l(t)l(u)
tu t2 u2
2 )
4 tu 14 t2 + u2 t u2 2
+ − − 14 − 8 + 2 l(s) − 1 − π
3 s2 3 tu u2 t
(
10 t2 + u2 16 tu s2 + tu 2
+nf TR + − 2 l(s) − l (s)
3 tu 3 s2 tu
)
2(t2 + u2 ) 2
− l(t)l(u) + 2 − π . (4.30)
tu
In all of these virtual matrix elements one can also see the emergence of the loga-
rithms of the renormalization scale, µ, that were previously highlighted in Eq. (2.123).
Inspecting Eqs. (4.24), (4.27), and (4.30) one can see that they all contain a term
proportional to the appropriate leading-order matrix element multiplied by the factor
( 11Nc/3 − (4/3) nf TR ) l(−µ²) = β0 l(−µ²) .   (4.31)
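For reference, a minimal sketch of this coefficient and of the one-loop running that the scale logarithm resums; the convention dαs/d ln µ² = −β0 αs²/(4π), the choice nf = 5, and the reference value αs(mZ) = 0.118 are assumptions made for illustration only.

# Minimal sketch: the coefficient beta_0 of Eq. (4.31) and, in the convention
# d alpha_s / d ln(mu^2) = -beta_0 alpha_s^2 / (4 pi), the one-loop running it
# generates.  Assumed inputs: N_c = 3, T_R = 1/2, n_f = 5, alpha_s(mZ) = 0.118.
import math

Nc, TR, nf = 3, 0.5, 5
beta0 = 11 * Nc / 3 - 4 * nf * TR / 3
print("beta_0 =", beta0)          # 23/3 for nf = 5

def alpha_s(mu, mu0=91.1876, a0=0.118):
    """One-loop running coupling, evolved from alpha_s(mu0) = a0."""
    return a0 / (1 + a0 * beta0 / (4 * math.pi) * math.log(mu**2 / mu0**2))

for mu in [10.0, 91.1876, 500.0, 2000.0]:
    print(f"alpha_s({mu:7.1f} GeV) = {alpha_s(mu):.4f}")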
Fig. 4.4 The scale dependence for the inclusive jet cross-section using
√
the anti-k⊥ jet algorithm with R = 0.4 at s = 7 TeV. Jets satisfy
60 < pT < 80 GeV and lie in the rapidity range 0 < y < 0.3. Cross-sections
have been computed with NLOJET++ [777] interfaced with Applgrid [329]
and are normalized to the prediction at scales of µR = µF = 2.5pT .
Since the prediction contains logarithms of the form log(µR/µF) it is prudent to vary both
scales in a similar range, so that the surface extends as far as pjT/5 < µR, µF < 5pjT. The
two-dimensional equivalent of the NLO scale dependence observed, for instance in Fig. 2.22,
is a saddle shape. In this particular example the peak cross-section is at the saddle point,
which corresponds to a scale of approximately µR = µF = pjT.
A simpler way to see the saddle point, and the region of mild dependence on the
scales around it, is to examine a contour plot such as the one shown in Fig. 4.5 (left).
From this plot it is clear that the cross-section depends much less on the factorization
scale than on the renormalization scale. For a much higher jet transverse momentum
range the corresponding plot is qualitatively different, cf. Fig. 4.5 (right). The plot
appears to have been rotated by an angle of −45° with respect to the vertical, such
that the saddle point is at somewhat smaller scales and a scale choice of pjT no longer
corresponds to the peak cross-section.
This behaviour can be understood as follows. This process is dominated by jets
produced in the central rapidity region, thus probing partonic momentum fractions x ∼
2pjT /(7000 GeV), cf. Eq. (4.22). Hence, for low jet transverse momenta x ∼ 0.02 and the
dominant sub-process is gg → gg. At this x value, there is very little µF -dependence
in the gluon distribution, as will be discussed in Chapter 6. This is the behaviour
observed in Fig. 4.5 (left), where the jet cross-section is relatively independent of the
factorization scale. In contrast, the higher pT region shown in Fig. 4.5 (right) probes
much larger parton x values where sub-processes such as gq → gq and qq → qq are
also important. In this region both the quark and gluon distributions depend more
strongly on µF , leading to the “rotation” noted above.
The same analysis can be performed in different kinematic ranges, for instance other
Fig. 4.5 Left: scale-dependence contours for the NLO inclusive jet cross–
section, as defined in Fig. 4.4. Right: the same contours for higher jet
transverse momenta, in the range 1200 < pT < 1500 GeV.
Fig. 4.6 Scale-dependence contours for the NLO inclusive jet cross-section
using the anti-k⊥ jet algorithm with R = 0.4 (left) and R = 0.6 (right). Jets
satisfy 1200 < pT < 1500 GeV and lie in the rapidity range 1.2 < y < 2.1.
Cross-sections have been normalized to the prediction at µR = µF = 2.5pT .
regions of jet rapidity and jet separation. Fig. 4.6 (left) shows the contours of scale
dependence in the same high transverse momentum range but now also corresponding
to much larger rapidities. Again, the rotation has taken place but the saddle point is
once more at scales near µR = µF = pT . Fig. 4.6 (right), depicts the scale-dependence
contours when the jet size is increased from 0.4 to 0.6. The peak cross-section, the
saddle point, corresponds to smaller values for both scales as the jet radius increases.
Note that the standard scale uncertainty analysis corresponds to a one-dimensional
projection of the contour plot along the diagonal µR = µF . In all these cases this would
result in a curve with a maximum at or near the saddle point. However, as demon-
strated above, the exact form of the scale dependence clearly depends on the kinematic
region being studied and such a similarity between the one- and two-dimensional anal-
yses is not guaranteed. Such considerations need to be kept in mind, not only for inclu-
sive jet production, but more generally for all theoretical predictions for cross-sections
at the LHC.
Fig. 4.7 Scale dependence of the inclusive jet cross-section at the 7 TeV
LHC, computed at LO, NLO, and NNLO, for the central scale choices
µR = µF = pjT (left) and µR = µF = pjT1 (right). The calculation uses
the anti-kT algorithm with R = 0.4 and jets are defined by |y| < 0.5 and
100 < pT < 116 GeV [422]. Reprinted with permission from the authors.
This discussion should of course be extended to even higher orders, when such the-
oretical predictions are available. Very recently, pioneering calculations have extended
the accuracy of inclusive jet production cross-sections to NNLO [335, 422, 566], as
shown in Fig. 4.7. The inclusive jet cross-section is shown for a particular slice of
jet transverse momenta at the 7 TeV LHC, with the central scale given by either pjT
or the pT of the leading jet, pjT1 . The behaviour when moving from LO to NLO to
NNLO clearly depends quite strongly on the choice of central scale. For instance, the
improvement in the scale dependence at each order that is observed for µ = pjT1 is not
observed for µ = pjT ; for a detailed discussion of this and other related subtleties, see
Ref. [422].
Fig. 4.8 The fraction of energy for a jet of radius R = 1.0 that is inside a
radius r. Data from the CDF experiment for jets of 100 GeV is compared to
fixed order theoretical predictions at NLO. The curves correspond to dif-
ferent values of the renormalization and factorization scales and/or values
of the parameter Rsep . Reprinted with permission from Ref. [510].
and are absent for example for the anti-kT jet algorithm. The main point is that jet
shapes can be reasonably well described using only the one extra gluon present in a
NLO calculation.
Jet shapes will be revisited in the context of experimental data from the TEVATRON
in Section 8.3. With the increasing precision of parton shower Monte Carlos, it has
become much more common to compare experimental jet shape measurements with
the predictions of those Monte Carlo programs (where the jet shape is described by
the effects of multiple gluon emissions, as well as non-perturbative effects).
4.1.5 Multijets
Final states containing more than two jets are also of considerable interest at hadron
colliders. Foremost, their study constitutes an essential test of the theory of strong
interactions and our ability to describe jets of hadrons using the partonic description.
In addition, the production of multijets is a considerable background in many searches
for new physics, typically when one or more of the jets fakes either a lepton or a
photon.
From a theoretical point of view the description of multijet states becomes sig-
nificantly more complicated than the dijet case. The number of Feynman diagrams
contributing to a given n-jet calculation increases more than factorially with n. In
addition, as n grows so does the complexity of the colour configurations that must be
included in the calculation of the scattering amplitudes. The use of recursion relations,
as described in Section 3.2.1, has been instrumental in taming the factorial growth and
enabling the calculation of leading-order predictions for n > 4 [220]. These recursive
techniques, coupled with the D-dimensional numerical unitarity methods mentioned
earlier, have led to algorithms capable of computing one-loop virtual corrections for
essentially arbitrary multijet processes [181, 575]. In view of the factorial growth in
the number of diagrams it is especially interesting to observe the scaling behaviour of
the computational time using such techniques. The NJET code [181] scales only as a
power of n, the number of external partons in the scattering amplitude. Configurations
that involve only gluons are most time-consuming to evaluate and scale as approxi-
mately n5 . As pairs of gluons are replaced by a quark–anti-quark pair the computation
becomes less complicated and thus faster, although the scaling behaviour is harsher.
Ref. [181] indicates that a single such computation for n = 13, corresponding to 11-jet
production, may be completed in less than a second. This is a testament to the power
of modern numerical unitarity approaches.
The combination of these ingredients into full NLO calculations of multijet pro-
duction is an arduous task, due to the high number of partonic channels and their
associated infrared singularities. The three-jet case was first available in the NLOJET
code [777] in 1997, while the four-jet calculation has only more recently been completed
by two independent groups [180, 231]. Results for up to five jets have been presented
in Ref. [183], allowing for a comprehensive study of multijet production over a large
kinematic range. Fig. 4.9 shows the NLO predictions for the n-jet cross-sections for
n = 2, 3, 4, 5 at the √s = 7 TeV LHC, computed using NJET virtual matrix elements
and SHERPA for the assembly into a full NLO prediction. The predictions use a central
scale of HT /2, which is found to be a scale at which the NLO corrections are small and
the scale dependence flat. For n = 3, 4, 5 jets the agreement between the theoretical
prediction and the data is excellent, with differences at the level of 30%. Given the
complexity of the QCD interactions being described, this agreement is remarkable.
Moreover, as can be seen in the figure, the difficulty of experimentally measuring such
cross-sections means that the theoretical uncertainties for n > 3 are smaller than the
experimental ones. The situation is different in the two-jet bin, where the perturbative
expansion is not under such good control. There the effect of the NLO corrections is
very large under this set of cuts and the NNLO corrections should be important. Note
that the cross-section falls by almost the same factor, about ten, when another jet is
included in the final state. This is an example of Berends scaling, that was originally
observed in vector boson+jet production [224]. This observation motivates the study
of the quantity,
Rn = σ(n + 1 jet)/σ(n jet)   (4.32)
which is especially interesting since various sources of theoretical and experimental
uncertainty may cancel in such a ratio. Since Rn is proportional to αs at first order,
this allows determinations of the strong coupling from measurements of jet rates at
hadron colliders, in a similar manner to previous determinations at LEP [174].
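A trivial sketch of how Rn is formed from measured or computed rates follows; the cross-section values are invented placeholders, chosen only to mimic the roughly factor-of-ten drop per additional jet described above.

# Illustrative sketch of the jet-rate ratio R_n of Eq. (4.32).  The cross-section
# values are hypothetical placeholders, not measured or computed results.
sigma = {2: 1.2e3, 3: 1.3e2, 4: 1.4e1, 5: 1.5e0}   # hypothetical n-jet rates (nb)

for n in sorted(sigma):
    if n + 1 in sigma:
        print(f"R_{n} = sigma({n + 1} jet) / sigma({n} jet) = {sigma[n + 1] / sigma[n]:.3f}")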
Although no results have been presented for more than five jets, it is clear from
the discussion above that the virtual matrix elements are readily available in the NJET
code and could be utilized in the future. However, due to the time taken to evaluate
both the virtual matrix elements and the real corrections, and also because of its
limited phenomenological interest, NLO predictions for more than five jets may not
be available for some time.
where the amplitude has been expressed in terms of subamplitudes that are propor-
tional to products of colour matrices in two different orders. This decomposition is
made possible by the relationship between the QCD structure constants that appear
in the diagram containing the triple-gluon vertex and the commutator of two
colour matrices,
[T^{a3}, T^{a4}]_{i1i2} = i f^{a3a4b} T^b_{i1i2} .   (4.35)
The subamplitudes that appear in Eq. (4.34) are simple examples of colour-ordered
amplitudes.
The helicity choice under consideration is a maximal helicity violating (MHV)
one, with two partons of a given helicity and the remainder of the opposite helicity.
As a result the corresponding colour-ordered helicity amplitudes are given by simple
expressions,
M(q̄1^+, q2^−, g3^−, g4^+) = ⟨1 3⟩⟨2 3⟩³/( ⟨1 2⟩⟨2 3⟩⟨3 4⟩⟨4 1⟩ ) ,   (4.36)
M(q̄1^+, q2^−, g4^+, g3^−) = −⟨1 3⟩⟨2 3⟩³/( ⟨1 2⟩⟨2 4⟩⟨3 4⟩⟨3 1⟩ ) .   (4.37)
In these two amplitudes the colour-ordering of the gluons is apparent from the de-
nominators: ⟨2 3⟩⟨4 1⟩ and ⟨2 4⟩⟨3 1⟩ respectively. In addition, the denominator factors
⟨1 2⟩ and ⟨3 4⟩ are signatures of the presence of the diagram containing a triple-gluon
vertex.
The corresponding amplitude for the process in which one of the gluons is replaced
by a photon,
0 → q̄^+(p1) + q^−(p2) + γ^−(p3) + g^+(p4) ,   (4.38)
receives contributions from similar diagrams, and apart from the overall coupling factor
that is different, the coupling of a photon to quarks does not introduce a colour matrix.
Therefore the factor T a3 in Eq. (4.34) can be replaced by an identity matrix in colour
space and the overall coupling changed to yield,
𝓜(q̄1^+, q2^−, γ3^−, g4^+) = ieQq g (T^{a4})_{i1i2} [ M(q̄1^+, q2^−, g3^−, g4^+) + M(q̄1^+, q2^−, g4^+, g3^−) ]
                          ≡ ieQq g (T^{a4})_{i1i2} M(q̄1^+, q2^−, γ3^−, g4^+) ,   (4.39)
where Qq is the electric charge of the quark, in units of e. The photon amplitude is
thus obtained by a sum over both colour orderings of the gluon amplitude. One might
worry about the presence of the triple-gluon diagram in the original result, which
should not be present for the photon. However, since this diagram enters each colour
subamplitude with opposite sign, cf. Eq. (4.35), the sum in Eq. (4.39) ensures that
this diagram does not contribute to the photon amplitude. Explicitly, the result is,
M(q̄1^+, q2^−, γ3^−, g4^+) = ⟨1 3⟩⟨2 3⟩³ ( ⟨2 4⟩⟨3 1⟩ − ⟨2 3⟩⟨4 1⟩ )/( ⟨1 2⟩⟨2 3⟩⟨2 4⟩⟨3 4⟩⟨3 1⟩⟨4 1⟩ )
                          = ⟨1 3⟩⟨2 3⟩³/( ⟨2 3⟩⟨2 4⟩⟨3 1⟩⟨4 1⟩ ) ,   (4.40)
where in the final line the denominators h1 2i and h3 4i cancel after using the Schouten
identity (cf. Section 3.2.1) to simplify the numerator, as expected. It is also clear from
Eq. (4.39) that the two-photon amplitude with the replacement g + (p4 ) → γ + (p4 ),
must be identical up to an overall coupling and colour factor.
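These manipulations are easy to verify numerically. The sketch below (added for illustration) builds angle brackets from randomly chosen two-component spinors, since both the Schouten identity and the relation between Eqs. (4.36)–(4.37) and Eq. (4.40) are purely algebraic in the brackets; the overall sign depends on conventions and is simply printed rather than assumed.

# Numerical sketch (not from the text): check the Schouten identity and the
# relation between the sum of colour orderings, Eqs. (4.36)+(4.37), and the
# compact photon amplitude of Eq. (4.40).  Random two-component spinors suffice,
# and overall sign conventions may differ from the book's.
import random

random.seed(1)
lam = {i: (complex(random.gauss(0, 1), random.gauss(0, 1)),
           complex(random.gauss(0, 1), random.gauss(0, 1))) for i in range(1, 5)}

def ang(i, j):
    """Angle bracket <i j> as the antisymmetric product of two-component spinors."""
    a, b = lam[i], lam[j]
    return a[0] * b[1] - a[1] * b[0]

# Schouten identity: <1 2><3 4> + <1 3><4 2> + <1 4><2 3> = 0
schouten = ang(1, 2) * ang(3, 4) + ang(1, 3) * ang(4, 2) + ang(1, 4) * ang(2, 3)
print("Schouten identity (should vanish):", abs(schouten))

# Colour-ordered gluon amplitudes, Eqs. (4.36) and (4.37)
M_34 = ang(1, 3) * ang(2, 3) ** 3 / (ang(1, 2) * ang(2, 3) * ang(3, 4) * ang(4, 1))
M_43 = -ang(1, 3) * ang(2, 3) ** 3 / (ang(1, 2) * ang(2, 4) * ang(3, 4) * ang(3, 1))

# Compact photon amplitude, Eq. (4.40)
M_gamma = ang(1, 3) * ang(2, 3) ** 3 / (ang(2, 3) * ang(2, 4) * ang(3, 1) * ang(4, 1))

print("ratio (M_34 + M_43) / M_gamma:", (M_34 + M_43) / M_gamma)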
With such rules it is straightforward to compute amplitudes for processes involv-
ing photons from results obtained in pure QCD [455]. An amplitude for a process
containing a photon can be simply obtained from a colour-ordered gluon amplitude
by appropriate symmetrization. Thus, once care has been taken in order to define the
photon appropriately according to the requirements in Section 2.1.6, the calculation of
an m photon plus n-jet final state is straightforward once the (m + n)-jet calculation
is at hand.
collinear. This singularity can be absorbed into the definition of the bare function
representing a quark fragmenting into a photon. The remaining finite fragmentation
function Di→γ is thus dependent on the physical scale at which this separation is
performed, the fragmentation scale MF . The inclusion of such fragmentation con-
tributions, dependent on Dq→γ (MF ), is indicated in Fig. 4.11 (right). Just like the
parton distribution functions, the fragmentation functions are non-perturbative quan-
tities that must be experimentally determined but whose evolution is governed by
perturbative QCD.
Schematically, a differential cross-section for photon production can thus be written
as the sum of two components,
dσ = dσ_{γ+X}(MF) + Σ_i dσ_{i+X} ⊗ D_{i→γ}(MF) .   (4.41)
The direct, or prompt component, is represented by the first term and the fragmenta-
tion contribution by the second. The sum runs over all partons, with σi+X the inclusive
differential cross-section for the production of parton i. As is clear from the equation,
this separation is well-defined only at a given value of the fragmentation scale MF
(and is also scheme dependent). Note that in the case of the Frixione isolation crite-
rion discussed in Section 2.1.6, the isolation constraint removes the collinear singularity
present in Fig. 4.11 (left). As a result there is no need to introduce the concept of a
fragmentation function in this approach. From a theoretical standpoint this is very
attractive since the calculation becomes more straightforward and amenable to the
usual techniques of pure QCD.
The direct photon process can also be used as an effective probe of the parton
distribution functions, in particular of the gluon distribution. Indeed, until 1999 direct
photon data were routinely used in the global extraction of PDFs. Since DIS measure-
ments are not directly sensitive to the gluon PDF this was a useful complement to
the wealth of available HERA data. However, the inability of NLO predictions of the
time to accommodate data from some fixed target and collider experiments led to the
abandonment of this approach. TEVATRON collider photon data at moderate to high
pT do have the potential to provide information on the PDFs, but the dominant sub-
process at a pp̄ collider of this energy is q q̄ → γg. Since the quark distributions are well
constrained in this region already, little useful additional information is provided. This
is not true at the LHC, where the availability of a wealth of data has led to renewed
interest in constraining the gluon PDF with photon measurements [462]. Fig. 4.12,
taken from this reference, shows the size of various contributions to the direct photon
cross-section at the LHC, at central rapidity (y = 0) and as a function of the photon
transverse momentum. As expected, the “annihilation” contribution — resulting from
diagrams such as Fig. 4.10 (right) and its corresponding NLO corrections — is much
smaller than the “Compton” process (Fig. 4.10 (left)). For inclusive direct photon
production, Fig. 4.12 (left), the fragmentation component is large, even for very high
ET photons. However after application of typical experimental isolation cuts the less
well-known fragmentation contribution is much reduced, as shown in Fig. 4.12 (right).
As a result the direct photon process can be used to probe the gluon PDF at the LHC,
particularly now that the theoretical prediction is known to NNLO [316].
q + q̄ → γγ. (4.42)
where M_n^{(l)} represents the corresponding l-loop amplitude. Thus M_n^{(0)} simply rep-
resents a tree-level calculation, with M_0^{(0)} just the diphoton production process in
Eq. (4.42). It is then a straightforward exercise to list all the contributions that en-
ter the calculation of the diphoton cross-section at a given order, remembering that
real-emission contributions (i.e. n > 0 in Eq. (4.43)) must also contribute. An explicit
list, up to NNLO, is given in the first column of Table 4.1. The corresponding in-
gredients for two other observables, the transverse momentum of the diphoton system
(p⊥^{γγ}) and the cross-section for diphoton production in association with two jets widely
separated in rapidity (σ_{jet−gap}), are also shown. The table makes clear the order at
which each observable can be calculated, in the sense of Section 3.1. For instance, a
NNLO calculation of the total cross-section also contains a NLO calculation of pT^{γγ}
and a LO one of σ_{jet−gap}. However the converse is not true — a NLO calculation of
pT^{γγ} is not equivalent to a NNLO calculation of σtot since, for example, it does not
contain genuine two-loop corrections originating from M_0^{(2)}.
The NLO QCD corrections to the total cross-section are included in the DIPHOX
Monte Carlo [254]. This calculation includes fragmentation contributions and thus al-
lows for a traditional implementation of isolation. The gluon PDF enhancement at
small momentum fraction x leads to a large flux of gluons at high energy, in par-
ticular at current LHC energies. Therefore, although diphoton production does not
involve gluons at leading order, it can receive significant corrections from such con-
tributions at higher orders. Although formally suppressed by additional powers of
the strong coupling, such contributions are numerically important because of the size
of the gluon PDF. One class of diagrams that contributes to σtot at NNLO, those
involving loops of quarks as shown in Fig. 4.13, can give a significant additional con-
tribution [152, 463, 775]. As indicated in Table 4.1, such diagrams enter the calcula-
tion as |M_0^{(1)}|². However, they are particularly interesting since their contribution is
Fig. 4.14 The azimuthal angle between the two photons, ∆φγγ at NLO
(lower histogram) and NNLO (upper histogram) [313]. The data points are
as observed by the CMS collaboration [378]. The height of the histogram
bins indicates the scale uncertainty. The lower panel shows the ratio of the
data to the NNLO prediction. Reprinted with permission from Ref. [378].
very large. Moreover, the prediction exhibits an interesting cusp in the region of small
δ. This is indicative of the presence of terms proportional to δ log δ resulting from the
emission of soft gluons [545]. In order to provide a well-controlled prediction for the
diphoton cross-section in this case, when the staggered cuts are very close together,
some form of resummation of these logarithms should be performed, cf. Chapter 5.
However, it is worth noting that away from the threshold region, for instance in the
region of the Higgs signal where mγγ ∼ 125 GeV, the reliability of the calculation is
not spoiled by such logarithms.
By now, NLO calculations of the diphoton process in association with up to three
jets are available [185], using the Frixione isolation criterion. On the one hand these
predictions can be used to provide further tests of QCD and the understanding of pho-
ton production. For instance, just as in the case of pure-jet production (cf. Section 4.1),
one can now construct NLO predictions for cross-section ratios that are sensitive to
the strong coupling. In addition, such processes are important backgrounds for Higgs
production, particularly in the weak boson fusion channel, when the Higgs boson sub-
sequently decays to photons.
known SM processes; for example, the production of top quarks and the investigation
of the newly discovered Higgs boson. Probing the properties of these known particles
requires a good understanding of these high-rate backgrounds. The fact that the rates
for the production of a vector boson in association with many jets are often significant
in many LHC analyses can be appreciated by inspecting Fig. 4.16. At current LHC
operating energies the cross-section for producing a vector boson in association with a
single 40 GeV jet is a few picobarns. For each additional jet the cross-section falls by
a factor of about three or four, a further example of the approximate Berends scaling
discussed in Section 4.1.
where all particles are outgoing and their helicities are indicated by the superscripts.
Following Ref. [234], the leading-order amplitude for this helicity configuration is given
by
A^LO = 2e²g T^{a2}_{i1i3} A^tree   (4.46)
where a2 , i1 , and i3 are the colour indices of the gluon, quark, and anti-quark respec-
tively, and the basic function is
A^tree = −i ⟨3 4⟩²/( ⟨1 2⟩⟨2 3⟩⟨4 5⟩ ) .   (4.47)
which is written in terms of poles that are proportional to the tree-level amplitude and
a finite remainder. The one-loop factor cΓ has previously been defined in Eq. (3.62).
The remainder introduces a dependence on additional functions that are defined by
L0(x) = ln(x)/(1 − x) ,    L1(x) = (L0(x) + 1)/(1 − x) ,
Ls−1(x, y) = Li2(1 − x) + Li2(1 − y) + ln x ln y − π²/6 ,   (4.50)
where the dilogarithm function Li2 (x) is defined in Eq. (A.9). The dilogarithm is
ubiquitous in general one-loop amplitudes since it naturally appears in the analytic
expression for scalar box integrals. Indeed, the function Ls−1 is simply related to
a scalar box integral evaluated in six space-time dimensions, which has neither an
infrared nor an ultraviolet divergence. Note that the functions L0 (x) and L1 (x) have the
property that they are finite in the limit x → 1, which reflects a useful reorganization
After summing over all helicity combinations and crossing to account for each partonic
configuration, these formulae are sufficient to construct the complete virtual matrix
elements for a generic V + 1 jet process.
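The auxiliary functions of Eq. (4.50) are straightforward to implement. The sketch below uses SciPy's spence function for the dilogarithm (in SciPy's convention Li2(x) = spence(1 − x)) and assumes real arguments 0 < x, y < 1 so that no branch-cut issues arise.

# Sketch of the one-loop auxiliary functions of Eq. (4.50).  Assumptions: real
# arguments with 0 < x, y < 1, and SciPy's convention Li2(x) = spence(1 - x).
import math
from scipy.special import spence

def Li2(x):
    return spence(1.0 - x)

def L0(x):
    return math.log(x) / (1.0 - x)

def L1(x):
    return (L0(x) + 1.0) / (1.0 - x)

def Lsm1(x, y):
    return Li2(1.0 - x) + Li2(1.0 - y) + math.log(x) * math.log(y) - math.pi**2 / 6.0

x, y = 0.3, 0.6
print("L0  :", L0(x))
print("L1  :", L1(x))
print("Ls-1:", Lsm1(x, y))
# L0 and L1 remain finite as x -> 1, as noted in the text:
for eps in [1e-2, 1e-4, 1e-6]:
    print(f"x = 1 - {eps:g}:  L0 = {L0(1 - eps):.6f}  L1 = {L1(1 - eps):.6f}")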
the type of “giant K factors”, corresponding to very large NLO corrections, already
discussed in Section 2.2.7. Similar issues have also been observed for higher jet mul-
tiplicities. In these cases the behaviour of the NLO calculation is perfectly normal
in the regions that dominate the cross-section, that is the production of an on-shell
vector boson with jets that are central and fairly close to the transverse momentum
threshold. It is away from these regions, in more extreme kinematic configurations,
that the bad behaviour is observed. When considering these regions it is clear that
one should not expect the use of a fixed scale, for instance the mass of the vector
boson, to produce reliable results. In the tail of a distribution, for instance at very
high jet transverse momentum, the transverse momentum itself provides an alternative
physical scale that might be expected to yield a more sensible prediction.
The choice of a scale that changes event by event rather than one that is fixed, for
instance at the vector boson mass, must still be performed with care so as to capture
the correct behaviour. Fig. 4.18, taken from Ref. [225], shows LO and NLO predictions
for the transverse momentum of the second jet in W +3 jet events, computed at the LHC
for two different event-by-event scale choices. When using the scale choice µ = ETW ,
the transverse energy of the W boson, the NLO prediction differs substantially from
the LO one and, for sufficiently high transverse momentum, becomes negative and
therefore unphysical. This is clearly a breakdown in the perturbative expansion due
to a poor choice of scale. For jets at large transverse momenta ETW is not the relevant
physical scale. The dominant kinematic configuration is two jets that are produced
approximately back to back, with the W boson and the third jet relatively soft. A
sketch of such a configuration is shown in Fig. 4.19. In that case a scale such as
µ = ĤT , the scalar sum of all partonic transverse energies, is more appropriate. Using
such a scale restores the good behaviour of the NLO prediction, which now remains
physical and rather close to the leading-order distribution, as can be seen in Fig. 4.18
(right).
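The distinction between the two scale choices can be illustrated with a toy configuration of the kind sketched in Fig. 4.19: two hard back-to-back jets recoiling against each other, with a comparatively soft W boson and third jet. The numbers in the following snippet are invented for illustration only.

# Toy illustration of the scale choices discussed above.  All momenta are
# invented, and HT_hat is taken here as the scalar sum of the jet transverse
# momenta plus the transverse energy of the W (precise definitions of HT-like
# scales vary between calculations).
import math

m_W = 80.4                        # GeV
jet_pt = [450.0, 430.0, 35.0]     # hypothetical jet transverse momenta (GeV)
pt_W = 40.0                       # hypothetical W transverse momentum (GeV)

ET_W = math.sqrt(m_W**2 + pt_W**2)    # transverse energy of the W boson
HT_hat = sum(jet_pt) + ET_W           # one common event-by-event scale choice

print(f"E_T^W     = {ET_W:7.1f} GeV")
print(f"HT_hat    = {HT_hat:7.1f} GeV")
print(f"HT_hat/2  = {HT_hat / 2:7.1f} GeV")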
As already discussed in Section 4.1, the approximate scaling of the V +jets cross-
section with the number of jets was first observed in the original leading-order calcu-
lations of up to V + 4 jets [224]. At that time such calculations were important for
assessing the leading backgrounds to top pair production in the semi-leptonic decay
mode. With the availability of NLO calculations for such quantities these observations
can now be re-examined at higher order. The NLO predictions for the ratio
Rn = σ(W + n jets)/σ(W + (n − 1) jets)   (4.52)
are shown in Fig. 4.20, for n ≤ 4. The ratio is shown for two different jet algorithms,
anti-kT and SISCone, and two choices of the jet separation, ∆R = 0.4 and 0.7. Thus,
for a variety of typical jet algorithms, the cross-section falls by approximately a factor
of five for each additional jet present in the final state. The exact value of this ratio,
and its approximation by a constant, is clearly dependent on the details of the cuts
and algorithm. In particular it fails for larger jet separations where the phase space
for multijet production becomes significantly constrained. Nevertheless, the idea that
cross-sections for high jet multiplicities — beyond the scope of any present calculation
— could be approximated in this way is very attractive. Such ideas have been the
focus of renewed interest in recent years [570, 571].
Calculations to NNLO are now available for the W +jet [274] and Z+jet [269, 564]
processes. In both cases the NNLO calculations indicate only a small correction at
this order, which is especially reassuring given the large NLO K factor observed in
Fig. 4.17. Most importantly, the theoretical scale uncertainty is reduced to the level
of about 5%. These developments pave the way for precision studies using these final
states in upcoming LHC runs.
sociated with parton splittings in the underlying matrix elements. At NLO there can
be two partons in a jet, so that for the first time, jets can have internal structure.
Although the collinear singularities have been cancelled against the virtual corrections
to yield a finite result, one remnant of this cancellation is the fact that the theoretical
prediction inherits a logarithmic dependence on the jet size. Controlling these loga-
rithms can become important if the value of R is small. For this reason it is possible, at
NLO, to increase the cross-section by increasing the value of R. The larger the number
of jets in the final state, the greater the potential difference between the behaviour of
the LO and NLO predictions for the dependence on the jet size.
It has already been argued that scale choices based on the variable HT are well
motivated for the W + 3 jet process. As the number of jets present in the theoretical
calculation increases this becomes ever more true: the number of possible relevant
scale choices grows rapidly, while the definition of HT tries to capture some of the
essential hardness of the scattering process in a generic fashion. The results presented
in this section use a central scale choice of µ = HT /2, where the prefactor is chosen for
a mixture of theoretical and pragmatic reasons. This scale yields well-behaved cross-
section predictions at both LO and NLO, with K factors near unity [225]. In addition,
as will be shown in Chapter 9, the use of such a scale results in good agreement with
LHC data.
The results of a study of the dependence of the cross-section on the jet algorithm are
shown in Fig. 4.21. Predictions from the BLACKHAT+SHERPA collaboration [225] are
shown for both the anti-kT and SISCone jet algorithms, as a function of the jet size
parameter R. The leading-order cross-section for W + 1 jet is independent of the jet
size and algorithm because the jet consists of only one parton. At NLO the SISCone
cross-section is slightly larger than the anti-kT cross-section for the same jet size, as
expected, since the phase space for two partons to be in the same jet is greater for
the SISCone algorithm than for the anti-kT algorithm. The NLO cross-section grows
with jet size since it is more likely to find two partons in the same jet as the jet size
increases.
For W + 2 jet production, the LO cross-sections decrease with increasing jet size
since the two partons must be separated by a distance ∆R. The effective separation
is larger for SISCone than for anti-kT (∆R = Rjet for anti-kT ), as discussed earlier,
leading to smaller cross-sections for SISCone. For W + 2 jets at NLO there can be
either two or three partons in the final state, and either one or two partons in a jet.
Now there are two competing effects with regards to the dependence of the jet cross-
section on jet size: the ∆R requirement and the larger phase space for two of the three
partons to be in the same jet. The second effect wins, with the net result that the
jet cross-sections increase with increasing jet size, with SISCone being slightly larger
than anti-kT .
For three and four jets in the final state the cross-sections decrease with increasing
jet size at both LO and NLO, with the slope becoming steep at LO. The first (∆R)
effect becomes dominant over the second (phase-space). The anti-kT cross-sections are
larger than the SISCone ones at both LO and NLO. It is interesting to note that
while the scale uncertainties increase dramatically with increasing number of jets at
LO (since each extra jet requires an extra power of αs ), the uncertainties at NLO are
Fig. 4.21 Cross-sections for W + n jet production at the 7 TeV LHC. Pre-
dictions are computed at NLO and LO, with jets defined by pT > 30 GeV,
and are shown (from top to bottom) for n=1,2,3, and 4 jets (5 jets at LO
only). The predictions were generated using BLACKHAT +SHERPA ROOT
ntuples, using CTEQ6.6 PDFs, a central scale of HT /2, and uncertainties
obtained by varying this scale by a factor of two in each direction.
relatively stable.
The scale dependence for the W + 4 jet cross-section at the 7 TeV LHC is shown in
Fig. 4.22, for LO and NLO predictions, using the anti-kT jet algorithm. At LO the scale
dependence is fairly trivial. The cross-sections decrease monotonically with increasing
scale. The size of the cross-section decreases with increasing jet size since, again, the
larger the jet size the further the partons are required to be separated at LO. The LO
cross-sections are smaller for the SISCone algorithm than for the anti-kT algorithm,
for the same jet size parameter, given the larger effective separation required by the
SISCone algorithm.
At NLO, the cross-section behaviour is non-monotonic, with the anti-kT cross-
sections having a peak cross-section around a scale of HT /2 (the peak cross-section
for the SISCone jet algorithm occurs at lower scales, at or less than HT /4). At this
scale, for the anti-kT jet algorithm, the K factor is almost exactly 1; this is not strictly
true for other jet sizes or for the SISCone algorithm, although the K factors do not
differ greatly from unity for these other choices. Since the scale HT /2 is at or near
the peak of the anti-kT cross-section, any scale variations (for example in the range
HT /4 to HT ) will be one-sided. At smaller scales, the NLO cross-sections decrease; in
fact the cross-section for a scale of HT /8 for the anti-kT jet algorithm is negative. The
decrease in the cross-sections at low scales becomes milder as the jet size increases (or
if SISCone is used instead of anti-kT ).
If the NLO and LO cross-sections were evaluated at a scale of about HT /5 (corre-
sponding to 80 GeV, or approximately mW ), the K factor would be much larger than
one, with the K factor increasing as the jet size decreases. This latter behaviour is
partially due to the divergent behaviour of the LO cross-section and the fact that the
NLO cross-section decreases more rapidly with low scales for smaller jet sizes.
An understanding of the dependence of the cross-section on jet algorithm and
jet size can be crucial when comparing data to theory. For example, in Ref. [227],
the Z + 3 jet cross-section was calculated at the TEVATRON, using both the SISCone
and anti-kT algorithms (with R = 0.7 for both). Using a central scale of HT /2 the
calculations give
σ^NLO(anti-kT) = 48.7^{+3.8}_{−7.9} fb ,    σ^NLO(SISCone) = 40.3^{+8.6}_{−8.5} fb .   (4.53)
At first glance the anti-kT cross-section is noticeably higher than the SISCone one and
has a smaller scale dependence. However, if the peak cross-section is used for both
jet algorithms (i.e. HT /2 for anti-kT and HT /4 for SISCone), the cross-sections and
uncertainties become very similar.
that is, with all momenta outgoing and the superscripts labelling the particle helicities,
“−” and “+” representing left- and right-handed helicities respectively. The tree-level
amplitude for this process can be written as
A^tree = ( e²/sin²θW )² δ_{i1i2} PW(s34) PW(s56) [ A^{tree,a} + CL,u A^{tree,b} ]   (4.55)
where the propagator and coupling factors are denoted by
PW(s) = s/( s − mW² + iΓW mW ) ,
CL,u = 2Qu sin²θW + s12 (1 − 2Qu sin²θW)/( s12 − mZ² ) .   (4.56)
Note that the factor CL,u contains two terms, representing the coupling of a left-
handed up quark to an intermediate photon and Z boson. Atree,a and Atree,b are
gauge invariant primitive amplitudes: the coupling structure of Atree,a corresponds
to Feynman diagrams containing electroweak bosons directly attached to the quark
line, while Atree,b reflects a triple gauge-boson vertex. These same amplitudes can be
recycled for other diboson processes; clearly there is no contribution from Atree,b in
the case of ZZ production. The specific forms of the tree-level amplitudes are,
A^{tree,a} = i ⟨1 3⟩ [2 5] ⟨6|(2 + 5)|4⟩/( s34 s56 t134 ) ,
A^{tree,b} = ( i/(s12 s34 s56) ) [ ⟨1 3⟩ [2 5] ⟨6|(2 + 5)|4⟩ + [2 4] ⟨1 6⟩ ⟨3|(1 + 6)|5⟩ ] ,   (4.57)
where the one-loop primitive amplitudes Aa and Ab correspond to all possible dressings
of the corresponding tree-level primitives. It is convenient to decompose the amplitudes
further according to
A^a = cΓ [ A^{tree,a} V + i F^a ] ,   (4.59)
2 2
1 (t234 δ12 + 2s34 s56 ) [4 5] h3 6i
+ T I33m (s12 , s34 , s56 ) + +
2 h2|(5 + 6)|1i∆3 [3 4] [5 6] h3 4i h5 6i
h3 6i [4 5] (t134 − t234 ) 1 h6|(2 + 5)|4i2
+ − , (4.62)
h2|(5 + 6)|1i∆3 2 [3 4] h5 6i t134 h2|(5 + 6)|1i
flip : 1 ↔ 2, 3 ↔ 5, 4 ↔ 6, ⟨a b⟩ ↔ [a b] .   (4.63)
should only be applied to the terms inside the brackets ([ ]) in which it appears. The
amplitude is written in terms of
δ12 ≡ s12 − s34 − s56 , δ34 ≡ s34 − s12 − s56 , δ56 ≡ s56 − s12 − s34 , (4.64)
where
x = s12/s56 ,    y = s34/s56 ,    ρ = 2s56/( δ56 + √∆3 ) .   (4.68)
The coefficient of the logarithm reads
2 2
1 t134 h3 4i [4 5] [3 4] h3 6i
+ + − 2 h3 6i [4 5]
2 h2|(5 + 6)|1i∆3 [5 6] h5 6i
h3|(1 + 4)|5i h3 4i [1 4] h2 6i [4 5] δ12 − 2 [4|36|5]
+ −
[5 6] h2|(5 + 6)|1i h2|(5 + 6)|1i∆3
h3|4|5ih6|(1 + 3)|4i + h6|3|4ih3|(2 + 4)|5i
+4
h2|(5 + 6)|1i∆3
δ12 [4 5] h3|(2 + 4)|5i h3 6i h6|(1 + 3)|4i
+2 − , (4.69)
h2|(5 + 6)|1i∆3 [5 6] h5 6i
The primitive amplitudes given here are sufficient to describe all helicity amplitudes
for the W W process and, after suitable permutations of momenta, all diboson pro-
cesses. The interested reader is referred to the original paper for further details [471].
have included the effect of single-resonant diagrams [311, 314, 760]. Examples of such
diagrams, that are required for electroweak gauge invariance when the decays of the
vector bosons are included, are shown in Fig. 4.25. Such single-resonant diagrams
are not important in the calculation of the inclusive cross-section for e− e+ e− e+ pro-
duction; however, they sculpt other distributions, most notably the invariant mass of
the 4-lepton system. In fact, the presence of just such a contribution was useful in
cross-checking the first observation of the putative Higgs boson in the decay channel
H → ZZ → 4 leptons [369]. Finally, a further important contribution to many of the
diboson production processes originates from gluon–gluon initial states. These are the
counterparts of the diphoton loop diagram depicted in Fig. 4.13.
An important issue for the V γ processes is the treatment of photon radiation.
In particular, the photon is radiated copiously from any leptons produced in the de-
cay of the vector boson V , for instance via the leading-order diagrams indicated in
Fig. 4.26. The propensity of the leptons to radiate in this way can be assessed by
Fig. 4.27 The ratio of LO cross-sections for the e− e+ γ and ν ν̄γ final
states, as a function of the minimum photon pT .
comparing the cross-sections for e− e+ γ and ν ν̄γ as a function of the photon trans-
verse momentum. Such a comparison, shown in Fig. 4.27, demonstrates that for suf-
ficiently high photon transverse momentum, radiation in the decay is effectively re-
moved. As a result the ratio σ(e− e+ γ)/σ(ν ν̄γ) tends to the ratio of branching fractions,
BR(Z → e− e+ )/BR(Z → ν ν̄) ≈ 1/6. However, for typical photon pT cuts used at the
LHC, the ratio exceeds this by up to a factor of two. Therefore the proper inclusion
of this radiation is essential in order to provide a good theoretical description of the
whole event sample. Conversely, in probes of the diboson production mechanism, such
as in searches for anomalous couplings, it is advantageous to suppress the radiation in
the decay by using a higher photon pT cut.
The dependence of the W W cross-section, for the LHC operating at 8 TeV, on
the choice of the renormalization and factorization scale is shown in Fig. 4.28. Since
this is an electroweak process the scale dependence of the LO result is very mild,
originating solely from the factorization scale inherent in the definition of the PDF.
Of course, this dependence is not indicative of the theoretical uncertainty of the pre-
diction, as can be seen from the NLO curves. These lie outside the range of the LO
scale uncertainties due to the O(αs ) real radiation corrections that are sensitive to the
gluon PDF. The NLO corrections are quite large, increasing the theoretical prediction
which is relevant for W − γ production, with subsequent leptonic W decay. The ampli-
tude is [471],
A^tree = i√2 Vud δ_{i1i2} ( e³/sin²θW ) ( PW(s34)/(s12 − s34) ) ( Qu s25 + Qd s15 ) ⟨1 3⟩²/( ⟨3 4⟩⟨1 5⟩⟨2 5⟩ )   (4.72)
using the same notation as before, cf. Eq. (4.56), and where Qu = 2/3 and Qd = −1/3
are the charges of the up- and down-quarks. Note that the amplitude can be written
in this way despite the presence of the diagram in which the photon is radiated from
the W boson thanks to the relation QW − = Qd − Qu . Stripping away overall coupling
and spinor factors, the amplitude is thus proportional to a simple factor,
Qu p2 · p5 + Qd p1 · p5 , (4.73)
that can in turn be readily evaluated in the partonic centre-of-mass. Introducing the
angle θ* between the photon and the up-quark (positive z) direction in this frame, the
amplitude is thus proportional to the combination,
Qu (1 + cos θ*) + Qd (1 − cos θ*) .   (4.74)
Quick inspection of this equation reveals that the amplitude exactly vanishes for the
scattering angle cos θ* = (Qu + Qd)/(Qd − Qu) = −1/3. This vanishing of the amplitude
for a specific scattering angle is characteristic of this process and a feature of all the
contributing helicity amplitudes. It has been termed a radiation amplitude zero
and its features have been well known for some time [764].
At a hadron collider it is more useful to translate this critical scattering angle into
the corresponding rapidity, yγ*. This is easily done, with the result,
yγ* = (1/2) log[ (1 + cos θ*)/(1 − cos θ*) ] ≈ −0.35 .   (4.75)
In order to construct a boost invariant quantity, and thus obtain an observable that can
be measured in the laboratory frame without recourse to reconstructing the partonic
centre-of-mass frame, it is usual to consider the rapidity difference, ∆y* = yγ* − yW*. For
typical experimental analyses, where most events are observed with pγT significantly
smaller than mW, the mass of the W boson means that for this θ* the rapidity of
the W boson is positive but significantly closer to zero. Explicitly, for small photon
transverse momentum pγT (relative to mW),
yW* ≈ (1/2) log[ (mW − pγT cos θ*)/(mW + pγT cos θ*) ] ,   (4.76)
so that, for photons near the minimum transverse momentum pγT,min and with cos θ* = −1/3,
yW* ≈ pγT,min/(3mW) .   (4.77)
Therefore for typical experimental cuts, for instance pγT > 20 GeV, the expected zero
in the distribution of the W-photon rapidity difference is at ∆y* ≈ −0.45 for the sub-
process in Eq. (4.71). At the TEVATRON the quark and anti-quark directions correspond
reasonably well with those of the protons and anti-protons, leading to a prediction for
the radiation zero in ∆y* that is only slightly diluted by PDF effects. At the LHC this
is no longer the case; instead the radiation zero is expected to be located at ∆y* = 0.
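The numbers quoted in this discussion follow directly from Eqs. (4.74)–(4.77); the short sketch below (added for illustration, with mW = 80.4 GeV and a 20 GeV photon cut as assumed inputs) reproduces them.

# Numerical sketch of the radiation-zero kinematics quoted in the text: the zero
# of Eq. (4.74), the photon rapidity of Eq. (4.75) and the approximate dip
# position in Delta-y* from Eqs. (4.76)-(4.77).
import math

Qu, Qd = 2.0 / 3.0, -1.0 / 3.0
m_W = 80.4            # GeV
pt_gamma_min = 20.0   # GeV, a typical photon pT cut

cos_theta_star = (Qu + Qd) / (Qd - Qu)                                   # = -1/3
y_gamma = 0.5 * math.log((1 + cos_theta_star) / (1 - cos_theta_star))   # Eq. (4.75)
y_W = 0.5 * math.log((m_W - pt_gamma_min * cos_theta_star) /
                     (m_W + pt_gamma_min * cos_theta_star))              # Eq. (4.76)

print("cos(theta*) at the zero  :", cos_theta_star)
print("y*_gamma                 :", round(y_gamma, 3))                   # ~ -0.347
print("y*_W (log form)          :", round(y_W, 3))
print("y*_W (approx pT/(3 mW))  :", round(pt_gamma_min / (3 * m_W), 3))
print("Delta y* = y*_gamma - y*_W:", round(y_gamma - y_W, 3))            # ~ -0.43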
The effect of the radiation zero in W + γ production at the TEVATRON is shown in
Fig. 4.29 (left), for pγT,min = 20 GeV. As expected, the zero is replaced by a dip in the
distribution that does, however, occur at the expected value of ∆y*. Also shown is the
impact of the NLO corrections, which tend to diminish the effect of the radiation zero.
Experimentally one cannot reconstruct the W and, instead, it is simpler to just use the
lepton rapidity which retains much of the angular information from the W . However,
it does serve to make the radiation zero less pronounced. Despite this, and further
contamination from effects such as radiation from leptons in the W decay, evidence
of the radiation zero has been observed at the TEVATRON [84]. This is illustrated in
Fig. 4.29 (right), which compares DØ data [97] with the theoretical prediction for
Q` (ηγ − η` ). As expected, the shape of this distribution is very similar to the NLO
one shown in the left-panel of the same figure and the excellent agreement between
the SM prediction and the measurement confirms the presence of the radiation zero.
quark production is of particular significance. The discovery of the top quark was the
main triumph of the TEVATRON, as discussed in Chapter 8. From a theoretical point of
view, unlike the calculations performed so far, it is necessary to include the non-zero
quark mass in order to arrive at sensible predictions. However, it is not only the mass
of the top quark that is special but also its lifetime. Assuming that the CKM matrix
has |Vtb | = 1, the width of the top quark can be computed by considering only the
decay t → W b. At leading order it is given by
Γt = ( GF mt³/(8√2 π) ) [ (1 − β²)² + ω²(1 + β²) − 2ω⁴ ] √( 1 + ω⁴ + β⁴ − 2(ω² + β² + ω²β²) ) ,   (4.78)
where the dimensionless quantities ω and β are the masses of the decay products
rescaled by the top quark mass, ω = mW /mt , β = mb /mt . Inserting the known
masses, this formula gives Γt ≈ 1.5 GeV, rather small compared to the top quark
mass. The short lifetime of the top quark therefore sets it apart from the lighter
quarks: unlike them, it is able to decay before hadronizing. As a result there are no
bound states of top quarks (“toponium”), but the decay products of the top quark do
allow unique studies of its nature (cf. Section 2.1.4).
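To make Eq. (4.78) concrete, the short evaluation below (an added sketch; the input masses are representative values, not taken from the text) reproduces \Gamma_t \approx 1.5 GeV and the corresponding lifetime:

```python
import math

def top_width_LO(m_t, m_W, m_b, G_F=1.16638e-5):
    """Leading-order top quark width from Eq. (4.78), in GeV."""
    omega = m_W / m_t
    beta = m_b / m_t
    bracket = (1 - beta**2)**2 + omega**2 * (1 + beta**2) - 2 * omega**4
    kallen = 1 + omega**4 + beta**4 - 2 * (omega**2 + beta**2 + omega**2 * beta**2)
    return G_F * m_t**3 / (8 * math.sqrt(2) * math.pi) * bracket * math.sqrt(kallen)

# Representative input masses in GeV
Gamma_t = top_width_LO(m_t=173.0, m_W=80.4, m_b=4.75)
print(f"Gamma_t(LO) = {Gamma_t:.2f} GeV")              # ~ 1.5 GeV
print(f"lifetime    = {6.582e-25 / Gamma_t:.1e} s")    # hbar = 6.582e-25 GeV s
```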
Fig. 4.31 The total number of top quark pairs produced at the TEVATRON
and LHC colliders, as a function of year. The dotted (blue) line indicates
future projections, including estimated shutdown periods, as of 2015.
\left|\mathcal{M}_{q\bar q\to t\bar t}\right|^2 = \frac{V}{2N^2}\,\frac{\hat t^2 + \hat u^2 + 2 m_t^2\,\hat s}{\hat s^2}\,,   (4.81)

\left|\mathcal{M}_{gg\to t\bar t}\right|^2 = \frac{1}{2VN}\left(\frac{V}{\hat t\hat u} - \frac{2N^2}{\hat s^2}\right)\left(\hat t^2 + \hat u^2 + 4 m_t^2\,\hat s - \frac{4 m_t^4\,\hat s^2}{\hat t\hat u}\right),   (4.82)
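A minimal numerical sketch of Eqs. (4.81) and (4.82) is given below (added for illustration). It assumes that \hat t and \hat u denote the mass-subtracted invariants, \hat t \to \hat t - m_t^2 and \hat u \to \hat u - m_t^2, and uses N = 3 and V = N^2 - 1; the phase-space point is an arbitrary example:

```python
import math

N = 3
V = N**2 - 1  # number of gluons

def me_qqbar_tt(shat, t1, u1, mt):
    """|M(q qbar -> t tbar)|^2 / g_s^4, Eq. (4.81)."""
    return V / (2.0 * N**2) * (t1**2 + u1**2 + 2 * mt**2 * shat) / shat**2

def me_gg_tt(shat, t1, u1, mt):
    """|M(g g -> t tbar)|^2 / g_s^4, Eq. (4.82)."""
    return (1.0 / (2 * V * N) * (V / (t1 * u1) - 2 * N**2 / shat**2)
            * (t1**2 + u1**2 + 4 * mt**2 * shat - 4 * mt**4 * shat**2 / (t1 * u1)))

# Sample point: sqrt(shat) = 500 GeV, 90-degree scattering
mt, rs = 172.5, 500.0
shat = rs**2
beta = math.sqrt(1 - 4 * mt**2 / shat)
costh = 0.0
t1 = -0.5 * shat * (1 - beta * costh)   # t - mt^2 (assumed convention)
u1 = -0.5 * shat * (1 + beta * costh)   # u - mt^2 (assumed convention)

print(f"qqbar -> ttbar: {me_qqbar_tt(shat, t1, u1, mt):.4f}")
print(f"gg    -> ttbar: {me_gg_tt(shat, t1, u1, mt):.4f}")
```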
Thus the propagator always remains off-shell, since m2T ≥ m2t . The same reasoning
applies to all the propagators that appear in the diagrams for top pair production.
The addition of the mass scale mt sets a lower bound for the propagators — that
does not occur when considering the production of massless (or light) quarks, where
the appropriate cut-off would be the scale ΛQCD . In contrast, as long as the quark is
sufficiently heavy, m_Q \gg \Lambda_{\rm QCD} (as is certainly the case for top, and even bottom,
quarks), the mass sets a scale at which perturbation theory is expected to hold. As a
result it is possible, for instance, to provide a theoretical prediction for the inclusive
top quark pair production cross-section.
Although the large top quark mass therefore provides an easier framework in which
to perform theoretical calculations, the actual calculations themselves can become con-
siderably more complex. At the most basic level, retaining non-zero masses for external
fermions leads to more complicated expressions for the amplitudes. This can be seen
directly by comparing the squared tree-level matrix elements for gg → tt̄ in Eq. (4.82)
with those for the process q q̄ → gg in Eq. (4.5). In one-loop calculations of virtual
corrections, even the scalar integrals are more complex than their massless counter-
parts. In addition, obtaining analytic expressions for the loop amplitudes themselves
is more complicated since the spinor helicity formalism for massless particles does not
readily extend to the massive case. This means that, for instance, the analytic on-shell
unitarity methods discussed earlier are not immediately applicable to processes con-
taining heavy quarks. In fact, although massive fermions cannot be defined in terms
of states of definite helicity, it has been possible to adapt the spinor methods appro-
priately [683] to obtain amplitudes for some processes of interest, including top quark
pair production [186]. Note that the numerical unitarity-based methods discussed in
earlier sections do not suffer from most of these problems and are therefore well-suited
to calculations involving heavy quarks.
Theoretical predictions for top pair production cross-sections are shown in Fig. 4.32.
In addition to the total inclusive cross-section, predictions are also shown for produc-
tion of an additional jet. Even at 7 TeV there is copious production of additional jets
with transverse momenta of 40 GeV or more, with such events accounting for approx-
imately 50% of the total cross-section. At higher energy collisions this fraction grows
significantly, with about 80% of all top pair events accompanied by a 40 GeV jet at
\sqrt{s} = 100 TeV. Such a large proportion of events containing an additional jet calls
into question the perturbative description of these cross-sections. Note though that, at
a theoretical level, the situation can be easily improved by choosing a higher jet pT
threshold, for instance 100 GeV. For associated production with two jets, results have
been obtained using a numerical unitarity approach, dubbed HELAC–NLO [243, 244].
A closely related calculation, for the case where the two associated jets originate from
b quarks, is important as an irreducible background to Higgs boson production in the
tt̄H channel, with H → bb̄. NLO corrections for this case have been computed by
several groups [242, 284, 285].
One can move beyond the NLO calculation for the total top cross-section in a
number of ways. In the preceding discussion the top quarks are considered to be stable
particles in the theoretical calculation. In reality the top quark is only short-lived and
decays to a bottom quark and a W boson that itself subsequently decays. The decays
of a pair of top quarks into the different possible combinations of leptons and jets form
the overall branching fractions shown in Fig. 4.33. Since BRt→W b ≈ 1 due to other
modes being CKM-suppressed, these branching fractions correspond to the product of
two W branching fractions to very good approximation. The dilepton decay mode can
be most accurately measured but only captures a relatively small fraction of top quark
decays. The converse is true for the “all-jets” channel, so that a good compromise is
often found by concentrating on lepton+jets final states. This will be explored further
in Chapter 8 and Chapter 9.
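A rough numerical version of this statement (added here; the W branching fractions are approximate world-average values, not taken from the text, and "leptons" here include taus):

```python
# Approximate W decay branching fractions
BR_W_to_lep = 3 * 0.108          # e, mu, tau combined, ~10.8% each
BR_W_to_had = 1 - BR_W_to_lep

# Treating BR(t -> Wb) ~ 1, the ttbar channels are products of two W branchings
dilepton    = BR_W_to_lep**2
lepton_jets = 2 * BR_W_to_lep * BR_W_to_had
all_jets    = BR_W_to_had**2

print(f"dilepton    : {dilepton:.1%}")     # ~10% (including taus)
print(f"lepton+jets : {lepton_jets:.1%}")  # ~44%
print(f"all-jets    : {all_jets:.1%}")     # ~46%
```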
One way of accounting for the decay is by using a spin-density approach as in
Ref. [236, 237], which has been used to provide predictions at NLO. Alternatively
one can directly compute amplitudes including the decay, taking advantage of the
considerable simplification that occurs due to the left-handed nature of the W inter-
action [312, 683]. With this type of approach it is possible to include NLO effects in
both the production and decay of the top quarks [312, 762]. Example diagrams that
illustrate the division between production and decay stages are shown in Fig. 4.34, for
the case of the real radiation contribution. The division of the process into production
and decay stages can be performed only in the limit of top quarks that are produced
exactly on-shell. The quality of the approximation is then reliant on the fact that the
Fig. 4.33 Branching fractions of a top quark pair into leptons and jets.
Fig. 4.34 Diagrams entering the real radiation calculation of top pair
production in the (a) production and (b) decay stages. The double-barred
line indicates that the top quark propagator is considered on-shell at that
point in the diagram.
top quark width is quite small, with corrections expected of order Γt /mt . In general,
including NLO corrections in the decay of the top quark has only a very small effect
on observable quantities. One distribution that is subject to non-trivial corrections is
the invariant mass of the lepton and b jet produced in the top quark decay, m`b [762].
This observable, and closely related ones — such as the invariant mass of the lepton
and b meson observed in the top decay — are interesting since they could be used
to measure the top quark mass [256]. This is possible since the distribution shows a
distinct kinematic edge, as shown in Fig. 4.35, whose position and shape is sensitive to
mt . However, since the position of the edge is dictated by the kinematics of the decay
process, it can be modified by the inclusion of gluon radiation, as is clear from the
figure. Hence an extraction of the top quark mass from such a study is best performed
with the inclusion of NLO effects in the decay.
Fig. 4.35 The invariant mass distribution m`b expected at the 7 TeV
LHC, for typical experimental selection cuts. Predictions are shown at LO
and at NLO, with the latter computed both with and without including
NLO effects in the top quark decay.
Although the calculations that consider top quark production and decay in a fac-
torized manner can be extended to higher jet multiplicities, the accuracy of the ap-
proximation — dropping non-resonant and non-factorizable contributions — can only
be judged in the light of a more complete approach. Calculations of the NLO correc-
tions away from the resonance regions are available for the final states W + W − bb̄ [459]
and νe e+ be− ν̄e b̄ [245]. Fig. 4.36 shows the three classes of diagrams that must be con-
sidered in these calculations, where either two, one, or no top quark propagators may
be resonant. A detailed study [134] of the two approaches in Refs. [459, 762] showed
that, as expected, the doubly resonant approximation of Ref. [762] is adequate for
many distributions, resulting in differences of the order of a few percent compared to
the full calculation in Ref. [459]. However, some observables, such as the transverse
momentum of the bb̄ pair, differ by as much as 10–20% for pT > 250 GeV. The im-
proved predictions, throughout a wide kinematic range, provided by these calculations
enable a better assessment of these backgrounds in new physics searches.
Another avenue is the calculation of terms in the perturbative expansion beyond
parton-level NLO. The total cross-section for tt̄ production is known to NNLO [427]
and first results have been presented for differential distributions [265, 428]. The calcu-
lation of the total cross-section to this order represented a tremendous breakthrough
in the field of NNLO computations since it was the first calculation involving mas-
sive quarks. Fig. 4.37 demonstrates that the resulting theoretical prediction is under
excellent control, with a residual 10% uncertainty that accounts for both scale and
PDF uncertainties. Note that the accuracy can be further improved in this case by
resumming large logarithms that appear as a result of producing a top pair close to
Fig. 4.36 Example diagrams for the process gg → νe e+ be− ν̄e b̄, illustrat-
ing three categories that may enter the calculation. Clockwise from top left:
double-resonant contributions, with two top quark propagators that may
be on-shell; single-resonant, with only one such propagator; non-resonant,
containing no top propagator that could be resonant.
The mass of the top quark has a particularly important role in the SM. Through
radiative corrections it affects the mass of the W boson and, indeed, precision mea-
surements of the latter provide an indirect determination of m_t. Because the top quark
is so much heavier than the other fermions, the Higgs boson also couples relatively strongly
to it. As a result, simultaneous measurements of m_W, m_t, and m_H can provide
a stringent test for the presence of BSM physics.
Given the importance of the top quark mass, an accurate determination of its
value directly from experiment is highly desirable. However, the situation is compli-
cated by the fact that the top mass parameter that appears in the SM Lagrangian, the
fundamental parameter of the theory, is subject to renormalization at each order of
perturbation theory. As a result, a perturbative calculation at a given order replaces
this fundamental parameter, normally referred to as the pole mass mt , by a renormal-
ization scheme-dependent running mass, mt (µR ). The relationship between the two
quantities can be computed in perturbation theory and is currently known up to four
loops [752]. For instance, in the MS scheme the pole and running masses are related
by
m_t = m_t^{\overline{\rm MS}}(\mu_R)\left[1 + c_1\,\frac{\alpha_S}{\pi} + c_2\left(\frac{\alpha_S}{\pi}\right)^2 + \ldots\right] ,   (4.87)
where the coefficients c_1, c_2 are known. At NLO only the one-loop coefficient is required,
c_1 = 4/3 + \log[\mu_R^2/m_t(\mu_R)^2] and, evaluating the running mass at the scale
\mu_R = m_t, gives
m_t = m_t^{\overline{\rm MS},{\rm NLO}}(m_t)\left(1 + \frac{4\alpha_S}{3\pi}\right) .   (4.88)
Hence the pole mass is approximately 5%, or 8 GeV, larger than the equivalent NLO
\overline{\rm MS} mass. The distinction between the two quantities is therefore of great impor-
tance numerically, not just for theoretical consistency. Conventional extractions of the
top quark mass, for instance in most of the original TEVATRON analyses, implicitly as-
Fig. 4.38 Dependence of the NLO and approximate NNLO top pair
cross–section on the running top quark mass, m(m). The data point with
vertical error bars represents the experimental measurement of the cross–
section given in Ref. [87] and the horizontal error bars the corresponding
uncertainties on the extraction of m(m) at the two orders of perturbation
theory. Reprinted with permission from Ref. [720].
sume the extraction of the pole mass through kinematic fits of distributions. However,
because the top quark is a coloured object, the pole mass suffers from residual theo-
retical uncertainties in its definition of the order of 1 GeV [860]. A more well-defined
extraction of the top quark mass may be performed by exploiting the dependence of
the top pair production cross-section on the mass. The running mass can be extracted
order by order in perturbation theory by comparing the cross-section prediction, com-
puted at the same order, with the experimentally measured value. This procedure,
which was first used in Ref. [720], is illustrated in Fig. 4.38. The resulting NLO and
approximate NNLO running masses are very consistent and, upon converting to the
equivalent pole mass, also agree well with other determinations. A number of alter-
native determinations have been either proposed or implemented, typically based on
either kinematic endpoints or clean measurements of leptonic top decays; for a detailed
discussion of methods and projections for the LHC the reader is referred to a recent
review of this topic [650].
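A one-line numerical check of Eq. (4.88) (an added sketch; the values of \alpha_S(m_t) and of the running mass are representative assumptions, not taken from the text):

```python
import math

alpha_s_mt = 0.108    # representative value of alpha_S at the top-mass scale
m_t_msbar = 163.5     # representative MSbar mass m_t(m_t) in GeV

# Eq. (4.88): one-loop relation between the pole and MSbar masses
m_t_pole = m_t_msbar * (1.0 + 4.0 * alpha_s_mt / (3.0 * math.pi))

shift = m_t_pole - m_t_msbar
print(f"m_t(pole) = {m_t_pole:.1f} GeV, shift = {shift:.1f} GeV "
      f"({shift / m_t_msbar:.1%})")   # roughly 7-8 GeV, i.e. about 5%
```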
Fig. 4.39 Sketch of the rapidity distributions of top and anti-top quarks
expected at the TEVATRON (left) and LHC (right).
A sketch of the rapidity distributions of top and anti-top quarks is shown in Fig. 4.39, from which the asymmetries at the TEVATRON
and LHC can immediately be seen. This asymmetry arises from corrections to the
q q̄-initiated process, as indicated in Fig. 4.40. At the TEVATRON the asymmetry is
clearest: top quarks are not produced equally in the forward and backward regions,
where “forward” is defined with respect to one of the beam directions. At the LHC an
asymmetry does not arise in the same fashion since the strong interaction is parity-
invariant and the pp initial state is an eigenstate of parity. However, as seen in Fig. 4.39
(right), there is a difference in the rapidity distributions of the top and anti-top quarks,
which is due to the sub-leading q q̄ and qg initial states. A charge asymmetry can be
formed by considering the difference between the distributions in the forward and
central regions, but it is very small and thus hard to measure, cf. Chapter 9.
In contrast, the asymmetry at the TEVATRON is a relatively prominent effect that
will be discussed in more detail. At NLO the asymmetry is induced primarily by
the interference between the Born amplitude for q\bar q \to t\bar t and the part of the
one-loop amplitude that is anti-symmetric under exchange of the quark and anti-
quark [701, 702]. This contribution arises from box diagrams such as the one shown in
Fig. 4.40 (right). Note that this is an example of a loop-induced effect that does not
change the magnitude of the tt̄ cross-section but does affect the kinematic distributions
of the final state. This type of asymmetry is not unique to the tt̄ system and, in fact,
it was first discovered in the calculation of QED corrections to the process e+ e− →
µ+ µ− [218].
A second type of contribution to the asymmetry arises from diagrams in which
a hard parton is radiated, as shown in Fig. 4.40 (left). The interference of diagrams
with initial and with final-state gluon radiation leads to a smaller asymmetry in the
opposite direction, with a size that depends on the transverse momentum of the top
quark pair. At large pT (tt̄) the pair recoils against a radiated hard gluon. In the leading
colour approximation, that is, neglecting contributions of order 1/N_c^2, the gluon is
colour-connected to either the t–q pair or the \bar t–\bar q pair (where q, \bar q are the initial-state
quark and anti-quark). Since gluon radiation indicates an accelerating colour charge,
if the gluon is radiated from the t–q pair then the top quark is more likely to have
The asymmetry is expected to be negative for sufficiently hard emission and to in-
crease in absolute value with pT (tt̄).3 This expectation is predicated on the assump-
tion that the beam from which the light quark in the initial state was produced can
be determined. As such, it is only a useful observable at the TEVATRON where the
asymmetric collisions between protons and anti-protons allow one to use the proton
beam as a proxy for the quark direction. The expectation for the asymmetry arising
from the real radiation contribution is borne out by the NLO prediction for A_{t\bar t}^{\rm lab}
shown in Fig. 4.41. As seen in the figure, the virtual contribution, which because of
the 2 → 2 kinematics contributes only at exactly p_T(t\bar t) = 0, has the opposite sign.
To obtain a more useful prediction at low pT , one can consider the results ob-
tained using a parton shower. Such a prediction is shown schematically in Fig. 4.41.
The parton shower prediction interpolates smoothly between large positive values at
small pT (tt̄) and negative values at large top quark pair transverse momentum. The
inclusive asymmetry, obtained by integrating out the dependence on pT (tt̄), is small
and positive. One expects that its value should not be altered by the parton shower,
although further studies have indicated that in fact this is not necessarily the case, due
to colour connection effects that may be accounted for in the shower treatment [859].
Finally, one must remember that, although the asymmetry arises at NLO in the cross-
section, the non-zero value of the asymmetry is strictly a leading-order prediction. As
such it suffers from the usual scale uncertainties, exacerbated by the O(αs3 ) nature of
the observable, resulting in a theoretical uncertainty due to unknown higher orders
of around 40%. The calculation of the NNLO corrections to the inclusive asymme-
3 This intuitive understanding must be modified somewhat when considering the NLO corrections
to t\bar t+jet production [763]. The presence of a second scale in the process (p_T^{\rm jet} in addition to m_t)
leads to large logarithms in the ratio of the two scales that reduce the predicted asymmetry to a
value near zero. This is another lesson in the importance of large logarithms in the presence of
two disparate scales.
Fig. 4.41 The lab frame top quark forward–backward asymmetry, as de-
fined in Eq. (4.89), at the TEVATRON. The asymmetry is shown as a func-
tion of the top quark pair transverse momentum, p_T(t\bar t). The NLO predic-
tion is shown in red, with a single bin at zero from the virtual corrections,
and an NLO+parton shower prediction is shown schematically in black.
try [428] indicate that they are important, at the level of 15–30%. This is crucial in
helping to reconcile the theoretical prediction with the TEVATRON data, as will be
discussed further in Section 8.6.
At the TEVATRON the s-channel process proceeds at a significant rate, about half that of the t-channel process. Conversely, the associated
production of a top quark with a W boson is practically inaccessible at the operating
energy of 1.96 TeV. At the LHC the situation for the sub-dominant modes is reversed.
This is due to the fact that it is far more likely to find a gluon than an anti-quark
in the high-energy proton beams of the LHC. The discovery of single top production
at the TEVATRON will be discussed in Chapter 8 and the more detailed measurements
carried out at the LHC in Chapter 9.
The various single top production channels provide a number of different theoretical
challenges. The need to retain the mass of the top quark means that the calculations
are more complex than similar 2 → 2 processes such as jet production. A further
difficulty is the fact that the separation of the channels is strictly only possible at the
first few orders of perturbation theory. To see how this works, consider the partonic
process
g(p_1) + q(p_2) \to t(p_3) + \bar b(p_4) + q'(p_5)   (4.90)
that enters at one order higher in the strong coupling than the processes depicted in
Fig. 4.43. This amplitude receives contributions from Feynman diagrams such as the
ones shown in Fig. 4.44, which are part of a single gauge-invariant set and so must
be considered together. From the form of the diagrams it is clear that they could
enter the calculation of next-to-leading-order corrections to either the s- or t-channel
processes, but given that they interfere it is not immediately clear how the interference
should be apportioned. However, the issue can be quickly resolved by inspection of the
colour structure of the different contributions. Using the labelling in Eq. (4.90), the
contributions of the two diagrams shown in Fig. 4.44 are
where K (a) and K (b) contain all the kinematic information such as spinors and gamma
matrices. When these diagrams are squared the result is
\left|D^{(a)} + D^{(b)}\right|^2 = \left|D^{(a)}\right|^2 + \left|D^{(b)}\right|^2 = N_c^2\, C_F \left( \left|K^{(a)}\right|^2 + \left|K^{(b)}\right|^2 \right)   (4.92)
where the interference term vanishes since the colour matrices are traceless, T^{a_1}_{i_3 i_3} =
{\rm Tr}\,[T^{a_1}] = 0. Therefore it is possible to simply attribute contributions of the form of
Fig. 4.44 (left) to the NLO calculation of the t-channel process and Fig. 4.44 (right) to
the effects of NLO in the s channel. However, this argument clearly relies on the fact
that one of the fermion lines remains free of any coloured interactions, which is reflected
in the δ factors in Eq. (4.91). In the presence of further radiation these can be replaced
by colour matrices, so that analogous interference effects do not vanish. Therefore the
clean separation of channels breaks down at NNLO and beyond. Nevertheless, the
NNLO computation can be performed in the approximation that such contributions
are neglected and, in this fashion, fully differential results for the t-channel process
have already been presented [288].
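The colour argument behind Eq. (4.92) is simple to verify explicitly. The sketch below (added for illustration) constructs the SU(3) generators T^a = \lambda^a/2 from the Gell-Mann matrices and checks both the tracelessness that removes the interference and the common colour weight N_c^2 C_F of the squared diagrams:

```python
import numpy as np

# Gell-Mann matrices
lam = [
    np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=complex),
    np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]], dtype=complex),
    np.array([[1, 0, 0], [0, -1, 0], [0, 0, 0]], dtype=complex),
    np.array([[0, 0, 1], [0, 0, 0], [1, 0, 0]], dtype=complex),
    np.array([[0, 0, -1j], [0, 0, 0], [1j, 0, 0]], dtype=complex),
    np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]], dtype=complex),
    np.array([[0, 0, 0], [0, 0, -1j], [0, 1j, 0]], dtype=complex),
    np.array([[1, 0, 0], [0, 1, 0], [0, 0, -2]], dtype=complex) / np.sqrt(3),
]
T = [l / 2.0 for l in lam]
Nc = 3
CF = (Nc**2 - 1) / (2 * Nc)

# Tracelessness kills the interference between the two colour structures
print("max |Tr T^a| :", max(abs(np.trace(t)) for t in T))   # 0

# Colour weight of each squared diagram: sum_a Tr[T^a T^a] * Nc = Nc^2 CF
weight = sum(np.trace(t @ t).real for t in T) * Nc
print("sum_a Tr[T^a T^a] * Nc =", weight, " vs Nc^2 CF =", Nc**2 * CF)  # both 12
```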
Both the t-channel and associated production modes rely on diagrams that contain
a bottom quark in the initial state. Usually one does not consider any intrinsic bottom
quark content in the proton; instead it is generated perturbatively from the gluon
and light quark distributions through evolution equations above the bottom quark
threshold. Explicitly, at the first order,
f_b(x, \mu^2) = \frac{\alpha_s}{2\pi} \log\left(\frac{\mu^2}{m_b^2}\right) \int_x^1 \frac{dz}{z}\, P_{qg}(z)\, f_g\!\left(\frac{x}{z}, \mu^2\right) + \mathcal{O}(\alpha_s^2)   (4.93)
which indicates that the leading contribution to the b-quark PDF is from gluon split-
ting, g → bb̄, in the proton. Rather than accounting for this effect in the PDF, it is
often useful to instead directly compute the t-channel and W t processes with the gluon
splitting present in the matrix elements. Specifically, an equivalent description of the
diagrams shown in Fig. 4.42 (b) and (c) is provided by the diagrams shown in Fig. 4.45.
One motivation for proceeding in this way is that experimental analyses of these final
states often attempt to separate the single top processes from backgrounds by making
use of the presence of the additional b quark that is produced in the gluon splitting.
However, such a calculation can be considerably more complicated. For instance, the
inclusion of the b-quark mass is necessary in order to render a finite result for the
diagrams in Fig. 4.45. Without the b-quark mass there would be a collinear divergence
when the b quark is not explicitly observed, meaning that an inclusive cross-section
could not be defined.
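The structure of Eq. (4.93) can be made concrete with a short numerical convolution (an added sketch; the toy gluon distribution, the value of \alpha_s, and the scale choices below are illustrative assumptions rather than a real PDF set):

```python
import math

def f_g(x):
    """Toy gluon density g(x) ~ (1-x)^5 / x; purely illustrative."""
    return 3.0 * (1.0 - x)**5 / x

def P_qg(z, TR=0.5):
    """LO g -> q qbar splitting function, TR [z^2 + (1-z)^2]."""
    return TR * (z**2 + (1.0 - z)**2)

def f_b(x, mu, m_b=4.75, alpha_s=0.2, n=2000):
    """First-order b-quark PDF from Eq. (4.93), via simple midpoint quadrature."""
    log_term = math.log(mu**2 / m_b**2)
    total = 0.0
    for i in range(n):
        z = x + (1.0 - x) * (i + 0.5) / n     # integrate z from x to 1
        total += P_qg(z) * f_g(x / z) / z
    total *= (1.0 - x) / n
    return alpha_s / (2.0 * math.pi) * log_term * total

for x in (0.01, 0.1, 0.3):
    print(f"x = {x:4}:  f_b(x, mu = 100 GeV) ~ {f_b(x, 100.0):.4f}")
```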
Although explicit information about the anti-bottom quark has been lost in the
original approach, logarithms associated with the splitting have been included to all
orders in the PDF evolution through the O(αs2 ) terms that are indicated in Eq. (4.93).
At the first order in the evolution the diagrams with the explicit gluon splitting are
recovered, albeit in the collinear approximation. If the logarithmic terms, of order
\alpha_s \log\left(\mu^2/m_b^2\right), with \mu typically chosen as a physical scale related to the process
such as p_T(b), are important then this approach may be superior. Of course, as
calculations in the two schemes are performed at successively higher orders, the results
of the calculations should agree to a better degree. The often-used nomenclature for
the two calculational schemes contrasts the number of quark flavours included in the
initial state: 4-flavour (4F) for the diagrams of Fig. 4.45 and 5-flavour (5F) when
computing instead the diagrams of Fig. 4.42 at LO. A comparison of the predictions of
the two schemes for t-channel single top cross-sections at the LHC is shown in Fig. 4.46,
both at LO and at NLO. At NLO, once the uncertainty from the choice of scale and
the PDF sets is included, the two calculations are only in marginal agreement. More
importantly, the two calculations give access to kinematic quantities at different levels
of precision. Despite the fact that the 5F calculation is performed to NLO, predictions
for any properties of the spectator anti-bottom quark only enter the calculation in the
real corrections. They are therefore predicted only at leading order. In contrast, the
4F calculation is by definition already sensitive to these quantities at LO. Indeed, the predictions for such
properties are identical in the 5F NLO and 4F LO calculations. However, the 4F NLO
calculation raises the precision of the observable to the NLO level, where significant
deviations from the LO expectation may be observed.
A further subtlety arises in the consideration of the associated top production
process. A careful inspection of the diagrams involved in the NLO calculation of the
5F process reveals, in addition to the LO 4F diagrams such as the one in Fig. 4.45
(right), contributions from diagrams such as the one shown in Fig. 4.47. This is none
other than a diagram for tt̄ production, with the decay t̄ → W − b̄ included. Since such
diagrams, proceeding through resonant top pair production, provide the dominant
contribution to the cross-section, a method for effectively excluding them must be
devised in order to obtain a useful prediction for the single top tW − final state. Since
gauge invariance requires that all contributing diagrams be included, it is not possible
to remove them from the calculation. A number of methods have been proposed for
achieving this goal, for instance applying a cut on the mass of the W − b system in
order to remove the resonant top mass region, or insisting on a b-jet veto at moderate
pT . Note that this issue would be present even in the LO calculation of the 4-flavour
scheme, with the diagram shown in Fig. 4.47 part of the leading-order contribution.
A more sophisticated treatment is to include all diagrams that contribute to this sort
of final state, namely pp → e+ νe bµ− ν̄µ b̄, and to then see how this compares with the
approximate separation into various channels. This sort of calculation is now possible
and the NLO results that have been presented indicate that it is possible to make a
reasonable approximation in this way [534].
Fig. 4.46 Single top t-channel cross-section at the 14 TeV LHC, computed
in the 4- and 5-flavour schemes. Scale uncertainty is indicated by the dotted
lines and the total, considering also the uncertainty from the PDFs, is
shown as a dashed line. Figure based on the results of Ref. [317].
Fig. 4.47 A diagram entering the NLO calculation of W t single top as-
sociated production. It is related by gauge invariance to the one shown in
Fig. 4.45 (right) and represents the process gg → tt̄ followed by the decay
t̄ → W − b̄.
When these are taken into consideration, the expected event rates for many of these processes
are so small that their observation will require high-luminosity datasets from future
LHC runs.
For the first category of processes in the figure, top pair production in association
with a vector boson, the outlook is not so gloomy. Indeed, first evidence for produc-
tion of these final states at the SM level has already been established in Run I of the
LHC [46, 175, 403, 406]. These processes are important for a number of reasons. All
three represent significant backgrounds to multi-lepton signals that, for instance, are
mainstays of SUSY searches. In particular, the tt̄W ± and tt̄Z processes constitute a
non-trivial source of same-sign dileptons within the SM. The tt̄W ± final state also
exhibits a charge asymmetry analogous to the one already discussed for top pair pro-
duction; in this case the emission of a W boson effectively polarizes the top quarks,
leading to a significant O(15%) charge asymmetry that should be observable [738].
In contrast to the case of tt̄W ± production, where the W boson has only an indirect
effect on the top quarks, tt̄Z production directly probes the coupling of the Z boson to
top quarks. Direct constraints on the nature of this coupling should be possible with
future LHC data [829], where a precise knowledge of the SM cross-section that takes
into account both NLO QCD and electroweak effects [541] will be invaluable.
The second category of processes comprises final states containing three vector
bosons: “tri-boson” production. Once again, they constitute important backgrounds
for not only BSM searches, but for ongoing probes of the Higgs boson. For example,
Fig. 4.49 Feynman rules for the coupling of the SM Higgs boson to W
and Z bosons (left) and to fermions (right).
To begin, it is useful to review the ways in which the Higgs boson, in its usual
incarnation, may couple to other SM particles. Since the particle was originally intro-
duced as the agent of electroweak symmetry breaking, responsible for giving the W
and Z bosons non-zero masses, the Higgs boson has tree-level couplings to those par-
ticles. The interactions and the corresponding Feynman rules are shown in Fig. 4.49
(left). The nature of the Higgs boson means that all the couplings are proportional
to the masses of the particles with which it interacts (recall that, at tree level, the W
and Z masses are related by mW = mZ cos θw ). If these were the only interactions of
the Higgs boson then opportunities for its observation at a hadron collider would be
limited to channels with particularly small cross-sections, since production would pro-
ceed through Feynman diagrams with multiple powers of the weak coupling. The two
such mechanisms that are of primary importance are shown in Fig. 4.50. Associated
production refers to the modes in which the Higgs boson is produced together with
an additional W or Z boson. The last production mode, referred to as weak boson
fusion or vector boson fusion, is especially interesting since it has a very clear
experimental signature. The quarks receive only a moderate transverse kick (typically
of order m_V/2) when radiating the W or Z bosons, so they emerge as jets far forward
and backward, at large absolute rapidities. At the same
time, since no coloured particles are exchanged between the quark lines, very little
hadronic radiation is expected in the central region of the detector. Therefore the type
of event that is expected from this mechanism is often characterized by a “rapidity
gap” in the hadronic calorimeters of the experiment.
Although not strictly part of the original formulation of the Higgs boson interac-
tions, it is usual to consider the SM Higgs boson as one that also provides a mechanism
by which fermions acquire masses. The Higgs boson then has Yukawa interactions with
all the fermions, with a strength proportional to the fermion mass, mf . This interac-
tion is shown in Fig. 4.49 (right) and, in the SM, it is the coupling to the top quark
that is especially relevant due to the large top quark mass. In particular it leads to the
channel in which the Higgs boson is produced in association with a pair of top quarks,
through leading-order diagrams such as the ones shown in Fig. 4.51.
Finally, but most importantly, the tree-level interactions described above lead to
couplings of the Higgs boson to light particles that are mediated by loop diagrams.
Fig. 4.50 Feynman diagrams for the production of a Higgs boson through
electroweak interactions alone. The three production modes are associated
production of W ± H (left) and of ZH (centre) and weak boson fusion
(right).
For hadron colliders the key coupling is that of the Higgs boson to two gluons, through
loops of heavy quarks, as shown in Fig. 4.52. Since there is no tree-level coupling of the
Higgs boson to two gluons it is clear that there is no way to renormalize any possible
divergence that this loop diagram could contain. Therefore its contribution must be
finite. Explicitly, the colour- and spin-averaged matrix element squared is given by,
|\mathcal{M}|^2 = \frac{1}{8V}\left(\frac{\alpha_s}{3\pi v}\right)^2 p_H^4 \left|\frac{3}{4}\, I_q\!\left(m_t^2/m_H^2\right)\right|^2 ,   (4.94)
where v^2 = (\sqrt{2}\, G_F)^{-1} \approx (246\ {\rm GeV})^2 is the squared vacuum expectation value of the
Higgs field. The function I_q(x) is defined by,
where the loop function F (x) is sensitive to whether the top quark in the loop is above
or below the pair-production threshold,
F(x) = \begin{cases}
\ \frac{1}{2}\left[\log\left(\left(1+\sqrt{1-4x}\right)/\left(1-\sqrt{1-4x}\right)\right) - i\pi\right]^2 & x < \frac{1}{4} \\
\ -2\left[\sin^{-1}\!\left(1/(2\sqrt{x})\right)\right]^2 & x \geq \frac{1}{4}\,.
\end{cases}   (4.96)
Although the matrix element is formally suppressed by two powers of αs and a loop
factor, in the calculation of the cross-section the gluon parton distribution function
enters twice. Thus, despite the loop suppression, this coupling actually results in the
Fig. 4.52 The one-loop diagram representing Higgs production via gluon
fusion at hadron colliders. The dominant contribution is from a top quark
circulating in the loop, as illustrated.
largest cross-section at the LHC. This mode is usually referred to as Higgs production
by gluon fusion.
Before going on to discuss each of the production modes in turn, it is useful to
consider the size of their cross-sections. Fig. 4.53 shows the cross-sections for each of
the Higgs production processes at a pp collider, as a function of the c.o.m. energy. The
gluon fusion mode is larger than all other modes combined by an order of magnitude
across the range shown, with a cross-section in the range of a few tens of picobarns at
LHC energies. In contrast, associated production with top quarks is the mode with the
smallest cross-section, a few hundred femtobarns. However, the relative importance of
this mode grows far more rapidly with \sqrt{s} than the VBF and associated production
modes. This is mainly due to the fact that this channel benefits from the gluon flux that
is increasingly important as the typical Feynman-x value that is probed decreases as
\sqrt{s} rises. Despite this fact, the VBF mode remains the second-largest cross-section at
all foreseeable future hadron collider energies. Also shown, for comparison, is the Higgs
boson pair production cross-section. This cross-section is so small that, even with the
full 3000 fb^{-1} of integrated luminosity anticipated at the LHC in the future, one could
only expect to produce about 10,000 Higgs boson pairs in total. After accounting
for Higgs boson branching fractions and experimental efficiencies, this leaves very
few events to analyse. Although this cross-section also rises sharply with \sqrt{s}, based
on cross-sections alone one might expect it as difficult to observe Higgs boson pair
production at a 100 TeV collider as to definitively establish the associated production
modes of the Higgs boson at the LHC.
\Gamma(H \to V V^\star) = \frac{1}{S_V}\, \frac{g_W^2\, m_H^3}{64\pi\, m_W^2}\, \frac{m_V \Gamma_V}{\pi} \int_0^{(m_H - m_V)^2} dp^2\; \frac{\sqrt{\lambda(p^2)}\left[\lambda(p^2) + 12\, m_V^2\, p^2/m_H^4\right]}{(p^2 - m_V^2)^2 + m_V^2 \Gamma_V^2}   (4.97)
and the symmetry factor accounts for two identical Z bosons, SZ = 2 and SW = 1.
For a heavier Higgs boson above the diboson threshold this reduces to the simpler and
more well-known result (cf. the NWA given in Eq. (2.83))
\Gamma(H \to VV) = \frac{1}{S_V}\, \frac{g_W^2\, m_H^3}{128\pi\, m_W^2}\, \sqrt{1 - \frac{4m_V^2}{m_H^2}}\left(1 - \frac{4m_V^2}{m_H^2} + \frac{12\, m_V^4}{m_H^4}\right) ,   (4.99)
|\mathcal{M}|^2 = \frac{e^4\, G_F\, m_H^4}{16\pi^2\; 8\sqrt{2}\,\pi^2}\left| N_c\!\left[ Q_t^2\, I_q(m_t^2/m_H^2) + Q_b^2\, I_q(m_b^2/m_H^2)\right] + I_W(m_W^2/m_H^2)\right|^2   (4.101)
In this equation the contribution of the top, bottom, and W -boson loops is shown
separately, in terms of the function Iq (x) already introduced in Eq. (4.95) and
The function F (x) is defined in Eq. (4.96). It is instructive to evaluate these contri-
butions at their physical values, in this case using mb = 4.5 GeV, mt = 175 GeV,
mW = 80.4 GeV and a Higgs boson mass of 125 GeV. Writing the terms in the same
order as in Eq. (4.101) the result is
|\mathcal{M}|^2 = \frac{e^4\, G_F\, m_H^4}{16\pi^2\; 8\sqrt{2}\,\pi^2}\left| (1.838) + (-0.016 + 0.019\,i) + (-8.323)\right|^2 .   (4.103)
The W -boson and top quark contributions enter with opposite signs and therefore
interfere destructively. The contribution of the bottom quark loop is complex (cf. the
behaviour of the function F (x) in Eq. (4.96)) and, as expected, is very small. However,
it affects the cross-section at the 0.5% level due to interference with the rest of the
amplitude. Beyond its importance to establishing a clear experimental signature of the
Higgs boson, the H → γγ decay is also particularly interesting from a theory point
of view. Since the partial width is very small, it is rather sensitive to new particles
that could couple to the Higgs boson and circulate in the virtual loop. Moreover, the
threshold behaviour of the resulting contribution, cf. Eq. (4.96), could in principle be
observed experimentally. Note that the closely related process, which proceeds through
almost identical loop diagrams and thus is qualitatively rather similar, is the decay
H → Zγ. However, this mode results in an even smaller branching ratio once the
decay of the Z boson is also folded in.
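The numbers quoted in Eq. (4.103) can be reproduced with a few lines of code. The sketch below (added here) implements F(x) exactly as in Eq. (4.96); since the definitions of I_q and I_W are not reproduced above, standard one-loop closed forms are assumed for them. With these assumptions the top and W terms match the quoted values well, while the small b-quark term is sensitive to the precise choice of b-quark mass:

```python
import cmath, math

def F(x):
    """Loop function of Eq. (4.96)."""
    if x < 0.25:
        r = math.sqrt(1.0 - 4.0 * x)
        return 0.5 * (cmath.log((1.0 + r) / (1.0 - r)) - 1j * math.pi)**2
    return -2.0 * math.asin(1.0 / (2.0 * math.sqrt(x)))**2

def I_q(x):
    """Assumed closed form of the quark-loop function; tends to 4/3 for large x."""
    return 4.0 * x * (2.0 + (4.0 * x - 1.0) * F(x))

def I_W(x):
    """Assumed closed form of the W-loop function."""
    return -2.0 - 12.0 * x + 12.0 * x * (1.0 - 2.0 * x) * F(x)

m_H, m_t, m_W, m_b = 125.0, 175.0, 80.4, 4.5
Nc, Qt2, Qb2 = 3, 4.0 / 9.0, 1.0 / 9.0

print("top   :", Nc * Qt2 * I_q((m_t / m_H)**2))   # ~ 1.83, cf. 1.838
print("bottom:", Nc * Qb2 * I_q((m_b / m_H)**2))   # small and complex
print("W     :", I_W((m_W / m_H)**2))              # ~ -8.32, cf. -8.323
```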
Although the mass of the Higgs boson has now been determined at the LHC, it is
useful to review the pattern of branching ratios that was expected prior to its discovery.
As shown in Fig. 4.55, which depicts the branching ratios for a SM Higgs boson as
a function of its mass, the most important decay channels are strongly dependent
on the Higgs boson mass. This basic fact necessitated a very broad range of search
strategies at the LHC. Note that these results include various higher-order QCD and
electroweak corrections, although the basic pattern is very similar to the one that
would be obtained using the lowest-order formulae above. As is clear from the figure,
the branching ratios to the different decay products are very sensitive to kinematic
thresholds, unlike the very smooth mass dependence observed in the Higgs boson
production rates. Over the mass range shown, the decay of the Higgs boson to bottom
quarks dominates for mH < 135 GeV and, above that value, it is the W W branching
ratio that is largest. Above 200 GeV the decays into W W and ZZ remain the most
important channels, with a small contribution from H → tt̄ that is as large as 0.2 for
mH ≈ 450 GeV. It is interesting to note that the dominance of the W W branching
ratio is true even below threshold, with at least one of the W bosons produced far from
mass-shell. The presence of all of these features results in a very rich phenomenology
and a difficult experimental task, with search analyses fine-tuned for different putative
Higgs masses. The ZZ → 4 leptons and γγ decay modes benefit greatly from the
excellent identification and resolution of the detected particles. For the decay H →
W W → 2`2ν, the analysis is hampered by the missing transverse momentum which
means that the candidate Higgs boson mass cannot be directly constructed. Although
the transverse mass may be used as a substitute, the resulting resolution is not as
good as in the fully reconstructed cases. Any of the decay modes involving jets of
hadrons suffer from similar resolution issues but are complicated foremost by the fact
that they must compete with large QCD backgrounds. This is especially true for the
decay H → bb̄, with b jets produced prolifically in both pure QCD processes and in
top quark decays.
Following the discovery of a Higgs-like boson at the LHC in 2012, it is useful to
focus specifically on the case mH = 125 GeV. The branching ratios for this mass
are shown in Table 4.2 which is taken from Ref. [460]. For each decay the table also
includes the uncertainties due to variation of input parameters (αs and heavy quark
masses) and due to uncalculated higher orders. The order in perturbation theory at
which the branching ratio is known is also included, for completeness. One additional
branching ratio is shown in this table that did not appear earlier: the one for the decay
H → µ+ µ− . The branching ratio for this mode is an order of magnitude smaller than
that for H → γγ. Since both final states are well reconstructed and have smooth,
understood backgrounds, the expected significance scales roughly as S/\sqrt{B} \propto {\rm BR}\,\sqrt{L},
and one might expect that the luminosity required to observe
H → µ+ µ− should be a factor of 100 larger than that needed for H → γγ. A more
detailed analysis supports this rough estimate, with hopes that this rare decay mode
could indeed be observed with several inverse attobarns of LHC data [401].
The predicted total width of the Higgs boson is given by summing over all contribut-
ing partial widths so that, for instance, the simple expressions given previously can
be used to obtain the leading approximation for the total width. For the boson dis-
covered at the LHC, with a mass around 125 GeV, the total width is dominated by
the decay into bottom quark pairs, cf. Table 4.2. The total Higgs boson width is thus
well approximated, up to an O(1) factor corresponding to BR_{H\to b\bar b}, by Eq. (4.100) with
m_f \to m_b. Compared to the widths of its partner electroweak bosons, the W and the
Z, the width of the Higgs boson is thus suppressed by a factor m2b /m2W . This results
in a SM prediction of ΓH ≈ 4 MeV for a Higgs boson of mass 125 GeV.
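A rough cross-check of this number (an added sketch; the leading-order H → f f̄ width formula, the running b-quark mass, and the branching ratio used below are standard inputs assumed here, not taken from the surrounding text):

```python
import math

G_F = 1.16638e-5      # GeV^-2
m_H = 125.0           # GeV
mb_run = 3.0          # approximate MSbar b mass at the scale m_H, GeV (assumption)
BR_bb = 0.58          # approximate branching ratio H -> b bbar (assumption)

# Standard LO expression: Gamma(H -> f fbar) = Nc G_F m_f^2 m_H beta^3 / (4 sqrt(2) pi)
beta = math.sqrt(1.0 - 4.0 * mb_run**2 / m_H**2)
Gamma_bb = 3 * G_F * mb_run**2 * m_H * beta**3 / (4.0 * math.sqrt(2.0) * math.pi)

Gamma_H = Gamma_bb / BR_bb
print(f"Gamma(H->bb) ~ {1e3 * Gamma_bb:.1f} MeV, total width ~ {1e3 * Gamma_H:.1f} MeV")
```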
This width is much smaller than the typical mass resolution of the LHC experiments,
even in the best-measured channels such as H → γγ and H → ZZ. Therefore a
typical scan of the threshold region does not yield very much information about the
intrinsic width. This type of direct scan yields only a rather weak bound at present,
\Gamma_H \lesssim 1600 \times \Gamma_H^{\rm SM} [402], while an estimate of the eventual sensitivity using this method
is at the level of \Gamma_H \lesssim 50 \times \Gamma_H^{\rm SM} [439]. A number of other methods to constrain the
width directly have been proposed, relying on either interference effects that alter the
shape of the mass distribution in diphoton decays [473] or on a comparison of Higgs-
related cross-sections at the resonance and in the high-mass region beyond it [322].
The latter method rests on the observation that there is a significant fraction of
Higgs boson events in the ZZ final state in the region m(ZZ) > mH . This feature can
clearly be seen in Fig. 4.56, which shows the cross-section as a function of the 4-lepton
invariant mass for ZZ → 4` decays, for typical LHC cuts at 13 TeV. The large off-
shell contribution is the result of two effects. First, the branching ratio BRH→ZZ grows
significantly as the virtuality of the Higgs boson approaches the threshold for producing
two real Z bosons. Second, there is an additional enhancement of the spectrum in
the region m4` ∼ 2mt when there is sufficient energy to resolve the internal top
threshold in the loop. The result is that, for typical lepton cuts, the predicted number
of H → ZZ → 4-lepton events in the off-shell region defined by m4` > 130 GeV can
be 15% or more [315, 322, 654]. However, this is not the full story, due to the fact that
there is another class of diagrams that contributes to this final state at the same order
in perturbation theory. These are indicated in Fig. 4.57 (right) and correspond to box
diagrams in which a quark circulates in the loop. In contrast to the Higgs diagram on
the left, the contribution of light quarks in the loop is non-negligible. The inclusion of
the box diagrams has an important consequence in the off-shell region due to the fact
that the two sets of diagrams interfere destructively at high energies. This behaviour
is a consequence of the unitarizing effect of the Higgs boson on the production of
longitudinally polarized Z bosons [723]. The effect of the destructive interference is
clearly seen in Fig. 4.56, where the contribution of all diagrams is smaller than the
contribution of Higgs diagrams alone in the region m4` > 700 GeV. Information on
the width of the Higgs boson may be obtained by noting that the peak cross-section
is related to both the couplings and the width of the Higgs boson, while the off-shell
cross-section does not depend strongly on the width. Such methods provide much more
stringent limits than direct approaches, \Gamma_H < (5\!-\!9) \times \Gamma_H^{\rm SM} [37, 661]. The experimental
situation is discussed further in Section 9.6.4.
In addition to these direct constraints, the total width of the Higgs boson can be
determined indirectly by comparing measurements of its couplings in different chan-
nels. Converting these measurements into constraints on the maximum width requires
additional theoretical assumptions, for instance bounds on the couplings of the Higgs
to W and Z bosons that are allowed in broad classes of SM extensions [475]. With
this caveat, the indirect bounds can be rather strong: 0.3 < \Gamma_H/\Gamma_H^{\rm SM} < 3.56.
where the trace is over the colour degrees of freedom. Using the resulting ggH coupling,
it is straightforward to compute the LO matrix element squared,
\left|\mathcal{M}^{(LO)}_{gg\to H}\right|^2 = \frac{1}{8V}\left(\frac{\alpha_s}{3\pi v}\right)^2 m_H^4 .   (4.105)
Alternatively, the same expression could have been derived by taking the limit mt /mH →
∞ in the corresponding expression for the same matrix elements, Eq. (4.94).4 By tak-
ing the ratio of these two expressions one finds that the matrix element squared, and
hence the cross-section, in the full theory is related to the one in the effective theory
by the factor
σfull 3 2
R≡ = Iq m2t /m2H . (4.106)
σeffective 4
The value of this ratio is shown in Fig. 4.58, for Higgs boson masses between 100
and 300 GeV. Although formally one would expect that this approximation is valid
only when all other scales in the problem are much smaller than mt , in fact one finds
that mH < mt is sufficient to render the effective theory valid to within 10%. For a
Higgs boson of mass 125 GeV, R corresponds to a correction factor of about 6.5%. In
more complicated applications of the effective theory, for instance in the presence of
jets, it has been found that the requirement pT (jet) < mt is necessary for an accurate
approximation [454].
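A short numerical check of Eq. (4.106) (an added sketch, using the same assumed closed form for I_q as above): for m_H = 125 GeV and m_t ≈ 173 GeV it returns R ≈ 1.07, i.e. the correction of about 6.5% quoted in the text.

```python
import math

def F(x):
    """Eq. (4.96) for x >= 1/4, the relevant branch since m_t > m_H / 2."""
    return -2.0 * math.asin(1.0 / (2.0 * math.sqrt(x)))**2

def I_q(x):
    """Assumed closed form of the quark-loop function; I_q -> 4/3 for x -> infinity."""
    return 4.0 * x * (2.0 + (4.0 * x - 1.0) * F(x))

m_H, m_t = 125.0, 173.0
R = (0.75 * I_q((m_t / m_H)**2))**2   # Eq. (4.106)
print(f"R = sigma_full / sigma_eff = {R:.3f}")   # ~ 1.07
```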
4 In this limit it is easy to show that the function Iq (m2t /m2H ) → 4/3.
The clear benefit of the effective theory is the ease with which higher-order cor-
rections can be computed. The NLO corrections were first computed long before the
discovery of the Higgs boson [438], with the calculation performed in a very similar
style to the Drell–Yan case discussed earlier in Sections 3.3.2 and 3.3.3. The contribu-
tion of the loop diagrams is, as in the Drell–Yan case, proportional to the leading-order
matrix element. Explicitly, using the dimensional reduction scheme, the result is
2\,{\rm Re}\left[\mathcal{M}^{(1{\rm -loop})}_{gg\to H} \times \mathcal{M}^{*(LO)}_{gg\to H}\right] = \frac{\alpha_s N_c}{2\pi}\, c_\Gamma \left(\frac{\mu^2}{m_H^2}\right)^{\!\epsilon} \left(-\frac{2}{\epsilon^2} + \pi^2\right) \left|\mathcal{M}^{(LO)}_{gg\to H}\right|^2   (4.107)
Since the LO amplitude contains the strong coupling, it must be renormalized at this
order. This is achieved by adding a term,
-2\, \frac{b_0}{\epsilon}\, c_\Gamma\, \frac{\alpha_s}{2\pi} \left|\mathcal{M}^{(LO)}_{gg\to H}\right|^2 ,   (4.108)
where the overall factor of two reflects the O(αs2 ) nature of the LO matrix elements.
In addition there is a finite renormalization of the strong coupling associated with the
use of dimensional reduction, to bring its definition into the standard MS scheme,
+2\, \frac{N_c}{6}\, \frac{\alpha_s}{2\pi} \left|\mathcal{M}^{(LO)}_{gg\to H}\right|^2 .   (4.109)
Finally, the full calculation should also include the O(αs ) correction to the effective
Lagrangian shown in Eq. (4.104), which it is natural to include here. Accounting for
all of these contributions, the renormalized virtual result is
\frac{\alpha_s N_c}{2\pi}\, c_\Gamma \left[-\frac{2}{\epsilon^2}\left(\frac{\mu^2}{m_H^2}\right)^{\!\epsilon} - \frac{2\, b_0}{N_c\,\epsilon} + 4 + \pi^2\right] \left|\mathcal{M}^{(LO)}_{gg\to H}\right|^2 .   (4.110)
The real corrections from the process gg → Hg are easily computed using either an
explicit calculation or making use of the Catani–Seymour dipoles that were introduced
in Section 3.3.2. The derivation of these contributions is very similar to the calculation
of the closely related Drell–Yan process, that was explicitly worked out there. The full
result is obtained by adding the real radiation corrections and the appropriate PDF
collinear subtractions to the result in Eq. (4.110). In this way one arrives at the
expression for the NLO corrections (cf. Eq. (3.219) for the Drell–Yan case),
d\sigma^{\rm NLO}_{gg\to H}(p_{g_1}, p_{g_2}) = \frac{\alpha_s N_c}{2\pi}\, \left|\mathcal{M}^{(LO)}_{gg\to H}\right|^2 \int_0^1 dx_1\, dx_2\; \frac{\delta(x_1 x_2 s - m_H^2)}{m_H^2}
\int_{x_1}^1 d\xi_1 \int_{x_2}^1 d\xi_2\; f_{g/h_1}\!\left(\frac{x_1}{\xi_1},\mu_F^2\right) f_{g/h_2}\!\left(\frac{x_2}{\xi_2},\mu_F^2\right) \Bigg\{ \left(\frac{11}{3} + \frac{4\pi^2}{3}\right)\delta(1-\xi_1)\,\delta(1-\xi_2)
+ \Bigg[ \left(\frac{2}{1-\xi_1}\,\log\frac{(1-\xi_1)^2 m_H^2}{\xi_1\,\mu_F^2}\right)_{\!+} + 2\left(\frac{1}{\xi_1} - 2 + \xi_1 - \xi_1^2\right)\log\frac{(1-\xi_1)^2 m_H^2}{\xi_1\,\mu_F^2} + \frac{11}{6}\,\frac{(1-\xi_1)^3}{\xi_1}\Bigg]\,\delta(1-\xi_2)
+ \Bigg[ \left(\frac{2}{1-\xi_2}\,\log\frac{(1-\xi_2)^2 m_H^2}{\xi_2\,\mu_F^2}\right)_{\!+} + 2\left(\frac{1}{\xi_2} - 2 + \xi_2 - \xi_2^2\right)\log\frac{(1-\xi_2)^2 m_H^2}{\xi_2\,\mu_F^2} + \frac{11}{6}\,\frac{(1-\xi_2)^3}{\xi_2}\Bigg]\,\delta(1-\xi_1)\Bigg\} .   (4.111)
To compute a complete set of corrections at this order one must also include contri-
butions from the real radiation diagrams, gq → Hq, and all crossings. These contain
no virtual contributions and their calculation is more straightforward. The net effect
of the NLO corrections is large, which can already be seen from the coefficient of the
LO-like term, proportional to δ(1 − ξ1 ) δ(1 − ξ2 ), in Eq. (4.111).
The size of the corrections motivated the calculation of NNLO corrections to the
Higgs boson cross-section using the same effective theory approach [158, 615], as al-
ready discussed in Section 3.4.1. The inclusion of the NNLO terms provided only a
relatively small further correction, thus stabilizing the perturbative expansion of the
cross-section. However, the residual scale uncertainty remained at the 10% level until
the recent completion of the full N3 LO calculation [157]. The results of that calculation
are illustrated in Fig. 4.59, which shows the cross-sections and uncertainties at each
Fig. 4.59 The scale uncertainty of the Higgs gluon fusion cross-section,
computed in the range [m_H/4, m_H], versus the collider energy, \sqrt{s}. The
cross-section is computed at LO, NLO, NNLO, and N^3LO in the strong
coupling. Reprinted with permission from Ref. [157].
order of perturbation theory. At N3 LO both the size of the correction, and the total
scale uncertainty, are at the level of a few per cent. This has an immediate impact
on the precision Higgs programme of the LHC, for example by significantly reducing
the theoretical uncertainty associated with the extraction of the Higgs boson coupling
strengths. Other improvements in the accuracy of the theoretical prediction can also
be taken into account. For instance, the predictions shown in Fig. 4.53 also include
contributions from electroweak corrections [123]. These represent a few per cent ef-
fect but introduce a small additional uncertainty since one must choose a scheme for
combining the separate QCD and electroweak corrections.
An important issue in detecting events in which a Higgs boson is produced in this
channel is the presence of additional jet activity. There are many significant back-
grounds to Higgs production that naturally contain jets so that it is natural to try
to use the jet information to better separate the signal and background processes.
One example of this is top pair production, which has a considerable cross-section and
leads to final states containing two W bosons and two jets and is thus a background
to searches looking for the decay H → W W . Calculations of the signal rate in associ-
ation with additional jets have been performed at NLO for up to three jets [420] and
at NNLO for H + 1 jet [270, 271, 392]. As a further example of the stabilizing effect of
higher orders in perturbation theory, the cross-section for Higgs+jet at LO, NLO, and
NNLO is shown in Fig. 4.60, as a function of the transverse momentum required to
define the jet. However, the real issue is that all these higher-order calculations yield
predictions for cross-sections that include at least the number of jets present in the
leading-order process. That is, the NLO calculations explicitly include up to one addi-
tional jet and the NNLO ones up to two. However, the signal discrimination requires
predictions for an exact number of jets — and no more than that.5 In the simplest
case, one is thus interested not in the inclusive Higgs cross-section but the cross-section
for Higgs production with zero jets, that is, in the presence of a jet-veto. However,
this type of veto on additional radiation can render the fixed-order perturbative cal-
culations more unreliable. Particularly if the jet-veto scale is much smaller than the
Higgs boson mass, as is usually the case, the perturbative expansion develops large
logarithms of the form log(pveto
⊥ /mH ). In order to recover accurate predictions in this
case it is necessary to go beyond the fixed-order approach and perform a resummation
of these types of logarithm. Such calculations will be discussed further in Chapter 5.
5 This is not the case for other decay modes such as H → γγ and H → ZZ ∗ where inclusive
measurements can be carried out. See Section 9.6.
in the full theory. Of course, in the full theory even the lowest-order prediction already
contains loop diagrams and so the resulting matrix elements contain the usual one-loop
functions. These matrix elements were first computed in Ref. [500] and the results of
that calculation are summarized here.
There are two basic parton-level processes that must be considered:
\mathcal{M}_{gq\to qH} = -i\,\frac{g_s^2}{16\pi^2}\,\frac{g_W}{4 m_W}\,g_s\,\frac{1}{2}\,(t^A)_{i_3 i_2}\;\frac{1}{s_{23}}\;\bar u(p_3)\gamma_\mu u(p_2)\left(g^{\alpha\mu} - \frac{p_1^\mu\,(p_2^\alpha + p_3^\alpha)}{p_1\cdot(p_2+p_3)}\right)\epsilon_\alpha(p_1)\, F(s_{23}, s_H)   (4.114)
where \epsilon_\alpha(p_1) is the polarization vector of the gluon, s_H = (p_1 + p_2 + p_3)^2 and the loop
function F(s_{23}, s_H) for a single quark of mass m_q is given by
F(s_{23}, s_H) = -8m_q^2\left[2 - (s_H - s_{23} - 4m_q^2)\,C_0(p_1, p_{23}; m_q, m_q, m_q) + \frac{2\, s_{23}}{s_H - s_{23}}\Big(B_0(p_{123}; m_q, m_q) - B_0(p_{23}; m_q, m_q)\Big)\right] .   (4.115)
This function depends on the bubble and triangle scalar integrals defined by
B_0(p_1; m_1, m_2) = \frac{\mu^{4-D}}{i\pi^{D/2}}\,\Gamma(1-\varepsilon)\int d^D l\; \frac{1}{(l^2 - m_1^2 + i\varepsilon)\,((l+p_1)^2 - m_2^2 + i\varepsilon)}\,,
C_0(p_1, p_2; m_1, m_2, m_3) = \frac{1}{i\pi^2}\int d^4 l\; \frac{1}{(l^2 - m_1^2 + i\varepsilon)\,((l+p_1)^2 - m_2^2 + i\varepsilon)\,((l+p_1+p_2)^2 - m_3^2 + i\varepsilon)}\,.   (4.116)
\mathcal{M}_{gg\to gH} = -\frac{g_W\, g_s^3}{32\pi^2\, m_W}\; s_H^2\; f_{ABC}\; \epsilon_\alpha(p_1)\,\epsilon_\beta(p_2)\,\epsilon_\gamma(p_3) \Big[ F_2^{\alpha\beta\gamma}(p_1,p_2,p_3)\,A_3(p_1,p_2,p_3) + F_1^{\alpha\beta\gamma}(p_1,p_2,p_3)\,A_2(p_1,p_2,p_3)
\qquad\qquad + F_1^{\beta\gamma\alpha}(p_2,p_3,p_1)\,A_2(p_2,p_3,p_1) + F_1^{\gamma\alpha\beta}(p_3,p_1,p_2)\,A_2(p_3,p_1,p_2) \Big] ,   (4.118)
where the colour labels of the gluons are denoted by A, B, and C. This equation
introduces the projectors F_1 and F_2 that are defined by
F_1^{\alpha\beta\gamma}(p_1,p_2,p_3) = \left(\frac{g^{\alpha\beta}}{p_1\cdot p_2} - \frac{p_1^\beta\, p_2^\alpha}{(p_1\cdot p_2)^2}\right)\left(\frac{p_2^\gamma}{p_2\cdot p_3} - \frac{p_1^\gamma}{p_1\cdot p_3}\right)
F_2^{\alpha\beta\gamma}(p_1,p_2,p_3) = \frac{p_3^\alpha\, p_1^\beta\, p_2^\gamma - p_2^\alpha\, p_3^\beta\, p_1^\gamma}{p_1\cdot p_2\;\, p_1\cdot p_3\;\, p_2\cdot p_3} + \frac{g^{\alpha\beta}}{p_1\cdot p_2}\left(\frac{p_1^\gamma}{p_3\cdot p_1} - \frac{p_2^\gamma}{p_3\cdot p_2}\right)   (4.119)
\qquad\qquad + \frac{g^{\beta\gamma}}{p_2\cdot p_3}\left(\frac{p_2^\alpha}{p_1\cdot p_2} - \frac{p_3^\alpha}{p_1\cdot p_3}\right) + \frac{g^{\alpha\gamma}}{p_1\cdot p_3}\left(\frac{p_3^\beta}{p_2\cdot p_3} - \frac{p_1^\beta}{p_2\cdot p_1}\right) .
The functions A2 and A3 contain the loop integral functions and it is convenient to
rewrite A3 in terms of a further function, A4 , as follows
A_3(p_1,p_2,p_3) = \frac{1}{2}\left[A_2(p_1,p_2,p_3) + A_2(p_2,p_3,p_1) + A_2(p_3,p_1,p_2) - A_4(p_1,p_2,p_3)\right] .   (4.120)
The functions A2 and A4 are then defined by
The remaining functions, W_1, W_2, and W_3, are remnants of the scalar integrals that
enter the calculation. Their definitions are as follows:
W_1(s) = 2 + \int_0^1 dx\, \log\!\left(1 - \frac{s}{m_q^2}\,x(1-x) - i\varepsilon\right)
W_2(s) = 2\int_0^1 \frac{dx}{x}\, \log\!\left(1 - \frac{s}{m_q^2}\,x(1-x) - i\varepsilon\right)   (4.123)
W_3(s,t,u,v) = I_3(s,t,u,v) - I_3(s,t,u,s) - I_3(s,t,u,u)
I_3(s,t,u,v) = \int_0^1 dx\, \left(\frac{m_q^2\, t}{u\, s} + x(1-x)\right)^{-1}\log\!\left(1 - \frac{v}{m_q^2}\,x(1-x) - i\varepsilon\right)
and explicit results in all kinematic regions of interest are given in the appendix of
Ref. [500]. The corresponding matrix element squared is
|\mathcal{M}_{gg\to gH}|^2 = \frac{g_W^2\, g_s^6}{256\pi^4}\,\frac{s_H^4}{m_W^2}\,\frac{8 N_c^2 C_F}{s_{12}\, s_{13}\, s_{23}}\Big[\,|A_2(p_1,p_2,p_3)|^2 + |A_2(p_2,p_3,p_1)|^2
\qquad\qquad + |A_2(p_3,p_1,p_2)|^2 + |A_4(p_1,p_2,p_3)|^2\,\Big] .   (4.124)
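The one-dimensional integrals W_1 and W_2 of Eq. (4.123) are easy to evaluate numerically. The sketch below (added here) uses plain midpoint quadrature and a small negative imaginary part to implement the -i\varepsilon prescription, evaluating both functions for a top-quark loop below and above the 2m_q threshold:

```python
import cmath

def W1(s, m_q, n=4000, eps=1e-9):
    """W1(s) = 2 + int_0^1 dx log(1 - s x(1-x)/m_q^2 - i eps), cf. Eq. (4.123)."""
    total = 0.0 + 0.0j
    for i in range(n):
        x = (i + 0.5) / n
        total += cmath.log(1.0 - s * x * (1.0 - x) / m_q**2 - 1j * eps)
    return 2.0 + total / n

def W2(s, m_q, n=4000, eps=1e-9):
    """W2(s) = 2 int_0^1 dx/x log(1 - s x(1-x)/m_q^2 - i eps), cf. Eq. (4.123)."""
    total = 0.0 + 0.0j
    for i in range(n):
        x = (i + 0.5) / n
        total += cmath.log(1.0 - s * x * (1.0 - x) / m_q**2 - 1j * eps) / x
    return 2.0 * total / n

m_t = 173.0
for rs in (125.0, 500.0):        # below and above the t tbar threshold
    s = rs**2
    print(f"sqrt(s) = {rs:5.0f} GeV:  W1 = {W1(s, m_t)}   W2 = {W2(s, m_t)}")
```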
These matrix elements can be used to make a comparison of the Higgs+jet cross-
section in the full theory with the effective theory approximation. Such a comparison is
shown in Fig. 4.61, where the cross-sections have been computed at \sqrt{s} = 13 TeV and
for mH = 125 GeV. The ratio R, defined analogously to Eq. (4.106), demonstrates the
anticipated behaviour. The cross-section at low jet p⊥ is larger by the same factor as for
the inclusive Higgs cross-section, cf. Fig. 4.61. The difference between the calculations
actually decreases as p⊥ increases, until the result of the full calculation is smaller
than in the effective theory. Nevertheless, the approximation remains reasonable for
p⊥ (jet) . 200 GeV. At higher jet momenta the approximation quickly breaks down,
resulting in a very poor description in the effective theory.
The rate for producing a Higgs boson in association with an additional jet is signif-
icant at the LHC, due to the nature of the gluon-fusion process. It is more important
than, for instance, the similar Drell–Yan process due to the fact that the gluons natu-
rally radiate more copiously than quarks. Indeed, the cross-section for producing more
than one jet is still rather high. This is indicated in Fig. 4.62, which shows cross-
sections for the production of a Higgs boson in association with up to three jets as a
function of the machine operating energy. At LHC energies, with typical jet cuts, the
cross-section for producing H+jet is about half the inclusive Higgs production rate,
and√the rate for two or more jets is only smaller by about a further factor of three.
As s increases, the jet cross-sections grow relatively more rapidly if the jet definition
remains the same. Thus the study of Higgs boson processes that contain additional
jets becomes even more of a concern for any future hadron collider.
The first measurements of differential jet production in association with a Higgs
boson at the LHC will be discussed in Section 9.6.5.
On the theoretical side, the weak nature of the process means that the cross-section
is under good control. The NLO corrections are small and the scale uncertainty at
this order is around 10% [171, 228]. To perform a computation beyond NLO it is
useful to work in a structure function approach, where the process is described by
the independent production of two W or Z bosons by each initial parton. The W
or Z bosons then subsequently interact to produce the Higgs boson, as indicated
schematically in Fig. 4.63.
This double deep-inelastic scattering approach is an excellent approximation be-
cause of the fact that interference effects between the quark lines are very small. Using
this framework it has been possible to compute the NNLO corrections to the WBF pro-
cess [263, 264], including also the calculation of fully differential observables [301]. Most
recently the calculation has been extended to N3 LO for the total cross-section [488].
Fig. 4.64 shows the scale dependence of the cross-section through N3 LO, computed√us-
ing the calculation presented in Ref. [488], as a function of the pp collider energy ( s).
This indicates that the scale uncertainty in the N3 LO prediction for the weak boson
fusion cross-section at the LHC is at the level of a few per mille. The electroweak cor-
rections to this process have also been computed at NLO and included in the HAWK
code [398]. The corrections are of the same size as the NLO QCD contributions and
must therefore be taken into account.
Isolating a clean sample of events in which the Higgs boson is produced through
weak boson fusion is complicated not only by the presence of the usual SM back-
grounds but also by production of the same final state through other Higgs processes.
For instance, the Higgs boson may be accompanied by two jets through associated
262 QCD at Fixed Order: Processes
Fig. 4.62 Cross-sections for Higgs production (mH = 125 GeV) through
gluon fusion at proton–proton colliders as a function of centre-of-mass
√
operating energy, s. Cross-sections for production of a Higgs boson in
association with jets are computed for jets satisfying pT > 40 GeV, |y| < 5
and kT -clustering with D = 0.4.
production in which the W or Z boson decays hadronically. However, the biggest such
contamination occurs from the gluon fusion process, with the radiation of additional
jets from the initial state. Enhancing the efficiency of selecting weak boson fusion
events by applying a large rapidity separation between two of the jets in the event still
leaves a substantial contribution from gluon fusion events, as shown in Fig. 4.65. This
presents an additional complication when trying to make a precision measurement of
the couplings of the Higgs boson in this channel.
An interesting aspect of the weak boson fusion process is its ability to probe the
tensor structure of HW + W − and HZZ couplings [811]. The most general possible
structure of the HV V vertex (where V = W ± or V = Z) consistent with gauge
invariance can be written as,
µν
CHV V (p1 , p2 ) = a1 (p1 , p2 )g
µν
+ a2 (p1 , p2 ) (p1 · p2 g µν − pν1 pµ2 )
+a3 (p1 , p2 )µνρσ p1,ρ p2,σ . (4.125)
Higgs bosons 263
The SM is realized by a constant value for a1 , independent of the vector boson mo-
menta p1 and p2 (cf. the Feynman rules for the SM Higgs interactions in Fig. 4.49).
A non-constant value for a1 , or the new tensor couplings represented by a2 and a3 in
Eq. (4.125), can be realized by loop-induced couplings within a particular model of
new physics. a2 represents an additional CP-even coupling, while a3 is CP-odd due to
the presence of the epsilon tensor in the interaction. The gold-plated observable for
differentiating between these three types of coupling is the azimuthal angle between
the two tagging jets, ∆φjj . Fig. 4.66 illustrates the expected distribution of this angle,
for the SM case and for pure CP-even or CP-odd contributions represented by anoma-
lous couplings a2 or a3 . The SM expectation is for a relatively flat distribution, while
the anomalous CP-even and CP-odd couplings lead to pronounced dips at ∆φjj = 90◦
and ∆φjj = 0, 180◦ respectively. This behaviour can be understood from the structure
of the interactions and resulting matrix elements [811]. For example, the presence of
the epsilon tensor in the CP-odd interaction means that it vanishes when there are
fewer than four independent momenta in the process, that is, when the tagging jets are
collinear. The picture is more complicated in the presence of an admixture of SM and
anomalous CP-odd and CP-even effects, but the observable ∆φjj remains an excellent
probe.
Note that a similar analysis can be made for Higgs boson events produced through
gluon fusion, where the SM Hgg effective coupling takes exactly the form of the a2
term in Eq. (4.125). As a result, the SM expectation for the ∆φjj distribution in gluon
fusion events has a shape similar to the CP-even (a2 ) curve in Fig. 4.66.
Fig. 4.64 The scale uncertainty in the weak boson fusion cross-section,
at each order through N3 LO. The cross-sections are shown at each order
as a function of the collider energy and are normalized to the N3 LO result.
Reprinted with permission from Ref. [488].
enabled theoretical predictions for the cross-sections to be made at NNLO in the strong
coupling [524]. Moreover, the NLO electroweak corrections are also known [458].
This channel is particularly interesting as a way to get a handle on the bottom
quark decay mode, H → bb̄. A standard analysis of this mode would be plagued
by large backgrounds, in particular from top pair production that yields events with
similar kinematic properties. However, in Ref. [297] it was shown that sensitivity could
be recovered in these channels by looking in the boosted regime where the vector bosons
are produced back to back and at large transverse momenta. Although this significantly
reduces the signal cross-section, the dominant backgrounds are impacted even more
strongly. However, the key to utilizing the boosted kinematics fully comes from the
properties of the jets that should be produced in the signal process: the bb̄ pair should
be reconstructed in a fat jet, as discussed in Section 2.1.6. In order to provide the
best discrimination against background processes it is also necessary to enforce a veto
against any additional jet activity. The application of such a requirement greatly affects
the size of the QCD corrections in this case. For the inclusive V H cross-section the
effect of higher-order corrections is rather mild, but after applying a jet veto this is no
longer true. The situation in the presence of the jet veto is indicated in Fig. 4.67, which
Higgs bosons 265
Fig. 4.65 The cross-section for Higgs production through gluon fusion
and weak boson fusion, as a function of the rapidity separation between
the two jets. Jets are defined using the anti-kT algorithm with D = 0.4
and satisfy pT > 40 GeV.
shows the theoretical prediction for the p⊥ distribution of the fat jet in W H events at
LO, NLO, and NNLO. There is a significant negative correction at NLO and a smaller
further reduction at NNLO. The fact that the first-order correction is so large indicates
that further work is required to obtain a reliable prediction for the cross-section in
this region. Since, at this point, an even higher-order calculation is infeasible, a better
avenue for improving the prediction is through the use of resummation techniques of
the type that will be discussed in Chapter 5.
The second associated production channel results in a final state containing a
Higgs boson together with top quarks. A top quark can be produced through one of
the normal strong processes, with the Higgs boson coupling to it through the Yukawa
interaction. The largest such cross-section is tt̄H production (see Fig. 4.51), taking
advantage of the large top quark mass to enhance the Higgs coupling. Even so, at
LHC operating energies, this cross-section is the smallest of the SM production modes
shown in Fig. 4.53. As a result the LHC has only limited sensitivity in this channel
until the accumulation of bigger datasets in Run II and beyond. Nevertheless, such
data offer the chance of providing direct evidence of the coupling of the Higgs boson
to the top quark. It could also yield information on the coupling to bottom quarks
through a striking signature containing four identified b jets, two originating from the
top quark decays and the remainder from a H → bb̄ decay. Since the lowest-order
process is much more complicated than in the other channels, predictions for the tt̄H
cross-section and related observables are only available at the NLO in QCD [209, 440].
The closely related process, where the Higgs boson is produced in association with
bottom quarks, is too small to be probed at the LHC in the SM.
266 QCD at Fixed Order: Processes
Fig. 4.66 The azimuthal angle between WBF jets for SM Higgs produc-
tion (magenta), and anomalous production through pure CP-even (green)
and CP-odd (blue) couplings. Reprinted with permission from Ref. [613].
It may also be possible to achieve additional sensitivity through the t-channel single
top process shown in Fig. 4.68. However, this channel suffers from larger experimental
backgrounds and the sensitivity to the Htt̄ coupling is smaller because its effect is
washed out by the coupling of the Higgs boson to the t-channel W boson in this
process.
V1 + V2 → V3 + V4 , (4.126)
where V1 , . . . , V4 are a suitable combination of Z and W ± bosons. Soon after the devel-
opment of the Higgs theory, such processes were identified as keys to demonstrating
the validity of the known SM. The reason is that, at high energies, the behaviour
of vector boson scattering amplitudes is very sensitive to the gauge structure of the
electroweak sector [723].
Consider the case of W boson scattering, V1 = V3 = W + , V2 = V4 = W − in
Eq. (4.126). There are three prototype diagrams for this process, as shown in Fig. 4.69,
corresponding to s- and t-channel exchange of an intermediate boson and a single
diagram involving the quartic coupling. In the high-energy limit of this scattering
Higgs bosons 267
Fig. 4.67 The transverse momentum distribution of the fat bb̄ jet in
W H(→ bb̄) events at LO, NLO, and NNLO, from the calculation of
Ref. [524]. Reproduced with permission from Ref. [310].
√ ! 21
8 2π
mH < ≈ 1 TeV. (4.127)
3GF
Indeed, this bound was a powerful argument that a Higgs boson below this mass, or
something playing that role, should be observed at the LHC. Even with the discovery of
a light Higgs boson, it is still possible that this particle alone is not wholly responsible
for unitarizing the high-energy behaviour of the amplitudes. Probing vector boson
scattering is therefore an essential closure test of the SM.
At the LHC it is possible to probe vector boson scattering through the O(α4 ) elec-
troweak processes pp → V1 V2 jj. Representative diagrams are obtained by attaching
quark lines to two of the vector bosons in Fig. 4.69. However, such diagrams only
represent a small fraction of all the possible diagrams one may draw at this order in
perturbation theory. The remaining diagrams represent, for instance, simple emission
of vector bosons from quark lines. All of the vector boson scattering processes have
been computed at NLO in QCD and implemented in the VBFNLO program [187].
A search for VBS is challenging for a number of reasons. First, the weak nature of
these processes means that cross-sections are naturally small. Second, the VBS compo-
nent must be isolated from the relatively uninteresting other production mechanisms.
Finally, due to the inherent cancellation mechanism at high energies, the sought-after
signal is even smaller than might otherwise be expected. Cross-sections for various
VBS processes at 13 and 100 TeV are shown in Table 4.3. Since the cross-sections are
so small, and the important aspect to probe is the contribution at high-energies, the
study of these processes is much more fruitful at 100 TeV. Note that the t-channel
exchange diagrams mean that there is a significant rate for same-sign W -boson produc-
tion. This is particularly interesting since the same-sign final states have a considerable
benefit in that they suffer from far fewer backgrounds than the other channels.
4.9 Summary
This chapter has discussed the application of fixed-order QCD perturbation theory to
a variety of hadron collider processes. While the technical achievements in this arena
are impressive, from NLO computations of 2 → 6 processes to calculations at NNLO
and beyond, this survey has also revealed a number of shortcomings of this approach.
A central theme of the breakdown of the fixed-order description is its application to
Summary 269
Table 4.3 Cross-sections for vector boson scattering processes at the LHC and
a 100 TeV proton-proton collider. Jets are defined by the anti-kT (D = 0.4)
algorithm and satisfy pT > 20 GeV, |η| < 5 and mjj > 100 GeV.
Final state Nominal process σ(13 TeV) [fb] σ(100 TeV) [fb]
e− ν̄e νµ µ+ jj W − W + jj 11.2 316
νe e+ νµ µ+ jj W + W + jj 3.35 114
e− ν̄e µ− ν̄µ jj W − W − jj 1.21 69.6
νe e+ µ− µ+ jj W + Zjj 1.44 39.3
e− ν̄e e+ µ− µ+ jj W − Zjj 0.89 31.8
e− e+ µ− µ+ jj ZZjj 0.29 10.5
final states where the kinematics are particularly restricted. Examples highlighted were
threshold production of photon pairs (Section 4.2.3) and top quarks (Section 4.5), as
well as the effects of a jet veto in Higgs production by gluon fusion (Section 4.8.4) and
in the associated V H mode (Section 4.8.7).
In these cases the fixed-order approach results in perturbative predictions that do
not sufficiently converge at sucessive orders of calculation, or that exhibit unphysical
behaviour. In each case the origin of these symptoms is a large logarithm, present at
each order of the calculation, which spoils the expected perturbative behaviour. As
explained in detail in the following chapter, these logarithms can be systematically
identified and analytically resummed to restore the predictive power of QCD in these
situations. Moreover, Chapter 5 will also show how similar ideas may be used to
perform resummations numerically in order to provide parton shower predictions that
offer an extremely wide range of applicability. Ameliorating the fixed-order description
discussed in this chapter with such methods will be crucial to achieving the level of
understanding necessary to confront the theory of QCD with experimental data. Such
comparisons will be presented in some detail in Chapters 8 and 9.
5
QCD to All Orders
As already discussed in Section 2.3, often kinematic situations occur which are
characterized by very different scales µi . Then logarithms of the type log(µ0 /µ1 ) may
become so large that they overcome the smallness of the couplings, α log(µ0 /µ1 ) ≈ 1.
In such cases, truncating the perturbative expansion at any fixed order will not yield
correct results, and rather than attempting to include, order by order, perturbative cor-
rections, it is often more important to resum such dangerous logarithmically enhanced
terms to all orders. In this way, the programme of resummation of large logarithms
complements the fixed-order efforts described in previous chapters. In fact, there are,
broadly speaking, two ways to try to resum large logarithms, namely either through
analytic methods, which were introduced in Section 2.3, or numerically, by means of a
parton shower. In both cases the name of the game is to further push the accuracy by
combining the knowledge of both fixed-order results and of the logarithmic structures
in one best theoretical prediction.
In order to develop an intuitive understanding of a number of QCD phenomena
further, the chapter starts with revisiting the QCD radiation pattern in Section 5.1.
There, some care is taken to better quantify some ideas concerning the interface of
perturbative QCD and emissions described by it and of the non-perturbative phase
governed by hadrons and hadronization. Some first phenomena, such as angular or-
dering or the QCD hump-backed plateau, will be elucidated based on these rather
quantitative considerations.
In Section 5.2 the discussion of analytic resummation with the example of the
p⊥ spectrum of the W boson already sketched in Section 2.3 will be extended to
higher accuracy and, in addition, also other processes will be considered. To round
this section off, other analytic resummation methods will be introduced, which either
aim at different kinematical situations or are based on a different formalism.
Section 5.3 introduces the numerical implementation of resummation in the proba-
bilistic parton shower picture. It will connect the parton shower to the analytic resum-
mation discussed before. In addition, various methods to improve the formal accuracy
of event simulation through the parton shower by including higher-order exact matrix
elements will be summarized in Sections 5.4 and 5.5. They have been at the centre
of formal developments in the framework of simulation tools and continue to play a
The Black Book of Quantum Chromodynamics: A Primer for the LHC Era. John Campbell, Joey Huston, and Frank Krauss.
© John Campbell, Joey Huston, and Frank Krauss 2018. Published in 2018 by Oxford University Press.
DOI 10.1093/oso/9780199652747.001.0001
The QCD radiation pattern and some implications 271
central role in the quest for an ever-improved precision in the analysis of LHC data.
This yields a relatively broad spectrum both in transverse momentum and energy of
emitted gluons, peaking at small transverse momenta and small energies. As already
discussed in Section 2.1, the spectrum is cut off at small transverse momenta and
energies by the onset of hadronization. This process takes place at distances of typical
hadron radii of R ≈ 1 fm or at masses of the order of at most a few ΛQCD . Broadly
speaking, in a hard scattering process characterized by the hard scale Q, two classes
of secondary emissions present themselves. First, there are emissions, where
1
k⊥ ∼ ω ∼ Q −→ wq→qg ∼ αs (k⊥
2
) 1, (5.2)
R
signalling the production of jets. Second, there are emissions, where
1
≤ k⊥ ≤ ω Q −→ wq→qg ∼ αs (k⊥
2 2
) log k⊥ ∼ 1, (5.3)
R
associated with inner- and intra-jet radiation.
Taking a closer look, different orderings in these emission patterns can be further
distinguished, which lead to different physical phenomena, namely
• double-logarithmic enhanced emissions,
1
≤ k⊥ ω Q , (5.4)
R
constituting the bulk of the emissions and producing the Bremsstrahlung pattern
in inner-jet emissions;
• hard collinear emissions,
1
≤ k⊥ ω ∼ Q , (5.5)
R
which are responsible for scaling violations in DIS and similar;
• soft, typically wide-angle, emissions,
272 QCD to All Orders
1
≤ k⊥ ∼ ω Q , (5.6)
R
which are responsible for emissions in the phase space between jets, and which
lead to the observable drag effect (see Section 5.1.2.2).
In the following, these different kinematic regions will be investigated in more detail,
pointing out their visible consequences.
kk
tform = 2 and thad = kk R2 . (5.7)
k⊥
Demanding that partons are formed before they hadronize (or, in more martial terms,
that they are born before they die) automatically implies k⊥ > 1/R. Extending this
reasoning to study the dynamics of the phase transition from partons to hadrons in
more detail, consider partons living at the edge, i.e., those partons for which k⊥ ≈
1/R, aptly dubbed “gluers” by the authors of [477]. Such gluers are first formed at
times R, with momenta given by kk ∼ k⊥ ∼ ω. As time increases, more and more of
such gluer-partons are being formed.
Assuming that the spectrum of hadrons closely follows that of the partons, a con-
cept known as local parton-hadron duality (LPHD) [177], and assuming the
absence of parton emissions below the cut-off R allows the identification of final-state
hadron energies with the energies of gluers ω. Keeping only the dominant logarithmic
enhanced terms when integrating over the soft emissions encoded in Eq. (5.1) yields
the approximate hadron energy spectrum, namely
ZQ
dk⊥2 2
CF αs (k⊥ ) h ω i dω
dN(hadrons) ∼ 2 1+ 1− (5.8)
k⊥ 2π E ω
k⊥ >1/R
CF αs (1/R2 ) dω CF αs (1/R2 )
∼ log(Q2 R2 ) = log(Q2 R2 ) d log ω(5.9)
.
π ω π
Replacing ω −→ means that the distribution of hadron energies approximately fol-
lows the form
dN(hadrons) /d log = const. , (5.10)
a plateau in the logarithm of their energy. Therefore, the energy distribution of the
(gluer)
hadrons peaks at an energy min that can be related to ωmin ∼ 1/R ∼ mhad , the
typical hadron length and mass scales.
It is interesting to study how additional hard radiation changes this naive picture.
To gain some insight, consider the case of a single secondary parton emitted under
an angle θ and introduce the separation time t(sep) , after which it reaches a distance
The QCD radiation pattern and some implications 273
R from the original parton. For such secondary partons, a hierarchy of time scales
emerges, namely
kk
tform ∼ 2
k⊥
tsep ∼ Rθ ∼ tform (Rk⊥ )
thad ∼ kk R2 ∼ tform (Rk⊥ )2 . (5.11)
For gluers, as Rk⊥ ∼ 1, these scales are all identical, but they differ for “proper” gluon
emissions.
The natural question now is, how such hard secondary partons enter the hadroniza-
tion business? The quick answer is fairly straightforward: further gluers are formed,
following the secondary parton. This can however be further quantified.
(gluer)
From θ(gluer) ∼ θ and k⊥ ∼ 1/R one finds ω (gluer) ∼ 1/(Rθ) and therefore
characteristic times R/θ. In other words, at a time t(sep) the secondary parton starts
decoupling from the other colour sources through the emission of gluers. They in turn
also become hadrons with characteristic energies of about ω (gluer) , which are a factor
1/θ larger than those stemming from the original, primary parton. The secondary
parton therefore starts looking like a jet, produced at its own hardness scale Q ≈ k⊥ ,
but with energies boosted by 1/θ. It therefore does not contribute to the yield of the
softest hadrons. The explanation for this is that for hadron energies in the interval
< <
1/R ∼ ω(hadron) ∼ 1/(Rθ) (5.12)
the primary and secondary parton are not yet separated enough to start emitting
gluers independently — the soft colour field manifesting itself in the gluers just “sees”
the combined colour charge of both. This is the most striking manifestations of colour
coherence in the QCD emission pattern.
Fig. 5.1 The Chudakov effect: a primary photon with momentum p ~k splits
into an electron–positron pair with momenta p(−) + p(+) , which in turn
emits a secondary photon with momentum k. The fermion momenta before
~(+) = z~
photon emission are given by p pk +~ ~(−) = (1−z)~
p⊥ /2 and p pk +~
p⊥ /2,
respectively, and p⊥ denotes their relative transverse momentum.
2
k⊥ ≈ (p(+) )2 sin2 θeγ ≈ (zpk )2 θeγ
2
(5.14)
In order for the photon to resolve the positron as individual point-like charge rather
than just being emitted by the dipole with its zero net charge, the transverse wave-
length of the photon
1 1
λγ⊥ ≈ ≈ (5.19)
k⊥ zpk θeγ
The QCD radiation pattern and some implications 275
cf. Eq. (2.14). Squaring it yields the radiation function. For massless particles the
velocity equals the speed of light, i.e., both |~v |, |~v 0 | → 1. The classical analogy
condensed in Eq. (2.12) is recovered, and expressed in angles the radiation function in
both the quantum mechanical and classical case reads
Here ~n is the direction of the photon, and ~n± are the direction of the two leptons.
Following [504], this expression can be decomposed into two parts, related to the
emission of the photon by either the electron or the positron,
(+) (−)
We+ e− = We+ e− + We+ e− , (5.23)
where
(±) 1 1
We+ e− = We+ e− + − . (5.24)
1 − cos θγe± 1 − cos θγe∓
This decomposition will be encountered again, in later parts of this section. The indi-
vidual emission functions need to be integrated over the full angular region of photon
(+)
emission. Concentrating on We+ e− , the angular integral of the photon direction with
respect to the direction of the positron is given by
To solve this integral, in a first step, the term (1 − cos θγe− ) is expressed as a function
of φγe+ such that
1 − cos θγe− = 1 − cos θe+ e− cos θγe+ − sin θe+ e− sin θγe+ cos φγe+
(5.26)
= a − b cos φγe+ .
276 QCD to All Orders
dz
dφ = i (5.27)
z
and
z + z∗
cos φ = . (5.28)
2
Putting it all together yields
Z2π I
dφγe+ 1 i dz
I = = ∗
2π 1 − cos θγe− 2π z a − b z+z
2
0 |z|=1
I I (5.29)
1 dz 1 dz
= = ,
iπ bz 2 − 2az + b iπb (z − z+ )(z − z− )
|z|=1 |z|=1
Z2π
dφγe+ (+) 1 cos θγe+ − cos θe+ e−
We+ e− = 1+
2π 1 − cos θγe+ | cos θγe+ − cos θe+ e− |
0
(5.32)
0 if θγe+ > θe+ e−
= 2
else.
1 − cos θγe+
This proves the angular ordering property for each of the two individual radiation
functions for the QED case discussed here, and thus also for the combined overall
QED radiation pattern.
In QCD a similar effect can be observed. Consider a colour charge, like a quark,
emitting a first gluon under an angle Θ. Then, a second gluon emitted under an angle
θ would resolve the individual colour charges of the quark and the primary gluon,
only if θ < Θ. If, conversely, θ > Θ, then this secondary gluon would only feel the
combined colour charge of the quark and the primary gluon, i.e., the colour charge
of the quark only. Colour coherence therefore results in an angular ordering in the
The QCD radiation pattern and some implications 277
Then, the momenta of the Bremsstrahlung gluons emitted in the process are con-
strained by k⊥ < p⊥ . The amount of the resulting soft radiation is, roughly speaking,
given by the sum of the colour charges emitting into the respective combined cones,
i.e., the sum of the colour charges Ca + Cc = 2Ca for the forward and Cb + Cd = 2Cb
for the backward cone.
The way angular ordering dominates the emission pattern of soft partons in such
singlet-exchange processes can be summarized as follows Taking into account their
respective orientation, incoming and outgoing particles form coloured dipoles defined
by corresponding opening angles θ. Soft emissions off the individual particles forming
278 QCD to All Orders
these dipoles are confined to cones with an opening angle of θ around the particle
direction.
Consider now the other case of the reaction ab → cd being transmitted by the
t-channel exchange of a coloured particle. In this case the reasoning above does not
apply, since the colour flows are not decoupled any more. Instead, now the region of
emission angles θak , θck ≥ θac will be filled by emissions off the t-channel particle
and is therefore susceptible to its colour charge. Since, typically, at large energies
and relatively small scattering angles, i.e., in the region ŝ/t̂ 1 gluon t–channel
exchange dominates QCD scattering processes, the colour charge usually is CA . This
is a universal property of QCD scattering at large energies and small angles. The
additional soft radiation into central rapidity regions is independent of the scattering
partons. It can also be shown that the soft particle distributions in the region between
the two forward cones are fairly independent of the rapidity and more or less constant.
This is quantified in Section 5.2.5.
The findings are summarized in Fig. 5.2, exhibiting sketches of the rapidity dis-
tribution of particles being emitted in the two cases discussed. The picture of such
processes of course changes when further, hard emissions populate the central rapid-
ity region. As this is a relatively rare process, suppressed by one factor of the strong
coupling without any sizeable logarithmic enhancement, however, the overall picture
for the bulk of the events is relatively well described by the reasoning above.
A striking and highly relevant example for the impact of this aspect of the QCD
radiation pattern in hadron collisions is the production of some potentially heavy sys-
tem, such as, e.g., a Higgs boson, in the fusion of weakly interacting bosons (WBF).
In this case the two outgoing partons c and d form forward tagging jets. These jets
typically will have sizeable energies and relatively small scattering angles, as their
characteristic transverse momentum scale will be given by the mass of the produced
boson. A crucial part of the signal for such a process, allowing a very effective suppres-
The QCD radiation pattern and some implications 279
sion of QCD backgrounds, relies on the fact that soft emissions under large angles are
effectively absent due to angular ordering. As a result, in WBF events, QCD radiation
in the central region between the two forward, opposite hemisphere tagging jets, is
depleted [478, 482]. Therefore, vetoing events with central jets will typically result in
a very effective suppression of the regular QCD background while the signal remains
nearly completely unaffected [810, 820, 821].
On the other hand, angular ordering leads to shrinking allowed emission angles such
that after a few emissions there is no viable phase space left for further softer emissions.
To cast this more quantitatively, consider a jet emerging from a massless quark
travelling with energy E. The gluers emitted by the quark translate into hadrons with
transverse momentum p⊥ relative to the jet axis and with energy . These quantities
are related to the emission angle θ with respect to the incident quark through
p⊥ ∼ θ ∼ 1/R . (5.38)
The plateau in the distribution of hadrons with respect to the logarithm of their energy,
dN/d log = const., cf. Eq. (5.10), can be translated into a spectrum with respect to
the angle,
ZE Z1
d dθ
N ∝ δ(θ − 1/R) , (5.39)
θ
where the δ-function encodes the phase space condition defining gluers.
Consider a case where a hard gluon of energy ω is emitted by the quark under
an angle θ0 . Assuming that the radiation off the quark and the gluon can naively
be added, i.e., assuming independent emission of secondaries without any quantum
coherence, the gluon would contribute gluers with energies given by
1/R ≤ ≤ ω , (5.40)
If this picture was correct, the number of gluers and therefore hadrons would read
280 QCD to All Orders
ZE Z1
d dθ
N ∝ δ(θ − 1/R)
θ
(5.42)
ZE Z1 Zω θZmax
dω dθ0 d dθ
+ αs Θ(ωθ0 − 1/R) δ(θ − 1/R) .
ω θ0 θ
The condition on the gluon being hard, or, more precisely, harder than a gluer, is
encoded in the Heavyside Θ-function. The maximal emission angle of gluers is given
by θmax . In the incoherent case outlined above, this angle essentially is unconstrained,
and for simplicity one would set it to θmax = 1.
Following previous discussions it is clear that this picture of incoherent addition of
the gluer spectra is overly simplistic. Encoding angular ordering in the upper limit for
the angle of the emission of the gluers translates into setting θmax = θ0 . Combined
with the integral over the angle of the hard gluon emission the effect of including
this upper limit or not, i.e., between adding gluers from the gluon coherently or not,
corresponds to about a factor of one half:
Z1 Zθ0 Z1 Z1
dθ0 dθ 1 dθ0 dθ
≈ . (5.43)
θ0 θ 2 θ0 θ
This factor exponentiates with additional gluon emissions, thereby leading to a drastic
reduction in the overall number of gluers emitted in the parton cascade. Ultimately
it leads to a reduction of the soft part of the hadron spectrum produced in such a
cascade, a testable result of angular ordering.
Transforming the integral expression for the total number of hadrons into a spec-
trum with respect to log yields
α
dN 1 + s log2 (ER) − log2 (R) for incoherent sum, θmax = 1
= 2 (5.44)
d log 1 + αs log E log R for coherent sum, θmax = θ0
Instead of the incoherent energy spectrum peaking at energies given by hadron
masses,
1
hi = hEhad i = ∼ mhad , (5.45)
R
the depletion of the soft part of the coherent
√ spectrum results in a peak of hadron
energies roughly scaling like hEhad i ∼ E, the energy of the parton giving rise to
the jet. In fact, the overall energy spectrum of the particles assumes an approximately
Gaussian shape in the variables ξ = − log xp ≈ − log xE . For convenience here the
scaled momentum or energy of the particles xp,E = |~ p|/E, /E is used. In Fig. 5.3
the hadron spectrum following from this discussion is sketched, and it is compared to
data taken at e− e+ colliders at various centre-of-mass energies. The figure shows that,
qualitatively, the actual hadron spectra follow the form given in our relatively rough
discussion.
The QCD radiation pattern and some implications 281
Fig. 5.3 The hump-backed plateau in hadron spectra: in the left panel
the impact of coherence is sketched. As discussed in the text, its domi-
nant effect is the depletion of hadrons in the soft regime, i.e., for large
values of log(1/E). This effect is visible also in the right panel, where data
taken in e− e+ annihilation from TASSO [283] and OPAL [132] at various
centre-of-mass energies are displayed.
depends on the directions ~nq of the quark, ~nq̄ of the anti-quark, and ~n of the emitted
gluon. The two terms Wq and Wq̄ are of course nothing but the radiation functions of
the individual particles constructed in Eq. (5.24). Each of them decomposes into two
parts: the first terms in the square brackets, 1/(1 − ~n~nq,q̄ ) represent the incoherent
part of the radiation pattern, while the second terms in the square brackets account
for interference effects and thus reinstall coherence. As has been shown above, after
282 QCD to All Orders
Fig. 5.4 “Mercedes-star” topologies in q q̄γ (left) and q q̄g final states
(right). They are characterized by relative angles of around 120◦ between
the three particles. Due to momentum conservation their directions ~n1,2,3
must lie in a plane in the c.m.-system of the collision. It is interesting to
compare the radiation pattern of secondary gluons emitted in the ~n direc-
tion midway between the quarks with the counterpart along ~n0 , between
the quark and the boson.
integration over the azimuth angle, this interference part is roughly equal in size to
the incoherent part outside the cone with opening angle θqq̄ such that
Z
dφ 2
hWq (~n; ~nq̄ )i = Wq (~n; ~nq̄ ) ≈ Θ(θqq̄ − θqg) . (5.48)
2π 1 − cos θqg
The drag (or string) effect in the QCD radiation pattern is best studied in the
emission of an additional (second) gluon with direction ~n from “Mercedes-star” q q̄γ
and q q̄g states. These are configurations formed in electron–positron annihilation,
e− e+ → q q̄ + {γ, g}, where the energies of the quark, the anti-quark, and the photon
or gluon, Eq , Eq̄ , and Eγ,g , and their relative angles are of roughly the same magnitude,
i.e.,
E
Eq ≈ Eq̄ ≈ Eγ, g ≈
3 (5.49)
2π
θqg ≈ θq̄g ≈ θqq̄ ≈ ,
3
see also Fig. 5.4.
As an extreme example, consider first the radiation of soft quanta in the direction
~n opposite to the boson (the photon or gluon), but in the plane spanned by the three
objects — quark, anti-quark, and boson. In case of radiation off the q q̄γ final state, the
only relevant effect of the photon emission is the corresponding reduced phase space
of the quark–anti-quark dipole. The radiation pattern in this case therefore is given
by
αs dω d2 Ω~n
dwqq̄γ = CF Wqq̄ (~n) , (5.50)
2π ω 4π
essentially the radiation of a quark–anti-quark pair at a c.m.-energy reduced by the
The QCD radiation pattern and some implications 283
amount carried away through the photon emission, and boosted back into the lab
(lab (cms
system. This boost thus induces a cone size of θqq̄ ≈ 2π/3 less than θqq̄ ≈ π in
the case of the pair in its own c .m.-system, which is compensated for by the suitably
increased energies in the lab system.
For the case of a gluon emitted instead off a photon, consider first a “QED” ana-
logue, essentially replacing the gluon with a positron pair and the quark and anti-
quark with an electron. This mimics the fact that a gluon carries about twice the
colour charge of a quark, which in the limit of large Nc is exactly correct, as
Nc2 − 1 Nc →∞ Nc
CF = −→ . (5.51)
2Nc 2
In this toy model, the radiation pattern is given by
µ 2
00 αs d3 k p1 pµ2 pµ3
dw“qq̄g = + − 2
2π (2π 3 )2ω p1 k p2 k p3 k
(5.52)
αs dω d2 Ω~n 1
= Wqg (~n) + Wgq̄ (~n) − Wqq̄ (~n) ,
2π ω 4π 2
A simple calculation based on the individual W shows the complete absence of soft
radiation in the direction of ~n:
1
Wqg (~n) + Wgq̄ (~n) − Wqq̄ (~n)
2
" #
2 1 − cos 4π 3 1 2 1 − cos 4π3
= 2· − · 2
1 − cos 2π3 1 − cos π 2 1 − cos 2π
3
4π 1
= 1 − cos 2 · 2 − · 8 = 0. (5.53)
3 2
In a similar way, in the direction ~n0 , opposite to one of the quarks, soft radiation is
given by
1 3
Wqg (~n0 ) + Wgq̄ (~n0 ) − Wqq̄ (~n0 ) = · 9. (5.54)
2 2
This is to be compared with the case of the q q̄γ final states, where
2 1 − cos 4π3 3
Wqq̄ (~n) = = ·8
2π 2 2
1 − cos 3
(5.55)
2 1 − cos 4π 3
Wqq̄ (~n0 ) = 3 = · 2.
1 − cos 2π
3 1 − cos π 2
In proper QCD these quantities are of course given by fully including the correct SU (3)
colour factors [176]. Denoting with ~n⊥ yet another direction, orthogonal to the event
plane, this leads to the following ratios for the emission of a soft secondary gluon in
the QED and QCD cases
284 QCD to All Orders
and to the following ratios between the QED and QCD cases in the direction ~n opposite
the boson:
dwqq̄g (~n) Nc2 − 2 7
q q̄γ
= 2
= . (5.57)
dw (~n) 2(Nc − 1) 16
A number of these findings are remarkable. First of all, in the QCD case the soft
radiation in the direction opposite to the gluon is massively suppressed by a factor of
about three (1/Nc ) with respect to the favoured region between the quarks and the
gluon, around ~n0 . Possibly even more noticeable, the destructive interference between
the three dipoles renders this direction, ~n, more unfavourable than the direction ~n⊥
orthogonal to the event plane, which naively is susceptible mostly to the combined
colour charge of the system, namely 0. This depletion is relatively easy to understand
with the help of the toy model. There, the “gluon” was made of two equal charges,
two positrons, and the quark and anti-quark were identified with two electrons. In the
symmetric Mercedes-star configuration the two equal charges compensate each other
in the direction of ~n. A similar effect also is responsible in the case of QCD where
the coloured gluon leaves the quark and the anti-quark with nearly opposite charges.
Secondly, the radiation midway between the gluon and quark, in direction ~n0 is well
enhanced, due to constructive interference effects. This becomes most visible when
comparing the soft radiation there with the soft radiation in the favoured direction ~n
in the q q̄γ final state, the QED events. The ratio of the soft radiation in both regimes
is about 11/8, an enhancement by about 30% in the QCD case. Finally, it is worth
noting that in the QED case, soft radiation in the directions ~n0 and ~n⊥ are similarly
disfavoured, by a factor of four with respect to the “best” direction. By far and large,
these findings have been confirmed by the JADE collaboration in [203].
1 Q2⊥,X
αsn logm where m ≤ 2n − 1 . (5.58)
Q2⊥,X Q2X
Analytic resummation techniques 285
Although these terms seem to be highly singular for Q⊥,X → 0, when properly re-
summed they reorganize into the finite Sudakov suppression factor already encountered
in Section 2.3.2. The transverse momentum Q⊥,X of the boson is of course nothing
but its recoil against (soft) gluons that have been emitted by the incident partons,
lending it the name of “soft-gluon resummation” or “transverse-momentum
resummation”.
The formal accuracy of resummation methods is classified according to the order
m of the logarithms above that are being taken into account in the resummation of all
orders in the strong coupling, n. If all terms m = 2n−1 are included, the resummation
is said to be of leading logarithmic (LL) accuracy. Of course, integrating the LL
expression over all transverse momenta above some minimal value Q⊥,X will result in
terms of the form αsn log2n (Q2⊥,X /Q2X ).
For higher logarithmic orders, there is some dispute in the literature concerning
nomenclature. For the purpose of this book, however, a convention is chosen, where
including terms with m ≥ 2n − 2 refers to next-to-leading logarithmic (NLL)
accuracy. Including one more logarithmic order, i.e., all terms with m ≥ 2n − 3, will
be dubbed NNLL-accurate and so on.
cf. Eq. (2.166). The two Bjorken-parameters xA and xB in this equation are entirely
fixed by the invariant mass of the singlet system, Q2 , and its rapidity y. As already
discussed, the resummation part W̃ij and the hard remainder Yij can be written in
terms of various coefficient functions. However, before discussing their form, it is worth
noting that it has become customary to evaluate the PDFs in both the resummation
part W̃ij and the hard remainder Yij at a common scale µF . While at LO/LL accuracy
this presents an irrelevant shift beyond the actual perturbative accuracy this particular
choice has to be taken into account at higher orders. In the resummation part W̃ij
the common scale µF is corrected to the “natural” factorization scale 1/b⊥ ; this is
achieved in the collinear terms Cia and Cjb , which are a part of W̃ij , see Eq. (5.60).
Therefore, employing this choice, and suppressing the scales as arguments of the
two parts in the expression for the resummed cross section,
W̃ij (b⊥ ; Q, xA , xB )
X Z dξA Z dξB
1 1
= fa/A ξA , µ2F fb/B ξB , µ2F
ξA ξB
ab x xB
A
286 QCD to All Orders
xA 2 1 2 xB 2 1
×Cia , µR , , µF Cjb , µR , 2
, µF Hab µ2R
ξA b⊥ ξB b⊥
2
Q
Z 2
2
dk⊥ 2 Q 2
× exp − 2 A(k ⊥ ) log 2 + B(k ⊥ ) (5.60)
k⊥ k⊥
b20 /b2⊥
and
Z1 Z1
dξA dξB
Yij (Q⊥ ; Q, xA , xB ) = fi/A (ξA , µ2F )fj/B (ξB , µ2F )
ξA ξB
xA xA
(5.61)
xA xB
×Rij→X Q⊥ ; Q, , ,
ξA ξB
see Eqs. (2.167) and (2.170), respectively. The term b20 appearing in the lower limit of
the k⊥ -integration in the Sudakov form factor typically is chosen as
b0 = 2e−γE (5.62)
X αs (µ2 ) N
A(µ2R ) = R
A(N )
2π
N =1
X αs (µ2 ) N
B(µ2R ) = R
B (N )
2π
N =1
xA 2 1 xA
Cia ,µ , , µ2 = δia δ 1 −
ξA R b⊥ F ξA
X αs (µ2 ) N (N ) xA 1
+ R
Cia , , µ2F
2π ξA b⊥
N =1
X αs (µ2 ) N
(N )
Hab→X µ2R = 1+ R
Hab→X
2π
N =1
X αs (µ2 ) N (N )
xA xB xA xB
Rij→X , , Q, µ2R = R
Rij→X Q, , . (5.63)
ξA ξB 2π ξA ξB
N =1
Typically the first terms of the functions A, B, and C depend on the incoming particles
only but not on the specifics of the produced singlet system X, while the hard terms
H as well as the functions R are process-dependent.
Analytic resummation techniques 287
It is interesting to note here that there is some freedom in how different contri-
butions are accounted for. This is in particular true for the hard higher-order (i.e.,
loop) corrections encoded in Hab→X and the collinear coefficients Cia . In the original
formulation by Collins, Soper, and Sterman[410], implicitly Hab→X ≡ 1 was chosen
and the loop corrections were encoded within the Cia . In contrast, in more recent work
by Catani, de Florian and Grazzini [342], the term Hab→X was explicitly introduced
and chosen different from unity.
Cq = CF and Cg = CA . (5.64)
The higher terms A(n) are given by the coefficients of the soft part of the DGLAP
splitting function Paa with a = q, g to the nth order in αs : the A(1) is the coefficient
(1)
of the soft terms in the leading-order splitting functions Paa (z), cf. Eq. (2.33), i.e.,
by the term that comes with 1/(1 − z)+ with the numerator taken in the soft limit
z → 1. The A(2) and A(3) merely modify these terms with universal factors encoding
the soft one- and two-loop corrections to these kernels. Therefore [502, 689, 767, 883],
A(1)
q,g = 2Cq,g
67 π 2 10
A(2)
q,g = 2Cq,g K = 2Cq,g C A − − TR n f
18 6 9
0
A(3)
q,g = 2Cq,g K
( " 2 #
2 245 67 π 2 11 11 π 2 55
= 2Cq,g CA − + ζ(3) + + CF nf − + 2ζ(3)
24 9 6 6 5 6 24
2
209 10 π 7 1
+ C A nf − + − ζ(3) + n2f − . (5.65)
108 9 6 3 27
Here, ζ(3) ≈ 1.2021 is a special valiue of Riemann’s ζ-function (cf. Appendix A.1.4).
At first sight it may seem a bit of a coincidence that, apart from a global prefactor
(2,3)
2Cq,g , which depends on the particle in question, the higher-order terms in Aq,g are
independent of the particle. This, however, is fairly straightforward to explain: the soft
2
terms 1/(1−z)+ which give rise to A(k⊥ ) in Eq. (5.60) stem from an eikonal expression
for soft gluon emission. As already seen, such an eikonal, being quasi-classical does
not show any dependence on the details of the particles emitting the gluon beyond
charge factors — it is sufficient that there are sources for gluon emission, their exact
characteristics such as spin are beyond the resolution power of the long wavelength
QCD fields.
The B (1) terms essentially can be identified with the factors in front of the δ(1−z)–
(1)
terms in the one-loop splitting functions Paa (z), cf. Eq. (2.33). They are also known
288 QCD to All Orders
(1)
as the anomalous dimensions γa of the splitting functions,
Therefore [502],
Bq(1) = −3CF
11 4
Bg(1) = −2β0 = − CA − TR nf . (5.67)
3 3
In contrast to the A(n) terms this way to extract the coefficients through a simple
procedure does not trivially extend to higher orders: in fact, for all n ≥ 2 the B (n)
terms also contain parts that depend on the hard process in question.1
Furthermore, the first terms in the expansion of the collinear subtraction factor
are given by
(0) 1 2
Cia z, ,µ = δia δ(1 − z)
b⊥ F
(5.68)
(1) 1 (1) b2 π2
Cia z, , µ2F = Pia log 2 0 2 − Pia
(z) + δia δ(1 − z)Ca .
b⊥ b⊥ µF 6
(1)
The terms proportional to the first-order splitting kernels Pia , cf. Eq. (2.33), account
for the PDF evolution from µF to the correct scale b0 /b⊥ , which would be the natural
choice in the resummation (to which the Cia contribute). They thus reflect the scale
choice already hinted at. Furthermore, the Pia (z) denote the O() terms of the splitting
kernels at next-to-leading order and originate from the way collinear divergences are
treated in the M S-scheme. They read
Pqq (z) = − CF (1 − z)
Pgq (z) = − CF z
(5.69)
Pqg (z) = − 2TR z(1 − z)
Pgg (z) = 0 .
These terms as well as the ones proportional to π 2 /6 should not come as a surprise.
They are nothing but the finite collinear terms that stem from the expansion in the
subtraction, where 1/ poles conspire with terms proportional to in the splitting
functions to yield finite contributions to be absorbed into the PDFs. These terms, the
P , are thus readily identified with, for example, the corresponding terms in fixed-order
calculations.
1 The identifcation of the B (1) terms with the anomalous dimensions is the reason why parton
showers can be shown to provide an approximation to the p⊥ spectrum of colour-singlet objects in
hadron collisions, which is accurate up to the next-to-leading logarithmic level in the Sudakov terms.
Analytic resummation techniques 289
where
3 π2 17 11π 2 1 2π 2
γq(2) = CF2 − + 6ζ(3) + CF CA + − 3ζ(3) − CF TR nf −
8 2 24 18 6 9
8 4
γg(2) 2
= CA + 3ζ(3) − CF TR nf − CA TR nf . (5.71)
3 3
This ultimately results in the expressions known from the literature,2 namely
(2) 2 2 3 11π 2 193
Bqq̄→Z = CF π − − 12ζ(3) + CF CA − + 6ζ(3)
4 9 12
17 4π 2
+ CF TR nf −
3 9
2
(2)H 2 23 22π 2 8π 2 11
Bgg→H = CA + − 6ζ(3) + 4CF TR nf − CA TR nf + − CF CA .
6 9 3 9 2
(5.72)
2 When comparing with literature, care has to be taken, since at this order differences between the
by now customary M¯S-scheme and the DIS-scheme start showing up, leading to some shifts that are
cumbersome to trace.
290 QCD to All Orders
Note that here the loop correction to Higgs boson production has been evaluated in
the mt → ∞ limit.
From this expression all terms at O(αs ) must be subtracted, which are already
present in the resummation part W̃ij , but which do not originate from a genuine higher-
order correction. At NLO, this does not include the generic higher-order corrections
encoded in the terms C (1) and H (1) . While the former originate from the treatment
of collinear divergences at NLO, which are absorbed into the PDFs, the latter are
genuine loop corrections. Therefore both can be ignored for the R(1) . This reasoning
results in the following contributions, to be subtracted:
• terms, where one of the incoming partons remains at its light-cone momentum
fraction, z = 1, and where the PDF evolution of the other parton through the
DGLAP splitting function and the corresponding shift in its momentum is taken
into account. This will yield terms of the form
1
δ(1 − zA ) P (zB ) . (5.76)
Q2⊥
• terms which have their origin in the expansion of the Sudakov form factor to
the first order in αs . With them multiplying a Born-like phase space, they are
proportional to δ(1 − zA )δ(1 − zB ). Therefore, overall, they amount to a term of
the form
δ(1 − zA )δ(1 − zB ) (1) Q2⊥ (1)
A log + B . (5.77)
Q2⊥ Q2
For the example at hand, therefore
2
(1) CF t̂ + û2 + 2m2W ŝ
Rqq̄0 →W = 2 δ(ŝ + t̂ + û − Q2 )
πQ⊥ ŝ
Analytic resummation techniques 291
Q2
−δ(1 − zA )δ(1 − zB ) 2 log 2 − 3
Q⊥
2
2
)
1 + zB 1 + zA
−δ(1 − zA ) − δ(1 − zB ) . (5.78)
1 − zB + 1 − zA +
Contact with the literature, and in particular with [410], where the R(1) term has
been worked out for the first time in the context of QT resummation, is established
by identifying
t̂2 + û2 + 2m2W ŝ = (Q2 − t̂)2 + (Q2 − û)2 , (5.79)
using the Mandelstam identity
m2W ≡ Q2 = ŝ + t̂ + û . (5.80)
The hard remainders for the q q̄-initiated parton-level process are given by
(1) (1) 1 (ŝ + t̂)2 + (t̂ + û)2
Rqg→W = Rq̄g→W = − δ(ŝ + t̂ + û − Q2 )
4π ŝû
z 2 + (1 − zB )2
−δ(1 − zA ) B
Q2⊥
(5.81)
(1) (1) 1 (ŝ + û)2 + (t̂ + û)2 2
Rgq→W = Rgq̄→W = − δ(ŝ + t̂ + û − Q )
4π ŝt̂
2
zA + (1 − zA )2
−δ(1 − zB ) .
Q2⊥
In addition, there will of course be those terms that originate from real-emission
processes initiated by other partons. In the example of W production, such terms
are related to processes with a gluon in the inital state, like qg → W q 0 . Such terms
do not need special care in subtracting out contributions already accounted for in the
Sudakov form factor, since they are simply not present, and only those terms involving
the PDFs must be subtracted.
When going to higher orders n ≥ 2, of course, the picture becomes a bit more
involved. First of all, of course, the finite multiple-emission matrix elements assume
an increasingly complicated structure, eventually mixing loop corrections with real-
emission terms, which will make it harder to identify terms in a straightforward way.
292 QCD to All Orders
This, however, is of course merely a nuisance, since they could in principle be taken
from numerical routines. However, at the same time, in the expansion of the resum-
mation part, the terms A(n−k) and B (n−k) from the Sudakov exponents will combine
with terms of order k in the finite parts, the C (k) and H (k) , which will become process-
dependent.
Another problem, up to now more or less swept under the carpet, needs to be ad-
dressed. It is related to the fact that the Fourier transform to impact parameter space
necessitates an integral over all b⊥ , from zero to infinity. While the short distance
part, b⊥ → 0 is no problem at all, the long-distance part with b⊥ → ∞ is, and for two
reasons. First of all, integrals with infinite integration limits are notoriously unpleas-
ant to deal with and guaranteeing their convergence to the true value is sometimes
not trivial, especially when aiming for high numerical precision. While this is a merely
technical problem, the physical problem in this region is much harder to solve in a sat-
isfying fashion. It is related to the fact that in all integrations, the PDFs are evaluated
at scales µF ∝ 1/b⊥ , which therefore may be taken into unphysical infrared regions
where Q2 → 0. Similarly, in the Sudakov form factor, the strong coupling is evaluated
2 2
at scales k⊥ , but, again, the integration region of k⊥ extends down to values of 1/b2⊥
thus also probing the strong coupling in the infrared regime and eventually hitting the
Landau pole when b⊥ → ∞.
As already indicated in Section 2.3.2, the dangerous region of large b⊥ can be dealt
with by multiplying the resummation part W̃ij with a non-perturbative modification
factor. This is often supplemented by suppressing large values of b⊥ by replacing it in
W̃ij with a modified b∗ defined through
where
b⊥
b∗ = p . (5.84)
1 + (b⊥ /bmax )2
There are different parameterizations for this non-perturbative suppression factor be-
yond the simple Gaussian of Eq. (2.3.2), namely
2
(CSS) 2 Q
W̃ij (b⊥ ) = exp −F1 (b⊥ ) log − Fi/h1 (x1 , b ⊥ ) − Fj/h1 (x2 , b ⊥ )
Q20
(DWS) 2 Q
W̃ij (b⊥ ) = exp −g1 b2⊥ − g2 b2⊥ log
2Q0
(LY) Q
W̃ij (b2⊥ ) = exp −g1 b2⊥ − g2 b2⊥ log − g1 g3 b⊥ log(100x1 x2 )
2Q0
(BLNY) 2 Q
W̃ij (b⊥ ) = exp −g1 b2⊥ − g2 b2⊥ log − g1 g3 b2⊥ log(100x1 x2 ) . (5.85)
2Q0
Analytic resummation techniques 293
The parameters in some of the parameterizations above have been fitted in [716],
resulting in global values of
and in values for the parameters g1 , g2 , and g3 as listed in Table 5.1. The functions F1
(CSS)
and Fi/h in the more general form of the non-perturbative function in W̃ij should,
in principle, be general and would have to be extracted from data. However, by far
and large, this approach has not been followed.
One way to circumvent the problem with the large b⊥ integration comes from the
realization [712] that the exponential leading to the Bessel function can be rewritten
as
Z Z∞
d2 b⊥ ~ ⊥ ) f (b⊥ ) = 1
exp(−i~b⊥ · Q db⊥ b⊥ J0 (Q⊥ b⊥ ) f (b⊥ )
(2π)2 2π
0
Z∞
1
= db⊥ b⊥ h1 (Q⊥ b⊥ , v) + h2 (Q⊥ b⊥ , v) f (b⊥ ) , (5.87)
4π
0
where
Z
−π+ivπ Z
−ivπ
1 −iz sin θ 1
h1 (z, v) = − dθ e and h2 (z, v) = − dθ e−iz sin θ . (5.88)
π π
−ivπ π+ivπ
These functions reduce to the usual Hankel functions H1,2 (z) for z → ∞. They are
finite for all finite values of z and v and fulfil
5.2.2.1 Z production
It is straightforward to arrive at expressions for all terms relevant for the production
of a Z boson in q q̄ annihilation and the respective higher-order corrections: The Born
cross-section for W production given in Eq. (2.67) has to be replaced with the one for
Z production. This can be achieved by adapting couplings, effectively replacing
h 2 i
2 2
2 2
e |Vij | W →Z e 1 − 4|ei | sin θ W + 1 δij
2
gW |Vij |2 = −→ . (5.90)
sin2 θW 4 sin2 θW cos2 θW
All other terms in the resummation part of course are not susceptible to the details
of the gauge boson production, since they merely account for QCD effects in the
Analytic resummation techniques 295
initial state. For the hard remainder, the same holds true, after the couplings in the
real-emission matrix elements have been adapted as above.
Fig. 5.5 p⊥ spectrum of the Higgs boson at the 14 TeV LHC, evaluated in
the m⊥ → inf ty approximation with the HqT code [282, 442], and ignoring
contributions from b-quarks. Black lines correspond to results contributing
to or accurate at NLO+NLL accuracy, while the red lines show results or
contributions to the NNLO+NNLL result. Dashed and dotted lines show
the resummed and fixed-order parts only, while the fully matched results
are shows as straight lines. For all results the CT10 PDF NLO set with
αs = 0.118 has been used, with µQ = µF = µR = mH /2 = 62.5 GeV.
Non-perturbative effects are not included.
For Higgs production through gluon fusion mediated by the effective vertex, the
structure of the result at NLO+NLL is identical to the one for the production of vector
bosons in quark–anti-quark annihilation. The LO cross-section is given by
√
(LO) 2GF αs (m2H )
σgg→H = (5.91)
576π
296 QCD to All Orders
and, of course, it must be convoluted with suitable (gluon) PDFs. Similarly, in the
Sudakov form factor, the terms A and B are the ones for gluon processes, taken from
Eqs. (5.65) and (5.67), and the collinear terms again are those for gluons, this time
taken from the corresponding item in Eq. (5.68). The hard loop correction is taken
from Eq. (5.74). The same strategies already employed in the case of vector boson
productions also will yield the results for the hard remainder terms R.
In Fig. 5.5 results for the p⊥ spectrum of a 125 GeV Higgs boson at the 14 TeV LHC
are exhibited, at NLO+NLL and NNLO+NNLL accuracy and in the large top-mass
limit. The results have been obtained with HqT [282, 442]. The fixed-order correction
in the matched result has two prominent effects: first, it changes the overall cross-
section, and second, it fills the tail of the distribution at scales of the order of half the
Higgs boson mass or above.
Up to now, potentially large logarithms of the type log(Q2X /Q2T,X ) of the transverse
momentum QT,X of a heavy (colour-singlet) system X with mass QX have been re-
summed in impact-parameter or b⊥ -space. This variable essentially emerges from a
Fourier transformation, and is conjugate to the transverse momentum. It has been
introduced to guarantee that the transverse momenta q⊥ of the emitted soft gluons
combine to yield the overall transverses momentum of X such that
X
Q~ ⊥,X = ~q⊥,i . (5.92)
i
As a by-product of this approach, the various pieces in the master equation Eq. (5.59),
and in particular the resummed and finite remainder contributions W̃ij and Yij→X
emerge through a convolution of various contributions with the PDFs, where the nat-
ural scale for the evaluation of the latter is given by 1/b⊥ . This cross-talk induced by
the convolution sometimes renders an analysis of the individual contributions a tricky
task. The somewhat unsatisfying situation can be greatly alleviated by using Mellin
transforms which allow one to rewrite convolutions as simple products. The price3 to
pay for this seemingly superior procedure is the necessity to transform the results,
obtained in a simpler way in Mellin space, back into x-space, which often is highly
non-trivial.
The resummed part of the cross-section for singlet production in Eqs. (2.166) and
(5.59) can be manipulated in such a way that the Bjorken parameters xA and xB are
not fixed anymore. To make contact with the literature, e.g., [436], this can be realized
by integrating over y and considering only dσ/dQ2 . Ignoring the hard remainder,
(res) X Z1 Z 2
Q2 dσAB→X (LO) d b⊥
= πσ̂ij→X dxA dxB J (Q⊥ b⊥ ) W̃ij (b⊥ ; Q, xA , xB ) .
2 0
dQ2⊥ dQ2 ij
(2π)
0
(5.93)
Introducing the rescaled invariant mass variable
Q2
τ = , (5.94)
S
with S the hadronic centre-of-mass energy squared, allows the definition of the N th
moment of the cross section above with respect to τ :
1−2Q
Z ⊥ /Q (res)
Q2 Q2⊥ dσAB→X
ΣAB→X (N ) = dτ τ N . (5.95)
(LO)
πσAB→X dQ2⊥ dQ2
0
This constitutes a Mellin transform of the normalized differential cross section. Among
other convenient factors, it has been multiplied by Q2⊥ to cancel the singular 1/Q2⊥
behaviour. The upper limit of the integral approximates the kinematic boundary for
soft particle emissions. It could be set to one under the assertion that the integrand
does not have any support for larger values of τ . As discussed in some detail in Ap-
pendix A.1.4, the trick about using such Mellin moments is that the cross-section
neatly factorizes into separate contributions from PDFs and partonic cross-sections
without complicated convolutions of both:
" #
X
ΣAB→X (N ) = fi/A (N, µF )fj/B (N, µF ) Σ̂ij (N ) . (5.96)
ij
Here the partonic part Σ̂ij (N ) collects all the terms in the Sudakov form factor and
the collinear functions. After the integration over the angle between ~b⊥ and Q ~ ⊥ it is
thus given by
X Z b⊥ db⊥
∞
Σ̂ij (N ) = Q2⊥ J0 (b⊥ Q⊥ ) Cia (N, αs (b20 /b2⊥ )) Cjb (N, αs (b20 /b2⊥ ))
(2π)
a,b 0
2
ZQ 2
dk⊥ 2 Q2 2
× exp − 2 A(α (k
s ⊥ )) log 2 + B(α (k
s ⊥ ))
k⊥ k⊥
b20 /b2⊥
µ2F
Z
dk⊥2
2 2
− 2 γia (N, αs (k⊥ )) + γjb (N, αs (k⊥ )) ,
k⊥
b20 /b2⊥
(5.97)
298 QCD to All Orders
where J0 (x) is the Bessel function from the angular part of the integral over impact
parameter space.
At this point it is useful to compare the result with the original expression in
Eq. (5.59). Apart from normalization factors, there are a few differences due to the
Mellin transformation. First of all, the convolution of the collinear functions C with
the PDFs is replaced by a simple product of their Mellin transforms, the C(N ) and
fi,j/A,B , and the summation over the Mellin moments. With the PDFs at µF factored
out, their DGLAP evolution to the scale 1/b⊥ is captured by the integral over their
anomalous dimensions, the γ(N ). In b⊥ space this was accounted for by the terms
(1)
proportional to Pia log(b2⊥ µ2F ), cf. Eq. (5.68). In Mellin space these terms are not
present any more. For the case of Drell–Yan-like processes they are therefore given by
(remember i = a = q)
1
Cia (N, µ, αs (b20 /b2⊥ )) = MN C z, , µF
b⊥
Z 1
αs (µ) π2
= dzz N δia δ(1 − z) + −Pia
(z) + δia δ(1 − z)Ca + O αs2
2π 6
0
i=a=q CF αs (µ) 1 π2
−→ 1 + + + O αs2 . (5.98)
2π (N + 1)(N + 2) 6
Note that in the result above the constant terms are at variance with the result quoted
for instance in [436]. This is because there the hard loop contribution has been absorbed
into the collinear functions C. The Mellin transform of this additional part here is
trivial, since the hard loop contribution is proportional to a δ-function in x-space,
which yields unity.
Due to the absence of the analytically hard-to-control convolutions in x-space and
their replacement by mere products due to the Mellin transform, it is now possible
to try to identify logarithmic terms in such a way that the integral over the residual
impact parameter b⊥ can be handled with an increased level of analytic control. An
important step into that direction is to replace the looming logarithms of b⊥ in the
Sudakov form factor, which will need to be integrated over, by logarithms of Q⊥
instead.
To achieve this, the integrand is expanded in a power series of αs , while collecting
all logarithms of the form log(Q2 b2⊥ /b20 ):
Z∞
Q2⊥ X
Σ̂ij (N ) = db⊥ b⊥ J0 (b⊥ Q⊥ )
2π
a,b 0
( ∞ n+1 n m )
XX αs (µ) Q2 b2⊥
× exp n Dm (N, L; a, b) log 2 .
n=1 m=0
2π b0
(5.99)
The term L = log(Q2 /µ2F ) stems from the second term in the exponential, by moving
Analytic resummation techniques 299
the upper limit from µ2F to Q2 to combine it with the original Sudakov form factor
terms. Typically, however, this is not a large logarithm in singlet production. The other
logarithms, log(Q2 b2⊥ /b20 ), in contrast are potentially large, since, after integration over
b⊥ they will result in logarithms of the form log(Q2 /Q2⊥ ). They stem from the Sudakov
form factor and the terms ∝ γai (N ) contribute some of the sub-leading logarithms.
Up to second order in αs , the n Dm , with dependence on the parton flavours a and
b implicitly understood, are given by
1 (1)
1 D2 (N, L) = − A
2
1 D1 (N, L) = − B (1) − 2γ (1) (N )
d[xJ1 (x)]
= xJ0 (x) (5.102)
dx
between the Bessel functions and through integration by parts. The boundary terms
for large impact parameters vanish due to the exponential dampening encoded in the
Sudakov form factor, and therefore, after substituting x = b⊥ Q⊥
Z ∞
Q2 X
Σ̂ij (N ) = − ⊥ dxJ1 (x)
2π
a,b 0
( ∞ n+1 n m )
d2 XX αs (µ) Q2 x2
· exp n Dm (N, L) log 2 2 .
dQ2⊥ n=1 m=0
2π Q⊥ b0
(5.103)
under integration. With this in mind — and ignoring the residual sub-leading terms
in the exponential — the integral over x reduces to unity and the cross-section can
be written purely in Q⊥ space. This basically amounts to the replacement of the
b⊥ –integrated Sudakov form factor
2
Z∞ ZQ 2
2
dk⊥ Q (non−pert.)
db⊥ b⊥ J0 (b⊥ Q⊥ ) exp − 2 A log 2 + B W̃ij (b⊥ ) (5.105)
k⊥ k⊥
0 b20 /b2∗
Here, the Q⊥ -space parameters à and B̃ coincide up to NLL accuracy with their b⊥ -
space counterparts and the relative differences are exactly calculable. However, despite
its elegance this approach ultimately could not be extended to accuracy higher than
NLL and it therefore was not further pursued.
which encodes a logarithmic enhancement in its soft part ∝ 1/(1 − z)+ , when z → 1.
The terms giving rise to these logarithms are typically of the form
" #
k
log (1 − z)
αsn . (5.107)
1−z
+
Z1 Z1
dσAB→X Q2
= dτ dxi dxj fi/A (xi , µF )fj/B (xj , µF ) δ τ −
dQ2 Sxi xj (5.108)
0 0
(thres)
· Wij (τ, Q, µR , µF ) ,
√
where Q is the mass of the system and S the hadronic centre-of-mass energy. As
before, the renormalization and factorization scales are given by µR and µF , and
(thres)
Wij (τ, Q, µR , µF ) encodes the hard partonic cross-section, which is calculable in
perturbation theory. In contrast to the function W̃ij from QT -resummation, no large
(thres)
logarithms explicitly show up in Wij . Instead, they emerge through convolution
(thres)
with the PDFs. Their origin are terms inside Wij which become singular in the
limit τ → 1 or z → 1 and diverge like αsn log2n−1 (1 − z)/(1 − z). Sub-dominant terms
come with smaller exponents for the logarithm. Contributions from initial-state gluons
splitting into quarks or initial quarks radiating a quark play no role here, since only
soft gluon emissions feature the 1/(1 − z) term in the splitting function.
After a Mellin transformation, the relevant term reads
h i Z1
(thres) (thres)
MN Wij (τ, Q, µR , µF ) ≡ MN [Wij ] = dτ τ N Wij (τ, Q, µR , µF ) ,
0
(5.109)
with µR = Q as the usual choice. In the limit of soft gluon emissions,
Q2
τ −→ ≈ 1 (5.110)
Sxi xj
and therefore the behaviour of MN [Wij ] is defined by its limit for large N .
302 QCD to All Orders
cf. Eq. (2.14), where is the energy of the emitted gluon. Soft-gluon unitarity also
fixes the virtual contribution w0 through
Z
w0 + dw(k) = 0 . (5.112)
The overall contribution to first order in αs from the eikonals therefore reduces to a
correction factor Weik to the cross-section given by
Z
Weik = (1 + w0 ) δ(1 − τ ) + dw(k) δ 1 − τ − (5.113)
E
where E is the energy of the incident quark. Taking the moments of this term yields
(thres)
the Mellin transform of Wij to first order in αs ,
2 2
Z1 (1−z)
Z Q
h i(1) 4CF zN − 1 dk⊥2
(thres) 2
MN Wij = dz 2 αs (k⊥ ) . (5.114)
2π 1−z k⊥
0 (1−z)Q2
Here 1 − z = /E has been identified and, as before, k⊥ denotes the transverse mo-
mentum. It is related to the momentum transfer along the incoming hard propagator,
which for emissions off parton a is given by
2
k⊥
q 2 = |(pa − k)2 | ≈ . (5.115)
1−z
The 1 of the numerator (z N − 1) in Eq. (5.114) accounts for the virtual contribution.
As the argument of the running coupling the transverse momentum of the emitted
Analytic resummation techniques 303
gluon is being used, i.e. (1 − z)q 2 . This choice is similar to the ones made before,
resumming leading IR singularities.
The phase space available for the emissions is defined by the maximal interval of
virtualities allowed for the intermediate propagator — the emission of the gluon with
energy fraction (1 − z) induces a maximal virtual mass of (1 − z)Q2 in the propaga-
tor. The lower limit is related to the regularization of the collinear singularities. For
inclusive processes, where no additional conditions are imposed on, e.g., the rapidity
of the final state, the largest scale available in the process, namely Q2 , serves as a
meaningful cut-off.
Z1 Z1
4CF αs zN − 1 4CF αs log(1 − z)
= dz log(1 − z) = dz
2π 1−z 2π 1−z +
0 0
( ) 2
4CF αs
= ψ 0 (N ) + ζ(2) + ψ(N ) + γE . (5.116)
4π
Alternatively, directly taking the large-N limit of the integrand in Eq. (5.114) amounts
to replacing
1
zN − 1 ≈ Θ 1 − −z , (5.118)
N
thereby constraining the phase space for the energy integral. This results in
h i(1,alt) Z1
(thres) 4CF αs log(1 − z) 1
MN Wij ≈ dz Θ 1− −z
2π 1−z N
0
1
1− N
Z
4CF αs log(1 − z) 4CF αs
= dz = log2 N , (5.119)
2π 1−z 4π
0
304 QCD to All Orders
the leading logarithmic term. These terms can be exponentiated, to yield the resummed
result at leading logarithmic accuracy,
h i(LL)
(thres) CF αs π 2 2
MN Wij = exp + (log N + γE ) . (5.120)
π 6
For sub-leading contributions, the running of αs has to be taken into account and the
global soft enhancement terms proportional to the soft pole in the splitting function
must be included as well. In addition, at higher order, i.e., starting at O αs2 new
structures are emerging which reflect the fact that the emitted gluons skew the original
colour flow by adding new directions for eikonals. Such contributions are encoded in
2
terms D(k⊥ ). For cases, where there are also coloured particles in the final state, of
course, also B(k⊥ ) terms would appear. They are absent for initial-state emissions
only, since they relate to the non-soft remainders of splitting functions, once the soft,
eikonal-type terms are subtracted. As such they have of course no support in the
drastically reduced initial-state phase space which actually gives rise to the threshold
logarithms in the first place. In other words, the phase-space constraint effectively
squeezes them out in the initial state. Ignoring the case of coloured final-state particles
and concentrating on colour-singlet production, therefore
2 2
Z1 (1−z)
Z Q
h
(thres)
i zN − 1 dk⊥2
2 2
MN →∞ Wij = exp dz 2 A(k ⊥ ) + D(k ⊥ ) ,
1−z k⊥
0 2(1−z)Q
(5.121)
similar to the case of QT resummation, but with the B terms replaced by the D terms.
To further stress the analogy, also for threshold resummation the functions A and
D can be expanded in powers of the strong coupling as
∞
X n ∞
X n
αs (µ2 ) αs (µ2 )
A(µ2 ) = A(n) and D(µ2 ) = D(n) (5.122)
n=1
2π n=2
2π
with
To obtain actual physical results in x-space from the Mellin transforms the back-
transformation must be invoked, translating the expressions in terms of Mellin mo-
ments N back to centre-of-mass energies.
|f i = Ŝ|ii . (5.125)
or, schematically,
iT̂ † T̂ = T̂ − T̂ † = Im(T̂ ) . (5.128)
implies " #
X X X
Tf i − Tif∗ = i (2π) δ 4 4
pµn − pµi Tf∗n Tni (5.130)
n
summing over all possible states |ni. Equating initial and final state — tacitly assuming
that in fact the scattering processes discussed here have two initial-state particles, the
two initial hadrons — and making contact with observables yields
Analytic resummation techniques 307
" #
1 X X X µ 1
∗
σtot = (2π)4 δ 4 pµn − pi Tin Tni ∝ Im(Tii ) , (5.131)
2s n 2s
the imaginary part of the forward elastic amplitude. It has to be forward and elastic
since the identity |f i = |ii also implies an identity of momenta. This relation is at the
heart of the Cutkosky rules.
Denoting elastic 2 → 2 scattering amplitudes, e.g. qq → qq, qq 0 → qq 0 , or gg → gg,
with A(ŝ, t̂), this means that
connecting the imaginary part of the amplitude with its s-channel discontinuity
Z0 Z∞
ds0 Disc[A(s0 , t̂)] ds0 Disc[A(s0 , t̂)]
A(ŝ, t̂) = + . (5.135)
2πi s0 − ŝ 2πi s0 − ŝ
−∞ −t̂
This dispersion relation can also be rewritten as an integral in the plane of the (po-
tentially complex) cosine of the scattering angle,
2ŝ
zt = − cos θt = − 1 + , (5.136)
t̂
namely,
Z−1 Z∞
dzt0 Disc[A(zt0 , t̂)] ds0 Disc[A(zt0 , t̂)]
A(ŝ, t̂) = + . (5.137)
2πi zt0 − zt 2πi zt0 − zt
−∞ 1
This form documents the impact of the unphysical region of the scattering on the
amplitude, which is entirely a consequence of the analytic properties of the S matrix:
the integral is over the unphysical region of the scattering, i.e. outside the interval
zt0 ∈ [−1, 1]. This consequently implies that all discontinuities or singularities in the
scattering amplitude, those that impact on the cross-section, are in the unphysical
regime of the scattering.
308 QCD to All Orders
Assume that the amplitude can be decomposed according to a partial wave expan-
sion, X
A(ŝ, t̂) = (2l + 1)Al (ŝ, t̂)Pl (zt ) , (5.138)
l
with the Pl the usual Legendre polynomials. Using Eq. (5.137) and the Legendre
function
Z1
0 1 dz 0
Ql (z ) = Pl (z) (5.139)
2 z0 − z
−1
Here L is the related to the overall parity of the amplitude. In Section 7.1 the
Sommerfeld–Watson transformation will be introduced in more detail. Here it
suffices to state that it essentially replaces the sum over discrete values l of angu-
lar momentum parameter with an integral, thereby rendering l a continuous complex
parameter. After the corresponding transformations the equations above, in the high-
energy limit
2ŝ
zt −→ − −→ ∞ , (5.141)
t̂
ultimately become
Z
δ+i∞ Z∞
dl dzt0 Pl (−zt )Ql (zt0 )
A(ŝ, t̂) = (2l + 1) 1 + (−1)l+L Disc[A(zt0 , t̂)]
2i 2πi sin(πl)
δ−i∞ 1
Z
δ+i∞
1 (−1)l + (−1)L ly
−→ − dl e Fl (t̂) . (5.142)
4π sin(πl)
δ−i∞
have been used. The relevant quantity thus is the Laplace transform
Z∞
Fl (t̂) = dye−ly Disc[A(zt0 , t̂)] (5.144)
0
Analytic resummation techniques 309
of the discontinuity, with the Laplace parameter y = log(zt /2). This quantity will be
evaluated later on.
To start the discussion of how the large logarithms log(t̂/ŝ) emerge in perturbative
QCD it is useful to reformulate the kinematics of elastic parton–parton scattering such
that relevant limits can be inspected in a transparent way. A very handsome tool for
this purpose is the Sudakov or light-cone decomposition, in which momenta are
written as
pµ = αP+µ + βP−µ + pµ⊥ (5.145)
with two light-cone axes P± and the assumption that the transverse plane indicated
with the subscript ⊥ is orthogonal to both these axes. For the discussion of cross-
sections at collider experiments, it is sensible to choose these two axes as the beam
axes PA and PB . In the case of parton elastic scattering, e.g. qq 0 → qq 0 , the incoming
momenta are then given by
√ √
pa = s xA , 0; ~0 and pb = s 0, xB ; ~0 , (5.146)
where k⊥,i is the absolute value of ~k⊥,i . In these coordinates, the metric tensor is
decomposed into longitudinal and transverse components according to
d3 k d2 k⊥ dy
= . (5.149)
(2π)2 (2E) (2π)2 4π
y0 + y1 1
ȳ = = log(xA /xB ) (5.150)
2 2
as the rapidity of the overall system and the rapidity distance of the outgoing partons
from the centre-of-mass rapidity
310 QCD to All Orders
y0 − y1
y∗ = (5.151)
2
allows the rewriting
k⊥ 2k⊥ ȳ
xA = √ (ey0 + ey1 ) = √ e cosh y ∗
s s
k⊥ 2k⊥ −ȳ
xB = √ e−y0 + e−y1 = √ e cosh y ∗ . (5.152)
s s
CF2 y∗ 2
∗
1 2 CF2 ŝ2 + û2 CF2 4 cosh2 y ∗ + e2y
|Mqq 0 →qq 0 | = = = e cosh y ∗ .
(4παs )2 4 t̂2 4 e−2y∗ 4
(5.154)
Therefore,
2
dσ̂qq0 →qq0 (4παs )2 |Mqq0 →qq0 | πCF2 αs2 2y∗
= = 2 e . (5.155)
dt̂ 16πŝ2 64k⊥
The factorization formula, Eq. (2.52), relates the matrix element to the corre-
sponding fully differential cross-section, in the case of elastic quark–quark scattering:
Z1 Z1
σ = dxA dxB fq/A (xA , µ2F )fq̄/B (xB , µ2F ) σ̂qq0 →qq0 µ2F ; µ2R . (5.156)
ζA ζB
As usual, at leading order the parton distribution function fa/A (xA , µ2F ) param-
eterizes the probability to find a parton of type a in the beam particle A, with a
line-cone momentum fraction xA , and at the factorization scale µF . In order to
actually detect the two outgoing partons, they must carry some minimal energy and
momentum, thus implying a minimal momentum fraction ζ w.r.t. each of the beams.
Further emissions of course would make this picture more complicated and the trivial
leading order partonic cross-section here would start to develop a more complicated
kinematic dependence. For example, the integration over the x would allow further
real emissions to be included in the partonic cross-section σ̂, which would need more
energy and momentum than the ζ which define the kinematics of the two outgoing
quarks, and a dependence on the ratios ζA,B /xA,B would emerge. However, using the
relations above, Eq. (5.156) can be cast into
dσqq0 →qq0 2 2
2 dy dy = xA fa/A (xA , µF )xB fb/B (xB , µF )
dk⊥ 0 1
παs2 2y∗
= xA fa/A (xA , µ2F )xB fb/B (xB , µ2F ) 2 e . (5.157)
36k⊥
Analytic resummation techniques 311
This becomes large for large rapidity distances y ∗ . In this limit, ŝ ≈ −û −t̂, and
in fact,
∗ 1 ŝ
y → log − , (5.158)
2 t̂
indicating that the large rapidity distance limit is identical to the high-energy limit.
Replacing û with −ŝ and retaining only the leading terms of order ŝ2 /t̂2 , stemming
from gluon exchange in the t-channel, yields the following squared matrix elements in
the high-energy limit:
a −2ipµb pνa a
M = ūi (k0 )(−igs Tij )γµ ūj (pa )
qq 0 →qq 0 ūk (k1 )(−igs Tkl )γν ūl (pb ) .
ŝt̂
(5.161)
Squaring the amplitude and summing (averaging) over final (initial) state colours and
spins yields
4
a b 2
Mqq0 →qq0 2 = 4gs Tr T T 16 2(k0 pb )(pa pb ) [2(k1 pa )(pa pb )]
4·9 ŝ2 t̂2
4
ab 2
g TR δ 4û2 8gs4 û2 8gs4 ŝ2
= s = ≈ (5.162)
9 t̂2 9 t̂ 2 9 t̂2
reproducing as anticipated the matrix element in the high-energy limit.
A similar treatment can be applied to gluon-gluon scattering, which is dominated
in the high-energy limit by the t-channel diagram shown in the upper left of Fig. 5.6,
Mgg→gg = igs f ad0 c gµa µ0 (pa + k0 )ξ + gµ0 ξ (−k0 + q)µa + gξµa (−q − pa )µ0
·igs f bd1 c gµb ζ (pb − q)µ1 + gζµ1 (q + k1 )µb + gµ1 µb (−k1 − pb )ζ
−ig ξζ µa ∗
· · λa (pa )µλbb ∗ (pb )µλ00 (k0 )µλ11 (k1 )
q2
Analytic resummation techniques 313
ζ ξ
2pa pb
≈ −i 2gs f ad0 c gµa µ0 pξ,a 2gs f bd1 c gµ1 µb pζ,b · µλaa ∗ λµbb ∗ µλ00 µλ11
ŝt̂
2ŝ
≈ −igs2 f ad0 c f bd1 c gµa µ0 gµ1 µb · λµaa ∗ µλbb ∗ µλ00 µλ11 . (5.163)
t̂
Here the first two lines encode the triple-gluon vertices, q = pa − k0 with t̂ = q 2
denoting the momentum transfer along the t-channel propagator. In the last line before
the polarization vectors the metric tensor is already rewritten in the axial gauge,
omitting sub-leading or vanishing contributions following the same logic as above. To
be more specific, in the limit of small momentum transfer pa ≈ k0 and terms such as
k0 · (pa ) or pa · (k0 ) therefore are small and can be neglected. This reasoning, again,
reduces the triple-gluon vertices to helicity-conserving eikonal terms to be convoluted
with a single light-cone polarization of the t-channel gluon.6 These manipulations of
the elastic gluon–gluon amplitude, namely the restriction on t-channel exchange only,
supplemented with rewriting the polarization vectors and the metric tensor, violate
gauge invariance. This is easy to see by replacing any of the polarization vectors,
for instance (pa ), with the corresponding momentum (pa in the example) and by
realising that this will not make the amplitude in Eq. (5.163) vanish. In order to
restore full gauge invariance, all contributions to the amplitude and all diagrams, i.e.,
s- and u-channel exchange and the four–gluon vertex, must be included. The complete
gauge-invariant set is shown in Fig. 5.6.
However, when consistently working in the high-energy limit and in a physical
gauge, the amplitude above is sufficient. With the squared colour factor given by
0 0
2
f ad0 c f bd1 c f d0 ac f d1 bc = CA Nc2 − 1 (5.165)
6 In order to understand this in more detail, the sum over the physical helicities λ of the external
gluons is cast into
0 0 0
!
X µ ∗µ0 µν nµ pµ + nµ pµ n2 pµ pµ
λ (p)λ (p) = − g − + ,
λ
(n · p) (n · p)2
where n denotes an arbitrary four-vector, acting as the gauge vector in axial gauge. This vector can be
chosen in a convenient way. For example, by setting n = pb for the incoming gluon a with momentum
pa the sum above becomes
0
µ0 µ
X µ ∗µ0 0 pµ pµ
a + pb pa 0
λa (pa )λa (pa ) = − g µµ
−2 b ≡ δ µµ ,
⊥
λ
ŝ
a
effectively the δ over the transverse polarizations from above. Contracting the polarization sums of the
incoming gluon pa and the outgoing gluon k0 with a metric tensor yields the two physical polarizations
plus terms that vanish in the high-energy limit t̂/ŝ → 0, thus explicitly encoding helicity conservation
in this limit:
X µ
a ∗µ0a X µ ∗µ00 t̂
gµa µ0 gµ0 µ0 λa (pa )λa (pa ) 0
λ (k0 )λ (k0 ) = 2 1 + O . (5.164)
a 0
λ λ
0 0 ŝ
a 0
This means that in the high-energy limit the Lorentz structures of the triple gluon vertices can be
simplified such that only the terms with the metric tensor between incoming and outgoing gluons
remain.
314 QCD to All Orders
and the relation in Eq. (5.164), the summed and averaged amplitude squared reads
2 4 2 4 2
Mgg→gg 2 = 4CA gs ŝ = 9gs ŝ , (5.166)
2
Nc − 1 t̂ 2 2 t̂2
as expected.
Turning this into a partonic cross-section necessitates folding the matrix elements
squared with suitable phase-space elements; for a two-body final state
2 2
dy0 dk⊥,0 dy1 dk⊥,1
dΦ2 = · (2π)4 δ 4 (pa + pb − k0 − k1 )
4π(2π)2 4π(2π)2
2
1 cosh(y0 − y1 ) + 1 dk⊥,0 2
dk⊥,1 2
= (2π) 2 2 ~
δ k⊥,0 + ~k⊥,1 ≈ 1 dk⊥ ,
2ŝ sinh(y0 − y1 ) (2π)2 (2π)2 2ŝ (2π)2
(5.167)
where the large-energy limit has been used, i.e., a strong ordering of rapidities y0 y1 ,
leading to the ratio involving the hyperbolic functions to approach unity. In addition,
the conservation of transverse momentum leads to the k⊥,0 = k⊥,1 = k⊥ . Conse-
quently, the differential partonic cross-section reads
2
dσ̂ M
2 = . (5.168)
dk⊥ 16πŝ2
and in this limit the scales in the propagators are driven by the transverse compo-
nents of their four-momenta only. Of course, the same reasoning applies for the next
propagator, q2 = pa − k0 − k1 , such that, in general
The 2 → 3 gluon scattering amplitude in the high-energy limit, cf. the left diagram in
Fig. 5.7, is given by
Mgg→ggg
1
= 2igs f ad0 c1 gµa µ0 pξa1
t̂1
· −igs f c1 c2 d1 gξ1 ξ2 (q1 + q2 )µ1 + gξ2 µ1 (−q2 + k1 )ξ1 + gµ1 ξ1 (−k1 − q1 )ξ2
1
· 2igs f ad2 c2 gµa µ0 pξb2 , (5.176)
t̂2
where the first and the last line are the by-now familiar eikonal factors for the emission
of a soft gluon off the incident gluon line, and the second line stands for the additional
triple-gluon vertex sandwiched between the two t-channel propagators. Multiplying
this term with the “pending” four-momenta from the external vertices and suitably
normalising allows to define an effective vertex, namely
316 QCD to All Orders
Mgg→ggg = 2iŝ µa ∗ (pa )µb ∗ (pb )µ0 (k0 )µ1 (k1 )µ2 (k2 )
ad0 c1 1 c1 d1 c2 1 ad0 c1
× igs f gµa µ0 igs f Cµ1 (q1 , q2 ) igs f gµa µ0 . (5.179)
t̂1 t̂2
The Lipatov effective vertex is manifestly gauge-invariant, as can be seen by con-
tracting it with k1 instead of the polarization vector of the gluon. This allows simple
contraction of this effective vertex through the ordinary metric tensor for (k1 ) when
squaring the amplitude, leading to
µ1 2 t̂a1 t̂b1 2t̂1 t̂2 ŝ
C Cµ1 = (q1 + q2 )⊥ − + t̂2 + t̂1 +
2ŝ t̂a1 t̂b1
Analytic resummation techniques 317
2 2 2 2 2
k⊥,1 q⊥,1 q⊥,2 4q⊥,1 q⊥,2
≈ 2~q⊥,1 ~q⊥,2 − −2 2 = 2 (5.180)
2 k⊥,1 k⊥,1
This ultimately allows the high-energy amplitude squared to be cast into the form
3 6
ŝ2
Mgg→ggg 2 = 16CA gs . (5.182)
2 2 k2
Nc2 − 1 k⊥,0 k⊥,1 ⊥,2
In fact this result is reproduced, in the limit of strong rapidity ordering, by the exact
squared amplitude for gg → ggg,
X X 1
Mgg→ggg 2 = 4(παs CA )3 ŝ4ij , (5.183)
exact ŝa0 ŝ01 ŝ12 ŝ2b ŝba
i>j non−cycl
where i, j ∈ {a, 0, 1, 2, b} and the second sum goes over all non-cyclic permutations of
this set [450].
The approximated result can also be compared with the corresponding gg → gg
amplitude squared which reads
2 4
ŝ2
Mgg→gg 2 = 4CA gs , (5.184)
2 2 2
Nc − 1 k⊥,0 k⊥,1
cf. Eq. (5.166) and replacing t̂2 in the denominator with k⊥,02 2
k⊥,1 , following the logic
of the high-energy approximation. This gives rise to the — correct — suspicion that
additional gluons emitted along the t-channel ladder will lead to factors 4CA gs2 /k⊥,i 2
it becomes apparent that there is not explicit rapidity dependence on the emission of
the extra gluon; relieving the strong-ordering condition it will be emitted with a flat
probability anywhere between the two most forward gluons, 0 and 2. This indepen-
dence will remain also for further gluons, with the only constraint imposed by rapidity
ordering; other than that, the probabilities for gluon emissions will be flat over rapidi-
ties. This maybe surprising pattern will change when sub-leading corrections are taken
into account, a complication well beyond the scope of the discussion here. Following
318 QCD to All Orders
where |y0 − y2 | is the rapidity interval gluon 1 is allowed to populate, and φ02 is the
2
azimuthal angle between the gluons 0 and 2. With rapidities given by log(ŝ/k⊥ ) the
logarithmic enhancement of the production cross-section of three gluons with respect
to the production of two gluons at the same rapidities y0 and y2 becomes manifest.
This provides motivation for trying to resum this new class of large logarithms, which
have their origin in rapidities (or as log(1/x)) in contrast to the usual collinear double
and single logarithms that are resummed by the DGLAP equation.
µb0 µb µb00 µb µb00 µb0 µb00 µb0 µb
× g (l − 2pb ) +g (pb + l) − 2g l
)
× g µ100 µ1 (pa − k0 + l + k1 )µ10 + g µ1 µ10 (l − k1 − pb )µ100
To obtain the expression in the second line, the loop momentum l has been decomposed
according to
ŝ
lµ = αpµa + βpµb + l⊥ µ
and d4 l = dαdβd2 l⊥ . (5.190)
2
In the final expression for I above, the actual pole structure of the propagators has
been made explicit, since this allows the analytic structure to be analysed and ulti-
mately the integrations to be performed. This is more or less straightforward, since
closer inspection reveals that the α integration of the integral in Eq. (5.189) is not too
hard: all propagators apart from the third one will have a pole in the lower complex
half-plane of α and the α integration may easily be performed in the upper half-plane:
Z
2 dβd2 k⊥ 1 1 1
I ≈ −2iŝ 2 + iε · k 2 + iε · (q − k )2 + iε , (5.191)
(2π)4 βŝ − k⊥ ⊥ ⊥ ⊥
320 QCD to All Orders
2
leaving a logarithmic integral over β in the range β ∈ [k⊥ /ŝ , 1], resulting in
which encodes the residual loop integration in the transverse plane. This function
could be regularized through an infrared cut-off µ, resulting in
αs CA q2
α(t̂) ≈ − log ⊥2 , (5.194)
4π µ
showing that the amplitude is doubly logarithmic-divergent.
However, the second, crossed diagram in Fig. 5.8 needs to be considered as well.
This is relatively straightforward, since the dominant term can be obtained by just
replacing ŝ −→ û, and using that û = −ŝ − t̂ ≈ −ŝ in the usual limit t̂ → 0. The
overall amplitude thus reads
16παs ŝ 0 0 0
M ≈ − α(t̂) g µa µ0 g µb µ1 µa ∗ µb ∗ µ0 µ1 f aa c f a d0 c
C −t̂
A
ŝ cb0 b c0 d1 b0 ŝ c0 b0 b cd1 b0
× log f f − log + iπ f f . (5.195)
−t̂ −t̂
It is interesting to note, though, that in the calculation here, self-energy and vertex-
correction diagrams have been omitted: their dominant contribution would consist of
a running of αs , which, of course, could be inserted “by hand” as well.
However, this leaves some colour algebra as a final task. In the final step for the
virtual amplitude, the leading contribution corresponding to one single simple colour
structure will be extracted. To this end, it is useful to recall that the underlying
problem is the determination of colour structures in the scattering of two gluons,
of 8 ⊗ 8 in the adjoint representation of QCD. It is important to remember that
the generators of the adjoint representation, 8, indeed are anti-symmetric. Broadly
speaking, therefore, two kinds of structures emerge, symmetric and asymmetric ones,
given by
[8 ⊗ 8]S = 1 ⊕ 8S ⊕ 27
[8 ⊗ 8]S = 8A ⊕ 10 ⊕ 10 . (5.197)
Analytic resummation techniques 321
The products of structure constants f abc of the results are mapped onto these struc-
ad0
tures through projectors P̂bd1
, given by
ad0 1
P̂bd (1) = δ ad0 δbd1
1
Nc2 − 1
ad0 1 acd0
P̂bd (8A ) = f fbcd1
1
Nc
ad0 1 a d0 1 acd0
P̂bd (10 ⊕ 10) = δ b δ d1 − f fbcd1 , (5.198)
1
2 Nc
while the other symmetric structures do not contribute any logarithmically enhanced
terms, since the colour structures between s- and u-channel are antisymmetric and
the two contributions differ only by finite terms. Furthermore, the projector for 10 ⊕
10 cancels exactly when contracted with the colour structures in the result above,
Eq. (5.195). This leaves, at leading colour, only the anti-symmetric octet contribution,
thereby effectively reducing the result to
ŝ ŝ µa µ0 µb µ1 acd0 bcd1
M ≈ −8παs α(t̂) log g g f f . (5.199)
−t̂ −t̂
5.2.5.8 Putting it all together: the 2 → n gluon amplitude
..
.
1
× exp α(t̂n+1 )(yn − yn+1 )
t̂n+1
× igs f cn bdn+1 gµb µn+1 µb ∗ (pb )µn+1 (kn+1 ) . (5.201)
Apart from the exponential factors attached to the t-channel gluon propagators re-
summing the leading virtual correction to all orders, this constitutes a straightforward
generalization of the 2 → 3 gluon amplitude in Eq. (5.179).
In the next step the discontinuity of the elastic amplitude has to be evaluated. In this
setup this translates to applying a cut on the s-channel gluons k, equivalent to putting
them onto their mass-shell, i.e., to replace
i
−→ 2πδ(k 2 ) . (5.202)
k2
Thereby the (n + 1)-loop integral is effectively replaced with an integral over the
(n + 2)-particle phase space:
n Z
!
Y n+1
d4 ki Y Y Z dyi d2 ki,⊥
n+1 n+1
X
2 4 4
Φn+2 = (2π)δ(kj ) = (2π) δ p+ pb − ki .
i=0
(2π)4 j=0 i=0
4π (2π)2 i=0
(5.203)
Being interested in the amplitude in the multi–regge regime, this can be recast into a
form, where the conservation of longitudinal light-cone momentum is guaranteed by
the two most forward outgoing gluons,
Analytic resummation techniques 323
Z " n Z # n+1
!
1 d2 k0,⊥ Y dyi d2 ki,⊥ d2 kn+1,⊥ X
Φn+2 = (2π)2 δ 2 ki,p⊥ . (5.204)
2ŝ (2π)2 i=1 4π (2π)2 (2π)2 i=0
To obtain the discontinuity of the elastic gluon scattering amplitude with the exchange
of two t-channel gluons, one must sum over all 2 → (n + 2) gluon amplitudes in
Eq. (5.201), which ultimately translates into
0 0
Disc[iMaba b
µa µb µa0 µb0 (ŝ, t̂)]
∞ Z
" n Z # n+1
!
X 1 d2 k0,⊥ Y dyi d2 ki,⊥ d2 kn+1,⊥ X
= 2 2 2
(2π)2 δ 2 ki,p⊥
n=0
2ŝ (2π) i=1
4π (2π) (2π) i=0
ad0 c1 c01 d0 a0
× −2ŝδµa µa0 gs f −2ŝδµb µb0 gs f
1 1 0
× exp α(t̂1 )(y0 − y1 ) 0 exp α(t̂1 )(y0 − y1 )
t̂1 t̂
1
c1 d1 c2 c01 d1 c02 µ1
× igs f Cµ1 (q1 , q2 ) (igs f )C (q − q1 , q − q2 )
..
.
0 0
× igs f cn dn cn+1 Cµn (qn , qn+1 ) (igs f cn dn cn+1 )C µn (q − qn , q − qn+1 )
1 1
× exp α(t̂n+1 )(yn − yn+1 ) 0 exp α(t̂0n+1 )(yn − yn+1 )
t̂n+1 t̂n+1
0 0
× (igs f bdn+1 cn+1 )(igs f cn+1 dn+1 b ) , (5.205)
Projecting on colours — octet vs. singlet exchange along the two-gluon ladder — yields
colour factors C,
Nc for singlet
C = (5.207)
Nc /2 for octet
such that in total
∞ Z "Yn
# Z n+1
0 0 X dy i
Y d2 qj,⊥
Disc[iMaba b
µa µb µa0 µb0 (ŝ, t̂)] = 2iŝ (−gs2 C)n+2
4π (2π)2
n=0 i=1 j=1
324 QCD to All Orders
"n+1 #" n #
Y 1 0 Y
× 0
e[yk−1 −yk ][α(t̂k )+α(t̂k )] 2K(qm , qm+1 ) .
k=1
t̂k t̂ k m=1
(5.208)
At this point the Laplace transform prepared in Eqs. (5.142) and (5.144) allows to
disentangle the multiple integrals in a very convenient way. This is achieved by
1. assuming a strong ordering of the rapidities, yi yi+1 , thereby enforcing the
kinematical situation of the high-energy limit, cf. Eq. (5.171),
2. integrating over rapidity differences yi − yi+1 instead of rapidities, and
3. using the overall rapidity difference y0 − yn+1 as a Laplace parameter.
Then, the Laplace transform reads
∞
"Z n+1 #
X Y d2 qi,⊥
2
Fl (t̂) = −2i(4παs C) t̂
n=0 i=1
(2π)2
Y n
1 1
× 0 l − 1 − α(t̂ ) − α(t̂0 )
(−2αs C)K(qj , qj+1 )
j=1
t̂ j t̂ j j j
1 1
× , (5.209)
t̂n+1 t̂0n+1 l − 1 − α(t̂n+1 ) − α(t̂0n+1 )
which yields
Analytic resummation techniques 325
1
floct (q1 , t̂) = floct (k, t̂) = , (5.213)
l − 1 − α(t̂)
where the term α(t̂) in the denominator stems from the integral. Pushing this through
the Laplace transform, the amplitude for octet exchange is duly obtained,
πα(t̂) ŝ 1+α(t̂)
Moct
gg→gg (ŝ, t̂) = 4παs Nc 1+e iπα(t̂)
. (5.214)
sin[πα(t̂)] −t̂
The exponent stems from the pole at l = 1 + α(t̂) in the complex angular momentum
plane, which essentially yields the spin of the exchanged object. In Section 7.1.2 objects
exchanged in the t-channel will be dubbed “reggeons”, and the exponent indicates
their “equivalent” spin — essentially the intrinsic angular momentum related to their
exchange which, of course, does not need to be half-integer or integer. Using the fact
that α(t̂) = 0 for t̂ = 0 shows that the reggeized gluon indeed has spin 1. In the high-
energy limit, where t̂ is small compared to ŝ the amplitude thus can be approximated
as
ŝ α(t̂)(ya −yb )
Moct
gg→gg (ŝ, t̂) = −8παs Nc e (5.215)
t̂
as anticipated.
It is worth noting
that although having started the consideration with
the discon-
tinuity at O αs2 , the overall result for the elastic amplitude is O αs2 . This reflects
the fact that of course the colour-octet exchange amplitude starts with a single gluon
in the t-channel, which is at this very same perturbative order.
After some substitutions, the BFKL equation for the exchange of a colour singlet at
t̂ = 0 is given by
1
(l − 1) f˜lsing (q1 , k) = δ 2 (~q1,⊥ − ~k⊥ )
2 " #
Z 2 2
d q2⊥ 1 q1,⊥
+ 4αs Nc f˜sing
(q2 , k) − 2 f˜sing
(q1 , k) ,
(2π)2 (q1 − q2 )2⊥ l q2,⊥ + (q1 − q2 )2⊥ l
(5.216)
where f˜lsing (q1 , k) is the differential form of the singlet function for t̂ = 0,
Z
d2 k⊥ ˜sing
flsing (q1 , t̂ = 0) = f (q1 , k, t̂ = 0) . (5.217)
(2π)2 l
The homogeneous part of the equation can be solved after a Fourier expansion,
326 QCD to All Orders
∞
X Z∞
q2 k2
f˜lsing (q1 , k) = dν a(ν, n) exp iν log 12 − log 2 + in φ1 − φ ,
n=−∞−∞
µ µ
(5.218)
where (φ1 − φ) is the angle between q1 and k in the transverse plane and µ is an
arbitrary scale. Expanding the δ function in a similar way results in
δ 2 (~q1,⊥ − ~k⊥ )
∞
X Z∞
1 1 q2 k2
= dν exp iν log 12 − log 2 + in φ1 − φ ,
k⊥ q1,⊥ (2π)2 n=−∞−∞
µ µ
(5.219)
i.e. the same expansion as for the homogeneous part, but with constant coefficients
a(ν, n) = 1/(4π 2 k⊥ q1,⊥ ). Inserting this expansion into the equation for f˜ yields an
equation for the coefficients, namely
1
(l − 1)a(ν, n) = + ω(ν, n)a(ν, n) , (5.220)
4π 2 k⊥ q1,⊥
and therefore
1 1
a(ν, n) = . (5.221)
4π 2 k⊥ q1,⊥ l − 1 − ω(ν, n)
After a bit of algebra, the eigenvalues ω(ν, n) can be written as
2αs Nc |n| + 1
ω(ν, n) = − Re ψ + iν − ψ(1) . (5.222)
π 2
d log Γ(x)
ψ(x) = . (5.223)
dx
The exchange of such a colour-singlet object in the t-channel is often identified
with the exchange of a pomeron, which in turn is the driver of the total cross-section,
cf. Sections 7.1.2 and 7.1.3. However, while in the simplistic picture advocated in
this later part of the book, the pomeron is assumed to be a simple pole, this is not
the case in perturbative QCD, studied here. Here, it is important to stress that the
eigenvalues are continuous, which implies that for the singlet solution the idea of a
simple t̂-dependent pole must be abandoned. The perturbative pomeron is a branch
cut in the complex angular momentum plane.
However, in order to analyse the behaviour of this structure in more detail, consider
the leading contribution, which is located at ν = n = 0. Expanding ω for small ν and
n = 0 yields
2αs Nc 4Nc log 2
ω(ν, n = 0) = 2 log 2 − 7ζ(3)ν 2 + . . . ≈ αs ≈ 2.65 αs . (5.224)
π π
Analytic resummation techniques 327
with
4Nc log 2 14ζ(3)Nc
A = αs and B = αs . (5.226)
π π
5.2.5.12 Total gluon-gluon cross-section
Undoing all the steps until now, Laplace transform, relation between discontinuity
of the amplitude and total cross-section, etc., then results in the total gluon–gluon
scattering cross-section in multi-Regge kinematics
Z 2
8N 2 d ka,⊥ d2 kb,⊥ sing
(tot)
σ̂gg→gg = 2 c αs2 2 2 f (ka , kb , |ya − yb |) (5.227)
Nc − 1 ka,⊥ kb,⊥
with
f sing (ka , kb , y)
∞ Z∞ " #
1 X 2
ka,⊥
= dν exp ω(ν, n)y + iν log 2 + in(φa − φb ) ,
4π 2 ka,⊥ kb,⊥ n=−∞−∞
kb,⊥
(5.228)
and where ka and kb denote the two outermost, i.e. most forward, gluons. The pref-
actor accounts for the averaging over incident colours and spins. This is schematically
depicted in Fig. 5.9.
Integrating over the azimuthal angles means that only the n = 0 term contributes,
and thus
Z∞ " #
(tot) 2
dσ̂gg→gg Nc2 αs2 ka,⊥
2 dk 2 = 3 k3 dν exp ω(ν, n = 0)|ya − yb | + iν log 2
dka,⊥ b,⊥ 4ka,⊥ b,⊥ kb,⊥
−∞
" #
Nc2 αs2 π 1 1 k2
2 a,⊥
≈ 3 k3
p exp A|ya − yb | − log 2 .
4ka,⊥ b,⊥ πB|ya − yb | 4B|ya − yb | kb,⊥
(5.229)
Fig. 5.9 Sketch of the scattering process gg → gg to all orders. The dou-
ble-lined box represents the exchange of a singlet gluon ladder, indicated
by a few gluons and some dotted lines hinting at further gluon, giving
rise to the function f sing (ka , kb , y) in Eq. (5.227). The vertical dotted line
shows that this is actually the discontinuity of the amplitude.
width growing with the rapidity distance of the two. This is not too surprising since
the BFKL equation could also cast into the form of a diffusion equation with the
2 2
diffusion rate given by log(ka,⊥ /kb,⊥ ).
Performing the integration over transverse momenta with a cutoff ka,⊥ , kb,⊥ > p⊥
ultimately yields
Z∞
N 2 α2 (p2 ) eω(ν, n=0)|ya −yb |
(k⊥ >p⊥ )
σ̂gg→gg = c s2 ⊥ dν
4p⊥ ν 2 + 14
−∞
4Nc αs (p2⊥ )|ya − yb | log 2
exp
N 2 α2 (p2 )π π
≈ c s2⊥ r . (5.230)
2p⊥ 7ζ(3)Nc αs (p2⊥ )|ya − yb |
2
This exposes the exponential growth of the cross-section with the rapidity distance
of the two most forward partons, synonymous with a power-like growth in ŝ. Taking
|ya − yb | = log(ŝ/p2⊥ ) allows the determination of the resulting approximate K-factor
as a function of the parton centre-of-mass energy, and for different values of p⊥ . Simple
inspection shows that the K-factor increases steongly with the logarithm of ŝ/p2⊥ , as
expected. The Mueller–Navelet jets [771] aim at probing exactly this regime by fixing
the x and vary the rapidities of the two most forward jets and by then measuring the
dijet cross-section. This implies that the hadron centre-of-mass energy scales with the
partonic one, i.e. with the exponential of |ya − yb |.
Parton shower simulations 329
The azimuthal correlation of the two most forward jets is another interesting fea-
ture of the cross-section in the BFKL approximation. In the Born approximation for
gg → gg, the two gluons are back-to-back in transverse space with exactly the same
transverse momentum, i.e. ka,⊥ = kb,⊥ and |φa − φb | = π. This picture changes when
higher-order corrections in the BFKL approximation are taken into account. Since
there the leading term is given for n = 0, naively these two jets become entirely
decorrelated. In addition, due to the diffusion property of the BFKL equation, also
the correlation between their two transverse momenta fades away as their rapidity
distance increases. This, the decorrelation of forward jets at large rapidity distances in
transverse space, actually constitutes a possible test for the onset of BFKL dynamics.
This is a very simple example of unitarity, written in the form of probability conser-
vation: the isotope has either decayed or not.
Differentiating the decay probability thus yields the probability density that the
isotope decays exactly at time t:
Zt
P nodec. (t, 0) = exp − dt0 Γ(t0 ) , (5.234)
0
αs Γa (T, t)
Γ(t) −→ , (5.236)
π t
such that the Sudakov form factor for parton splitting takes the form
T
Z 0
dt αs
∆a (T, t) = exp − Γa (T, t0 ) . (5.237)
t0 π
t
This has already been encountered in Section 2.3.2 and in Section 2.3.3. The notable
difference with respect to the toy example of the radioactive decay is the fact that the
time evolution from a start time t0 = 0 to the decay time tdec has been replaced with
an evolution from a large scale T to a smaller scale t. It is no coincidence that this is
identical to the form of the Sudakov form factor in QT resummation, as the integrated
splitting functions have identical coefficients A(1) and B (1) at leading order.
over the scale. In other words, introducing an infrared cut-off tc , typically of the
order of a few ΛQCD , the additional constraint
t0 ≥ tc > 0 (5.238)
must be satisfied. For any given large starting scale T , and for a scale-independent
strong coupling, there is the probability
CF,A αs 2 T (1) T
1 − ∆q,g (T, tc ) = 1 − exp − log − γ̃q,g log
π tc tc
(5.239)
CF,A αs 2 T (1) T 2
= log − γ̃q,g log + O αs
π tc tc
for a quark or gluon to emit a gluon above the resolution scale tc , which increases with
increasing T /tc . Comparing this result with the Sudakov form factor of QT resumma-
(1) (1) (1)
tion, Eq. (2.166), the terms A(1) = Cq,g and B (1) = −γq,g = −Aq,g γ̃qg are readily
identified. The naive mismatch by a factor of two between the two formulations, par-
ton showers and analytic resummation stems from the fact that in the former each
emitter is treated individually while in the latter, and in particular in the discussion
in Sections 2.3.2 and 5.2.1 the case of two incoming partons of the same type — quark
or gluon — fusing into a colourless state has been considered. The corresponding com-
bined Sudakov form factors in this case account for both of the incoming partons,
explaining the relative factor of two. Note also that terms of higher order in αs have
been omitted here, although the term equivalent to A(2) is easily included as well.
The probability not to emit any resolvable gluon decreases with increasing ratio
T /tc ,
CF,A αs 2 T (1) T
∆q,g (T, tc ) = exp − log − γ̃q,g log
π tc tc
(5.240)
CF,A αs 2 T (1) T 2
=1− log − γ̃q,g log + O αs
π tc tc
at O(αs ). For large ratios T /tc , the emission term ∼ αs can become larger than unity,
which leads to this expression turning negative at O(αs ). In such a case the higher-order
terms encoded in the exponentiation will also increase and thereby guarantee that
the exponent of the Sudakov form factor remains negative, rescuing the probabilistic
interpretation.
Due to its probabilistic nature, the Sudakov form factor incorporates real, resolv-
able emissions as well as unresolvable ones, although only the former ones appear
explicitly as the driving term. The unresolvable ones are taken into account only by
the introduction of the cut-off and the interpretation of the Sudakov form factor. Dia-
grammatically speaking, such unresolvable emissions can be attributed to either the
soft and/or collinear real gluon radiation at scales below the cut-off tc , or to the virtual
diagrams, see Fig. 5.10 for a pictorial representation. Since both exhibit infrared di-
vergences which cancel each other, and because the expression in Eq. (5.240) is free of
332 QCD to All Orders
pole terms 1/, it becomes apparent that the Sudakov form factor resums all large log-
arithms of the type log(T /tc ) at leading order, stemming from unresolvable and virtual
diagrams, or, conversely, the same logarithms originating from the resolvable emissions.
This is nothing but the celebrated Kinoshita–Lee–Nauenberg theorem [678, 724], for-
mulated in a probabilistic way. In this fashion the unitarity-conserving character of
the parton shower manifests itself: by employing a probabilistic picture virtual and
unresolvable emissions are inherently accounted for.
Fixing the problems related to the divergent low-scale behaviour through the intro-
duction of the infrared cut-off tc , the actual scale of this cut-off needs to be discussed.
There are two considerations driving this choice. For one, a well-defined description
of radiation in a well-understood perturbative framework with only two parameters
(αS and tc ) is preferable over a description resting on phenomenological models with
many parameters, which, at best, are understood only in qualitative terms. Such phe-
nomenological models describing the emergence of hadrons at the end of the parton
shower are discussed in more detail in the section introducing the ideas underlying
hadronization models and their implementation in Section 7.3. This line of reasoning
drives the infrared cut-off to minimally small scales. On the other hand, due to its
confinement property, the perturbative description of strong interaction processes will
eventually break down. This break-down of perturbation theory typically is related to
scales of the order of ΛQCD . Combining both considerations motivates choices for tc
of the order of about 1 GeV2 .
On a similar note, also the hard scale T must be fixed. In previous sections dis-
cussing analytic resummation techniques, it became clear that this choice is quite
important, as it also drives the size of the Sudakov logarithms. Bu there is no recipe
based on first principles or a universal scale choice: it is process-dependent. While for
simple topologies as in Drell–Yan-like processes the choice is fairly straightforward,
something of the order of the invariant mass of the colour-singlet system, more intri-
cate colour topologies will have to be described by more complicated choices. As a
rule of thumb one could argue that the typical scale related to the distortion of the
colour flow presents itself as a reasonable choice. It can be argued that indeed, inside
the parton shower, such a scale choice is the most advantageous, as it resums certain
classes of sub-leading logarithms.
Parton shower simulations 333
Here, the limits of the z-integration, z± , emerge from the need to reconstruct the
kinematics of the splitting obeying exact four-momentum conservation. They therefore
depend on the specifics of the scale choice, etc., which will be discussed in more detail
below. It is worth stressing that the introduction of an infrared cut-off of the parton
shower will guarantee that these limits cut out the divergences in the splitting function.
Another subtlety arises when actually constructing the decay kinematics param-
eterized by {t, z, φ}. A massless particle on its mass-shell in general cannot decay
into two other massless particles, due to four-momentum conservation. This translates
into the necessity to involve other partons, which will “donate” some four-momentum,
absorb the recoil of the decay and thus guarantee local four-momentum conservation.
This role is typically filled by the colour partner of the decaying parton. The resulting
reshuffling of momenta of course is very minor in the case of soft and collinear split-
tings and vanishes in the limit where the invariant mass of the produced two-body
system approaches the invariant mass of the decaying parton. Therefore, the details of
the momentum shuffling are beyond the intrinsic accuracy of the parton shower. For
emissions away from the soft and collinear regions they start to become increasingly
important. In order to characterize a parton shower, it is thus necessary not only to
define the interpretation of the parameters t, z, and φ, but also to define precisely how
four-momentum conservation is achieved. In the next section, Section 5.3.2, the actual
construction of parton showers will be exemplified through some typical realizations.
both the incoming hadrons at the low scale and the parton entering the hard interac-
tion at the high scale are fixed. Of course, to circumvent this problem, a naive idea
would be to just start the parton shower evolution from both hadrons and arrive at
a hard scatter. A quick glance at various cross-sections relevant for phenomenology
reveals that such a procedure would be prohibitively inefficient, since all interesting
processes have cross-sections that are many orders of magnitude smaller than the
inelastic hadronic cross-sections. Therefore, in order to arrive at any statistically sig-
nificant sample of simulated events it is unavoidable that the hard interaction is fixed
first and only subsequently dressed with multiple parton emissions and further stages
of event simulation.
In final-state radiation, all external hadrons in the final state are equally permis-
sible, while in the case of initial-state radiation the incoming hadron of course is fixed
by the collider setup. Since the parton shower evolution preferably is described as an
evolution from the large to the small scales, the forward evolution in the simulation
of final-state radiation thus becomes a backward evolution for emissions off the
initial state.
The theoretical motivation is as follows [851]. The parton distribution function
entering the calculation of the hard process cross-section, cf. Eq. (2.52), already em-
bodies an inclusive summation over all possible initial-state showers starting from a low
hadronic scale and arriving at the hard factorization scale µF . The scaling behaviour
of these PDFs is given by the DGLAP evolution equations, Eq. (2.31). Schematically,
Z x
dfb/h (x, t) αs (t) X dx0
= 0
Pa → bc 0
fa/h (x0 , t) . (5.242)
d log t 2π a
x x
This needs to be turned into an expression for the probability of parton b disappearing
from x during a small decrease of scale, dt. A simple way of achieving this is to
divide the equation above by fb/h (x, t). To first order in αs this leads to the following
expression for the parton decay probability:
Z
dfb/h (x, t) dt αs (t) X dx0 (1) x fa/h (x0 , t)
dPb = = − P . (5.243)
fb/h (x, t) t 2π a
x0 ba x0 fb/h (x, t)
Exponentiating this expression for the individual splitting, as before, yields a Sudakov
form factor, this time for backward evolution. It encodes the probability for a process
not to occur, where a parton b at momentum fraction x in the initial state is replaced
by a parton a at x0 = x/z under emission of parton c.
T z
Z dt0 Z + dz Z2π dφ α (p (t0 , z)) x
fa/h z , t 0
s ⊥ (1)
∆a→bc (T, t) = exp − Pa→bc (z) .
t0 z 2π 2π fb/h x, t0
t z− 0
(5.244)
The only visible difference to the forward evolution is that here also ratios of PDFs
enter. Their role is to assure that, starting from a hard scale, one actually arrives
Parton shower simulations 335
at the “right” initial hadron, and that on the way to this lower scale emissions are
unfolded respecting the DGLAP evolution equation. A simple way to see that this
is indeed the case is by realizing that for every splitting to take place, the PDF for
parton b at its values of xb and tb is replaced by the PDF for the parton a at lower
scale ta ≤ tb and larger xa ≥ xb as encoded in the ratio of the PDFs in Eq. (5.244). As
a further welcome consequence of this backward evolution, momentum conservation
is trivially guaranteed, i.e. no x0 > 1 can be chosen — the PDFs just vanish there.
This effectively constrains the lower limit of the z integral to be larger than x: z− ≥ x.
In addition, the flavour symmetry of the PDFs, in principle treating all (massless)
quarks on the same footing is broken. For instance, moving towards larger x0 and
smaller scales t, the flavour-symmetric sea components of the quark PDFs vanish and
the flavour-unsymmetric valence contributions emerge.
2 2
2
2
αs (k⊥ ) αs (k⊥ ) αs (k⊥ ) 67 π 2 5nf
−→ + · CA − − (5.245)
2π 2π 2π 18 6 9
In Section 5.1.1 the notion of angular ordering of subsequent emissions has been
identified as an important feature of the QCD radiation pattern beyond the naive
double-leading logarithmic approximation.
For parton shower simulations angular ordering implies that the exact choice of the
evolution parameter t is essential for the formal accuracy in terms of leading and sub-
leading logarithms. In the past, different choices for the form of the ordering parameter
t have been implemented and formed the basis of parton showers successfully describing
data. These choices include the following properties of the produced pair:
• invariant mass, t = Q2 , employed in the very first parton shower realizations [216,
533, 593, 766, 788],
• opening angle, t = θ2 , taking into account quantum coherence effects [579, 750],
• and transverse momentum t = p2⊥ , similarly including coherence effects [603, 729,
731, 778, 805, 856],
While all three choices exhibit logarithmic behaviour in the limit of small opening
angles, the exact form of the logarithms differs for these three choices; in particular it
was found that only the latter two systematically incorporate the effects of quantum
coherence.
The effect of not taking into account such effects in hadronic collisions was high-
lighted through an analysis performed by the CDF collaboration during Run I of
the TEVATRON. The analysis studied the angular distribution of a third jet in QCD
events [111], cf. Fig. 5.12, where its pseudo-rapidity distribution is depicted. The jets
were defined by the midpoint algorithm with a radius of R = 0.4 and a minimal trans-
(jet)
verse momentum E⊥ > 10 GeV in a pseudo-rapidity region given by |η (jet) | ≤ 2. In
addition, the first (hardest) jet was demanded to have at least a transverse momentum
(jet )
of E⊥ 1 > 110 GeV. A number of observables are sensitive to angular ordering (QCD
coherence) effects, the most intuitive ones being the η–distribution of the third jet and
its spatial distance R in the η-φ plane from the second jet.
Both can directly be related to angular ordering, when considering the colour flow
in typical jet events in hadron collisions, cf. Fig. 5.13. The underlying hard process can
Parton shower simulations 337
Fig. 5.12 The effect of colour coherence in QCD jet events at Run I of.
the TEVATRON, by the CDF collaboration [111].
be visualized as two incoming and two outgoing partons, where usually each outgoing
parton is colour-connected to one of the incoming partons. This gives rise to a maximal
angle θmax for the emission of the third parton, which thereby effectively is constrained
inside cones of radius θmax around potential emitters — the incident or outgoing
partons. Identifying partons at leading order with jets explains the features in jet
production seen in the experiment. Ultimately, these findings lead to the first choice
of evolution parameter, invariant mass, to be supplemented by an explicit angular veto
in an improved version of the PYTHIA event generator [214, 215].
It should be noted here that the form of the kernels in the soft limit given here
may possibly not work for dipole showers, since it potentially leads to a double-
counting of the soft region due to the symmetry of the eikonal. This problem can
be remedied by replacing the eikonal with a similar form which will reproduce the
eikonal when emissions of both parts of the dipole are added:
z→1 1 pi pk
Kij;k (Φ1 ) −→ · . (5.248)
pi pj (pi + pk )pj
For more details, and a connection to, e.g., the Catani–Seymour subtraction
method, cf. Section 3.3.3; for a discussion in the framework of constructing algo-
rithms for parton showering, see for instance [807].
2. The precise choice of evolution and splitting variable and the choice of scale in
the strong coupling. Together with the exact form of the splitting kernels this
typically influences the formal accuracy of the parton shower. At this point it is
worthwhile to stress, again, that in principle different evolution parameters will
often lead to identical leading logarithmic (LL) accuracy, as can be seen from
FF : \tilde p_{ij} + \tilde p_k \longrightarrow p_i + p_j + p_k
IF : \tilde p_{aj} + \tilde p_k \longrightarrow p_a + p_j + p_k
FI : \tilde p_{ij} + \tilde p_a \longrightarrow p_i + p_j + p_a
II : \tilde p_{aj} + \tilde p_b \longrightarrow p_a + p_j + p_b   (5.250)
The subscripts i, j, and k label the splitting parton, the emitted parton, and the
spectator, all after the emission. The Sudakov form factor for a thus defined emission
between the two scales t and T reads
\Delta^{(K)}_{ij;k}(T,t) = \exp\left\{-\int\limits_t^T \frac{dt'}{t'}\int dz \int\frac{d\phi}{2\pi}\; J(t',z,\phi)\, K_{ij;k}(t',z,\phi)\right\}
                         = \exp\left\{-\int\limits_t^T d\Phi_1\, K_{ij;k}(\Phi_1)\right\}.   (5.251)
The quantity J(t', z, φ) in the first line of this equation takes care of any Jacobian that becomes necessary; it is suppressed henceforth and assumed to be part of the one-particle phase-space integral in the following line.
For simplicity, parton luminosity factors as well as the specific coupling structure
also including all charge factors are included in the splitting kernel. In all imple-
mentations of Eq. (5.251) the argument of the coupling is related to the transverse
momentum of the splitting, given by the decay kinematics Φ1 .
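To illustrate how a Sudakov form factor of the type in Eq. (5.251) is sampled in practice, the sketch below implements the standard veto algorithm for the scale of the next q → qg emission. It is a minimal illustration under simplifying assumptions rather than the algorithm of any particular event generator: the constant coupling overestimate alpha_over, the kernel overestimate 2 CF/(1−z), the fixed z range and the one-loop coupling are all placeholder choices.

```python
import math
import random

CF = 4.0 / 3.0

def alpha_s(q2, alpha_ref=0.118, q_ref2=91.1876**2, nf=5):
    # One-loop running coupling; a simple stand-in for a shower's coupling.
    b0 = (33.0 - 2.0 * nf) / (12.0 * math.pi)
    return alpha_ref / (1.0 + alpha_ref * b0 * math.log(q2 / q_ref2))

def p_qq(z):
    # Leading-order q -> qg splitting kernel (colour factor included).
    return CF * (1.0 + z * z) / (1.0 - z)

def next_emission(t_start, t_cut, z_min=0.01, z_max=0.99, alpha_over=0.25):
    """Veto algorithm for a Sudakov form factor as in Eq. (5.251): returns
    (t, z) of the next emission, or None if the evolution reaches the cut-off.
    alpha_over is assumed to bound alpha_s over the sampled k_perp range."""
    # overestimate kernel: p_over(z) = 2*CF/(1-z) >= p_qq(z)
    i_over = 2.0 * CF * math.log((1.0 - z_min) / (1.0 - z_max))
    c_over = alpha_over / (2.0 * math.pi) * i_over
    t = t_start
    while t > t_cut:
        # trial scale from the analytically invertible, overestimated Sudakov
        t *= random.random() ** (1.0 / c_over)
        if t <= t_cut:
            break
        # trial z distributed as 1/(1-z) on [z_min, z_max]
        r = random.random()
        z = 1.0 - (1.0 - z_min) * ((1.0 - z_max) / (1.0 - z_min)) ** r
        k_perp2 = z * (1.0 - z) * t   # transverse-momentum-like coupling scale
        # accept with the ratio of the true to the overestimated integrand
        accept = (alpha_s(k_perp2) * p_qq(z)) / (alpha_over * 2.0 * CF / (1.0 - z))
        if random.random() < accept:
            return t, z
    return None
```

For example, next_emission(t_start=91.2**2, t_cut=1.0) would return the scale and momentum fraction of the first resolvable emission, or None for a no-emission event.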
One of the first full-fledged implementations of a parton shower covering all aspects
of initial- and final-state radiation was based on an ordering of the emissions by the
virtual mass of the splitting parton: virtuality ordering, see also [216, 533, 593, 766, 788].
A parton shower with this ordering paradigm has been employed in early versions
of the PYTHIA framework [852]. It is realized as a final-state shower [214, 215, 786],
where the spectators are typically final-state particles as well, and as an initial-state
shower [766, 851], with the other initial-state particle as spectator. The only exception
is the first final-state splitting of a particle that has just been emitted from an initial-
state parton, for which special arrangements are made. In general, the algorithm is
built on the parton shower evolution being driven by 1 → 2 splittings, (ij) → i+j. The
spectator parton k typically is defined through the configuration of the hard process for the first emissions, or it is related to the splitter (ij) through a common parent, i.e. (ijk) → (ij) + k.
1. Splitting kernels
In both cases, FF and II, the evolution kernels Kij;k are given by the leading
order splitting kernels of the DGLAP evolution equation for the fragmentation or
parton distribution functions, P(ij)i , cf. Eq. (2.31).
2. Evolution and splitting parameters
The evolution and splitting parameters t and z are given by
t^{(\mathrm{FF})} = t_{ij} = \tilde p_{ij}^{\,2} = (p_i + p_j)^2 \quad\text{and}\quad z^{(\mathrm{FF})} = z_{ij} = \frac{E_i}{\tilde E_{(ij)}}
t^{(\mathrm{II})} = t_{aj} = |\tilde p_{aj}^{\,2}| = |(p_a - p_j)^2| \quad\text{and}\quad z^{(\mathrm{II})} = z_{aj} = \frac{x_a}{\tilde x_{aj}} = \frac{E_a}{\tilde E_{aj}}\,.   (5.252)
where m(ij) is the on-shell mass of the splitting particle (ij), E(ij) its energy, fixed
in the previous splitting, and tij is its virtual mass that has already been fixed by
the Sudakov form factor. This choice assumes that the offspring partons i and j are massless
— some momentum reshuffling will have to take place once they acquire a mass
through their further splittings. This effectively will lead to a reinterpretation of
the energy-splitting parameter of the previous parton branchings, as sketched in
the following discussion (for further detail see the original literature).
The infrared cut-off scale is given in terms of a parameter Q0 and depends on the physical mass of the splitting parton, m̃(ij):
t \;\ge\; t_c^{(ij)} = \tilde m_{(ij)}^2 + \frac{Q_0^2}{4}\,.   (5.254)
Expressed in terms of the parameters above, and assuming massless partons, the argument of the strong coupling is given as

\alpha_s(k_\perp^2) \quad\text{with}\quad k_\perp^2 = \begin{cases} z_{ij}\,(1-z_{ij})\;t_{ij} & \text{(FF)}\\ (1-z_{aj})\;t_{aj} & \text{(II)} \end{cases}   (5.255)
while the scale argument in the PDFs in the backward evolution is given by the
respective t of the splitting.
3. Construction of kinematics
The kinematics of individual FF splittings are constructed in the following way.
The recoil partner is selected to be either the other particle in the hard 2 → 2
scattering, if it is the first splitting in the process, or the other particle being
generated in the splitting, or the particle emerging from them and being the
colour partner. These possibilities are illustrated in Fig. 5.14. Having fixed t and
z for a splitting (ij) → i + j with massless partons i and j, the decay kinematics
are fixed by realizing that
Fig. 5.14 Recoil partners in final-state parton showers. For the splitting
(kl) → k + l, with tkl > tij , (ij) took the recoil, with k being colour
connected with (ij). In the subsequent splitting (ij) → i + j parton k will
be the recoil partner. Assuming k to be the next parton to split, it will be
the turn of l to act as recoil partner in the construction of kinematics, and
in turn one of the decay products of k will compensate for four-momentum
imbalance in the splitting of parton l.
\int\limits_{x_{ij}}^{1} dz_{ij}\; \frac{\alpha_s(t_{ij})}{2\pi}\; \frac{x_i\, f_{i/h}(x_i, t_{ij})}{x_{ij}\, f_{ij/h}(x_{ij}, t_{ij})}\,.   (5.259)
It will always be the other initial-state particle that will account for the recoil,
thereby boosting and rotating the full system. The algorithm essentially uses the Bjorken-x parameters of both initial-state particles, splitter and spectator, to fix the centre-of-mass system.
To see how this works in more detail, consider a kinematical situation like the
one depicted in Fig. 5.15, where two particles ã and b̃ collide to produce a final
state with total four-momentum squared zŝ = ŝãb̃ . Particle ã will eventually
emit a parton j in the backward evolution, thus acquiring a negative virtual mass
p2ã = p̃2aj ≤ 0, its absolute value readily identified as the evolution parameter,
|p̃2aj | = taj . The momentum of the emitted particle, pj , can be fixed by using
local four-momentum conservation, i.e. pj = pa − paj . This leaves the task of
constructing the four-momentum of the new initial-state parton, pa and the emit-
ter after emission, paj . The centre-of-mass squared of the emerging system, ŝab̃ ,
is given by rescaling the original ŝãb̃ by the splitting parameter,
\hat s_{a\tilde b} = (p_a + p_{\tilde b})^2 = \frac{\hat s_{\tilde a\tilde b}}{z_{aj}}\,,   (5.260)
In this system pa and pj will have a transverse momentum with respect to the
k-axis; its azimuth angle is one of the four degrees of freedom of, say, pa . It is
chosen isotropically, and Eq. (5.260) fixes another d.o.f. A third one is fixed by the mass of pa, Q²_{(ak)} = |p²_{ak}|, given by the next showering step, where a parton k is
being emitted. This leaves a fourth d.o.f., which is identified with the virtual mass
of the outgoing parton j. The latter is fixed by the final-state parton shower, i.e.
the further splitting of this parton, subject to kinematic constraints and, possibly,
by considerations invoking quantum coherence. The kinematic constraint on the mass assumes a completely collinear branching, \vec p_{(aj)} \parallel \vec p_a \parallel \vec p_j, and reads

with

q_{ak} = \hat s_{a\tilde b} + Q^2_{(ak)} - Q^2_{(bl)} = \frac{\hat s_{\tilde a\tilde b}}{z_{aj}} + Q^2_{(ak)} - Q^2_{(bl)}
r_{aj} = q_{aj}^2 - 4\,Q^2_{(aj)}\,Q^2_{(bl)}   (5.263)
r_{ak} = q_{ak}^2 - 4\,Q^2_{(ak)}\,Q^2_{(bl)}\,.
This algorithm has been improved to approximate some of the effects of quantum coherence with an a posteriori fix. This fix consists of a veto on increasing splitting
angles, applied after each parton emission. For example, for final-state splittings, the
opening angle of the splitting (ij) → i + j can be estimated as
\theta_{ij} \;\approx\; \frac{p_{\perp,i}}{E_i} + \frac{p_{\perp,j}}{E_j} \;\approx\; \frac{1}{\sqrt{z_{ij}(1-z_{ij})}}\,\frac{\sqrt{t_{ij}}}{E_{(ij)}}\,.   (5.264)
Denoting the kinematic variables related to the splitting of parton i with subscripts i then leads to the angular ordering constraint

\theta_i < \theta_{ij} \;\longrightarrow\; \frac{z_i\,(1-z_i)}{t_i} > \frac{1-z_{ij}}{t_{ij}}\,,   (5.265)
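A minimal sketch of how such a veto can be applied a posteriori is shown below: the opening angles are estimated via Eq. (5.264) and a trial splitting is rejected whenever its angle exceeds that of the previous branching, in the spirit of Eq. (5.265). The function names and the way emitter energies are passed in are assumptions of this illustration.

```python
def opening_angle_sq(t, z, energy):
    # theta^2 estimate for a splitting of virtuality t, splitting variable z and
    # emitter energy E, following Eq. (5.264): theta ~ sqrt(t) / (sqrt(z(1-z)) E).
    return t / (z * (1.0 - z) * energy * energy)

def passes_angular_veto(prev, new):
    # prev and new are (t, z, energy) tuples for the previous and the trial
    # splitting; accept only if the opening angle does not increase, cf. Eq. (5.265).
    return opening_angle_sq(*new) < opening_angle_sq(*prev)

# Example: a trial splitting with larger virtuality but much smaller emitter
# energy is vetoed if its estimated angle exceeds that of its parent branching.
```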
5.3.2.3 p⊥ -ordering
The original idea of ordering emissions according to their transverse momentum was introduced in the framework of showering algorithms based on colour dipoles or colour antennae [603, 729, 731, 805], see the following sections. It was adopted as a way to construct a parton shower only some time later, in [856], in the framework
of the PYTHIA event generator. By now, this is the default parton showering algorithm
in the latest versions of PYTHIA 6 and in its replacement, PYTHIA 8. Similar to the
virtuality-ordered parton shower, again there is a distinction between final-state and
initial-state parton showers.
1. Splitting kernels
As in the virtuality-ordered parton showers, in both cases, FF and II, the evolu-
tion kernels Kij;k are given by the leading-order splitting kernels of the DGLAP
evolution equation, cf. Eq. (2.31). In addition, FI configurations are considered,
again using the DGLAP splitting kernels for the evolution.
2. Evolution and splitting parameters
For final-state parton showering, i.e. FF and FI splittings, the evolution variable
is given by
p2⊥,ij = zij (1 − zij ) (tij − m2(ij) ) , (5.266)
where, as before, m(ij) is the physical on-shell mass of the splitting particle, and
tij is its invariant mass which is now obtained after fixing the decay kinematics,
i.e. p2⊥,ij and zij . In terms of momenta after the splitting, zij is defined as
z_{ij} = \frac{1}{1-k_1-k_3}\left(\frac{x_1}{2-x_2} - k_3\right)   (5.267)

where

k_1 = \frac{t_{ij} - \lambda(t_{ij}, m_i^2, m_j^2) + m_j^2 - m_i^2}{2\,t_{ij}}   (5.268)
with
p'_k = p_k   (5.272)

for FF splittings and

p'_k = p_k \left[1 - \frac{t_{ij} - m^2_{(ij)}}{(p_i+p_j+p_k)^2 - 2\,t_{ij} + 2\,m^2_{(ij)}}\right]   (5.273)
\qquad\times \left[1 + \frac{t_{ij} - m^2_{(ij)}}{(p_i+p_j+p_k)^2 - 2\,t_{ij} + 2\,m^2_{(ij)}}\right]^{-1}   (5.274)
for FI splittings.
For initial-state showering, the situation is a bit more complicated. Assuming a
massless parton splitting into a space-like one with a virtual mass of −Q2 and a
time-like one with virtual mass m2 , the relative physical transverse momentum
reads
p2⊥ = (1 − z)Q2 − zQ4 /ŝ , (5.275)
where ŝ is the invariant mass of the system formed from the splitter and the
spectator — the other parton in the initial state — before the backward splitting.
The choice of this exact form of the transverse momentum would lead to an unwanted ambiguity when mapping it onto the virtuality of the splitter after the emission took place. To overcome this, instead the evolution scale

p^2_{\perp,\mathrm{evol}} = (1-z)\,Q^2   (5.276)

is chosen. Close to the bottom and charm thresholds, the evolution variable is
changed to
p2⊥,evol = (1 − z)(tij + m2(ij) ) . (5.277)
In terms of momenta after the branching, z is given by the Q²-ordered definition

z = \frac{2\,p_{(aj)}\, p_b}{2\,p_a\, p_b}\,.   (5.278)
More details on the construction of this parton shower are given in [856].
3. Construction of kinematics
The kinematics in the implementation [856] of this parton shower algorithm in
PYTHIA is constructed by mapping the quantities p⊥ and z onto invariant masses
and splitting parameters. The recoil of the splitting is taken by the whole final
state for initial-state emissions, while the colour-connected particle acts as recoil
partner for final-state splittings. This is analogous to dipole showers.
In particular, for FF splittings the branching is performed in the rest frame of
splitter ij and spectator k, oriented along the positive and negative z axis. In
the splitting process, the hitherto massless splitter ij receives a virtual mass tij ,
which induces a reduction of the energies and momenta of both splitter and spec-
tator. For FI splittings, the construction is practically identical. For initial-state
splittings, the kinematics is identical to the initial-initial kinematics described in
the discussion of dipole showers below, and in [839] (up to a φ-rotation).
Finally, it should be noted that a veto on increasing angles (identical to the
virtuality-ordered cascade) is usually applied to initial-state splittings.
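Before moving on, the evolution-variable choices of this p⊥-ordered shower, Eqs. (5.266), (5.275) and (5.277), can be collected in a few lines of code. The sketch simply evaluates these definitions; the way the threshold switch and the dropped Q⁴/ŝ term are handled follows the simplifications described above, and none of the function names are taken from an actual implementation.

```python
def pt2_evol_final(t_ij, z_ij, m_ij2=0.0):
    # Final-state evolution variable, Eq. (5.266): z (1 - z) (t - m^2).
    return z_ij * (1.0 - z_ij) * (t_ij - m_ij2)

def pt2_physical_initial(q2, z, s_hat):
    # Physical relative transverse momentum in an initial-state branching,
    # Eq. (5.275): (1 - z) Q^2 - z Q^4 / s_hat.
    return (1.0 - z) * q2 - z * q2 * q2 / s_hat

def pt2_evol_initial(q2, z, m_ij2=None, near_threshold=False):
    # Initial-state evolution variable: the ambiguous Q^4/s_hat term is dropped,
    # giving (1 - z) Q^2; near the heavy-quark thresholds the variable is
    # switched to (1 - z) (t + m^2) as in Eq. (5.277).
    if near_threshold and m_ij2 is not None:
        return (1.0 - z) * (q2 + m_ij2)
    return (1.0 - z) * q2
```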
As discussed in Section 5.1.1, angular ordering is a quantum effect stemming from the interference of radiation from different emitters. This idea was first successfully employed for the organization of a parton shower in [750, 886], forming the backbone
of the widely used HERWIG event generator [415]. In various publications, including
for example [356], it has been shown that angular-ordered parton showers are accurate
up to next-to-leading logarithms, and that the first contributions that are missed are
colour-suppressed by at least a factor of 1/Nc2 .
In its original version, indeed, the opening angles of emissions, scaled by the energies
of the emitting particles, have been employed as ordering parameters. In an improved
version presented in [579], a modified angular variable is used instead, which allows the
inclusion of mass effects in the parton shower in a more straightforward way. It thereby
eliminates an artefact that is known as the “dead-cone” effect [479, 751].7 Therefore the
following discussion is based on the angular-ordered shower algorithm in its improved
version, implemented in the HERWIG ++ event generator [188]. It should be noted that
in this specific implementation, all outgoing partons have a minimal outgoing mass of
the order of 1 GeV or below. This allows a very smooth interface with the subsequent
cluster hadronization model in HERWIG ++. It also enables the parton shower to cover
the transverse momentum range down to 0 GeV, while in other shower models the end
of the parton shower is defined by a cut-off in transverse momentum.
1. Splitting kernels
The splitting kernels for emissions off both initial- and final-state particles are
given by DGLAP kernels involving masses, in particular
7 This artefact is based on the observation that the masses of the emitting or emitted particles
shield the collinear divergence in the emission pattern, due to simple four-momentum conservation.
In the original angular-ordered showers the corresponding suppression of radiation was approximated
by a hard cut on the emission angle in q → qg splitting, given by θ > m/E, with m the quark mass
and E its energy. This cut is too hard, and to obtain a better description, massive splitting kernels
like the ones below have to be employed. They still exhibit a substantial depletion of radiation in the
low-angle region, but the transition is smooth.
\hat P_{qq}(z, p_\perp^2) = C_F\left[\frac{1+z^2}{1-z} - \frac{2\,z\,(1-z)\,m^2}{p_\perp^2 + (1-z)^2 m^2}\right]
\hat P_{qg}(z, p_\perp^2) = T_R\left[1 - \frac{2\,z\,(1-z)\,p_\perp^2}{p_\perp^2 + m^2}\right],   (5.279)
with the g → gg splitting of course still given by the form in Eq. (2.31).
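The mass-dependent kernels of Eq. (5.279) are simple enough to be coded directly. The snippet below evaluates them and notes their massless limits as a sanity check; constant and function names are illustrative only.

```python
CF, TR = 4.0 / 3.0, 0.5

def p_qq_massive(z, pt2, m2):
    # q -> qg kernel of Eq. (5.279); the second term implements the smooth
    # dead-cone suppression controlled by the quark mass m.
    return CF * ((1.0 + z * z) / (1.0 - z)
                 - 2.0 * z * (1.0 - z) * m2 / (pt2 + (1.0 - z) ** 2 * m2))

def p_qg_massive(z, pt2, m2):
    # g -> q qbar kernel of Eq. (5.279).
    return TR * (1.0 - 2.0 * z * (1.0 - z) * pt2 / (pt2 + m2))

# m2 -> 0 limits: CF (1+z^2)/(1-z) and TR (1 - 2 z (1-z)) = TR (z^2 + (1-z)^2).
```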
The actual angular ordering condition in terms of scales t (and t̄) for subsequent
emissions i and i + 1 is given by
where the splitting factors enter because of the rescaling of the momenta by them.
In all cases above, Qg,min is the minimal virtual mass for gluons and light quarks
at the end of the parton shower, and
with m the mass of the light or heavy quark. The splitting parameter z is the
ratio of light-cone momentum fractions along the direction of the splitting parton
before and after the splitting took place, and \vec k_\perp is the transverse momentum
in the splitting with respect to this axis. In its original version [579], the two
light-cones were fixed by the two hardest partons, for instance the quark and the
anti-quark directions in e− e+ → q q̄. In an improvement this was replaced by using
the splitter and spectator axes of motion to fix the light-cone momenta.
In initial-state branchings, all partons are assumed to be massless (or light) and
the evolution parameter is given by
t = \frac{\vec k_\perp^{\,2} + z\,Q_{g,\min}^2}{(1-z)^2}   (5.284)

q_{i-1}^2 = \frac{q_i^2}{z_i} + \frac{k_i^2}{1-z_i} + \frac{\vec p_{\perp,i}^{\,2}}{z_i\,(1-z_i)}\,,   (5.288)
One feature of angular-ordered parton showers is that they do not usually fill the full
phase space available for emission. These gaps in the radiation pattern emerge in the
hard, wide-angle regime and must be filled. In all practical implementations this is
achieved by supplementing the first emission with hard matrix-element correc-
tions filling these gaps. This “dead region” is shown in Fig. 5.18, Section 5.4.2, when
the generic method of matrix-element corrections is discussed. In addition, it is worth-
while to stress that the parton shower fills the available phase space in such a way
that typically the first emissions are the large-angle soft ones at relatively low transverse momentum, while harder emissions usually appear later in the process.
This is in contrast to the other parton shower algorithms based on an ordering of the
emissions by their transverse momentum.
cf. also Eq. (3.168). They depend on a recoil parameter yij;k and a splitting
parameter zi ,
y_{ij;k} = \frac{p_i p_j}{p_i p_j + p_i p_k + p_j p_k}\,, \qquad z_i = \frac{p_i p_k}{p_i p_k + p_j p_k} = 1 - z_j\,.   (5.290)
The FI, IF, and II expressions emerge from the FF case by replacing the final–state
splitter momentum pi or spectator momentum pk with corresponding initial-state
ones: pi,k ↔ −pa,b . In addition, for initial-state splitters, recoil and splitting
parameter change their roles, cf. also Table 5.2. Splitting kernels for them can
be found in Appendix C.2; a modification for the FI and IF kernels reproducing
exact fixed-order matrix elements has been proposed in [330]. This modification
essentially consists of adding some non-singular terms which vanish in the soft
and collinear limits and therefore do not change the logarithmic accuracy of the
shower.
It should be mentioned here that the splitting kernels above come with a factor of
1/(pi pj ), a suitably normalized strong coupling factor and a normalization taking
into account the number of spectators, 1/Nspec . The choices of relevant kinematic
quantities of course also depend on the specific case. However, in the original
publications they are identical to the corresponding expressions used in Catani–
Seymour subtraction. For a massless shower, they are listed in Appendix C.2.
2. Evolution and splitting parameters
The formalism in principle allows different evolution parameters to be employed,
including the invariant mass s_{ij} of the pair after the splitting or its transverse momentum. This latter choice was the default in the original publications, i.e. t = k_\perp^2, given by

t = k_\perp^2 = \begin{cases} 2\,p_i p_j\; z_i\,(1-z_i) & \text{for final-state emissions}\\ 2\,p_a p_j\;(1-x_a) & \text{for initial-state emissions} \end{cases}   (5.291)
for massless partons and more complicated expressions for massive ones.
Irrespective of the evolution parameter t, the transverse momentum squared k_\perp^2 is used as the renormalization scale for the strong coupling and as the factorization scale, at which PDFs are evaluated in initial-state splitting processes. Various refinements to the definition of the transverse momentum variable with respect to the original proposal in [778] and its first implementations [466, 839] have been suggested, the most recent one in [626, 628]. There, a variation of the original evolution parameter t = k_\perp^2 in Eq. (5.291), making it flavour-specific, has been introduced, in order to better capture the singular behaviour of the splitting
kernel. In particular, for massless final-state splittings \widetilde{(ij)} + \tilde k \to i + j + k the refined evolution parameter t reads

t = 2\,p_i p_j \cdot \begin{cases} z_i\,(1-z_i) & \text{if } i,j = g\\ 1-z_i & \text{if } i\neq g \text{ and } j = g\\ z_i & \text{if } i = g \text{ and } j\neq g\\ z_i\,(1-z_i) & \text{if } i,j \neq g\,, \end{cases}   (5.292)

and, for splittings with an initial-state emitter,

t = 2\,p_a p_j \cdot \begin{cases} 1 - x_{aj,k} & \text{if } j = g\\ 1 & \text{if } j \neq g\,. \end{cases}   (5.293)
3. Construction of kinematics
In the original implementations, for the construction of the kinematics a recoil
scheme has been used, which essentially is the inverse of the Catani–Seymour
mappings. For the example of massless FF splittings, the kinematics is therefore
constructed according to the inverse of the mapping in Eq. (3.160):
pk = (1 − yij;k ) p̃k .
for final state splittings. This leads to light-cone fractions xl (with l ∈ {i, j, k}) of the
outgoing partons given by
x_l = \frac{2\,p_l\cdot Q}{Q^2}\,.   (5.296)
As before, Qµ = p̃µi + p̃µk = pµi + pµj + pµk is the total momentum of the antenna —
the splitter and spectator — and Q2 = sijk . For massless partons the relevant phase
space can be parameterized by a generic Lorentz-invariant quantity which reduces to
transverse momentum in a suitable frame and a generalized rapidity of the emitted
parton:
k_\perp^2 = \frac{s_{ij}\, s_{jk}}{s_{ijk}} \qquad\text{and}\qquad y = \frac{1}{2}\,\log\frac{s_{ij}}{s_{jk}}\,.   (5.297)
Note that with the evolution parameters k⊥ and y as defined in Eq. (5.297), phase-
space boundaries in k⊥ , k⊥ > k⊥,c , translate into limits on y, ensuring finite inte-
gration volumes and, hence, a Sudakov form factor that can be evaluated in a fairly
straightforward way.
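The mapping between invariants and the variables of Eq. (5.297), together with the finite rapidity range at fixed k⊥ mentioned above, can be illustrated as follows. The explicit form of the rapidity limit is derived here from the simple massless condition s_ij + s_jk ≤ s_ijk and is an assumption of this sketch rather than a formula quoted from the text.

```python
import math

def antenna_kt2_y(s_ij, s_jk, s_ijk):
    # Evolution variables of Eq. (5.297) for an antenna branching.
    kt2 = s_ij * s_jk / s_ijk
    y = 0.5 * math.log(s_ij / s_jk)
    return kt2, y

def y_max(kt2, s_ijk):
    # Maximal |y| at fixed kt2 for massless partons, assuming s_ij + s_jk <= s_ijk:
    # with s_ij * s_jk = kt2 * s_ijk this gives 2 cosh(y) <= sqrt(s_ijk / kt2).
    ratio = 0.5 * math.sqrt(s_ijk / kt2)
    return math.acosh(ratio) if ratio >= 1.0 else 0.0
```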
1. Splitting kernels
Writing unnormalized differential splitting probabilities in striking similarity with
Eq. (2.2) as
dP_{\tilde i\tilde k\to ijk} = \frac{dk_\perp^2}{k_\perp^2}\; dy\; \frac{d\phi}{2\pi}\; K_{\tilde i\tilde k\to ijk}   (5.298)
allows corresponding splitting kernels to be defined. In antenna showers this is typ-
ically achieved by deducing them from suitably chosen matrix elements through
dP_{\tilde i\tilde k\to ijk} = dk_\perp^2\; dy\; \frac{d\phi}{2\pi}\; \frac{\big|M_{X\to ijk}\big|^2}{\big|M_{X\to \tilde i\tilde k}\big|^2}\,,   (5.299)

thereby extracting the 1/k_\perp^2 singularity from the real-emission matrix element.8
8 Ultimately, such an approach means that instead of exponentiating the singular terms only in
the Sudakov form factor, full matrix elements are used and exponentiated in a way similar to what
will be introduced as a matrix-element correction in Section 5.4.2. It is not hard to imagine that this
is the source of the success ARIADNE enjoyed in describing QCD data from LEP.
In the case of FF splittings, and for gluon emission off a quark–anti-quark an-
tenna, typically the matrix elements for γ ∗ → q q̄(g) are used. Using that for any
combination {l, m, n} of the three massless final-state vectors
this leads to
dP_{\tilde i\tilde k\to ijk} = dx_q\, dx_{\bar q}\, \frac{d\phi}{2\pi}\, \frac{C_F\alpha_s}{2\pi}\, \frac{x_q^2 + x_{\bar q}^2}{(1-x_q)(1-x_{\bar q})}
= \frac{ds_{\bar q g}}{Q^2}\, \frac{ds_{qg}}{Q^2}\, \frac{d\phi}{2\pi}\, \frac{C_F\alpha_s}{2\pi}\, \frac{(Q^2 - s_{\bar q g})^2 + (Q^2 - s_{qg})^2}{s_{\bar q g}\, s_{qg}}   (5.301)
= \frac{dk_\perp^2}{k_\perp^2}\, dy\, \frac{d\phi}{2\pi}\, \frac{2\,C_F\alpha_s}{2\pi}\, \frac{(1 - x_\perp e^{y})^2 + (1 - x_\perp e^{-y})^2}{2}\,.
has been employed. With similar considerations, and using slightly different processes, splitting kernels for other splittings such as qg → qgg, qg → q\bar q'q' etc. have been obtained for the FF case, for instance in [805].
For splittings involving initial-state particles, two approaches have been pursued.
First of all, in the original implementation of an antenna shower in ARIADNE,
initial-state radiation has been re-interpreted as final-state radiation. This is
achieved by replacing the initial-state partons of a given antenna by the corre-
sponding hadron remnants in the final state, with the typically enhanced phase-
space due to their larger momenta being compensated by tunable phase-space
cuts. The origin of this idea can be traced back to [164], where emissions off
an extended colour source, such as a hadron remnant in a collision induced by
incoming hadrons, have been discussed.
An alternative approach, in parallel to the treatment in the other parton shower
algorithms, has been worked out in [895] and, in the framework of the VINCIA
code [577], in [826]. In this approach, emissions from the IF and II antenna are
treated in a standard perturbative language with suitably defined splitting kernels
obtained in a way very similar to the FF case.
2. Evolution and splitting parameters
The customary evolution parameter is the transverse momentum given in the form
of Eq. (5.297) or analogous for antennae with initial-state particles. Instead of an
explicit energy splitting parameter, usually, the rapidity of the emitted parton in
the c.m.- or Breit-frame of the emitting antennae is used, which of course can be
translated into light-cone splitting parameters.
3. Construction of kinematics
In the FF case, the construction of the splitting kinematics is best understood in
the rest frame of the antenna. In this frame the parameters xi , xj , and xk yield the
energy fractions of the outgoing partons with respect to the full antenna. Orienting
the original partons along the z axis and assuming them to be massless,
\tilde p_{i,k} = \frac{Q}{2}\,.   (5.303)
The first way, usually implemented, for instance in ARIADNE, is to have one of
the partons i and k keep its direction (given by ĩ or k̃) in the rest-frame of the
antenna and just rescale its momentum by the corresponding x. With the absolute
value of the transverse momentum fixed by k⊥ , only the azimuthal angle φ with
respect to the original axis must be selected. The parton which keeps its direction
typically is chosen in the following way. In the case of gluon emissions by a q q̄ or
a gg antenna, the less energetic of the two partons, i.e. the one with the smaller x
takes the transverse momentum, while the more energetic one, the one with the
larger x, just has its momentum compressed by its x. If the gluon is radiated off a
qg or a q̄g antenna, it is always the quark that retains its direction. Finally, in the
case of a gluon splitting into quarks it is always the other parton that keeps its
direction. Alternatively, phase-space mappings used in antenna subtraction for
NLO calculations [430, 561–563] could be employed. Emissions from IF and II
antennae are treated in very different ways in different realizations of the antenna
shower paradigm, although the same reasoning as for the dipole showers applies.
Care must be taken to ensure that these splittings continue to transfer transverse
momentum to the full final state in order to ensure the logarithmic accuracy in
the description of, say, the transverse momentum of Drell–Yan pairs or similar.
captures NLL effects in the parton shower evolution. The issue of formal accuracy is
not as clean-cut for the case of a gluon splitting into a quark–anti-quark pair. This is
because, formally speaking, this kind of process first appears at NLL accuracy such
that the freedom in the evolution parameter may yield a sub-leading effect related to
that choice. The same reasoning also applies to the choice of renormalization scale.
The effect of different choices on the invariant mass distribution of secondary quark
pairs is quite large, and especially so for the gluon splitting into heavy quarks, where
the differences of relative transverse momentum of the two quarks and their invariant
mass can be quite large.
Furthermore, there are a number of other obstacles when trying to analyse the
formal accuracy in the radiation pattern produced by a given parton shower. There
is a subtle way recoil schemes — the way the splitting kinematics is constructed for
each branching — impacts on observables and, possibly, on the logarithmic accuracy.
The most obvious example has already been discussed. For dipole or antenna showers,
systems made from an initial-state splitter with a final-state spectator naively may
decouple their kinematics from the rest of the parton ensemble and, in particular, from
the other final-state particles. As a consequence, in this case the transverse momentum
of the individual splitting is no longer transferred to the other final-state particles, and,
consequently, the resummation of logarithms in transverse momentum breaks down.
Correspondingly, it must be stressed that parton showers automatically implement
detailed four-momentum conservation in each emission of an additional parton. In
contrast, by far and large, this is not the case in analytic resummation. In most cases
the effects of momentum conservation induce non-logarithmic corrections (“power–
corrections”) of the form k⊥ /Q with Q a typical scale related to the splitter–spectator
pair. It is not entirely clear if (and how) such effects produce additional logarithms
when convoluted over many emissions.
To understand the dynamics provided by the parton showers in more detail, and to
develop the formalism further, consider the differential cross-section for the emission
of the first — typically the hardest — parton off a core process, modelled at the Born
level. Note that, due to its probabilistic nature, the parton shower does not change
the Born-level cross-section for a state with N external particles given by Eq. (3.1).
However, the process at hand and the specifics of its parton configuration, given by
their flavours and momenta, will influence the parton shower by providing it with the
scale µQ , the upper limit for further parton emissions. Thereby, this scale µQ also
defines the hard scale for the logarithms that are resummed by the parton shower
evolution.
The radiation pattern up to the first emission is given by the parton shower as
d\sigma_N^{(\mathrm{Born})} = d\Phi_B\, B_N(\Phi_B)\,\Bigg\{\Delta_N^{(K)}(\mu_Q^2, t_c) + \int\limits_{t_c}^{\mu_Q^2} d\Phi_1\, K_N(\Phi_1)\, \Delta_N^{(K)}\big(\mu_Q^2, t(\Phi_1)\big)\Bigg\}\,.   (5.304)
Here, combined splitting kernels for emissions off an N-body state, K_N(\Phi_1), are introduced and read

K_N(\Phi_1) = \sum_{\{ij;k\}\in N} K_{ij;k}(\Phi_1)\,.   (5.305)
In this equation, of course, the sum over \{ij;k\} covers all viable combinations of emitting, emitted, and spectator partons. The exponential nature of the Sudakov form factors allows the introduction of compound Sudakov form factors,

\Delta_N^{(K)}(T, t) = \prod_{\{ij;k\}\in N} \Delta_{ij;k}^{(K)}(T, t)\,.   (5.306)
Furthermore, t(Φ1 ) is the parton shower scale associated with the emission phase space
given by Φ1 .
The first term in the curly bracket above gives the no-emission probability down
to the parton shower cut-off scale tc , while the second term takes into account the first
(hardest) emission at t with the no-emission probability at higher scales again encoded
in the Sudakov form factor. The sum of the two terms integrates to unity reflecting the
Born configuration to either emit a parton or not. This simple probabilistic reasoning
typically is dubbed the “unitarity” of the parton shower. As a consequence, the cross-
section thus simulated is identical with the Born level one, while the pattern of the
first emission is determined by the parton shower.
Looking at the second term, the emission term, however, it is clear that further
emissions at lower scales should also be included. In fact, they emerge by iterating the
curly bracket in an appropriate fashion, leading to an expression of the form
d\sigma_N^{(\mathrm{Born})} = d\Phi_B\, B_N(\Phi_B)\,\Bigg\{\Delta_N^{(K)}(\mu_Q^2, t_c) + \int\limits_{t_c}^{\mu_Q^2} d\Phi_1\, K_N(\Phi_1)\, \Delta_N^{(K)}\big(\mu_Q^2, t(\Phi_1)\big)
\times\Bigg[\Delta_{N+1}^{(K)}(t, t_c) + \int\limits_{t_c}^{t} d\Phi_1'\, K_{N+1}(\Phi_1')\, \Delta_{N+1}^{(K)}\big(t(\Phi_1), t'(\Phi_1')\big)
\times\Big[\Delta_{N+2}^{(K)}(t', t_c) + \dots\Big]\Bigg]\Bigg\}\,.   (5.307)
This is the explicit manifestation of the idea that in the soft and collinear limits
multiple emissions off a given parton configuration can be constructed recursively,
taking advantage of the factorization of matrix elements and the corresponding phase
space,
d\Phi_{N+1}\, B_{N+1}(\Phi_{N+1}) = d\Phi_N\, B_N(\Phi_N)\, K_N\, d\Phi_1\, \Theta\big(\mu_Q^2(\Phi_N) - t(\Phi_1)\big)\,,   (5.308)
for the Born-level cross-section and parton configuration, dressed with all possible
emissions.
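The iterated structure of Eq. (5.307) translates directly into the basic event loop of a shower: each accepted emission restarts the evolution from its own scale until the cut-off is reached. The toy no-emission probability below, with a constant effective kernel c_eff, is purely illustrative and stands in for the full Sudakov sampling.

```python
import random

def toy_next_scale(t_start, t_cut, c_eff=0.2):
    # Toy Sudakov: no-emission probability (t/t_start)**c_eff, solved for t.
    t = t_start * random.random() ** (1.0 / c_eff)
    return t if t > t_cut else None

def shower_event(mu_q2, t_cut, c_eff=0.2):
    # Generate the ordered emission scales of one event, iterating the bracket
    # of Eq. (5.304) as in Eq. (5.307): each emission opens a new evolution
    # window bounded from above by the scale of the previous emission.
    scales, t = [], mu_q2
    while True:
        t = toy_next_scale(t, t_cut, c_eff)
        if t is None:
            return scales            # no further resolvable emission
        scales.append(t)

if __name__ == "__main__":
    # Shower unitarity: every event terminates, only its multiplicity varies,
    # so the total (Born) cross-section is left untouched by the shower.
    events = [shower_event(mu_q2=1.0e4, t_cut=1.0) for _ in range(10000)]
    print(sum(len(e) for e in events) / len(events))  # average multiplicity
```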
5.4.1 Motivation
Results from fixed-order matrix elements and resummation incorporated in the parton
shower provide good descriptions of essential event characteristics in complementary
regions of phase space. The same also holds true for analytic resummation techniques,
like the QT resummation scheme discussed in Section 5.2. There this complementar-
ity results in supplementing the resummation part in W̃ij , Eq. (5.60), with a hard
remainder part Yij . It encodes the difference between the logarithmically enhanced
contributions from the Sudakov form factor and the full fixed-order extra emission
part of the real correction. In addition, virtual corrections can be encoded by adding
the loop correction; in QT resummation this is the term Hab , which quite often is
absorbed as a part of W̃ij . It thus allows the systematic correction of the approximate
resummation result, order by order, to the full result. In particular, the total cross-
section of the process can be recovered, order-by-order, and the radiation pattern of
hard emissions approaches the fixed-order result.
This pattern can also be found in parton shower simulations, where the effect of
corrections to the fixed-order results becomes most prominent in observables sensitive
to additional emissions, real or virtual. For example, due to its probabilistic nature,
the parton shower fails to account for any higher-order effects impacting on the total
cross-section of the simulated process. This could be cured trivially by multiplying
the cross-section fed into the parton shower with a suitable global K-factor. Such
a treatment would provide a fairly satisfying solution to the problem, provided the
patterns of additional particle radiation of the underlying fixed-order calculation and
the parton shower were not visibly different. This, however, is not necessarily the
case, and typically parton showers and fixed-order calculations differ substantially in
large parts of the emission phase space. In fact, fixed-order calculations are primed
to correctly describe the emission of additional, highly energetic particles at large
angles and typically fail to describe the softer and more collinear emissions due to the
occurrence of associated large logarithms which eventually may overcome the smallness
of the perturbative parameter, the coupling constant. In contrast, the parton shower,
constituting an expansion around the soft and collinear limits of particle radiation, excels at describing such emissions, while it typically is incapable of taking into account
the more complicated pattern of hard emissions. Stated in a slightly more extreme way:
in order to capture the full effect of the quantum nature of particle emission, quantum
field theoretical methods have to be applied — in practical terms this typically means
that full matrix elements must be calculated. They can then be used to correct the
classical parton shower picture in a systematic way, similar to the way this is achieved
in analytic resummation.
Broadly speaking, methods to combine parton showers and fixed order matrix
elements aim at combining the best features of both: of exact order-by-order calcula-
tions, which capture all quantum interferences and, possibly, higher-order effects due
to virtual corrections, and of the parton shower, which, depending on its formulation,
provides a simulation of further soft and collinear emissions to leading or next-to-
leading order accuracy. The aim of such a combination exercise always is to maintain
the fixed-order accuracy for the overall cross-section. At the same time, the fixed-order
accuracy of the hardest emissions should be guaranteed, but supplemented with those
leading logarithms that are captured by the parton shower’s Sudakov form factor. In
addition, all further, softer or more collinear emissions must still be described at the
intrinsic accuracy of the parton shower.
The persistent problem in any combination procedure, however, is that both the
matrix elements and the parton shower may allow for the emission of additional partons
off a core process, which would lead to an unwanted double-counting if not properly
taken into account.
In order to appreciate fully the difference of fixed-order calculations and parton
showers and the problem underlying any combination of the two, consider the diagram
in Fig. 5.16, where the orders of αs and the accompanying logarithms L are depicted
for the case of resolvable parton emissions in e− e+ → q q̄ + X. One could think of
such resolvable partons as jets, defined by a suitable algorithm. Obviously, for every
additional jet emission, the order of αs is incremented by one, and up to two new
logarithms could emerge. This is the pattern of orders in αs and L at leading order;
inclusive higher-order corrections typically only add an order in the coupling without
necessarily introducing new logarithms. Therefore, in this diagram the fixed-order
matrix elements, being of a fixed order in αs , live on one (leading order only) or more
(higher-order corrections) vertical lines; in contrast, the parton shower, resumming
terms of the form αs L2n and possibly also terms αs L2n−1 , occupies the diagonals.
These lines, vertical and diagonal, cross. This indicates a double-counting, which could
either be positive, as an over-counting, when contributions from both are wrongly
added, or negative, when contributions are missed in both.
In this and the following sections, a variety of existing methods to include higher
fixed-order terms into the parton shower will be presented, starting with a discus-
sion of matrix-element corrections (MEC) [215, 416, 417, 786, 843] in this sec-
tion. This method effectively allows the inclusion of the full O (αs ) kinematics into
the parton shower, but without including the effect of the O (αs ) on the total rate.
This will be achieved in the next section by introducing existing NLO matrix-element
parton-shower matching algorithms (NLOPS) [543, 546, 630, 782]. An alterna-
tive approach, aiming at combining multiple fixed-order calculations into one inclu-
sive sample is by now known as multijet merging methods at leading order
(MEPS) [351, 695, 730, 744] and at next-to-leading order (MEPS@NLO) [535,
558, 631, 737]. Finally, an outlook is given to recently devised methods to even in-
clude NNLO matrix elements for simple processes into the parton shower simulation
(NNLOPS) [610, 633, 634, 652]. In most cases technical details will be ignored in
favour of clarity of the presentation. Readers interested in more technical and im-
plementation details and proofs of the respective accuracies are referred to the vast
literature on this subject.
where RN denotes the real correction to the process with N external particles, i.e. a
matrix element for (N + 1) external particles at Born level. The trick of the method
is to assume a modified splitting kernel K̃N , given by
and to use this kernel for the description of the first emission in the parton shower.
It should be stressed, though, that rather than evaluating the real-emission term RN
at a fixed scale, it is customary to use the same kinematics-dependent scale as in the
parton shower.
This implies that the equation for the Born cross-section including the first emission
through the parton shower of Eq. (5.304) in the section above becomes
d\sigma_N^{(\mathrm{Born})} = d\Phi_B\, B_N(\Phi_B)\,\Bigg\{\Delta_N^{(\tilde K)}(\mu_Q^2, t_c) + \int\limits_{t_c}^{\mu_Q^2} d\Phi_1\, \tilde K(\Phi_1)\, \Delta_N^{(\tilde K)}\big(\mu_Q^2, t(\Phi_1)\big)\Bigg\}
= d\Phi_B\, B_N(\Phi_B)\,\Bigg\{\Delta_N^{(R/B)}(\mu_Q^2, t_c) + \int\limits_{t_c}^{\mu_Q^2} d\Phi_1\, \frac{R_N(\Phi_B\times\Phi_1)}{B_N(\Phi_B)}\, \Delta_N^{(R/B)}\big(\mu_Q^2, t(\Phi_1)\big)\Bigg\}\,.   (5.314)
Again, the terms in the curly brackets integrate to unity, indicating that the simulated
cross-section is identical to the Born level one. This time, however, the radiation
pattern is determined by the full real radiation matrix element as encoded in RN
rather than by the parton shower. This can trivially be seen by expanding the emission
term in Eq. (5.314) up to first order in the coupling, keeping in mind that RN has one
more power in αs than the corresponding Born term BN , and that effects of running
αs are of higher order as well. All further emissions are still driven by the original
parton shower kernels.
In practical implementations, usually the original splitting kernels and parton
shower evolution parameters are used, and the parton shower is corrected to the mod-
ified splitting kernel through simple reweighting. Algorithmically, this means that, first, test emissions are generated by the original parton shower and then accepted with a probability given by

P_{\mathrm{MEC}} = \frac{\tilde K_N(\Phi_1)}{K_N(\Phi_1)} = \frac{R_N(\Phi_B\times\Phi_1)}{B_N(\Phi_B)\times K_N(\Phi_1)}\,.   (5.315)
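In code, the reweighting of Eq. (5.315) is a single accept/reject step on top of the trial emissions produced by the shower; rejecting a trial and continuing the evolution downwards reproduces the modified Sudakov form factor of Eq. (5.314). The sketch below assumes that the acceptance ratio never exceeds unity (otherwise the trial kernel must first be enhanced), and all callables are hypothetical stand-ins.

```python
import random

def mec_accept(r_n, b_n, k_n, rng=random.random):
    # Accept a trial shower emission with probability R_N / (B_N * K_N),
    # cf. Eq. (5.315); the caller must guarantee that this ratio is <= 1.
    return rng() < r_n / (b_n * k_n)

def first_emission_with_mec(propose_emission, real_me, born_me, shower_kernel):
    # Keep evolving: rejected trials are simply skipped, which effectively
    # replaces the shower kernel by R/B in the Sudakov form factor.
    while True:
        phi1 = propose_emission()        # next trial emission, or None below cut-off
        if phi1 is None:
            return None
        if mec_accept(real_me(phi1), born_me(), shower_kernel(phi1)):
            return phi1
```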
d\Phi_g\, R_{q\bar q}(\Phi_{q\bar q}\times\Phi_g) = B_{q\bar q}(\Phi_{q\bar q}) \times \frac{C_F\alpha_s}{2\pi}\, \frac{x_1^2 + x_3^2}{(1-x_1)(1-x_3)}\, dx_1\, dx_3   (5.316)
where
x1,3 = 2Eq,q̄ /Ec.m. ∈ [0, 1] (5.317)
are the energy fractions the massless quark and anti-quark carry after gluon emission.
The gluon emission phase space, after performing the azimuthal integration, is given
by dΦg ∝ dx1 dx3 .
The parton shower expression has to be obtained from the details of the map
relating its variables to the splitting kinematics. For the case of a virtuality-ordered
parton shower with z defining the energy components in the splitting, the virtual mass
and energy-splitting variables are given by
d\Phi_g\, B_{q\bar q}(\Phi_{q\bar q})\times K(\Phi_g) = B_{q\bar q}(\Phi_{q\bar q}) \times \sum_{i\in\{q,\bar q\}} \frac{dt_i}{t_i}\, dz_i\, \frac{C_F\alpha_s}{2\pi}\, \frac{1+z_i^2}{1-z_i}
= B_{q\bar q}(\Phi_{q\bar q}) \times \frac{C_F\alpha_s}{2\pi}\, \frac{dx_1\, dx_3}{(1-x_1)(1-x_3)}
\times \left\{ \frac{1-x_1}{x_2}\left[1 + \left(\frac{x_1}{2-x_3}\right)^2\right] + \frac{1-x_3}{x_2}\left[1 + \left(\frac{x_3}{2-x_1}\right)^2\right] \right\}\,.   (5.319)
For the angular-ordered parton shower in HERWIG, the situation is a bit more compli-
cated due to the somewhat non-trivial phase-space limits, leading to
d\Phi_g\, B_{q\bar q}(\Phi_{q\bar q})\times K(\Phi_g) = B_{q\bar q}(\Phi_{q\bar q}) \times \frac{C_F\alpha_s}{2\pi}\, \frac{dx_1\, dx_3}{(1-x_1)(1-x_3)}
\times \left[1 + \left(\frac{x_1 + x_3 - 2}{x_1}\right)^{2} + \big(x_1\leftrightarrow x_3\big)\right],   (5.320)

with the phase-space limits x_1 > 1 - z(1-z) and x_3 > 1 - x_1 + z\,x_1.
\vec k_\perp^{\,2} = E^2\, y_{ij;k}\, z_i\,(1-z_i)
y_{ij;k} = \frac{p_i p_j}{p_i p_j + p_j p_k + p_k p_i} = 1 - x_k   (5.322)
z_i = \frac{p_i p_k}{(p_i + p_j)\,p_k} = \frac{1-x_j}{2-x_j-x_i} = \frac{x_i + x_k - 1}{x_k}\,.

\frac{d\vec k_\perp^{\,2}}{\vec k_\perp^{\,2}} = \frac{dy_{ij;k}}{y_{ij;k}}\,,   (5.323)
In all cases, the acceptance weight is given by the ratio of x_1^2 + x_3^2 and the terms in the
Fig. 5.18 The ratio of the matrix element and the parton shower ex-
pression for the differential emission cross section of an additional gluon
in quark–anti-quark pair production in lepton annihilations, Eq. (5.319)
(virtuality-ordered parton shower, upper left), Eq. (5.320) (angular or-
dered parton shower, upper right) and Eq. (5.324) (Catani-Seymour parton
shower, lower left).
curly brackets. Of course, further parton shower formulations may lead to yet different
weights, depending on the details of the map between the parton shower parameters
and the kinematical quantities parameterizing the matrix element.
The algorithm then is to generate an emission through the parton shower, and to
accept it with a probability given by the ratio of the two expressions above. Profiles
for this ratio for different parton shower implementations are depicted in Fig. 5.18.
5.4.2.3 Limitations
While this technology is fairly transparent and straightforward in its implementation,
it is limited in its applicability. First of all, it relies on the parton shower expression (or a suitable multiple of it) being larger than the corresponding exact matrix element in all of the emission phase space, which is not always the case. This is especially true for
production processes at hadron colliders, where the huge phase space for emissions off
the initial state is not necessarily filled by the parton shower, see below. In addition,
going to higher multiplicities the analytic parton shower expression becomes increasingly intractable, which translates into the problem that the rejection weight can no longer be constructed.
As already mentioned, there are situations where the parton shower does not entirely fill the phase space. There are two possible reasons:
1. In cases like the radiation of extra partons in production processes at hadron col-
liders, the emission phase space is typically constrained by an upper scale given
by the kinematics of the hard process. For example, in the production of vector
bosons, such a scale would be given by the (virtual) mass of the vector boson.
Since the transverse momenta in any parton emission generated by the parton
shower are bounded from above by the scale of the previous process, as a conse-
quence, only transverse momenta below the mass of the vector boson would be
generated — the parton shower would completely miss the large-p⊥ tail of the bo-
son kinematics. This problem naively could be cured by opening up the emission
phase space through a harder starting scale of the parton shower and a suitable
matrix element reweighting. This is sometimes referred to as “power shower” in
the literature, while sticking to the boson mass is dubbed “wimpy shower” [858].
However, the natural question then arises as to which scale logarithms would actually be resummed in the Sudakov form factor in each case, and it quickly becomes
apparent that in the power shower case the link to the standard analytic resum-
mation techniques is broken. There, the upper limit of the k⊥ –integral in the
Sudakov form factor is given by the boson mass, at odds with the power shower,
where the upper limit is given by the energy scale of the hadronic collision, a
violation of factorization theorems underlying all perturbative calculations.
2. Secondly, it is possible that the parton shower, while perfectly in line with re-
summation techniques, misses some regions in phase space by construction. This
is particularly true for angular-ordered parton showers like the ones implemented
in HERWIG [415, 750] and HERWIG++ [188, 579], see the upper-right panel of
Fig. 5.18 for the case of gluon radiation in e− e+ → q q̄. In this case the soft
matrix-element correction discussed so far needs to be supplemented with
a hard matrix-element correction. This essentially boils down to using the
matrix element to fill the phase-space region omitted by the parton shower. To
guarantee a smooth transition, in this hard correction renormalization and factor-
ization scale definitions as in the parton shower are used as well as some Sudakov
weight for the intermediate quark line.
where the sum over different subtraction terms is understood, and where the renor-
malized and infrared-subtracted virtual contribution has been combined into
\tilde V_N^{(S)}(\Phi_B) = V_N(\Phi_B) + I_N(\Phi_B)\,.   (5.327)
This construction only works out if the full real-emission phase space ΦR can be
written in the factorized form as
ΦR = ΦB ⊗ Φ1 . (5.328)
If this is not the case, the additional phase space is guaranteed to be infrared finite
and the correct result of Eq. (5.325) can be recovered by merely adding the difference,
a difficulty which will be ignored in the following. In their absence Eq. (5.326) indeed
yields the full NLO cross-section upon integration over the Born-level phase space ΦB .
The terms B̄ therefore can be interpreted as fully differential cross-sections of Born-
level configurations with a next-to-leading order weight, or, stated slightly differently,
as Born-level configurations modified by a local K factor. The next-to-leading order
accuracy of course would not be spoilt by any unitary parton shower added to it,
so one just has to ensure that the pattern of the first emission is correct up to first
order in αs in order to arrive at a fully NLO accurate simulation. Going back to the
technology of matrix-element corrections introduced in Section 5.4.2 indicates how this
can be achieved: the full real-emission matrix element for the first emission must be
used; this is easily achieved by essentially replacing the parton shower kernels K with
R/B. Therefore, combining the B̄ terms with the first-order correct radiation pattern
of Eq. (5.314) results in a simulation which is correct to first order of the coupling
for both the inclusive cross-section and for the emission of the hardest parton. This
essentially is the core of the POWHEG method introduced for the first time in [543, 782]
and in heavy use ever since.
In this matching method, the differential rate up to the hardest emission is given
by
d\sigma_N^{(\mathrm{NLO})} = d\Phi_B\, \bar B_N(\Phi_B) \times \Bigg\{\Delta_N^{(R/B)}(\mu_Q^2, t_c) + \int\limits_{t_c}^{\mu_Q^2} d\Phi_1\, \frac{R_N(\Phi_B\times\Phi_1)}{B_N(\Phi_B)}\, \Delta_N^{(R/B)}\big(\mu_Q^2, t(\Phi_1)\big)\Bigg\}\,.   (5.329)
It is straightforward to prove that this yields the correct NLO cross-section, since the
term in the second line integrates, again, to unity. In order to see that the radiation
of the first/hardest emission follows the exact real correction matrix element at first
order in the coupling, it suffices to note that the term RN /BN already is at first order
in the coupling. This allows one to ignore, at this accuracy level, all next-to-leading terms
in B̄N , i.e. all terms stemming from real or virtual corrections or the corresponding
subtractions, since they would yield terms of order O(αs2 ).
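A schematic rendering of Eq. (5.329) is the following: the Born configuration carries the local K-factor through its B̄ weight, and the hardest emission is generated from the R/B-weighted Sudakov form factor with a veto algorithm. The constant overestimate rb_max, the sampling of the radiation variables and all function names are assumptions of this sketch, not the actual POWHEG implementation.

```python
import random

def powheg_hardest_emission(phi_b, ratio_rb, rb_max, mu_q2, t_cut,
                            sample_radiation, rng=random.random):
    # Veto algorithm for the R/B-weighted Sudakov of Eq. (5.329):
    #  - trial scales t are drawn from an overestimated Sudakov with a constant
    #    integrand rb_max (per unit log t),
    #  - the remaining radiation variables are drawn by sample_radiation(t),
    #  - the trial is accepted with probability ratio_rb(phi_b, phi1) / rb_max.
    t = mu_q2
    while t > t_cut:
        t *= rng() ** (1.0 / rb_max)
        if t <= t_cut:
            break
        phi1 = sample_radiation(t)
        if rng() < ratio_rb(phi_b, phi1) / rb_max:
            return t, phi1             # hardest (first) emission
    return None                        # event stays at Born level

def powheg_event(phi_b, bbar, ratio_rb, rb_max, mu_q2, t_cut, sample_radiation):
    # Each event is weighted with the NLO-accurate B-bar and dressed with at
    # most one POWHEG emission; further, softer emissions are left to the shower.
    return {"weight": bbar(phi_b),
            "emission": powheg_hardest_emission(phi_b, ratio_rb, rb_max,
                                                mu_q2, t_cut, sample_radiation)}
```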
One subtlety ignored so far is the choice of the “right” renormalization and factor-
ization scales in both B̄N and in RN /BN . By and large it is advantageous to keep
to the choices made in the parton shower in the emission term, i.e. RN /BN . For the
integrated emissions in BN , on the other hand, it is probably better to keep choices in
line with the choices also made in B̄N and especially in the virtual part VN in order
not to spoil the exact cancellation of infrared divergences. In all cases, such choices are
essentially of higher order in αs and therefore do not hamper the fixed-order (NLO)
accuracy of the approach.
Similar to the case of matrix-element corrections, however, there are a number of
pitfalls beyond fixed-order accuracy. This is especially true for processes at hadron
colliders; as an example consider the case of Higgs boson production in gluon fusion.
First of all, as in the case of matrix-element corrections, it is not clear what scale to
pick as upper scale for the parton shower evolution. Standard resummation technology
suggests choosing a scale of the order of the Higgs boson mass mH as an argument
in the logarithms, i.e. µQ ≈ mH . This choice however does not allow a description
of transverse momenta of the Higgs boson in the high-p⊥ tail, at scales above mH .
Conversely, choosing µQ = mH in the equation above, Eq. (5.329), will automatically
constrain the phase space available for the hardest emissions to scales below mH .
This is at odds with maintaining O(αs ) accuracy over the full emission phase space.
Without modifying the algorithm outlined up to now, therefore a choice must be made
between logarithmic and fixed order accuracy of the approach in the high-p⊥ tails of
additional emissions.
Assuming that the full phase space is opened up for the hardest emission, i.e.
µQ → Ecms of the hadronic collision, a second question naturally arises. The local K-
factor encoded in B̄N corresponds to the inclusive production of the n-body final state,
and in particular after integrating out additional partons in RN . By construction it is
applied to all events, and in particular to those where the hardest emission is harder
than the typical scale related to the n-particle state. This of course is questionable,
since a priori the K-factors for n-particle and for (n + 1)-particle final states do not
coincide. It is of course possible that such discrepancies are not so large, and that
therefore the tails of large-p⊥ of the produced system (in the example here the Higgs
boson) are well described. An illustrative study of this problem is depicted in Fig. 5.19.
However, as can be seen in the left panel of this figure, the tail of the distribution
differs significantly from the fixed-order, i.e. NLO, result. Even more, comparing with
another NLO matching method, MC@NLO discussed in the next section, Section 5.4.4,
it appears as if the latter interpolates between a low-p⊥ regime, where a K factor is
applied for both, which therefore agree with each other, and the region of large p⊥ ,
where only the POWHEG simulation is modified by the K factor. In the right panel, this
difference is unambiguously traced back to the influence of the K-factor. Replacing
the Born-term in the denominator of the emission kernel R/B in the Sudakov form
factor with B̄, Eq. (5.329) schematically becomes
d\sigma_N^{(\mathrm{NLO})} \;\longrightarrow\; d\Phi_B\, \bar B_N\,\Bigg\{\Delta_N^{(R/\bar B)}(\mu_Q^2, t_c) + \int\limits_{t_c}^{\mu_Q^2} d\Phi_1\, \frac{R_N}{\bar B_N}\, \Delta_N^{(R/\bar B)}\Bigg\}\,,   (5.330)
where the by-now familiar phase-space arguments in the individual pieces have been
omitted. In this form, the higher-order enhancement is cancelled, and the p⊥ spectrum
of the NLO result is recovered. Keeping, on the other hand, the original form of
the emission kernel in Eq. (5.329), the result more closely resembles the NNLO result. It
appears however that this is purely a coincidence and not related to any systematic
improvement.
Fig. 5.19 POWHEG predictions for the Higgs boson transverse momentum, p_⊥^H. In the top panel the effect of the parton shower on p_⊥^H (POWHEG+HERWIG) is
compared with the NLO result and with the distribution where only the
first emission is simulated (POWHEG). In the lower panel a comparison is
made where R/B is replaced by R/B̄ and with the NNLO result. (Figures
taken from [139].)
since then. Specifically, the decomposition of phase space is achieved with a smooth
function. For the example case of Higgs boson production in gluon fusion it reads
R_N = R_N\,\frac{h^2}{p_\perp^2 + h^2} + R_N\,\frac{p_\perp^2}{p_\perp^2 + h^2} = R_N^{(S)} + R_N^{(H)}   (5.331)
with p⊥ the transverse momentum of the Higgs boson. The new parameter h will
typically be of the order of the relevant resummation scale of the underlying Born
configuration, in the example here therefore h ≈ mH . Alternatively it could be “tuned”
to exact higher-order calculations or calculations involving the resummation at higher
logarithmic accuracy.9
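The split of Eq. (5.331) amounts to a single damping factor. A minimal sketch, with h passed in as a parameter of the order of the resummation scale (h ≈ mH in the Higgs example):

```python
def split_real(r_n, pt2, h2):
    # Smooth decomposition of the real correction, Eq. (5.331):
    # R = R * h^2/(pt^2 + h^2) + R * pt^2/(pt^2 + h^2) = R_S + R_H.
    damp = h2 / (pt2 + h2)
    return r_n * damp, r_n * (1.0 - damp)

# Well below pt ~ h the emission is treated as soft (exponentiated), well above
# it is handed over entirely to the hard, unexponentiated remainder.
```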
Omitting again the phase-space arguments in the various parts, the differential
rate up to the first emission in this improved formalism therefore reads
d\sigma_N^{(\mathrm{NLO})} = d\Phi_B\, \tilde B_N\,\Bigg\{\Delta_N^{(R^{(S)}/B)}(\mu_Q^2, t_c) + \int\limits_{t_c}^{\mu_Q^2} d\Phi_1\, \frac{R_N^{(S)}}{B_N}\, \Delta_N^{(R^{(S)}/B)}(\mu_Q^2, t)\Bigg\} + d\Phi_R\, R^{(H)}\,.   (5.332)
The first line describes emissions in the soft regime, multiplied by the modified K-
factor. For the example of Higgs boson production, it thereby experiences an enhance-
ment. The second line describes the radiation in the hard regime, and it is not modified
by any K-factor.
scale is provided globally for the full parton ensemble, which may therefore have a
meaning which in extreme cases may lead to sizable mismatches. Consequently, this
leads to an additional source of uncertainty purely related to the matching, which
could and possibly should be systematically assessed.
The obvious way out is to ensure that the definition of the hardness scale or
transverse momentum in the fixed-order part of the simulation is identical to the
evolution parameter in the parton shower. Alternatively, if this cannot be achieved,
one could invoke truncated showering, already introduced in [782]. There, it was
noticed that the parton shower implementation in HERWIG uses angles instead of
transverse momenta as the evolution parameter. This leads to a situation where the
first emissions in the parton shower typically are large-angle emissions of rather soft
partons which will often result in a relatively small transverse momentum. At the same
time, the more collinear splittings into partons with larger energies quite often appear
towards the end of the shower evolution — implying that the hardest splitting, i.e.
the one with the largest transverse momentum, can essentially happen at all stages
of the evolution. This has to be accounted for in the matching, in order not to upset
the resummation of logarithms in the parton shower. The only way to achieve this
is to allow the parton shower to emit partons at larger angles, but lower transverse
momenta relative to the hardest emission fixed by the POWHEG formalism, which in
turn must be inserted at some point into the parton shower evolution. Such a strategy
in principle could be employed whenever there is a mismatch of evolution and hardness
scales in the parton shower and fixed-order parts of the simulation.
Historically, the first solution of how to match NLO matrix elements with the par-
ton shower has been provided by the MC@NLO method, pioneered in [546]. While,
somewhat loosely speaking, the POWHEG method is nothing but a matrix-element cor-
rection method supplemented with local K-factors, the MC@NLO method is closer in
spirit to analytic resummation. Similar to the way the calculation is organized there,
the real emission correction is decomposed into a part driven by Sudakov form factors
and realized by the parton shower, and a hard remainder. And, while the former will
experience higher-order corrections, like in QT resummation at NLO+NLL accuracy,
the latter will not. In particular, the decomposition in MC@NLO is given by
R_N(\Phi_R) = R_N^{(S)}(\Phi_R) + R_N^{(H)}(\Phi_R) = S_N(\Phi_B\otimes\Phi_1) + H_N(\Phi_R)\,.   (5.334)
The catch in MC@NLO is to identify the subtraction terms with the shower kernels
such that, symbolically,
S_N(\Phi_B\otimes\Phi_1) \;\equiv\; B_N(\Phi_B)\otimes\sum_{ijk} K_{ij;k}(\Phi_1) = B_N(\Phi_B)\otimes K(\Phi_1)\,.   (5.335)
The MC@NLO version for the differential rate up to the first emission thus is given by
d\sigma_N^{(\mathrm{NLO})} = d\Phi_B\, \tilde B_N(\Phi_B)\,\Bigg\{\Delta_N^{(K)}(\mu_Q^2, t_c) + \int\limits_{t_c}^{\mu_Q^2} d\Phi_1\, K(\Phi_1)\, \Delta_N^{(K)}\big(\mu_Q^2, t(\Phi_1)\big)\Bigg\} + d\Phi_R\, H_N(\Phi_R)\,,   (5.336)
The virtual part of the NLO correction is applied only to those emissions that follow
the kinematics given by the parton shower. In turn the phase space of such emissions is
guaranteed to be in line with its leading logarithmic pattern. Since, by construction, the
hard remainder HN does not contribute any terms at the same logarithmic accuracy as
the parton shower, the logarithmic accuracy of the latter is automatically maintained.
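Schematically, an MC@NLO generator therefore produces two event classes, following the two terms of Eq. (5.336): S-events, weighted with B̃ and showered with the standard kernels, and H-events drawn from the finite hard remainder. The sketch below only illustrates this bookkeeping; all inputs, including the possibility of negative B̃ weights, are placeholders.

```python
import random

def mcatnlo_event(phi_b, btilde, shower_first_emission, sample_hard,
                  sigma_s, sigma_h, rng=random.random):
    # Choose the event class with relative frequency sigma_S : sigma_H,
    # following the two terms of Eq. (5.336).
    if rng() < sigma_s / (sigma_s + sigma_h):
        return {"type": "S",
                "weight": btilde(phi_b),                   # may be negative
                "emission": shower_first_emission(phi_b)}  # standard kernels K
    phi_r, weight = sample_hard()                          # finite remainder H_N
    return {"type": "H", "weight": weight, "config": phi_r}
```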
Similar to its counterpart in QT resummation, the hard emission term in the second
line of Eq. (5.336) therefore has a dual role. It firstly corrects the hardest emissions from
the parton shower such that they follow at fixed order the exact matrix element. At the
same time it fills emissions in those regions which are inaccessible by the parton shower
with the exact leading-order pattern. Therefore the MC@NLO method also fulfills the
fixed-order requirements for a successful NLO matching of matrix elements and the
parton shower. It can trivially be seen that it also maintains the logarithmic accuracy
of the parton shower, and potential problems with the arguments of the resummation
related to the choice of the parton shower starting scale are avoided by construction:
the parton shower starts at exactly the same scale as it would without the matching.
At all orders, and with the help of the all-emissions operator E_N^{(K)} from Eq. (5.310),
the expression in Eq. (5.336) can be rewritten as
dσ_N^{(NLO)} = dΦ_B \tilde{B}_N(Φ_B)\, E_N^{(K)}(µ_Q^2, t_c) + dΦ_R\, H(Φ_R)\, E_{N+1}^{(K)}(µ_H^2, t_c) ,   (5.338)
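To make the structure of Eq. (5.336) more tangible, the following toy Python sketch illustrates how such an expression might be sampled: events are classified as S-events, for which the first emission is generated from the shower kernel with its Sudakov form factor, or as H-events, whose kinematics follow the hard remainder. The one-dimensional kernel, the analytically invertible Sudakov and all numbers are illustrative assumptions, not the MC@NLO implementation of any particular code.

import random

# Toy sketch of sampling the first emission in an MC@NLO-like scheme,
# cf. Eq. (5.336): S-events start from the NLO-weighted Born configuration
# and generate the first emission from the shower kernel K with the
# corresponding Sudakov form factor; H-events follow the hard remainder.
# The kernel K(t) = A/t and all numbers below are illustrative assumptions.

A_KERNEL = 0.2              # toy strength of the splitting kernel K(t) = A/t
MU_Q2, T_C = 100.0**2, 1.0  # shower starting scale (squared) and cut-off

def first_emission_scale(rng):
    """Generate the scale of the first emission below MU_Q2; for this toy
    kernel the Sudakov is Delta(MU_Q2, t) = (t/MU_Q2)**A_KERNEL, which can
    be inverted directly. Returns None if the emission falls below t_c."""
    r = rng.random()
    t = MU_Q2 * r ** (1.0 / A_KERNEL)   # solves Delta(MU_Q2, t) = r
    return t if t > T_C else None

def generate_event(rng, sigma_S, sigma_H):
    """Pick an S- or H-event according to their cross-sections, then
    for S-events attach the first shower emission (or none)."""
    if rng.random() < sigma_S / (sigma_S + sigma_H):
        return ("S", first_emission_scale(rng))
    return ("H", None)  # hard-remainder event: kinematics from H_N itself

rng = random.Random(1)
print([generate_event(rng, sigma_S=9.0, sigma_H=1.0) for _ in range(3)])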
divergences by modifying the leading colour subtractions. However, at fixed order, the
hard remainder term HN will modify this averaging and it will thereby guarantee that
the correct resolvable radiation pattern at NLO accuracy is always recovered.
Alternatively, it is also possible to modify the emission kernels for the first emission
in such a way that the full colour structure is directly recovered [630]. To do this, the
summing over colour structures inherent in the construction of the splitting kernels
usually employed in parton showers has to be undone, allowing all colour terms to be
recovered. In the framework of a parton shower based on Catani–Seymour splitting ker-
nels [466, 778, 839], this translates into replacing the colour-averaged kernels discussed
in Section 5.3.2 with a sum over all dipole terms D̃ij;k . Phrased in other words, the
sum must go over all splitters ij and all spectators k irrespective of whether they are
colour-connected or not. Thereby structures like the colour-insertion operators Tij ·Tk
will appear, made explicit for instance in Eq. (3.167) for the case of two final-state
particles.
Upon evaluation in the explicit case, these operators however may become nega-
tive, leading to a negative weight for a splitting in this particular configuration. By
construction, the sum over all Tk will lead to −Tij due to colour conservation in any
physical parton ensemble, and thus the splitting weights will be positive overall. How-
ever, in certain directions, i.e. for certain spectators k, the admixture of positive and
negative terms will result in a negative contribution. At fixed order, such terms
do not pose a problem. But in addition to the correct fixed-order result, in the imple-
mentation in [630], these sub-leading colour structures are also part of the Sudakov
form factor, thereby accounting to some degree also for effects beyond fixed order. The
technical problem there is related to the fact that negative splitting weights lead to
negative arguments in the Sudakov form factors. This apparent violation of a prob-
abilistic interpretation manifests itself naturally in Sudakov form factors larger than
unity. Such “anti-probabilistic” features necessitate a modification of the showering
algorithm, cf. [630, 809] for technical details.
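To give an idea of the kind of modification alluded to above, the sketch below shows one possible weighted accept/reject step for a kernel that may turn negative: trial emissions are generated from a positive overestimate, acceptance happens with a fixed probability, and the exact distribution is restored in expectation by event weights that may exceed unity or change sign. This is only a schematic illustration of the bookkeeping, not the specific algorithms of [630, 809].

import random

# Schematic weighted accept/reject step for a splitting kernel K that may
# be negative (e.g. through sub-leading colour insertions). Trials are
# generated from a positive overestimate g >= |K|; acceptance happens with
# a fixed probability EPS and the exact distribution is restored by an
# event weight that can exceed unity or change sign. Illustration only.

EPS = 0.5  # fixed, strictly positive acceptance probability (assumption)

def weighted_accept(kernel_value, overestimate, rng):
    """Return (accepted, weight) for one trial emission."""
    ratio = kernel_value / overestimate        # may be negative
    if rng.random() < EPS:
        return True, ratio / EPS               # accepted trial
    return False, (1.0 - ratio) / (1.0 - EPS)  # rejected trial

rng = random.Random(42)
# A trial with a negative kernel value: the weight carries the sign.
print(weighted_accept(kernel_value=-0.3, overestimate=1.0, rng=rng))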
While in most cases the effects of including such sub-leading terms are small, there
are some observables for which they are surprisingly large. An example is provided by
the case of the forward–backward asymmetry AFB in tt̄ production at the TEVATRON,
where such effects have been studied for instance in [627].
shower algorithms that may not fill the full emission phase space, like, for instance, the
angular-ordered parton showers implemented in HERWIG [415] and HERWIG++ [188].
Typically, these “holes” in the emission phase space are filled through the hard matrix
element corrections, which, by a combination of clever scale choices in αs analogous
to the parton shower and, eventually, some Sudakov weights will smoothly connect to
the parton shower description in the transition between the two regimes.
In MC@NLO, life may not be quite as simple. As the cross-section in the regime
of soft emissions, treated by the parton shower, is modified by the local K factor —
essentially by the term Ṽ — this smooth transition may be lost and a mismatch
in the radiation pattern may emerge. The emergence of a similar pattern can also be
observed in the MC@NLO method as implemented in SHERPA [630] by constraining the
real emission phase space by measures that are incompatible with the parton shower
evolution.
showers that are ordered in k⊥ or opening angles correctly resum those next-to-leading
logarithms log(Q2⊥ /Q2 ) in singlet production, which are also present in the Sudakov
form factors found in analytic resummations of the same quantity.
In addition, a successful matching with fixed-order calculations at next-to-leading
order also includes all terms appearing in the collinear and hard contributions —
the formal accuracy of such samples therefore is NLO+NLL in the language of QT
resummation based on the Collins–Soper–Sterman approach.
in the Sudakov region. This in fact has been worked out in more detail by the
authors of the MINLO method in [609, 611], see Section 5.6.1, who combined the scale-
setting prescription of multijet merging methods discussed below with a corresponding
Sudakov reweighting, allowing them to push Qcut to values around or below O (1 GeV),
the infrared cut-off of the parton shower.
Going back to Fig. 5.16, the picture there as applied to multijet merging at leading
order translates into using the vertical lines, corresponding to fixed-order expressions
for jet production, with the number of jets increasing with the order of αs and to
combine them with the terms populating the diagonals. Immediately, the problem
of double-counting of some terms becomes apparent; it should be stressed that this
double-counting in general can be constructive, i.e. the same terms are counted
twice, or destructive, with some terms not covered at all. In order to remedy this
problem, a dual strategy comes into play. First of all, the matrix-element expressions
are evaluated with suitable scale choices for the strong coupling, in such a way that
their counterparts in the parton shower are emulated. In addition, they are weighted
with suitable Sudakov form factors. With the interpretation of the Sudakov form
factors as no-emission probabilities in mind, this second step transforms the matrix
elements, which describe the inclusive production of an N -jet system plus anything else
into matrix elements that describe the exclusive production of an N -jet system only,
with no further resolvable emission above Qcut . At the same time, the parton shower
is modified such that any further jet emissions are vetoed. There are various ways of
achieving this, which impact differently on the parametric accuracy provided by the
matrix elements and, in particular, by the parton shower. While maintaining the fixed
order accuracy of the former is fairly straightforward, the logarithmic accuracy of the
parton shower is harder to conserve.
The idea underlying actual algorithms can easily be understood going back to Sec-
tion 2.3.3, where k⊥ -jet rates in electron–positron annihilations to hadrons were ana-
lytically resummed, cf. Eqs. (2.184) and (2.186). With the interpretation of Sudakov
form factors as no–emission probabilities, the two- and three-jet rates, i.e. the proba-
bility for emitting no or only one jet, can be approximated, at logarithmic accuracy,
as products of such no-emission probabilities and the terms relevant for the emission
of a single parton,
R_2(Q_{cut}) = ∆_q^2(µ_Q^2, Q_{cut}^2)

R_3(Q_{cut}) = 2 ∆_q(µ_Q^2, Q_{cut}^2) \int_{Q_{cut}^2}^{µ_Q^2} \frac{dq_⊥^2}{q_⊥^2}\, \frac{C_F α_s(q_⊥^2)}{π}\, Γ_q(µ_Q^2, q_⊥^2)
              \Bigg[ \Bigg( \frac{∆_q(µ_Q^2, Q_{cut}^2)}{∆_q(µ_Q^2, q_⊥^2)} \Bigg)\, ∆_q(q_⊥^2, Q_{cut}^2)\, ∆_g(q_⊥^2, Q_{cut}^2) \Bigg] .   (5.339)
The integrated splitting kernels Γq,g have already been introduced in Eq. (2.181),
leading to Sudakov form factors ∆q,g , which, depending on whether αs is taken as
fixed or running, assume the form of Eq. (2.182) or Eq. (2.183). The upper limit µQ of
the Sudakov form factors, and correspondingly of the integration over the transverse
momentum of the emitted gluon, is usually identified with the only hard scale of the
process, the centre-of-mass energy Ecms of the e− e+ pair. Since R2 and R3 are jet
rates, they are normalized to the total hadron production cross-section. Therefore the
production of the original quark–anti-quark pair in the electron–positron annihilation,
proceeding through electroweak interactions, has been factored out. The relationship
of the corresponding approximate cross-sections at O(αs ) is given by
R_3(Q_{cut}) = 1 − R_2(Q_{cut}) = 2 \int_{Q_{cut}^2}^{µ_Q^2} \frac{dq_⊥^2}{q_⊥^2}\, \frac{C_F α_s(q_⊥^2)}{π}\, Γ_q(µ_Q^2, q_⊥^2) + O(α_s^2) .   (5.340)
Remembering that Γq is nothing but the integrated splitting kernel, it becomes quickly
apparent that this expression indeed is the O(αs )–approximation of a k⊥ -ordered par-
ton shower to the respective cross-section.10
In the parton shower language, Eq. (5.340) can be cast into
R_3(Q_{cut}) = \int_{Q_{cut}^2}^{µ_Q^2} dΦ_1 \Big[ K_{qg;q̄}(Φ_1) + K_{q̄g;q}(Φ_1) \Big] + O(α_s^2) = \frac{σ_3^{(res)}(Q_{cut})}{σ^{(tot)}} .   (5.341)
In order to improve this description of the three-jet rate as provided by the par-
ton shower with exact tree-level matrix elements, the expressions in the integral of
Eqs. (5.340) and (5.341) merely have to be replaced with the exact matrix elements.
Choosing the scale of αs in the matrix element as in the parton shower also includes
the corresponding logarithmic terms. This idea has already been encountered in Sec-
tion 5.4.2, where matrix-element corrections to the hardest emission were discussed.
Using them, the exact radiation pattern up to O(αs ) was recovered by reweighting the
parton shower with the fixed-order matrix element at the same perturbative order.
In contrast, multijet merging algorithms start from a matrix element, with an
inverted logic: instead of the parton shower being reweighted with the matrix elements,
the matrix elements are modified by terms stemming from the resummed expression.
This is achieved through a combination of adjusted scales of αs and reweighting with
the Sudakov suppression factors which have been omitted in the fixed-order expressions
in Eqs. (5.340) and (5.341). As a result, the procedure resums the same logarithms
as the parton shower does, but, in addition, reproduces the exact fixed-order result
provided by the tree-level matrix elements.
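Schematically, the reweighting of an n-jet tree-level matrix element thus amounts to a product of coupling ratios and Sudakov form factors evaluated at the nodal scales of a reconstructed emission history. The sketch below strings these factors together, using a one-loop running coupling and the fixed-coupling quark Sudakov of Eq. (5.346) purely as stand-ins; internal lines, flavours and the actual history reconstruction are deliberately ignored, so this is an illustration of the structure rather than a faithful CKKW weight.

import math

# Illustrative merging-style weight for a tree-level n-jet matrix element:
# each reconstructed splitting node contributes a coupling ratio
# alpha_s(q)/alpha_s(mu_ME) and a Sudakov factor making the jet exclusive
# above Q_cut. One-loop running and the fixed-coupling quark Sudakov
# (expanded form of Eq. (5.346)) are toy assumptions.

CF, NF = 4.0 / 3.0, 5.0
B0 = (33.0 - 2.0 * NF) / (12.0 * math.pi)

def alpha_s(q, alpha_mz=0.118, mz=91.1876):
    return alpha_mz / (1.0 + alpha_mz * B0 * math.log(q**2 / mz**2))

def sudakov_q(q_high, q_low, a_s):
    """Fixed-coupling quark Sudakov, cf. the expanded form of Eq. (5.346)."""
    L = math.log(q_high / q_low)
    return math.exp(-a_s * CF / math.pi * (L * L - 1.5 * L))

def merging_weight(nodal_scales, q_cut, mu_me):
    """Coupling-ratio and Sudakov weight for one clustered history."""
    w = 1.0
    for q in nodal_scales:
        w *= alpha_s(q) / alpha_s(mu_me)      # local coupling reweighting
        w *= sudakov_q(q, q_cut, alpha_s(q))  # make the jet exclusive
    return w

print(merging_weight(nodal_scales=[300.0, 100.0], q_cut=20.0, mu_me=500.0))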
The remaining problem is to determine the adjusted scales for both the αs and the
In the discussion up to now, the values of t have been identified with the corresponding
Q2 , which would usually be the case; it is worthwhile to mention, however, that for
instance for angular ordered showers this is not quite the case. There an ordering in
hardness/transverse momentum Q does not usually also manifest itself in an ordering
in the emission angles. As in the case of next-to-leading order matching, truncated
showering, as defined in [782], must be applied in such circumstances; see below for
a more detailed discussion of its effect. But, indeed, there are some further subtleties,
which, as already hinted at, most notably concern strategies for dealing with unordered
emissions. Such rather pathological cases will not be discussed here.
Nevertheless, with the information encoded in the parton history at hand, the
overall scale µR of a matrix-element configuration is determined as
α_s^{M+m}(µ_R^2) = α_s^{M}(µ_{R,(core)}^2) · \prod_{i ∈ m} α_s(µ_{R,(i)}^2) ,   (5.345)
where M is the power of αs in the hard core process, µ_{R,(core)} is the scale choice associated
to this process, and where the µ_{R,(i)} are the scales of the m QCD splitting nodes. In
a similar way, the evolution scales t^{(i)} and the core scale t^{(core)} define the Sudakov
suppression factors.
11 A fixed-order version of this idea has been provided in the BLACKHAT +SHERPA framework,
dubbed “exclusive sums”. There, NLO matrix elements for V + jets production with increasing jet
multiplicity are added, with the phase space for the real emission correction constrained to inner-jet
radiation only, i.e. such that it does not produce additional jets; see also a short description in [134].
this conflicts with the second potential problem of evolving the jets through the parton
shower, namely the condition to maintain the intrinsic accuracy of the latter.
To see how this works out, consider, as an example, the emission of a parton off
a two-parton configuration in e− e+ → jets. Such a configuration has been identified
with a two-jet event, and thus contributes with a rate given by R2 (Qcut ) as given in
Eq. (5.339). The event therefore already has been weighted with a Sudakov suppression
factor
R_2(Q_{cut}) = ∆_q^2(µ_Q^2, Q_{cut}^2) = \exp\Bigg\{ −2 \int_{Q_{cut}}^{µ_Q} \frac{dq_⊥}{q_⊥}\, \frac{α_s C_F}{π} \Bigg[ \log\frac{µ_Q}{q_⊥} − \frac{3}{4} \Bigg] \Bigg\}
   \xrightarrow{α_s → const.} \exp\Bigg\{ −\frac{α_s C_F}{π} \Bigg[ \log^2\frac{µ_Q}{Q_{cut}} − \frac{3}{2}\log\frac{µ_Q}{Q_{cut}} \Bigg] \Bigg\}   (5.346)
where, for simplicity, effects of the running of the strong coupling are ignored. What
would the two-jet rate be for values QJ ≤ Qcut ? Applying the same logic, the result
should be given by R2 (QJ ). However, if the suppression factor above was combined
with the corresponding Sudakov suppression in a parton shower starting at Qcut
R'_2(Q_J) = \big[ ∆_q(µ_Q^2, Q_{cut}^2) · ∆_q(Q_{cut}^2, Q_J^2) \big]^2
   \xrightarrow{α_s → const.} \exp\Bigg\{ −\frac{α_s C_F}{π} \Bigg[ \log^2\frac{µ_Q}{Q_{cut}} − \frac{3}{2}\log\frac{µ_Q}{Q_{cut}} + \log^2\frac{Q_{cut}}{Q_J} − \frac{3}{2}\log\frac{Q_{cut}}{Q_J} \Bigg] \Bigg\}
   = \exp\Bigg\{ −\frac{α_s C_F}{π} \Bigg[ \log^2\frac{µ_Q}{Q_J} − \frac{3}{2}\log\frac{µ_Q}{Q_J} + 2\log\frac{µ_Q}{Q_{cut}}\log\frac{Q_J}{Q_{cut}} \Bigg] \Bigg\}
   ≠ R_2(Q_J)\big|_{α_s → const.}   (5.347)
results. This simple consideration shows that a naive treatment leads to an unwanted
and unphysical dependence on Qcut in the two-jet rate at QJ . In addition, it is apparent
that a parton shower starting at Qcut only yields very limited radiation just below this
scale — in fact, for a scale q → Qcut the radiation vanishes completely. This of course
would yield a completely unphysical radiation dip just below Qcut . Therefore, simply
starting the parton shower at Qcut is not an option.
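The size of the mismatch can be made concrete with the fixed-coupling expressions of Eqs. (5.346) and (5.347). The short numerical check below, with purely illustrative scales and coupling value, evaluates both forms of the two-jet rate and verifies that their ratio is exactly the exponential of the Q_cut-dependent cross term.

import math

# Numerical illustration of the Q_cut dependence exhibited by
# Eqs. (5.346) and (5.347) for fixed alpha_s; all inputs are illustrative.

ALPHA_S, CF = 0.118, 4.0 / 3.0

def log_delta_q(q_high, q_low):
    """Logarithm of the fixed-coupling quark Sudakov Delta_q(q_high^2, q_low^2)."""
    L = math.log(q_high / q_low)
    return -ALPHA_S * CF / (2.0 * math.pi) * (L * L - 1.5 * L)

def r2(mu_q, q_j):
    return math.exp(2.0 * log_delta_q(mu_q, q_j))            # Eq. (5.346)

def r2_naive(mu_q, q_cut, q_j):
    return math.exp(2.0 * (log_delta_q(mu_q, q_cut)           # Eq. (5.347)
                           + log_delta_q(q_cut, q_j)))

mu_q, q_cut, q_j = 500.0, 30.0, 10.0
print(r2(mu_q, q_j), r2_naive(mu_q, q_cut, q_j))
# Their ratio equals exp(2 alpha_s CF/pi * log(mu_q/q_cut) * log(q_cut/q_j)),
# i.e. precisely the unphysical, Q_cut-dependent cross term of Eq. (5.347).
print(r2_naive(mu_q, q_cut, q_j) / r2(mu_q, q_j),
      math.exp(2.0 * ALPHA_S * CF / math.pi
               * math.log(mu_q / q_cut) * math.log(q_cut / q_j)))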
The solution to this problem presents itself, when analysing the structure of emis-
sions mediated by the parton shower. Starting the parton shower at µQ and vetoing
every emission above Qcut yields an expression that reads, for a single quark leg,
1 + \int_{Q_{cut}}^{µ_Q} \frac{dq_⊥}{q_⊥}\, \frac{α_s C_F}{π}\, Γ_q(µ_Q, q_⊥)
  + \int_{Q_{cut}}^{µ_Q} \frac{dq_⊥}{q_⊥}\, \frac{α_s C_F}{π}\, Γ_q(µ_Q, q_⊥) \int_{Q_{cut}}^{q_⊥} \frac{dr_⊥}{r_⊥}\, \frac{α_s C_F}{π}\, Γ_q(q_⊥, r_⊥) + \ldots
  = 1 + \int_{Q_{cut}}^{µ_Q} \frac{dq_⊥}{q_⊥}\, \frac{α_s C_F}{π}\, Γ_q(µ_Q, q_⊥) + \frac{1}{2}\Bigg[ \int_{Q_{cut}}^{µ_Q} \frac{dq_⊥}{q_⊥}\, \frac{α_s C_F}{π}\, Γ_q(µ_Q, q_⊥) \Bigg]^2 + \ldots
  = \exp\Bigg[ \int_{Q_{cut}}^{µ_Q} \frac{dq_⊥}{q_⊥}\, \frac{α_s C_F}{π}\, Γ_q(µ_Q, q_⊥) \Bigg] = ∆_q^{−1}(µ_Q, Q_{cut}) ,   (5.348)
thus compensating the Sudakov suppression on the matrix element and eliminating
the dependence on Qcut for the two-jet rate at scales QJ ≤ Qcut . This interplay of
Sudakov rejection and vetoed parton showering was at the core of the proof of the
logarithmic accuracy of multi-jet merging in [351]. The original proof has later been
extended to also include initial state radiation, showing that a merging prescription
can be formulated which exactly maintains the logarithmic accuracy of the parton
shower, irrespective of implementation details [632].
In general this reasoning translates into an algorithm, where the parton shower
evolution of each parton starts at the scale where it was first produced, as constructed
from the parton history. This also gives rise to the Sudakov suppression weight on the
matrix element. When running the parton shower, however, all emissions that would
lead to the production of a jet above Qcut are vetoed. The corresponding algorithm
is also known as vetoed parton shower. This compensates, by construction, the
Qcut -dependence in the event inherited from the suppression applied on the matrix
element.
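In pseudo-code the vetoed shower differs from an ordinary shower in essentially one line: trial emissions that would produce an additional jet above Qcut are discarded while the evolution continues downwards from the vetoed scale, which is precisely what builds up the inverse Sudakov factor of Eq. (5.348). The toy kernel, the jet measure and all scales in the sketch below are assumptions for illustration only.

import math, random

# Toy vetoed shower for a single quark leg: emissions are generated with a
# simple Sudakov veto algorithm, and any emission that would give a jet
# above Q_cut is discarded while the evolution continues downwards.
# Kernel, jet measure and all scales are illustrative assumptions.

A_KERNEL = 0.2            # toy kernel K(t) = A/t, as in the sketch above

def jet_measure(t):       # assume the jet measure is simply sqrt(t)
    return math.sqrt(t)

def vetoed_shower(mu_q2, t_c, q_cut, rng):
    """Return the list of accepted emission scales, all below the jet cut."""
    emissions, t = [], mu_q2
    while True:
        t = t * rng.random() ** (1.0 / A_KERNEL)   # next trial scale
        if t < t_c:
            return emissions
        if jet_measure(t) > q_cut:
            continue          # vetoed: would create an extra jet
        emissions.append(t)   # accepted: stays below the jet threshold

rng = random.Random(7)
print(vetoed_shower(mu_q2=100.0**2, t_c=1.0, q_cut=20.0, rng=rng))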
Two comments are in order here. First of all, while this compensation is, in prin-
ciple, accurate at the logarithmic accuracy of the Sudakov form factors employed,
there may be mismatches. This occurs if the analytic Sudakov form factors are not
exactly reflected in the parton shower, which typically is the case. Reasons for this,
of course, range from an ordering in an evolution parameter different from the trans-
verse momentum used in the k⊥ algorithm employed to construct the resummed jet
rates above, through non-logarithmic contributions emerging from finite terms in the z-
integral of the splitting functions, to the exact inclusion of recoil effects inside the
parton shower. These mismatches, despite typically being of sub-leading logarithmic
accuracy, may become numerically important and would then manifest themselves in
observables such as differential jet rates or similar.
In addition, when using parton showers with a sufficiently different ordering such
as, for instance, angular ordering, it quite often happens that the first emissions are
not the hardest ones according to the jet criterion. In such a case, vetoed showering
alone will not be sufficient, and measures must be taken not to upset the logarithmic
and colour structure produced by the parton shower. The solution to this problem
of a mismatch of the parton shower evolution parameter and the hardness ordering
consists of employing what is known as truncated showering [782]. In this formalism,
the parton shower is allowed to emit partons at a transverse momentum below the
relevant cut in hardness but with an evolution parameter above the one related to the
hard emission providing this cut. In this way a radiation pattern is generated that is
ordered in the evolution parameter of the parton shower but unordered in the hardness
" ZµN
2
#
(K) (K)
dσ = dΦN BN ∆N (µ2N , tc ) + dΦ1 KN ∆N (µ2N , tN +1 )Θ(Qcut − QN +1 )
tc
(K)
+ dΦN +1 BN +1 ∆N (µ2N +1 , tN +1 )Θ(QN +1 − Qcut )
(5.349)
Here, QN +1 is the hardness scale related to the emission of the (N + 1)th particle. As
advertised before, this scale defines two different regimes, namely the parton shower
region QN +1 < Qcut and the matrix-element region QN +1 > Qcut . While in the
pure parton shower expression, Eq. (5.304), no such differentiation is made, here, in
contrast, the description of the emission of the (N + 1)th particle is sensitive
to this distinction.
This of course has some consequences. Taking a closer look at the square bracket
in the first line, it becomes apparent that it does not integrate to unity any more, due
to the phase-space constraint — encoded through Θ(Qcut − QN +1 ) — present in the
second term, the emission term. The missing hard emissions are of course supplied
through the second line, explicit through the complementary constraint Θ(QN +1 −
Qcut ). There are two slight mismatches which prevent this term from supplying exactly
the pieces that are missing for the first line to integrate to unity. First of all, there is a mismatch
in the form of the emission term: the exact matrix element for the (N + 1)-particle
state is different from the exact matrix element for the N -particle state convoluted
with the parton shower kernel. This does not come as a surprise, as this was the very
reason multijet merging has been introduced in the first place.
On top of this, the phase space available for this additional emission, the (N + 1)th
particle, differs in both lines. In the first line the upper limit on the available phase
space is given by the relevant parton shower starting or resummation scale defined
through the N particle kinematics, µN , while in the second line it is the potentially
different (N + 1)-particle kinematics, which defines the corresponding scale µN +1 .
While in processes with a fixed hardest scale, such as electron–positron annihilations
to jets, it is safe to assume that µN and µN+1 are typically very similar or even identical,
this is not true for processes at hadron colliders such as the production of lepton pairs
(the Drell-Yan process) in association with jets.
There, the emission of additional jets may open up phase-space regions associ-
ated with larger scales than are present without these additional jets — in the example
of Drell–Yan-type processes this would correspond to jet emissions taking place at
transverse momentum scales above the invariant mass of the lepton pair. Taken to-
gether, this means that the combined contributions from the hard and soft
first emissions will not exactly combine with the no-emission term to yield unity. This
has, in a somewhat sloppy use of language, been coined “unitarity violation”.
It consequently leads to a variation of the total cross-section related to the inclusive
sample produced with respect to the Born result. Postponing details to later parts
of this section, it should be noted that this effect, while present in some of the more
widely used multijet merging implementations, has been taken care of by the UMEPS
algorithm [733, 806], which conserves inclusive cross-sections by suitably reshuffling
different contributions.
Taking into account first emissions only, but combining many Born matrix elements
up to a maximal number Nmax of external legs promotes the simple expression of
Eq. (5.349) to
" ZtN #)
(K) (K)
× ∆N (tN , tc ) + dΦ1 KN ∆N (tN , tN +1 )Θ(Qcut − QN +1 )
tc
| {z } | {z }
no emission next emission no jet & below last ME emission
"N #"N #
Y
max max −1
Y (K)
+ dΦNmax BNmax Θ(Qj+1 − Qcut ) ∆j (tj , tj+1 )
j=N j=N
" tN
Zmax
(K) (K)
× ∆Nmax (tNmax , tc ) + dΦ1 KNmax ∆Nmax (tNmax , tNmax +1 )
tc
#
· Θ(QNmax − QNmax +1 ) .
(5.350)
Note that in the contribution from the Nmax configuration, the phase space filled by
the parton shower is not constrained by the jet cut Qcut but by the jet measure of the
last emission filled by the matrix element. This allows the parton shower to account
— within its limitations — for even higher jet multiplicities.
At this point it should of course be stressed that further emissions are based on
splitting kernels supplemented with the phase-space veto:
K_N(Φ_1) \;\xrightarrow{\text{MEPS}}\; K_N^{<Q}(Φ_1) = K_N(Φ_1)\, Θ(Q − Q_{N+1}) .   (5.351)
Taking this into account and applying it to the subsequent emissions encoded in the
parton shower evolution operator, cf. Eq. (5.310), E_N^{(K)} becomes
E_N^{(K,<Q)}(µ_Q^2, t_c) = ∆_N^{(K)}(µ_Q^2, t_c) + \int_{t_c}^{µ_Q^2} dΦ_1 \Big[ K_N^{<Q}(Φ_1)\, ∆_N^{(K)}(µ_Q^2, t(Φ_1)) \Big] ⊗ E_{N+1}^{(K,<Q)}(t(Φ_1), t_c) .   (5.352)
Inserting this into the merging equation Eq. (5.350) above results in
dσ = \sum_{n=N}^{N_{max}−1} dΦ_n\, B_n\, Θ(Q_n − Q_{cut})\, E_n^{(K,<Q_{cut})}(µ_n^2, t_c)
   + dΦ_{N_{max}}\, B_{N_{max}}\, Θ(Q_{N_{max}} − Q_{cut})\, E_{N_{max}}^{(K,<Q_{N_{max}})}(µ_{N_{max}}^2, t_c)   (5.353)
This improved description has been implemented for a variety of parton showers,
most of which use an ordering parameter related to the transverse momentum of
the emissions and thus do not exhibit any real mismatch of evolution and hardness
parameters. For a more in-depth discussion of the various realizations the reader is
referred to the literature [632, 732]. Multijet merging with an angular-ordered parton
shower in the framework of HERWIG++ has been explored in [612], where the impact
of not employing truncated showering in the merging has been analysed, too.
12 In such a treatment, the full capture of truncated showering effects is not guaranteed, and as
a consequence some residual sub-leading logarithmic terms may be left out [632], which would be
present in the original parton shower algorithm.
one-to-one match is not possible, either due to extra, unwanted jets being produced in
the parton shower or due to “losing” jets in the parton shower, the event is rejected.
This means that the Sudakov rejection factors that are applied locally, either through
analytic Sudakov form factors or by using the shower, as explained above, are applied
inclusively over the full parton configuration. As a by-product of this treatment, the
MLM approach penalizes “losing” jets which is not the case in the original merging
prescriptions. This algorithm has originally been implemented in ALPGEN [743, 744]; a
variant of it using the Durham k⊥ -algorithm has later been provided in the MADGRAPH
framework [146].
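A minimal sketch of the jet-parton matching step at the heart of such a prescription could look as follows: after showering, the final state is clustered into jets, every matrix-element parton must be associated with a distinct jet within some distance, and the event is rejected otherwise, with extra jets tolerated only for the highest multiplicity. The ΔR-based association, the cone size and the simple (η, φ, pT) representation of jets and partons are placeholders rather than the choices of any specific implementation.

import math

# Schematic MLM-style jet-parton matching. 'jets' would come from a jet
# algorithm run on the showered event; here they are (eta, phi, pt)
# tuples, and the DeltaR association, the matching radius and the
# treatment of the highest multiplicity are illustrative placeholders.

def delta_r(a, b):
    deta = a[0] - b[0]
    dphi = abs(a[1] - b[1])
    dphi = min(dphi, 2.0 * math.pi - dphi)
    return math.hypot(deta, dphi)

def mlm_matched(me_partons, jets, r_match=1.0, highest_multiplicity=False):
    """Every ME parton must match a distinct jet; extra jets are only
    allowed for the highest-multiplicity sample."""
    unmatched_jets = list(jets)
    for parton in me_partons:
        candidates = [j for j in unmatched_jets if delta_r(parton, j) < r_match]
        if not candidates:
            return False                      # a parton "lost" its jet
        unmatched_jets.remove(min(candidates, key=lambda j: delta_r(parton, j)))
    return highest_multiplicity or not unmatched_jets  # no unwanted extra jets

partons = [(0.1, 0.2, 40.0), (-1.0, 2.5, 30.0)]
jets = [(0.15, 0.25, 42.0), (-0.9, 2.6, 28.0)]
print(mlm_matched(partons, jets))  # True: one-to-one association found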
Despite the subtle differences between the approaches, by and large good
agreement between the predictions of the two methods is observed. Respective
results have been reported for the case of W +jets production at the TEVATRON and
the LHC for example in [147].
The extension of the merging algorithm to also include photons is fairly straightfor-
ward: treating the emission of photons “democratically”, i.e. on the same footing as
QCD emissions, allows them to be embedded into a multijet merging such that both the
number of QCD particles and the number of photons will vary. Evaluating matrix elements
at tree level with NQCD QCD particles and Nγ photons is not a problem and can be
dealt with using standard technology. There is also no difficulty in supplementing the jet
definition encoded in QJ and a corresponding cut Qcut with an isolation criterion Qγ
and a corresponding cut Qiso for the emission of photons. From the parton shower side,
things are similarly trivial and basically amount to adding q → qγ splitting
kernels, which typically can be obtained from the q → qg ones by suitably replacing
coupling factors. In principle it is also possible — and probably even desirable — to
also include γ → f f¯ splittings and to have different infrared cut-offs for the QCD and
the QED part of the parton shower, in other words to supplement tc with a tc,(QED) .
The latter typically would be chosen on scales of the order of the π 0 mass or similar,
which is of course possible due to the absence of a Landau pole in QED in the soft
regime.
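The coupling replacement mentioned above can be made explicit in a few lines: starting from a q → qg kernel written as CF αs/(2π) times the familiar leading-order z-dependence (1 + z²)/(1 − z), the q → qγ kernel follows by substituting the colour factor and strong coupling with the quark's squared electric charge and αem. The factorised way of writing the kernels and the function names below are illustrative conventions only.

import math

# Illustration of obtaining a q -> q gamma splitting kernel from the
# q -> q g one by replacing C_F alpha_s with e_q^2 alpha_em. The
# z-dependence (1+z^2)/(1-z) is the familiar leading-order form; this
# factorised notation is a convenience, not a specific shower's code.

CF = 4.0 / 3.0

def p_qq(z):
    return (1.0 + z * z) / (1.0 - z)

def kernel_q_to_qg(z, alpha_s):
    return CF * alpha_s / (2.0 * math.pi) * p_qq(z)

def kernel_q_to_qgamma(z, alpha_em, e_q):
    # same kinematic structure, electromagnetic coupling and charge
    return e_q**2 * alpha_em / (2.0 * math.pi) * p_qq(z)

print(kernel_q_to_qg(0.5, alpha_s=0.118),
      kernel_q_to_qgamma(0.5, alpha_em=1.0 / 137.0, e_q=2.0 / 3.0))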
Of course, the same logic could also be applied to extend this treatment to, e.g.,
the emission of the weak vector bosons, the W ± and Z bosons. There is one caveat,
though, related to the fact that the coupling of, say, W bosons to fermions is chiral
and therefore highly sensitive to the spin of the fermions and, consequently, introduces
non-trivial spin correlations in the parton shower. Including them in a systematic
way would imply that the easily implemented structure of more or less independent
emissions in a simple probabilistic fashion would have to be augmented with spin-
correlation matrices of the kind discussed in [825] or a treatment similar to the one
discussed in [779]. However, for the emission of only one boson, matrix-element
corrections could be applied which would by and large capture such effects. This
has been studied in more detail in the framework of the PYTHIA event generator [396].
The idea underlying the “unitarization” of multijet merging is to conserve the fixed-
order cross-section of the inclusive process with lowest multiplicity — in the MEPS
master equation, Eq. (5.353), this would be the cross-section of the N -particle process,
\int dσ^{(LO)} = \int dΦ_N\, B_N \;\overset{!}{=}\; \int dσ^{(UMEPS)} .   (5.354)
" ZµN
2
#
(K) (K)
dΦN BN ∆N (µ2N , tc ) + dΦ1 KN ∆N (µ2N , tN +1 )Θ(Qcut − QN +1 )
tc
≥Qcut ,(K)
= dΦN BN ∆N (µ2N , tc )
" ZµN
2
#
<Q ,(K) 2 <Qcut <Qcut ,(K) 2
× ∆N cut (µN , tc ) + dΦ1 KN ∆N (µN , t) (5.356)
tc
Again, because of the identical kernels in the individual terms and the identical phase-
space constraints, the square bracket integrates to unity. Using the probabilistic nature
of the parton shower, the Sudakov form factor, being interpreted as the no-emission
probability, can be recast as unity minus the emission probability. In other words, in
∆_N^{≥Q_{cut},(K)}(µ_N^2, t_c) = 1 − \int_{t_c}^{µ_N^2} dΦ_1\, K^{≥Q_{cut}}\, ∆_N^{≥Q_{cut},(K)}(µ_N^2, t)   (5.357)
the integrated parton shower emission rate above Qcut is given by the second term.
Merging now the first emission through the corresponding Born matrix element yields
Similar replacements are applied in Eq. (5.350) to all brackets encoding the parton
shower evolution, with the exception of the Nmax -term. Consequently, all N -parton
inclusive cross-sections are governed by the input cross-sections associated with the
BN . Some comments are in order here. First of all, it is possible that a parton shower
cannot produce all particles or phase-space configurations entering these cross-sections.
In such a case the corresponding fixed-order cross-sections must be regarded as genuine
corrections to the parton shower and therefore they will be just added. This is also
true for the impact of the phase-space constraints given by the Θ functions. It is
not always guaranteed that integrating over the one-particle phase space will yield a
lower-multiplicity state that passes the Qcut criterion — sometimes one loses more
than one jet. Handling such contributions is ambiguous, since they could be regarded
as genuine fixed-order corrections as discussed in [806] or as contributions to even
lower multiplicity states as in [733].
While this is, by and large, not too hard to implement, there is another some-
what more nagging problem. The jet-exclusive cross-section for the lowest-multiplicity
final state will be modified by the local K-factor discussed in Sections 5.4.3 and 5.4.4,
while the higher multiplicities will not be corrected and be at Born level only. This
can lead to discontinuities in the radiation pattern, in particular of the first, hardest
emission, and especially in those cases, where the K factors are fairly different from
unity, like, for instance, in the case of Higgs production through gluon fusion. In order
to remedy this situation, the higher-multiplicity part could be multiplied by an inter-
polating K-factor, kN , capturing the NLO correction to the lowest multiplicity N . For
example, for the MENLOPS method, where the NLO matching part is realized through
the MC@NLO algorithm, this interpolating K-factor may be given by something like
k_N(Φ_{N+1}) = \frac{\tilde{B}_N}{B_N} \Bigg( 1 − \frac{H_N}{B_{N+1}} \Bigg) + \frac{H_N}{B_{N+1}} \;\longrightarrow\; \begin{cases} \tilde{B}_N/B_N & \text{for soft emissions} \\ 1 & \text{for hard emissions.} \end{cases}   (5.360)
Such an interpolation works fairly well, since in the soft limit the real correction term
and its subtraction become very similar and therefore the hard remainder is given by
H_N = R_N − S_N \;\xrightarrow{\text{soft}}\; 0 ,   (5.361)
while in the hard limit the subtraction term is not very prominent and therefore
H_N = R_N − S_N \;\xrightarrow{\text{hard}}\; R_N = B_{N+1} .   (5.362)
+ dΦ_{N+1}\, k_N(Φ_{N+1})\, Θ(Q_{N+1} − Q_{cut})\, B_{N+1}\, ∆_N^{(K)}(µ_{N+1}^2, t_{N+1})
     × \Bigg[ ∆_{N+1}^{(K)}(t_{N+1}, t_{cut}) + \int_{t_{cut}}^{t_{N+1}} dΦ_1\, K_{N+1}\, ∆_{N+1}^{(K)}(t_{N+1}, t_{N+2})\, Θ(Q_{cut} − Q_{N+2}) \Bigg]
   + dΦ_{N+2}\, k_N(Φ_{N+2})\, Θ(Q_{N+2} − Q_{cut})\, B_{N+2}\, ∆_{N+1}^{(K)}(µ_{N+2}^2, t_{N+2})\, ∆_N^{(K)}(t_{N+2}, t_{N+1})
     × \Big[ ∆_{N+1}^{(K)}(t_{N+2}, t_{cut}) + \ldots \Big] .   (5.363)
Here, the first three lines correspond to an MC@NLO simulation, where the phase
space for the (N + 1)th particle is constrained such that it does not yield another jet
— this is what the Θ–functions Θ(Qcut − QN +1 ) are there for. In order to make the jet
counting more explicit, another Θ-function, Θ(QN −Qcut ) has been added to highlight
that all other QCD particles in the N -particle Born-level final state should be jets.
Similarly, the fourth and fifth lines encapsulate the first additional tree-level matrix
element, merged into the sample, and supplemented with the interpolating K factor
kN . Again, no further jet emission is allowed. This could now continue, as indicated by
the sixth line, to include higher and higher jet multiplicities. In any case, any further
emissions, as discussed already for the multijet merging at leading order, cannot result
in additional unwanted jets.
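The interpolating K-factor of Eq. (5.360) is straightforward to evaluate once the building blocks of the underlying MC@NLO are available; the sketch below simply codes up the formula and checks its soft and hard limits with invented numbers.

# Interpolating K-factor of Eq. (5.360), evaluated for illustrative,
# made-up values of the MC@NLO building blocks: the local NLO weight
# B_tilde_N, the Born term B_N, the hard remainder H_N and the
# higher-multiplicity Born term B_{N+1}.

def k_factor(b_tilde_n, b_n, h_n, b_np1):
    return b_tilde_n / b_n * (1.0 - h_n / b_np1) + h_n / b_np1

# Soft emission: H_N -> 0, so k_N -> B_tilde_N / B_N (the local K-factor).
print(k_factor(b_tilde_n=1.3, b_n=1.0, h_n=0.0, b_np1=0.8))     # 1.3
# Hard emission: H_N -> B_{N+1}, so k_N -> 1 and the tree-level matrix
# element is left unmodified.
print(k_factor(b_tilde_n=1.3, b_n=1.0, h_n=0.8, b_np1=0.8))     # 1.0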
The idea underlying multijet merging with next-to-leading order matrix elements,
MEPS@NLO, is identical to multijet merging at leading order: towers of matrix elements
with increasing jet multiplicities are combined into one inclusive sample in such a way
that no double-counting occurs. Of course, as before, it is important to maintain the
accuracy of both the matrix elements, i.e. the cross-sections related to the processes
as well as the fixed-order accuracy of the first emission, and the parton shower, i.e. the
resummation of leading and the next-to-leading logarithms encoded in the showering.
This was first achieved in [558, 631], where the first missing terms have been ex-
plicitly shown to be of order α_s^2 L^3/N_c^2 — the colour-suppressed sub-leading
logarithms beyond shower accuracy. Alternative methods and implementations have
been presented in [535], and in [737, 806]. The latter two are closely related to one
another, and both guarantee the proper “unitarization” of the emissions in line with
the leading-order treatment by the same authors [733, 806].
The first three lines would correspond to an MC@NLO simulation, where, in complete
agreement with Eq. (5.363), the phase space for the (N + 1)th particle is constrained
such that it does not yield another jet, realized by Θ(Qcut −QN +1 ) in the two emission
terms, soft and hard. The next three lines would then stand for the next MC@NLO
simulation, for a process with one more particle in the final state.
Naively, it seems as if there is no double counting present here. However, this is not
entirely true. To see this, consider the emission term in the first, soft part of the lowest
order MC@NLO simulation, the term K_N ∆_N^{(K)}(µ_N^2, t_{N+1}) in the first square bracket
and the corresponding hard emission part in the line below. Closer inspection reveals
that at first order in αs , there is some unwanted contribution to the phase space of
the next MC@NLO simulation. It stems from the expansion of the Sudakov form factor
accounting for no emissions harder than tN +1 in the combined soft and hard radiation
pattern, ranging over the full emission phase space from tN +1 up to µ2N :
\int_{t_c}^{µ_N^2} \Big[ dΦ_N\, dΦ_1\, \tilde{B}_N\, K_N + dΦ_{N+1}\, H_N \Big]\, Θ(Q_{cut} − Q_{N+1})\, ∆_N^{(K)}(µ_N^2, t_{N+1})
  = \int_{t_c}^{µ_N^2} dΦ_N\, dΦ_1 \Big[ (B_N ⊗ K_N + H_N) + O(α_s^2) \Big]\, Θ(Q_{cut} − Q_{N+1})\, ∆_N^{(K)}(µ_N^2, t_{N+1})
  = dΦ_{N+1}\, B_{N+1}\, Θ(Q_{cut} − Q_{N+1}) \Bigg[ 1 − \int_{t_{N+1}}^{µ_N^2} dΦ_1\, K_N + O(α_s^2) \Bigg] .   (5.365)
The second part of the bracket in the last line therefore interferes with the emissions
of the MC@NLO simulation of the incremented multiplicity. This double-counting of
emissions must obviously be avoided. There are various ways of achieving this; the
simplest one is to add the second term in the square bracket above to the (N + 1)-
MC@NLO simulation:
+ dΦ_{N+1}\, Θ(Q_N − Q_{cut})\, Θ(Q_{cut} − Q_{N+1})\, H_N\, ∆_N^{(K)}(µ_N^2, t_{N+1})
+ dΦ_{N+1}\, Θ(Q_{N+1} − Q_{cut})\, \tilde{B}_{N+1}\, ∆_N^{(K)}(µ_{N+1}^2, t_{N+1}) \Bigg( 1 + \frac{B_{N+1}}{\tilde{B}_{N+1}} \int_{t_{N+1}}^{µ_N^2} dΦ_1\, K_N \Bigg)
     × \Bigg[ ∆_{N+1}^{(K)}(t_{N+1}, t_c) + \int_{t_c}^{t_{N+1}} dΦ_1\, K_{N+1}\, ∆_{N+1}^{(K)}(t_{N+1}, t_{N+2}) \Bigg]
As in the MENLOPS method described in Section 5.5.2, the first three lines merely
describe an MC@NLO simulation of the lowest multiplicity sample with an additional
jet veto applied to all emissions. In a similar way, the next lines describe an MC@NLO
simulation for the next higher multiplicity, modified by the term in round brackets in
the fourth line compensating for a double-counting of terms stemming from the lowest
multiplicity MC@NLO.
As already worked out, this term compensates contributions stemming from the
Sudakov form factor encoding the veto of hard emissions from the lower multiplicity,
the two first lines. Such a compensation must be introduced into the formalism in
order to maintain the NLO accuracy of the overall procedure. The fact that this
double-counting of NLO terms is introduced by the parton shower (or analytic Sudakov
form factors encoding the jet veto) renders these contributions potentially hard to
understand at first. Realizing, however, that Sudakov form factors encode some of
the NLO corrections in a leading-logarithmic approximation, this should not come as
a surprise. In fact, the treatment here actually is fully analogous to the one of the
Sudakov form factor in its interaction with higher-order matrix elements, presented
later in Section 5.6.1, and made explicit in Eq. (5.372) there.
Naively, such terms also seem to be hard to implement when generating the Su-
dakov rejection directly from the parton shower rather than analytically. This turns
out not to be entirely true. In close analogy to the discussion of vetoed emissions, cf.
Eq. (5.348), it can be seen that these terms correspond to a vetoed emission off the
N -particle state only. Instead of vetoing an event when an unwanted emission takes
place, it is merely the hard emission itself that should be vetoed.
Of course, the very same logic could be iterated to include even higher multiplicities.
In addition, further leading-order matrix elements could be added following the same
reasoning with an interpolating K-factor as in the MENLOPS algorithm.
This scale is also used for the strong coupling related to loop corrections or to the
soft emissions below the jet-threshold encoded for instance in the terms HN . They are
effectively ignored in the construction of the parton shower history, which relies on
tree-level configurations only.
When performing a scale variation of the renormalization scale in order to assess
the corresponding uncertainty, one must therefore modify the Born matrix element by
a factor such that
α_s^n(µ_R^2) \;\longrightarrow\; α_s^n(\tilde{µ}_R^2) \Bigg( 1 − \frac{α_s(\tilde{µ}_R^2)}{2π}\, β_0 \sum_{i=1}^{n} \log\frac{µ_i^2}{\tilde{µ}_R^2} \Bigg) ,   (5.368)
while all higher order terms are just evaluated with the coupling taken at the new
scale µ̃2R . This is slightly different from the scale setting prescription in MINLO, where
the value of αs chosen for the additional real or virtual correction is fixed to be the
average of all other values of the strong coupling, cf. Eq. (5.375).
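In practice the compensation of Eq. (5.368) is a simple multiplicative reweighting of the event. The sketch below assumes a one-loop running coupling, with b0 = (33 − 2nf)/(12π) playing the role of β0/(2π) in the normalisation of Eq. (5.368), and treats the nodal scales of the history as given; all numerical inputs are illustrative.

import math

# Reweighting factor for a renormalisation-scale variation in a merged
# sample, following the structure of Eq. (5.368): the n powers of the
# coupling are re-evaluated at the new scale and the first-order term
# compensates for the nodal-scale choice. One-loop running and the
# numerical inputs are illustrative assumptions.

NF = 5.0
B0 = (33.0 - 2.0 * NF) / (12.0 * math.pi)   # = beta_0/(2 pi) of Eq. (5.368)

def alpha_s(mu2, alpha_mz=0.118, mz2=91.1876**2):
    return alpha_mz / (1.0 + alpha_mz * B0 * math.log(mu2 / mz2))

def scale_variation_factor(nodal_mu2s, mu_r2_new):
    """Ratio of the scale-varied to the original coupling factor."""
    a_new = alpha_s(mu_r2_new)
    varied = a_new ** len(nodal_mu2s)
    varied *= 1.0 - a_new * B0 * sum(
        math.log(mu2 / mu_r2_new) for mu2 in nodal_mu2s)
    original = math.prod(alpha_s(mu2) for mu2 in nodal_mu2s)
    return varied / original

print(scale_variation_factor(nodal_mu2s=[40.0**2, 15.0**2],
                             mu_r2_new=(2.0 * 91.1876)**2))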
In a similar way, a new factorization scale can be chosen for the matrix elements,
but must be compensated for by a term of the form
B_N\, \log\frac{\tilde{µ}_F^2}{µ_F^2} \Bigg[ \sum_{c=q,g} \int_{x_a}^{1} \frac{dz}{z}\, P_{ac}(z)\, f_{c/h_a}\!\Big(\frac{x_a}{z}, \tilde{µ}_F^2\Big) + \sum_{d=q,g} \int_{x_b}^{1} \frac{dz}{z}\, P_{bd}(z)\, f_{d/h_b}\!\Big(\frac{x_b}{z}, \tilde{µ}_F^2\Big) \Bigg] .   (5.369)
5.6.1.2 MINLO-1
As already stated, the primary aim of MINLO-1 is to improve the behaviour of sin-
gle fixed–order calculations at NLO accuracy in the strong coupling in the Sudakov
region, without upsetting their formal NLO accuracy. This is an important step to-
wards improved stability of NLO calculations where a heavy system such as, e.g., a
gauge or a Higgs boson is produced in association with light jets. In such cases, the
stability of the calculation suffers with decreasing transverse momenta of the heavy
system, or, correspondingly, of the light jets, due to the emergence of increasingly
large logarithms, which must be resummed to all orders. As already encountered in
previous sections, a convenient way to achieve this stabilization is through inclusion of
now familiar Sudakov form factors. It should thus not be a big surprise that successful
algorithms achieving this aim, such as the MINLO-1 method, are further developing
ideas imported from multijet merging, presented in Section 5.5.3, but usually without
the additional intricacies related to the combination of various NLO calculations for
increasing multiplicities.
There are two ingredients in MINLO-1, already familiar from multijet merging.
First of all, the renormalization and factorization scales are chosen to better capture
the interplay of the various scales in the process, and, secondly, Sudakov weights are
introduced which tame the instabilities emerging in the regime of small transverse
momenta. This proceeds by constructing a parton shower history for the underlying
Born-level configuration, omitting the softest emission in the real-emission contribu-
tion. As before, a strong ordering of the N nodal values in the hardness scale for the
emissions is assumed,
Q_0 ≥ Q_1 ≥ Q_2 ≥ … ≥ Q_{N−1} ≥ Q_N = Q_{cut} ,   (5.370)
where Q0 denotes the hardest scale of the core process. For instance, in the case of
the production of a heavy singlet in association with a number of jets, Q0 typically is
the mass of the singlet. The weight corresponding to the individual phase-space point
is multiplied with an overall suppression factor, given by a product of Sudakov form
factors of all internal lines i of the Born configuration and a product of all Sudakov
form factors of all outgoing partons k of the Born configuration, which have been
produced through a splitting at the scale Qk ,
S = \prod_{i=1}^{N} ∆_i(Q_{i−1}^2, Q_i^2)\; \prod_{k} ∆_k(Q_k^2, Q_{cut}^2) .   (5.371)
Here, the subscripts i and k in ∆i and ∆k denote the flavour of the internal and
external lines and the analytic Sudakov form factors from Eq. (2.183) are employed.
The two external partons emerging at the last branching, i.e. at scale QN , will be
associated with a Sudakov factor of ∆k (Q2cut , Q2cut ) = 1.
In order to compensate for higher-order effects induced by the Sudakov form fac-
tors, the Born-term is modified by a factor which captures the first order in the αs -
expansion of the analytic Sudakov form factors above. For each of the Sudakov form
factors, the corresponding first-order term in its expansion is denoted by ∆(1) . With
this notation, the correction factor is given by
1 − \sum_{i} ∆^{(1)}(Q_{i−1}^2, Q_i^2) − \sum_{k} ∆_k^{(1)}(Q_k^2, Q_{cut}^2)
  = 1 + \sum_{i} \int_{Q_i^2}^{Q_{i−1}^2} \frac{dq_⊥^2}{q_⊥^2}\, \frac{α_s(q_⊥^2)}{2π}\, Γ_i(Q_{i−1}^2, q_⊥^2) + \sum_{k} \int_{Q_{cut}^2}^{Q_k^2} \frac{dq_⊥^2}{q_⊥^2}\, \frac{α_s(q_⊥^2)}{2π}\, Γ_k(Q_k^2, q_⊥^2) ,   (5.372)
where the Γi are the integrated splitting functions introduced in Eq. (2.178), and
∆_k^{(1)}(Q^2, Q_0^2) = − \int_{Q_{cut}^2}^{Q_k^2} \frac{dq_⊥^2}{q_⊥^2}\, \frac{α_s(q_⊥^2)}{2π}\, Γ_k(Q_k^2, q_⊥^2) .   (5.373)
Here, the first sum ranges over all internal lines i, while the second sum includes all
outgoing partons k of the Born configuration. This factor is in complete analogy to the
one in Eq. (5.365), see also Section 5.5.3.
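Put together, the MINLO-1 weight attached to a phase-space point consists of the product of Sudakov form factors of Eq. (5.371) times the first-order subtraction of Eq. (5.372). The sketch below strings these pieces together for a given list of nodal scales, using a single fixed-coupling quark Sudakov in place of the proper flavour-dependent, running-coupling form factors; it is meant purely as an illustration of the structure.

import math

# Schematic MINLO-1 weight for a Born-level configuration with nodal
# scales Q_0 >= Q_1 >= ... >= Q_N = Q_cut, cf. Eqs. (5.370)-(5.372):
# a product of Sudakov form factors for internal and external lines,
# times (1 - sum of their first-order expansions). The fixed-coupling
# quark Sudakov used here is a purely illustrative stand-in.

ALPHA_S, CF = 0.118, 4.0 / 3.0

def log_sudakov(q_high, q_low):
    """log Delta(q_high^2, q_low^2) in the fixed-coupling approximation."""
    L = math.log(q_high / q_low)
    return -ALPHA_S * CF / (2.0 * math.pi) * (L * L - 1.5 * L)

def minlo_weight(nodal_scales, q_cut):
    """nodal_scales: [Q_0, Q_1, ..., Q_N] with Q_N identified with Q_cut."""
    internal = [log_sudakov(qa, qb)
                for qa, qb in zip(nodal_scales[:-1], nodal_scales[1:])]
    external = [log_sudakov(qk, q_cut) for qk in nodal_scales[1:]]
    sudakov = math.exp(sum(internal) + sum(external))
    # first-order subtraction, cf. Eq. (5.372)
    subtraction = 1.0 - sum(internal) - sum(external)
    return sudakov * subtraction

# note: the partons emerging at the last branching contribute Delta = 1
print(minlo_weight(nodal_scales=[500.0, 50.0, 5.0], q_cut=5.0))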
To see how the scale setting works out for the MINLO method, consider again a
Born-level configuration with M powers of the strong coupling related to the core
process and with m QCD emissions with corresponding scales µR,(i) . In tree-level
multijet merging methods, the overall scale µR is implicitly given by
α_s^{M+N}(µ_R^2) = α_s^{M}(µ_{R,(core)}^2) · \prod_{i ∈ N} α_s(µ_{R,(i)}^2) ,   (5.374)
is employed for them; the rationale for this choice is detailed in the publication [611].
Up to now, the MINLO-2 method has been formulated for the production of singlet
systems only, for reasons that will become obvious. The basic idea is to feed
the next-to-leading order expressions in MINLO-1 for the production of such a singlet
system S in association with a parton j into the POWHEG formalism, thereby providing
the link to the parton shower. By allowing the parton j to become as soft or collinear
as the parton shower cut-off tc allows, the full one-parton emission phase space is filled
at NLO accuracy, and the second parton is distributed with LO accuracy. This
is possible by reweighting the Born-level singlet-plus-jet configuration with a Sudakov
form factor at O(α_s^2) accuracy, i.e. including the terms A2 and B2 from QT resummation,
thereby compensating all dangerous logarithms of the low scale. The catch then is to
reweight the emergent event sample to a suitable distribution of the singlet at NNLO
accuracy, to achieve the overall NNLO+PS accuracy.
Taking as an illustrative example for this procedure the production of a Higgs bo-
son in conjunction with a parton, the B̄ term for H + j production can be written in
a similar way as before, including the Born term plus the real and virtual corrections.
Following the reasoning of MINLO-1 above, however, Sudakov form factors are invoked
to account for higher-order corrections which become large for small transverse mo-
menta, and their interplay with the genuine higher-order terms must be accounted for
by subtracting out their respective first order expansion. Taken together, and making
all factors of αs explicit,
\bar{B}(Φ_B) = α_s(m_H^2)\, α_s(Q_⊥^2)\, ∆_g^2(m_H^2, Q_⊥^2) \Bigg\{ B(Φ_B) \Big[ 1 − 2∆^{(1)}(m_H^2, Q_⊥^2) \Big]
             + α_s(Q_⊥^2) \Bigg[ \tilde{V}(Φ_B) + \int dΦ_1\, R(Φ_B × Φ_1) \Bigg] \Bigg\} ,   (5.376)
where Q⊥ is the transverse momentum of the Higgs boson (and therefore, at Born-
level, the extra parton) with Q2⊥ ≥ tc .
Integrating Eq. (5.376) over the full phase space of the Born-level H + j configura-
tion will therefore, by construction, yield the cross-section for H + j at NLO accuracy,
including all terms that are singular in the limit Q⊥ → 0, giving rise to logarithms
of Q⊥ . All singular terms, formally up to O(α_s^2) with respect to the inclusive Higgs
boson production, can of course also be obtained through the QT -resummation for-
malism at NNLL accuracy, which recovers all these logarithms. These are all terms
of the form α_s L^2 and α_s L as well as α_s^2 L^4, α_s^2 L^3, α_s^2 L^2, and α_s^2 L, where the short-
hand notation L = log(Q^2/Q_⊥^2) has been used. In QT resummation the resummation
scale Q typically is chosen to be of the order of the singlet mass, in this case therefore
Q = O(mH ).
Therefore, including also the A2 and B2 terms into the reweighting Sudakov form
factors above, and fixing the scale of αs in the real and virtual contributions to Q2⊥
guarantees that the result not only is NLO-accurate for H + j, but also for inclusive
H production.
There is one minor problem remaining, though, namely that the Sudakov form
factors and coefficients given in Section 5.2.1, Eq. (5.65), Eq. (5.66), and Eq. (5.72),
are for QT resummation in the conjugate b⊥ -space. Here, however, the Sudakov form
factors are meant to be directly applied in transverse momentum space, and therefore
a translation between both must be applied. As noted in [609], this actually has been
worked out in [505] and essentially leads to adding a term
∆B_{2,\, b_⊥ → q_⊥}^{(q,g)} = 4 ζ(3) \big( A_1^{(q,g)} \big)^2 ,   (5.377)
with A_1^{(q,g)} the corresponding soft coefficient, while the A2 remain unaltered. The B2
coefficients in the case of resummation directly in QT -space therefore are given by
B_{2,Q_T}^{(q,g)} = B_{2,b_⊥}^{(q,g)} + ∆B_2^{(q,g)} .   (5.378)
These terms are used in the analytic NNLL Sudakov form factors multiplying the
matrix element in the MINLO method, such that
∆_k^{(NNLL)}(Q^2, Q_0^2) = \exp\Bigg\{ − \int_{Q_0^2}^{Q^2} \frac{dq_⊥^2}{q_⊥^2} \Bigg[ A(q_⊥^2)\, \log\frac{Q^2}{q_⊥^2} + B(q_⊥^2) \Bigg] \Bigg\} ,   (5.379)
where
A(q_⊥^2) = A_1\, \frac{α_s(q_⊥^2)}{2π} + A_2 \Bigg( \frac{α_s(q_⊥^2)}{2π} \Bigg)^2
B(q_⊥^2) = B_1\, \frac{α_s(q_⊥^2)}{2π} + B_{2,Q_T} \Bigg( \frac{α_s(q_⊥^2)}{2π} \Bigg)^2 ,   (5.380)
with the coefficients A from Eq. (5.65), B1 from Eq. (5.67), and the term B2 from
Eq. (5.378), with the B2 term in b⊥ -space given in Eq. (5.72). Note that the latter also
includes finite loop-correction terms, making it process-dependent.
Finally, with both the inclusive process — in the example, inclusive Higgs produc-
tion — and the process with an additional jet — H+ jet production — NLO correct,
a seamless merging of the two multiplicities has been achieved. This hap-
pens, and it is important to stress this, without any jet cut as in the usual multijet
merging. Instead, by carefully adjusting Sudakov weights and scales in αs it was pos-
sible to modify the H+ jet part in such a way that it automatically also accounts for
the inclusive part.
In [610] it was then further realized that this could be turned into a full parton-
level simulation, but accurate not only at NLO for both H and H+ jet production,
but in fact accurate at NNLO in the inclusive cross-section. In order to achieve this,
the rapidity distribution of the Higgs boson in the seamless merged sample merely
has to be reweighted to the NNLO distribution. The same algorithm was also applied
in a follow-up study, concerning Drell–Yan production [652]. There, the reweighting
proceeds in three dimensions, thereby capturing the dynamics of the lepton system.
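The reweighting step itself is conceptually simple: each event of the merged sample is reweighted by the ratio of the NNLO to the merged differential distribution, evaluated at its boson rapidity (or, for the Drell-Yan case, in the corresponding three-dimensional bin). The bin edges and the two histograms in the sketch below are placeholders; in a real application they would come from an NNLO calculation and from the merged sample itself.

import bisect

# Sketch of the NNLOPS-style reweighting in the boson rapidity: every
# event of the NLO-merged sample is multiplied by the ratio of the NNLO
# to the merged differential cross-section in its rapidity bin.
# Bin edges and the two histograms are invented placeholders.

BIN_EDGES = [-4.0, -2.0, -1.0, 0.0, 1.0, 2.0, 4.0]
DSIGMA_NNLO   = [0.8, 2.1, 3.0, 3.0, 2.1, 0.8]   # pb per bin (made up)
DSIGMA_MERGED = [0.7, 1.9, 2.8, 2.8, 1.9, 0.7]   # pb per bin (made up)

def nnlo_weight(y_boson):
    """Ratio of NNLO to merged rapidity distributions in the event's bin."""
    i = bisect.bisect_right(BIN_EDGES, y_boson) - 1
    i = min(max(i, 0), len(DSIGMA_NNLO) - 1)
    return DSIGMA_NNLO[i] / DSIGMA_MERGED[i]

event_weight = 1.0
event_weight *= nnlo_weight(y_boson=0.3)   # reweight one event
print(event_weight)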
Here, B̄ is the differential NLO cross-section for Born-level kinematics from Eq. (5.326),
\bar{B}_N(Φ_B) = B(Φ_B) + \tilde{V}_N(Φ_B) + \int dΦ_1 \Big[ R_N(Φ_B ⊗ Φ_1) − S_N(Φ_B ⊗ Φ_1) \Big] .   (5.381)
The argument of the observable indicates the particle composition and phase space
from which it is evaluated. In view of this, the first line contains the observable after
no emission has taken place — the first term is the fixed-order result for a Born-level
configuration where all fixed-order parton emissions above tc in the real correction have
been subtracted, such that the one potentially remaining emission is below the parton
shower cut-off. The second term in the first line refers to those events which, starting
from a real correction with a parton above the parton shower cut-off, were vetoed
due to an unwanted emission harder than t1 . In this case the parton configuration
is projected onto the underlying Born configuration. The second line refers to those
events which, starting from a real-emission configuration, experienced a parton shower
evolution without unwanted emissions. This is signalled by the parton shower evolution
operator E_n^{(K)}(t, tc ; O), defined in full analogy to Eq. (5.310),
While the expression in Eq. (5.381) captures all relevant terms to maintain fixed-
order accuracy, it does not account for the dependence of the real-emission contribu-
tion RN on the renormalization and factorization scales. This can be trivially remedied,
however, by suitable weights; see also the original literature.
The same logic has been extended to match the parton shower to NNLO calcu-
lations for the production of colour singlets, performed with qT subtraction. There,
contributions with exactly zero, exactly one, and more than one emissions above tc
have been identified, leading to a somewhat lengthy expression, for which we refer to
the literature.
6
Parton Distribution Functions
The factorization formula
σ^{h_1 h_2 → n+X} = \sum_{i,j} \int_0^1 dx_a \int_0^1 dx_b\, f_{i/h_1}(x_a, µ_F)\, f_{j/h_2}(x_b, µ_F)\, \hat{σ}_{ij → n+X}(x_a, x_b, µ_F, µ_R)
was introduced to describe the calculation of the production of n-parton final states
in processes of the type h1 h2 → n + X with incoming hadrons h1 and h2 – usually
protons. The calculation of such a hard (parton-level) cross-section within perturbative
QCD relies on also using partons in the initial state and thus requires the knowledge
of the distribution of partons i in the hadrons h. This is most often achieved by
using a scheme called collinear factorization in which this distribution depends on the
longitudinal momentum fraction x of the partons with respect to the hadron, taken
at the factorization scale µF . The resulting distributions fi/h (x, µF ) are known
as parton distribution functions (PDFs), and parameterize the transition of incident
hadrons to incident partons, thereby absorbing all emissions at scales below µF , cf.
Chapter 2. In turn, the PDFs obviously depend upon this measure of the hardness of
the parton–level interaction, which will be of the same order as the renormalization
scale µR . As a consequence of the factorization of possible parton emissions into a
soft and collinear part, below µF , and a hard part, above µF , the latter is explicitly
described by the matrix element. Factorization theorems, proven for deep–inelastic
lepton–proton scattering and for Drell-Yan production of gauge bosons in hadron
collisions, assert that the PDFs in collinear factorization are process–independent.
There are however terms of higher dimension – so called higher–twist contributions –
that are process dependent. These terms are ignored throughout this book, since they
are suppressed by factors of the type m2p /µ2F . Note that in this chapter both µR and
µF will often be represented just as Q.
Pictorially, as was shown in Fig. 2.5 (left), more of the quantum fluctuations can be
resolved as the hardness scale, corresponding to the inverse of the time scale, increases.
Low–scale processes can only resolve longer time intervals, and with increasing scale,
smaller time intervals are being probed, and more of the quantum fluctuations inside
the hadron are resolved, which transfer momentum from high–x partons to low–x
partons. Thus, at higher µF one expects to find more of the momentum of the proton
given to gluons and sea quarks with relatively low values of the momentum fraction
x, as the quantum fluctuations creating these partons can be resolved. Conversely, the
population of partons containing a large fraction of the parent hadron’s momentum
will decrease.
The PDFs describing this dynamics are treated as being universal, i.e. they can
be determined by one set of processes and used to predict the cross-section for any
process, using the factorized form shown above. These parton distribution functions
currently cannot be calculated perturbatively; it may, however, become
possible in the future to calculate them non-perturbatively, using lattice gauge
theory [310]. On the other hand, the evolution of PDFs with Q2 can be calculated
with a perturbative treatment using the DGLAP evolution equations, as sketched
in Section 2.1.3 and discussed in more detail in Section 6.1.
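To give a flavour of what such an evolution entails numerically, the sketch below performs a single explicit Euler step in log Q² of the leading-order non-singlet DGLAP equation for a toy quark distribution, with the plus-prescription of the regularised Pqq kernel rewritten as a regular integral plus local terms. The input shape, the value of αs, the integration grid and the step size are arbitrary choices for illustration only.

import math

# One explicit Euler step of the leading-order non-singlet DGLAP
# equation, d q(x,Q^2)/d ln Q^2 = alpha_s/(2 pi) * (P_qq (x) q)(x), with
# P_qq(z) = C_F [ (1+z^2)/(1-z) ]_+ + (3/2) C_F delta(1-z). The initial
# shape, alpha_s value, grid and step size are toy choices.

CF, ALPHA_S = 4.0 / 3.0, 0.2

def q0(x):
    # toy valence-like input distribution at the starting scale
    return x ** 0.5 * (1.0 - x) ** 3

def dglap_derivative(q, x, n=2000):
    """d q(x)/d ln Q^2 with the plus-prescription rewritten as a regular
    integral plus local terms."""
    total = 0.0
    for i in range(n):                       # midpoint rule in z on (x, 1)
        z = x + (1.0 - x) * (i + 0.5) / n
        total += ((1.0 + z * z) * q(x / z) / z - 2.0 * q(x)) / (1.0 - z)
    integral = total * (1.0 - x) / n
    local = (2.0 * math.log(1.0 - x) + 1.5) * q(x)
    return ALPHA_S * CF / (2.0 * math.pi) * (integral + local)

def evolve_one_step(x, dlnQ2=0.5):
    return q0(x) + dlnQ2 * dglap_derivative(q0, x)

for x in (0.01, 0.1, 0.3, 0.7):
    print(x, q0(x), evolve_one_step(x))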
PDFs have been determined by global fits to a plethora of different
data sets. Modern global PDF fits use of the order of 3000 data points from a number
of processes from fixed target experiments, the TEVATRON, HERA, and the LHC. In
particular, data from deep-inelastic scattering (DIS), Drell-Yan (DY) and jet pro-
duction processes, have played a dominant role in the past. With the advent of the
LHC and its huge amount of data, also processes such as γ+jet, W +jets, and heavy
flavour production (including tt̄) have become increasingly important as the statistical
and systematic errors of the data sets have improved [827]. Many of these processes
have recently been calculated to NNLO, allowing the data to be used in PDF fits at
that order. At the moment this endeavour is actively pursued by a number of differ-
ent groups: ABM [136], CTEQ/CT [489, 551], HERAPDF [115, 413, 819], JR [649],
MSTW/MMHT [614, 755] and NNPDF [192, 194], which provide semi-regular updates
to their fits of parton distributions, when new data and/or theoretical developments
become available. Similarities and differences in the fitting procedure performed by
the different groups will be discussed in some detail in Section 6.2.
The most commonly used PDFs are those determined from a global analysis of a
broad variety of data using a variable flavour number scheme; the most recent updates
(CT14, MMHT2014, NNPDF3.0) are given in [194, 489, 614]. The resulting PDFs
are available at leading order (LO), next-to-leading order (NLO), and next-to-next-to-
leading order (NNLO) in the strong coupling constant αS , depending on the order(s) at
which the global PDF fits have been carried out. Some PDFs have also been produced
at what has been termed modified leading order [715, 846] (LO∗ or similar), in an
attempt to reduce some of the problems that result from the use of LO PDFs in parton
shower Monte Carlo programs. These PDFs are no longer in wide use, however. The
choices of parameterization for these PDFs are then discussed, along with the impact
of the use of particular renormalization and factorization schemes. Given the wide
kinematic coverage of the data, the parameterization of the PDFs must be flexible
enough to describe the parton distributions over a wide range of x and Q2 , and to not
introduce artificial correlations between different x regions. Modern PDFs take into
account the finite charm and bottom quark masses (using a variety of heavy quark
mass schemes) in their fits to data, particularly to data from deep-inelastic scattering.
This has consequences for low-mass cross-sections at the LHC that will be examined.
Technical aspects related to the different orders, schemes and parameterizations are
highlighted in Section 6.2.2.
There are two different classes of technique employed for the PDF determination:
those based on the Hessian approach, and those using a Monte Carlo approach. The
background behind both classes will be discussed in this chapter, and some of the actual
procedures employed in the fitting of the PDFs and the determination of the respective
errors will be highlighted. An important consequence of the method chosen is the way
uncertainties in the fits are handled. These uncertainties also have implications for the
accuracy of overall cross-section calculations. This is followed by a similar discussion
of the choice of the value of αs (mZ ) in the global fits. The data in the global fits are
mostly from strong interaction physics, and they are thus sensitive to the exact value
of αs (mZ ). As a consequence, the global fit itself can be used to determine it. Since
αs (mZ ) is a universal parameter, however, another approach is to assume the world
average value, and to not let αs be a free parameter in the fit. Both approaches can be
and have been used. Recently, most groups have provided PDFs at the world average
value of αs (mZ ), along with PDFs at alternate values of αs (mZ ). Issues related to the
fitting technology, impacting estimates of the intrinsic uncertainties of PDFs and
their impact on cross-sections, are discussed in Section 6.3.
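For Hessian sets the propagation of such uncertainties to a cross-section is commonly performed with the symmetric master formula, which combines the observable evaluated on each pair of eigenvector sets; the sketch below assumes these evaluations have already been carried out and uses invented numbers.

import math

# Symmetric Hessian master formula for a PDF uncertainty on an
# observable X: Delta X = 1/2 * sqrt( sum_i (X_i^+ - X_i^-)^2 ), where
# X_i^{+/-} are the values obtained with the plus/minus eigenvector
# sets. The numbers below are invented for illustration; Monte Carlo
# sets would instead use the standard deviation over replicas.

def hessian_uncertainty(x_plus, x_minus):
    assert len(x_plus) == len(x_minus)
    return 0.5 * math.sqrt(sum((p - m) ** 2 for p, m in zip(x_plus, x_minus)))

x_central = 100.0                      # cross-section with the central set
x_plus  = [100.8, 99.7, 100.3, 101.1]  # eigenvector "+" directions
x_minus = [99.1, 100.2, 99.8, 99.2]    # eigenvector "-" directions
print(x_central, "+/-", hessian_uncertainty(x_plus, x_minus))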
The next section is devoted to a discussion of PDF correlations. The examination
and understanding of correlations between two fitted PDFs, or between a PDF and
a cross-section, or between two cross-sections is crucial to a detailed appreciation of
PDFs at the LHC. In addition, such correlations can be used to decrease the PDF
uncertainties for the ratios of such quantities at the LHC. In Section 6.4, the resultant
PDFs at LO (modified LO), NLO, and NNLO are presented. Parton luminosities are
defined and are shown for several important initial states at the LHC in Section 6.5.
PDF uncertainties using inputs from several PDF groups are then defined, using the
PDF4LHC accords. Correlations among LHC processes are examined using these PDF
combinations, and finally, in Section 6.6 several useful PDF tools that are currently
available are described.
The leading order splitting functions P^(1)_{ij} have previously been given in Eq. (2.33).
They represent the kernel of this evolution equation and clearly couple different PDFs
together. Note that the symmetries of QCD mean that, at this order, splitting functions
involving anti-quarks are simply equal to their quark counterparts. Before turning to
the form of the DGLAP equation and the splitting kernels at higher orders in the next
section, the calculation of the kernels P_{ij} at leading order will be sketched below.
  k^\mu = (1-x)\,p^\mu + \frac{k_\perp^2}{(1-x)\hat{s}}\,p'^\mu + k_\perp^\mu \, .      (6.6)

Here k⊥^2 denotes the positive square of the transverse momentum. Similarly,

  q^2 = (p-k)^2 = -\frac{k_\perp^2}{1-x} \, ,      (6.7)

showing that the propagator of i is proportional to 1/k⊥^2 . As a consequence, the corre-
sponding matrix element for the process under emission of k diverges with k⊥ → 0.
This is the collinear limit, where the transverse momentum of the emitted particle
vanishes. This singular behaviour also triggers divergences in the phase space integra-
tion over k. All these divergences, however, cancel when virtual corrections are added,
in accordance with the Kinoshita–Lee–Nauenberg theorem and as seen previously.
To proceed, the phase space integral over particle k has to be cast into a form
useful for further consideration. With a bit of algebra it is easy to show that
  d\Phi_k = \frac{d^4k}{(2\pi)^4}\,(2\pi)\,\delta(k^2)
          = \frac{1}{(2\pi)^3}\,dx\,d\beta\,d^2k_\perp\;\delta\!\left((1-x)\beta - \frac{k_\perp^2}{\hat{s}}\right)
          = \frac{dx\,dk_\perp^2}{16\pi^2\,(1-x)} \, .      (6.8)
Adding in the emission of the final state parton k, and ignoring all other possible
emissions, yields a higher–order contribution to the cross-section of X production, and
Eq. (6.4) becomes
  d\hat{\sigma}_{jj'\to X}(p,p') = \frac{1}{2\hat{s}}\,d\Phi_X\,\left|\mathcal{M}^{(0)}_{jj'\to X}(p,p')\right|^2
    + \frac{1}{2\hat{s}}\,\frac{1}{16\pi^2}\,d\Phi_X\int\limits_0^1\frac{dx}{1-x}\int dk_\perp^2\;
      \left|\mathcal{M}^{(0)}_{jj'\to ikj'\to kX}(p,p')\right|^2 \, .      (6.9)
The leading-order splitting functions P^(1)_{ji}(x) in collinear factorization are then
defined as the part of the second line of the equation above that diverges in the
collinear limit:

  \frac{1-x}{x}\,\frac{\alpha_s}{2\pi}\,P^{(1)}_{ji}(x)\,\left|\mathcal{M}^{(0)}_{ij'\to X}(xp,p')\right|^2
    = \frac{1}{16\pi^2}\,\lim_{k_\perp\to 0}\,k_\perp^2\,\left|\mathcal{M}^{(0)}_{jj'\to ikj'\to kX}(p,p')\right|^2 \, .      (6.10)
The x in the denominator of the left-hand side of the equation arises from the fact
that the matrix element for ij' → X has an incoming flux given by 2xŝ instead of the
original 2ŝ.
6.1.1.2 Calculating the kernels: P^(1)_{qq}

To calculate the splitting function P^(1)_{qq}, i and j are quark lines and k is a gluon.
Clearly, gauge invariance of the overall matrix element for the associated production
of X and the gluon k is only guaranteed if amplitudes for the emission of the gluon
off all coloured lines are included. The question is whether these contributions exhibit
collinear divergent behaviour.
It can be shown that this is not the case when choosing a smart physical gauge
where the gluon's polarization vector ε^µ is transverse to both p' and k. To see this,
an additional axis n⊥ ⊥ k⊥ with n⊥^2 = 1 is introduced, which allows the explicit
construction of the polarization vectors as

  \epsilon^\mu_\pm = \frac{\sqrt{2}\,k_\perp}{(1-x)\hat{s}}\,p'^\mu + \frac{1}{\sqrt{2}\,k_\perp}\left(k_\perp^\mu \pm i\,n_\perp^\mu\right) \, .      (6.11)
In addition to the orthogonality requirements underlying their construction,

  \epsilon\cdot p' = \epsilon\cdot k = 0 \, ,      (6.12)

they satisfy

  \epsilon_\pm\cdot k_\perp = \frac{k_\perp}{\sqrt{2}} \qquad {\rm and} \qquad \epsilon_\pm\cdot p = \frac{k_\perp}{\sqrt{2}\,(1-x)} \, .      (6.13)
While the square of the graph with the propagator of i boasts two such propagators and is
thus proportional to 1/k⊥^4 , all other graphs have at best one of these propagators. And
since in the squared amplitude there are two scalar products involving the polarization
vector, the numerator of the squared amplitude is proportional to k⊥^2 . This leaves no
overall 1/k⊥^2 divergence in the squared amplitude apart from the one stemming from
the graph with the i-propagator.
Analysing the structure of the reduced amplitude M^(0)_{ij'→X}, it becomes clear that
its square can be written as

  \left|\mathcal{M}^{(0)}_{ij'\to X}\right|^2 = \bar{u}(xp,\lambda)\,\left[\gamma_\mu M^\mu\right]u(xp,\lambda) \, ,      (6.14)

where the spinors refer to the intermediate quark i. Because it is a massless quark,
and since incoming and outgoing spinors come with the same helicity, this is the only
allowed structure. Decomposing M^µ according to

  M^\mu = a\,p^\mu + b\,p'^\mu + m_\perp^\mu \, ,      (6.15)

with m⊥ a vector in the same transverse plane as k⊥, and using the Dirac equation
eliminates the term proportional to p. Also, since the helicities are the same, the spin-
flip operator γ · m⊥ vanishes when sandwiched between the two spinors. In the end,
therefore,

  \left|\mathcal{M}^{(0)}_{ij'\to X}\right|^2 = \bar{u}(xp,\lambda)\,\left[\gamma_\mu\, b\, p'^\mu\right]u(xp,\lambda) \, .      (6.16)

Realizing that

  \left|\bar{u}(xp,\lambda)\,(\gamma\cdot p')\,u(xp,\lambda)\right|^2 = (2x\,p\cdot p')^2 = x^2\hat{s}^2      (6.17)

identifies b, and therefore

  M^\mu = \frac{p'^\mu}{2x\hat{s}}\,\left|\mathcal{M}^{(0)}_{ij'\to X}\right|^2 \, .      (6.18)
These identities can now be rolled out to the calculation of the emission matrix
element in the collinear limit. Including factors of 1/2 and 1/Nc for the average over
incoming quark helicities and colours,

  \left|\mathcal{M}^{(0)}_{jj'\to ikj'\to kX}(p,p')\right|^2
    = \frac{g^2\,{\rm Tr}[T^aT^a]}{2N_c}\sum_{\lambda,\pm}
      \bar{u}(p,\lambda)\,\slashed{\epsilon}_\pm\,\frac{\slashed{p}-\slashed{k}}{(p-k)^2}\,
      \frac{\slashed{p}'}{x\hat{s}}\,\left|\mathcal{M}^{(0)}_{ij'\to X}\right|^2\,
      \frac{\slashed{p}-\slashed{k}}{(p-k)^2}\,\slashed{\epsilon}^{\,*}_\pm\,u(p,\lambda)
    = \frac{g^2\,{\rm Tr}[T^aT^a]}{2N_c\,x\hat{s}\,(-2p\cdot k)^2}\sum_{\pm}
      {\rm Tr}\left[\slashed{p}\,\slashed{\epsilon}_\pm\,(\slashed{p}-\slashed{k})\,\slashed{p}'\,
      (\slashed{p}-\slashed{k})\,\slashed{\epsilon}^{\,*}_\pm\right]\left|\mathcal{M}^{(0)}_{ij'\to X}\right|^2
    = \frac{2g^2 C_F}{x\,k_\perp^2}\,(1+x^2)\,\left|\mathcal{M}^{(0)}_{ij'\to X}\right|^2 \, .      (6.19)

In evaluating the trace, the kinematic relations and the polarization sum

  2\,k\cdot p = \frac{k_\perp^2}{1-x}\,,\qquad
  2\,k\cdot p' = \hat{s}\,(1-x)\,,\qquad
  \sum_\pm \epsilon^\pm_\mu\,\epsilon^{*\pm}_\nu = -g_{\mu\nu} + \frac{k_\mu p'_\nu + k_\nu p'_\mu}{k\cdot p'}      (6.20)

have been used. Inserting this result into the definition of Eq. (6.10) yields

  \frac{1-x}{x}\,\frac{\alpha_s}{2\pi}\,P^{(1)}_{qq}(x)
    = \frac{1}{16\pi^2}\times\frac{2g^2 C_F}{x}\,(1+x^2)
    = \frac{8\pi\alpha_s\,C_F\,(1+x^2)}{16\pi^2\,x}      (6.21)

and therefore

  P^{(1)}_{qq}(x) = C_F\,\frac{1+x^2}{1-x} \, ,      (6.22)
as expected from Eq. (2.33).
Combining this real-emission contribution with the corresponding virtual correction,
denoted here by V, the finite combination can be written as

  {\rm finite} = \int\limits_0^1 dx\,\left[\frac{\alpha_s}{2\pi}\,P^{(1)}_{qq}(x)\right]\left|\mathcal{M}_{qj'\to X}(xp)\right|^2
                 + (1+V)\,\left|\mathcal{M}_{qj'\to X}(p)\right|^2
             = \int\limits_0^1 dx\,\left[\frac{\alpha_s}{2\pi}\,P^{(1)}_{qq}(x) + \delta(1-x)\,(1+V)\right]\left|\mathcal{M}_{qj'\to X}(xp)\right|^2 \, .      (6.23)

Requiring that the overall normalization, i.e. the quark number, remains unchanged implies

  \int\limits_0^1 dx\,\left[\frac{\alpha_s}{2\pi}\,P^{(1)}_{qq}(x) + \delta(1-x)\,(1+V)\right] = 1      (6.24)
and therefore the virtual part must compensate the real emission encoded in the
splitting function:
  V = -\lim_{\delta\to 0}\int\limits_0^{1-\delta} dx\,\frac{\alpha_s}{2\pi}\,P^{(1)}_{qq}(x)
    = -\frac{\alpha_s C_F}{2\pi}\,\lim_{\delta\to 0}\int\limits_0^{1-\delta} dx\,\left[\frac{2}{1-x} - (1+x)\right]
    = \frac{\alpha_s C_F}{2\pi}\left[\frac{3}{2} + 2\log\delta\right] \, .      (6.25)
This looks quite cumbersome. A solution, however, presents itself by realizing that
the "splitting functions" Pij are not really functions but rather distributions
that always act on some other function f (x), such as a PDF. These functions
are typically regular at x = 1, and it is therefore meaningful to define a way to
regulate the divergent behaviour of the Pij at x = 1 independently of the limiting
procedure above. Introducing the "+" prescription, applicable to distributions that
diverge at x = 1, such as 1/(1 − x), and understood under an integral over x,

  \left[\frac{1}{1-x}\right]_+ f(x) \;\overset{!}{=}\; \frac{f(x)-f(1)}{1-x} \, ,      (6.27)

cf. Eq. (2.10). This prescription takes care of the log δ term, located at x = 1 by
virtue of the δ-function, and therefore, finally,

  P_{qq}(x) = C_F\left[\frac{1+x^2}{(1-x)_+} + \frac{3}{2}\,\delta(1-x)\right] \, .      (6.28)

This actually implies that the x-integral over Pqq vanishes exactly,

  \int\limits_0^1 dx\, P_{qq}(x) = 0 \, ,      (6.29)
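As a quick numerical cross-check of Eqs. (6.27)–(6.29), the following sketch (our own, using SciPy for the quadrature; all names are illustrative) applies the "+" prescription explicitly and verifies that the x-integral of Eq. (6.28) vanishes:

import numpy as np
from scipy.integrate import quad

CF = 4.0 / 3.0

def plus_dist_integral(g):
    """Integral over x in [0,1] of g(x)*[1/(1-x)]_+ , using the subtraction of Eq. (6.27)."""
    val, _ = quad(lambda x: (g(x) - g(1.0)) / (1.0 - x), 0.0, 1.0)
    return val

# x-integral of P_qq of Eq. (6.28): plus-distribution piece plus the delta-function term
integral = CF * (plus_dist_integral(lambda x: 1.0 + x**2) + 1.5)
print(integral)   # ~ 0, as stated in Eq. (6.29)

The plus-distribution piece integrates to −3/2, which is cancelled exactly by the δ-function term.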
Fig. 6.1 Processes corresponding to the NLO splitting functions P^(2)_{qi qj}
(left) and P^(2)_{qi q̄j} (right). Unobserved partons are indicated in the figure by
parentheses.
The inclusion of higher-order terms, P^(2)_{ij}, introduces several subtleties that did not
arise at leading order. The first is that, at leading order, the flavour structure of
the evolution is trivial: for two flavours of quark, qi and qj, the splitting function
trivially vanishes unless i = j, i.e. P^(1)_{qi qj} ∝ δij. This is no longer true at the next order,
as indicated in Fig. 6.1 (a). Contributions originating from the splitting of a virtual
gluon into a quark-antiquark pair of different flavour, qi → qj (qi q̄j ), give rise to kernels
that are no longer diagonal in flavour space. Moreover, the same process also gives rise
to kernels that directly couple quarks and antiquarks, qi → q̄j (qi qj ), cf. Fig. 6.1 (b).
Therefore Eq. (6.2) must be generalized in order to account for effects at higher
orders. The evolution can be written as,
  \frac{\partial}{\partial\log Q^2}
  \begin{pmatrix} f_{q_i/h}(x,Q^2)\\ f_{\bar{q}_i/h}(x,Q^2)\\ f_{g/h}(x,Q^2)\end{pmatrix}
  = \frac{\alpha_s(Q^2)}{2\pi}\int\limits_x^1\frac{dz}{z}
  \begin{pmatrix} P_{q_iq_k}\!\left(\frac{x}{z}\right) & P_{q_i\bar{q}_k}\!\left(\frac{x}{z}\right) & P_{qg}\!\left(\frac{x}{z}\right)\\
                  P_{\bar{q}_iq_k}\!\left(\frac{x}{z}\right) & P_{\bar{q}_i\bar{q}_k}\!\left(\frac{x}{z}\right) & P_{\bar{q}g}\!\left(\frac{x}{z}\right)\\
                  P_{gq_k}\!\left(\frac{x}{z}\right) & P_{g\bar{q}_k}\!\left(\frac{x}{z}\right) & P_{gg}\!\left(\frac{x}{z}\right)\end{pmatrix}
  \begin{pmatrix} f_{q_k/h}(z,Q^2)\\ f_{\bar{q}_k/h}(z,Q^2)\\ f_{g/h}(z,Q^2)\end{pmatrix} \, ,      (6.31)
with all of the distributions coupled together. The presence of some amount of anti-
quarks in the parent quark means that there is no physically-meaningful separation
into “valence” and “sea” contributions beyond leading order. However it is possible to
identify combinations of quark PDFs that can be identified with the valence contribu-
tion as follows. The nature of the leading order splitting functions suggests that they
be decomposed according to,
  P_{q_iq_j} = \delta_{ij}\,P^V_{qq} + P^S_{qq} \, ,\qquad
  P_{q_i\bar{q}_j} = \delta_{ij}\,P^V_{q\bar{q}} + P^S_{q\bar{q}} \, .      (6.32)
In order to solve the coupled evolution equations, Eq. (6.31), it is useful to introduce
the following combinations of the splitting functions [550],
  P_{(\pm)} = P^V_{qq} \pm P^V_{q\bar{q}} \, ,\qquad
  P_{QQ} = P_{(+)} + 2\,n_f\,P^S_{qq}
for nf flavours of light quarks. Similarly it is convenient to express the parton distri-
butions in terms of the following quantities,
that is, into either a sum or difference of quark and antiquark distributions. The
evolution expressed in Eq. (6.31) can now be written in a much simpler form in which
the components decouple significantly. Two combinations satisfy a particularly simple
form of the evolution,
  \frac{\partial}{\partial\log Q^2}\,f^{(-)}_{q_i/h}(x,Q^2)
    = \frac{\alpha_s(Q^2)}{2\pi}\int\limits_x^1\frac{dz}{z}\,P_{(-)}\!\left(\frac{x}{z}\right)f^{(-)}_{q_i/h}(z,Q^2) \, ,      (6.36)

  \frac{\partial}{\partial\log Q^2}\,S_i(x,Q^2)
    = \frac{\alpha_s(Q^2)}{2\pi}\int\limits_x^1\frac{dz}{z}\,P_{(+)}\!\left(\frac{x}{z}\right)S_i(z,Q^2) \, ,      (6.37)
These evolution equations can be solved separately; the solutions for f^(−)_{qi/h}(x, Q²)
and Si(x, Q²) are referred to as non-singlet contributions. The remaining quark
singlet combination, f^(+)_{Q/h}(x, Q²), remains coupled to the gluon contributions, but
in a fashion akin to the leading order evolution (cf. Eq. (6.2)),

  \frac{\partial}{\partial\log Q^2}\begin{pmatrix} f^{(+)}_{Q/h}(x,Q^2)\\ f_{g/h}(x,Q^2)\end{pmatrix}
    = \frac{\alpha_s(Q^2)}{2\pi}\int\limits_x^1\frac{dz}{z}
      \begin{pmatrix} P_{QQ}\!\left(\frac{x}{z}\right) & P_{Qg}\!\left(\frac{x}{z}\right)\\
                      P_{gQ}\!\left(\frac{x}{z}\right) & P_{gg}\!\left(\frac{x}{z}\right)\end{pmatrix}
      \begin{pmatrix} f^{(+)}_{Q/h}(z,Q^2)\\ f_{g/h}(z,Q^2)\end{pmatrix} \, .      (6.39)
Eqs. (6.36), (6.37), and (6.39) are sufficient to determine the evolution of all of the
individual PDFs at a given order, if the corresponding splitting functions are known
to the same accuracy.
The first corrections to the leading-order splitting functions were originally com-
puted in Refs. [421, 549]. Their forms are reproduced here for completeness. The two-
loop gluon splitting function receives contributions from two different colour structures,
  P^{(2)}_{ggA}(z) = \frac{\Gamma_1}{8}\,\frac{1}{C_A}\Big[P^{(1)}_{gg}(z)\Big]_+
    + \delta(1-z)\Big[C_A\left(-1+3\zeta_3\right) + \beta_0\Big]
    + \Big[P^{(1)}_{gg}(z)\Big]_+\Big[-2\ln(1-z) + \tfrac{1}{2}\ln z\Big]\ln z
    + P^{(1)}_{gg}(-z)\Big[S_2(z) + \tfrac{1}{2}\ln^2 z\Big]
    + C_A\Big[4(1+z)\ln^2 z - \frac{4(9+11z^2)}{3}\ln z - \frac{277}{18z} + 19(1-z) + \frac{277}{18}z^2\Big]
    + \beta_0\Big[-\frac{13}{6z}(1-z) - \frac{3}{2}z^2 + \frac{13}{6}(1+z)\ln z\Big] \, ,      (6.41)

  P^{(2)}_{ggF}(z) = C_F\Big[-\delta(1-z) + \frac{4}{3z} - 16 + 8z + \frac{20}{3}z^2
    - 2(1+z)\ln^2 z - 2(3+5z)\ln z\Big] \, ,
where the plus-part of the first-order splitting functions has been defined previously in
Eq. (2.33). This equation also introduces both the two-loop cusp anomalous dimension,
  \Gamma_1 = \frac{4}{3}\Big[C_A\left(4-\pi^2\right) + 5\beta_0\Big]      (6.42)
as well as the function,
  S_2(z) = -2\,{\rm Li}_2(-z) - 2\ln(1+z)\ln z - \frac{\pi^2}{6} \, .      (6.43)
The corresponding result for P^(2)_{gq}(z) is,

  P^{(2)}_{gq}(z) = C_A\Big\{P^{(1)}_{gq}(z)\Big[\ln^2(1-z) - 2\ln(1-z)\ln z - \frac{101}{18} - \frac{\pi^2}{6}\Big]
    + P^{(1)}_{gq}(-z)\,S_2(z)
    + C_F\Big[2z\ln(1-z) + (2+z)\ln^2 z - \frac{36+15z+8z^2}{3}\ln z + \frac{56-z+88z^2}{18}\Big]\Big\}
    - C_F\Big\{P^{(1)}_{gq}(z)\ln^2(1-z) + \big[3P^{(1)}_{gq}(z) + 2zC_F\big]\ln(1-z)
    + C_F\Big[\frac{2-z}{2}\ln^2 z - \frac{4+7z}{2}\ln z + \frac{5+7z}{2}\Big]\Big\}
    + \beta_0\Big\{P^{(1)}_{gq}(z)\Big[\ln(1-z) + \frac{5}{3}\Big] + z\Big\} \, .      (6.44)
The quark splitting functions employ the decomposition of Eq. (6.32). The com-
ponents are,
  P^{(2)}_{qqV}(z) = C_F\,\frac{\Gamma_1}{8}\,\frac{1+z^2}{(1-z)_+}
    + \delta(1-z)\,C_F\Big[C_F\Big(\frac{3}{8} - \frac{\pi^2}{2} + 6\zeta_3\Big)
      + C_A\Big(\frac{1}{4} - 3\zeta_3\Big) + \beta_0\Big(\frac{1}{8} + \frac{\pi^2}{6}\Big)\Big]
    - C_F^2\Big[\frac{1+z^2}{1-z}\Big(2\ln(1-z) + \frac{3}{2}\ln z\Big)\ln z
      + \frac{1+z}{2}\ln^2 z + \frac{3+7z}{2}\ln z + 5(1-z)\Big]
    + C_A C_F\Big[\frac{1}{2}\,\frac{1+z^2}{1-z}\ln^2 z + (1+z)\ln z + 3(1-z)\Big]
    + \beta_0\Big[\frac{1}{2}\,\frac{1+z^2}{1-z}\ln z + 1 - z\Big] \, ,

  P^{(2)}_{q\bar{q}V}(z) = (2C_F - C_A)\,C_F\Big[\frac{1+z^2}{1+z}\Big(S_2(z) + \frac{1}{2}\ln^2 z\Big)
    + (1+z)\ln z + 2(1-z)\Big] \, ,

  P^{(2)}_{qqS}(z) = T_R C_F\Big[-(1+z)\ln^2 z + \Big(1+5z+\frac{8}{3}z^2\Big)\ln z
    + \frac{20}{9z} - 2 + 6z - \frac{56}{9}z^2\Big] \, ,
                                                                              (6.45)
and
  P^{(2)}_{qg}(z) = C_F T_R\Big\{\big[z^2+(1-z)^2\big]\Big[\ln^2\frac{1-z}{z} - 2\ln\frac{1-z}{z} - \frac{\pi^2}{3} + 5\Big]
    + 2\ln(1-z) - \frac{1-2z}{2}\ln^2 z - \frac{1-4z}{2}\ln z + 2 - \frac{9}{2}z\Big\}
    + C_A T_R\Big\{\big[z^2+(1-z)^2\big]\Big[-\ln^2(1-z) + 2\ln(1-z) + \frac{22}{3}\ln z - \frac{109}{9} + \frac{\pi^2}{6}\Big]
    + \big[z^2+(1+z)^2\big]S_2(z) - 2\ln(1-z) - (1+2z)\ln^2 z
    + \frac{68z-19}{3}\ln z + \frac{20}{9z} + \frac{91}{9} + \frac{7}{9}z\Big\} \, .      (6.46)
The solution of the PDF evolution equations given earlier, in the presence of these
splitting functions, is obtained numerically. Some examples of the effects of DGLAP
evolution on the PDFs will be given later in this chapter.
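As a rough illustration of what such a numerical solution involves (and not the optimized x-space or Mellin-space solvers actually used by the fitting groups), the following sketch evolves a single non-singlet distribution with the leading-order kernel of Eq. (6.28), treating the plus prescription explicitly; the grid choice, midpoint quadrature, and simple Euler stepping in log Q^2 are our own simplifications, and all function names are illustrative:

import numpy as np

CF = 4.0 / 3.0

def pqq_convolution(f, x, npts=400):
    """(P_qq ⊗ f)(x) at leading order, Eq. (6.28), with the plus prescription written out.
    f is a callable f(x); x must be strictly below 1."""
    z = np.linspace(x, 1.0, npts, endpoint=False) + 0.5 * (1.0 - x) / npts
    w = (1.0 - x) / npts
    integrand = (1.0 + z**2) / (1.0 - z) * (f(x / z) / z - f(x))
    plus_part = np.sum(integrand) * w
    # region z < x contributes only through the subtraction term of the "+" prescription
    plus_part += f(x) * (2.0 * np.log(1.0 - x) + x + 0.5 * x**2)
    return CF * (plus_part + 1.5 * f(x))

def evolve_nonsinglet(f0, xgrid, t0, t1, alphas, nsteps=50):
    """Euler evolution in t = log Q^2 of a non-singlet PDF,
    df/dt = alphas(t)/(2 pi) * (P_qq ⊗ f), cf. Eq. (6.36) at leading order.
    xgrid should not include x = 1; f0 and alphas are callables."""
    f = np.array([f0(x) for x in xgrid])
    dt = (t1 - t0) / nsteps
    for istep in range(nsteps):
        t = t0 + istep * dt
        interp = lambda x: np.interp(x, xgrid, f)
        dfdt = np.array([alphas(t) / (2.0 * np.pi) * pqq_convolution(interp, x)
                         for x in xgrid])
        f = f + dt * dfdt
    return f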
  \frac{d^2\sigma}{dx\,dy} = \frac{2\pi\alpha^2}{x\,y\,Q^4}
    \left\{\left[1+(1-y)^2\right]F_2 - \left[1-(1-y)^2\right]x F_3 - y^2 F_L\right\} \, .      (6.47)
This introduces a parameterization of the cross-section in terms of the structure
functions F2 , F3 , and FL . This formula is correct for neutral current (NC) pro-
cesses where the exchanged particle is a photon or a Z-boson. For charged current
(CC) interactions the particle exchanged is a W -boson and, for a lepton beam of the
appropriate helicity to facilitate the weak interaction, Eq. (6.47) must be multiplied
by an additional coupling and propagator factor,

  \left(\frac{G_F\,m_W^2}{4\pi\alpha}\right)^2\left(\frac{Q^2}{Q^2+m_W^2}\right)^2 \, .      (6.48)
In the quark-parton model the relationship between the structure functions and
the PDFs is:
  F_2^{NC} = x\sum_q C_q\left[f_q(x,Q^2) + f_{\bar{q}}(x,Q^2)\right] \, ,      (6.49)

  F_3^{NC} = \sum_q C'_q\left[f_q(x,Q^2) - f_{\bar{q}}(x,Q^2)\right] \, ,      (6.50)

  F_2^{CC(-)} = 2x\left[f_u(x,Q^2) + f_{\bar{d}}(x,Q^2) + f_{\bar{s}}(x,Q^2) + f_c(x,Q^2) + \ldots\right] \, ,      (6.51)

  F_3^{CC(-)} = 2\left[f_u(x,Q^2) - f_{\bar{d}}(x,Q^2) - f_{\bar{s}}(x,Q^2) + f_c(x,Q^2) + \ldots\right] \, .      (6.52)
Here, Cq and Cq0 represent the combinations of couplings and propagators necessary
to account for the separate photon and Z-boson contributions to the neutral current
process. In particular, F3 would be zero for the case of pure photon exchange. The
charged-current process corresponds to the case of an incident electron, so that a W −
is exchanged, with the corresponding result for W + obtained by interchanging d ↔ u
and s ↔ c. Since the contribution of sea quarks and anti-quarks is equal, measurements
of F3NC are particularly useful probes of the valence quark distributions.
Nevertheless, the broad picture described above still holds to some degree in global
PDF analyses. Again, Q here refers to a scale representing the hardness of the inter-
action, as for example the jet transverse momentum in a calculation of the inclusive
jet cross-section.
A NLO (NNLO) global PDF fit requires thousands of iterations and thus thousands
of estimates of NLO (NNLO) matrix elements. The NLO (NNLO) matrix elements
require too much time for evaluation to be used directly in global fits. Previously, a K-
factor (NLO/LO or NNLO/LO) was calculated for each data point used in the global
fit, and the LO matrix element (which can be calculated very quickly) was changed in
the global fit (multiplied by the K-factor). Currently, a routine such as fastNLO [686]
or Applgrid [329] is often used for fast evaluation of the NLO matrix element with
the new iterated PDF. Practically speaking, both provide the same order of accuracy.
Even when fastNLO or Applgrid is used at NLO, a K-factor approach (NNLO/NLO)
is still needed at NNLO, at least until the fastNLO/Applgrid technique can be adapted
for NNLO calculations (in progress at the completion of this book).
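As a purely schematic illustration of this bookkeeping (the function names and interfaces below are hypothetical, not those of fastNLO or Applgrid), the theory values used in each fit iteration can be assembled as:

import numpy as np

def nnlo_approx(fast_nlo_prediction, kfactors, pdf_params):
    """Approximate NNLO theory values for all data points during a fit iteration:
    the fast NLO convolution (e.g. via interpolation-grid tables) is evaluated with
    the trial PDF and multiplied by fixed point-by-point NNLO/NLO K-factors
    computed once with a reference PDF."""
    return np.asarray(kfactors) * fast_nlo_prediction(pdf_params)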
The data from DIS, DY, and jet processes utilized in PDF fits cover a wide range
in x and Q2 . HERA data [77, 115] are predominantly at low x, while the fixed target
DIS [169, 170, 217, 900] and DY [769, 874] data are at higher x. Collider jet data at
both the TEVATRON and LHC [9, 61, 86, 103, 116, 117, 119, 126, 173, 360] cover a broad
range in x and Q2 by themselves and are particularly important in the determination
of the high x gluon distribution. Jet data from the LHC have now been used in global
PDF fits, and their importance will increase as high statistics data, and their detailed
systematic error information, are published. In addition, jet production data from
HERA have been used in the HERAPDF global PDF fits [78, 79, 389, 390].
As an example, the kinematic coverage of the data used in the NNPDF2.3 fit [192]
is shown in Fig. 6.2.
There is a tradeoff between the size and the consistency of a data set used in a
global PDF fit, in that a wider data set contains more information, but information
coming from different experiments may be partially inconsistent. Most of the fixed
target data have been taken on nuclear targets and suffer from uncertainties in the
nuclear corrections that must be made [693]. This is unfortunate as it is the neutrino
fixed target data that provide most of the quark flavour differentiation, for example
between up, down, and strange quarks. As LHC collider data become more copious,
it may be possible to reduce the reliance on fixed target nuclear data. For example,
the rapidity distributions for W + , W − , and Z production at the LHC (as well as the
TEVATRON) are proving to be very useful in constraining u and d valence and sea
quarks, as described in Chapter 9.
There is considerable overlap, however, for the kinematic coverage among the
datasets with the degree of overlap increasing with time as the full statistics of the
HERA experiments have been published. Parton distributions determined at a given
x and Q2 ‘feed-down’ or evolve to lower x values at higher Q2 values, as discussed
in Chapter 2. DGLAP-based NLO and NNLO pQCD should provide an accurate de-
scription of the data (and of the evolution of the parton distributions) over the entire
kinematic range present in current global fits. At very low x and Q2 , DGLAP evolu-
tion is believed to be no longer applicable and a BFKL [189, 517, 708, 709] description
Fig. 6.2 The kinematical coverage in x and Q2 for the data sets in-
cluded in the NNPDF2.3 global PDF fit. Reprinted with permission from
Ref. [192].
should be used. No clear evidence of BFKL physics is seen in the current range of
data; thus all global analyses use conventional DGLAP evolution of PDFs.
There is a remarkable consistency between the data in the PDF fits and the pertur-
bative QCD theory fit to them. The CT, MMHT, and NNPDF groups use over 3000
data points in their global PDF analyses and the χ2 /DOF for the fit of theory to
data is on the order of unity, for both the NLO and NNLO analyses. For most of the
data points, the statistical errors are smaller than the systematic errors, so a proper
treatment of the systematic errors and their bin-to-bin correlations is important. All
modern day experiments provide the needed correlated systematic error information.
The H1 and ZEUS experiments have combined the data from the two experiments
from Run 1 at HERA (and now Run 2) in such a way as to reduce both the systematic
and statistical errors, providing errors of both types of the order of a percent or less
over much of the HERA kinematics [77]. In the Run 1 combination, for example, 1402
data points are combined to form 742 cross-section measurements (including both neu-
tral current and charged current cross-sections). The combined data sets, with their
small statistical and systematic errors, form a very strong constraint for all modern
global PDF fits. Thus, it can be hard for other data sets, for example from the LHC,
to match the statistical and systematic errors of the HERA data. The manner of using
the systematic errors in a global fit will be discussed later in Section 6.3.
The accuracy of the extrapolation to higher Q2 depends on the accuracy of the
original measurement, any uncertainty on αS (Q2 ) and the accuracy of the evolution
code. Most global PDF analyses are carried out at NLO and NNLO. Both the NLO and
the NNLO evolution codes have now been benchmarked against each other and found
to be consistent [190, 191, 251, 262, 470, 572]. Most processes of interest have been
calculated to NLO and there is the possibility, as discussed previously, of including data
from these processes in global fits. Fewer processes have been calculated at NNLO [134,
296]. The processes that have been calculated include DIS, DY, diphoton [340], tt̄
production [427], and inclusive W ,Z, and Higgs boson + jet production [269, 271, 272,
274]. Late in the writing of this book, the complete NNLO inclusive jet production
cross-section has been completed (a monumental feat), but the results are not yet in
the form to be easily used in global PDF fits [422]. Typically, jet production has been
included in global PDF fits using NLO matrix elements. Threshold corrections [445,
676] can be used to make an approximate NNLO prediction, but the corrections are
valid only over a limited phase space at the LHC, thus greatly reducing the size and
power of the jet data in the global fits. Thus, any of the NNLO global PDF analyses
discussed here are still approximate for this reason, but in practice the approximation
should work reasonably well. The NNLO corrections for the inclusive jet cross-section
have been found to be small (and relatively constant for the LHC phase space), if a scale
equal to the transverse momentum of the jet is used [335].1 The CT14, MMHT2014,
and NNPDF3.0 PDFs follow different philosophies regarding the use of LHC jet data in
NNLO fits. CT14 makes no cuts on the LHC jet data, MMHT2014 doesn’t include the
LHC jet data, and NNPDF3.0 uses only the jet data for which threshold resummation
provides a reasonable prediction. Current evolution programmes should be able to
carry out the evolution using NLO and NNLO DGLAP to an accuracy of a few percent
over the hadron collider kinematic range, except perhaps at very large and very small
x.
The kinematics appropriate for the production of a state of mass M and rapidity
y at the LHC is shown in Fig. 6.3 [320]. For example, to produce a state of mass
100 GeV and rapidity 2 requires partons of x values 0.05 and 0.001 at a Q2 value of
1 × 104 GeV2 . Compare this figure to the scatterplot of the x and Q2 range included
in the recent NNPDF2.3 fit and it is clear that an extrapolation to higher Q2 (M 2 ) is
required for predictions for many of the LHC processes of interest. As more Standard
Model processes are included in global PDF fits, the need for extrapolation will be
reduced.
Fig. 6.3 A plot showing the x and Q2 values needed for the colliding
partons to produce a final state with mass M and rapidity y at the LHC
(14 TeV).
state partons. The collinear emissions result in pole terms of the form 1/ε, where ε
is the dimensional regularization parameter. Basically the scheme definition specifies
how much of the finite corrections to subtract along with the divergent pieces. Almost
universally, the MS-bar scheme is used; using dimensional regularization, in this scheme
the pole terms and the accompanying log 4π and Euler constant terms are subtracted.2
PDFs are also available in the DIS scheme (where the full order αs corrections for F2
2 Within the MS scheme, PDFs can also be defined for a fixed number of flavours, which then have
a validity over the kinematic range for which that number (and only that number) of flavours can be
present in the proton.
data, and the commensurate change in the fitted PDFs, there is basically no modifi-
cation for predictions at high Q2 at the LHC.
It is also possible to use only leading-order matrix element calculations in the
global fits which results in leading-order parton distribution functions, which have
been made available, for example, by the CTEQ [489, 818], MSTW/MMHT [614,
755] and NNPDF [191, 194] groups. For many hard matrix elements for processes
used in the global analysis, there exist K factors significantly different from unity.
Thus, one expects there to be noticeable differences between the LO and NLO parton
distributions (and indeed this is often the case, especially at low x and high x).
Global analyses have traditionally used a generic form for the parameterization of
both the quark and gluon distributions at some reference value Q0 :3

  f(x, Q_0) = A_0\, x^{A_1}\, (1-x)^{A_2}\, P(x; A_3, \ldots) \, .

The reference value Q0 is usually chosen in the range of 1–2 GeV. The parameter A1
is associated with small-x Regge behaviour while A2 is associated with large-x valence
counting rules. We expect A1 to be approximately -1 for gluons and anti-quarks, and of
the order of 1/2 for valence quarks, from the Regge arguments mentioned in Chapter 2.
Counting rule arguments tell us that the A2 parameter should be related to 2ns − 1,
where ns is the minimum number of spectator quarks. So, for valence quarks in a
proton, there are two spectator quarks, and we expect A2 = 3. For a gluon, there
are three spectator quarks, and A2 = 5; for anti-quarks in a proton, there are four
spectator quarks, and thus A2 = 7. Such arguments are useful, for example in telling
us that the gluon distribution should fall more rapidly with x than quark distributions,
but it is not clear exactly at what value of Q the arguments made above are valid.
The first two factors, in general, are not sufficient to completely describe either
quark or gluon distributions. The term P (x; A3 , ...) is a suitably chosen smooth func-
tion, depending on one or more parameters, that adds more flexibility to the PDF
parameterization. P (x; A3 , ...) is chosen so as to tend towards a constant for x ap-
proaching either 0 or 1, so that the limiting behaviour is given by the first two terms.
In general, both the number of free parameters and the functional form can have
an influence on the global fit. A too-limited parameterization not only can lead to a
worse description of the data, but also to PDFs in different kinematic regions being
tied together not by the physics, but by the limitations of the parameterization. Note
that the parameterization forms shown here imply that PDFs are positive-definite. As
they are not physical objects by themselves, it is possible for them to be negative,
especially at low Q2 . Some PDF groups (such as CT) use a positive-definite form for
the parameterization; others do not. For example, the MSTW2008 gluon distribution is
negative for x < 0.0001 at Q2 = 2 GeV2 . Evolution quickly brings the gluon into positive
territory.
The CT14 fit uses 28 free parameters (many of the PDF parameters are either
fixed at reasonable values, or are constrained by sum rules). There are a total of 8 free
3 Recently, there has been a trend towards the use of more sophisticated forms of parameterization
in global fits, but the physics arguments listed here are still valid. For example, in the CT14 global
fit [489], P(x) is defined by a fourth-order polynomial in √x; the polynomial is then re-expressed in
terms of Bernstein polynomials in order to reduce correlations among the coefficients.
parameters for the valence quarks, 5 for the gluon and 15 for the sea quarks.
The MMHT2014 fit uses 20 free parameters, while the NNPDF fits effectively have
259 free parameters. The NNPDF approach attempts to minimize the parameterization
bias by exploring global fits using a large number of free parameters in a Monte Carlo
approach. The general form for NNPDF can be written as fi (x, Q0 ) = ci (x) NNi (x),
where NNi (x) is a neural network and ci (x) is a "pre-processing function".
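To make the generic form concrete, the following sketch (illustrative only; the actual CT14 and NNPDF parameterizations differ in detail, and all names and coefficient choices here are ours) builds a PDF ansatz from the x^A1 (1−x)^A2 factors and a Bernstein-polynomial modulating function in √x:

import numpy as np
from scipy.special import comb

def bernstein(x, coeffs):
    """Smooth modulating function P(x) built from Bernstein polynomials in sqrt(x),
    one possible choice of the function described in the text (cf. footnote 3)."""
    y = np.sqrt(x)
    n = len(coeffs) - 1
    return sum(c * comb(n, k) * y**k * (1.0 - y)**(n - k)
               for k, c in enumerate(coeffs))

def pdf_ansatz(x, A0, A1, A2, coeffs):
    """Generic input form f(x, Q0) = A0 * x^A1 * (1-x)^A2 * P(x)."""
    return A0 * x**A1 * (1.0 - x)**A2 * bernstein(x, coeffs)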
In the past, PDFs were often made available to the world in a form where the
x and Q2 dependence was parameterized. Now, almost universally, the PDFs for a
given x and Q2 range can be interpolated from a grid that is provided by the PDF
groups, or the grid can be generated given the starting parameters for the PDFs (see
the discussion on LHAPDF in Section 6.6). All techniques should provide an accuracy
on the output PDF distributions on the order of a few percent or better.
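As an illustration, the interpolation grids can be accessed through the LHAPDF library; the following sketch assumes LHAPDF 6 with its Python bindings and the CT14nnlo grid files installed (the set name is an example choice), and evaluates the central member:

import lhapdf

pdf = lhapdf.mkPDF("CT14nnlo", 0)   # member 0 = central PDF
x, Q = 0.01, 100.0
print(pdf.xfxQ(21, x, Q))    # x*g(x,Q)  (PDG id 21 = gluon)
print(pdf.xfxQ(2, x, Q))     # x*u(x,Q)
print(pdf.alphasQ(91.1876))  # alpha_s at mZ, as tabulated with the set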
The parton distributions from the CT14 NNLO PDFs are plotted in Fig. 6.5 at a
Q2 value of 2 GeV 2 (near the starting point of evolution) and in Fig. 6.6 at a Q2 value
of 10000 GeV 2 (more typical of LHC processes). At the lower Q2 value, the up quark
and down quark distributions peak near x values of 1/3, the remnant of the spike in
the primitive model described in Chapter 2. The charm PDF is very suppressed as
it is produced entirely by evolution, and the Q2 value is near the starting scale for
the evolution. There is no bottom quark distribution since it is below threshold. At
higher Q2 values, the up and down quark peaks become shoulders due to the effects of
evolution. At high Q2 , the gluon distribution is dominant at x values of less than 0.1
with the valence quark distributions dominant at higher x. One of the major influences
of the HERA data has been to steepen the gluon distribution at low x.
The CT14 up quark, up-bar quark, b-quark, and gluon distributions are shown as a
function of Q2 for x values of 0.001, 0.01, 0.1 and 0.3 in Figs. 6.7 and 6.8. At low x, the
PDFs increase with Q2 , while at higher x, the PDFs decrease with Q2 . Both effects
are due to DGLAP evolution, as discussed previously. An x value of approximately
Fig. 6.7 The CT14 NNLO up quark, up-bar quark, c quark, and gluon
parton distribution functions evaluated as a function of Q2 at x values of
0.001 (left) and 0.01 (right).
0.1 is the pivot point for gluon evolution; at this x value, the gluon distribution
changes little as the value of Q2 increases. It can also be seen that both the charm
(and bottom) quark distributions are generated perturbatively from gluon splitting,
and thus the distributions are zero below threshold for heavy quark pair production
and rise rapidly thereafter with increasing Q2 . It is also possible for there to be intrinsic
charm, where charm quarks are present below threshold. However, no strong evidence
has been observed to date for the existence of intrinsic charm.
Fig. 6.8 The CT14 NNLO up quark, up-bar quark, c quark and gluon
parton distribution functions evaluated as a function of Q2 at an x value
of 0.1 (left) and 0.3 (right).
Fig. 6.9 The momentum fractions carried by the CT14 NNLO quark and
gluon distributions, as a function of Q. The gluon distribution in the right
figure is shown without (solid) and with (dotted) the presence of a top
quark PDF.
The average proton momentum carried by each parton species is shown in Fig. 6.9.
As Q2 increases, the momentum carried by up and down valence quarks decreases,
while the momentum carried by the gluon and by sea quarks increases. For typical LHC
hard-scattering scales, the gluon carries slightly less than 50% of the parent proton’s
momentum. Note that a 5-flavour scheme is most commonly used, i.e. the charm
and bottom (but not top) can appear as sea quarks once the Q value is sufficient
to pair produce them from gluon splitting. It is also possible to allow top quarks in
the sea in a 6-flavour scheme; even at the highest Q values, only about 1% of the
proton’s momentum is carried by top quarks. The momentum added to the top quark
distribution comes primarily from the gluon distribution.
The photon is also a parton constituent of the proton, just as a quark or gluon
is, and can be produced from QED radiation from quark lines [754]. This source of
photons is known as the inelastic component. There is also a (well-known) elastic com-
ponent, which cannot be ignored, resulting from coherent electromagnetic radiation
from the proton as a whole, leaving the proton intact [292, 587]. Both components
contribute to photon-induced processes at the LHC. The inelastic component evolves
with Q2 , while the elastic component is relatively constant (varying with the running
of the QED coupling). To include the photon PDF, the QCD evolution of the par-
tons in the proton now has to be expanded to a QCD+QED evolution, where the
QED aspect involves the electromagnetic coupling α(Q2 ) instead of αs (Q2 ) and the
corresponding splitting function is Abelian rather than non-Abelian. At the LHC,
especially for the 13-14 TeV running, processes involving photons in the initial state
will become increasingly important, c.f. γγ → W W , or photon-initiated production
of W H. There is little hadronic data to directly constrain the photon PDF, but it
has been known that less than 1% of the proton’s momentum is carried by photons.
The first attempt (MRST2004QED) to model the inelastic component just considered
photon emission from quark lines, using the known quark distributions, and using ei-
ther the current quark mass (few MeV) or the constituent quark mass (few hundred
MeV) as a cutoff [754] (see also Refs. [587, 772]). The disparity in the cutoffs leads to
a wide range in the possible size of the photon PDF. Other attempts to determine the
photon distribution used Drell-Yan data from the LHC [193, 194, 275] (NNPDF2.3qed,
followed by NNPDF3.0qed) to fit for the photon PDF.4
Another approach [80, 837] (CT14qed, CT14qed inc) used data from the scattering
process ep → eγX measured by the ZEUS collaboration [391], leading to an upper
constraint on the total inelastic photon PDF momentum, at a scale of 1.3 GeV, of
approximately 0.14% (and a lower constraint of 0). This is to be compared to the total
elastic photon PDF momentum fraction of 0.15%. The photon PDFs for the inelastic
component only, assuming the maximum intrinsic momentum fraction of 0.14%, and
the total photon PDF, including the elastic component as well, are shown in Fig. 6.10
for Q=1.3 GeV (left) and Q=85 GeV (right). The dominance of the inelastic compo-
nent at high Q can be observed.
Recently, the photon PDF has been determined to a high precision (1-2%) using
electron-proton scattering data, considering an equivalence between a template cross-
section calculated either using proton structure functions, or a photon PDF [746].
The resulting PDF (LUXqed) is in good agreement with CT14qed inc and is at the
lower edge of the NNPDF2.3qed photon PDF uncertainty band at high x, as shown
in Fig. 6.11.
The charm quark distribution also has a dynamic component, generated through
gluon splitting into a cc̄ pair.5 The photon/charm ratio increases with increasing Q
and increasing x value. One reason for the variation in Q is that while αs decreases
with Q, α remains approximately constant (actually, it rises slightly). At low x, the
photon/charm ratio is of the order of 5-10%, due to the difference in coupling constants
4 NNPDF3.0qed improves on the NNPDF2.3qed photon PDF with a correct treatment of α(αs L)^n
terms in the evolution. These resulted in a large uncertainty for the photon PDF, as the errors in the
reasonably precise high-mass Drell-Yan data are still large compared to the expected contributions
from photon-initiated processes.
5 There may also be an intrinsic charm component at low Q, but the evidence is not convincing.
Fig. 6.10 The photon PDF from the inelastic component only, with a
photon momentum fraction of 0.14%, and the total photon PDF, including
the elastic component, at a Q value of 1.3 GeV (left) and 85 GeV (right).
Fig. 6.11 The ratio of various photon PDF sets to that of the LUXqed
photon PDF set, all evaluated at a scale of 100 GeV. Note that the vertical
axis scales are different for the sub-plots. Reprinted with permission from
Ref. [746].
(αs vs. α) and the larger gluon than quark (primarily up quark) distribution at low
x. (Remember that the photon does have a substantial elastic component at small x
which is included in this ratio.) At high x, the dominance of the valence up quark
over the gluon results in the photon distribution becoming larger than the charm
distribution.
function may contain the full set of correlated errors, or only a partial set. The corre-
lated systematic errors may be accounted for using a covariance matrix, or as a shift
to the data, adopting a χ2 penalty proportional to the size of the shift divided by the
systematic error. The two methods should be equivalent. Below we discuss the shift
method.
In the following description, CT10 NLO PDFs [713] are used (similar considerations
apply at NNLO), along with the HERA Run I combined (H1+ZEUS) neutral current
(e+ p) cross sections [77], to discuss the use of the Hessian formalism. A comparison of
the HERA data and the NLO predictions using the CT10 PDFs is shown in Fig. 6.12. On
the left, the data are presented in unshifted form, and on the right, optimal systematic
error shifts have been applied in the manner detailed below. There is good agreement
between the combined HERA Run I data and the CT10 NLO predictions with a global
χ2 of about 680 for the 579 data points, which is typical for the global fit PDFs.
The HERA Run I combined data have Nλ = 114 independent sources of experimen-
tal systematic uncertainty, with parameters λα that should obey a standard Gaussian
or normal distribution. The contribution of the HERA dataset to the χ2 can be written
as

  \chi^2(\{a\},\{\lambda\}) = \sum_{k=1}^{N}\frac{1}{s_k^2}
    \left(D_k - T_k(\{a\}) - \sum_{\alpha=1}^{N_\lambda}\lambda_\alpha\,\beta_{k\alpha}\right)^2
    + \sum_{\alpha=1}^{N_\lambda}\lambda_\alpha^2 \, ,      (6.54)
where N is the total number of points and Tk ({a}) is the theory value for the kth data point.
The χ2 function is minimized with respect to the size of the systematic error shifts λα
using the algebraic procedure described. It is also possible to add a penalty term to
the χ2 function that prevents relatively unconstrained PDF parameters from reaching
values that might lead to unphysical predictions in regions where experimental data
are sparse (e.g. very low x).
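As an illustration of the shift method, the nuisance parameters λα of Eq. (6.54) can be profiled analytically, since the χ2 is quadratic in them; the following sketch (our own minimal implementation, not the code of any fitting group) solves the resulting linear system:

import numpy as np

def profiled_chi2(D, T, s, beta):
    """Chi^2 of Eq. (6.54) with the systematic shifts lambda_alpha minimized analytically.
    D, T, s are length-N arrays of data, theory and uncorrelated errors;
    beta is an (N, N_lambda) array of correlated systematic shifts beta_{k,alpha}."""
    r = (D - T) / s**2                                   # weighted residuals
    A = np.eye(beta.shape[1]) + beta.T @ (beta / s[:, None]**2)
    b = beta.T @ r
    lam = np.linalg.solve(A, b)                          # optimal shifts
    shifted = D - T - beta @ lam
    chi2 = np.sum(shifted**2 / s**2) + np.sum(lam**2)
    return chi2, lam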
As expected, a better agreement of data with theory is observed when the sys-
tematic error shifts are allowed. It is important to check that the systematic error
parameters λα (a) contribute to the χ2 an amount on the order of the total number
of systematic errors (114) and (b) that the sizes of the parameters follow a Gaus-
sian distribution. For the case of CT10 and the HERA Run I data, the systematic
error contribution to the total χ2 is 65, or somewhat better than expected, and their
distribution is approximately Gaussian, as shown in Fig. 6.13.
Not all systematic errors are equally important, though, and it is also crucial to
verify that no ’major’ systematic error needs to be shifted by several sigma. Given the
precision of the HERA Run I data, the size of the systematic error shifts required are
relatively small. This need not be the case, for example, for the case of inclusive jet
production, either at the TEVATRON or the LHC.
The Hessian method results in the production of a central (best fit) PDF, and
a set of error PDFs. In this method, a large matrix (26 × 26 for CT10 and 28 × 28
for CT14), with dimension equal to the number of free parameters in the fit, has to be diagonalized.
6 As more data is included, more PDF parameters in the global fit can be set free, resulting in a
larger number of eigenvectors.
7 But the total error also includes other sources of uncertainty, for example from possible parame-
terization bias.
excursion of 1 (for a 1σ error) is too low a value in a global PDF fit. These global fits
use data sets arising from a number of different processes and different experiments;
there is a non-negligible tension between some of the different data sets. In addition,
the finite number of PDF parameters used in the parameterizations (parameterization
bias) also leads to the need for a larger tolerance. Thus, a larger variation in ∆χ2 is
required for a 68% CL. For example, CT10 uses a tolerance T=10 for a 90% CL error,
corresponding to T=6.1 for a 68% CL error,8 while MSTW uses a dynamical tolerance
(varying from 1 to 6.5) for each eigenvector.
The uncertainties for all predictions should be linearly dependent on the tolerance
parameter used; thus, it should be reasonable to scale the uncertainty for an observable
from the 90% CL limit provided by the CT error PDFs to a one-sigma error by dividing
by a factor of 1.645. Such a scaling will be a better approximation for observables
more dependent on the lower number eigenvectors, where the χ2 function is closer to
a quadratic form.
Even though the data sets and definitions of tolerance are different among the
different PDF groups, we will see later in this chapter that the PDF uncertainties at
the LHC are fairly similar. Note that relying on the errors determined from a single PDF
group may be an underestimate of the true PDF uncertainty, as the central results
among the PDF groups can in some cases differ by an amount similar to this one-sigma
error. (See the discussion later in this chapter regarding benchmarking comparisons of
predictions and uncertainties for the LHC.)
Each error PDF results from an excursion along the “+” and “−” directions for each
eigenvector. Consider a variable X; its value using the central PDF for an error set (say
CT14) is given by X0 . Xi+ is the value of that variable using the PDF corresponding
to the “+” direction for eigenvector i and Xi− the value for the variable using the
PDF corresponding to the “−” direction. The excursions are symmetric for the larger
eigenvalues, but may be asymmetric for the more poorly determined directions. In
order to calculate the PDF error for an observable, a Master Equation should be
used:
  \Delta X^+_{\rm max} = \sqrt{\sum_{i=1}^{N}\left[\max\left(X_i^+ - X_0,\; X_i^- - X_0,\; 0\right)\right]^2} \, ,

  \Delta X^-_{\rm max} = \sqrt{\sum_{i=1}^{N}\left[\max\left(X_0 - X_i^+,\; X_0 - X_i^-,\; 0\right)\right]^2} \, .      (6.56)
For the lower-number eigenvectors, if Xi+ − X0 is positive then Xi− − X0 will typically
be negative (or vice versa), and thus it is clear which term is to be included
in each quadratic sum. For the higher number eigenvectors, however, the "+" and
"−" contributions may be in the same direction (see for example eigenvector 17 in
Fig. 6.15). In this case, only the most positive term will be included in the calculation
of ∆X + and the most negative in the calculation of ∆X − . Thus, there may be fewer than
N terms for either the "+" or "−" directions. There are other versions of the Master
Equation in current use but the version listed above is the “official” recommendation
of the authors.
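A direct transcription of the Master Equation into code might look as follows (a sketch under our own naming conventions; for sets delivered at 90% CL, such as CT, the result would still need to be divided by 1.645 to obtain a 68% CL error):

import numpy as np

def hessian_asymmetric_errors(X0, Xplus, Xminus):
    """Asymmetric PDF uncertainties on an observable from a Hessian error set,
    following the Master Equation (6.56).  Xplus/Xminus are arrays of the observable
    evaluated with the '+' and '-' PDF for each eigenvector; X0 uses the central PDF."""
    Xplus, Xminus = np.asarray(Xplus), np.asarray(Xminus)
    zeros = np.zeros_like(Xplus)
    up   = np.sqrt(np.sum(np.maximum.reduce([Xplus - X0, Xminus - X0, zeros])**2))
    down = np.sqrt(np.sum(np.maximum.reduce([X0 - Xplus, X0 - Xminus, zeros])**2))
    return up, down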
There are two things that can happen when new PDFs (eigenvector directions) are
added: a new direction in parameter space can be opened, to which some cross-sections
will be sensitive (an example of this is eigenvector 15 in the CTEQ6.1 error PDF
set, which is sensitive to the high x gluon behaviour and thus influences the high
pT jet cross-section at the TEVATRON and LHC). This particular eigenvector direction
happens to be dominated by a parameter which affects mostly the large x behaviour
of the gluon distribution.
In this case, a smaller parameter space is an underestimate of the true PDF error
since it did not sample a direction important for some physics. In the second case,
adding new eigenvectors does not appreciably open new parameter space and the new
parameters should not contribute much PDF error to most physics processes (although
the error may be redistributed somewhat among the new and old eigenvectors).
In Fig. 6.15, the PDF errors are shown in the “+” and “−” directions for the
20 CTEQ eigenvector directions for predictions for inclusive jet production at the
TEVATRON from the CTEQ6.1 PDFs. The excursions are symmetric for the first 10
eigenvectors but can be asymmetric for the last 10, as they correspond to less well-
determined directions.
Either X0 and Xi± can be calculated separately in a matrix element/Monte Carlo
program (requiring the program to be run 2N + 1 times) or X0 can be calculated with
the program and at the same time the ratio of the PDF luminosities (the product of
the two PDFs at the x values used in the generation of the event) for eigenvector i
(±) to that of the central fit can be calculated and stored. This results in an effective
sample with 2N +1 weights, but identical kinematics, requiring a substantially reduced
amount of time to generate. PDF re-weighting will be discussed later in this chapter.
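A minimal sketch of such a per-event PDF re-weighting (assuming LHAPDF-like objects providing an xfxQ method, and neglecting any change of the αs used in the matrix element) is:

def pdf_weight(pdf_central, pdf_error, id1, x1, id2, x2, Q):
    """Per-event weight for re-evaluating an observable with an error PDF:
    the ratio of PDF luminosities at the x values used to generate the event."""
    num = pdf_error.xfxQ(id1, x1, Q) * pdf_error.xfxQ(id2, x2, Q)
    den = pdf_central.xfxQ(id1, x1, Q) * pdf_central.xfxQ(id2, x2, Q)
    return num / den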
As an example of PDF uncertainties using the Hessian method, the CT10 and
MSTW2008 NLO uncertainties for the up quark and gluon distributions are shown in
Figs. 6.16 and 6.17. While the CT10 and MSTW2008 PDF distributions and uncer-
tainties are reasonably close to each other, some differences are evident, especially at
low and high x.
After the initial diagonalization of the Hessian matrix, it is also possible to diago-
nalize any one chosen function of the fitting parameters while maintaining a diagonal
form for the χ2 function [816]. This additional function could be a particular cross-
section, say for Higgs boson production through gg fusion. It may be that such an
observable is dominated by a few eigenvector directions, something which will
be illuminated by the additional diagonalization. It is also possible to determine the
particular direction in eigenvector space that has the greatest sensitivity to a partic-
ular observable, i.e. the steepest gradient. This will become important when looking
Fig. 6.15 The PDF errors for the CDF inclusive jet cross-section in Run I
for the 20 different eigenvector directions contained in the CTEQ6.1 PDF
error set. The vertical axes show the fractional deviation from the central
prediction and the horizontal axes the jet transverse momentum in GeV.
Reprinted with permission from Ref. [867].
through gg fusion at 8 and 13 TeV using the CT14 PDFs is shown in Fig. 6.19. The
parabolic curve has been determined using the Lagrange Multiplier method, while the
points indicate the results of the Hessian analysis for 90% CL (∆χ2 = 100). The
dashed curve shows the results of applying the Tier-2 penalty (a penalty that prevents
the agreement with any particular experiment from degrading too greatly) when the
χ2 of any experiment starts to degrade seriously.
Table 6.1 is reproduced below from Ref. [489], showing both the PDF and the
PDF+αs (mZ ) uncertainties determined from the Hessian and the Lagrange Multiplier
methods for Higgs boson production through gg fusion at NNLO. The results indicate
both the agreement between the Hessian and Lagrange Multiplier techniques and the
efficacy of scaling the 90% CL Hessian uncertainty by a factor of 1.645 to get the 68%
CL uncertainty.
Fig. 6.17 A comparison of the CT10 and MSTW2008 gluon PDF uncer-
tainty bands at Q2 = 104 GeV2 . The NNPDF2.3 central gluon PDF is also
shown for comparison.
where Nrep is the number of replicas of PDFs in the Monte Carlo ensemble. The
uncertainty for any observable is calculated as the standard deviation of the sample.
  \sigma_F = \left[\frac{N_{\rm rep}}{N_{\rm rep}-1}\left(\langle F[\{q\}]^2\rangle - \langle F[\{q\}]\rangle^2\right)\right]^{1/2}
           = \left[\frac{1}{N_{\rm rep}-1}\sum_{k=1}^{N_{\rm rep}}\left(F[\{q^{(k)}\}] - \langle F[\{q\}]\rangle\right)^2\right]^{1/2} \, .      (6.58)
This equation provides the 1-sigma error on any observable; one advantage of the
Monte Carlo approach is that any confidence-level can be calculated by removing the
appropriate upper and lower PDF outliers. The NNPDF collaboration provides sets of
Nrep =100 and 1000 replicas. For most applications, the smaller replica set is sufficient.
A central set, corresponding to the average of the replicas,

  q^{(0)} \equiv \langle q\rangle = \frac{1}{N_{\rm rep}}\sum_{k=1}^{N_{\rm rep}} q^{(k)} \, ,      (6.59)

is also provided.
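A short sketch of the replica statistics (our own helper, using the percentile prescription mentioned above for confidence levels other than one sigma):

import numpy as np

def replica_statistics(values, cl=68.0):
    """Central value, standard deviation (Eq. (6.58)), and a confidence interval
    for an observable evaluated on all Monte Carlo replicas (values: length N_rep)."""
    values = np.asarray(values)
    central = values.mean()                       # average over replicas, cf. Eq. (6.59)
    sigma = values.std(ddof=1)                    # Eq. (6.58)
    lo, hi = np.percentile(values, [50.0 - cl / 2.0, 50.0 + cl / 2.0])
    return central, sigma, (lo, hi)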
6.3.4 Meta-PDFs
It is possible to re-fit and to re-parameterize, in a common functional form, the error
PDFs from a number of PDF fitting groups. The result is an ensemble of PDFs that
encompasses the uncertainty of all of the PDF error sets included. The ensemble can
also be expanded to cover the combined PDF+αs uncertainties, instead of just the
PDF uncertainties alone. The ensemble can be transformed into a Hessian basis, and
then only a limited number of the (important) eigenvectors can be retained, leading to
a smaller ensemble that provides PDFs corresponding to both the central behaviour
of the group of PDFs (for example, CT14, MMHT14, and NNPDF3.0) and to the full
uncertainty range. Such PDFs are known as meta-PDFs [552] and they can make it
easier to calculate PDF(+αs ) uncertainties for any observable at the LHC. In addition,
by using the technique of data set diagonalization, the number of error PDFs needed
to describe the PDF+αs uncertainties for all Higgs production processes for all LHC
energies can be reduced to 8.
observable. One additional complication with respect to their use in matrix element
programmes is that the parton distributions are used to construct the initial state par-
ton showers through the backward evolution process. The space-like evolution of the
initial state partons is guided by the ratio of parton distribution functions at different
x and Q2 values. Thus the Sudakov form factors in parton shower Monte Carlos will be
constructed using only the central PDF and not with any of the individual error PDFs
and this may lead to some errors for the calculation of the PDF uncertainties of some
observables. However, it was demonstrated in Ref. [578] that the PDF uncertainty for
Sudakov form factors in the kinematic region relevant for the LHC is minimal, and the
weighting technique can be used just as well with parton shower Monte Carlos as with
matrix element programmes.
[Figure: xf(x, Q2) and its PDF uncertainty (in percent) as a function of x; the NNPDF2.3 NLO central PDF is shown for comparison.]
αs (mZ ) by producing PDFs at different fixed αs (mZ ) values. There is now a consensus
to use αs (mZ ) = 0.118 as a central value (basically an approximation/truncation of
the current world value) in global PDF fits, at both NLO and NNLO, and to publish
alternative fits with αs (mZ ) values in intervals of ±0.001 around that central value.
It is expected that the LO value of αs (mZ ) is considerably larger than the NLO value
(0.130 compared to 0.118 for the CTEQ/CT PDFs, for example).
There is a correlation/anti-correlation between the value of αs (mZ ) used in the
global PDF fit and the gluon distribution; whether there is a correlation or anti-
correlation depends on the gluon x range being considered. At low x (less than 0.1),
a decrease in the value of αs (mZ ) results in an increase in the gluon distribution and
vice versa, i.e. there is an anti-correlation. The net impact is to reduce the sensitivity
of cross-sections that depend on both the value of αs (mZ ) and the gluon distribution
in this x range to variations in the value of αs (mZ ). The sensitivity becomes smaller
as the x value approaches 0.1. In the x range from 0.1 to 0.8, there is a correlation
between the value of αs (mZ ) and the gluon distribution, with the correlation becoming
larger as the x value increases.
The diagonalization technique can also be used with respect to the value of αs (mZ );
in fact, it can be shown, using this technique, that, within the quadratic approximation,
the uncertainty in αs (mZ ) is uncorrelated with the PDF uncertainty [714]. Thus the
combined PDF+αs can be calculated by computing the 1 − σ PDF uncertainty with
αs (mZ ) fixed at its central value, and adding in quadrature the 1 − σ uncertainty in
αs (mZ ) (and of course, this can also be done for any other desired confidence level).
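Schematically (with illustrative names; the size of the αs(mZ) variation, e.g. ±0.001 or ±0.0015, is whatever the chosen prescription dictates), this combination reads:

import numpy as np

def pdf_plus_alphas_uncertainty(delta_pdf, x_alphas_up, x_alphas_down):
    """Combine the PDF uncertainty at fixed central alpha_s(mZ) with the alpha_s
    uncertainty in quadrature; x_alphas_up/down are the observable recomputed with
    the PDF sets fitted at alpha_s(mZ) shifted up and down by the chosen amount."""
    delta_alphas = 0.5 * abs(x_alphas_up - x_alphas_down)
    return np.hypot(delta_pdf, delta_alphas)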
X = X0 + ∆X cos θ, (6.60)
Y = Y0 + ∆Y cos(θ + ϕ), (6.61)
where the parameter θ varies between 0 and 2π, X0 ≡ X(a⃗0 ), and Y0 ≡ Y (a⃗0 ). ∆X and
∆Y are the maximal variations δX ≡ X − X0 and δY ≡ Y − Y0 evaluated according
to the Master Equation, and ϕ is the angle between ∇X and ∇Y in the {ai } space,
with

  \cos\varphi = \frac{\vec{\nabla}X\cdot\vec{\nabla}Y}{\Delta X\,\Delta Y}
              = \frac{1}{4\,\Delta X\,\Delta Y}\sum_{i=1}^{N}\left(X_i^{(+)} - X_i^{(-)}\right)\left(Y_i^{(+)} - Y_i^{(-)}\right) \, .      (6.62)
The quantity cos ϕ characterizes whether the PDF degrees of freedom of X and Y
are correlated (cos ϕ ≈ 1), anti-correlated (cos ϕ ≈ −1), or uncorrelated (cos ϕ ≈ 0).
If units for X and Y are rescaled so that ∆X = ∆Y (e.g., ∆X = ∆Y = 1), the
semimajor axis of the tolerance ellipse is directed at an angle π/4 (or 3π/4) with
respect to the ∆X axis for cos ϕ > 0 (or cos ϕ < 0). In these units, the ellipse reduces
to a line for cos ϕ = ±1 and becomes a circle for cos ϕ = 0, as illustrated by Fig. 6.24.
These properties can be found by diagonalizing the equation for the correlation ellipse.
Its semi-minor and semi-major axes (normalized to ∆X = ∆Y ) are
  \{a_{\rm minor}\,,\; a_{\rm major}\} = \frac{\sin\varphi}{\sqrt{1\pm\cos\varphi}} \, .      (6.63)

The eccentricity ε ≡ [1 − (aminor /amajor )2 ]1/2 is therefore approximately equal to |cos ϕ|1/2
as |cos ϕ| → 1.
The ellipse itself is described by
  \left(\frac{\delta X}{\Delta X}\right)^2 + \left(\frac{\delta Y}{\Delta Y}\right)^2
    - 2\,\frac{\delta X}{\Delta X}\,\frac{\delta Y}{\Delta Y}\cos\varphi = \sin^2\varphi \, .      (6.64)
A magnitude of | cos ϕ| close to unity suggests that a precise measurement of X
(constraining δX to be along the dashed line in Fig. 6.24) is likely to constrain tangibly
the uncertainty δY in Y , as the value of Y shall lie within the needle-shaped error
ellipse. Conversely, cos ϕ ≈ 0 implies that the measurement of X is not likely to
constrain δY strongly.9
The values of ∆X, ∆Y, and cos ϕ are also sufficient to estimate the PDF uncertainty
of any function f (X, Y ) of X and Y by relating the gradient of f (X, Y ) to ∂X f ≡
∂f /∂X and ∂Y f ≡ ∂f /∂Y via the chain rule:
  \Delta f = \left|\vec{\nabla}f\right|
    = \sqrt{\left(\Delta X\,\partial_X f\right)^2 + 2\,\Delta X\,\Delta Y\cos\varphi\;\partial_X f\,\partial_Y f + \left(\Delta Y\,\partial_Y f\right)^2} \, .      (6.65)
Fig. 6.25 Contour plots of the correlation cosine between two PDFs, for
the up quark (left) and the gluon (right).
For a function of the form f = X^m/Y^n, for example, the fractional uncertainty follows as

  \frac{\Delta f}{f_0} = \sqrt{\left(m\,\frac{\Delta X}{X_0}\right)^2
    - 2\,m\,n\,\frac{\Delta X}{X_0}\,\frac{\Delta Y}{Y_0}\cos\varphi + \left(n\,\frac{\Delta Y}{Y_0}\right)^2} \, .      (6.66)
For example, consider a simple ratio, f = X/Y . Then ∆f /f0 is suppressed (∆f /f0 ≈
|∆X/X0 − ∆Y /Y0 |) if X and Y are strongly correlated, and it is enhanced (∆f /f0 ≈
∆X/X0 + ∆Y /Y0 ) if X and Y are strongly anticorrelated.
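Eq. (6.62) is straightforward to evaluate once the 2N error-set predictions are available; a minimal sketch (our own naming) is:

import numpy as np

def correlation_cosine(Xplus, Xminus, Yplus, Yminus, dX, dY):
    """cos(phi) between two observables X and Y from a Hessian error set, Eq. (6.62);
    dX and dY are the symmetric uncertainties obtained from the Master Equation."""
    Xp, Xm = np.asarray(Xplus), np.asarray(Xminus)
    Yp, Ym = np.asarray(Yplus), np.asarray(Yminus)
    return np.sum((Xp - Xm) * (Yp - Ym)) / (4.0 * dX * dY)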
As would be true for any estimate provided by the Hessian method, the correlation
angle is inherently approximate. Eq. (6.62) is derived under a number of simplifying
assumptions, notably in the quadratic approximation for the χ2 function within the
tolerance hypersphere, and by using a symmetric finite-difference formula for {∂i X}
that may fail if X is not monotonic. Even with these limitations in mind, the correlation
angle is a convenient measure of the interdependence between quantities of diverse
nature, such as physical cross-sections and parton distributions themselves.
Correlations can be calculated between two PDFs, fa1 (x1 , µ1 ) and fa2 (x2 , µ2 ), at
a scale µ1 = µ2 = 85 GeV. In Fig. 6.25, the self-correlations for the up quark
(left) and the gluon (right) are shown. Light (dark) shades of grey correspond to cos ϕ
close to 1 (−1). Each self-correlation includes a trivial correlation (cos ϕ = 1) when x1
and x2 are approximately the same (along the x1 = x2 diagonal). For the up quark,
this trivial correlation is the only pattern present. The gluon distribution, however,
also shows a strong anti-correlation when one of the x values is large and the other
small. This arises as a consequence of the momentum sum rule.
PDF correlations for physics processes at the LHC will be discussed later in this
chapter.
10 These observations are true in general for comparison of any sets of LO and NLO PDFs.
In most cases, LO PDFs will be used not in fixed order calculations, but in pro-
grammes where the LO matrix elements have been embedded in a parton shower
framework. In the initial state radiation algorithms in these frameworks, shower
partons are emitted at non-zero angles with finite transverse momentum, and not with the
zero kT implicit in the collinear approximation. It might be argued that the resulting
kinematic suppression due to parton showering should be taken into account when
deriving PDFs for explicit use in Monte Carlo programmes. Indeed, there is substan-
tial kinematic suppression for production of a low-mass (10 GeV) object at forward
rapidities due to this effect, but the suppression becomes minimal once the mass rises
to the order of 100 GeV [715].
It has also been argued that, even in such frameworks, NLO PDFs be used. Increasingly, most processes of interest have been included in
NLO parton shower Monte Carlos. Here, the issue of LO PDFs becomes moot, as
NLO PDFs must be used in such programs for consistency with the matrix elements.
As a result, the use of modified LO PDFs has been decreasing.
Fig. 6.28 CT14 NNLO PDFs as a function of x for Q = 2 GeV (left) and
Q =100 GeV (right). Reprinted with permission from Ref. [489].
Differences between NLO and NNLO PDFs are most noticeable at low Q values.11 At higher Q values, though, the differences are reduced. As the PDF
uncertainties are dominated by the experimental errors of the data included in the
PDF fits, the uncertainties at NNLO will be similar to those determined at NLO.
As mentioned previously, the most recent PDF set from the CTEQ-TEA group is
CT14 [489]. The NNLO PDFs from CT14 are shown in Fig. 6.28 for Q values of 2
GeV and 100 GeV.
Differences between the CT14 and the CT10 PDFs for the up quark and the gluon
distributions are shown in Fig. 6.29. The differences are relatively small, within the
error bands of either PDF set, and tend to be the most significant at low x and high
x where the PDFs are the most unconstrained. One of the most important changes is
not easily visible in these plots; that is of the gluon distribution in the x region around
0.01. The changes from CT10 to CT14 are small, but have an impact on the PDF
uncertainty for Higgs boson production at the LHC, as discussed in Section 6.5.1.
As described in Chapter 2, the calculation of the gg fusion cross-section for Higgs
production has been completed to NNNLO. Since it will be quite some time before any
PDFs are produced at this order, the question arises as to the size of the error incurred
when NNLO PDFs are used with such NNNLO calculations. It has been shown [531]
that this error should be much smaller than the difference between the NLO and the
NNLO matrix elements.
11 For example, at low Q, the order αs² evolution in the NNLO PDF suppresses g(x, Q) and increases q(x, Q) relative to NLO.
Fig. 6.29 A comparison of the CT10 and CT14 up quark (left) and gluon
(right) distributions. Reprinted with permission from Ref. [489].
Parton-parton luminosities provide a useful estimate of the size of an event cross-section at the LHC. Below we
define the differential parton-parton luminosity dLij/dŝ dy and its integral dLij/dŝ:

dLij/(dŝ dy) = (1/s) · 1/(1 + δij) · [ fi(x1, µ) fj(x2, µ) + (1 ↔ 2) ] . (6.68)
The prefactor with the Kronecker delta avoids double-counting in case the partons are
identical. The generic parton-model formula

σ = Σ_{i,j} ∫₀¹ dx1 dx2 fi(x1, µ) fj(x2, µ) σ̂ij (6.69)

can then be recast in terms of this luminosity. Fig. 6.30 shows a plot of the luminosity
function integrated over rapidity, dLij/dŝ = ∫ dy dLij/(dŝ dy), at the LHC with
√s = 13 TeV for various parton flavour combinations,
for the CT14 PDFs. The gluon-gluon PDF luminosity dominates at low mass, the
gluon-quark PDF luminosity dominates for masses from 300 GeV to approximately 2 TeV,
and the quark-quark luminosity is largest for all masses above 2 TeV.
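A curve such as those in Fig. 6.30 can be reproduced numerically from Eq. (6.68) by integrating over rapidity at fixed ŝ. The sketch below assumes the LHAPDF Python interface and a locally installed CT14 NNLO grid; the scale choice µ = √ŝ and the integration granularity are illustrative.

import math
import lhapdf  # assumes the LHAPDF Python bindings and the PDF grid are installed

pdf = lhapdf.mkPDF("CT14nnlo", 0)      # central member
sqrt_s = 13000.0                       # LHC centre-of-mass energy in GeV

def dlumi_dshat(shat, pid_i, pid_j, n_y=200):
    """Integrate Eq. (6.68) over rapidity: dL_ij/dshat = int dy dL_ij/(dshat dy)."""
    s = sqrt_s**2
    tau = shat / s
    mu = math.sqrt(shat)               # factorization scale, here mu = sqrt(shat)
    y_max = -0.5 * math.log(tau)
    dy = 2.0 * y_max / n_y
    total = 0.0
    for k in range(n_y):
        y = -y_max + (k + 0.5) * dy
        x1, x2 = math.sqrt(tau) * math.exp(y), math.sqrt(tau) * math.exp(-y)
        # xfxQ returns x*f(x,Q), so the x factors are divided out.
        f1i, f2j = pdf.xfxQ(pid_i, x1, mu) / x1, pdf.xfxQ(pid_j, x2, mu) / x2
        f1j, f2i = pdf.xfxQ(pid_j, x1, mu) / x1, pdf.xfxQ(pid_i, x2, mu) / x2
        total += (f1i * f2j + f1j * f2i) / (1.0 + (pid_i == pid_j)) / s * dy
    return total  # in GeV^-2

print("gg luminosity at sqrt(shat) = 125 GeV:", dlumi_dshat(125.0**2, 21, 21))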
The parton luminosities obtained by the different global fitting groups can differ, for example
due to the data sets included, the theoretical treatments, the flavour schemes used, etc.
However, as time has progressed, the tendency has been for the three groups to have
results that are in good agreement with each other.
Consider for example the quark-antiquark and gluon-gluon PDF luminosity uncer-
tainties as a function of mass, at NNLO, for CT14, MMHT2014, and NNPDF3.0 (and
the prior, to be defined in the following) for an LHC centre-of-mass energy of 13 TeV,
shown in Fig. 6.31 from Ref. [196]. The central values and the size of the uncertainties
can vary among the three PDF groups at low and at high mass, but in the precision
mass region (say from 50 to 500 GeV), both the central values and the uncertainties are
in remarkably good agreement with each other (especially so for the gluon-gluon case).
This was not the situation for the gluon-gluon luminosity for the previous round of
PDF fitting (CT10, MSTW08 and NNPDF2.3), where the envelope of the uncertainty
bands for the 3 PDF groups yielded a PDF uncertainty for Higgs boson production
through gg fusion that was about a factor of 2.5 larger than the PDF uncertainty
band for any of the individual PDFs [196].13 The resultant PDF uncertainty was sim-
ilar in size to the NNLO scale uncertainty for the cross-section [618]. As the scale
uncertainty at NNNLO has shrunk to the order of 2-3% [156], the use of the older
generation of PDFs would have left the PDF(+αs (mZ )) uncertainty as the largest
source of uncertainty for that cross-section, and implicitly for the determination of the
Higgs couplings and other parameters that depend on the absolute knowledge of the
13 The CT14 gg PDF luminosity increased by about 1% in the Higgs region at 13 TeV (compared
to the older generation CT10), the MMHT2014 luminosity decreased by about 0.5%, and the NNPDF3.0 PDF
luminosity decreased by about 2-2.5%.
Fig. 6.31 A comparison of the PDF luminosities for the prior, CT14,
MMHT2014 and NNPDF3.0 is shown for the gg initial state (left) and
the qq̄ initial state (right) for a centre-of-mass energy of 13 TeV. Reprinted
with permission from Ref. [295].
Fig. 6.32 A comparison of the PDF luminosities for the prior, the 30 PDF
set and the 100 PDF set is shown for the gg initial state (left) and the qq̄
initial state (right) for a centre-of-mass energy of 13 TeV. Reprinted with
permission from Ref. [295].
15 The NNPDF3.0 set is already in this formulation and the CT14 and MMHT2014 Hessian sets
can be converted to such.
16 The low-mass difference is basically due to low-mass final states produced at rapidities beyond
the acceptance of the LHC detectors, which were not included in the construction of the 30 PDF set.
Table 6.2 The correlation coefficients between various Higgs production cross-sections at
13 TeV. In each case, the PDF4LHC15 NNLO prior set is compared to the Monte Carlo
and to the two Hessian reduced sets, and to the results from the three individual sets, CT14,
MMHT14 and NNPDF3.0.

                               correlation coefficient
PDF Set                  tt̄,Htt̄   tt̄,hW   tt̄,hZ   ggh,htt̄   ggh,hW   ggh,hZ
PDF4LHC15 nnlo prior      0.87    -0.23   -0.34    -0.13     -0.01    -0.17
PDF4LHC15 nnlo mc         0.87    -0.27   -0.35    -0.10      0.07    -0.01
PDF4LHC15 nnlo 100        0.87    -0.24   -0.34    -0.13     -0.02    -0.17
PDF4LHC15 nnlo 30         0.87    -0.27   -0.43    -0.13     -0.04    -0.23
CT14                      0.09    -0.32   -0.44    -0.26     -0.03    -0.18
MMHT14                    0.90    -0.22   -0.52     0.08     -0.18    -0.33
NNPDF3.0                  0.90    -0.17   -0.21     0.18      0.52     0.49
(right), is shown in Fig. 6.33 [441]. From these plots, the level of agreement of the
predictions from the PDF sets provided by the PDF4LHC group with each other, and
with the current and previous generation of global PDF sets, can be determined.
The correlation coefficients between various Higgs boson production processes at
13 TeV are shown in Table 6.2 [295]. Note the spread in correlation coefficients among
the three global PDFs. No more than single-digit accuracy should be ascribed to these
correlation numbers.
A small number of error PDFs can be very useful in instances where the PDF
uncertainties are used as nuisance parameters. It is possible, for example, to reduce
the number of error PDFs needed to describe Higgs physics and its backgrounds to a
much smaller number, of the order of 7, using the METAPDF technique, without significant
loss of precision [552]. Other techniques are available which also reduce the number of
error PDFs needed [333].
17 http://hepdata.cedar.ac.uk/pdf/pdf3.html
18 http://apfel.mi.infn.it
Such a step is in fact unnecessary, and most programs have the ability to use PDF event
re-weighting to substitute a new PDF for the PDF used in the original generation.
For each event generated with the central PDF from the set, PDF weights for the
error PDFs can also be determined. Only one Monte Carlo event sample is generated,
but 2N + 1 (e.g. 57 for CT14) PDF weights are obtained using

W_n^0 = 1 ,    W_n^i = [ f(x1, Q; Si) f(x2, Q; Si) ] / [ f(x1, Q; S0) f(x2, Q; S0) ] . (6.71)
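In practice these weights are trivial to evaluate with LHAPDF, once x1, x2, Q and the flavours of the two incoming partons have been stored for each event. The following is a minimal sketch, assuming the LHAPDF Python interface and using CT14 NNLO as an example of a set with 2N + 1 = 57 members; the event record used here is a placeholder.

import lhapdf  # assumes the LHAPDF Python bindings and grids are installed

members = lhapdf.mkPDFs("CT14nnlo")    # S_0 plus the 2N error members

def pdf_weights(pid1, x1, pid2, x2, Q):
    """Eq. (6.71): one weight per member; the x factors in xfxQ cancel in the ratio."""
    central = members[0].xfxQ(pid1, x1, Q) * members[0].xfxQ(pid2, x2, Q)
    return [m.xfxQ(pid1, x1, Q) * m.xfxQ(pid2, x2, Q) / central for m in members]

# Placeholder event record: (pid1, x1, pid2, x2, Q) as stored at generation time.
weights = pdf_weights(21, 0.01, 21, 0.02, 125.0)
print(len(weights), weights[0])        # 57 weights; weights[0] == 1 by construction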
6.7 Summary
An accurate knowledge of parton distribution functions is crucial for precision LHC
phenomenology. In this chapter, the techniques for the determination of PDFs were
described, as well as techniques for the determination of the uncertainties of the PDFs.
Global PDF fits involve data from deep-inelastic scattering, Drell-Yan and inclusive
jet production, with increasing contributions from single photon and top production.
Previous generations of fits used only data from the TEVATRON, HERA, and fixed target
experiments, but the copious data from Run 1 (and now Run 2) at the LHC are starting
to have a significant impact.
The evolution of PDFs using the DGLAP equation was revisited in more detail. Evo-
lution is the great equalizer, as differences among PDFs from different groups, and the
sizes of PDF uncertainties, decrease with increasing Q2 . In general, there is a larger
difference between PDFs determined with a leading order framework and a next-to-
leading order framework, with a much smaller difference when going from NLO to
NNLO. LHC predictions carried out with LO PDFs can differ greatly both in normal-
ization and shape from those carried out in a purely NLO framework. NLO shapes
can often be recovered with LO matrix elements by using NLO PDFs.
Parton PDF luminosities were defined and a new framework (PDF4LHC15) for
combining PDFs from the three global PDF fitting groups was described. The PDF
uncertainties for the three PDF groups can be summarized with a limited number of
error PDFs, as low as 30. Lastly, a number of useful PDF tools were described.
PDFs will continue to improve as more data from the LHC is included, and as more
crucial processes are calculated at NNLO. As for the theory, and for the LHC data,
what is presented here is just a snapshot of a rapidly developing field.
7
Soft QCD
When looking at event displays at hadron colliders, it becomes apparent that the
perturbative picture developed so far does not yet cover all aspects of what can be
seen. First of all, there are many events with only a few — if any — particles hitting
the central regions of the detector, either as charged tracks or as energy deposits in the
calorimeters. Quite often these particles are relatively soft, with transverse momenta
at around or below 1 GeV. At the beginning of this chapter, in Section 7.1, the soft
inclusive physics underlying such events will briefly be discussed. Building on a short
presentation of the ideas behind typical models for soft strong interactions, which are
quite often based on Pomeron and Regge pole physics, total hadronic cross-sections and
their parametrizations will be introduced. This will extend to elastic and diffractive
processes, which populate the forward regions of the detector, usually with low-p⊥
particles.
Increasing the energy scales, the next section, Section 7.2 will focus on multiple
parton scattering. This phenomenon is closely related to the fact that hadrons are
extended objects, containing many partons. In contrast to the usual factorization the-
orems underpinning the perturbative machinery discussed at great length in the first
chapters of this book, this may lead to more than one parton pair interacting with
each other. With increasing available energies, secondary parton–parton scatters may
start populating regions of phase space usually considered to be mainly driven by per-
turbative physics; this is true in particular for the multiple production of hard objects,
such as gauge bosons or jets. But even without such relatively spectacular manifesta-
tions of this phenomenon, multiple parton scatterings contribute to the overall particle
yield in collisions, to the overall energies of jets, etc.. This manifestation of multiple
scattering is often called the “Underlying Event”. Models describing this part of the
overall event structure will also be introduced in Section 7.2.
In the penultimate section of this chapter, Section 7.3, some light will be shed on
the transition of the partons produced in the hard interaction, the parton showering,
and the underlying event into the observable hadrons. While to date this transition can
quantitatively be described by phenomenological models only, some qualitative ideas
underlying their construction could be tested, and this interplay will be discussed.
Finally, Section 7.4 rounds off this chapter with a very brief description of the
3. Inelastic interactions, where the initial state gets seriously distorted, typically due
to sufficiently hard QCD interactions leading to a number of final-state particles
not only in the forward direction but also in the central detector region.
By and large, the total, inelastic, elastic, and diffractive cross-sections increase with
energy, where typically

σtot > σinel > σel > σSD > σDD > σCXP , (7.1)

with all of them being of the order of 1-100 mb at the hadronic centre-of-mass energies
accessed so far.
Here l labels the angular momentum and the polar angle θ can be expressed through
the Mandelstam variables s and t as
cos θ = 1 + 2t/s . (7.3)
The al in the equation above are called partial wave amplitudes, and, correspond-
ingly, this expansion is also called partial wave expansion. It has already been
encountered before, in Section 5.2.5, where also the connection of amplitudes and
cross-sections through the optical theorem has been discussed in detail.
By continuing the angular momentum l to the complex plane, it is possible to
rewrite the summation in the expansion above as an integral, namely
Aab→cd(s, t) = (1/2i) ∮_C dl (2l + 1) [ P(l, 1 + 2t/s) / sin(πl) ] a(l, t) . (7.4)
These poles come with factors (−1)l from the sine, thus violating the inequality along
the imaginary axis. In order to guarantee convergence for infinitely large l, two analytic
functions a(η=±) (l, t) are introduced, such that the integral is separately finite for
either of them. The η = ±1 are called “signatures” of the corresponding partial
waves, and
Aab→cd(s, t) = (1/2i) ∮_C dl (2l + 1) Σ_{η=±} [ (η + e^(−iπl)) / (2 sin(πl)) ] P(l, 1 + 2t/s) a^(η)(l, t) . (7.6)
In a next step, the contour C surrounding the positive real l axis is closed in the
complex-l plane by adding a half-circle and the line −C′, running parallel to the
imaginary l axis at Re l = −1/2, thus ranging from l = −1/2 + i∞ to l = −1/2 − i∞.
The overall result equals the sum of the residues of all poles inside this closed integration
path, labelled by nη according to their signature. Assuming the integral to vanish for large
absolute values of l, |l| → ∞, this leaves only the poles and the integration along C′.
The various pieces of this manipulation are sketched in Fig. 7.1. After it, the amplitude
reads
Aab→cd(s, t) = (1/2i) ∫_{−1/2−i∞}^{−1/2+i∞} dl (2l + 1) Σ_{η=±} [ (η + e^(−iπl)) / (2 sin(πl)) ] P(l, 1 + 2t/s) a^(η)(l, t)

             + Σ_{η=±} Σ_{j∈nη} [ (η + e^(−iπαj(t))) / (2 sin(παj(t))) ] P(αj(t), 1 + 2t/s) βj(t) . (7.7)
The new poles reside at positions αj (t) in the complex-l plane, replacing the values l
in the “regular” real poles and contribute with a strength according to their partial
wave amplitudes, denoted as βj (t). They are called Regge poles with even and odd
signatures, depending on the value η = ±. There may be additional, more complicated
analytic structures in the complex-l plane, like branch cuts, etc., but they are beyond
the scope of this very brief introduction.
where the γ are the “couplings” of the Reggeon to the incoming and outgoing par-
ticles, cf. Fig. 7.2. If α(t) assumes an integer value, sin[α(t)π] will vanish, and the
amplitude will develop a pole. For positive integers this can be understood as the
resonant exchange of a physical particle, for negative integers the Γ function will lead
to a cancellation of the contributions. This is to be compared with the exchange of
particles with mass m and spin J = α(m2 ) for positive t (or in the s channel), which
also exhibits a resonant structure at 0 < t, s = m2 . This observation led Chew
and Frautschi to plot the spins of hadrons against their masses squared, discovering
straight lines, as shown in Fig. 7.3. Such graphs are also known as Chew–Frautschi
plots, and they provide a motivation to parameterize the Regge trajectories α(t)
as
α(t) = α(0) + α′ · t (7.10)
for all values of t, and in particular for the physical region of negative t. For a more
detailed discussion, the reader is referred to the literature, for example [548].
The Reggeon amplitude will be equal to the forward elastic amplitude for t → 0,
if a = c and b = d. In this case, the optical theorem expressed in Eq. (5.131) links the
total cross-section to such an amplitude, and therefore
In the specific case of the ρ-ω trajectory, which is related to processes where isospin
is exchanged (∆I = 1 processes), the exponent of s is smaller than 0, and there-
fore the cross-section decreases with increasing s. This in fact has been observed
experimentally, and it is in line with the Pomeranchuk theorem asserting that
the cross-sections for all scattering processes involving any charge exchange vanish
asymptotically [790, 814]. Conversely, at asymptotically high energies, processes with
the exchange of the quantum numbers of the vacuum dominate the cross-section [530].
In fact, all experiments to-date exhibit a total hadronic scattering cross-section that
increases slowly with the centre-of-mass energy of the hadronic collision. If this
behaviour is attributed to a single Regge pole, then that pole must carry the quantum numbers of the
Fig. 7.3 The Chew–Frautschi plot of the ρ-ω trajectory, including a fit to
the linear form of α(t).
vacuum: this specific Regge trajectory is called the pomeron or Pomeranchuk pole.
The increase of the cross-section with energy means that its intercept αP (0) > 1.
At this point, it is important to stress that reggeons in general and the pomeron in
particular are, per se, poles or cuts in the complex plane of the scattering amplitude,
and not particles. The identification of reggeon exchange with the exchange of a tower
of physical particles is not necessarily a coincidence, since physical particles can and
will be exchanged in scattering processes, but this relation is not bi-directional: there
may be poles/cuts that cannot be identified with known physical particles. Incidentally,
the pomeron is a prime example of this: it cannot be related to any known physical
particle.
To rephrase this in a different way: the pomeron is not a particle and it thus actually
has a nature different from Regge trajectories like the one in the example above, the
ρ-ω trajectory, for two reasons. First of all, to-date, there are no strongly interacting
particles with integer spins that could serve as manifestations of the trajectory for
t > 0 or, stated differently, as resonances in s-channel scattering. This relates to the
fact that in a picture inspired by perturbative QCD, the pomeron is thought of as
the exchange of gluonic degrees of freedom, arranged in a color-singlet state. The very
existence of purely gluonic bound states, essentially hadrons made of gluons only, also
known as glueballs, remains a subject of speculation. Secondly, in the QCD picture,
however, the exchange of gluons does not lead to a simple pole, but rather to a branch
cut. This indicates that a simple particle interpretation like for other Reggeons is not
obvious: there is just no unique relation between mass and spin, a hallmark of “proper”
particles.
Fig. 7.4 The total pp (blue) and pp̄ (red) cross-sections compared to the
simple pomeron/reggeon fits of Eq. (7.17)
where the exponent η related to the reggeon and the normalization σR^(pp,pp̄) are also
fitted by [485], resulting in

η = 0.4525 and σR = 56.08 mb for pp, 98.39 mb for pp̄ . (7.17)
The comparison of this fit with data taken from the Review of Particle Physics by the
Particle Data Group [799] is exhibited in Fig. 7.4.
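To make the structure of such fits concrete, the sketch below evaluates a two-power pomeron-plus-reggeon form, σtot(s) = σP (s/GeV²)^ε + σR (s/GeV²)^(−η), using the reggeon values of Eq. (7.17). The pomeron normalization σP and exponent ε are not quoted above and are set here to purely illustrative placeholder values.

def sigma_tot(sqrt_s_gev, sigma_p, eps, sigma_r, eta=0.4525):
    """Two-power fit to the total cross-section; returns a value in mb."""
    s = sqrt_s_gev**2            # s in GeV^2, so s/GeV^2 is just this number
    return sigma_p * s**eps + sigma_r * s**(-eta)

sigma_r_pp, sigma_r_ppbar = 56.08, 98.39   # reggeon normalizations, Eq. (7.17)
sigma_p, eps = 22.0, 0.08                  # placeholder pomeron parameters

for sqrt_s in (23.5, 546.0, 1800.0, 7000.0, 13000.0):
    print(sqrt_s,
          sigma_tot(sqrt_s, sigma_p, eps, sigma_r_pp),
          sigma_tot(sqrt_s, sigma_p, eps, sigma_r_ppbar))

The output illustrates the two features discussed in the text: the reggeon term dies away quickly with energy, so that the pp and pp̄ cross-sections converge, while the pomeron term drives a slow rise of the total cross-section.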
Another addition that is frequently made is the inclusion of a “hard pomeron”,
which reflects the fact that the pomeron as obtained from perturbative QCD would
yield a completely different — and larger — intercept αP − 1 ≈ 0.5 at leading order.
This value is at odds with data, which would favour a much smaller intercept for the
hard pomeron of about αP −1 ≈ 0.25, more in line with higher-order results. However,
there is some evidence of the hard pomeron in the structure function F2 and in the
interaction pattern of more exclusive processes.
Finally, note that similar fitting strategies have also been applied to different reac-
tions such as πp or Kp–scattering. It is interesting to realize that the scaling behaviour,
i.e. the exponents in, say, the simple fit of Eq. (7.12) are identical and that only the
normalization changes by about a factor of 2/3.1 This is the additive quark rule,
which was crucial in establishing the pomeron picture, see e.g. [717].
There is another important theoretical property of the total cross-section: asymp-
totically it cannot possibly increase faster than log2 s, with s the centre-of-mass energy
squared of the incident particles. This is the Froissart bound [547], a manifestation of
the unitarity requirement underpinning every reasonable field theory, supplemented
with the idea that the S-matrix is analytic. Ultimately, this bound will have to kick
in and thereby modify the relatively simple fits above.
Pictorially speaking, the Froissart bound ensures that the proton behaves asymp-
totically like a “black disk”. In partonic language this means that the parton density
is limited, thus limiting or counter-balancing the amount of parton creation at larger
scales as driven by the DGLAP equations, Eq. (2.31). In practical terms this means
that at large enough scales, there must emerge non-linear terms in the DGLAP equa-
tions, which account for parton recombination effects at large densities.
Denote by q⃗ the three-vector of the momentum transfer, such that in the high-energy
limit

t = −q⃗² = −q⃗⊥² . (7.19)

In this limit, the elastic scattering amplitude and, correspondingly, its Fourier transform
are purely imaginary. This allows it to be rewritten as

a(s, B⃗⊥) = (1/2i) [ exp(−Ω(s, B⃗⊥)/2) − 1 ] , (7.20)
where the eikonal function or opacity Ω(s, B⃗⊥) has been introduced. The trick
is now to identify the Regge parameterization with the eikonal function rather than
directly with the amplitude,

Ω(s, B⃗⊥) ∝ ΩP · (s/GeV²)^ε + ΩR · (s/GeV²)^(−η) + ... . (7.21)
1 In fact, [485] finds a factor of 0.63 and relates this small deviation from 2/3 to the radius of the
pion. For the Kp total cross-section the factor is even less, which may reflect an even smaller kaon
radius, the fact that the pomeron couples differently to strange quarks, or merely the somewhat worse
quality of the data entering the fit. Also, of course, Reggeon trajectories containing strangeness may
start playing a role.
As a consequence, the cross-section will remain finite, as the exponential of Eq. (7.20)
will never exceed unity — the hadronic disc will therefore never become blacker than
black.
The relationship between the total cross-section and the elastic scattering ampli-
tude gives rise to a number of further relations. For example, expressed through the
eikonal, the total, elastic and inelastic cross-sections read
σtot(s) = (1/s) Im T(s, t = 0) = 2 ∫ d²B⊥ [ 1 − exp(−Ω(s, B⃗⊥)/2) ]

σel(s) = 4 ∫ d²B⊥ |a(s, B⃗⊥)|² = ∫ d²B⊥ [ 1 − exp(−Ω(s, B⃗⊥)/2) ]²

σinel(s) = σtot(s) − σel(s) = ∫ d²B⊥ [ 1 − exp(−Ω(s, B⃗⊥)) ] . (7.22)
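The unitarity relations in Eq. (7.22) are easy to verify numerically for any given opacity. The following is a minimal sketch, assuming a single-channel Gaussian opacity Ω(s, B⊥) = Ω₀ exp(−B⊥²/2R²) with purely illustrative parameters (R in fm, so the cross-sections come out in fm², where 1 fm² = 10 mb).

import math

def eikonal_xsecs(omega0, R, n_b=4000, b_max_factor=10.0):
    """Numerically integrate Eq. (7.22) for a Gaussian opacity."""
    b_max = b_max_factor * R
    db = b_max / n_b
    sig_tot = sig_el = sig_inel = 0.0
    for k in range(n_b):
        b = (k + 0.5) * db
        omega = omega0 * math.exp(-b * b / (2.0 * R * R))
        w = 2.0 * math.pi * b * db                 # d^2B_perp = 2 pi b db
        sig_tot  += w * 2.0 * (1.0 - math.exp(-omega / 2.0))
        sig_el   += w * (1.0 - math.exp(-omega / 2.0))**2
        sig_inel += w * (1.0 - math.exp(-omega))
    return sig_tot, sig_el, sig_inel

tot, el, inel = eikonal_xsecs(omega0=2.0, R=0.7)
print(tot, el, inel, abs(tot - el - inel))         # the last number should be ~0

Increasing Ω₀ in this sketch pushes the elastic fraction σel/σtot towards its black-disc value of 1/2, in line with the statement that the hadronic disc never becomes blacker than black.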
7.1.4 Diffraction
Assuming both the Good–Walker states and the physical ones to be orthogonal and
normalized,
⟨φi|φk⟩ = δik and Σ_{i=1}^N |αji|² = 1 , (7.25)
implies that the matrix formed by the αji is unitary. The elastic amplitude of the
incident particle |Ψi = |ψ1 i is therefore given by
⟨Ψ| T̂ |Ψ⟩ = Σ_i |α1i|² Ti = ⟨T̂⟩ , (7.26)
the average over the diffractive eigenstates. Here it has been used that the scattering
operator T̂ is diagonal in the basis spanned by the Good–Walker states. Therefore the
elastic cross-section, given by the amplitude squared, is proportional to the square of this average, ⟨T̂⟩².
In contrast, the amplitude for the diffractive production of any other state |ψ_{k≠1}⟩
reads

⟨ψk| T̂ |Ψ⟩ = Σ_i α1i α*ik Ti , (7.28)
To have the diffractive component only, the elastic component must be subtracted.
The interpretation of this is that the cross-section for the transition to any excited
state, for diffraction, is given by the fluctuations
σdiff. exc. ∝ ⟨T̂²⟩ − ⟨T̂⟩² . (7.30)
Introducing such diffractive eigenstates implies that the opacity Ω depends on the
eigenstates that scatter. The cross-sections of Eq. (7.22) must be expressed in terms
of the now eigenstate-dependent opacities Ωik and the expansion coefficients αi and
αk of the incident particles:
σtot(Y) = 2 ∫ d²B⊥ Σ_{i,k} |αi|² |αk|² [ 1 − exp(−Ωik(Y, B⊥)/2) ]

σel(Y) = ∫ d²B⊥ Σ_{i,k} |αi|² |αk|² [ 1 − exp(−Ωik(Y, B⊥)/2) ]²

σinel(Y) = ∫ d²B⊥ Σ_{i,k} |αi|² |αk|² [ 1 − exp(−Ωik(Y, B⊥)) ] . (7.31)
The c.m.-energy squared s of the incident hadrons has been replaced by a rapidity
Y = log(s/m²had) . (7.32)
for the combination of elastic scattering and diffraction of incident beam particle 1
and similar for a combination of elastic scattering and incident particle 2.
2 The pomeron intercept in this model is given by αP (t) = 1 leading to a constant cross-section.
Fig. 7.5 The relationship between pomeron amplitude and total cross–
section. The vertical dashed line indicates either that the imaginary part of
the amplitude has to be taken, or it symbolizes a sum over all final states,
n.
Fig. 7.6 The Low–Nussinov pomeron: in this model the pomeron is asso-
ciated with the exchange of a single gluon. In the sketch, the thick blobs
represent the gluons interacting with any of the valence quarks of the in-
coming hadrons.
Fig. 7.8 High-mass single diffraction (left) and central exclusive produc-
tion (right). Note the occurrence of a triple-pomeron vertex λ on the left
plot, which plays an important role also in modelling the recombination of
partons.
two gluons forming a colour singlet on the amplitude level — the pomeron thus is not
cut in such cases, see Fig. 7.7 for the case of low-mass diffraction. High-mass diffraction
or central exclusive production processes can thus be thought of as a combination of
cut and uncut pomerons, as exhibited in Fig. 7.8. In this case, a triple-pomeron vertex
appears.
It is interesting to note, however, that in some phenomenological models pomerons
are thought of as objects with particle-like properties, like, e.g., an internal structure
that could be resolved in a way similar to that of “usual” hadrons. In particular, in
these models processes such as single diffraction or central-exclusive production are
identified as the t-channel exchange of pomerons as physical particles, which are then
subjected to a hard interaction with a quark or gluon or with each other. Contact to
QCD is then made by defining an (equivalent) pomeron flux or similar, accompanying
the incident hadrons, and a pomeron structure function, effectively parton distribution
functions for these objects. While to some degree these models seem to work, they are
of course essentially nothing but effective, albeit simplistic, parameterizations of a
much more complicated mechanism; the basic problem with them is that the pomeron
is just not a particle. In fact, in perturbative QCD the pomeron is not a simple pole,
but rather a branch cut in the complex plane, which in turn renders any simple particle interpretation questionable.
First of all, and most importantly, quantum numbers such as momentum, flavour,
and colour must be conserved. For the latter, the limit of infinitely many colours is
usually employed, which ensures that for every colour there is a unique colour partner,
carrying the anti-colour. This assumption impacts on both the modelling of the beam
remnants and the hadronization, where the latter is driven by the colour-singlet
structures formed in previous stages of the event simulation.
To illustrate this while keeping things as simple as possible, imagine an event at
the LHC, where a W boson decaying into a lepton–neutrino pair was created, ud¯ →
W + → `+ ν. Further assume that the quarks did not undergo any parton showering
or that they radiated only gluons. Then, the two quarks, the u and the d¯ must be
extracted from the incident protons. Knowing their Bjorken-x at the cut-off scale of
the initial-state parton shower defines how much momentum is left for the proton
remnants, which will consist of other partons. These remnant partons will in turn also
compensate for the flavour and colour degrees of freedom, ensuring that these quantum
numbers are also conserved. In the case of the u quark extracted from a proton this is
straightforward: naively, the protons have |uudi as valence quark flavours. Extracting
a u therefore just leaves ud as the flavours of the corresponding remnant. Using the
notion of diquarks as carriers of baryon number, discussed in more detail in the context
of hadronization models below, cf. Section 7.3.1, this remnant will therefore consist
of a (ud) diquark. It will be assigned the anti-colour of the u quark colour and will
take the full remaining momentum, thereby fulfilling the requirement that the parton
configuration replacing the proton is colour-neutral and carries all of its momentum.
A quick comment is in order here. Frequently in Monte Carlo simulations there are
cases where the three constituent quarks have to be distributed over a diquark and
a quark. The most naive way of achieving this would consist in merely giving each
combination the same probability — in such a case the only small subtlety is that the
diquarks come as spin-0 and some of them possibly also in spin-1 states. Alternatively,
in cases where the original |uudi configuration of a proton is still available, one could
use the proton wave-function in terms of quarks and diquarks [727],
|p⟩ = |uud⟩ = (1/√2) |u⟩ |(ud)₀⟩ + (1/√6) |u⟩ |(ud)₁⟩ + (1/√3) |d⟩ |(uu)₁⟩ . (7.35)
This is also how the more intricate case of the incident d̄ would be handled — extracting
a d̄ from |uud⟩ leaves |uud + d⟩ as the flavour content of the proton, which must then
be decomposed into one diquark, carrying the baryon number, and two quarks. One
of the quarks must carry the colour matching the d̄'s colour, while the other quark
and the diquark will form another colour singlet. Assuming the most likely outcome
for the flavour structure according to Eq. (7.35), a (ud)₀ diquark and a u quark
will be formed in addition to the additional, flavour-compensating d. This leaves the
task of distributing colours, answering the question of whether the u and the d̄ or the d and
the d̄ form a colour singlet. Again, different solutions present themselves, ranging from
equal probabilities to a picture where the dd̄ stem from a gluon splitting, thus forming
a colour octet. In this picture, the d and the (ud)₀ and the u and the d̄ are colour
singlets.
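A minimal sketch of how such a remnant split could be sampled from the wave-function in Eq. (7.35): the squared coefficients 1/2, 1/6 and 1/3 serve as probabilities for the three quark–diquark configurations. The function and label names here are illustrative only.

import random

# Quark-diquark decompositions of the proton with probabilities given by the
# squared coefficients in Eq. (7.35): 1/2 + 1/6 + 1/3 = 1.
PROTON_SPLITS = [
    (("u", "(ud)_0"), 1.0 / 2.0),
    (("u", "(ud)_1"), 1.0 / 6.0),
    (("d", "(uu)_1"), 1.0 / 3.0),
]

def sample_proton_split(rng=random):
    """Pick a (quark, diquark) pair according to the wave-function weights."""
    r, acc = rng.random(), 0.0
    for split, prob in PROTON_SPLITS:
        acc += prob
        if r < acc:
            return split
    return PROTON_SPLITS[-1][0]

print([sample_proton_split() for _ in range(5)])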
Replacing the u and d¯ quarks in the simple example above with, say, sc̄ → W − does
not alter the picture dramatically. In fact, following the initial-state parton shower to
lower and lower scales, at some point the charm threshold is crossed and as a result
the c̄ quark will have become a gluon. In such a case, say an s quark and a gluon
will be extracted from the incident protons. The first proton remnant will consist of
the flavours |uud + s̄⟩, translating into one of the quarks forming a singlet with either
the diquark or the s̄, and the respective left-over s̄ or diquark carrying the colour of
the s quark. Assuming that the ss̄-pair had emerged from the splitting of a fictitious
gluon below the scale of the dissociation will fix the colours such that the s quark will
form a singlet with the diquark and the remaining quark will form a singlet with the
s̄. For the proton, where the gluon is extracted, the structure is simpler. The flavour
content of the proton remnant will still be |uudi. The gluon carries a colour and an
anti-colour, which will be compensated by an corresponding anti-colour assigned to
the diquark and a corresponding colour assigned to the quark. The quark therefore
will form a singlet with the colour-partner of the s quark in the other proton, while
the diquark will be colour-connected to the c quark emerging from the g → cc̄ splitting
that occurred in the initial-state parton shower. For a pictorial representation of the
two examples discussed, cf. Fig. 7.9.
The transverse momenta of the W bosons — produced in ud̄ → W⁺ and sc̄ → W⁻ — are entirely given by the parton shower. In
the first example, where the relatively unlikely case of no emission was assumed, the
W + boson therefore would have zero transverse momentum. Events like this therefore
would lead to a visible spike of the p⊥ distribution of the W boson, or of Drell–Yan
pairs in the case of q q̄ → ``¯ events, at p⊥ = 0 GeV. This of course is not what is
being seen experimentally, where the p⊥ distribution apporaches zero for vanishing
transverse momentum, and then increases from there until it reaches the Sudakov
peak, which in the case of vector bosons at the LHC is located at about 5 GeV.
Therefore, there must be some source of relatively small transverse momentum for
such systems, beyond the parton shower.
It is quite straightforward to assume some kind of “Fermi motion” of the partons
inside the incident hadrons. In the case of collisions with hadrons in the initial state,
this would manifest itself as some additional intrinsic or primordial transverse
momentum (intrinsic or primordial k⊥ ) that the beam partons assume in the
break-up of the hadron.
This presents another non-perturbative effect, similar to the soft form factor of
QT resummation first encountered in Eq. (2.172) and further discussed in Chapter 5.
And similar to the parameterizations used there, it is customary to employ some rela-
tively simple form for the generation of the intrinsic k⊥ , like, e.g., a simple Gaussian,
supplemented with a cut-off to prevent the generation of too large transverse mo-
menta. The parameters of such functions must then be fitted to data, usually the low
transverse momentum region of the lepton-pair in Drell–Yan processes.
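A minimal sketch of such an intrinsic-k⊥ model: a two-dimensional Gaussian of width σ, supplemented by a hard cut-off. In a real simulation both parameters would be tuned to the low-p⊥ Drell–Yan data mentioned above; the numbers used here are placeholders.

import math
import random

def sample_intrinsic_kT(sigma=0.8, k_max=2.5, rng=random):
    """Draw (kx, ky) from a 2D Gaussian of width sigma (GeV), cut off at k_max."""
    while True:
        # |kT| from the radial distribution of a 2D Gaussian, by inverting its CDF.
        kT = sigma * math.sqrt(-2.0 * math.log(1.0 - rng.random()))
        if kT < k_max:
            phi = 2.0 * math.pi * rng.random()
            return kT * math.cos(phi), kT * math.sin(phi)

kx, ky = sample_intrinsic_kT()
print(math.hypot(kx, ky))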
In the dissociation of the second proton into partons, its momentum also has to
be distributed. This is yet another issue where no first principles are available and
simple ideas are invoked to guide the modelling. One of these ideas would be to select
momenta — or, analogously, the Bjorken-x — of the quark and gluon partons emerging
in the hadron break-up according to the PDF at the small parton shower cut-off scale
O (1 GeV). The remaining momentum would be associated with the diquark degrees
of freedom.
Even more sophisticated models can be devised for the break-up of the incident
hadrons, improving the way flavours, momenta, or colours are distributed over the
outgoing quarks, see for example [855]. But any model will become more involved
when the underlying event is added to the simulation.
In a relatively restrictive definition, the underlying event consists of all contributions
associated with the additional scatters in multiple parton scattering, their fragmentation,
and effects coming from the remnants.
It is remarkable that, even with this relatively restrictive definition, the activity
attributed to the underlying event is significantly higher than in soft collisions, usually
associated with minimum bias events, at the same energy. In addition, fluctuations in
particle multiplicity and energy flows are significantly larger in the underlying event.
A simple interpretation is that with increasing scales being probed in the collision,
the relative impact parameter of the two incident hadrons must decrease — in other
words, by biasing the event selection towards larger momentum scales, the overlap
between the colliding hadrons becomes larger, offering other partons a higher chance
of interactions. This leads to an effect known as the jet pedestal effect: the hard
objects sit on a “pedestal” of underlying activity, which becomes larger with increasing
scale of the hard process.
Fig. 7.10 Nch vs. p⊥^Z at √s_pp = 7 TeV in the transverse (left panel)
and towards (right panel) region, as measured by the ATLAS collabora-
tion [28]. Charged tracks with |η| < 2.5 and p⊥ > 500 MeV are consid-
ered. Reprinted with permission from Ref. [28].
Fig. 7.11 Σ p⊥ vs. p⊥^Z at √s_pp = 7 TeV in the transverse (left panel)
and towards (right panel) region, as measured by the ATLAS collabora-
tion [28]. Charged tracks with |η| < 2.5 and p⊥ > 500 MeV are consid-
ered. Reprinted with permission from Ref. [28].
With increasing p⊥^Z, not only do the charged particles in these regions carry a
larger total transverse momentum, but they also become harder — their average
transverse momentum increases as well, cf. Fig. 7.12.
Fig. 7.12 ⟨p⊥⟩ vs. p⊥^Z at √s_pp = 7 TeV in the transverse (left panel)
and towards (right panel) region, as measured by the ATLAS collabora-
tion [28]. Charged tracks with |η| < 2.5 and p⊥ > 500 MeV are consid-
ered. Reprinted with permission from Ref. [28].
Multiple parton interactions also increase the number of final-state particles, visible through their multiplicity. One of the reasons for this increase is
that they also alter the overall colour flow of the event, adding more colour sources,
and thus providing many more directions along which soft emissions can proceed. This
ultimately translates into many more seeds of hadron production.
Another effect is directly related to jets. From previous considerations, it is clear
that QCD final-state radiation carries away energy from the primary hadrons, which
may end up outside the jet. For relatively simple, cone-shaped jets with a radius of R
this yields a contribution that is roughly proportional to log(1/R), coming from the
integral over opening angles. Hadronization corrections, discussed later in this chapter,
scale like 1/R, and also contribute negatively to the overall jet energy, as some of the
hadrons emerging may end up outside the jet. On the other hand, the underlying event
adds energy back to the jets, usually in proportion to the jet area, ∝ R2 . For more
details, cf. [432].
At the moment models for the underlying event fall, broadly speaking, into two cate-
gories: first of all, there are models based on simple parton-parton scattering, imple-
mented in standard event generators such as PYTHIA, HERWIG, or SHERPA. Alternative
models based on Regge-theory [598] and employing the notion of cut pomerons form
the basis of the underlying event and minimum bias modelling in event generators
such as PHOJET [514] and EPOS [889].
Concentrating on the former, more simplistic class of models first, the logic un-
derlying their construction is that the cross-section for parton-parton scattering with
transverse momentum larger than some cut-off p⊥,min , σ2→2 (p⊥,min ) becomes larger
than the total proton–proton cross-section σpp,tot ,
σ2→2(p⊥,min) ≡ ∫_{p²⊥,min}^{s} dp²⊥ dσ̂2→2/dp²⊥ ≥ σpp,tot . (7.36)
Here, σ̂2→2 is given by Eq. (2.52), where the matrix element squared |Mab→n |2 is
given by the sum of all partonic 2 → 2 QCD scatters at leading order. At the LHC the
saturation of the total cross section occurs for values of p⊥,min of the order of about
5-10 GeV, depending on the c.m.-energy of the protons, on the PDFs being used for
the parton-level calculation, and on the choice of renormalization and factorization
scales.
The interpretation of Eq. (7.36) is straightforward: if the partonic scattering
cross-section is larger than the total inelastic or, more precisely, the non-diffractive
(ND) hadronic cross-section, then there must be more than one partonic scatter per
hadron interaction,
σ2→2(p⊥,min) ≥ σpp,ND −→ ⟨Nscatters(p⊥,min)⟩ ≡ σ2→2(p⊥,min)/σpp,ND ≥ 1 . (7.37)
This presents the starting point for a class of relatively simple models for the
underlying event. In these models the underlying event emerges as the superposition
of largely independent parton-parton scatters, with two partons that are oriented back-
to-back in the transverse plane. The number of these scatters is distributed according
to a Poissonian distribution defined by hNscatters (p⊥,min )i in Eq. (7.37), but possibly
reduced by one, if the hardest partonic event — the signal event — is a QCD event.
The number of actual scatters Nscatters is either predetermined by a Poissonian, as in
JIMMY [298], the model implemented in the HERWIG family of event generators, or it is
generated dynamically, as in the model [857] realized in the PYTHIA event generators.
The basis of this latter model is a structure that looks like a Sudakov form factor
and, analogous to the case encountered already in the construction of parton showers,
yields the probability of no further scatter to happen between a higher scale Q2 and
a lower scale t ≥ p2⊥,min :
∆^(UE)(Q², t) = exp[ −(1/σpp,ND) ∫_t^{Q²} dp²⊥ dσ̂2→2/dp²⊥ ] . (7.38)
Equating this with a random number allows the transverse momentum squared t, at
which the next scatter appears, to be determined, implying an ordering in a hardness
scale given by p²⊥. Once the p⊥ of the scatter is fixed, the Bjorken-x of the
incoming particles (or the rapidity of the overall system) can also be selected from the
differential parton-level cross-section dσ̂2→2/dp²⊥.
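The generation of this p⊥-ordered sequence of scatters from Eq. (7.38) works exactly like solving a parton-shower Sudakov form factor for the next emission scale. The sketch below assumes a toy regularized spectrum dσ̂/dp⊥² = A/(p⊥² + p⊥,0²)², for which the exponent of Eq. (7.38) can be inverted analytically; all parameter values are illustrative, not tuned.

import math
import random

def generate_mpi_scales(Q2_start, sigma_nd=50.0, A=300.0, pT0=2.0, pT_min=0.5,
                        rng=random):
    """Generate a falling sequence of MPI scales t = pT^2 from Eq. (7.38),
    using the toy spectrum dsigma/dpT^2 = A/(pT^2 + pT0^2)^2 (in mb/GeV^2)."""
    scales, t = [], Q2_start
    while True:
        r = 1.0 - rng.random()                     # uniform in (0, 1]
        # Solve Delta^(UE)(t_old, t_new) = r for t_new, given the previous scale.
        inv = 1.0 / (t + pT0**2) + sigma_nd * (-math.log(r)) / A
        t = 1.0 / inv - pT0**2
        if t < pT_min**2:
            break
        scales.append(t)
    return scales

print([round(math.sqrt(t), 2) for t in generate_mpi_scales(Q2_start=100.0**2)])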
This naive treatment has an unpleasant implication, namely a steep dependence on
the value of p⊥,min , driven by the divergent structure of dσ̂2→2 /dp2⊥ ∝ 1/p4⊥ or worse
for small values of p⊥ . To cure this problem, a phenomenological ansatz is often used,
namely replacing p2⊥ with (p2⊥ +p2⊥,0 ). This also allows the elimination of p⊥,min , as the
partonic cross-section is suitably regularized. Focusing on the approximate behaviour
of the differential cross-section with respect to p2⊥ and keeping only these terms, this
implies a reweighting of the differential cross-section Eq. (7.38) by a factor
Comparison with data suggests that this new parameter, p⊥,0, scales with the c.m.-
energy of the colliding hadrons, in a way similar to the total cross-section:

p⊥,0(E) = p⊥,0(Eref) · (E/Eref)^η . (7.40)
Here Eref is some reference scale and the exponent η is related to the pomeron intercept
driving the rise of the total hadron cross section.
This relatively simple way of generating the series of 2 → 2 parton scatters forming
the underlying event can be further modified by assuming a distribution of partons
in the incoming hadron. This means that the PDFs entering the simulation become
also dependent on an impact parameter b, basically the distance of the two colliding
hadrons. Usually it is assumed, though, that the impact parameter dependence is on
average only, and therefore there are no “positions” in transverse space associated
with the individual scatters. In addition, it is assumed that the impact parameter
dependence factorizes from the PDFs such that
fi/h1(x1, µF; b/2) fj/h2(x2, µF; b/2) = fi/h1(x1, µF) fj/h2(x2, µF) A(b) , (7.41)
where A(b) denotes a matter overlap function. Different parameterizations are used
for A(b), including single and double Gaussians and forms that are inspired from the
usual electromagnetic nucleon form factors. In some recent publications, these overlap
functions have also been assumed to depend on the Bjorken–x of the partons [419].
What they all have in common is that the number of scatters Nscatters increases with
the overlap A(b). An important consequence of adding the impact parameter is that the
b–integrated distribution of Nscatters becomes broader — in other words, the underlying
event model supports larger fluctuations, in agreement with data.
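The broadening effect of the impact-parameter dependence can be illustrated in a few lines: the mean number of scatters is taken proportional to a Gaussian overlap A(b), and the b-averaged multiplicity distribution is compared to a plain Poissonian with the same mean. All numbers are illustrative.

import math
import random

def poisson(mu, rng=random):
    # Simple Poisson sampler (Knuth); adequate for the small means used here.
    L, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def sample_n_scatters(n_events=100000, n_mean=3.0, rng=random):
    plain, with_b = [], []
    for _ in range(n_events):
        plain.append(poisson(n_mean))
        # Impact parameter drawn from a 2D Gaussian matter profile; the factor 2
        # keeps the b-averaged mean equal to n_mean.
        b2 = -2.0 * math.log(1.0 - rng.random())
        with_b.append(poisson(2.0 * n_mean * math.exp(-b2 / 2.0)))
    return plain, with_b

def mean_var(xs):
    m = sum(xs) / len(xs)
    return m, sum((x - m)**2 for x in xs) / len(xs)

plain, with_b = sample_n_scatters()
print(mean_var(plain))     # variance close to the mean (pure Poisson)
print(mean_var(with_b))    # same mean, but a visibly larger variance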
The individual partonic sub-events undergo parton showering, which will decorre-
late the two outgoing partons in the transverse plane and also contribute to the overall
yield of hadrons produced. One way to impose four-momentum conservation on the
event is to reduce the total energy of the incident hadrons by the amount carried away
by the initial partons after the parton shower of each partonic sub-event has termi-
nated. This is the logic underlying the models in PYTHIA and SHERPA, where it is
assumed that the PDFs for the beam hadron after some partons have been extracted
are just the PDFs of the same hadron, but with reduced energy. Alternatively one
could also just stop the generation of additional scatters if they exceed the total en-
ergy of the hadronic system, possibly even removing the last parton scatter, which is
the algorithm in HERWIG. More complicated models introduce correlations between the
sub-events beyond this simplest four-momentum and flavour conservation and thereby
possibly also modify the PDFs, cf. for instance [855].
Further details that must be taken care of in these simplest models are the colour
flows between the individual partonic sub-events, which will influence the production
of hadrons during hadronization. While infrared-safe observables such as energy flows
etc. will remain more or less unaltered by hadronization, hadron multiplicities and
the soft part of their energy and p⊥ -distributions are highly sensitive to this. This
adds yet another dimension of model building and assumptions into the simulation,
and consequently the various models differ quite a lot. The answers here range from a
completely random assignment of colours, with the only constraint of overall colour
conservation, to models in which the overall “length” in η-φ space of the distances between
colour-connected pairs of partons is minimized.
The model presented here has to be taken with more than one pinch of salt. Due
to the low scales probed — down to p⊥,min — also the range of Bjorken–x extends to
fairly low values of the order of 10−6 , which in turn introduces a sizable dependence
on the parton distribution function used in the calculation. This is just one indicator
of a more generic problem: any fixed-order calculation probing such scales becomes
unstable, not least due to the emergence of large logarithms of the BFKL type. It
therefore cannot be over-stressed that models such as the ones presented here certainly
are overly simplistic and are at best only able to give some qualitative ideas about the
true physics.
In the latest PYTHIA versions, PYTHIA 6.4 and PYTHIA 8, the simple model outlined
above has been enhanced by two additional ideas.
First of all, “interleaving” of the multiple parton scatters with the parton showering,
and, in particular, initial-state parton showering, has been introduced, which basically
amounts to a competition between both. To facilitate this, a combined no-emission
probability has been introduced, which schematically reads
dP(Q², t)/dp²⊥ = [ dPPS/dp²⊥ + dPMPI/dp²⊥ ] · exp[ −∫_t^{Q²} dp²⊥ ( dPPS/dp²⊥ + dPMPI/dp²⊥ ) ] . (7.42)
Naively, this seems to factorize and therefore to not alter the pattern of parton emis-
sion through the parton shower (indicated by subscript “PS”) and through multiple
parton interactions (“MPI”). However, the effect of flavour and especially momentum
conservation will change the patterns, since both parton showering in the initial state
and multiple parton scatters take energy out of the incident hadrons, and therefore
the cross-talk impacts on the individual parts. As a by-product of this idea, it becomes
possible that two parton scatters, that have been independent from each other when
they were generated at relatively high p⊥ -scales, are found to point to a common “an-
cestor” parton in their initial-state evolution, see also the left panel of Fig. 7.13. This
Fig. 7.13 Sketch of possible improvement of simple models for the under-
lying event, available in recent versions of PYTHIA: Two scatters can have
a common “ancestor” parton (left panel), or there can be rescattering ef-
fects (right panel). In both cases, parton showering effects and emissions
of secondaries have been ignored.
where the superscript dir denotes direct production in one two-parton scattering and
σeff is a process-independent parameter with the dimensions of a cross-section. The
factor m before the double-parton scattering contribution is used to obtain the correct
symmetrization; it is m = 1 for X = Y and m = 2 otherwise. The ⊗ symbol in the
ratio and the usage of differential cross-section hints at the necessity to apply identical
cuts on the final state and on possible correlation effects in the two-parton PDFs.
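A minimal numerical sketch of this counting, assuming the commonly used "pocket formula" σ^DPS_(XY) = (m/2) σ^dir_X σ^dir_Y / σeff, which is consistent with the symmetrization factor m described above; the single-scattering cross-sections below are placeholders and not the values entering Table 7.1.

def sigma_dps(sigma_x_mb, sigma_y_mb, identical, sigma_eff_mb=15.0):
    """Pocket-formula estimate of the DPS cross-section, everything in mb.
    m = 1 for identical final states X = Y and m = 2 otherwise, as in the text."""
    m = 1.0 if identical else 2.0
    return m / 2.0 * sigma_x_mb * sigma_y_mb / sigma_eff_mb

# Placeholder single-scattering cross-sections in mb (1 mb = 1e9 pb).
sigma_w, sigma_jj = 1.0e-4, 1.0
print("W + jj DPS estimate [pb]: ", sigma_dps(sigma_w, sigma_jj, False) * 1e9)
print("jj + jj DPS estimate [pb]:", sigma_dps(sigma_jj, sigma_jj, True) * 1e9)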
A number of subsequent refinements and alterations include, among others, at-
tempts to connect σeff with total hadronic cross-sections or the geometric size of the
hadrons [637, 742], a scaling of this quantity with the scale at which the hadron is
being probed, and some naive inclusion of correlation effects in two-parton PDFs.
The latter can be achieved, for instance, by writing the two-parton PDFs as a simple
product
fp1 p2 /h (x1 , x2 ; µ2F ) = (1 − x1 − x2 )fp1 /h (x1 ; µ2F )fp2 /h (x2 ; µ2F ) (7.44)
of the conventional single-parton PDFs [153, 637]. This simple picture has been re-
fined by considerations concerning various types of correlation effects [464, 465, 748],
by invoking DGLAP-type evolution equations for parameterizations of multi-parton
PDFs [679, 845, 861, 903], including also different resolution scales [832, 833], and by
the use of generalized two-parton distribution functions, introduced in [258–260].
With the running of the LHC a proper description of DPS has found renewed
interest, as documented in a wide range of publications, which typically go beyond
the simplistic picture outlined above. In any case, any models for multiple parton
scattering must be compared with suitable data, like, for instance [89, 109, 113] from
the TEVATRON and [17, 379, 387, 656] from the LHC. It is fair to state here that any
further development of such models and probably also any serious attempt to arrive
at a first-principles based theory will necessitate further measurements of such effects.
Table 7.1 Indicative cross-sections for direct and DPS component of typ-
ical process combinations at an 8 and a 13 TeV LHC. All cross-sections
are evaluated at next-to-leading order with the central set of the CT14
NLO PDF [489], and with renormalization and factorization scales equal
to the scalar sum of the transverse momenta of the final-state particles, ĤT.
The gauge bosons decay into one (different) family of leptons each, so that
branching ratios have been factored in. Jets, including b jets, are defined in
the anti-kT (D = 0.4) algorithm with a transverse momentum of 20 (30)
GeV at 8 (13) TeV; to avoid the well-known problem of diverging di-jet
cross-section for identical p⊥ cuts, for the jj process the leading jet must
(1)
have a transverse momentum of p⊥ ≥ 25 (35) GeV. Photons are subject
to the same transverse momentum cuts as the jets. A value of σeff = 15
(20) mb is assumed for the two energies.
To illustrate this idea further, consider the case of ZZ production. In the DPS
contribution, there will usually be two pairs of leptons, whose transverse momenta
compensate each other, as in the equation above. In contrast, in the direct contribu-
tion, the two Z bosons themselves will recoil against each other, and their individual
transverse momentum usually will be larger than in the single-Z case, therefore leading
to the transverse momentum of the lepton pair being different from zero,
|p⃗⊥^(ℓℓ)| > 0 , (7.46)
and the individual transverse momenta not compensating each other. Cuts like these,
looking for recoiling pairs, etc., of course further enhance the DPS contribution over
the direct component and allow for a better signal (DPS) to background (direct) rate.
7.3 Hadronization
7.3.1 Some qualitative statements
To understand and estimate the visible impact of hadronization, consider the case
of jet production at a lepton collider. From LEP data it is known that the rapidity
spectrum of hadrons is more or less flat, when rapidity is defined with respect to the
main event axis, usually assumed to be the direction of the q q̄ pair at leading order.
At rapidities of the order of log mhad /E hadrons cannot be produced anymore, due to
energy conservation effects, resulting in the hadron spectrum vanishing fast. At the
same time, the transverse momentum spectrum of hadrons with respect to the same
axis roughly follows a Gaussian profile, see Fig. 7.14. Defining an expectation value of
this profile,
⟨ρ⟩ = ∫₀^∞ dp⊥ p⊥ ρ(p⊥) = ∫₀^∞ dp⊥ p⊥ exp(−p²⊥/σ²) ≈ 1/Rhad ≈ mhad ≈ 1 GeV (7.47)
E = ∫₀^Y dy cosh y ∫₀^∞ dp⊥ p⊥ ρ(p⊥) = ⟨ρ⟩ sinh Y

P = ∫₀^Y dy sinh y ∫₀^∞ dp⊥ p⊥ ρ(p⊥) = ⟨ρ⟩ (cosh Y − 1) . (7.48)
For a 100 GeV parton jet this amounts to a correction of something of the order of
about 2–3 GeV. This is an indication that, in order to fully understand jet physics at
the LHC at the level of about 10%, it is important to also study non-perturbative
effects such as hadronization.
Similar to the PDFs, fragmentation functions fulfil certain sum rules; for instance,
as every parton q must eventually hadronize into at least one hadron,

Σ_h ∫₀¹ dz Dh/q(z, µ²F) ≥ 1 .
To push the analogy a bit further, the dependence of the fragmentation functions on
the factorization scale µF is logarithmic and given by evolution equations similar to
the ones already encountered for the PDFs, see below. However, ignoring this scaling
behaviour and assuming that Dh/q(z, µF) ≡ Dh/q(z) implies that the probability to
find a 10 GeV hadron h in a jet emerging from a 20 GeV parton q is identical to
that of finding a 20 GeV hadron h of the same kind in a jet emerging from a 40 GeV parton
q. This in turn supports the notion of universality of the hadronization process once
it is described at the same factorization (or hadronization) scale.
3 For example, the relatively simple but hugely successful Cornell potential [496] reads
V(r) = −κ/r + σ r , (7.52)
with the Coulomb parameter κ and the string tension σ fitted to quarkonia masses.
Fig. 7.15 Field lines in electromagnetic (left) and strong (right) inter-
actions, spanned by two poles, like static electron–positron and quark–anti-quark
pairs. While in electrodynamics the field lines occupy all space,
with a strength decreasing with the distance from the poles, in strong in-
teractions they “bunch up” in a flux tube with finite diameter between the
two poles.
The basic picture underlying the models that will be discussed below is relatively simple: hadrons are bound states
of quarks and antiquarks.
In the case of mesons, this is fairly obvious and straightforward to realize. They are
made of quark–anti-quark pairs such that one can write ΨM = |q1 q̄2⟩ for their flavour
part, and the only non-trivial issue is related to the flavour wave functions of neutral
states such as η or η′, which have some mixing in flavour and are thus a superposition
of |uū⟩, |dd̄⟩, and |ss̄⟩. Practically speaking, since the parton shower implicitly also
acts in the limit of infinitely many colours, Nc → ∞, every colour degree of freedom
will have one and only one partner with the corresponding anti-colour. In a world
without gluons, the colours would be carried by quarks and the anti-colour would be
carried by anti-quarks, and there would be unique pairings of both. Any of these pairs
would then carry the flavour quantum numbers of mesons.
In contrast, in the large-Nc limit, the composition of baryons is not entirely straight-
forward. As baryons consist of three constituent quarks, for Nc = 3 their colour indices
are completely anti-symmetric, realized by a Levi–Civita tensor in colour space. This
is how they form an overall colour-singlet. For Nc → ∞ such a simple reasoning of
course would not work any more. Instead, usually the models underlying hadroniza-
tion resort to the notion of diquarks, hypothetical bound states of two quarks or two
anti-quarks. For large Nc , the colour state of two such combined quarks, a sextet, can
be re-interpreted as an anti-triplet, which allows the formation of binary bound states
of a quark and a diquark, like a meson. The flavour part of suitable baryon wave-
functions, expressed as quark–diquark bound states, has been discussed, for example,
in [727]. In the hadronization models implemented so far, the diquarks come as spin-0
and spin-1 particles, which is what can be expected from an s-wave system made from
two spin-1/2 objects. In order to maintain Fermi statistics, though, spin-0 diquarks
can only exist for two different quark flavours.
Diquarks are non-perturbative objects which act as some mnemonic device to pro-
duce, carry and trace baryon number. Consequently, they are not thought to be pro-
duced in the perturbative phases of event generation, in particular the parton shower,
but rather in those phases that are characterized by scales around typical hadron mass
scales. This in turn means that usually only diquarks consisting of two light quarks,
u, d, or s, are being produced, which have constituent masses up to the order of 1
GeV. The absence of diquarks containing one or two heavy quarks, c or b, implies
that in typical simulation programs doubly or triply heavy baryons are absent. On the
other hand, this restriction also implies that ordinary heavy baryons, such as Λc or
Λb , consist of a heavy quark and a light diquark, a picture that is qualitatively well
aligned with the corresponding picture in heavy quark effective theory, in which heavy hadrons
consist of a heavy quark and some “brown muck” around them.
Various other ways to generate and trace baryon quantum numbers have been
suggested, for instance by identifying Y-shaped string junctions where a “baryon”
centre plays the role of a Levi–Civita tensor and is connected to three quarks through
strings [854], but they will not be discussed further here.
Measuring the scaled energy

x = \frac{2 E_h}{\sqrt{s}} \,\in\, [0, 1]   (7.53)

of the hadron with respect to the c.m. energy, and its angle θ with respect to the
electron beam axis, allows one to write the differential hadron production cross-section as

\frac{1}{\sigma_{e^+e^-\to q\bar q}}\,\frac{d\sigma^h}{dx} \,=\, F^h(x,\mu^2) \,=\, \sum_i \int_x^1 \frac{dz}{z}\; C_i\!\left(z,\,\alpha_s,\,\frac{s}{\mu^2}\right) D_{h/i}\!\left(\frac{x}{z},\,\mu^2\right) ,   (7.55)
In this expression, the gi (s) signify the couplings of the quarks to the γ ∗ /Z, and
are given by linear combinations of their electrical, vector, and axial charges and the
corresponding boson propagator terms. At leading order, the Cq, q̄ for quarks and anti-
quarks are identical, and the contribution for the gluon Cg only emerges at O (αs ),
cf. [783]. Written in this form the analogy of the fragmentation functions F with the
structure functions F1,2 in DIS and of the parton densities fi/h with the fragmentation
densities Dh/i becomes fairly striking.
The fragmentation functions (FFs) Dh/i (z, µ2 ) encode, at leading order, the prob-
ability that a hadron h can be found in the hadrons stemming from the fragmentation
of parton i, carrying a light-cone momentum fraction z of the parton. This is in com-
plete analogy with the parton distribution functions (PDFs). In full analogy to the
PDFs again, the FFs experience logarithmic scaling violations: they depend on the
fragmentation scale µ², as expressed through the evolution equation
\frac{\partial D_{h/i}(x,\mu^2)}{\partial\log\mu^2} \,=\, \sum_j \int_x^1 \frac{dz}{z}\; P_{ji}(z,\alpha_s)\, D_{h/j}\!\left(\frac{x}{z},\,\mu^2\right) ,   (7.57)
in parallel to the DGLAP evolution equations for the PDFs encountered, for instance,
in Eq. (2.31).
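To make the structure of Eq. (7.57) concrete, the following minimal sketch (Python/NumPy) performs a single Euler step of the non-singlet q → q piece of the evolution, with the plus-prescription of the leading-order splitting function written out explicitly. The grid, the toy input FF and the value of αs are illustrative choices, not those of any published fit.

```python
import numpy as np

CF = 4.0 / 3.0

def D0(x):
    # toy input FF at the starting scale, shaped like the ansatz of Eq. (7.58)
    return x**(-0.5) * (1.0 - x)**3

def convolution_Pqq(D, x, n=4000):
    """int_x^1 dz/z P_qq(z) D(x/z), with P_qq = CF[(1+z^2)/(1-z)_+ + 3/2 delta(1-z)]."""
    dz = (1.0 - x) / n
    z = x + (np.arange(n) + 0.5) * dz                 # midpoint rule, z strictly below 1
    regular = np.sum(((1.0 + z**2) * D(x / z) / z - 2.0 * D(x)) / (1.0 - z)) * dz
    return CF * (regular + D(x) * (2.0 * np.log(1.0 - x) + 1.5))

def euler_step(D, x, alpha_s=0.2, dlogmu2=0.1):
    """One Euler step of Eq. (7.57), keeping only the q -> q contribution."""
    return D(x) + dlogmu2 * alpha_s / (2.0 * np.pi) * convolution_Pqq(D, x)

for x in (0.1, 0.3, 0.5, 0.7):
    print(x, D0(x), euler_step(D0, x))
```

A real evolution code would of course include the full singlet system, a running αs and many more steps; the sketch only illustrates how the plus-distribution is handled in practice.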
7.3.2.2 Parameterizations
Similar to the PDFs, discussed in Chapter 6 and in particular in Section 6.2.2, the
evolution equations in Eq. (7.57) are used to connect data with the form of the FFs
at some fixed lower scale. This lower scale µ_0 typically is of the order of a few Λ_QCD
for light flavours like u, d, or s quarks, or for gluons, and of the order of the heavy quark
mass in the case of c or b quarks. A typical ansatz for the FFs at this scale, especially
for light hadrons and flavours, is of the form [279, 624, 687, 697]
D_{h/i}(z,\mu_0^2) \,=\, N_i\, z^{\alpha_i^h}\,(1-z)^{\beta_i^h} ,   (7.58)
but also slightly more complicated forms have been proposed, see [447, 448]. This is
especially true for heavy quark fragmentation.
Typically, a number of symmetries are assumed, for instance a symmetry between
particles and anti-particles, relating D_{h/i}(z, µ_0^2) and D_{h̄/ī}(z, µ_0^2) through charge conjugation.
Similarly, for instance for the case of pions, the suppression of “sea-quark” formation
of pions out of the “wrong” quark type is often assumed. Therefore, inequalities like
the following ones are expected to hold:

D_{\pi^+/d}(z,\mu_0^2) \,=\, D_{\pi^+/s}(z,\mu_0^2) \,<\, D_{\pi^+/u}(z,\mu_0^2) \,=\, D_{\pi^+/\bar d}(z,\mu_0^2)
D_{K^+/\bar u}(z,\mu_0^2) \,=\, D_{K^+/d,\bar d}(z,\mu_0^2) \,<\, D_{K^+/u}(z,\mu_0^2) \,<\, D_{K^+/\bar s}(z,\mu_0^2) .   (7.60)
Here the latter inequality in the second equation reflects the suppression of secondary
ss̄ production in the fragmentation process: in order to produce a K+ out
of a u quark, the strangeness quantum number of a s̄ quark must be produced through
non-perturbative ss̄ formation. Due to the larger mass of strange quarks, this, however,
is more unlikely than the corresponding non-perturbative uū formation in the case of
a leading s̄ quark fragmenting into a K + . This reasoning will be recovered under the
keyword of strangeness suppression later in this chapter. The further assumption of
“valence enhancement” of the FFs, i.e. the idea that at large z the quantum numbers of the
valence quark flavours dominate the fragmentation process, motivates assumptions
like [697]
\beta^{\pi^+}_{d} \,=\, \beta^{\pi^+}_{s,\bar s} \,=\, \beta^{\pi^+}_{u,\bar d} + 1
\beta^{K^+}_{\bar u} \,=\, \beta^{K^+}_{d,\bar d} \,=\, \beta^{K^+}_{u} + 1 \,=\, \beta^{K^+}_{\bar s} + 2 .   (7.61)
In addition, a momentum sum rule is usually imposed,

\int_0^1 dz\; z \left[\,\sum_h D_{h/i}(z,\mu^2)\right] \,=\, 1 ,   (7.62)
which states that all hadrons stemming from the fragmentation of a single parton
should carry its total energy.
Actual fits to FFs at LO and NLO, similar to the ones for PDFs, have been
performed by different groups. In Fig. 7.16 results for one set of these fits for charged
pions [446] and protons [447] at NLO, at the low scale of µF = 1 GeV are displayed;
the fitting function the authors use is given by
Fig. 7.16 Fragmentation functions for charged pions (left) and protons
(right) at next-to-leading order and µF = 1 GeV, with parameterizations
taken from [446, 447].
D_{h/q}(z, \mu_F = 1\ \mathrm{GeV}) \,=\, \frac{N\, z^{\alpha}\,(1-z)^{\beta}\,\bigl[1+\gamma\,(1-z)^{\delta}\bigr]}{B(2+\alpha,\,1+\beta) \,+\, \gamma\, B(2+\alpha,\,1+\beta+\delta)} ,   (7.63)
with B(x, y) denoting the Euler Beta function.
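As a small illustration of this parameterization, the sketch below (Python, using SciPy's Euler Beta function; the parameter values are arbitrary and not the fitted ones of Refs. [446, 447]) evaluates Eq. (7.63) and checks numerically that the Beta functions in the denominator normalize its second moment, ∫ dz z D(z), to N.

```python
from scipy.special import beta as B
from scipy.integrate import quad

# Evaluate the fitting form of Eq. (7.63); parameter values are illustrative only.
def D(z, N=0.4, alpha=-0.5, beta_=1.5, gamma=5.0, delta=3.0):
    num = N * z**alpha * (1.0 - z)**beta_ * (1.0 + gamma * (1.0 - z)**delta)
    den = B(2.0 + alpha, 1.0 + beta_) + gamma * B(2.0 + alpha, 1.0 + beta_ + delta)
    return num / den

# The denominator is constructed such that the second moment equals N:
second_moment, _ = quad(lambda z: z * D(z), 0.0, 1.0)
print(second_moment)   # ~0.4, i.e. the chosen value of N
```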
7.3.3.2 Shortcomings
There are a number of shortcomings in the original version of this model, the most
obvious one being the notable absence of any gluons. There,
hadronization proceeds fully recursively and through quark degrees of freedom only,
giving rise merely to mesons, but not to baryons. The introduction of diquarks would
remedy the latter point, without any real addition to the model, but the former point
is a bit more tricky and was never really fully addressed since more involved models
such as string or cluster fragmentation took over, see the next sections. These improved
models did not suffer from some of the more fundamental issues the Feynman–Field
model had by construction.
There are also some issues with the way the splitting process progresses. Due to
the form of the probability density ρ(η) for the momentum fraction η = 1 − ξ of
the residual quark chosen in the original model, the model is sensitive to whether the
production of mesons occurs from the q or the q̄ end.4
Finally, note that the way it has been formulated, the model is not Lorentz-invariant
and therefore the result has some dependence on the actual frame the hadronization
is performed in.
Cluster models, which are mainly based on the idea of preconfinement, were
discussed very early on [899] and have been implemented in the framework of an event
generator for e− e+ annihilations into hadrons in [528, 533]. They are the model of
choice for the simulation of hadronization in the event generators HERWIG, HERWIG++
and SHERPA.
The key idea in this class of models was to enforce non-perturbative splittings of all
gluons into quark–anti-quark pairs at the end of the parton shower. As a consequence,
since the model is formulated in the Nc → ∞ limit, colour-singlet clusters are formed,
which predominantly consist of neighbouring quark–anti-quark pairs. The clusters will
have the flavour quantum numbers of mesons, as given by their quark–anti-quark pairs, or,
if they consist of a quark–diquark pair, of baryons. Once formed, their masses are
distributed in a continuous spectrum. The typical mass of these objects is relatively
low and driven by the infrared cut-off Q0 of the parton shower and by the masses of
their constituents. While the bulk of low-mass clusters will be reinterpreted as actual
hadrons, and enter hadron decays after some momentum reshuffling to force them onto
their mass-shell, the high-mass tail of the distribution is seen as a washed out spectrum
of excited hadrons. These clusters will decay into further, lighter clusters until they
reach the mass scale of hadrons. This leads to a distribution of primordial hadrons
that closely follows the pattern resulting from the parton shower, a manifestation of
local parton–hadron duality (LPHD).
4 In the original conception of the model in [527] it was noted that the energy distribution of
primary mesons hints at ρ(η) peaking close to 1, i.e. the distribution in the meson momentum fraction
ξ must be peaked at low ξ to agree with data. The inclusion of gluon emissions certainly
improves the situation, but it would of course mean that a way to deal with gluons inside this model
must be defined.
In a first step in cluster fragmentation, the gluons must decay into quark–anti-quark
pairs or, if the phase space allows it, into the usually somewhat heavier diquark pairs.
There are two ways this is achieved in practical implementations. Either, as encoded in
the HERWIG and HERWIG++ realizations of the cluster fragmentation idea, the gluons
acquire a non-perturbative mass, which allows a two-body decay in the rest frame
of the now-massive gluon. This gluon mass (mg ) then becomes a parameter of the
fragmentation model that has a direct impact on what flavours are actually produced
in the transition from the perturbative to the non-perturbative regime. Usually, mg ≈
1 GeV. With quark constituent masses around ΛQCD , mu,d ≈ 350 MeV and ms ≈
450 MeV, the gluon decays happen just above the threshold and, consequently, there
is not much phase space left for the quarks. As a result, isotropic decays of the gluons
will lead to the produced pairs following the gluon direction to good approximation,
thus encoding LPHD. Alternatively, in SHERPA, the gluons are kept massless and decay
by borrowing some four-momentum from a colour-connected spectator parton.
For both HERWIG++ and SHERPA, the availability or lack of phase space for the
gluon decay dictates the available flavours and influences the actual flavour of the quark
or diquark pair into which the gluons decay. While in HERWIG++, only light quark pairs
are accessible, due to the relatively low non–perturbative gluon mass, in SHERPA all
flavours can eventually be produced. The soft, non-perturbative production of heavy
flavours, i.e. of c and b quarks or of diquarks containing them, is therefore explicitly
disallowed. In addition, “popping” probabilities P_{→f f̄} modify the relative abundance
of the permitted flavours f in the non-perturbatively enhanced gluon splitting process.
These quantities will reoccur when the decays of clusters are discussed. They enter
the HERWIG++ as well as the SHERPA cluster fragmentation model.
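As a rough sketch of this flavour selection (with illustrative constituent masses and popping weights that are not the tuned values of HERWIG++ or SHERPA), the accessible flavours for a given gluon mass and their relative probabilities could be chosen as follows.

```python
# Toy flavour selection for a non-perturbative g -> f fbar splitting: a flavour
# (or diquark) is accessible only if the pair fits into the available mass m_g,
# and the accessible flavours are then weighted by assumed popping probabilities.
constituent_mass = {"u": 0.35, "d": 0.35, "s": 0.45, "(ud)_0": 0.70}   # GeV
popping_weight   = {"u": 1.0,  "d": 1.0,  "s": 0.5,  "(ud)_0": 0.2}    # illustrative

def gluon_splitting_probabilities(m_g=1.0):
    allowed = {f: w for f, w in popping_weight.items()
               if 2.0 * constituent_mass[f] < m_g}       # phase-space check
    norm = sum(allowed.values())
    return {f: w / norm for f, w in allowed.items()}

print(gluon_splitting_probabilities(m_g=1.0))
# for m_g = 1 GeV: u, d and s pairs are accessible, the (ud) diquark is not
```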
After the gluons have been decayed, primary clusters are formed from the unique
This is, possibly, further modified by multiplet weights reflecting the fact that for
given flavour configurations there may be different viable hadrons belonging to dif-
ferent multiplets which are defined by their spin and excitation quantum numbers.
Correspondingly there will be relative weights for the pseudoscalar, vector, etc. meson
multiplets. This effectively leads to an altered production rate of hadrons from
heavier multiplets with respect to the lighter ones. This becomes necessary because
the large available phase space for lighter hadrons is usually not correctly compensated
by the increasing number of polarizations for heavier hadron species.
Having selected the hadrons, the only missing input for the cluster decay is their
orientation. In the original version of cluster fragmentation models, the cluster decays
were isotropic, but this does not correctly cover leading-particle effects in the
x_P distribution of hadrons. As a consequence, highly non-isotropic decays are now also
used, especially for those clusters where one of the constituents stems from the
perturbative phase of the event simulation and could thus be identified as a leading
parton. Naively, this means that the decay contributions must peak for vanishing
transverse momentum of the outgoing hadron with respect to the cluster axis, typically
with a form like 1/k_⊥^n or a Gaussian. This introduces a parameter determining the
strength of the peak, in addition to some parameter that defines the scale up to which
C → hh decays occur.
Together with the transverse momentum this fixes the masses of the new clusters
which can then be formed by merely arranging the four constituents.
However, for the new clusters, again decisions of how they decay further must be
made, along the lines just described.
t = r0 = E0 /σ, each will have energy and momentum E0 and, therefore, light-cone
momenta of 2E0 . After another time r0 they will have swapped positions and again
be at a distance of 2r0 . This “yo-yo” motion will repeat itself and after a total time
τ = 4r0 the quarks are back to where they started. The total area A covered by the
string during that time is given by eight right-angled triangles with legs of length r0 ,
thus
\sigma^2 A \,=\, \sigma^2\,\frac{8 r_0^2}{2} \,=\, 4 E_0^2 \,=\, m^2 .   (7.69)
A simple calculation shows that the area is Lorentz-invariant, for details cf. [162]. This
motivates identifying bound states (hadrons or, more precisely, mesons) of mass m
with such configurations of (massless) q q̄ pairs connected through a linear force field
and bouncing back and forth with the speed of light.
quickly fading external magnetic field and the density of the Cooper pairs. Conversely,
for a type II superconductor, ξ ≪ λ. The magnetic field will penetrate deeply into
the superconductor and will have a large overlap with the region where the Cooper
pairs are located. As a result, the volume and shape of the boundary region will be
maximized.
Extending this to the case at hand, of a string break-up, is straightforward, since the
string actually represents a linear potential, a constant field. As a consequence, the
actual flavour of the q q̄ pairs, whose production triggers the break-up, is determined
by their mass, with relative probabilities given by
\mathcal{P}_{q\bar q} \,\propto\, \exp\left(-\frac{\pi m_q^2}{\sigma}\right) ,   (7.71)
with σ the string tension. Assuming quark masses as above (m_{u,d} = 350 MeV, m_s =
450 MeV, m_{(ud)_0} = 700 MeV) and hadronic scales around Λ_QCD for this parameter,
σ ≈ Λ_QCD/fm ≈ 0.2 GeV², leads to relative probabilities of
Allowing the produced pair to have a transverse momentum, and remembering the
idea of a Gaussian distribution for the bulk of the produced hadrons, cf. Fig. 7.14 in
Section 7.3.1, the expression above is replaced by
\mathcal{P}_{q\bar q} \;\longrightarrow\; \mathcal{P}_{q\bar q}(p_\perp) \,\propto\, \exp\left(-\frac{\pi m_q^2}{\sigma}\right)\exp\left(-\frac{\pi p_\perp^2}{\sigma}\right) \,=\, \exp\left(-\frac{\pi m_\perp^2}{\sigma}\right) .   (7.73)
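Plugging the constituent masses and the string tension quoted above into Eq. (7.71) gives a quick numerical feel for flavour selection at a string break-up; the short Python sketch below simply evaluates the relative weights.

```python
import math

sigma = 0.2                                                  # string tension in GeV^2
masses = {"u": 0.35, "d": 0.35, "s": 0.45, "(ud)": 0.70}     # constituent masses in GeV

weights = {q: math.exp(-math.pi * m**2 / sigma) for q, m in masses.items()}
norm = weights["u"]
print({q: round(w / norm, 4) for q, w in weights.items()})
# roughly u : d : s : (ud) = 1 : 1 : 0.28 : 0.003, i.e. strangeness and, even more
# strongly, diquark (baryon) production are suppressed relative to light quarks
```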
Fig. 7.18 Break-up dynamics of a string: the string breaks at two points
i and j, and the resulting quarks meet at point ij
Consider the situation sketched in Fig. 7.18 where a quark qi from the break-up
point i and an anti-quark q̄j from the break-up point j meet at a point ij. For simplicity
only one spatial dimension x is taken into account, as this problem is practically
(1 + 1)–dimensional. Since the massless quarks move with the speed of light in the
linear potential V (x) = σx, their energies and momenta are given by
The system is at rest, when its momentum is zero, or, in other words, if the two break-
ups i and j happen at the same time ti = tj . In general the overall rapidity of the ij
system is
y_{ij} \,=\, \frac{1}{2}\,\log\frac{(x_i-x_j)+(t_i-t_j)}{(x_i-x_j)-(t_i-t_j)} .   (7.76)
The requirement of a positive mass squared of the combined system,
m_{ij}^2 \,=\, E_{ij}^2 - p_{ij}^2 \,=\, \sigma^2\left[(x_i-x_j)^2 - (t_i-t_j)^2\right] \,>\, 0 ,   (7.77)
translates into the requirement that the two vertices i and j are separated by a space-
like distance and thus causally disconnected. This implies that all string break-ups
The flavours and the transverse momenta of the hadrons are given by the Gaussian
distribution in Eq. (7.73). The selection of the actual hadron is then further driven by
multiplet-specific weights, corresponding to the same logic already encountered in the
cluster fragmentation model. After this, the only quantity left to fix is the splitting
parameter z of the longitudinal momenta between the hadron and the residual string.
The overall result for this splitting, however, should be independent of which axis
is chosen for the decays in the sequence. To implicitly guarantee boost invariance along
the longitudinal axes, the corresponding light-cone momentum fraction z of the hadron
is used. Demanding this independence leads to the Lund parametrization for the
fragmentation function describing the process string → string + hadron:
f(z) \,=\, N\,\frac{(1-z)^a}{z}\,\exp\left(-\frac{b\, m_\perp^2}{z}\right) ,   (7.78)
where a and b are free parameters of the model and m_⊥ is the transverse mass of the
hadron. Typically there are two sets of parameters a and b, for strings with a quark and a
diquark end, but this tends not to describe the production of heavy hadrons very
well. In PYTHIA, therefore, further fragmentation functions for heavy hadrons, cf.
Eq. (7.64), are also available.
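The following short sketch (Python/NumPy) evaluates and samples the Lund symmetric fragmentation function of Eq. (7.78) by simple rejection; the values of a, b and of the transverse mass are illustrative choices, not PYTHIA defaults.

```python
import numpy as np

rng = np.random.default_rng(1)

def f_lund(z, a=0.68, b=0.98, m_perp2=0.25):
    # Lund symmetric fragmentation function, Eq. (7.78), up to normalization
    return (1.0 - z)**a / z * np.exp(-b * m_perp2 / z)

def sample_z(n=10000, **params):
    z_grid = np.linspace(1e-3, 1.0 - 1e-3, 1000)
    f_max = f_lund(z_grid, **params).max()          # envelope for rejection sampling
    samples = []
    while len(samples) < n:
        z = rng.uniform(1e-3, 1.0)
        if rng.uniform(0.0, f_max) < f_lund(z, **params):
            samples.append(z)
    return np.array(samples)

print(sample_z().mean())   # a mean z of roughly 0.4 for these parameter values
```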
The splitting of strings into hadrons and smaller strings can be iterated until only
a very light system, in the middle of the original string, is retained. For such light
systems procedures similar to cluster decays into hadrons are usually employed, for
more details cf. [853].
carrying energy and momentum, and the simple yo-yo motion of q q̄ pairs is replaced
by a more intricate pattern. This picture implies that the gluon experiences twice the
string force, since there are two strings attached to it, rather than the single string
attached to a quark. This also reflects the ratio of the Casimir operators of gluons and
quarks, C_A/C_F = 9/4 ≈ 2. As a consequence, hadron
production is enhanced in the sectors of phase space spanned by the qg and the g q̄
strings, while hadron production in the q q̄ sector is massively suppressed. This is a
direct consequence of the string picture, which can be viewed as an effective model of
QCD at low energies in the limit of infinitely many colours, Nc → ∞. The resulting
pattern of hadron production has already been anticipated in the discussion of the
drag effect in Section 5.1.2. It is interesting to note here that for relatively soft gluons
the kink becomes less and less prominent — the picture of gluons as kinks in a string
therefore automatically provides a smooth connection with the parton shower. The
actual treatment of these kinks in string fragmentation models is relatively subtle and
the reader is referred to the original literature [849, 850], the PYTHIA manual [853], or
the review [290].
7.3.5.6 Advanced features of baryon production
The string fragmentation picture developed so far leads to a strong correlation of
baryons in phase-space, since a diquark produced in a string break-up from which a
baryon emerges leads to the corresponding anti-diquark as the new end point of the
residual string. This strong correlation, however, is challenged by an OPAL measurement
of ΛΛ̄ correlations at LEP [101]. This reinforced the idea of the so-called popcorn
mechanism [165], which relates the production of baryons to the popping of more
than one q q̄ pair. This proceeds in such a way that two quarks of two pairs conspire to
form one diquark while two anti-quarks of two such pairs form an anti-diquark. This
would effectively lead to configurations where one or two mesons are formed between
two baryons, like BM B̄, BM M B̄, etc., thereby decorrelating them. This picture is
effectively realized by supplementing the potential string break-ups of Eq. (7.78) with
yet another one, namely
them in any detail in a book like this — a quick glance at the PDG [229] is testament
to the incredible amount of data and knowledge gathered up to today. So, instead of
trying to have an exhaustive discussion, which would be beyond the scope of this book,
a brief overview supplemented with references to a number of relevant review articles
must suffice. The focus will be put on those particles and their decays that are phe-
nomenologically most relevant. Typically these are the τ -leptons, hadrons containing
open heavy flavour like B and D mesons, and quarkonia states such as the J/Ψ.
\frac{g_W^2}{8}\;\frac{g^{\mu\nu} - \frac{p^\mu p^\nu}{m_W^2}}{p^2 - m_W^2} \;=\; \frac{e^2\, g^{\mu\nu}}{8\, m_W^2 \sin^2\theta_W} + \mathcal{O}\!\left(\frac{p^2}{m_W^2}\right) \;=\; \frac{G_F}{\sqrt{2}}\, g^{\mu\nu} + \mathcal{O}\!\left(\frac{p^2}{m_W^2}\right) .   (7.80)
Integrated over the phase space of the three outgoing particles, Eq. (7.79) yields the
well-known partial decay width for weak decays of leptons, namely
\Gamma_{\tau\to\nu_\tau\ell\bar\nu_\ell} \,=\, \frac{G_F^2\, m_\tau^5}{192\pi^3}\, f\!\left(\frac{m_\ell^2}{m_\tau^2}\right) \qquad\text{with}\qquad f(x) \,=\, 1 - 8x + 8x^3 - x^4 - 12 x^2\log x ,   (7.81)
taking into account the fact that, for simple reasons of energy and charge conservation,
the only allowed combinations of quarks are ūd and ūs, reflected also in the CKM
matrix element in front of the current.
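As a numerical sanity check of Eq. (7.81), the sketch below evaluates the leptonic partial widths with PDG-like inputs and divides by the total width obtained from the measured τ lifetime; radiative and W-propagator corrections are ignored, so only rough agreement with the measured leptonic branching ratios of about 17–18% per flavour should be expected.

```python
import math

G_F   = 1.1663787e-5            # Fermi constant in GeV^-2
m_tau = 1.77686                 # GeV
m_e, m_mu = 0.000511, 0.105658  # GeV
hbar  = 6.582119569e-25         # GeV s
tau_lifetime = 2.903e-13        # measured tau lifetime in s

def f(x):
    return 1.0 - 8.0 * x + 8.0 * x**3 - x**4 - 12.0 * x**2 * math.log(x)

def gamma_leptonic(m_l):
    # leptonic partial width of Eq. (7.81)
    return G_F**2 * m_tau**5 / (192.0 * math.pi**3) * f((m_l / m_tau)**2)

total_width = hbar / tau_lifetime
for name, m_l in (("e", m_e), ("mu", m_mu)):
    print(name, gamma_leptonic(m_l) / total_width)   # roughly 0.18 and 0.17
```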
In the simplest case, when only one pseudo-scalar hadron P S is produced, like a
π − or a K − , the current reduces to
J_{PS}^\mu \,=\, V_{uq}\,\langle PS|\,\bar u_u\,\gamma_L^\mu\, u_{q}\,|0\rangle \,=\, -\,i\, V_{uq}\, f_{PS}\; p^\mu_{PS}   (7.83)
or branching ratios of about 11 % and 0.7 % for the decay into a single charged pion
or kaon.
In principle, currents similar to the one in Eq. (7.83) could also be written for the
production of vector or tensor particles, but in reality these particles typically decay
fairly quickly and it is thus more useful to model currents with more than one hadron
in the final state. Such more complicated final states in τ decays consist of a number
of pions, kaons and η-mesons, and they typically exhibit relatively rich structures
resulting from a variety of different resonances such as ρ’s, K∗’s, a_1’s, etc. As an
example, consider the next-most complicated case of the production of a charged hadron
h− , such as a π − or a K − , accompanied by a neutral hadron h0 like a π 0 , K 0 , or η.
Then,
J^\mu_{h^-h^0} \,=\, V_{uq}\,\langle h^- h^0|\,\bar u_u\,\gamma_L^\mu\, u_{q}\,|0\rangle
\;=\; \sqrt{2}\, V_{uq}\left[\left(g^{\mu\nu} - \frac{q^\mu q^\nu}{q^2}\right)\left(p_{h^-,\,\nu} - p_{h^0,\,\nu}\right) F_V^{h^-h^0}(q^2) \,+\, q^\mu\, F_S^{h^-h^0}(q^2)\right] ,
(7.86)
with q = p_{h^-} + p_{h^0}, and where two form factors, F_V and F_S, have been introduced. A
typical ansatz to parameterize the form factors has been provided in [703], giving rise
to the phenomenological Kühn–Santamaria model. It has been further refined for
instance in [529, 706] and forms the basis for the successful TAUOLA package [646] and
other implementations of τ -decays in HERWIG++ [597] and SHERPA. In this ansatz, the
form factors are composed of a number of Breit–Wigner resonances, such that, for
instance,
F_V^{\pi^-\pi^0}(s) \,=\, \frac{1}{\sum_V \alpha_V}\,\sum_V \frac{\alpha_V\, m_V^2}{m_V^2 - s - i\, m_V\,\Gamma_V(s)} ,   (7.87)
where the factors α_V define the relative weights of the contributions, the sum over V
includes V ∈ {ρ, ρ′, ρ′′}, and Γ_V(s) is the scale-dependent total width of the resonance.
These terms, and in particular the relative weights α_V, are typically fitted to experimen-
tal data; in order to further improve the agreement, sometimes additional resonances
are introduced in this kind of phenomenological model, which do not necessarily cor-
respond to a known physical state.
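A minimal numerical sketch of such a form factor is given below: a normalized, weighted sum of fixed-width Breit–Wigner resonances for ρ, ρ′ and ρ″. The masses, widths and weights are rough illustrative numbers, and a realistic implementation (as in TAUOLA or the generators mentioned above) uses fitted parameters and s-dependent widths Γ_V(s).

```python
# (mass [GeV], width [GeV], relative weight alpha_V) -- illustrative values only
resonances = {
    "rho":   (0.775, 0.149, 1.00),
    "rho'":  (1.465, 0.400, -0.15),
    "rho''": (1.720, 0.250, 0.05),
}

def breit_wigner(s, m, gamma):
    # fixed-width Breit-Wigner; the full model uses an s-dependent Gamma_V(s)
    return m**2 / (m**2 - s - 1j * m * gamma)

def F_V(s):
    norm = sum(alpha for (_, _, alpha) in resonances.values())
    return sum(alpha * breit_wigner(s, m, g)
               for (m, g, alpha) in resonances.values()) / norm

for sqrt_s in (0.5, 0.775, 1.0, 1.5):
    print(sqrt_s, abs(F_V(sqrt_s**2))**2)   # |F_V|^2 peaks near the rho mass
```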
Alternatively, for the construction of these currents or the relevant form factors,
low–energy effective theories of QCD could be invoked. The most appropriate candi-
date appears to be chiral perturbation theory (χPT); for a comprehensive review
see [554]. The idea underlying this effective theory is to understand chiral symmetry
as one of the fundamental symmetries of the SU(3)_F ⊗ SU(3)_F Lagrangian, with the
quark mass terms breaking it [433–435, 553, 794]. This allows for an expansion in
ratios of quark masses over typical momentum transfers in mesonic processes. This
idea has been extended to also include resonances such as the vector mesons named
above and their coupling to the pseudo-scalars in resonance chiral perturbation
theory [494, 495], RχPT.
where q and q′ are the constituents of the meson M. For pseudo-scalar mesons PS,
which as constituents of the lowest lying multiplet are the ones that typically decay
through the weak interaction, again the hadronic part of the matrix element can be
replaced with Eq. (7.83) involving the decay constants for the heavy mesons, summa-
rized in [799] as
Of course such decays of the light pseudo-scalars π ± and K ± typically do not play a
role for LHC physics, since the weak interaction leads to a long lifetime of these objects
which in turn usually reach the detector. However, the very same decay modes of heavy
mesons are relevant for a number of reasons. First of all, on their own they provide an
interesting laboratory for testing the interface of perturbative QCD and derived effective
theories with lattice QCD and data, and thus allow for the measurement of important
quantities necessary for the description of the phenomenologically relevant rare decays.
In addition, due to the presence of heavy quarks the weak decays would be potentially
susceptible to interactions that are sensitive to the quark mass. They thereby allow
for a relatively clean way to study, for instance, the effects of interactions mediated by
charged Higgs bosons. The case of a B meson decay illustrates this, where the inclusion
of charged Higgs bosons would modify the above partial width by a factor [635]
r \,=\, \left[\,1 - \tan^2\beta\;\frac{m_{PS}^2}{m_{H^\pm}^2}\right]^2 .   (7.91)
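To get a feel for the possible size of this effect, the short sketch below evaluates the factor r of Eq. (7.91) for the leptonic decay of a B meson, for a few illustrative choices of tan β and the charged-Higgs mass.

```python
m_B = 5.279   # GeV, the pseudo-scalar mass entering Eq. (7.91) for B -> tau nu

def r(tan_beta, m_Hpm):
    # correction factor of Eq. (7.91)
    return (1.0 - tan_beta**2 * m_B**2 / m_Hpm**2)**2

for tan_beta in (10.0, 30.0, 50.0):
    for m_Hpm in (200.0, 500.0, 1000.0):
        print(tan_beta, m_Hpm, round(r(tan_beta, m_Hpm), 3))
# a light H+ at large tan(beta) can strongly suppress (or even enhance) the rate
```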
where ξ(vh · vH ) denotes the Isgur–Wise function [640, 642], which depends on the
velocities vh and vH of the two mesons h and H. ξ(vv 0 ) is normalized such that
\xi(1) \,=\, 1 + \mathcal{O}\!\left(1/m_Q^2\right) .   (7.94)
Similar equations also hold true for cases with a more complicated spin structure,
cf. [784], where due to the heavy quark symmetry the form factors enjoy similar prop-
erties and some remarkable relations among each other.
It should be noted here that such an analysis can also be extended to decays of
heavy baryons [568, 569, 643, 745] and to heavy-to-light transitions, to cases where
h is a light meson, see [293, 519, 641] for some early work. In contrast, the hadronic
state X in semi-leptonic decays can also be composed of a variety of hadrons, leading
to inclusive semi-leptonic decays when all configurations are summed over. From
a simple physical point of view such configurations stem from the fragmentation of the
outgoing quark produced in the weak decay and the spectator quark or diquark which
together with the decaying quark have formed the hadron. In this simplistic picture,
the fragmentation does not change the partial widths on the parton level, and the
inclusive partial widths are thus given by the quark-level expression only; the quark
current is not replaced by hadronic matrix elements. In principle this would allow for
a fairly straightforward determination of relevant quantities describing the decay, like
for instance the CKM element V_{qq′} present in the transition amplitude of Eq. (7.92).
In view of the larger number of form factors in heavy-to-light transitions this was
thought to be particularly relevant for the case of Vub [639, 719]. The experimental
difficulty in such a measurement, however, is to collect all relevant final-state particles
and to cover all possible phase space for them. This poses a challenging problem at
hadron colliders such as the LHC.
6 As a first result of this symmetry consider for example the mass splitting between pseudo-scalar
and vector mesons with the same flavour quantum numbers such as B ∗ and B:
m_{B^*} - m_B \,\propto\, 1/m_B ,
and as a result the quadratic mass differences are nearly constant,
m_{B^*}^2 - m_B^2 \,\approx\, m_{D^*}^2 - m_D^2 \,\approx\, m_{D_s^*}^2 - m_{D_s}^2 \,\approx\, 0.5\ \mathrm{GeV}^2 .
with perturbatively calculable coefficients rn [212]. The picture underlying this factor-
ization is as follows: due to the large mass difference between the incident B meson
and the pions the q q̄ pair forming one of them will be fairly collimated. As it is a
colour singlet, soft gluons with momentum of the order of ΛQCD will not see it and
thus decouple at leading order in ΛQCD /mB . Similar reasoning also holds true for
the spectator quark, which in the B rest frame typically also carries momentum of
order ΛQCD . If it does not take part in any way in the hard interaction, it merely con-
tributes the quantum number to the pion formed on the b-quark side of the current.
This is then absorbed into a corresponding B → π form factor already encountered
in the treatment of semi-leptonic decays. Due to the size of this colour singlet and its
composition, the two pions therefore factorize nicely at leading order. This reasoning
could be extended to some degree to other final states, and in particular also to those
containing a heavy meson instead of the pion [213].
Fig. 7.19 Some example Feynman diagrams for rare decays of heavy
quarks: b → sγ (left) and b → sℓℓ̄ (centre and right). The graph in the
middle motivated the slightly unintuitive name of “penguin-diagrams”.
b → sℓℓ̄, b → sss̄, etc. Quite often these processes are also known as “penguin”
decays. The most prominent ones at the LHC to date are the decays B → K∗µ−µ+ and
B_s → φµ−µ+, triggered by the quark-level b → sℓℓ̄ transition. In addition, although
not a decay, mixing in systems of neutral mesons, such as B0–B̄0 mixing, also falls
into the category of such processes.
In FCNC processes, the change of flavour quantum numbers along the quark lines
typically stems from a loop consisting of W bosons and a combination of quarks —
u, c, and t in the examples depicted in Fig. 7.19. By invoking the unitarity of the
CKM matrix and by assuming massless charm and up quarks, the contribution of the c
and u quarks can be summed and combined with the t contribution using the same
combination of CKM elements V_{bt} V_{ts}^*.7 This is the celebrated GIM mechanism [581].
As a consequence, rare processes such as b → sγ are driven by the large mass hierarchy
between the up-type quarks; conversely, similar processes such as c → uγ are GIM-
suppressed.
The interest in rare processes is fed by the observation that typically the particles
running in the loops are fairly heavy — W bosons and t quarks — and that similar
loops could also originate from hitherto unknown heavy particles from new physics
scenarios. Therefore, such rare processes allow indirect probes of physics beyond the
SM. In order to systematically study such processes, operator product expansion
7 To see how this works, assume that the loop contributions to a process such as b → sγ can be
written as a linear combination of functions that depend on the ratio of quark and W masses, multiplied
with the corresponding CKM elements. Then

\mathcal{M}_{b\to s\gamma} \,\propto\, f\!\left(\frac{m_t^2}{m_W^2}\right) V_{bt}V_{ts}^* + f\!\left(\frac{m_c^2}{m_W^2}\right) V_{bc}V_{cs}^* + f\!\left(\frac{m_u^2}{m_W^2}\right) V_{bu}V_{us}^*
\;\xrightarrow{m_c,\,m_u\to 0}\; f\!\left(\frac{m_t^2}{m_W^2}\right) V_{bt}V_{ts}^* + f(0)\left(V_{bc}V_{cs}^* + V_{bu}V_{us}^*\right)
\;=\; \left[f\!\left(\frac{m_t^2}{m_W^2}\right) - f(0)\right] V_{bt}V_{ts}^* ,

where the unitarity of the CKM matrix has been used in the form

0 \,=\, \sum_q V_{bq}V_{qs}^* \,=\, V_{bt}V_{ts}^* + V_{bc}V_{cs}^* + V_{bu}V_{us}^* \;\longrightarrow\; V_{bt}V_{ts}^* \,=\, -\left(V_{bc}V_{cs}^* + V_{bu}V_{us}^*\right)
(OPE) is frequently used [893], which rests on a factorization of short- and long-
distance contributions to a given process. This results in operators describing the latter,
multiplied by Wilson coefficients taking account of the former. The result of the
loop and possibly higher-order corrections drives the coefficients, which are therefore
the relevant quantities sensitive to potential new physics effects. The construction of
this operator basis has been intensively discussed in the literature, for a comprehensive
review cf. [289].
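The GIM argument of footnote 7 can be made tangible in a few lines of code: using an arbitrary stand-in for the loop function and rough real-valued CKM products that respect unitarity by construction, the full sum over internal quarks is numerically indistinguishable from the single term [f(m_t²/m_W²) − f(0)] V_bt V_ts*. The loop function and the numbers below are purely illustrative.

```python
# Toy illustration of the GIM cancellation sketched in footnote 7.
def f(x):
    # arbitrary smooth stand-in for the true loop function
    return x / (1.0 + x)

m_W = 80.4
masses = {"u": 0.002, "c": 1.27, "t": 173.0}   # quark masses in GeV

lam_u, lam_c = 3.3e-4, 0.0397                  # rough magnitudes of V_bu V_us*, V_bc V_cs*
lam_t = -(lam_u + lam_c)                       # imposed by CKM unitarity
ckm = {"u": lam_u, "c": lam_c, "t": lam_t}

full = sum(ckm[q] * f((masses[q] / m_W)**2) for q in masses)
gim  = (f((masses["t"] / m_W)**2) - f(0.0)) * ckm["t"]
print(full, gim)   # nearly identical: the light-quark contributions are GIM suppressed
```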
the annihilation into three gluons, two gluons and a photon, or three photons, see also
Fig. 7.20. By and large, the rates calculated for leptonic, photonic and hadronic
decays follow the pattern suggested by these parton–level considerations. A simple
back-of-the-envelope estimate suggests that the branching ratio to a single lepton is
given by
\mathrm{BR}_{J/\Psi\to\ell\bar\ell} \;\approx\; \frac{e_c^2\,\alpha}{C\,\alpha_s^3(m_c) \,+\, e_c^2\,\alpha\sum_f e_f^2} \;\approx\; \frac{e_c^2\,\alpha}{5\,\alpha_s^3(m_c) + 4\,\alpha\, e_c^2} \;\approx\; 5\,\% ,   (7.96)
where ec = 2/3 is the charge of the charm quark, and the factor C = 5 stems from
the colour factor related to the transition of a singlet to three gluons including their
symmetrization. The sum over accessible fermions includes e, µ, u, d, and s for the
case of charmonia, yielding a factor of four. This crude estimate of a branching ratio
of 5% has to be compared with the measured branching ratio of about BR_{J/Ψ→ℓℓ̄} =
5.94% [799] for the decay of a J/Ψ into one lepton flavour. It has to be noted, though,
that this result of course depends crucially on the choice of scale in αs .
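For concreteness, the estimate of Eq. (7.96) can be evaluated in a couple of lines; the value of α_s(m_c) used here is an illustrative choice, and, as just stated, the result depends strongly on it.

```python
alpha   = 1.0 / 137.0
alpha_s = 0.22          # illustrative value at the charm mass
e_c     = 2.0 / 3.0
C       = 5.0           # colour/symmetrization factor quoted in the text
sum_ef2 = 4.0           # e, mu, u, d, s (with colour factors), as in the text

BR = e_c**2 * alpha / (C * alpha_s**3 + e_c**2 * alpha * sum_ef2)
print(BR)   # about 5% for this alpha_s, in line with the crude estimate in the text
```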
Decays of this kind are important for two reasons. First of all, they provide a test of
perturbative QCD at relatively low scales, which is interesting in its own right. Further-
more, and probably far more importantly, at hadron colliders such as the TEVATRON or
the LHC, the lepton pairs produced in decays of J/Ψ, Ψ0 , or Υ mesons, at well-defined
and relatively sharp invariant masses of about 3.097 GeV, 3.686 GeV, and 9.46 GeV,
respectively, allow, together with leptons from the Z boson, a calibration of the muon
chambers and electromagnetic calorimeters. They therefore directly contribute to the
overall success of every measurement with leptons in the final state.
8
Data at the TEVATRON
In this chapter, many of the ideas developed in the book up to now will be put
to the test, focusing on the comparison of QCD theory predictions from fixed–order
calculations, analytic resummation calculations, and full hadron–level simulations with
data from the TEVATRON era. Not only did the experiments at the TEVATRON test the
theory of QCD over a wide range of scales, but in addition they also probed interesting
non–perturbative aspects such as soft QCD interactions and multiple parton–parton
scattering. While such an environment is very close to the conditions encountered at
the LHC, the experimental data discussed here have been taken at much lower energies
and in a comparably-reduced phase space.
Nevertheless, the goal of this chapter is to appreciate the precision of theory and
experiment achieved at the TEVATRON, that was instrumental in inspiring a profound
faith in the technology that is now employed at the LHC. We will review some of
the salient analyses at the TEVATRON, dealing with many of the same processes that
will also be discussed for the LHC. The aim is not to present a complete review of
TEVATRON physics, including all of the most up-to-date results, but to discuss those
physics topics which will serve as a good pedagogical introduction to QCD physics at
the LHC. This will help to set the scene for Chapter 9, where results from Run I of
LHC will be confronted with theoretical predictions in the same spirit.
The transverse momentum distribution for charged tracks with pseudo-rapidity less
than 1 is shown in Fig. 8.1 for CDF in Run II, compared to the similar distribution
in Run I [74, 75]. As expected, the majority of tracks are at very low transverse
momentum. On average, there are slightly more than 2 tracks per unit pseudo-rapidity
for events in which at least one track has a transverse momentum larger than 0.5 GeV.
Tracks in the higher pT range are constituents of jets; as can be observed in the figure
there is a larger cross-section for the production of these tracks at 1.96 TeV than at
1.8 TeV. Fig. 8.2 shows that the track–pT distribution is reasonably well-described by
PYTHIA using Tune A.1
Late in its career, the TEVATRON carried out an energy scan, running at energies
of 300 and 900 GeV, in addition to its normal Run II energy of 1.96 TeV. The data
obtained from the relatively short runs proved to be very useful in tuning models for
minimum bias production. The average particle density (dn/dη) for charged particles
1 Tune A refers to the values of the parameters describing multiple-parton interactions and initial
state radiation which have been adjusted to reproduce the energy observed in the region transverse
to the jet.
with |η| ≤ 0.8 and pT ≥ 0.5 GeV is shown in Fig. 8.3 [72]. An extrapolation predicts
a charged particle density of 3.1 at 7 TeV and 3.3 at 8 TeV.
Fig. 8.3 The average track multiplicity distribution for CDF in the central
rapidity region (|η| ≤ 0.8), for tracks with pT ≥ 0.5 GeV, as a function of
the centre-of-mass energy. Reprinted with permission from Ref. [72].
For example, the geometry for one study is shown in Fig. 8.5, where the towards and
away regions have been defined with respect to the direction of the leading jet.
Of the two transverse regions indicated in Fig. 8.5, the one with the largest trans-
verse momentum is designated the TransMAX region and the one with the lowest, the
TransMIN region.2 The transverse momenta in these two regions are shown in Fig. 8.6.
As the lead jet transverse momentum increases, the momentum in the TransMAX
region increases; the momentum in the TransMIN region does not. The amount of
transverse momentum in the TransMIN region is consistent with that observed in
high multiplicity minimum bias events at the TEVATRON. At the parton level, the
TransMAX region can receive contributions from the extra parton present in NLO
2 A similar analysis was carried out in Run I, using cones of radius 0.7, at the same η as the lead
jet in the event and ±90° in φ, to define the MIN and MAX regions [127].
Fig. 8.6 The sum of the transverse momenta of charged particles inside
the TransMAX and TransMIN regions, as a function of the transverse
momentum of the leading jet. The solid curves are the
predictions from PYTHIA and the dashed curves are the predictions from
HERWIG. Reprinted with permission from Ref. [133].
inclusive jet calculations. The TransMIN region cannot. There is good agreement be-
tween the TEVATRON data and the PYTHIA tunes, which is not surprising since the data
were used in the creation of the tunes.
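The TransMAX/TransMIN construction described above is easy to state in code: charged particles with 60° < |Δφ| < 120° relative to the leading jet are split into the two transverse regions, and the region with the larger summed pT is called TransMAX. The sketch below uses toy (pT, φ) inputs, not real data.

```python
import numpy as np

def trans_regions(pt, phi, phi_leading_jet):
    pt, phi = np.asarray(pt), np.asarray(phi)
    dphi = (phi - phi_leading_jet + np.pi) % (2.0 * np.pi) - np.pi   # wrap to [-pi, pi)
    side1 = (dphi >  np.pi / 3.0) & (dphi <  2.0 * np.pi / 3.0)      #  60 < dphi < 120 deg
    side2 = (dphi < -np.pi / 3.0) & (dphi > -2.0 * np.pi / 3.0)      # -120 < dphi < -60 deg
    sums = sorted([pt[side1].sum(), pt[side2].sum()])
    return sums[1], sums[0]                                          # (TransMAX, TransMIN)

pt  = [1.2, 0.7, 3.5, 0.6, 0.9]          # toy charged-particle pT values in GeV
phi = [1.8, -1.9, 0.1, 2.0, -2.2]        # toy azimuthal angles in rad
print(trans_regions(pt, phi, phi_leading_jet=0.0))   # -> (1.8, 0.7)
```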
3 A tower refers to the lateral segment of a calorimeter reading out a specific region in ∆η and ∆φ.
There may be more than one longitudinal division of the calorimeter within the tower, in which case
the energies of all of the longitudinal divisions are added together to form the tower energy.
The theoretical systematic errors are primarily from the PDF uncertainty. Such cross-
sections have thus proven to be useful as inputs to global PDF fits.
The transverse momentum distribution for Z bosons, measured at CDF in Run II
of the TEVATRON [65], is shown in Fig. 8.9 along with comparisons to predictions from
RESBOS and FEWZ2. The FEWZ2 and RESBOS predictions agree well with the data
at high transverse momenta. The inset shows the low transverse momentum region,
where the RESBOS prediction also matches the data, including the turn-over of the
cross-section for pT < 5 GeV, as expected from a resummation calculation. This very
low pT region is sensitive to non-perturbative effects. FEWZ2 is not shown for the low-
pT region as, being a fixed-order calculation, it will not provide sensible predictions.
It is worth noting that the CDF pT data have been plotted with a very fine binning,
made possible by the excellent tracking resolution of the CDF detector. Such binning
allows a better determination of the low pT physics.
For Drell-Yan production, the average transverse momentum of the lepton pair
has been measured as a function of their invariant mass. Fig. 8.10 [65] shows the
measurement made by CDF in Run II. The average transverse momentum increases
roughly logarithmically with the square of the Drell-Yan mass, as expected from the
discussion in Section 2.3.2. The data agree well with the default PYTHIA 6.2 prediction
using Tune A. Also shown are two predictions involving tunes of PYTHIA that give
larger or smaller values for the average Drell-Yan transverse momentum as a function
of the Drell-Yan mass. The Plus/Minus tunes were used to estimate the initial-
state-radiation uncertainty for the determination of the top mass in CDF. Most of
the tt̄ cross-section at the TEVATRON arises from q q̄ initial states, so the Drell-Yan
measurements serve as a good model. This will not be true at the LHC, where the
dominant initial state for tt̄ production is gg.
Fig. 8.8 The Z rapidity distribution from DØ in Run II. Reproduced with
permission from Ref. [94].
Run II, the Midpoint jet algorithm was developed, which pushed the IR-safety problem
to a higher order (NNLO).4 The SISCone algorithm, which is IR-safe to all orders,
was developed late in Run II, too late for any analyses,5 but Monte Carlo comparisons
using this algorithm and the Midpoint algorithm were carried out for many analyses. A
few analyses were carried out with the (IR-safe) kT algorithm. The anti-kT clustering
algorithm was developed too late for any TEVATRON analyses.
The IR-safety problem applies only to fixed-order calculations, i.e. any of the jet
algorithms mentioned above are IR-safe when applied to data or Monte Carlo events, simply
because for hadrons there is no infrared problem, the minimal hadron mass being
mπ ≈ 135 MeV. The differences between the Midpoint and SISCone algorithm are
finite and typically of the order of a few percent at most, so it is perfectly acceptable
to use a SISCone jet algorithm in a fixed-order prediction to compare to data taken
with the Midpoint algorithm. The stochastic instabilities introduced in Monte Carlo
simulations, and inherent in the data itself, tend to level the playing field for many of
the jet algorithms, since the extra effective seed in the Midpoint algorithm and the
extra seeds in the SISCone algorithm have little impact [508].
4 The Midpoint algorithm also introduced the use of the kinematic variables p_T and y, in contrast
to the variables E_T and η used for the earlier algorithms.
5 Tradition and inertia can make it difficult for experiments to give up old jet clustering algorithms,
even if better algorithms are available. Thus, it is very good that both ATLAS and CMS have used the
anti-kT algorithm from the start.
Fig. 8.10 The average transverse momentum for Drell-Yan pairs from
CDF in Run II, along with comparisons to predictions from PYTHIA. Re-
produced with permission from Ref. [65].
8.3.1 Corrections
For comparison of data to theory, the calorimeter tower energies clustered into a jet
must first be corrected for the detector response. The calorimeters in the CDF ex-
periment (or basically any experiment) respond differently to electromagnetic showers
than to hadronic showers, and the difference varies as a function of the transverse
momentum of the jet. The detector response corrections are determined using a de-
tector simulation in which the parameters have been tuned to test-beam and in-situ
calorimeter data. PYTHIA, with Tune A, is used for the production and fragmentation
of jets. The same clustering procedure is applied to the final state particles in PYTHIA
as is done for the data. The correction is determined by matching the calorimeter jet
to the corresponding particle jet. An additional correction accounts for the smearing
effects due to the finite energy resolution of the calorimeter. At this point, the jet is
said to be determined at the hadron level.
One of the observables that it is crucial to be able to describe is the jet shape.
A study of this quantity is shown in Fig. 8.11 [122], where the jet energy away from
the core of the jet (i.e. in the annulus 0.3 < R < 0.7) is plotted as a function of the
transverse momentum of the jet. The general feature of these curves, that jets become
more collimated as the jet transverse momentum increases, can be understood as due
to three effects: First, power corrections that tend to broaden the jet decrease as 1/pT
or 1/p2T ; second, a larger fraction of jets are quark jets rather than gluon jets; third,
the probability of a hard gluon to be radiated (the dominant factor in the jet shape)
Fig. 8.11 The fraction of the transverse momentum in a cone jet of radius
0.7 that lies in the annulus from 0.3 to 0.7, as a function of the transverse
momentum of the jet. Comparisons are made to several tunes of PYTHIA
(left) and to the separate predictions for quark and gluon jets (right).
Reprinted with permission from Ref. [122].
decreases as αS (p2T ). As can be seen in Fig. 8.11, the PYTHIA predictions using Tune
A describe the data well, even better than with the default PYTHIA prediction. In fact
a reasonable description of the jet shape can also be provided by the pure parton-
level NLO prediction [510], perhaps supplemented by non-perturbative corrections, as
discussed in Section 4.1, and in Ref. [680].
For data to be compared to a parton level calculation, the theory must be corrected
to the hadron level.6 In general, the data should be presented at the hadron level, and
the corrections between hadron and parton level should be clearly stated. In retrospect
this seems obvious, but the TEVATRON jet measurements were one of the first analyses
where this was true.
The hadronization corrections consist of two components: the subtraction from
the jet of the underlying event energy discussed in Section 8.1 and the correction for
a loss of energy outside a jet due to the fragmentation process. The hadronization
corrections can be calculated by comparing the results obtained from PYTHIA at the
hadron level to the results from PYTHIA when the underlying event and the parton
fragmentation into hadrons have been turned off. The underlying event energy is due
to the interactions of the spectator partons in the colliding hadrons and the size of the
correction depends on the size of the jet cone. As discussed earlier in this chapter, the
rule-of-thumb has always been that the underlying event energy in a jet event looks
very much like that observed in minimum bias events.
6 In some analyses at the TEVATRON, the data was corrected to the parton level using the inverse
of the parton to data corrections.
The fragmentation correction accounts for the daughter hadrons ending up outside
the jet cone from mother partons whose trajectories lie inside the cone (also known as
splash-out); it does not correct for any out-of-cone energy arising from perturbative
effects as these should be correctly accounted for in a NLO calculation. It is purely a
power correction to the cross section. The numerical value of the splash-out energy is
roughly constant at 1 GeV for a cone of radius 0.7, independent of the jet transverse
momentum. This constancy may seem surprising. But, as just discussed, as the jet
transverse momentum increases the jet becomes more collimated. The result is that
the energy in the outermost annulus, the region responsible for the splash-out energy,
is roughly constant. The correction for splash-out derived using parton shower Monte
Carlos can be applied to a NLO parton level calculation to the extent to which both
the parton shower and the two partons in a NLO jet correctly describe the jet shape.
The two effects of underlying event and splash-out produce corrections which go in
opposite directions. Therefore they partially cancel when computing the total correc-
tion for parton level predictions. For a jet of radius 0.7, the underlying event correction
is larger, so the correction for the parton level prediction is positive. The total correc-
tion is of the order of 7% for the lowest transverse momentum values in the inclusive
jet cross-section measurement, decreasing rapidly to less than 1% at higher pT val-
ues (falling roughly as 1/p2T , as would be expected for such power corrections). The
correction is roughly independent of rapidity. For a jet cone radius of 0.4, the fragmen-
tation correction is somewhat larger (increasing as 1/R for small R) but the underlying
event correction scales by the ratio of the cone areas (R2 ) [432]; as a result the two
effects basically cancel each other out over the full transverse momentum range at the
TEVATRON.
Note that these two corrections deal with non-perturbative physics only. The as-
sumption for the comparison to a NLO parton-only prediction is that the perturbative
aspect of the jet shape is reasonably well-described by one gluon (in the NLO cal-
culation) as with the parton shower (in the Monte Carlo). Thus, the fragmentation
corrections determined for the latter can be applied to the former. Studies of the jet
shape at NNLO should prove useful in testing this assumption.
Fig. 8.12 The inclusive jet cross-section from CDF in Run II. Reprinted
with permission from Ref. [116].
with the CTEQ6.1 and MRST2004 PDFs, is shown in Fig. 8.13 for the five rapidity
regions in the analysis [61].
A renormalization and factorization scale of p_T^jet/2 has been used in the calculation.
Typically, this leads to the highest predictions for inclusive jet cross-sections at the
TEVATRON (for R=0.7), as discussed in Section 4.1.7 There is good agreement with
the CTEQ6.1 predictions over the transverse momentum range of the prediction, in
all rapidity regions. The MRST2004 predictions are slightly higher at lower pT and
slightly lower at higher pT , but still in good overall agreement.
As noted before, the CTEQ6.1 and MRST2004 PDFs have a higher gluon at large
x as compared to previous PDFs, due to the influence of the Run I jet data from
CDF and DØ. This enhanced gluon provides a good agreement with the high pT CDF
Run I measurement as well. The DØ inclusive jet data taken in Run II, however, do
not favour this higher gluon at large x, but instead prefer a weaker gluon. As will be
observed in the next chapter, the inclusive jet data from the LHC do not yet provide
a definitive answer. The curves indicate the PDF uncertainty for the prediction using
the CTEQ6.1 PDF error set. The shaded band indicates the experimental systematic
uncertainty, which is dominated by the uncertainty in the jet energy scale (on the order
of 3%). It is important to note that for much of the kinematic range, the experimental
Fig. 8.13 The inclusive jet cross-section from CDF in Run II, for several
rapidity intervals using the Midpoint cone algorithm, compared on a linear
scale to NLO theoretical predictions using CTEQ6.1 PDFs. Reprinted with
permission from Ref. [61].
systematic errors are smaller than the PDF uncertainties; thus, this data has proven
useful as input to global PDF fits.
Fig. 8.14 The ratios of the inclusive jet cross-sections measured with
the kT algorithm (with D = 0.7) to those measured with the Midpoint
algorithm (with R = 0.7) from CDF in Run II, for several rapidity intervals,
with comparisons to the predictions of a NLO fixed-order QCD calculation
and from PYTHIA. The results are from Ref. [61].
into either. This is a feature endemic to any cone algorithm (including SISCone), but
not to the kT family of jet algorithms. Thus, this is another advantage for the use of
the anti-kT algorithm at the LHC.
The TeV4LHC workshop writeup [133] recommended the following solution to the
problem of unclustered energy with cone jet algorithms. The standard midpoint algo-
rithm should be applied to the list of calorimeter towers/particles/partons, including
the full split/merge procedure. The resulting identified jets are then referred to as
first pass jets and their towers/particles/partons are removed from the list. The same
algorithm is then applied to the remaining unclustered energy and any jets that result
are referred to as second pass jets. There are various possibilities for making use of the
second pass jets. They can be kept as separate jets, in addition to the first pass jets,
or they can be merged with the nearest first pass jets. The simplest solution, until
further study, is to keep the second pass jets as separate jets.
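Schematically, the recommended two-pass procedure could be implemented as below. The function run_midpoint is a hypothetical placeholder for a full midpoint cone implementation with split/merge (e.g. from an external jet package), and each returned jet is assumed to expose its list of (hashable) constituents; both are assumptions of this sketch, not a real API.

```python
def two_pass_clustering(objects, run_midpoint):
    # first pass: the standard midpoint algorithm with its full split/merge procedure
    first_pass_jets = run_midpoint(objects)
    clustered = set()
    for jet in first_pass_jets:
        clustered.update(jet.constituents)        # assumed attribute, see text above
    # second pass: re-run the same algorithm on the unclustered energy only
    leftovers = [obj for obj in objects if obj not in clustered]
    second_pass_jets = run_midpoint(leftovers)
    # simplest option discussed in the text: keep the second-pass jets separate
    return first_pass_jets, second_pass_jets
```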
It was originally thought that with the addition of a midpoint seed, the value of
Rsep used with the NLO theory could be returned to its natural value of 2.0 (cf.
Section 2.1.6). Now it is realized that the effects of parton showering/hadronization
result in the midpoint solution virtually always being lost. Thus, a value of Rsep of
1.3 (for split/merge fraction f=0.75) is required for the NLO jet algorithm to best
model the experimental one. The inclusive jet theory cross-section with Rsep = 1.3 is
approximately 3 − 5% smaller than with Rsep = 2.0, decreasing slowly with the jet
transverse momentum.
8.3.4 Inclusive jet production at the TEVATRON and global PDF fits
Inclusive jet production receives contributions from gg, gq, and qq (q q̄) initial states
as shown in Fig. 4.1. Thus, in principle, this process is sensitive to the nature of all
the PDFs. The experimental precision of the measurement, along with the remaining
theoretical uncertainties, means that the cross-sections do not serve as a meaningful
constraint on the quark or antiquark distributions. However, they do serve as an
important source of information on the gluon distribution, especially at high x. The
addition of the jet data from CDF and DØ resulted in a larger gluon distribution at
high x than present in PDFs determined without the TEVATRON jet data. The influence
of the high ET Run I jet cross-section on the high x gluon is evident. There is always
the danger of sweeping new physics under the rug of PDF uncertainties. Thus, it is
important to measure the inclusive jet cross-section over as wide a kinematic range as
possible, as was done by DØ in Run I [103] and by CDF [61] and DØ [86] in Run II.
The generic expectation is that most signals of new physics would tend to be central
while a PDF explanation should be universal, i.e. fit the data in all regions.
As inclusive jet production probes high scales, it serves as a useful observable
to search for the presence of quark compositeness. Fig. 8.15 compares the DØ jet
cross-sections measured in Run I to the NLO QCD predictions using the CTEQ6.1
PDFs, along with the cross-sections for jet production including a four-Fermi contact
interaction (as discussed in Section 4.1). The mass scale of the contact interaction, Λ
(cf. Eq. (4.18)), is probed at three values (1.6, 2.0 and 2.4 TeV), assuming constructive
interference [867]. The cross-section is plotted as a ratio to the pure QCD prediction.
The effect of the contact term is limited to the central rapidity regions (with of course
Fig. 8.15 Comparison of the DØ inclusive jet data from Run I to the
pure NLO QCD prediction using the CTEQ6.1 PDFs, and to the addition
of a contact interaction term with Λ values of 1.6, 2.0 and 2.4 TeV (solid
curves, from top to bottom). Reprinted with permission from Ref. [867].
the size of the effect decreasing with increasing mass scale). This DØ data was used in
the determination of the CTEQ6.1 PDFs. If there were a contact interaction, then the
PDFs would need to be refit, comparing the data to the theory with compositeness
included.
Fig. 8.16 (left) The energy distribution in an isolation cone about the
photon direction. Shown are the contributions from true photons and from
backgrounds. (right) The resultant photon signal fraction as a function of
photon transverse energy. Reprinted with permission from Ref. [58].
events fall on the tail of the (rapidly falling) jet fragmentation function, but the large
rate of jet production means that the background cannot be ignored.
To reduce the jet backgrounds, the photons are typically required to be isolated,
loosely at the trigger level, and more tightly off-line, similar to what is done for elec-
trons. For example, in the CDF measurements in Run II, a requirement is made that
the additional energy in a cone of radius R = 0.4 about the photon direction is less
than 2 GeV, once the pileup energy from additional minimum bias events has been
subtracted. An isolation requirement reduces not only jet backgrounds, but also contri-
butions from photon fragmentation functions. This isolation is tighter than the typical
isolation cuts applied at the LHC.
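A bare-bones version of such an isolation requirement is sketched below: the summed transverse energy of the other particles in a cone of R = 0.4 around the photon candidate, after subtracting an estimated pileup contribution, must stay below 2 GeV. The inputs and the pileup estimate are toy numbers; a real analysis works with calorimeter towers and a proper event-by-event pileup correction.

```python
import math

def is_isolated(photon_eta, photon_phi, particles,
                pileup_et=0.3, cone_R=0.4, max_iso_et=2.0):
    """particles: list of (ET, eta, phi) for everything except the photon candidate."""
    iso_et = 0.0
    for et, eta, phi in particles:
        dphi = (phi - photon_phi + math.pi) % (2.0 * math.pi) - math.pi
        if math.hypot(eta - photon_eta, dphi) < cone_R:
            iso_et += et
    return (iso_et - pileup_et) < max_iso_et

particles = [(0.7, 0.1, 0.2), (0.9, -0.2, 0.1), (5.0, 1.5, 2.0)]
print(is_isolated(0.0, 0.0, particles))   # True: (0.7 + 0.9) - 0.3 = 1.3 GeV < 2 GeV
```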
An example of a photon isolation distribution is shown in Fig. 8.16 (left) [58]. The
negative energy tail is a result of the pileup subtraction. The photon fraction in each
kinematic bin can be determined using templates of the isolation energy for photon
candidates and backgrounds for each kinematic bin. The resulting true photon fraction
is shown in Fig. 8.16 (right). The photon fraction of the sample rises rapidly towards
one as the photon candidate transverse momentum increases, as expected, since the
fixed isolation cut requires that the fraction z of the jet momentum taken by the leading π0
also increases. This is a general rule-of-thumb for both the TEVATRON and the LHC:
an isolated photon-like object is almost always a real photon.
The resulting cross-section is shown in Fig. 8.17 [58]. Good agreement is observed
with the NLO prediction from JETPHOX, except perhaps at low ET , where the data
is higher than theory. This has been observed in several other TEVATRON photon
measurements, and has been attributed to the effects of soft gluon radiation. However,
as will be seen in the next chapter, this has not been seen for similar measurements
at the LHC. Note that the cone isolation cut greatly suppresses the fragmentation
contribution to photon production, but does not explicitly remove it.
The photon + jet cross-section is interesting in its own right and, as discussed
already in Section 4.2, as an input to PDF fits. It can also be used for a calibration
of the jet energy scale. The electron energy scale is known very precisely from mea-
Fig. 8.17 The isolated single photon cross-section measured in CDF com-
pared to NLO QCD predictions from JETPHOX [348]. Reprinted with per-
mission from Ref. [58].
Fig. 8.18 (left) The sources for uncertainty of the jet pT response in the
central rapidity region. Here, the variable E = pT,γ cosh ηjet is used on
the horizontal axis, as the resolution is better than for jet energy. (right)
The resultant jet energy scale uncertainty, as a function of jet transverse
momentum, for the central rapidity region. Reprinted with permission from
Ref. [98].
pT response is shown in Fig. 8.18 (left) and is dominated by the photon energy scale
error. The method is restricted to the central rapidity region and is limited by the
statistics of the photon + jet sample to jets below 350 GeV. The energy scale can
be transferred to non-central rapidities using transverse momentum balancing in dijet
events. The energy scale for jets above that transverse momentum must be extrap-
olated. The final fractional uncertainty on the jet energy scale is shown in Fig. 8.18
(right) for the central rapidity region.
Diphoton production has also been extensively studied at the TEVATRON. In CDF,
for example, a study was carried out on the full Run II data sample, using the
same cuts and analysis techniques as for single photon production [67].8 In general,
the phenomenology is richer than for single photons. For example, the transverse
momentum cuts on the photons sculpt the diphoton pT spectrum, giving rise to a
bump in the spectrum often referred to as the Guillet shoulder [254, 255]. This can be
seen in Fig. 8.19, which shows a CDF measurement of the diphoton pT distribution [67],
with the shoulder around 35 GeV. This feature is due to configurations in which the
photon pair is accompanied by significant additional hard radiation.
As pointed out in Section 4.2.3, in particular in Table 4.1, this distribution is
difficult to describe with the first few orders of a fixed-order calculation. An NLO cal-
culation of the total diphoton cross-section, corresponding to the MCFM prediction in
Fig. 8.19, gives the first non-trivial prediction. Since the diphoton pair recoils against a
single parton, this calculation is insufficient to capture the shoulder and the description
of the data is relatively poor. In the NNLO calculation, which accesses configurations
with two recoiling partons for the first time, the shoulder begins to be reproduced
and the theoretical description is much improved. The SHERPA prediction, which here
8 Diphoton production has backgrounds from either one or both photons being faked by jets. Again,
as the ET of the photons increases, the purity fraction increases.
mentum cut is 20 GeV.9 Below a W boson pT of 20 GeV the W boson recoils against
more than one hard jet, and the effects of soft gluon emission also become important.
The HT distribution (the sum of the transverse momenta of all jets and leptons,
including neutrinos) is plotted in Fig. 8.20 (right) as a function of the jet multiplicity.
The HT variable is particularly sensitive to higher-order QCD effects, but is also
often chosen as a variable in which to look for the presence of BSM physics. There
is a significant variation in the level of agreement between the data and the
predictions evident in the figure, suggesting room for improvements
to these predictions. Note in particular the tendency of the NLO BLACKHAT+SHERPA
predictions to lie below the data at high HT for the ≥ 1 jet bin. We will return to this
observation in Chapter 9.
Note that the DØ measurements were carried out with the midpoint Cone jet
algorithm, but the results were corrected to the (infra-red safe) SISCone jet algorithm
using SHERPA. For the leading jet transverse momentum, this correction is very small
as the two algorithms are very close in behaviour.
A measurement of Z+jets production was carried out at CDF using the full data
sample [76]. The jet multiplicity distribution is shown in Fig. 8.21 for data com-
pared to BLACKHAT+SHERPA (left), and ALPGEN+PYTHIA, POWHEG+PYTHIA and
LoopSim+MCFM (right). Recall that this last prediction is obtained by supplementing
an exact NLO calculation with an approximate treatment of NNLO effects, according
to the procedure described in Section 3.4.2. The midpoint jet algorithm with R = 0.7
was used (in contrast to the other V+jets measurements, which used R = 0.4). It is
noticeable that the measured cross-section for Z+ ≥ 3 jets is approximately 30% larger
than the BLACKHAT+SHERPA prediction, contrary to what has been observed at the
LHC using the anti-kT jet algorithm with R = 0.4, and with W + ≥ 3 jets (using
R = 0.5) at the TEVATRON [96]. The Monte Carlo predictions are in better agreement
with the high jet multiplicity data, albeit with larger uncertainties. Looking back at
Fig. 4.22, the cross-section predictions for the SISCone jet algorithm (for ≥ 4 jets, but
the situation is roughly similar for ≥ 3 jets) tend to peak at smaller scales than do the
predictions for the anti-kT jet algorithm. The differences increase as the jet multiplicity
and the jet size increases. The peak cross-section values for the TEVATRON measure-
ment are actually quite similar for the two algorithms, as discussed in Section 4.3 [227].
The Z+ ≥ 3 jet cross-section was not determined using the anti-kT jet algorithm in
the CDF measurement, but a comparison was carried out using simulated data. The
resulting cross-sections, for the two jet algorithms, are much closer than implied by
the BLACKHAT+SHERPA predictions using a scale of HT /2.
Fig. 8.20 The W boson transverse momentum (left) and HT (right) dis-
tributions in inclusive W + n-jet events, for 1 ≤ n ≤ 4, measured by DØ.
Reprinted with permission from Ref. [99]. The measurements are compared
to several theoretical predictions.
being investigated depend on the decays of the two W bosons. The most useful final state
(in terms of the combination of rate and background) occurs when one of the W bosons decays
into a lepton and neutrino and the other decays into two quarks. Thus, the final state consists of
a lepton, missing transverse energy and of the order of four jets. The number of jets
may be less than 4 due to one or more of the jets not satisfying the kinematic cuts, or
more than 4 due to additional jets being created by gluon radiation off the initial or
final state. Because of the relatively large number of jets, a smaller cone size (R = 0.4)
has been used for jet reconstruction, with the CDF analyses in Run II using the JetClu
cone algorithm and the DØ analyses the Midpoint cone algorithm. No top analysis
has been performed using the kT jet algorithm. There is a sizeable background for
this final state through QCD production of W + jets. Two of the jets in tt̄ events are
created by b quarks; thus there is the additional possibility of an improvement of signal
purity by the requirement of one or two b-tags.
Top pair events also have a harder HT (sum of the transverse energies of all jets,
leptons and missing transverse energy in the event) than does the W + jets back-
ground. This is due to the harder spectrum of the jets from tt̄ decays (compared to
the background), resulting from the large mass scales inherent in top production. A
requirement of large HT thus improves the tt̄ purity.
The jet multiplicity distribution for the top candidate sample from CDF in Run II
is shown in Fig. 8.22 for the case of one of the jets being tagged as a b-jet (left) and two
of the jets being tagged (right) [118]. The requirement of one or more b-tags greatly
reduces the W + jets background in the 3 and 4 jet bins, albeit with a reduction in
the number of events due to the tagging efficiency. The b-tagging efficiency at the
TEVATRON was typically of the order of 40%. As will be seen in the next chapter, the
b-tagging efficiency is higher at the LHC, primarily due to a larger rapidity coverage
for the silicon detectors. The high jet multiplicity double-b-tagged events are almost
exclusively tt̄ events.
The top pair cross-section has been measured in a variety of final states (depending
on the W boson decays) with a variety of techniques. A compilation of measurement
results from CDF and DØ is shown in Fig. 8.23 [70]. The cross-sections from the two
experiments agree with each other and with the theoretical predictions.
Fig. 8.22 The expected number of W + jets events that are b-jet tagged
(left) and double-b-tagged (right), indicated by source. Reprinted with per-
mission from Ref. [118].
state kinematics difficult. For the final state with 6 jets (2 b-jets), complete kinematic
information is available; however, the QCD background from six-jet production is very
high.
The high statistics for top production accumulated at the TEVATRON have allowed
cross-checks of the calibration techniques for the top mass reconstruction, such as the
calibration of the light-quark jet response using the hadronic decay of the W boson
in tt̄ events.
There are two main techniques at the TEVATRON for top mass determination:
the template method and the matrix element method. In the template method, the
information from the lepton + jets final state is input to a χ2 determination, where the
reconstructed top mass is a free parameter. The χ2 is minimized for each possible way
of assigning the 4-vector information for each of the four leading jets to the top decay
products (if any jets are b-tagged, then they are required to be from the top decay and
not from the W boson decay). The χ2 expression has terms for the uncertainty on the
measurements of the 4-vectors of the decay products. There are two possible solutions
for the longitudinal momentum of the neutrino, so both are used. The minimum-χ2
solution (for jet assignment, neutrino solution and mtop) is chosen for each
event.
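A much-simplified sketch of the combinatorial core of the template method is shown below. The experimental χ2 also contains resolution terms for every measured four-vector and uses b-tagging information to restrict the allowed assignments; here only schematic W- and top-mass constraint terms are retained, and the resolutions, inputs and helper functions are illustrative rather than taken from the CDF or DØ analyses.

import itertools, math

MW, SIGMA_W, SIGMA_T = 80.4, 10.0, 15.0   # W mass and illustrative resolutions (GeV)

def mass(p):
    E, px, py, pz = p
    return math.sqrt(max(E * E - px * px - py * py - pz * pz, 0.0))

def add(*ps):
    return tuple(sum(c) for c in zip(*ps))

def neutrino_pz(lep, metx, mety):
    """Longitudinal neutrino momentum from the W-mass constraint
    (up to two real solutions; the real part is used if none exist)."""
    El, plx, ply, plz = lep
    a = 0.5 * MW * MW + plx * metx + ply * mety
    pt2 = plx * plx + ply * ply
    disc = a * a - pt2 * (metx * metx + mety * mety)
    if disc < 0.0:
        return [a * plz / pt2]
    root = El * math.sqrt(disc)
    return [(a * plz + root) / pt2, (a * plz - root) / pt2]

def best_top_mass(jets, lep, metx, mety):
    """Scan all assignments of the four leading jets to the top decay products
    and both neutrino solutions; return (chi2, mtop) of the minimum-chi2 choice."""
    best = (float("inf"), None)
    for q1, q2, b_had, b_lep in itertools.permutations(jets[:4], 4):
        for pz in neutrino_pz(lep, metx, mety):
            nu = (math.sqrt(metx * metx + mety * mety + pz * pz), metx, mety, pz)
            m_w_had = mass(add(q1, q2))
            m_t_had = mass(add(q1, q2, b_had))
            m_t_lep = mass(add(lep, nu, b_lep))
            mtop = 0.5 * (m_t_had + m_t_lep)        # analytic minimum over mtop
            chi2 = ((m_w_had - MW) / SIGMA_W) ** 2 \
                 + ((m_t_had - mtop) / SIGMA_T) ** 2 \
                 + ((m_t_lep - mtop) / SIGMA_T) ** 2
            if chi2 < best[0]:
                best = (chi2, mtop)
    return best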
The matrix element method has the ability to use theoretical information from
the matrix element, retaining all of the hard scattering correlations, in the top mass
determination. A likelihood is determined for each event that the theoretical model
from the matrix element describes the kinematics of the event. The technique is very
CPU-intensive, and until recently was restricted to the use of leading order matrix
elements. In [318], the matrix element method was extended to allow the calculation
of next-to-leading order weights on an event-by-event basis.
In either method, the determination of the top mass is obtained by comparing data
with Monte Carlo predictions. Thus, the top mass cannot be strictly identified with
any precise theoretical definition, such as the $\overline{\rm MS}$ mass or the pole mass discussed in
Section 4.5. The differences should be smaller than the current uncertainties
on the top mass from the TEVATRON measurements, but they may become an issue for future,
more precise determinations at the LHC.
A compilation of top mass determinations from the TEVATRON, with the global
mass fit dominated by lepton+jets final states, is shown in Fig. 8.24 [599]. The top
mass has been determined from the TEVATRON measurements to a precision of 0.64 GeV,
i.e. about 0.4%. All of the individual determinations of the top mass are consistent with
each other. These determinations of the top quark mass, together with the measured
W -boson mass, provide an indirect constraint on the mass of the Higgs boson. This is
shown in Fig. 8.25 [599].
The precision of the top mass determination at TEVATRON has reached the point
where some of the systematics due to QCD effects must be considered with greater
care. One of the potentially important systematics is that due to the effects of initial-
state radiation. Jets created by initial state radiation may replace one or more of the
jets from the top quark decays, affecting the reconstructed top mass. In the past,
the initial state radiation (ISR) systematics was determined by turning the radiation
off/on, leading to a relatively large impact. A more sophisticated and more correct
Fig. 8.24 A compilation of the top quark mass measurements from CDF
and DØ, from arXiv:1608.01881.
treatment was adopted in Run II, where the tunings for the parton shower Monte
Carlos were modified to produce more/less initial-state radiation, in keeping with the
uncertainties associated with Drell-Yan measurements as discussed in Section 8.5. The
resultant tt̄ pair transverse momentum distributions are shown in Fig. 8.26. The
changes to the tt̄ transverse momentum distribution created by the tunes are relatively
modest, as is the resultant systematic error on the top mass determination.
Note that the peak of the tt̄ transverse momentum spectrum is somewhat larger
than that for Z production at the TEVATRON, due to the larger mass of the tt̄ system.
As both are produced primarily by qq̄ initial states, the difference is not as large as it
is at the LHC, where the primary tt̄ production mechanism is through gg fusion.
It is also interesting to look at the mass distribution of the tt̄ system, as new physics
(such as a Z′ [622]) might couple preferentially to top quarks. Such a comparison for
Run II is shown in Fig. 8.27, without any sign of a high-mass resonance [69]. The
simulation of the Standard Model tt̄ signal for this analysis was carried out with
POWHEG using MSTW2008 NLO PDFs. Often, in previous TEVATRON studies, a LO
Monte Carlo such as PYTHIA was used, along with a LO PDF. Note that if we compare
the predictions for the tt̄ mass distribution at LO and NLO, we see that the NLO cross-
section is substantially less than the LO one at high mass. Further investigation shows
that the decrease of NLO compared to LO at high mass is found only in the qq̄ initial
state and not in the gg initial state. In fact, at the TEVATRON, the ratio of NLO to
Fig. 8.25 Implications of the measured top quark and W -boson masses
for the mass of the Higgs boson, from ref. [599].
Fig. 8.26 The PYTHIA predictions for the tt̄ transverse momentum using
the Plus/Minus tunes. We thank Prof. Un-Ki Yang for this figure.
LO for gg initial states grows dramatically with increasing top pair invariant mass.
This effect is largely due to the increase in the gluon distribution when going from
CTEQ6L1 in the LO calculation to CTEQ6M at NLO. For instance, at x ∼ 0.4 (and
hence an invariant mass of about 800 GeV) the gluon distribution is about a factor two
larger in CTEQ6M than in CTEQ6L1, giving a factor four increase in the cross-section.
Conversely, the quark distribution is slightly decreased at such large x. If NLO PDFs
were used for both the numerator (NLO) and denominator (LO), this dramatic effect
Fig. 8.27 The tt̄ mass distribution measured by CDF in Run II. Reprinted
with permission from Ref. [69].
would not exist. This is an example of the danger of using LO PDFs, especially in
regions relevant for discovery physics.
In any case, the absolute contribution to the tt̄ cross-section at high masses from gg initial
states at the TEVATRON is small, due to the rapidly falling gluon distribution at high
x. The dominant tt̄ production mechanism at the LHC in all mass regions is through
gg fusion.
the measurements. The inclusive asymmetry values from experiment and the NNLO
calculation are shown in Fig. 8.28. There is an ambiguity in the manner in which the
asymmetry can be calculated, i.e. whether the exact results are used for the numerator
and denominator, or whether the asymmetry ratio is expanded in terms of powers of
αs (mZ ). In the figure, the exact results at each order are shown in capital letters,
while the small letters refer to the expanded version of the calculation. The first four
theory predictions in the figure are QCD-only while the second four are QCD+EW.
The expanded version results in a larger value for the asymmetry with smaller errors.
The authors of [428] prefer the exact result, so that is used in the figures below.
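Schematically, since the forward-backward difference first appears at O(αs³) while the total cross-section starts at O(αs²), the two prescriptions differ only by higher-order terms; with purely symbolic coefficients Ni and Di (not taken from Ref. [428]),

A_{\rm FB}^{\rm exact} = \frac{\alpha_s^3 N_1 + \alpha_s^4 N_2 + \dots}{\alpha_s^2 D_0 + \alpha_s^3 D_1 + \dots}\,, \qquad
A_{\rm FB}^{\rm expanded} = \alpha_s \frac{N_1}{D_0} + \alpha_s^2 \left( \frac{N_2}{D_0} - \frac{N_1 D_1}{D_0^2} \right) + \dots\,,

so the two definitions agree at the first non-trivial order, and the difference between the upper-case (exact) and lower-case (expanded) entries in Fig. 8.28 is of higher order.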
The asymmetry is typically measured as a function of three dynamical variables:
∆Ytt̄, mtt̄, and pT,tt̄. The expectation is that at NLO in QCD the asymmetry should
increase linearly with ∆Ytt̄ and mtt̄,11 while it should switch sign at
non-zero pT (ISR/FSR radiation being the only contribution to the asymmetry away
from pT,tt̄ = 0).
At the TEVATRON, the asymmetry is not a subtle effect [68]. It is observable even
at the raw data level, as shown in Fig. 8.29 (left). The background and
non-perturbative effects dilute the net asymmetry, so after corrections
for these effects, the asymmetry increases, as shown in Fig. 8.29 (right). Note that the
parton-level asymmetry is larger than the background-subtracted asymmetry, which
in turn is larger than the asymmetry at the reconstructed level.
Comparisons of the parton-level NLO and NNLO QCD (no EW) asymmetry pre-
dictions to the CDF and DØ measurements for ∆Ytt̄ (left) and mtt̄ (right) are shown
11 The asymmetry is primarily proportional to β, the velocity of the top (or anti-top) in the tt̄
centre-of-mass frame [141].
Fig. 8.30 A comparison of the pure QCD (NLO and NNLO) tt̄ asym-
metry predictions to the data from CDF and DØ, as a function of (left)
the rapidity separation between the t and the t̄, and (right) the tt̄ mass
distribution. Reprinted with permission from Ref. [428].
in Fig. 8.30, from [428]. The CDF results tend to be higher than those from DØ, but
both are in statistical agreement. Note that many physics models that could increase
the asymmetry at high mtt̄ should also cause a change in the observed tt̄ mass spec-
trum. However, as shown in Fig. 8.27, no such deviation is observed, and the mass
distribution agrees with NLO predictions.
Predictions at NLO and NNLO (pure) QCD for the asymmetry as a function of
the transverse momentum of the tt̄ pair are shown in Fig. 8.31 (left). The NNLO
corrections greatly decrease the size of the (negative) asymmetry for non-zero pT,tt̄
values, leading to the net increase in the inclusive asymmetry discussed previously. The
CDF data are shown in Fig. 8.31 (right), compared to predictions from POWHEG and
Fig. 8.31 (left) Predictions for the QCD asymmetry as a function of the
tt̄ transverse momentum at NLO and NNLO. Reprinted with permission
from Ref. [428]; (right) the CDF tt̄ asymmetry distribution as a function
of the transverse momentum of the tt̄ pair, compared to predictions from
POWHEG and PYTHIA. Reprinted with permission from Ref. [68].
Fig. 8.32 (left) The distribution of the discriminant used for the sepa-
ration of single top production and its backgrounds. The black solid line
shows the total background. (right) The measured single top production
cross-sections from CDF and DØ, and the TEVATRON combination, com-
pared to a prediction at NLO+NNLL. Reprinted with permission from
Ref. [73].
was important to separate the event selections into orthogonal search regions, so that
the multivariate analyses could be optimized for each region. For example, for the
H → W + W − decay mode, where both W bosons decay leptonically, it was useful for
the searches to be broken into 0, 1 and ≥ 2 jet final states.
One of the backgrounds for the V H(→ bb̄) searches is V Z(→ bb̄) production;
however, this also serves as a useful calibration tool. The background-subtracted dis-
tribution for the reconstructed dijet mass in V Z final states is shown in Fig. 8.33. The
reconstructed cross section agrees well with the Standard Model prediction.
The best-fit Higgs boson signal cross section as a function of Higgs boson mass
for the final TEVATRON result is shown in Fig. 8.34 (left). A broad excess in the range
of 120–140 GeV can be observed; also shown are the expectations for the production
of a Higgs boson with either the Standard Model cross section, or the Standard Model
cross section multiplied by a factor of 1.5. This excess has a significance of 3.0 standard
deviations and can be associated with the production of a Higgs boson with a cross
section that is a factor of $1.44^{+0.59}_{-0.54}$ times the SM prediction. The best-fit values for the
cross sections from the four decay modes shown in Fig. 8.34 (right) are all consistent
with each other, and with the SM predictions.
8.8 Summary
The TEVATRON was the first hadron-hadron machine in which modern techniques
could be used for event reconstruction and analysis, and for comparison of data to
theory. Jet algorithms were developed that allowed more precise theoretical compar-
isons, although in most cases not with the full all-orders infrared-safety desired. Some,
but not all, measurements were presented at the hadron level, with complete informa-
tion about the parton-to-hadron corrections. Results at the hadron level allow theorists
This chapter on LHC results presents the culmination of the theoretical techniques de-
veloped in the earlier chapters, along with the data analysis experience from the TEVATRON.
Here, a wide range of exemplary data from Run I at 7 and 8 TeV will be discussed.1
For the cross-sections discussed in this chapter, the data have been corrected for all
experimental effects, so that effectively they are at the hadron level. Corrections for
detector inefficiencies and resolution effects have been taken into account by an unfold-
ing procedure. The theoretical predictions have been corrected for non-perturbative
effects, either in a Monte Carlo framework, or by the addition of non-perturbative
corrections to parton-level predictions.
Note that the latter approach is often ignored by experimenters at the LHC, outside
of the Standard Model groups, even though in many cases it offers the highest precision
comparison. This is especially true if the cross section has been calculated to NNLO.
If the cross-section is suitably inclusive, as for example the transverse momentum
distribution for the leading jet in Higgs+≥ 1 jet events, resummation effects should be
small, and a comparison of the data to a NLO or NNLO fixed order prediction should
certainly be carried out.2
Cross-sections for Standard Model processes at the LHC have been measured over
14 orders of magnitude, as shown in Fig. 9.1 for the ATLAS experiment. In general,
SM predictions are in remarkable agreement with data. It is a hallmark of the abilities
of the human mind that the technology developed and the calculations made in the
past decades prove to be so amazingly powerful and accurate.
The only fly in the ointment in this success story is that up to now no clear sign of
any physics Beyond the Standard Model (BSM) has shown up, despite a plethora of
searches.3 The higher energy (and higher integrated luminosity) for Run II holds the
promise that a threshold for new physics may be reached. However, it is clear that the
signatures for new physics may be subtle and a thorough understanding of pQCD is
1 At the time of the writing of this book, first measurements at 13 TeV were available. However,
comparisons have been limited to data from the 7 and 8 TeV running.
2 For this example, the cross-section is not totally inclusive, in the sense that a pT requirement
has been imposed on the jets, typically 30 GeV. But the resulting restriction on the phase space for
gluon emission is minimal and the effects of resummation thus are suitably small [198].
3 Unfortunately, the 750 GeV bump in the diphoton mass spectrum appears to have gone away in
the most recent 13 TeV data.
Table 9.1 Results of measurements of total cross-sections σtot , elastic cross-sections σel , and
the elastic slope B at the LHC.
Collab.   Ec.m.    Result
ATLAS     7 TeV    σtot = 95.35 ± 0.38 (stat.) ± 1.25 (exp.) ± 0.37 (extr.) mb [33]
                   B = 19.73 ± 0.14 (stat.) ± 0.26 (syst.) GeV−2
ATLAS     8 TeV    σtot = 96.07 ± 0.18 (stat.) ± 0.85 (exp.) ± 0.31 (extr.) mb [1]
                   B = 19.74 ± 0.05 (stat.) ± 0.25 (syst.) GeV−2
TOTEM     7 TeV    σtot = 98.0 ± 2.5 mb [167]
                   σel = 25.1 ± 1.1 mb
                   σinel = 72.9 ± 1.5 mb
TOTEM     8 TeV    σtot = 101.7 ± 2.9 mb [166]
                   σel = 27.1 ± 1.4 mb
                   σinel = 74.7 ± 1.7 mb
Fig. 9.2 The differential elastic cross-section dσel /d|t| at the 7 TeV LHC.
Reprinted with permission from Ref. [167].
where t is the momentum transfer between the protons, and the elastic cross-section
is obtained from the number of elastic events Nel divided by the integrated luminosity
L. Here ρ ≈ 0.15 captures the effect of the small real part of the elastic amplitude.
A similar expression relates the elastic slope B (the slope of the elastic cross-section
at |t| → 0) to the total cross-section. Obviously, in this type of determination of σtot,
one must extrapolate the differential elastic cross-section dσel/d|t|, and it is important
to extend the measurement to the smallest possible t. Results on this observable from the
TOTEM collaboration [167], including parameters of simple fits to its different features,
are exhibited in Fig. 9.2, where the peak as |t| → 0 is evident.4
It is worth noting that in contrast to the TEVATRON with its pp̄ collisions,5 it is
possible at the LHC to use the van der Meer method [877], which allows the direct
determination of the luminosity in collisions of like-sign charged particles. Practically
eliminating the large systematic uncertainties due to the luminosity in turn translates
to significantly reduced uncertainties on the cross-section measurement. The results
of the measurements quoted here put the cross section fits and the models and as-
sumptions underlying them, cf. Section 7.1.3, to a stringent test; many of the models
predicted σtot and related quantities to be somewhat larger than observed.
Moving on to measurements of inelastic cross-sections, it is fairly obvious that
one could define the inelastic cross-section as the difference of total and elastic cross-
section,
σinel = σtot − σel . (9.2)
This is precisely what the TOTEM collaboration uses in their determination of the
inelastic cross-sections, which for this reason are also quoted in Table 9.1. However,
quite often a distinction is made between low-mass diffractive and “truly”
inelastic events. Defining the scaled diffractive mass of the dissociating proton ζ
through
\zeta = M_X^2 / E_{\rm c.m.}^2 ,   (9.3)
inelastic events are defined as those events where the larger of the two diffractive
masses MX – or, correspondingly, ζ – is larger than some critical value. At the LHC
4 The β parameter defines the beam envelope. Its value at the collision point is termed β∗. For the
highest luminosities, it is desirable to have a small value of β∗. Measurements of the elastic scattering
cross-section, however, require special runs with large values of β∗.
5 As a reminder, the luminosity uncertainty at the TEVATRON was on the order of 6%.
MX > 15.7 GeV or ζ > 5·10−6 is often used.6 Events that do not satisfy this condition
are then dubbed single- or double-diffractive, and their cross-section is often given
w.r.t. the inelastic one. The corresponding results at c.m.-energies of Ec.m. = 7 TeV
are exhibited in Table 9.2.
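As a quick cross-check that the two ways of quoting the cut are indeed equivalent, Eq. (9.3) gives, at Ec.m. = 7 TeV,

M_X = \sqrt{\zeta}\, E_{\rm c.m.} = \sqrt{5\times 10^{-6}} \times 7000\ {\rm GeV} \approx 15.7\ {\rm GeV}\,.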
In Fig. 9.3 the results for the total and elastic cross-sections in pp and pp̄ colli-
sions are compared to data from lower energy measurements and from (higher energy)
cosmic ray measurements. It is clear from the collider data that the cross-sections
continue to increase logarithmically with the centre-of-mass energy of the reactions.
Looking at the data from the highest energies, obtained by astroparticle experiments,
it also appears that the data continue to rise with energy above the current collider
reach. The unitarization of the total hadronic cross-section has not set in, and
the Froissart bound is still ahead of us.
6 MX is the (diffractive) mass of the system emerging from the proton breakup, and Ec.m. is the
hadronic centre-of-mass energy.
Fig. 9.3 Total and elastic cross-sections for pp and pp̄ scattering vs. cen-
tre-of-mass energies ranging from a few GeV to 60 TeV, along with the
results of a global fit. Reprinted with permission from Ref. [1].
such a distinction of the different event categories is made, and in the simulation of
MB events, quite often an admixture of inelastic and single and double diffractive
event samples must be used. Historically, this has led to correcting the MB data for
diffractive events, which are effectively subtracted by the use of event generators,
typically PYTHIA. Quite often the data have been extrapolated to the full phase space,
i.e. to all pseudorapidities and to zero transverse momentum, again by invoking the
simulation tools. This can be risky.
MB measurements at the LHC include only those events which satisfy relatively
inclusive requirements on visible particles, such as the requirement of a certain, small
number of (charged) particles with a minimal, usually low, transverse momentum
inside the acceptance region of the detector. Usually, this boils down to the requirement
of something like one to six charged particles with a minimal transverse momentum
between about 100 and 500 MeV inside a pseudo-rapidity regime given by |η| < 2.5 or
similar.
In Fig. 9.4 such data, taken by the ATLAS collaboration [2], are shown, based on
events with at least one charged particle, where all charged particles are inside the
interval |η| < 2.4 and have transverse momenta p⊥ > 500 MeV. The particles are
distributed relatively evenly in pseudo-rapidity, forming a plateau. (If a smaller cut
on the transverse momentum (100 MeV) is used instead, the distribution has a slight
peak around |η| = 2.) All MC predictions give a flat η distribution, but there
are some differences in the normalization. The PYTHIA 6 ATLAS MC09 tune was fit
to data from 200 GeV to 1.96 TeV. The PYTHIA 6 AMBT1 tune was derived from the
MC09 tune, but was also fit to early LHC minimum bias data from 0.9 and 7 TeV. It
Fig. 9.4 Single charged particle spectra in Minimum Bias events at the
LHC at 7 TeV: pseudorapidity (left), transverse momentum (right). All results
refer to events with at least one charged particle in the interval |η| < 2.4
with a transverse momentum of p⊥ > 0.5 GeV. Reprinted with permission
from Ref. [2].
is not surprising then that the best agreement is with the AMBT1 tune, although the
MC09 tune also works well. In any case, the charged particle density for MB events at
the LHC is larger than that observed at the TEVATRON (Section 8.1). The transverse
momentum distribution falls seven orders of magnitude between 500 MeV and 10 GeV.
The high transverse momentum range is hard to fit; here, the AMBT1 and MC09 tunes
actually perform the worst.
In Fig. 9.5 (left), the charged particle multiplicity distribution is shown for MB
events. The various predictions agree reasonably well with the data and each other
at low nch , but there are clear deviations at large values of nch , and no prediction
describes the data well. The mean transverse momentum is plotted versus the track
multiplicity in Fig. 9.5 (right). The figure indicates that the greater the charged track
multiplicity, the larger the mean transverse momentum is. This is not surprising, in that
the larger the charged particle multiplicity is in an event, the more likely it is that the
protons have suffered a violent collision. One possible surprise is the degree of linearity
of the correlation for charged particle multiplicities above 20. The ATLAS AMBT1 tune
describes this correlation the best.
It is clear that such inclusive distributions present a formidable challenge to our
understanding of the data and of the strong interaction. Typically, the level of agree-
ment is in the range of a few to about 30%, even for the steepest distributions. That
Fig. 9.5 Single charged particle spectra in Minimum Bias events at the
LHC at 7 TeV: charged particle multiplicity (left), mean transverse momen-
tum versus track multiplicity (right). All results refer to events with at least
one charged particle in the interval |η| < 2.4 with a transverse momentum
of p⊥ > 0.5 GeV. Reprinted with permission from Ref. [2].
the event generators are able to describe the data to this level is somewhat remarkable,
even acknowledging that the MC parameters were often tuned to this data, in addition
to lower energy data. This agreement could not necessarily have been expected from
the beginning, especially keeping in mind that the simulation of MB events is based
on relatively primitive paradigms, namely multiple parton–parton scattering in
collinear factorization (see also Sections 7.1 and 7.2). While this provides some con-
fidence that inclusive features of MB physics are under control well enough to allow
for more complex measurements, it should not be forgotten that the agreement is less
than perfect.
One of the places where such cracks in the otherwise relatively good description of
MB data show up is in the production of individual hadron species, something that
may be dubbed “hadro-chemistry”, and in particular in the production of hadrons
with multiple strange quarks or of baryons. As an example, consider the case of hyperon
and cascade production (Λ and Ξ production), studied by the CMS collaboration [655],
with some characteristic distributions shown in Fig. 9.6. Not surprisingly, the rapidity
distributions for both the Λ and Ξ− are fairly flat, as in the inclusive case above.
However, the normalizations of the predictions are off by sizable factors at both 0.9
and 7 TeV for all predictions. The ratio of Ξ− /Λ production is also more or less flat
in rapidity, and the normalization of the predictions is again off. The ratio of MC to
Fig. 9.6 Strange particle spectra in Minimum Bias events at the LHC at
7 TeV: The rapidity distribution of Λ baryons is shown on the left and the
rapidity distribution of Cascade baryons Ξ is shown on the right, compared
to several MC predictions. Reprinted with permission from Ref. [655].
data, for the transverse momentum distributions of several strange particles, is shown
in Fig. 9.7, again at both 0.9 and 7 TeV. There is a sizable model dependence to the
description of these data, and no MC prediction describes the data well.
When comparing these data with various MC predictions, it is obvious that the
event generators struggle to achieve an agreement for baryons which is of similar
quality as in the inclusive case in Fig. 9.4 and Fig. 9.5. This is a testament to the
fact that the parameters of hadronization have been tuned to e+e− annihilation data,
typically from LEP 1, involving no incoming hadrons and therefore no highly energetic
sources of additional colour such as the beam remnants in hadronic collisions. This is
a clear hint at deficiencies in our current understanding of some aspects of
particle production at the LHC, while predictions for the bulk of particle production
are under reasonably good control.
Another region where the description of data is less than perfect has been studied
by the ATLAS collaboration in [16], namely the emergence of rapidity gaps. These
are regions in pseudorapidity where no particle above a certain minimal transverse
momentum is observed. In this study, rapidity gaps with respect to the edges of the
detector at η = ±4.9, and in different bins of the minimal p⊥ , are analysed. The
largest rapidity gap, ∆ηF , is reported. Diffractive events are responsible for the region
of large rapidity gaps. Regions of relatively small gap sizes, of up to ∆ηF ≈ 3, are
dominated by fluctuations in standard QCD events, which emerge from the absence
of parton radiation into that region (and its interplay with the hadronisation model).
It is clear, therefore, that for p⊥ of up to O (1 GeV), this is a highly model-dependent
statement, with non-perturbative effects such as colour reconnections and the like having a
strong impact. This in turn depends on the admixture of non–diffractive and diffractive
events, rendering this type of observable an interesting testbed for various reaction
Fig. 9.7 The ratio of MC predictions to data for K0S, Λ and Ξ transverse
momentum distributions. Reprinted with permission from Ref. [655].
mechanisms.
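The gap observable itself is simple to define; a minimal sketch, assuming a flat list of (η, p⊥) pairs for the stable particles and using the detector edges at |η| = 4.9 quoted above, could look as follows (all names are illustrative).

def forward_gap(particles, pt_min=0.2, eta_edge=4.9):
    """Largest pseudorapidity gap measured from either edge of the detector,
    using stable particles above the pt_min threshold (in GeV).
    `particles` is a list of (eta, pt) pairs."""
    etas = sorted(eta for eta, pt in particles
                  if pt > pt_min and abs(eta) < eta_edge)
    if not etas:
        return 2.0 * eta_edge                     # no particle at all
    return max(etas[0] + eta_edge, eta_edge - etas[-1])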
In Fig. 9.8 (left), the cross-section as a function of ∆ηF is shown for particles
with transverse momenta above 200 MeV (the lowest limit for this measurement),
along with several MC predictions. The PYTHIA tunes have been fit to ATLAS data. In
Fig. 9.8 (right), the same data is shown, now compared to separate predictions, from
PYTHIA 8, of the non-diffractive, single diffractive and double diffractive components.
Note the exponentially falling non-diffractive component at small gap sizes. At larger
gap sizes, there is a plateau, corresponding to a combination of single-diffractive and
double-diffractive processes. All MC models are able to reproduce the general trends
of the data, but none provide a perfect description.
It is noteworthy that the observables are defined entirely on the basis of visible,
and therefore physical final states, allowing for easier (and unbiased) interpretation of
Fig. 9.9 Underlying Event observables with respect to the leading charged
track transverse momentum in the transverse region at LHC at 7 TeV: the
charged particle density Nchg (left) and the mean of the charged particle
transverse momentum, ⟨p⊥⟩ (right). Only charged particles with |η| < 2.4
and p⊥ > 500 MeV have been considered. Reprinted with permission from
Ref. [6].
data. It is interesting to note that all of the predictions fall short of the ATLAS data
at 7 TeV. The agreement is better, typically within 10% or less, for predictions with
subsequent tunes that have incorporated this data, as might be expected.
A similar analysis based on jets rather than leading tracks has been presented by
CMS [362] and several of the results are displayed in Fig. 9.10. By the use of jets,
the range of the measurement is greatly extended from that obtained using only the
leading track. The mean charge density and the mean summed transverse momentum
are shown in the figure, compared to several MC predictions. The best agreement is
with the PYTHIA predictions using the Z1 tune. However, data from CMS at 7 TeV was
used in defining this tune, so the level of agreement is not surprising. The other tunes,
developed using lower energy data, also provide good agreement for the mean charge
density distribution, but not for the mean summed transverse momentum distribution.
Fig. 9.11 provides a more differential look at the UE data, using the same ob-
servables as in the previous plot. Good agreement between the data and the MC
predictions is found, especially for PYTHIA using the Z1 tune.
The event generators generally work well at the LHC for the description of the UE.
This finding further strengthens the statement made in the discussion of MB data, that
the event generators are adept at describing the bulk of LHC events. This indicates
that the ideas underlying the construction of the non–perturbative models responsible
for the simulation of MB and UE physics cannot be completely off the mark. However,
similar to the case of MB, some deficiencies in the UE simulations start to appear
when going to more taxing regions of phase space.
Fig. 9.10 Underlying Event observables with respect to the leading jet
transverse momentum in the transverse region at LHC at 7 TeV: the charged
particle density Nchg (top left), the sum of the charged particle transverse
momenta Σp⊥ (top right), and their distributions (bottom left and right),
all with respect to the leading jet transverse momentum. Only charged
particles with |η| < 2 and p⊥ > 500 MeV have been considered. Reprinted
with permission from [362].
Fig. 9.11 Underlying Event observables with respect to the leading jet
transverse momentum in the transverse region at the LHC at 7 TeV: the
charged particle density Nchg distribution (left), the sum of the charged
particle transverse momenta Σp⊥ distribution (right). Only charged par-
ticles with |η| < 2 and p⊥ > 500 MeV have been considered. Reprinted
with permission from Ref. [362].
Fig. 9.12 Some example diagrams illustrating single (left) vs. double-parton
(right) scattering production of W jj.
An interesting feature of the Underlying Event (UE) is that the secondary interactions
of the hadron constituents that generate most of the UE can themselves become
hard and give rise to physical objects such as additional jets, or even gauge bosons.
To illustrate the latter, consider the production of same-sign W pairs, such as
W+W+ pair production. Clearly, parton-level processes such as uu → W+W+dd must
be invoked already at the lowest order in the Standard Model. The cross-section for
this process at the LHC is of the order of a few (about five) femtobarns, and the
simplified model for double-parton scattering (DPS), Eq. (7.43), would yield a similar
size for the DPS production cross-section of W+W+ pairs. This renders same-sign W
pairs a smoking gun for double-parton scattering.
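The simplified model of Eq. (7.43) is commonly quoted in the 'pocket-formula' form sketched here (the precise normalization conventions may differ): the two partonic scatters are taken to be completely independent, with all information about the transverse structure of the proton lumped into a single effective cross-section σeff, and with a symmetry factor of 1/2 for identical final states,

\sigma_{AB}^{\rm DPS} = \frac{m}{2}\, \frac{\sigma_A\, \sigma_B}{\sigma_{\rm eff}}\,, \qquad m = 1\ (A = B),\ \ m = 2\ (A \neq B)\,,

so that for same-sign W pairs the DPS rate reduces to σ²(W+)/(2σeff).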
However, the first measurement of DPS at the LHC was achieved with the associated
production of a W boson with two jets, W jj, by the ATLAS collaboration in [17].
The defining feature of DPS in contrast to the production of the same final state in
one single partonic interaction, cf. Fig. 9.12 for an illustration of the two production
modes, is that in the former the W and the di-jet systems are kinematically decoupled,
and each of them typically has a relatively small total transverse momentum. This
motivated ATLAS to use the total transverse momentum of the di-jet system or the
total transverse momentum of the two jets divided by the sum of their individual
transverse momenta,
\Delta^{\rm n}_{\rm jets} = \frac{|\vec p_{\perp}^{\,j_1} + \vec p_{\perp}^{\,j_2}|}{|\vec p_{\perp}^{\,j_1}| + |\vec p_{\perp}^{\,j_2}|}   (9.4)
as sensitive observables. The latter, ∆njets, yields values between 0 – when the two
jets balance each other exactly, with the same transverse momentum oriented back-
to-back – and 1 – when the jets point in the same direction. The DPS contribution is
therefore concentrated at relatively low values of this observable.
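As a concrete illustration of Eq. (9.4) (with invented jet momenta), the observable reduces to a few lines:

import math

def delta_n_jets(jet1_pxpy, jet2_pxpy):
    """Eq. (9.4): modulus of the vector sum of the two jet transverse momenta,
    divided by the scalar sum of the two jet transverse momenta."""
    sx = jet1_pxpy[0] + jet2_pxpy[0]
    sy = jet1_pxpy[1] + jet2_pxpy[1]
    return math.hypot(sx, sy) / (math.hypot(*jet1_pxpy) + math.hypot(*jet2_pxpy))

print(delta_n_jets((40.0, 0.0), (-40.0, 0.0)))   # balanced back-to-back jets -> 0.0
print(delta_n_jets((40.0, 0.0), (25.0, 0.0)))    # collinear jets             -> 1.0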
This is shown in the left panel of Fig. 9.13, where ATLAS data are compared with a
combination of simulation results and a DPS sample constructed from data. The for-
Fig. 9.13 (left) The distribution of ∆njets is shown for W jj events, along
with the two templates described in the text. (right) The extracted value of
σef f from this measurement is plotted along with other measurements of
this parameter at the LHC and at lower energies. Reprinted with permission
from Ref. [17].
9.2 Jets
9.2.1 Inclusive jet production
At the LHC, in contrast to the TEVATRON, IR-safe jet clustering algorithms, in partic-
ular the anti-kT algorithm, are universally used. As mentioned previously, the anti-kT
jet algorithm was developed only near the end of the activity of the TEVATRON. In
addition to its ease of use with fixed-order predictions, the anti-kT algorithm provides
jets that are very nearly cone-shaped, allowing an easy determination of the
effective jet area. ATLAS typically uses jet sizes of 0.4 and 0.6 [31], while CMS uses
jet sizes of 0.5 and 0.7 [383, 405]. Both experiments will expand the range of jet sizes
used, in particular to be able to compare directly to the other experiment’s results,
for example with a common jet size of 0.4. Here, we discuss CMS results for inclusive
jet cross-section measurements with the anti-kT (D = 0.5) and anti-kT (D = 0.7) jet
algorithms, at a centre-of-mass energy of 8 TeV. Similar results exist for ATLAS with
the anti-kT (D = 0.4) and anti-kT (D = 0.6) algorithms.
The calorimeter coverage for both ATLAS and CMS goes out to rapidities of the
order of 4.5. However, many of the analyses involving jets restrict themselves to jets
in more central rapidity regions (typically |yjet| ≤ 2.5–3), where more tracking in-
formation is available. The tracking information serves both to improve the energy
resolution of the measured jet, and as a way of discriminating between jets from the
event of interest and jets produced by pileup. Fewer tools are available at higher ra-
pidities, but this region is still important for many physics results. Jet measurements
at high y provide useful information in PDF fits and for discriminating between the
VBF and gluon-gluon fusion mechanisms for Higgs boson production, for example. A
number of analyses at both ATLAS and CMS have measured jets out to the full rapidity
coverage, as will be shown in this chapter.
In CMS, jet measurements are conducted primarily with the particle-flow event re-
construction algorithm [400], in which tracking and calorimetry information is used in
a framework optimized to provide the best jet energy resolution. An offset correction is
used to remove the energy contributed by additional proton-proton interactions [303].
The offset method calculates a rapidity-dependent energy density ρ, which when mul-
tiplied by the jet area, provides an indication of the energy to be subtracted from the
jet. Most of the pileup is due to collisions in the same bunch-crossing, with smaller
contributions from out-of-time pileup. Note, though, that unlike at the TEVATRON, the
underlying event energy is subtracted by the offset correction along with the pileup
energy. The non-perturbative corrections for the underlying event can effectively be
added back to the data for easier comparison to Monte Carlo predictions. This offset
method is becoming the standard for both experiments.
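The area-based offset correction amounts to a one-line subtraction per jet. In the sketch below, the event energy density ρ and the jet area are assumed to be supplied by the reconstruction, and the numerical values are purely illustrative.

def offset_corrected_pt(raw_pt, jet_area, rho):
    """Area-based offset correction: subtract the expected pileup (and, at CMS,
    underlying-event) contribution rho * area from the raw jet pT."""
    return max(raw_pt - rho * jet_area, 0.0)

# e.g. a raw 60 GeV anti-kT (D = 0.5) jet, area ~ pi * 0.5**2, rho = 15 GeV per unit area
print(offset_corrected_pt(60.0, 3.14159 * 0.5 ** 2, 15.0))   # roughly 48 GeV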
Results for the measurement of the inclusive jet cross-section for the anti-kT (D =
0.7) jet algorithm for all rapidity intervals are shown in Fig. 9.14 compared to NLO
QCD predictions using CT10 PDFs, and modified by non-perturbative corrections [405].
A linear comparison of the data to NLO jet cross-section predictions with various
PDFs, for the central rapidity region, is shown in Fig. 9.15. The jet measurement
reaches transverse momenta greater than 2 TeV, with the integrated data sample of
19.7 fb−1 . Better agreement is observed for the anti-kT (D = 0.7) results than for the
results using the anti-kT (D = 0.5) algorithm (not shown), perhaps indicating that
while the NLO prediction (where at most two partons can be in a jet) describes the
jet shape reasonably well, it does not describe the jet shape completely. See also the
discussion below.
Over most of the kinematic range, the experimental uncertainties are smaller than
the theoretical uncertainties (both PDF and scale). The scale uncertainties have greatly
improved upon completion of the NNLO jet calculation. Given the spread of PDF
predictions, this data should be useful in parton distribution function fits.
Note that NLO electroweak corrections are already of the order of
5% at 1 TeV and increase fairly rapidly with jet transverse momentum, cf. the simple
estimate provided in Eq. (3.232) and Fig. 3.12. This estimate is confirmed by an exact
Fig. 9.15 The ratio of the CMS inclusive jet cross-section measured using
the anti-kT (D = 0.7) jet algorithm to NLO predictions using the CT10
PDFs, as well as the ratios of NLO predictions using other PDFs to pre-
dictions using CT10. Reprinted with permission from Ref. [405].
for the presence of new contact interactions, for example due to quark compositeness.
At the LHC, the dijet cross-section has been measured out to dijet masses of 5 TeV.
Typically, the measurement is divided into bins of y ∗ , with y ∗ defined as half of the
absolute value of the rapidity difference between the two jets. Using the kinematic re-
lations of Section 4.1 it is straightforward to show that, at leading order, this quantity
is related to the centre-of-mass scattering angle θ∗ by | cos θ∗ | = tanh y ∗ . Thus high
values of y ∗ correspond to larger values of cos θ∗ . A measurement of the dijet mass
cross section, in bins of y ∗ , carried out by ATLAS [27] is shown in Fig. 9.17, compared
to NLO parton level predictions from NLOJET++. The measurement in the plots has
been carried out with the anti-kT (D = 0.6) jet clustering algorithm (similar mea-
surements are available with the anti-kT (D = 0.4) algorithm). Jets are reconstructed
using topological cell clusters [39]. These clusters are determined from calorimeter
cells and local hadronic calibration weighting. The latter depends on the good 3-D
Fig. 9.16 The ratio of the CMS inclusive jet cross-section measured us-
ing the anti-kT (D = 0.5) jet algorithm to the jet cross-section using
D = 0.7. Comparisons are made to LO and NLO theoretical predictions,
with and without non-perturbative corrections. Reprinted with permission
from Ref. [383].
both tree-level effects of order (ααs, α²) and weak loop effects of order (ααs²) [469].
The probability for the NLO predictions to describe the data, taking into account
the experimental uncertainties, is indicated in the plots for each y ∗ bin. Both the CT10
and HERAPDF1.5 PDFs describe the data well, except perhaps in the y ∗ interval from
1.0 to 1.5. Contact interactions preferentially produce events at low y ∗ compared to
QCD. Thus, the most sensitive region to search for the effects of contact interactions
is at low y ∗ (< 0.5) and high dijet mass (> 1.31 TeV). Using a model of QCD+contact
interactions with left-left coupling and destructive interference, the ATLAS data is
sufficient to exclude contact interactions at a scale Λ less than 7.1 TeV (using the
CT10 PDFs). Similar results are also available from CMS [376].
Dijet production is not expected to be well-described by fixed-order predictions
in kinematic configurations where either a large rapidity interval exists between the
two jets, and/or there is a veto on the existence of a third jet in the rapidity interval
bounded by the dijet system. In these situations, higher order corrections can become
important, and logarithmic terms depending on the rapidity separation between the
two leading jets7 or on the average transverse momentum of the dijets may need to
be resummed in order to achieve a good description of the data.
The region where two jets, with a fixed pT threshold, are separated by a large
rapidity interval corresponds to a large value of ŝ and a small value of t̂. Such regions
7 Technically, the logarithmic terms depend on the dijet mass, but in situations where the jets
are separated by a large rapidity interval and the transverse momenta of the jets are similar, the
argument of the logarithm reduces to the rapidity separation.
Fig. 9.18 Electroweak corrections for the dijet mass cross-section from
ATLAS for several different y ∗ bins. Reprinted with permission from
Ref. [27].
are dominated by t-channel gluon exchange and one expects a linear growth of jet
multiplicity with increasing ∆yjj . This can be observed in Fig. 9.19, which shows the
mean number of jets (defined by pT > 20 GeV) in the rapidity interval bounded by the
dijet system, for different average transverse momenta for the tagging jets. The ATLAS
data is compared to predictions from HEJ and POWHEG [3]. Here, the ∆yjj interval
is defined by the two jets in the event that have the greatest rapidity separation. If
instead, the two highest pT jets were used, the growth with ∆yjj would be reduced
by about a factor of two [3]. Thus, rapidity ordering may be more efficient in rejecting
gluon-gluon fusion production of a Higgs boson, in order to measure VBF Higgs boson
production, than using the two highest pT jets, as is currently done. The reason for the
faster increase with rapidity ordering can be easily understood. If the bounding jets
are the two highest pT jets, then any additional jets produced in the gap are required
to be in the transverse momentum range determined by the jet cutoff and the second
highest pT jet in the event. For rapidity ordering, there is no such bound, and there
is effectively a larger phase space. There are, however, practical difficulties in
dealing experimentally with jets at very forward rapidities, especially in
high pileup conditions.
Fig. 9.19 The average jet multiplicity in the rapidity interval between
two jets separated by the largest rapidity interval, as a function of the
dijet rapidity interval. Results are shown for different values of the average
transverse momenta of the tagging jets. Reprinted with permission from
Ref. [3].
Fig. 9.20 The ATLAS inclusive jet multiplicity distribution at 7 TeV com-
pared to LO and NLO predictions from NJET. The jets are measured using
the anti-kT (D = 0.4) jet algorithm. The plot is taken from Ref. [183] using
data from Ref. [4]. Reprinted with permission from Ref. [183].
In the figure, comparisons are made to POWHEG, coupled either with the PYTHIA
or HERWIG parton shower and to HEJ, at the partonic level. The POWHEG predic-
tions include a full NLO partonic description of the dijet system and the PYTHIA and
HERWIG parton showers provide a resummation of soft and collinear gluon radiation.
The HEJ formalism provides a leading logarithmic resummation of terms proportional
to the rapidity separation of the two jets, embedded in a framework that includes
fixed-order corrections from multi-jet matrix elements.
ATLAS, for example, has measured final states with up to 6 jets at 7 TeV, with the
requirement that the leading jet have a transverse momentum greater than 80 GeV
and additional jets have transverse momentum greater than 60 GeV. The anti-kT jet
algorithm with D = 0.4 and D = 0.6 was used. A comparison of the ATLAS data for the
jet multiplicity distribution is shown in Fig. 9.20, along with LO and NLO predictions
from NJET [183]. The NLO calculation significantly decreases the scale uncertainty
from that obtained at LO, and in general is in better agreement with the data, with
the exception of the 2-jet bin, where there are large negative NLO corrections. For
the higher jet multiplicities (for the bins where NLO predictions are available), the
ratio between theory and data is in the range of 1.2–1.3. As Ref. [183] notes, the main
driver for the difference between the LO and NLO predictions is the use of LO PDFs
for the former. If NLO PDFs are used for both predictions, the results at the two
orders are very close. In particular, if a scale of ĤT /2 is used, the ratio of the NLO to
LO predictions tends to be very flat as a function of the relevant kinematic variables.
This same behaviour has been observed for W/Z+jets at the TEVATRON, as discussed
earlier, and will be encountered again in the context of W/Z+jets at the LHC.
In Figure 9.21 (left), the ratio of the cross-section for the production of (n + 1)
and n jets for the ATLAS data is compared to NLO predictions using several PDFs.
Within uncertainties, the predictions agree with the ATLAS data for σ4 /σ3 and σ5 /σ4 .
In this case, the LO and NLO predictions for the ratios are within 10% of each other
due to cancellations of PDF effects. Figure 9.21 (right) shows the σ3/σ2 ratio
(for D = 0.6) as a function of the lead jet transverse momentum. Good agreement is
obtained for predictions from all PDFs at large leading jet transverse momentum.
Fig. 9.21 (left) The ratio of the cross-sections for (n + 1) and n-jet pro-
duction, for n = 2, 3 and 4, in ATLAS data [4] and from the theoretical
predictions of NJET +SHERPA. The jet clustering uses the anti-kT (D = 0.4)
algorithm. (right) The 3-jet to 2-jet ratio as a function of the leading jet
transverse momentum using anti-kT (D = 0.6) jet clustering. Predictions
are shown at LO and NLO for several PDFs. Reprinted with permission
from Ref. [183].
larger masses (0.3 ≤ m/(pT R) ≤ 0.5) dominated by a single hard gluon emission.
This shape means that the average jet mass is above the peak value. A simple rule-
of-thumb, that describes the result of an exact NLO calculation of the jet mass to
reasonable accuracy, is that the average mass of a jet at the LHC (13 TeV) measured
with the anti-kT jet algorithm is approximately given by ⟨m⟩ = 0.16 pT R [508]. Since
the factor of αs should be accompanied by the colour charge of the hardest parton
in the jet, gluon jets should have an average mass a factor of √(CA/CF) greater than
quark jets. The general formula is for an average of gluon and quark jets. The jet mass
distribution does not depend strongly on the centre-of-mass energy, but will depend
on the jet algorithm (as well as the jet size). At the particle level, jet masses will be
larger than at the parton level, due to non-perturbative contributions but with the
difference growing smaller with increasing jet pT .
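As a quick numerical illustration of the rule of thumb quoted above (a sketch, not a replacement for the exact calculation of Ref. [508]):

import math

CA, CF = 3.0, 4.0 / 3.0

def average_jet_mass(pt, R, coeff=0.16):
    """Rule-of-thumb average jet mass <m> ~ 0.16 * pT * R for a typical
    quark/gluon admixture (the coefficient is the one quoted in the text)."""
    return coeff * pt * R

m_avg = average_jet_mass(500.0, 0.4)   # ~ 32 GeV for a 500 GeV, R = 0.4 jet
gluon_to_quark = math.sqrt(CA / CF)    # = 1.5: gluon jets ~ 50% heavier than quark jets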
The jet mass is an interesting variable to measure, not just because of the perturbative QCD aspects, but also because a massive jet may be a (boosted) W boson, or a top quark, or even a Higgs boson [106]. For example, it was shown that the
discrimination of the signal for a Higgs boson decaying into a bb̄ pair can be improved
in the kinematic region where the Higgs is at high transverse momentum, and the bb̄
final state is reconstructed within a single (fat) jet [297].
With a few exceptions [64], there was little investigation into jet masses at the
TEVATRON, but this information has become an integral part of many analyses at
the LHC [8], especially in the context of jet grooming. In general, all jet grooming
techniques are designed to provide a separation between the decays of heavy objects
and the QCD branchings that are a normal aspect of parton evolution inside jets. In
addition, the grooming techniques try to remove soft energy depositions inside the jets,
which can arise both from the underlying event in a hard collision, as well as from the
multiple minimum bias interactions that are present in high luminosity LHC running
conditions. Three of the major grooming tools are filtering [297], trimming [699], and
pruning [512, 513], which are described below in the context of a CMS analysis.
CMS has measured jet mass distributions in both V +jets and inclusive dijet events,
and has examined the impact of filtering, trimming, and pruning on the jet mass distri-
butions [377]. In filtering, a jet determined through a regular jet clustering algorithm,
most typically the anti-kT algorithm for the LHC, is re-clustered using the Cambridge-
Aachen algorithm with a smaller jet size (R = 0.3 for the CMS analysis). The resulting
new sub-jets are ordered in transverse momentum and the jet is redefined using only
the three hardest sub-jets. In the trimming algorithm, jets are again re-clustered using
a smaller jet size (using the kT -clustering algorithm), and sub-jets are only kept if
they pass the requirement that pT,sub > fcut · λhard. Typically, λhard is chosen to be
equal to the transverse momentum of the original jet. For this CMS analysis, Rsub has
been chosen to be 0.2 and fcut to be 0.03. The pruning algorithm re-clusters the con-
stituents of a jet with the Cambridge-Aachen algorithm, using the original parameters,
but requiring that for two sub-constituents i and j, the softer of the two constituents
is removed when the following conditions are satisfied:
z_{ij} = \frac{\min(p_{\perp i}, p_{\perp j})}{p_{\perp i} + p_{\perp j}} < z_{\mathrm{cut}} ,    (9.5)

\Delta R_{ij} > D_{\mathrm{cut}} \equiv \alpha \cdot \frac{m_J}{p_\perp} ,    (9.6)
where mJ and pT are the mass and transverse momentum of the (original) jet, and
the parameters zcut and α have been chosen to be 0.1 and 0.5.
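To make the grooming criteria above concrete, the following minimal Python sketch encodes the trimming requirement and the pruning test of Eqs. (9.5) and (9.6), with the parameter values quoted for the CMS analysis. The function names and example kinematics are illustrative only; a real implementation would act on the full clustering history, e.g. within FASTJET.

def keep_trimmed_subjet(pt_subjet, pt_jet, f_cut=0.03):
    """Trimming: keep a subjet only if pT,sub > f_cut * lambda_hard, with
    lambda_hard taken as the original jet pT, as in the CMS analysis."""
    return pt_subjet > f_cut * pt_jet

def prune_away_softer(pt_i, pt_j, delta_R, m_jet, pt_jet, z_cut=0.1, alpha=0.5):
    """Pruning test of Eqs. (9.5)-(9.6): the softer of two sub-constituents is
    discarded when the splitting is both soft (z < z_cut) and wide-angle
    (Delta R > D_cut)."""
    z = min(pt_i, pt_j) / (pt_i + pt_j)
    d_cut = alpha * m_jet / pt_jet
    return z < z_cut and delta_R > d_cut

# A soft, wide-angle constituent of a 400 GeV jet with mass 50 GeV is pruned away:
prune_away_softer(pt_i=300.0, pt_j=15.0, delta_R=0.3, m_jet=50.0, pt_jet=400.0)   # True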
Fig. 9.22 shows distributions of the ratios of jet mass distributions after grooming
(using either the filtering, trimming, or pruning techniques), for reconstructed data,
for reconstructed simulated PYTHIA 6 events, and for generator-level PYTHIA 6 events.
The events are from inclusive dijet production and the jets have been reconstructed
with the anti-kT (D = 0.7) jet algorithm. All distributions have been corrected back to
the particle level, to allow for direct comparison to theoretical predictions. In general,
filtering results in the smallest changes to the original mass distributions, followed by
trimming and then pruning (with the parameters chosen in the CMS analysis).
Fig. 9.23 and Fig. 9.24 show the unfolded jet mass distributions for anti-kT (D =
0.7) jets from Z → ll+jet events for ungroomed jets and pruned jets respectively,
with the parameters for the grooming as described above.8 The data are shown for
four different jet transverse momentum bins and are compared to predictions from
PYTHIA 6 and HERWIG ++, with the tunes indicated in the legends. The Sudakov
suppression at low jet mass, peaking and then slow fall-off with increasing jet mass
described earlier, can be seen for the ungroomed jet distributions in Fig. 9.23. The
8 Results for filtered and trimmed jets can be found in Ref. [377].
Fig. 9.22 The differential probability distributions for jet mass ratios
for groomed jets to ungroomed jets for three different grooming tech-
niques. The data are CMS dijet events, and the Monte Carlo predictions
use PYTHIA 6. The Monte Carlo predictions are given both at the gener-
ated and reconstructed levels. Reprinted with permission from Ref. [377].
dip for low jet mass fills in with the use of an aggressive grooming procedure such as
pruning. The peak region of the jet mass distribution receives substantial contributions
from soft gluon emissions at wide angle. After pruning, these emissions are largely
removed, resulting in events in the peak region migrating to lower masses [506].
In general, the data are in good agreement with the Monte Carlo predictions, except
perhaps for smaller jet masses. Ref. [377] notes that an aggressive grooming procedure
(like pruning) tends to lead to better agreement between data and the Monte Carlo
simulation, and that the data/Monte Carlo agreement is better in general for the
V +jets analysis than for the dijet analysis, perhaps indicating that quark jets (more
typical in V +jet final states) are better modelled in Monte Carlo than gluon jets.
Formerly, it was thought difficult to provide analytic calculations that describe the
impact of the jet grooming techniques on distributions, such as jet masses, but great
progress has been made in recent years [431]. These calculations not only provide a
better understanding of the different jet grooming techniques, but have also allowed
for the development of new tools, such as the mass-drop tagger [431] and the soft drop
tagger [722].
Fig. 9.24 The same comparison as in Fig. 9.23, for the unfolded pruned-
jet mass distributions for Z → ll + jet events. Reprinted with permission
from Ref. [377].
One of the primary benchmarks, and often one of the first cross-sections to be mea-
sured, is that for W and Z production. ATLAS and CMS have both measured the W
and Z cross-section at 7 TeV (during the 2010 running) [12, 361] and in addition
CMS has measured the cross-sections at 8 TeV (in special low luminosity, and thus
low pile-up, running conditions) [380]. The cross-sections are measured in both the
electron and muon channels, with similar requirements for the transverse momenta
and rapidities for the electron and the muon. The data are corrected for the effects of
final-state QED radiation and an isolation cut is placed on the lepton candidates. The
differential cross-sections are then combined after extrapolating each measurement to
a common fiducial kinematic region. For ATLAS, the missing transverse energy for W
boson production is required to be greater than 25 GeV and the transverse mass is
required to be greater than 40 GeV. For CMS, no explicit cut is placed on the missing
transverse energy, but the missing transverse energy distribution is used to determine
the background. In CMS, the Z boson candidates are required to have a mass between
60 and 120 GeV, while for ATLAS the range is 66 to 116 GeV.
The cross-sections are measured in the fiducial regions as well as being extrap-
olated to the full phase space. The latter involves a calculation for the geometrical
and kinematic acceptances for the measurement, and thus the introduction of possible
model dependence. Typically, POWHEG +PYTHIA is used for the extrapolation. The
theoretical systematic uncertainties for the acceptance calculation can be evaluated
by varying the PDFs (using the PDF4LHC prescription for the three global PDF
families), examining the impact of NNLL soft gluon resummation using the program
RESBOS, and examining the impact of higher-order corrections by varying the renor-
malization and factorization scales in the program FEWZ within a factor of two. The
effects of higher-order EW corrections can also be simulated by the use of the pro-
gram HORACE. The effect of extrapolating from the fiducial to the full phase space is
typically to increase the cross-section by a factor of about 2 (2.5) for W (Z) produc-
tion. The total uncertainties for the extrapolation corrections for W and Z production
range between approximately 1 and 1.5%, with each of the sources mentioned above
contributing.
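A minimal sketch of the extrapolation and of the combination of the acceptance uncertainties, with purely illustrative numbers:

import math

def total_cross_section(sigma_fid, acceptance):
    """Extrapolation to the full phase space: sigma_tot = sigma_fid / A, with
    A the geometric and kinematic acceptance (taken from simulation)."""
    return sigma_fid / acceptance

def combined_relative_uncertainty(relative_deltas):
    """Combine the individual relative uncertainties on the acceptance
    (PDF variations, resummation, scale variations, ...) in quadrature."""
    return math.sqrt(sum(d * d for d in relative_deltas))

# Illustrative numbers only: an acceptance of ~0.5 corresponds to the
# factor of about 2 quoted for W production.
sigma_tot = total_cross_section(sigma_fid=3.0e3, acceptance=0.5)      # pb
delta_a = combined_relative_uncertainty([0.008, 0.005, 0.006])        # ~1.1%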
The ratio of the W to Z cross-section can be especially interesting, as many of
the systematic errors, from both experiment and theory, cancel out. The total W
and Z cross-sections measured by CMS at 8 TeV are shown in Fig. 9.25, along with
the NNLO predictions from the three global PDF sets. The (solid) ellipse indicates
the 68% CL region for the total experimental uncertainty. The (open) ellipses for the
theory predictions indicate the 68% CL PDF uncertainties from each group. The three
predictions are all consistent with the CMS data, so there is no great discrimination
among them provided by the total cross section measurements. All of the PDFs provide
a somewhat lower prediction for the Z boson cross-section than observed in the data,
though. The ATLAS cross-sections are consistent with those from CMS. The ratios of
various W and Z cross-sections from CMS at 8 TeV with NNLO predictions are plotted
in Fig. 9.26.
PDFs. It is interesting to note that the W − cross-section is larger than the W + cross-
section at high muon pseudorapidity, perhaps counter to expectations. The W + (W − )
boson at high rapidity is produced primarily from the collision of an up quark (down
quark) with large momentum fraction x, and a d̄ (ū) anti-quark with low momentum
fraction x. As the high-x up quark distribution is larger than that of the down quark
(and the low-x anti-quark distributions are essentially equal), one would expect that
the W + cross-section would be higher than that of the W − boson. This would be true
if the cross-sections were plotted against the rapidity of the W boson, cf. Fig. 2.16.
However, the muon in the decay of the W − tends to travel in the direction of the boson
in the boson centre-of-mass frame (and the muon in the decay of the W + opposite the
direction). Thus, the µ− leptons tend to have higher transverse momenta than the µ+
leptons, and binning versus the muon pseudorapidity results in a greater proportion
of µ− at high pseudorapidity (cf. the discussion at the end of Section 2.2.3).
to about 200 GeV. The range at 8 TeV extends from a transverse momentum of 1 GeV
to approximately 900 GeV. There is an appreciable broadening of the pT distribution
in going from 1.96 TeV to 8 TeV. This is expected as there is more phase space for
gluon radiation at the higher energy, due to the lower average x values for the colliding
partons.
The data for all three experiments have been compared to a number of theoretical predictions, ranging from fixed-order NNLO, to NNLO supplemented with soft gluon resummation at NNLL, to parton shower Monte Carlos, in some cases including fixed-order information at NLO and with various tunes. A comparison of
the normalized cross-section to the resummed predictions from the RESBOS program
(NNLO+NNLL) is shown in Fig. 9.30 (right). Good agreement with the data is observed
below 20 GeV. There is a dip around 40 GeV, and then a rise in the theory prediction
with respect to the data at high transverse momentum. The dip is in the transition
region where there is a matching between the resummed part of the RESBOS prediction
and the fixed-order part. Improvements in this matching should reduce this dip. The
rise at high pT is just an artefact of the scale choice, which does not take into account the transverse momentum of the Z boson (and thus results in too small a scale).
The absolute (un-normalized) high pT (≥ 20 GeV) Z boson cross-section is shown
in Fig. 9.31 (left), compared to the NNLO predictions of NNLOJET [565]. The NNLO
corrections are relatively small, but result in a significant reduction of the scale un-
certainty. The data are above the absolute prediction for most of the Z boson pT
range. (There is also a 2.8% luminosity uncertainty which is not shown.) If instead,
the data and theory are normalized to the total Z boson cross-section in this mass
region, the result is a significant improvement in the agreement, as shown in Fig. 9.31
(right). There is a tension between the data cross-section and the theory prediction,
also observed to some extent in Fig. 9.25 and Fig. 9.26.
The transverse momentum distributions have also been measured, and compared
to NNLOJET, in various mass and rapidity bins. More details can be found in Ref. [52]
and Ref. [565].
4. ALPGEN +PYTHIA and SHERPA provide good agreement for up to 4 jets in the final
state, but the predictions from the two programs diverge for higher jet multiplicities.
The measured transverse momentum distribution for the lead jet for W + ≥ 1
jet events is shown in Fig. 9.33, compared to a number of theoretical predictions.
It is noticeable that the BLACKHAT +SHERPA predictions undershoot the data at
high transverse momentum. In this region, significant contributions are expected from
processes such as qq → qqW , basically dijet production with a W boson emitted from
a quark line. This process grows with the transverse momentum of the jets primarily
because W emission becomes competitive with hard gluon emission when the jet pT
is much larger than the W boson mass.
The LoopSim [757] (cf. Section 3.4.2) and BLACKHAT +SHERPA exclusive sums [134]
predictions include more contributions from such final states, but these can be seen
to have little impact. Note that EW corrections at 1 TeV are negative, which would
increase the size of the discrepancy. SHERPA and ALPGEN +PYTHIA each provide better agreement in the higher pT range, but with the larger theoretical uncertainties (not shown) inherent in LO predictions. The prediction from MEPS@NLO, which
includes NLO information for W + 1,2 jets, is still below the data at high transverse
momentum, but closer than BLACKHAT +SHERPA. A similar effect, albeit limited to
somewhat smaller pT can be seen with Z+jets in ATLAS [20]. However, the situation
is not as clear for W/Z+jets measurements in CMS [660, 668].
The agreement for the inclusive lead jet transverse momentum distribution is better
when compared to the NNLO theory prediction from Ref. [276], as shown in Fig. 9.34.
It is interesting as well that better agreement at high pT is observed with the NLO
prediction from this paper, albeit with a large scale uncertainty. The discrepancy
Fig. 9.32 The cross-section for the production of a W boson plus jets as a
function of the inclusive jet multiplicity. The statistical uncertainties for the
data are shown by the vertical bars, and the combined statistical and sys-
tematic uncertainties are shown by the black-hashed regions. The data are
compared to predictions from BLACKHAT +SHERPA, ALPGEN +PYTHIA,
SHERPA, and MEPS@NLO. The left-hand plot shows the differential cross-sections and the right-hand plot shows the ratios of the predictions to the
data. Reprinted with permission from Ref. [44].
between the two NLO predictions may be due to different forms for the central scale
used in the two theory calculations, an indication that the optimal choice of scale can
often be difficult.
Similar comparisons are shown in Fig. 9.35 for the exclusive final state in which only
one jet is present (above the jet pT threshold of 30 GeV). The transverse momentum
range is more limited than for the inclusive case as the production of a very high pT
jet, and no other jets, is very strongly Sudakov-suppressed. In contrast to the inclusive
case, the BLACKHAT +SHERPA prediction is in very good agreement with the data.
This is somewhat of a surprise, as the presence of two disparate scales (the pT of the
lead jet compared to the jet pT threshold of 30 GeV) should lead to the presence of
large logarithms, which should spoil any agreement with the fixed-order prediction.
Here again, though, note that the EW corrections are even larger (and negative) for the
exclusive case than for the inclusive case. A similar level of agreement was observed for
the ATLAS Z +1 jet exclusive jet pT distribution. This mystery was solved in Ref. [273],
where it was shown that the ATLAS analysis removed only the jet (and not the event)
in the situation where there is an overlapping jet and a lepton (within ∆R < 0.4).
Thus, an event classified as a W/Z plus exactly one jet event may also have a second
Fig. 9.33 The cross-section for the lead jet transverse momentum for
W + ≥ 1 jet events, with theory comparisons as in Fig. 9.32. The theoretical
predictions have been scaled to the data to allow for easier comparisons of
the shapes. Reprinted with permission from Ref. [44].
Fig. 9.34 The cross-section for the lead jet transverse momentum for
W + ≥ 1 jet events, with theory comparisons from Ref. [276]. Reprinted
with permission from Ref. [276].
distribution for Z+jets (as for W +jets) follows a staircase pattern indicative of Berends
scaling, as discussed in Sections 4.1 and 4.3. This scaling is a result of the same pT cut being applied to every jet, and of the non-Abelian nature of the gluon branching process, i.e. each final-state gluon carries a colour charge and can itself radiate.
If instead, there is a difference in the transverse momentum threshold between the
leading jet in the event, and any additional jets, a Poisson-type behaviour will instead
be evident. In Fig. 9.39 (right), the lead jet pT is required to be greater than 150 GeV,
while still retaining a cut of 30 GeV on the other jets. The result is described well by
a Poisson scaling,
\frac{\sigma(Z + (n+1)\,\mathrm{jets})}{\sigma(Z + n\,\mathrm{jets})} = \frac{\bar{n}}{n} , \qquad (9.7)
with an expectation value n̄ = 1.04 ± 0.04. Note that there is basically no suppression
for the second jet emission, given the core process of Z+jet, with the jet having a
transverse momentum greater than 150 GeV. For both situations, the data are well-
described by the theoretical predictions.
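The two scaling patterns can be summarized in a few lines, assuming the Poisson form printed in Eq. (9.7) and a constant staircase ratio whose numerical value is illustrative:

def staircase_ratio(n, r=0.2):
    """Staircase (Berends) scaling: a jet ratio that is independent of n."""
    return r

def poisson_ratio(n, n_bar=1.04):
    """Poisson-type scaling as in Eq. (9.7): sigma(Z+(n+1) jets)/sigma(Z+n jets)."""
    return n_bar / n

# With n_bar ~ 1, the first extra emission on top of the hard Z+jet core is
# essentially unsuppressed, while further emissions fall off with n.
ratios = [poisson_ratio(n) for n in (1, 2, 3)]   # [1.04, 0.52, ~0.35]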
Fig. 9.35 As for Fig. 9.33, but for the lead jet transverse momentum in
W + exactly one jet events. Reprinted with permission from Ref. [44].
Fig. 9.36 As for Fig. 9.33, but for the fifth jet pT in W + ≥ 5 jet events.
Reprinted with permission from Ref. [44].
Fig. 9.37 As for Fig. 9.33, but for the HT distribution in W + ≥ 1 jet
events. Reprinted with permission from Ref. [44].
processes are important as backgrounds for other physics studies, such as associated
Higgs boson production (V H), where the Higgs boson decays into a bb̄ final state, for
single top production [14] or for searches for physics beyond the Standard Model [23].
However, they are also interesting from the standpoint of perturbative QCD, since
the presence of a heavy quark mass scale introduces additional complications into the
calculation. As in the case of single top production, cf. Section 4.6, the calculation may
either be performed in a 4-flavour scheme (in which the only active quark flavours are
u, d, c, s) or the 5-flavour scheme, in which the b parton is also present in the initial
state. As an example, consider the Born-level predictions for the production of a final
state containing a W -boson and at least one b-jet. In the 4-flavour scheme this can
be described by the processes q q̄ → W bb̄ and gq → W bb̄q, where the presence of
the initial-state gluon in the latter is important at the LHC. In the 5-flavour scheme
this second process is replaced by bq → W bq. The advantages and drawbacks of each
scheme have been summarized in Section 4.6.
Such processes have been measured at the TEVATRON by both the CDF [62] and
DØ [92] collaborations. The CDF collaboration found a result for the W + b jet cross-
section larger than the SM prediction, while DØ measured a cross-section smaller
than the SM prediction, although both were consistent within the quoted theoretical
uncertainties. Such measurements have also been carried out at the LHC [11, 381].9
ATLAS, for example, has measured the fiducial W + b jet cross-section, as a function
of the jet multiplicity and as a function of the b-jet transverse momentum [19].
9 Similar results are also available for Z + b jet cross-sections [26, 367, 372, 382].
The results are reported either with the single-top contribution subtracted or left in. Single top production leads to the same (inclusive) final state, but with different kinematics, so the two contributions can be separated.
As for other measurements involving W boson decay, a high pT isolated lepton
(either a muon or electron) is required, along with a requirement on a minimum
missing transverse energy. The analysis requires one or more jets, with one (and only
one) jet being tagged as a b-quark jet. It is necessary to veto on events with two
or more b-tagged jets to reduce the background from the (sizeable) tt̄ cross-section.
Jets are reconstructed with the anti-kT (D = 0.4) jet clustering algorithm and a jet
threshold of 25 GeV. The jets are required to have a rapidity |y| < 2.1, so that the
jet lies within the tracking (and thus b-tagging) region, and any jets within a distance
∆R = 0.5 of the lepton candidate are removed. Jets are tagged as originating from
b-quarks using a combination of two tagging algorithms. The first algorithm involves
the explicit reconstruction of a secondary vertex consistent with originating from a
b-quark decay. The second calculates the impact parameter significance of each track
within the jet to determine the probability of the jet being a b-quark jet.
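A schematic version of this event selection, with the jet and b-tag requirements quoted in the text (the missing-ET threshold and the data structures are illustrative assumptions), might look as follows:

import math

def delta_r(eta1, phi1, eta2, phi2):
    """Separation in the eta-phi plane, with the phi difference wrapped into (-pi, pi]."""
    dphi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def select_w_plus_b(jets, lepton, met, met_cut=25.0):
    """Sketch of the selection described above: after removing jets within
    Delta R = 0.5 of the lepton, require at least one jet with pT > 25 GeV and
    |y| < 2.1, and exactly one b-tag (vetoing >= 2 b-tags to suppress ttbar).
    Jets and the lepton are dicts; the numerical MET cut is illustrative."""
    if met < met_cut:
        return False
    cleaned = [j for j in jets
               if delta_r(j["eta"], j["phi"], lepton["eta"], lepton["phi"]) > 0.5]
    accepted = [j for j in cleaned if j["pt"] > 25.0 and abs(j["y"]) < 2.1]
    n_btag = sum(1 for j in accepted if j["btag"])
    return len(accepted) >= 1 and n_btag == 1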
The measurement of this process has backgrounds, both from processes where a
real b is present in the final state (single-top, tt̄, and multi-jet), and from processes
where a jet has been mis-tagged as a b-jet (such as W + c-jet and W + light
jets). The backgrounds can be largely determined from the data itself, either by the
presence of different kinematics (for the first category of backgrounds), or from different
characteristics of the b-tagged jets (for the second category). For example, in the 2-jet
bin, the contributions from W + b and single-top processes are comparable. However,
since most of the single top events have a relatively narrow (W, b-jet) mass distribution
around the top quark mass, this distribution can be used to discriminate the two
processes.
The measured (unfolded) cross-section for the production of a W boson and a b-jet
is shown in Fig. 9.40 as a function of the jet multiplicity. Results are shown for the
electron, muon and combined (electron+muon) channels, and comparisons are made
to theoretical predictions from fixed order (MCFM) and parton shower Monte Carlo
predictions (POWHEG +PYTHIA and ALPGEN +HERWIG). The Monte Carlo predictions
use the 4-flavour scheme, while the MCFM prediction includes higher-order corrections
from the 5-flavour scheme, i.e. allowing b-partons in the initial state. The fixed-order
predictions have been corrected for non-perturbative effects and for double-parton
scattering, where a W boson, and a bb̄ final state, are produced in two separate proton-
proton collisions. The latter cannot be ignored for this measurement at the LHC, and
amounts to a 25% correction to the total cross-section, concentrated in the lowest pT
bins. The kinematics of this process are complex and are reflected in the choice of the
central scale for the theoretical calculations,
10 For example, for the 8 TeV analysis, the requirement on the isolation energy is ET^iso < 4.8 GeV + 4.2 × 10^−3 × ET^γ.
7 TeV data from Run I. The non-tight photons fail the tight identification criteria in one
or more categories, and thus are likely to be produced as a result of jet fragmentation.
In general, the non-tight photon candidates are less isolated than the tight ones. By
determining the fraction of loose photons present in the tight isolation region, the
photon backgrounds can be determined for each kinematic bin. The resulting photon
purity is shown in Figure 9.43 (right) for two rapidity intervals in the 7 TeV analysis.
The imposition of the isolation cut results in a high photon purity, which approaches
100% for high photon transverse energy. As stated in Section 8.4, an isolated high
transverse energy photon candidate is much more likely to be a true photon, than to
be a product of jet fragmentation.
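As a concrete example of the isolation requirement, footnote 10 quotes a transverse-energy-dependent threshold for the 8 TeV analysis; a minimal sketch of the corresponding cut (function name and example values are illustrative):

def passes_photon_isolation(et_iso, et_gamma):
    """The 8 TeV isolation requirement quoted in footnote 10:
    E_T^iso < 4.8 GeV + 4.2e-3 * E_T^gamma (all energies in GeV)."""
    return et_iso < 4.8 + 4.2e-3 * et_gamma

passes_photon_isolation(et_iso=5.0, et_gamma=200.0)   # 5.0 < 5.64 GeV -> True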
The resulting inclusive photon cross-sections for a centre-of-mass energy of 8 TeV
Fig. 9.43 Left: the transverse energy in the isolation cone for photon can-
didates (both tight and non-tight) for photon transverse energies above
100 GeV. The non-tight distribution has been normalized to the tight dis-
tribution in the (background-rich) region above 15 GeV. Right: the result-
ing photon purity is shown as a function of the photon transverse energy
in the ATLAS barrel and end-cap regions. The shaded bands indicate the
statistical uncertainties. Reprinted with permission from Ref. [30].
are shown in Fig. 9.44 for four rapidity intervals. A large dynamic range is represented
in this plot, from 30 GeV to over 1 TeV. There was a considerable reduction in the
size of the experimental systematic errors from the 7 TeV analysis to the 8 TeV
analysis. The data are systematically below the NLO predictions from JETPHOX [348]
for transverse momenta below 500 GeV. A somewhat better agreement with the data
is achieved with the PeTeR prediction [208]. Here, the calculation is again carried out
at NLO, but in addition threshold logarithms are resummed at next-to-next-to-leading logarithmic accuracy. Note that since the calculations are at NLO, the scale uncertainty is still
sizable. The uncertainty will be reduced once a NNLO calculation for inclusive photon
production is completed, which will add to the attractiveness of the process as an
input into global PDF fits.11
Similar methods have been used by CMS [359, 384] for measuring the isolated inclu-
sive photon cross-section, with similar results obtained. More potential information on
the PDFs of the colliding protons can be obtained by measuring the distribution of the
accompanying jet, in addition to the photon, at the cost of a somewhat less-inclusive
theoretical prediction [13, 384]. As at the TEVATRON, the inclusive photon+jet data
are very useful for the calibration of the jet energy scale.
11 Late in the editing of this book, the NNLO calculation has in fact been completed, in Ref. [316].
Fig. 9.44 The ATLAS isolated photon cross-section, for four rapidity regions. The inner error bars on the data points indicate statistical errors only, while the outer error bars show the statistical and systematic errors added in quadrature. The error band given is that from the PeTeR calculation, and corresponds to the combination of the scale, PDF and electroweak uncertainties. Reprinted with permission from Ref. [51].
One of the key Higgs boson final states at the LHC is the decay into two photons. The
Higgs boson diphoton signal is typically swamped by the much larger QCD diphoton
production rate, but with fine enough diphoton mass resolution, the presence of a
narrow Higgs boson resonance can be detected (and indeed has been). Nevertheless,
it is important to understand QCD production of diphotons, especially as approxi-
mately half of the production proceeds through gg scattering, the same initial state
that dominates Higgs boson production. A cut is placed on the transverse energy of
the leading (second leading) photon of 25 GeV (22 GeV). The slight asymmetry helps
to reduce any instability in the higher-order calculations. As for single photon pro-
duction, backgrounds due to jet fragmentation are suppressed by the imposition of an
isolation cut, similar to the one used for the inclusive photon measurement. This iso-
lation cut also suppresses production mechanisms where one or both photons results
from photon fragmentation from a quark line. Results from an ATLAS measurement of
diphoton production at 7 TeV [18] are shown in Fig. 9.45 for the diphoton transverse
momentum. As at the TEVATRON, there is a shoulder (the Guillet shoulder) at a trans-
verse momentum of approximately 50 GeV, corresponding to events that also have a
low azimuthal separation. The agreement with the DIPHOX +GAMMA2MC prediction is
poor for these two variables, but much better for the 2γ NNLO and SHERPA predictions.
The key aspect for both of the latter calculations is the presence of tree-level 2 → 4
processes, needed to describe the two observables.
The diphoton mass distribution is shown in Fig. 9.46. The agreement with the
2γ NNLO prediction is good for the entire mass range; there is a disagreement with
the DIPHOX prediction at low diphoton masses, where the 2 → 4 subprocesses are
important, and to a lesser extent for masses of a few hundred GeV, where NNLO
corrections may be important.
Similar results have been obtained by CMS [378].
9.4.2 Dibosons
The measurement of diboson final states provides an important test of the non-Abelian
nature of the Standard Model, and a sensitivity to anomalous triple gauge boson
couplings. In addition, the W W and ZZ final states are important for the measurement
of the Higgs boson decays into those two channels. There is a large background for
the measurement of W W production from tt̄ production; thus, a jet veto requirement is commonly applied to reduce the latter. Cross-sections are then corrected
for the geometric and kinematic acceptances as well as for the impact of the jet veto
to obtain a fully inclusive cross-section [21, 53, 375, 673], more easily compared to
theoretical predictions. The imposition of a jet veto restricts the phase space for gluon
emission and thus results in an increased uncertainty in the predicted cross-section.
However, in the case of diboson production, the scale dependence at NLO is inherently
small, and thus the increased uncertainty for the vetoed cross-section is still less than
the experimental systematic uncertainty, and may not represent the full theoretical
uncertainty.
W W production is typically measured in the final state where both W bosons
decay into a lepton (electron or muon) and a neutrino. Thus, the signature consists
of two high pT leptons and a substantial amount of missing transverse energy. The
dominant sub-process is q q̄ → W W , with much smaller contributions (approximately
10% total) from gg → W W and gg → H → W W . The main backgrounds come from
Drell-Yan, top (tt̄ and single top) production, W +jets, and diboson (W Z, ZZ, W γ)
production. The top background is suppressed by the requirement that there can be
no jets with transverse momentum above 25 GeV within a rapidity interval of ±4.5.12
The fiducial cross-section is corrected for identification and isolation requirements
and detector resolution effects. The total W W cross-section is calculated by correcting
the fiducial cross-section for the extrapolation to the full W W phase space. A small
excess with respect to the NLO predictions is observed by ATLAS, with similar results
obtained in CMS. There has been speculation that this excess might be the result of
new physics [423, 424, 522, 647, 828]. However, there are a few caveats. The W W
cross-section has recently been calculated to NNLO, resulting in an increase over the
NLO result of the order of 10% [557], thus significantly decreasing the excess. (A
further 2% increase in the theoretical prediction results from a 2-loop calculation of
the subprocess gg → W W [324].) In addition, Ref. [768] points out that the ATLAS
fiducial cross-section is in agreement with the theoretical prediction for the same, and
the disagreement for the total cross section results from the extrapolation to the full
phase space using the POWHEG box. The Monte Carlo result for the jet-veto efficiency
overestimates Sudakov suppression effects with respect to a calculation using analytic
resummation. This is one of the perils of comparisons at the fully inclusive (corrected)
level, compared to fiducial comparisons.
A comparison of the NLO and NNLO predictions, and of ATLAS and CMS data,
for W W production from Ref. [557] is shown in Fig. 9.47.
Other diboson final states (for example ZZ, W Z, W γ, Zγ) have also been mea-
sured by both ATLAS and CMS [22, 41, 375, 665]. A summary of CMS results is shown
in Fig. 9.48, where good agreement with NLO and NNLO Standard Model predictions
is observed. Differential distributions can be used to place limits on anomalous cou-
plings. For example, in Fig. 9.49 are shown (left) the unfolded transverse momentum
distribution for the leading Z boson and (right) the four-lepton reconstructed mass
distribution, both from CMS [664]. The presence of anomalous triple gauge couplings
would manifest itself as deviations from the Standard Model predictions at high Z
pT / high four-lepton mass. In both cases, good agreement with the Standard Model
predictions is observed.
12 These specific cuts are for ATLAS but are similar for CMS.
Fig. 9.47 The ATLAS and CMS W W cross-sections compared to the pre-
dictions at NLO and NNLO from Ref. [557]. Reprinted with permission
from Ref. [557].
The Standard Model has been extremely successful at LEP, HERA, the TEVATRON
and now at the LHC. However, as discussed earlier, the SM is incomplete, and one
of the main goals of the LHC is the discovery of new physics beyond the current
paradigm. Such Beyond-the-Standard Model (BSM) physics is mostly expected at high
mass scales, where the energies of previous colliders would not have been sufficient to
discover it already. The signatures of BSM physics are mostly comprised of the same
observables considered in this chapter: photons, leptons, jets (with or without b-tags)
and missing transverse energy, but with cuts appropriate to the expected higher mass
scale. Often the dominant contribution to these final states comes from the production
of vector bosons, either singly or in pairs, plus jets. In that case the measured SM
cross-sections can serve to determine the backgrounds to new physics processes, for
instance by extrapolating to new kinematic regions. The possibility that new physics
(such as stop pair production) could be hiding in the W W cross-section measurement,
for final states involving two leptons and large missing transverse energy, was already
mentioned in Section 9.4.2.
As an example, consider a BSM search performed by CMS using the full 2012 data
sample at 8 TeV, targeting new physics in multi-jet final states [386]. To focus on the
region most sensitive to BSM signals, the search requires both large amounts of total
transverse energy of the jets (HT) and missing transverse energy H̸T. Specifically,
the search region is defined by the requirements that there be three or more jets,
with pT ≥ 50 GeV and |η| ≤ 2.5, a total transverse energy sum of the jets greater
than 500 GeV, and a missing transverse energy greater than 200 GeV. This final
state is sensitive to the production of pairs of squarks and gluinos, where the squarks
(gluinos) each decay into one (two) jets and a lightest supersymmetric partner (LSP).
There are substantial contributions to this final state from Z(→ ν ν̄) + jets and from
W (→ `ν)+jets (and tt̄ production), when an electron or muon is lost or when a τ lepton
decays hadronically. The cross-section for Z(→ ν ν̄) + jets is estimated using the larger
measured cross-section for γ+jets, correcting for the electroweak coupling differences,
and making use of the similar kinematic properties exhibited by the two processes. To
reduce the backgrounds from W +jets and tt̄ processes, events with isolated electrons
or muons with transverse momentum greater than 10 GeV are vetoed. The surviving
events are those in which any leptons escape detection, and thus a good knowledge of
the lepton reconstruction efficiency is necessary to accurately predict this background.
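A minimal sketch of the two search variables and of the baseline requirements quoted above; the convention that the missing HT is built from the negative vector sum of the selected jets is an assumption made here for illustration:

import math

def ht_and_missing_ht(jets):
    """HT is the scalar sum of the selected jet pTs; the missing HT is taken
    here as the magnitude of the negative vector sum of the same jets."""
    ht = sum(j["pt"] for j in jets)
    px = -sum(j["pt"] * math.cos(j["phi"]) for j in jets)
    py = -sum(j["pt"] * math.sin(j["phi"]) for j in jets)
    return ht, math.hypot(px, py)

def in_search_region(jets):
    """Baseline requirements quoted in the text: >= 3 jets with pT >= 50 GeV
    and |eta| <= 2.5, HT > 500 GeV and missing HT > 200 GeV."""
    selected = [j for j in jets if j["pt"] >= 50.0 and abs(j["eta"]) <= 2.5]
    ht, mht = ht_and_missing_ht(selected)
    return len(selected) >= 3 and ht > 500.0 and mht > 200.0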
In addition the analysis is subdivided into bins that correspond to the number
of jets in the final states, 3–5 jets, 6–7 jets and 8 or more jets, in order to retain
sensitivity to possible longer cascades of squark and gluino decays. The CMS data is
shown as a function of H̸T for one of the jet bins in Figure 9.50, and compared to the
predicted backgrounds and possible signals for several squark and gluino production
and decay modes. The number of events observed in the data is consistent with the
number expected from SM background processes. For low jet multiplicities the primary
background for high values of H̸T is from Z(→ ν ν̄) + jets events, whereas in the
highest jet multiplicity bin the largest background comes from W +jets and tt̄ events.
Smaller backgrounds result from QCD multi-jet production, where the large H̸T is
produced primarily from heavy flavour decays inside the jets or from jet energy mis-
measurement. This background is more significant (but still sub-leading) for the higher
jet multiplicity bins. The larger the number of jets, the greater is the chance that one
or more jets will contribute significant H̸T due to the causes mentioned above.
9.5 Tops
9.5.1 Distributions: top-pairs (plus jets)
Measurement of the top pair cross-section at the LHC allows for precision tests of QCD,
particularly with the theoretical prediction now known to NNLO. The top pair cross-
section is significantly larger at the LHC than at the TEVATRON, partially because of the
dominance of the gg initial state for top production at the LHC, and the rapid increase
of the gg PDF luminosity with energy, as discussed in Chapter 6. A comparison of
the cross-section measurements at the TEVATRON and at the LHC (7 and 8 TeV) is
shown in Fig. 9.51. Top pair production can be measured in a number of final states,
depending on the decay modes of the W bosons that are produced. The most useful of
these states have at least one leptonic W -decay, with any leptons produced at high-pT
and well-isolated. In the dilepton mode the two leptons are of opposite charge and
Fig. 9.50 The observed H̸T distributions from CMS for events with HT ≥ 500 GeV and jet multiplicities from 3–5. The data are compared
to the SM backgrounds and the predictions for several different SUSY sce-
narios. Reprinted with permission from Ref. [386].
are accompanied by two jets. In the case where one W -boson decays hadronically, the
final state corresponds to a single lepton and three or four jets. The jets should have
transverse momenta above 25–30 GeV and at least one of them should be tagged
as a b-jet. The jet threshold is higher at the LHC than the TEVATRON because of the
larger underlying event, as well as the greater pileup through much of the running at
7 and 8 TeV. The better tracking detectors in ATLAS and CMS than at the TEVATRON
have resulted in higher b-tagging efficiencies, typically of the order of 80%, compared
to the 50-60% efficiencies for CDF and DØ. ATLAS and CMS both use the anti-kT jet
algorithm for jet reconstruction, with jet sizes of 0.4 and 0.5 respectively used for the
two experiments (at 7 and 8 TeV). The cross-sections determined from the different
final states agree with each other, as do the results from the two experiments. The
experimental results also agree well with the NNLO+NNLL predictions of Ref. [427].
It is notable that, despite the impressive precision of the theoretical prediction, the
experimental errors are smaller still.
The top mass can be measured as well from the same final states used in the cross-
section measurements. A compilation of the top mass measurements at ATLAS and
CMS compared to the TEVATRON average, the LHC average, and the world average,
is shown in Fig. 9.52. The precision at the LHC is still not at the same level as the
final measurements from the TEVATRON, but it should surpass the latter in Run 2.
Theoretical issues (such as recombination effects and uncertainties as to what exactly
the measured top mass represents), discussed in Section 4.5, will now have to be
addressed.
Differential measurements of tt̄ final states allow for additional precision tests of
perturbative QCD, probes of high mass regions sensitive to new physics, and more de-
tailed information on tt̄ kinematics useful for PDF determination. Differential predic-
tions for tt̄ production at NNLO have been recently calculated [426, 429]. Differential
measurements have been performed for such variables as the tt̄ mass, the tt̄ rapidity
distribution, the transverse momentum of the top quark, and the tt̄ transverse momen-
tum distribution (as well as others). In Ref. [56], the experimental results have been
unfolded both to a fiducial particle-level phase space and to a fully-corrected phase space. The former has less model dependence, and thus smaller uncertainties, while the latter is often more appropriate for comparison to higher-level predictions.
The tt̄ mass and rapidity distributions, corrected to the full phase space, are shown
in Fig. 9.53 for the ATLAS 8 TeV measurements. (Results at 7 TeV for ATLAS can be found in Ref. [40], and 7 and 8 TeV results for CMS in Refs. [371] and [662]. It is
noteworthy that ATLAS and CMS have adopted the same binning for the fully-corrected
distributions, allowing for easier future combinations.) The data are compared both
to a POWHEG +PYTHIA 6 prediction and to the NNLO differential (fixed-order) pre-
diction, both using the MSTW2008 NNLO PDFs. Good agreement with the data
is observed for both predictions for the mtt̄ distribution, while the NNLO prediction
agrees better with the ytt̄ data. In general, the agreement seems to be better for NNLO
comparisons than for NLO comparisons, and better with the more recent PDFs than
with previous generations. There is still a sizable PDF sensitivity, especially at high
mtt̄ and ytt̄ , paving the way for the inclusion of this data into global PDF fits. One
Fig. 9.53 The ATLAS tt̄ normalized, differential cross-sections for the tt̄
mass (left) and tt̄ rapidity (right), compared to theoretical predictions
at NNLO using the MSTW2008 PDFs. Reprinted with permission from
Ref. [56].
caveat is that electroweak effects, which can be sizable, are also not yet available for
all observables [705].
Measurements of the tt̄ asymmetry at the LHC are much more difficult than at
the TEVATRON due both to the symmetric nature of the colliding beams, and the
dominance of the (symmetric) gg subprocess for tt̄ production. Also, as a result of this
symmetry, the forward-backward asymmetry measured at the TEVATRON (∆y = yt −
yt̄ ) is no longer a useful variable, and the charge asymmetry instead (∆y = |yt | − |yt̄ |)
must be measured. Perturbative QCD predicts top anti-quarks to be produced more
centrally than top quarks. The expected value for this asymmetry is smaller than the
forward-backward asymmetry measured at the TEVATRON. A number of measurements
have been carried out by both ATLAS and CMS of the inclusive charge asymmetry [10,
32, 50, 363, 365, 670, 672, 674], and the results are in agreement with the NLO+EW
prediction, as observed in Fig. 9.54. Measurements have also been made for high tt̄
mass (mtt̄ > 600 GeV) and for boosted tt̄ systems (βtt̄ > 0.6) [32], where the effects
of any new physics may be expected to be magnified [128]. No deviation from the
SM predictions is observed for these special kinematic regions. Note that unlike the
TEVATRON, NNLO predictions for this observable are not yet available for the LHC (at the time of writing of this book).
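For definiteness, a counting-based estimate of the charge asymmetry, assuming the commonly used definition A_C = [N(∆|y| > 0) − N(∆|y| < 0)]/[N(∆|y| > 0) + N(∆|y| < 0)]; this is a sketch at the level of generated top rapidities, whereas the experimental measurements also involve unfolding of detector effects.

def charge_asymmetry(y_top, y_antitop):
    """Counting estimate of the ttbar charge asymmetry based on the sign of
    delta|y| = |y_t| - |y_tbar| (an assumed, illustrative definition)."""
    deltas = [abs(yt) - abs(ytb) for yt, ytb in zip(y_top, y_antitop)]
    n_plus = sum(1 for d in deltas if d > 0)
    n_minus = sum(1 for d in deltas if d < 0)
    return (n_plus - n_minus) / float(n_plus + n_minus)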
limits for the s-channel contribution, which has of course already been measured at
the TEVATRON [47]. Since the tW final state can be distinguished from the other single
top final states, its analysis is usually conducted separately from the other two.
As at the TEVATRON, the signature for t-channel production involves the presence of
an isolated high pT lepton, substantial missing transverse energy, and two or three jets,
with at least one jet tagged as a b-jet and with one jet at high rapidity. The rapidity
distribution for the (untagged) forward jet serves as a good discriminant for separating
t-channel single top production from the s-channel mode and other SM backgrounds.
Measurements of the t-channel cross-section at the LHC, and the theoretical prediction
of NLO QCD, are shown in Fig. 9.56. The cross-sections measured by ATLAS and CMS
are in good agreement both with each other and with the theoretical prediction at
this order. Although the NNLO prediction for this cross-section [288] has not been
compared in this figure, it also agrees very well. For instance, the comparison with the
combined results of ATLAS and CMS for the (t + t̄) single top t-channel process is,
\sigma^{8\,\mathrm{TeV}}_{\mathrm{ATLAS+CMS}} = 85 \pm 12~\mathrm{pb}, \qquad \sigma^{8\,\mathrm{TeV}}_{\mathrm{NNLO}} = 83.9^{+0.8}_{-0.3}~\mathrm{pb} . \qquad (9.9)
The difference between the two uncertainties underscores both the difficulty of mea-
suring this process at the LHC and the precision of the NNLO calculation.
Due to the LHC being a pp collider, the production of single top quarks is larger than
that of single anti-top quarks. The NNLO prediction for the ratio of t/t̄ production is
1.825 ± 0.001. This ratio is sensitive to the distribution of up and down type quarks
in the proton, as well as possible new physics that may couple to the W tb vertex.
The ratios measured by ATLAS and CMS are consistent with the Standard Model
predictions, albeit with relatively large statistical and systematic uncertainties, as
observed in Fig. 9.57. If one assumes that no anomalous form factors are present at the
W tb vertex it is possible to extract the size of the (t, b) CKM matrix element, |Vtb |.
ATLAS and CMS measure 1.02 ± 0.07 and 0.998 ± 0.041 respectively, where the total
error includes both the experimental and theoretical errors added in quadrature.
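The logic of such an extraction can be sketched as follows: since the t-channel cross-section scales as |Vtb|², the ratio of the measured cross-section to the prediction computed with |Vtb| = 1 gives |Vtb| directly, assuming no anomalous Wtb couplings. The numbers in the example below are illustrative and are not the actual ATLAS or CMS inputs.

import math

def vtb_from_xsec(sigma_meas, dsigma_meas, sigma_sm, dsigma_sm):
    """Extract |Vtb| as sqrt(sigma_meas / sigma_SM(|Vtb| = 1)); the uncertainty
    is propagated naively, adding the relative errors in quadrature."""
    ratio = sigma_meas / sigma_sm
    vtb = math.sqrt(ratio)
    rel = math.hypot(dsigma_meas / sigma_meas, dsigma_sm / sigma_sm)
    return vtb, 0.5 * vtb * rel

vtb_from_xsec(87.0, 7.0, 84.0, 3.0)   # ~ (1.02, 0.04), illustrative only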
Fig. 9.57 A comparison of the measured ATLAS (left) [24] and CMS
(right) [657] ratios of single top to single anti-top production with NLO
predictions using various PDFs. Reprinted with permission from Refs. [24]
and [657].
calculations of both the Higgs boson production cross-sections and the decay branch-
ing ratios [161, 296]. Higgs boson final states involve the measurements of photons,
leptons, jets (including b-tagged and c-tagged jets), τ leptons, and missing transverse
energy, i.e. the building blocks of the LHC SM measurements discussed in this chap-
ter. The tools developed for SM measurements, both theoretical and experimental,
can be directly adapted for measurements of the Higgs boson production and decay
rates, and its properties. For example, for some of the Higgs boson measurements, a
better signal-to-background ratio can be gained by requiring the Higgs boson system
to be boosted. Boosted systems have received a great deal of attention at the LHC, as
discussed in Section 9.2.4 and, for example, Ref. [431]. The knowledge of the SM pro-
cesses also serves to improve the determination of the backgrounds to measurements
of Higgs boson final states. Some of the backgrounds can be determined from the data;
for others some dependence on theoretical predictions is necessary.
The relative rates for the production of a 125 GeV SM Higgs were shown in
Fig. 4.53. The dominant production mode is gg fusion for all centre-of-mass ener-
gies. The other modes added to the discovery potential but are also important for a
complete understanding of the Higgs boson properties. For example, the VBF process
probes the couplings to W/Z bosons, while tt̄H probes the coupling to the top quark.
The decay branching ratios have been previously shown in Fig. 4.55. For a Higgs boson
mass of 125 GeV, the dominant decay is into a bb̄ pair, followed closely by a decay into
W W ∗ . As the Higgs mass is below the threshold for W W production, one of the W
bosons has to be off mass-shell. The decay into two photons has one of the smallest
branching ratios, but was still important for the discovery because of the precision with which the 4-vectors of the two photons could be measured. With sufficiently precise
resolution, the two photon mass peak for the Higgs boson can be discerned from the
copious backgrounds for QCD diphoton production. To some extent, the ATLAS and
CMS detectors were designed to optimize the search for the Higgs boson. For example,
both experiments chose solutions for their electromagnetic calorimetry that prioritized
precise energy resolution so as to be able to reduce the observed width of the Higgs
boson in its two photon final state.
The total number of inelastic pp collisions in Run 1 at the LHC was of the order of
1.5 × 1015 . In total, over 500,000 Higgs bosons were produced per experiment in these
final states (before acceptance and reconstruction).
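A back-of-the-envelope check of this yield, N = σ × ∫L dt, using approximate total Higgs cross-sections for mH = 125 GeV; the cross-section values below are illustrative assumptions and are not taken from the text.

# Approximate total SM Higgs cross-sections (assumptions for illustration)
sigma_7tev_fb = 17.5e3                 # ~ 17.5 pb at 7 TeV
sigma_8tev_fb = 22.0e3                 # ~ 22 pb at 8 TeV
lumi_7tev_fb_inv, lumi_8tev_fb_inv = 5.0, 20.0   # per experiment

n_higgs = sigma_7tev_fb * lumi_7tev_fb_inv + sigma_8tev_fb * lumi_8tev_fb_inv
print(round(n_higgs))                  # ~ 530,000 Higgs bosons per experiment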
The discovery of the Higgs boson (or rather of a new particle with a signature con-
sistent with the Higgs boson, as noted in the discovery) occurred on July 4, 2012, with
both ATLAS and CMS observing a significance for a signal at 125 GeV of approximately
5 sigma. The discovery resulted from approximately 5 fb−1 of data at a centre-of-mass
energy of 7 TeV and approximately 6 fb−1 at 8 TeV. Further data-taking increased
the integrated luminosity at 8 TeV to over 20 fb−1 , allowing not only 10 standard de-
viation evidence for the Higgs boson, but also detailed investigations of its couplings.
For some of the final states, differential distributions were also measured.
The discovery and first measurements of the Higgs boson in Run 1 required the
development of sophisticated analysis techniques designed to optimize the signal-to-
background discrimination in the various analysis channels. This optimization could be
applied directly to cut-based analyses, or as input to multivariate analysis techniques.
Fig. 9.58 The ATLAS diphoton mass distribution in the Higgs search re-
gion. The fractions of the events resulting from real diphotons, from pho-
ton+jet events and from jet-jet events have been estimated using a double
two-dimensional sideband method as discussed in the text. Reprinted with
permission from Ref. [29].
in ATLAS requires (at least) two photons in the (absolute) rapidity range less than 2.37
(excluding the crack region 1.37 < |η| < 1.56 where the energy resolution is degraded).
A requirement is made that the ratio of the photon transverse energy to the diphoton mass (ET /mγγ ) is greater than 0.35 (0.25) for the leading (sub-leading) photon. The
photon transverse energy cuts are larger than for the diphoton measurement discussed
in Section 9.4.1 and more asymmetric. Such a large (asymmetric) cut emphasizes the
Higgs boson signal over the continuum diphoton background.
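A minimal sketch of the relative transverse-energy requirement (function name and example values are illustrative):

def passes_relative_et_cuts(et_lead, et_sublead, m_diphoton):
    """Relative transverse-energy requirements: ET/m_gg > 0.35 (leading) and
    > 0.25 (sub-leading), as quoted for the ATLAS selection."""
    return et_lead / m_diphoton > 0.35 and et_sublead / m_diphoton > 0.25

# For m_gg ~ 125 GeV these cuts correspond to roughly 44 GeV and 31 GeV:
passes_relative_et_cuts(55.0, 40.0, 125.0)   # True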
The mass spectrum for Higgs diphoton candidates with these cuts at 8 TeV is shown
in Fig. 9.58. Notice that the real QCD diphoton signal dominates over photon-jet and
jet-jet backgrounds, where one or more jets mimics a real photon (for example with a π0 which takes most of the momentum of the jet). The isolation cut greatly reduces
the background rate. The real diphoton and jet background fractions are determined
by the use of a double two-dimensional sideband method involving (1) loose and tight
photon identification criteria and (2) loose and tight photon isolation criteria [29].
As for the inclusive Higgs boson cross-section, the dominant subprocess involving a
diphoton final state is gg fusion (87%), followed by V BF , V H (5%), and tt̄H (1%). As
the signal to background ratio is small, and varies according to the Higgs subprocess,
the diphoton events are assigned to 12 exclusive categories, with each category opti-
mized to maximize the expected signal strength of the subprocess (for example, the
presence of leptons for V H, two widely separated jets for V BF , etc). The expected
Fig. 9.59 The diphoton mass spectrum measured in ATLAS in the 7 and
8 TeV data. Each event has been weighted by the signal to background
ratio for the category to which it belongs. The solid red curve shows the
sum of the signal (for a Higgs of mass 125.4 GeV) and background fits.
Reprinted with permission from Ref. [29].
diphoton mass resolution also differs among the various categories. These exclusive
categories can be combined into the four channels discussed above (gg fusion, V BF
(7%), V H and tt̄H).
For each category of event, a weight is determined based on the expected signal
to background for that category, where the S/B ratio is determined from SM theory.
The (S/B) weighted distribution is shown in Fig. 9.59, where a clear bump is evident
at about a mass of 125 GeV. The width of the bump is entirely determined by the
resolution of the photon measurements, as the intrinsic width for a 125 GeV Higgs
boson is on the order of 4 MeV. The signal strengths for the diphoton channels are
shown in Fig. 9.64.
Fig. 9.60 (left) The 4 lepton mass spectrum measured in ATLAS in the 7
and 8 TeV data. (right) A plot of the sub-leading vs leading dilepton pair
masses, where the 4 lepton mass was required to be between 120 and 130
GeV. The dominant probability for the leading pair to be at the Z pole
mass can be observed. Reprinted with permission from Ref. [42].
to the Z mass being termed the leading dilepton pair with the second dilepton pair
being formed from the remaining two leptons. The leading pair is required to have
a mass between 50 and 106 GeV. Electromagnetic radiation from the leptons can
often be measured in the electromagnetic calorimeters and used to correct the lepton
momentum. Collinear photons are associated with muons and non-collinear photons
are associated with either electrons or muons. Both track and calorimeter isolation
requirements are applied to the leptons, after the event-by-event subtraction of the
underlying event and pileup energy.
As for the diphoton final state, the ZZ ∗ event candidates are assigned to cate-
gories (4 in this case; high mass 2 jets (VBF-enriched), low mass 2 jets (VH-enriched),
additional lepton (VH-enriched), and gg fusion) in order to optimize the Higgs cross-
section determination. For the V BF category the dijet mass is required to be above
140 GeV; for the low mass 2 jets V H category, the dijet mass is required to be between
40 and 130 GeV.
The resultant 4-lepton mass distribution is shown in Fig. 9.60 (left), where a clear
(but statistically limited) peak is observed at about 125 GeV [42]. Note also the pres-
ence of the 4-lepton decay of the Z boson at 90 GeV (useful for calibration) and the
large increase of the 4-lepton cross-section once both Z’s can be on mass-shell, above
200 GeV. The mass distribution for the two dilepton pairs is shown in Fig. 9.60 (right),
where the dominance of the leading pair to be at the Z-pole mass is evident. The signal
strengths for the ZZ ∗ channels are shown in Fig. 9.64.
discussed in Section 8.7, this was the primary search channel for the Higgs boson
at the TEVATRON. The measurement of the associated production of a vector boson
(W/Z) with a Higgs boson decaying into a bb̄ pair allows for the direct measurement
of the coupling of the Higgs boson to b-quarks. There is still a significant background
from V bb̄ production, itself not perfectly understood, as discussed in Section 9.3.5.
In the Higgs boson analysis in this channel, the events are first categorized ac-
cording to the number of leptons (0, 1, and 2), jets (2 or 3, with transverse momenta greater than 20 GeV and (absolute) rapidity less than 2.5, inside the b-tagging range), and b-tagged jets. Events are rejected if any additional jets with transverse momenta greater than 30 GeV are found with rapidity greater than 2.5 (in order to reduce tt̄
backgrounds). Dedicated boosted decision trees are then constructed for each channel,
with the boosted decision trees trained to separate the associated production signal
from the backgrounds. The weighted event distribution is shown in Fig. 9.62. The
signal strengths for the V H(→ bb̄) channels are shown in Fig. 9.64 [48]. There is an observed significance of 1.4 standard deviations, with an expected significance of 2.6.
Fig. 9.62 The distribution of mbb̄ after the subtraction of all backgrounds
(except for diboson) in ATLAS in the 8 TeV data. The contributions have
been summed weighted by their values of expected Higgs signal to back-
ground. Reprinted with permission from Ref. [48].
The Higgs boson mass measurements by ATLAS and CMS for these two final states are shown in Fig. 9.65 [36]. It
would be potentially interesting if the 4-lepton and diphoton final states had different
masses, but the hierarchy between the 2 states is opposite for ATLAS and CMS, pointing
to statistical fluctuations as the root cause. The measurements are all consistent with
each other, allowing a combined determination of the Higgs boson mass from Run 1 of
mHiggs = 125.09 ± 0.21 (stat) ± 0.11 (syst) GeV. Note that the dominant error is statistical;
both the statistical and systematic errors should significantly improve in Run 2 of the
LHC. This particular value for the Higgs boson mass has interesting implications for
the (meta)stability of the vacuum.
As mentioned previously, the width of a 125 GeV Higgs boson is too small to
be measurable, on the order of 4 MeV. However, as discussed in Section 4.8, the
high mass ZZ ∗ region is sensitive to Higgs boson production through off-shell and
background interference effects. Amazingly enough, approximately 15% of Higgs boson
production in the ZZ ∗ channel occurs above the ZZ threshold. The cross-section for
H → ZZ ∗ is comparable to the cross-section for continuum production of gg → ZZ
(with which it destructively interferes) above this threshold. The dominant sub-process
for high mass ZZ final states is through q q̄ → ZZ. The leading order cross-section for
gg → ZZ is through a box diagram as shown in Fig. 4.57. At the time that the 8 TeV
analyses were carried out, the NLO (2-loop) calculation for this process was beyond
Fig. 9.64 The joint ATLAS +CMS Higgs boson signal strengths, by final
state and production mode. Reprinted with permission from Ref. [54].
Fig. 9.65 The measured ATLAS and CMS Higgs boson masses separated
by final state. Reprinted with permission from Ref. [36].
the current technology. Given the progress in calculating two-loop integrals with two
massless and two massive external lines, this calculation has now been carried out.
The resulting QCD corrections increase the gg → ZZ cross-section by the order of
50-100%, depending on the exact scale choice [323].
The ATLAS analysis varies the possible K-factors for this process as part of the
systematic uncertainties (CMS assumes the same K-factors for both resonant and non-
resonant gg → ZZ production). The ratio of the off-shell to on-shell signal strengths is
directly proportional to the Higgs width. The 4-lepton mass distribution in the ATLAS
Higgs search is shown in Fig. 9.66 in the mass range from 220-1000 GeV, along with
the contributions from the Standard Model, including that of the Higgs boson [37].
The dashed line indicates the impact of an off-shell coupling 10 times the SM value.
Assuming that the relevant Higgs boson couplings are independent of the energy scale
of the Higgs production, the combination of the ZZ and WW results yields 95% confidence level upper limits for $\Gamma_H/\Gamma_H^{\rm SM}$ in the range 4.5-7.5 (with ATLAS
and CMS having similar results).
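The proportionality between the signal-strength ratio and the width can be made explicit with a schematic leading-order argument (a sketch added for orientation; the couplings $g_{ggH}$ and $g_{HZZ}$ are schematic labels rather than notation used elsewhere in the text):
\sigma_{\rm on\text{-}shell} \propto \frac{g_{ggH}^2\, g_{HZZ}^2}{m_H\,\Gamma_H}\,, \qquad \sigma_{\rm off\text{-}shell} \propto g_{ggH}^2\, g_{HZZ}^2\,, \qquad \Longrightarrow \qquad \frac{\mu_{\rm off\text{-}shell}}{\mu_{\rm on\text{-}shell}} \simeq \frac{\Gamma_H}{\Gamma_H^{\rm SM}}\,,
so that, for fixed SM-like couplings, an upper limit on the off-shell rate translates directly into an upper limit on the total width.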
Events containing a Higgs boson are jettier than events produced from the QCD continuum background. This
is expected as the bulk of Higgs boson production occurs through gg fusion into a
colour-singlet (Higgs boson) final state, a situation that leads to a large probability
for the production of additional jets, as discussed in Section 4.8.4.
The number of events is much more limited for the case of the 4-lepton final
state, but it is still useful to combine the two measurements within a common fiducial
volume, especially as they are consistent with each other. The Higgs boson transverse
momentum distribution for the diphoton, the 4-lepton and the combined final states
is shown in Fig. 9.69 [43]. The two final states produce similar results. The Higgs
boson transverse momentum distribution in the data is observed to be somewhat shifted
towards higher pT compared to the theoretical predictions.
The jet multiplicity distributions for the combined diphoton and 4-lepton final
states are shown in Fig. 9.70 for the inclusive (left) and exclusive (right) cases [43].
The measured cross-sections show somewhat jettier final states than predicted by the
various theory predictions for the gg fusion process shown.13 Also shown in the figures
(and added to the gg fusion results) are the contributions from V BF , V H, tt̄H and
bb̄H production. These contributions form a more significant fraction of the total
production as the jet multiplicity increases.
The transverse momentum distribution for the lead jet is shown in Fig. 9.71 (left),
compared to several theoretical predictions [43]. The ∆y distribution between the two
leading jets is shown in Fig. 9.71 (right) [35]. Due to limited 4-lepton statistics for this
observable, the data for this plot is only from the diphoton final state. The excess of
data over theory occurs more for small jet rapidity separations. Note that larger jet
separations are dominated by VBF production.
A useful summary of the diphoton channel differential cross-sections, compared to a
number of theoretical predictions, is shown in Fig. 9.72, from Ref. [35]. For reference,
13 The differential distributions reported by the CMS collaboration in [671] are closer to the SM
predictions.
Fig. 9.70 The ATLAS Higgs boson jet multiplicity distribution, combining
the diphoton and ZZ ∗ channels. Reprinted with permission from Ref. [43].
the NNLO Higgs+≥ 1 jet fiducial cross-section (not shown on the plot) is 10 fb
(including top quark mass effects) [325].
Fig. 9.71 (left) The inclusive lead jet transverse momentum distribution
measured in ATLAS in the 8 TeV data from Ref. [43]. (right) The dijet rapidity separation between the two leading jets for Higgs + ≥ 2 jets. Reprinted
with permission from Ref. [35].
degree to which this was possible). Good agreement was observed between fixed-order predictions and predictions involving parton showering/resummation for inclusive observables, such as the lead jet pT distribution for H+ ≥ 1 jet. That is, contrary to some conventional wisdom, parton showers and/or resummation do not affect fixed-order results for suitably inclusive observables.
9.6.7 Spin-parity
9.7 Outlook
9.7.1 Standard Model physics at the LHC
Two key aspects of Run II (and beyond) physics at the LHC relate to higher precision
and extended kinematic reach. No clear signs of beyond Standard Model physics have
been discovered (to date) at the LHC. Searches for new physics will necessarily require
precision measurements of SM processes (especially of Higgs boson production) seeking
deviations that may indicate the presence of new physics. The higher running energy
Fig. 9.74 (left) One of the diagrams for VBS. (right) The |∆yjj | distribu-
tion for events passing the cuts for the inclusive region. The cut in |∆yjj |
denoting the VBS region is indicated. The W ± W ± jj prediction has been
normalized to the Standard Model value. Reprinted with permission from
Ref. [25].
and increased luminosity also result in greater access to the TeV range, in which signs
of new physics may be more obvious.
In most cases, increased precision means the calculation of a process to NNLO in
QCD, and often NLO in EW. There has been great progress in the calculation of LHC
processes to NNLO and beyond, as detailed both in this book and in workshops such as
Les Houches [161]. The technology for 2 → 2 processes at NNLO is already relatively
mature, and the reasonably near future should see NNLO being extended to 2 → 3
processes. Higgs boson production through gg fusion has already been calculated at
N3 LO and the next obvious extension is to carry out the calculation of Drell-Yan
production to this order. So far, only PDFs at NNLO are available, but for ultimate
precision a determination of PDFs at N3 LO may be necessary. Of course, this also
requires the calculation of the processes in global PDF fits at this order. At this level
of precision, NLO EW corrections can become equally important as those from NNLO
(and above) QCD. Above the TeV scale, EW effects are often not subtle, and the
radiation of W and Z bosons will compete with QCD gluon radiation. In most cases,
the NNLO QCD and NLO EW calculations may factorize, but in some instances mixed
corrections may be required, especially if the QCD corrections are large. In general,
the higher the order of calculation, the smaller the scale dependence will be. However,
some care will still have to be taken with regards to the choice of a physical scale for
the process, especially in the presence of a complex final state.
In the TeV range, photon-initiated processes will become increasingly important,
for example for high-mass W boson pair production. Most of this book has concen-
trated on fixed-order calculations, but the TeV range also means that high-x effects will
become important and threshold resummation corrections will be crucial to calculate.
From the experimental side, larger data samples by definition mean smaller statis-
tical errors, and often smaller systematic errors due to an improved knowledge of the
experimental measurement. A high integrated luminosity necessarily requires a high
instantaneous luminosity and the presence of many pileup events. This will in many
cases degrade the quality of the experimental measurements, and require the increase
of trigger/analysis thresholds. It will still be crucial, however, to have the ability to
trigger on standard benchmarks such as W and Z boson production. Even though the
experimental environment may be daunting, the ATLAS and CMS experiments have
been designed to operate in those conditions.
The culmination of Run 1 at the LHC was the discovery of the Higgs boson by ATLAS
and CMS. A precision determination of its properties, though, requires the higher
energy and integrated luminosity of Run 2 (and beyond). Data samples of 300 fb−1
are projected for each experiment in Run 2 at a centre-of-mass energy of 13-14 TeV.
Through the full running of the LHC, an integrated luminosity of 3000 fb−1 can be
expected. Cross-sections for Higgs boson production subprocesses are a factor of 2-4
times higher than at 8 TeV, as seen in Fig. 4.53. In this section, we briefly describe
the experimental improvements in precision expected for the full Run 2 (and beyond)
data sample, and the needed theory improvements to best match those experimental
improvements. The discussion roughly follows that in Refs. [161, 439].
The largest production cross-section at 13 TeV remains that of the gg fusion sub-
process. As discussed in Section 9.6, the current experimental uncertainties for this
subprocess are on the order of 20-40%. Theoretically, the uncertainty was formerly on
the order of 15%, with the scale and PDF+αs (mZ ) uncertainties both having roughly
equal values. The calculation of the gg Higgs boson production to NNNLO has reduced
the scale uncertainty to 2-3%, while the recent PDF4LHC combination has resulted in
a PDF uncertainty of the same order. The recommendation for the αs (mZ ) variation
from the PDF4LHC combination results in a similar level of uncertainty on the gg
fusion cross-section.
The experimental uncertainty is expected to decrease to less than 10% (4%) in the
300 fb−1 (3000 fb−1 ) data sample. This may require an improvement of the theoretical
accuracy for the production cross-section, with a knowledge of the combined NNLO
QCD+EW contributions retaining the top quark mass effects [161].
With a data sample of 300 fb−1 , a very rich program of measurements of Higgs
boson + jets final states is possible. There is a comparable (or larger) increase in
the Higgs boson + jet cross-section as observed for inclusive Higgs production. As the
production proceeds primarily through a top quark loop, it is important to probe inside
that loop to understand the dynamics of production, and in particular to determine if
any BSM particles may contribute to the loop. Each experiment will have on the order
of 3000 events (in the diphoton channel) with a jet with transverse momentum above
the top quark mass. With 3000 fb−1 , the reach in jet transverse momentum is over
700 GeV. At jet transverse momenta of this order, there is a very large suppression
(over a factor of 5) of the Higgs+jet cross-section due to finite top-mass effects (over
the effective theory). Even without the presence of new physics, there may be new
dynamics present at these scales. To properly understand the physics of Higgs boson
+ jet production, it is necessary to calculate the finite top quark mass effects at NLO
QCD+NLO EW.
Higgs boson final states with at least 2 jets are crucial to understand Higgs boson
couplings, especially the coupling to vector bosons through the vector boson fusion
process. With a 300 fb−1 (3000 fb−1 ) data sample, this coupling can be determined
to the order of 5% (2-3%). On the theoretical side, this may require the calculation
of both vector boson fusion and gluon gluon fusion production of the Higgs boson +
2 or more jets to be known to NNLO QCD and for the finite top mass effects to be
known to NLO QCD+NLO EW.
Higgs boson couplings to b-quarks are known primarily through associated pro-
duction (V H). Currently, this coupling is known to the order of 50%. With 300 fb−1
(3000 fb−1 ), this can be improved to 10-15% (7-10%). On the theoretical side, one
bottleneck has been the knowledge of the gg → HZ process, currently known only to
LO. This process has a sizable contribution to the total rate, and contributes signifi-
cantly to the total theoretical uncertainty. It is desirable to combine Higgs production
and decay to the same order, NNLO in QCD and NLO in EW.
With 300 fb−1 (3000 fb−1 ), the top quark Yukawa coupling should be measured
to the order of 15% (5-10%) (through the tH and tt̄H subprocesses). Currently, tH production is only known to LO in QCD and tt̄H is known to NLO. For a full understanding of this coupling, it is desirable to know both cross-sections (with top quark decays) to NLO
QCD including NLO EW effects.
Fiducial cross-sections have been measured for several of the Higgs boson (+jets)
channels in Run I. With the higher statistics of Run II, this will happen for more final
states. For most channels in Run I, however, the end result has been measured signal
strengths and multiplicative coupling modifiers. In Run II there will be a transition
to simplified template cross-sections (STCS), as discussed in [161]. The primary goals
of the STCS method are to maximize the sensitivity while minimizing the theory
dependence of the measurement. This entails: the combination of decay channels, the
measurement of cross-sections rather than signal strengths, and the determination of
cross-sections for specific production modes. The physics interpretation (and model-
dependence) is left for the final stage of the analysis.
10
Summary
The resolution of the LHC detectors, both calorimetry and tracking, is superior to
that of the CDF and DØ detectors. Tracking, in particular, has higher precision and
extends to a higher rapidity than possible at the TEVATRON. The improvement in com-
puting power has meant that detailed event simulations, tracing the electromagnetic
and hadronic showers, are possible for a variety of physics processes, allowing a better
understanding of the detector response.
The theoretical tools and analysis techniques available to LHC physicists are for the
most part more sophisticated than those available at the TEVATRON. Fixed-order pre-
dictions at NLO (interfaced to parton shower programs) are available for basically any
reasonable process, and NNLO calculations for 2 → 2 processes have reached a degree
of maturity, with calculations of 2 → 3 processes to be expected. The gg → H process
has been calculated to NNNLO and similar calculations for Drell-Yan production are
not far off. The higher order calculations have resulted in smaller theoretical uncer-
tainties from scale variations. Since it is possible that new physics may not show up
as a clear peak on a distribution, but rather in subtle variations from SM predictions,
precision comparisons are crucial for discovery/exclusion of BSM physics.
In the precision physics region (50–500 GeV), PDF uncertainties are small for most
1 The authors hold out hope that new physics will indeed be discovered at the LHC.
2 A prediction for the underlying event is present in every parton shower Monte Carlo program,
but not in fixed-order calculations. For these, non-perturbative corrections must be calculated by the
experimenters to allow comparison of parton-level predictions to hadron level observables.
The experiments can identify pileup jets using the jet tracking information, and reject jets if too much of the jet energy
arises from pileup contributions. Alas, this is possible only for jets produced in the
precision tracking region (|y| ≤ 2.5) and pileup jets are much more of a problem at
more forward rapidities. Unfortunately, this is a region where jet identification can be
crucial, as for example in measuring the tagging jets in VBF Higgs production. The
problem will only get worse as the instantaneous luminosity increases. The solution is
to provide more information to discriminate between pileup and hard scatter jets (such
as timing for the forward calorimetry), or to simply raise the jet transverse momentum
cutoff for forward jets.
at 8 TeV to a much more significant 15% at 100 TeV. More study would clearly be
required in order to obtain a true estimate of the impact of such events on the physics
that could be studied at higher energies, but these simplified arguments can at least
give some idea of the potentially troublesome issues.
As an example of the behaviour of less-inclusive cross-sections at higher energies, Fig. 10.2 shows predictions for H + n jets + X cross-sections at various values of $\sqrt{s}$
and as a function of the minimum jet transverse momentum. The cross-sections are all
normalized to the inclusive Higgs production cross-section, so that the plots indicate
the fraction of Higgs events that contain at least the given number of jets. The inclusive
Higgs cross-section includes NNLO QCD corrections, while the 1- and 2-jet rates are
computed at NLO in QCD. All are computed in the effective theory with mt → ∞.
The extent to which additional jets are expected in Higgs events is strongly depen-
dent on how the jet cuts must scale with the machine operating energy. For instance,
consider a jet cut of 40 GeV at 14 TeV, a value in line with current analysis projections.
For this cut, approximately 20% of all Higgs boson events produced through gluon fu-
sion should contain at least one jet. The fraction with two or more jets is expected to
be around 5%. To retain approximately the same jet compositions at 100 TeV requires
only a modest increase in the jet cut to 80 GeV.
However, this analysis is not the full story, due to effects induced by a finite top-
mass that are neglected in the effective theory. This is illustrated in Fig. 10.3, which
shows the rates for Higgs production in association with up to three jets, taking proper
account of the top-mass, as a function of the minimum jet pT . As shown in the lower
panel, a comparison of these results with those obtained in the effective theory reveals
significant differences. Even for moderate jet cuts of around 50 GeV a finite top-mass
results in differences in the H + 3 jet rate of approximately 30%. For significantly
harder jet cuts the effective theory description clearly fails spectacularly. Although
this should not be a great surprise, given the energy scales being accessed, it is a
useful reminder of the limitations of approximations that are commonly used at the
LHC. Such approximations must clearly be left behind in order to obtain meaningful
predictions for relatively common kinematic configurations at a 100 TeV collider.
Of course, the differences that exist between the theoretical predictions at 14 TeV
and 100 TeV offer significant opportunities that are only beginning to be explored. The
event rates will be sufficiently high that analysis cuts can be devised to take advantage
of the unique kinematics at a 100 TeV collider, rather than simply “scaling up” the
types of analyses currently in use at the LHC. For instance, substantially harder cuts
on the transverse momenta of jets will lead to a predominance of boosted topologies,
which can be analysed with the types of jet substructure techniques that are still
relatively new at the LHC, cf. Section 9.2.4.
Experimental uncertainties have reached the few-percent level for quantities such as the transverse momentum of single photons or Z bosons. Such exquisite measurements have thrown down the gauntlet to the
theoretical community.
Of course, some of these challenges have been foreseen. Going from the early days
of the TEVATRON to the build-up to the LHC saw a sea-change in the quality of pertur-
bative predictions. Rather than being limited to LO predictions for 2 → 2 processes, by
the advent of the LHC NLO predictions were available for almost all final states of im-
mediate interest. At the beginning of Run II of the LHC, even NNLO calculations have
matured to the level of providing differential predictions for events containing jets. The
pace of these developments has been so fast that it is easy to take for granted a level
of sophistication that many never believed would have been achieved by now. The
availability of N3 LO predictions for Higgs production, multiple examples of NNLO
calculations matched to a parton shower, and the ability to go from a Lagrangian to
NLO-accurate showered events, are just a few such examples. Such progress, to a level
of precision that in some cases borders the ridiculous, may leave the reader wondering
if challenges remain. Yet, undeniably, much work lies ahead.
In terms of fixed-order descriptions, the march to higher orders is not yet over.
It is not clear whether existing techniques for performing NNLO calculations will be
able to be applied to more complex final states. While continued improvements in
computer processing power will certainly help, it is almost certain that alternative,
superior approaches have yet to be devised. Similar arguments apply to the case of
N3 LO predictions, where extensions to the method that could provide more differential
information, or perhaps be suitable for more general processes, are far from obvious.
As highlighted in earlier chapters, the presence of substantial electroweak corrections
at high energies is just beginning to be probed. As the LHC becomes more sensitive
to even higher energies, the inclusion of higher-order electroweak effects will become
mandatory in order to retain theoretical predictions of sufficient precision. A simulta-
neous expansion in both parameters, i.e. correctly including corrections that contain
a mix of strong and electroweak couplings, will also become important. At present
no complete calculation of such effects exists, even for a single process. In addition, a
number of approximations are routinely used to simplify existing calculations. Exam-
ples include neglecting quark masses, working in the limit mt → ∞, and considering
production and decay stages of resonance production separately. These will all need
to be revisited, for various physics processes, in the coming years.
As improved fixed-order predictions become available it will be important that
their effects are included in parton shower predictions. This will enable the improved
modelling of the computed processes to be properly taken into account across a wide
range of experimental analyses. The parton showers themselves will be the subject of
greater scrutiny as they are held up to the light of experimental data that is ever more
precise. This may reveal deficiencies in our modelling, either related to an incomplete
treatment of towers of logarithms, or simply from an unavoidable choice in how the
shower is constructed. Further subtleties, related to non-perturbative effects such as
hadronization, fragmentation, and even the quality of the factorization picture itself,
will eventually require new theoretical understanding as they become the dominant
sources of theoretical uncertainty.
Finally – and, perhaps, most critically – it is important to not lose sight of the fact
that the ultimate goal of this program is to extract the most possible information from
the data that the LHC provides. To this end it is imperative to also continually develop
new tools and novel approaches for doing just that. An excellent example of this is
the development of jet substructure techniques, which have already found applications in top-tagging, jet discrimination, and a host of other analysis methods besides. No
doubt there are many more insightful theoretical observations of this nature waiting
to be made in the years ahead.
Appendix A
Mathematical background
\frac{\Gamma^2(1-\varepsilon)}{\Gamma(1-2\varepsilon)} = 1 - \frac{\varepsilon^2\pi^2}{6} + {\cal O}(\varepsilon^3) . \qquad (A.5)
In other words,
\int_0^1 dx\, f(x)\,[g(x)]_+ = \int_0^1 dx\,\left[f(x)-f(1)\right] g(x) . \qquad (A.8)
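As a minimal numerical illustration of Eq. (A.8) in Python, one can take $g(x) = 1/(1-x)$ and the arbitrary test choice $f(x) = x^N$, for which the right-hand side is a finite, regular integral (the resulting harmonic-number value also appears later in Eq. (A.25)):

import numpy as np
from scipy.integrate import quad

N = 3
# RHS of Eq. (A.8): the subtracted integrand (x^N - 1)/(1 - x) is regular at x = 1
rhs, _ = quad(lambda x: (x**N - 1.0) / (1.0 - x), 0.0, 1.0)
print(rhs, -(1.0 + 1.0/2.0 + 1.0/3.0))   # both ~ -1.8333, i.e. minus the harmonic number H_3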
A.1.3 Dilogarithm
The dilogarithm (or Spence’s) function Li2 (x) is defined by,
{\rm Li}_2(x) = -\int_0^x dy\,\frac{\ln(1-y)}{y} , \qquad (A.9)
{\rm Li}_2(x) + {\rm Li}_2(1-x) = \frac{\pi^2}{6} - \log x\,\log(1-x) . \qquad (A.13)
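A quick numerical check of Eqs. (A.9) and (A.13), evaluating the dilogarithm directly from its defining integral (the value x = 0.3 is an arbitrary test point):

import numpy as np
from scipy.integrate import quad

def Li2(x):
    # Dilogarithm from its defining integral, Eq. (A.9); the integrand
    # -ln(1-y)/y tends to 1 as y -> 0, so the integral is regular.
    val, _ = quad(lambda y: -np.log(1.0 - y) / y, 0.0, x)
    return val

x = 0.3
lhs = Li2(x) + Li2(1.0 - x)
rhs = np.pi**2 / 6.0 - np.log(x) * np.log(1.0 - x)
print(lhs, rhs)   # both ~ 1.2155, verifying the reflection identity (A.13)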
A.1.4 Mellin transforms
The Mellin transform MN [f (x)] of a function f (x) is given by1
1 In the mathematical sciences, typically the integration limits are 0 and ∞, in which case the back-transformation reads
f(x) = \frac{1}{2\pi i}\int_{-i\infty}^{i\infty} dN\, x^{-N-1}\, f(N) . \qquad (A.14)
However, in what follows the back-transformation is often performed by merely identifying known expressions for Mellin transforms.
M_N[f(x)] = \int_0^1 dx\; x^N f(x) . \qquad (A.15)
There are two reasons why the technology of this transform is interesting. First
of all, it can be seen very quickly that convolutions of the type encountered in the
cross-section calculation involving PDFs factorize to trivial products of the Mellin
transforms. In order to see how this works, consider the convolution f ⊗ σ̂, which often occurs in the calculation of cross-sections,
\sigma = \int_0^1 dx\,(f\otimes\hat\sigma)(x) = \int_0^1 dx \int_x^1 \frac{dy}{y}\, f(y)\,\hat\sigma(x/y) = \int_0^1 dx\,dy\,dz\;\delta(x-yz)\, f(y)\,\hat\sigma(z) . \qquad (A.16)
M_N[(f\otimes\hat\sigma)(x)] = \int_0^1 dx\; x^N (f\otimes\hat\sigma)(x) = \int_0^1 dx\,dy\,dz\; x^N \delta(x-yz)\, f(y)\,\hat\sigma(z) = \int_0^1 dy\,dz\;(yz)^N f(y)\,\hat\sigma(z) = M_N[f(x)]\cdot M_N[\hat\sigma(x)] . \qquad (A.17)
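The factorization in Eq. (A.17) is easy to verify numerically; in the following Python sketch two arbitrary toy functions play the roles of f and σ̂ (the specific choices are illustrative only):

import numpy as np
from scipy.integrate import quad

N   = 3.0
f   = lambda y: y**0.5 * (1.0 - y)**2   # toy "PDF" (arbitrary)
sig = lambda z: (1.0 - z)**3            # toy partonic "cross-section" (arbitrary)

def mellin(func, N):
    # M_N[func] = int_0^1 dx x^N func(x), Eq. (A.15)
    return quad(lambda x: x**N * func(x), 0.0, 1.0)[0]

def conv(x):
    # (f (x) sig)(x) = int_x^1 dy/y f(y) sig(x/y), cf. Eq. (A.16)
    return quad(lambda y: f(y) * sig(x / y) / y, x, 1.0)[0]

lhs = quad(lambda x: x**N * conv(x), 0.0, 1.0)[0]   # M_N of the convolution
rhs = mellin(f, N) * mellin(sig, N)                 # product of the Mellin transforms
print(lhs, rhs)                                     # agree to quadrature accuracy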
Consider, as a useful example, the PDFs transformed to Mellin space, $M_N[f_{i/A}(x,\mu_F)]$, which, in parallel to the original PDFs, fulfil the DGLAP equation
\frac{d}{d\log\mu^2}\, M_N[f_{i/A}(x,\mu)] = \gamma(N,\alpha_s(\mu^2))\; M_N[f_{i/A}(x,\mu)] \qquad (A.18)
with the solution
M_N[f_{i/A}(x,\mu)] = \exp\left[-\int_{\mu^2}^{Q^2}\frac{dq^2}{q^2}\,\gamma(N,\alpha_s(q^2))\right] M_N[f_{i/A}(x,Q)] . \qquad (A.19)
Here, the γ(N, αs (µ2 )) are the anomalous dimensions and depend on the strong cou-
pling. Similar to all other quantities encountered so far they can be expanded as a
power series in αs as
\gamma(N,\alpha_s(\mu^2)) = \sum_{i=1}^{\infty}\left(\frac{\alpha_s(\mu^2)}{2\pi}\right)^{i}\gamma^{(i)}(N) . \qquad (A.20)
Concentrating on the first order term only, γ (1) (N ), and by direct comparison with
the DGLAP equation, cf. Eq. (2.31), it is clear that the moment related to the q → qg
splitting is given by
\gamma^{(1)}_{qq}(N) = M_N\!\left[P^{(1)}_{qq}(x)\right] = \int_0^1 dx\; x^N P^{(1)}_{qq}(x) = C_F \int_0^1 dx\; x^N \left[\frac{1+x^2}{1-x}\right]_+ = C_F \int_0^1 dx\; \frac{(1+x^2)(x^N-1)}{1-x} = C_F\,\xi(N) , \qquad (A.21)
a finite number.
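The finiteness of this moment can be checked directly; the following Python sketch evaluates $C_F\,\xi(N)$ of Eq. (A.21) for a few values of N (the choice of values is arbitrary):

import numpy as np
from scipy.integrate import quad

CF = 4.0 / 3.0

def gamma_qq(N):
    # C_F * int_0^1 dx (1+x^2)(x^N - 1)/(1-x), cf. Eq. (A.21);
    # the integrand is regular at x = 1 because x^N - 1 vanishes there.
    integrand = lambda x: (1.0 + x * x) * (x**N - 1.0) / (1.0 - x)
    return CF * quad(integrand, 0.0, 1.0)[0]

for N in (1, 2, 3, 4):
    print(N, gamma_qq(N))   # finite (and negative) for every N >= 1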
Furthermore, logarithms can be directly identified by analysing the analytical structure in the Mellin parameter N, by identifying poles in 1/(N − N_0). A straightforward way to see this is by realising that
\frac{d^L}{dN^L}\, M_N[f(x)] = M_N\!\left[\log^L(x)\, f(x)\right] , \qquad (A.22)
which follows directly from the definition of the Mellin transform when realising that x^N = \exp(N\log x). In a similar way,
M_N\!\left[x^k f(x)\right] = M_{N+k}[f(x)] . \qquad (A.23)
For further reference, in Eq. (A.25) Mellin transforms of different relevant functions
are listed.
M_N[1] = \frac{1}{N+1}
M_N\!\left[\left(\frac{1}{1-x}\right)_+\right] = -\sum_{k=1}^{N}\frac{1}{k} \qquad (A.25)
M_N\!\left[\left(\frac{\log(1-x)}{1-x}\right)_+\right] = \frac{1}{2}\left\{\psi'(N) + \zeta(2) + \left[\psi(N) + \gamma_E\right]^2\right\}
The generic identity for combining propagators into a single denominator through the
use of Feynman parameters is,
\frac{1}{A_1^{\nu_1}\cdots A_n^{\nu_n}} = \frac{\Gamma(\nu)}{\prod_i\Gamma(\nu_i)} \int_0^1 \left(\prod_i dx_i\right)\delta\!\left(\sum_i x_i - 1\right) \frac{\prod_i x_i^{\nu_i-1}}{\left[\sum_i x_i A_i\right]^{\nu}} , \qquad \nu = \sum_i \nu_i . \qquad (A.27)
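For the simplest case n = 2, ν1 = ν2 = 1, Eq. (A.27) reduces to $1/(A_1A_2) = \int_0^1 dx\, [xA_1 + (1-x)A_2]^{-2}$, which the following short Python sketch verifies for arbitrary positive denominators (the numerical values are illustrative only):

from scipy.integrate import quad

A1, A2 = 2.7, 0.4   # arbitrary positive "denominators"

lhs = 1.0 / (A1 * A2)
rhs = quad(lambda x: 1.0 / (x * A1 + (1.0 - x) * A2)**2, 0.0, 1.0)[0]
print(lhs, rhs)     # both ~ 0.9259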
The use of the Feynman parameterization of loop integrals given in Eq. (A.27) leads
directly to integrals that take the form,
\int\frac{d^D\ell}{(2\pi)^D}\;\frac{(\ell^2)^k}{(\ell^2-\Delta+i\varepsilon)^n} . \qquad (A.28)
where $\ell_E = \sqrt{\ell_0^2 + |\vec\ell\,|^2}$. Since the integrand is only a function of $\ell_E$ the angular integrations can be performed immediately by using
\int_0^\pi d\theta\,\sin^n\theta = \sqrt{\pi}\;\frac{\Gamma\!\left(\frac{n+1}{2}\right)}{\Gamma\!\left(\frac{n+2}{2}\right)} . \qquad (A.30)
The final integral over `E can be cast into the form of a beta-function integral,
\int d\ell_E\;\frac{\ell_E^{D-1}\,(-\ell_E^2)^k}{(-\ell_E^2-\Delta+i\varepsilon)^n} = \frac{(-1)^{n-k}}{2}\,(\Delta-i\varepsilon)^{D/2-n+k}\int_0^1 dx\; x^{n-k-D/2-1}(1-x)^{D/2+k-1} \qquad (A.32)
Using the result for such an integral given in Eq. (A.6), together with Eq. (A.31), one
arrives at the identity
\int\frac{d^D\ell}{(2\pi)^D}\;\frac{(\ell^2)^k}{(\ell^2-\Delta)^n} = \frac{i\,(-1)^{n-k}}{(4\pi)^{D/2}}\;\frac{\Gamma(D/2+k)}{\Gamma(D/2)}\;\frac{\Gamma(n-k-D/2)}{\Gamma(n)}\;(\Delta-i\varepsilon)^{D/2-n+k} . \qquad (A.33)
where $\slashed{p}$ is not necessarily Hermitian and m does not need to be real, while in any case
p2 = m2 must be fulfilled. Additionally, the spinors fulfil the spin projection identity
Arbitrary, and potentially massive, spinors can be expressed in terms of these chiral
spinors as
u(p,\lambda) = \frac{\slashed{p}+m}{\sqrt{2p\cdot k_0}}\; w(k_0,-\lambda) , \qquad v(p,\lambda) = \frac{\slashed{p}-m}{\sqrt{2p\cdot k_0}}\; w(k_0,-\lambda) . \qquad (A.39)
The relations above also hold true for p2 < 0 and imaginary m.
For the construction of conjugate spinors the proper definitions
ū = u† γ 0 and v̄ = v † γ 0 , (A.40)
applied on the spinors obtained so far do not lead to the correct E.o.M.
\bar u(p,\lambda) = \bar w(k_0,-\lambda)\,\frac{\slashed{p}+m}{\sqrt{2p\cdot k_0}} , \qquad \bar v(p,\lambda) = \bar w(k_0,-\lambda)\,\frac{\slashed{p}-m}{\sqrt{2p\cdot k_0}} . \qquad (A.44)
This yields the following products for massless spinors
Terms of the form $\bar u(+,p_1)\,u(+,p_2)$ are proportional to the masses of the spinors. The spinors defined in this way fulfil the completeness relation
1 = \sum_\lambda \frac{u(p,\lambda)\,\bar u(p,\lambda) - v(p,\lambda)\,\bar v(p,\lambda)}{2m} . \qquad (A.46)
holds true, which allows propagator numerators to be rewritten as spinor products. This allows terms of the form $\bar u(p_1)\slashed{k}u(p_2)$ to be rewritten by decomposing $\slashed{k}$ into spinors,
resulting in the spinor products of Eq. (A.45). In addition, terms of the form (ūγ µ u) ×
(ūγµ u) are dealt with by employing Chisholm identities, yielding, again, spinor prod-
ucts of the type ūu from Eq. (A.45). Furthermore, a spinor representation of the polarization vectors for external particles may become necessary, unless they are explic-
itly constructed and contracted into Lorentz-invariant scalar products or similar. To
achieve this, first for massless vector particles like gluons or photons, it is clear that
any representation must satisfy the identities
\epsilon^\mu(p,\lambda)\, p_\mu = 0 , \qquad \sum_{\lambda=\pm}\epsilon_\mu(p,\lambda)\,\epsilon^*_\nu(p,\lambda) = -g_{\mu\nu} + \frac{q_\mu p_\nu + q_\nu p_\mu}{p\cdot q} , \qquad (A.48)
and integrating over the solid angle of q1 in the rest-frame of p. The apparent problem
is that this only allows for unpolarized cross-section calculations, and the addi-
tional integration renders the direct construction of polarization vectors potentially
advantageous in terms of computing speed.
There are two inner products in spinor space, for undotted and for dotted indices,
namely
\langle\zeta\eta\rangle = \zeta_a\eta^a , \qquad [\zeta\eta] = \zeta_{\dot a}\eta^{\dot a} = \langle\zeta\eta\rangle^* . \qquad (A.54)
In the literature it has become customary to replace the spinors by their momentum
argument or its label; for example ζa (k) = |ki and ζȧ (k) = |k].
With four-vectors residing in the D( 21 , 12 ) representation, they are constructed
using two spinors and the four-vectors of Pauli matrices
\sigma^{\mu\,\dot a b} = \left(\sigma^0,\,\vec\sigma\right) \quad\text{and}\quad \sigma^\mu_{a\dot b} = \left(\sigma^0,\,-\vec\sigma\right) \qquad (A.55)
and where φk = argk⊥ . There are of course a few choices that can be made: first
of all, the spinor representation above can be multiplied with a freely chosen total
phase exp(iθ). In addition, of course, the choice of which axis defines the ±-direction,
is arbitrary; instead of using the z-axis it would be possible to choose another axis,
corresponding to a rotation of the Pauli matrices. Irrespective of such details in the
spinor definition, four-vectors are given by
k^\mu = \sigma^\mu_{\dot a b}\,\zeta^{\dot a}(k)\,\zeta^{b}(k) . \qquad (A.58)
Massive four-vectors can be constructed by decomposing them into two massless ones,
introducing yet another gauge degree of freedom. As a by-product of this, for massless
vectors ki and kj
2\, k_i\cdot k_j = \langle ij\rangle\,[ij] , \qquad (A.59)
so that a Lorentz product can be cast as a Dirac product. The fact that squared
scattering amplitudes can always be expressed as Lorentz invariants translates into
an independence on all choices made in constructing the underlying spinors, thus
providing a welcome check of any calculation.
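The relation in Eq. (A.59) is simple to check numerically. The Python sketch below uses one common explicit choice of two-component spinors for massless momenta with positive energy and $k^+ = E + k_z \neq 0$; since phase conventions for the square bracket differ between references, only the modulus is compared:

import numpy as np

def ang_spinor(k):
    # Two-component spinor for a massless momentum k = (E, kx, ky, kz), E > 0,
    # in one common convention (assumes k+ = E + kz != 0).
    E, kx, ky, kz = k
    kplus = E + kz
    kperp = kx + 1j * ky
    return np.array([np.sqrt(kplus), kperp / np.sqrt(kplus)])

def ang(ki, kj):
    # angle bracket <ij> built from the antisymmetric contraction of the spinors
    a, b = ang_spinor(ki), ang_spinor(kj)
    return a[0] * b[1] - a[1] * b[0]

def dot(ki, kj):
    return ki[0] * kj[0] - ki[1] * kj[1] - ki[2] * kj[2] - ki[3] * kj[3]

k1 = (5.0, 3.0, 0.0, 4.0)   # massless: 25 = 9 + 0 + 16
k2 = (3.0, 1.0, 2.0, 2.0)   # massless: 9 = 1 + 4 + 4
sq = np.conj(ang(k1, k2))   # [ij] equals the complex conjugate up to a phase convention
print(abs(ang(k1, k2) * sq), 2.0 * dot(k1, k2))   # both 8.0: equal in modulus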
External particles can then be represented in the following way:
Since they are orthogonal and normalised to 2p̂0 , the resulting Dirac spinors above
are normalised to ±2m. In most cases, massless particles are considered, which
leads to a massive simplification of the Dirac spinors, since only the upper or
lower components survive. In such a case, the short-hand notation
u_\pm(k) = v_\mp(k) = |k^\pm\rangle \quad\text{and}\quad \bar u_\pm(k) = \bar v_\mp(k) = \langle k^\pm| \qquad (A.64)
\epsilon^\mu_\pm(p,q) = \pm\frac{\langle q^\mp|\,\gamma^\mu\,|\tilde p^\mp\rangle}{\sqrt{2}\;\langle q^\mp|\,\tilde p^\pm\rangle} \qquad (A.66)
which is exactly the definition of the massless spinors there; thus, up to a po-
tentially different phase convention, the objects |ki and hk| are identical to the basic
spinors ū and u. Therefore, the spinor products ūu of Eq. (A.45) become
A.3 Kinematics
In most cases, there is a special axis defined through the geometry of particle physics
experiments, namely the beam axis — the axis parallel to the incoming beams. In
most experiments (BaBar is a famous exception), this axis is uniquely defined.2 Usu-
ally, this beam axis is chosen to be the z-axis. In most cases, the position, where the
beams are brought to collision, is pretty well known; this knowledge is used to fix an
“origin” of the coordinate system. As long as the incoming beams are not polarized
there is thus only one particular axis, and in such cases events exhibit cylindrical sym-
metry w.r.t. the z axis. Usually the related azimuthal angle is denoted by φ. Naively,
then, other meaningful variables to determine momenta are the polar angle, typically
denoted by θ, and either the energy or the absolute value of the three-momentum
of the particle, where the latter two are connected by the on-shell condition above.
This set of parameters, say {p, θ, φ} is particularly useful for lepton-lepton colliders
where the longitudinal (w.r.t. the beam axis) momenta of the colliding partons — the
leptons — are well known. However, for collisions involving hadrons this ceases to be true, since usually only some of their constituents, the partons, interact. In such cases,
the energies and therefore the momenta of the incoming hadrons are known, but the
energies and momentum fractions of the respective constituents that interact are not
known a priori. Assuming that the initial partons of the process, the colliding hadron
constituents, move in parallel to the incoming hadrons, this implies that the overall
momentum of the colliding constituents along the beam axis is essentially unknown.
One could then characterise their collision by their centre-of-mass energy and by the
relative motion of their centre-of-mass system in the lab system. This relative motion
can be understood as a boost of the constituent system with respect to the lab or
beam system. Therefore, instead of using the polar angle θ in these cases it is more
useful to have a quantity with better properties under boosts along the beam axis.
Such a quantity is the rapidity, usually denoted by y. For a given four-momentum p,
it is defined as
1 E + pz
y = log . (A.70)
2 E − pz
It is simple to show that rapidity differences remain invariant under boosts along the
z axis. To do so, it is enough to prove that rapidities change additively under boosts.
Any boost is parameterized by a boost parameter γ and by defining an axis. Energy
2 Even in BaBar, where the beams cross under an angle in the laboratory system, a boost (Lorentz
transformation) can be applied, to find a system, where the beams collide “head on”.
and three-momentum along this axis (here for obvious reasons the z axis) then change
according to
E' = E\cosh\gamma - p_z\sinh\gamma , \qquad p_z' = p_z\cosh\gamma - E\sinh\gamma , \qquad (A.71)
and thus
y' = y - \gamma . \qquad (A.72)
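A quick Python illustration of Eqs. (A.70) to (A.72): boosting an arbitrary four-momentum along the z axis by a hyperbolic boost parameter shifts its rapidity by exactly that parameter (the numerical inputs are arbitrary):

import numpy as np

def rapidity(E, pz):
    # Eq. (A.70)
    return 0.5 * np.log((E + pz) / (E - pz))

def boost_z(E, pz, chi):
    # Eq. (A.71), with chi the boost parameter along the z axis;
    # transverse momentum components are unaffected and therefore omitted.
    return E * np.cosh(chi) - pz * np.sinh(chi), pz * np.cosh(chi) - E * np.sinh(chi)

E, pz, chi = 10.0, 3.0, 0.8
Eb, pzb = boost_z(E, pz, chi)
print(rapidity(Eb, pzb), rapidity(E, pz) - chi)   # identical, cf. Eq. (A.72)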
Unfortunately the rapidity does not provide a very intuitive interpretation, based on
geometry. Therefore, another quantity has been introduced, called the pseudo-rapidity,
commonly denoted by η. Employing the polar angle θ, it is defined through
\eta = -\log\tan\frac{\theta}{2} . \qquad (A.73)
It is worth stressing here that in the limit of massless particles, their rapidities and
pseudorapidities coincide. On the other hand, for massive particles, a finite rapidity
y may be achieved by a mere boost along the beam axis, leading to an angle θ = 0
w.r.t. this axis and hence an infinite pseudo-rapidity.
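These statements are easily verified explicitly; in the following Python sketch the massless case reproduces y = η, while a Z-like massive particle at the same momentum gives a visibly different rapidity (the kinematic inputs are illustrative):

import numpy as np

def y_and_eta(m, pT, pz):
    E     = np.sqrt(m * m + pT * pT + pz * pz)
    p     = np.sqrt(pT * pT + pz * pz)
    theta = np.arccos(pz / p)
    y     = 0.5 * np.log((E + pz) / (E - pz))   # Eq. (A.70)
    eta   = -np.log(np.tan(theta / 2.0))        # Eq. (A.73)
    return y, eta

print(y_and_eta(0.0, 20.0, 50.0))    # massless: y = eta (~ 1.65 each)
print(y_and_eta(91.2, 20.0, 50.0))   # massive: y (~ 0.51) differs from eta (~ 1.65)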
Having thus characterized the longitudinal component of the momentum, only the
transverse component needs to be described — which is typically achieved by quoting
its absolute value p⊥ and the azimuthal angle φ. For massless particles therefore the
four-momenta can be written as
This also allows the Lorentz-invariant phase-space element of one particle to be rewritten as follows:
\frac{d^4p}{(2\pi)^4}\,(2\pi)\,\delta(p^2-m^2)\,\Theta(p_0) = \frac{d^3p}{2E\,(2\pi)^3} = \frac{p_\perp\,dp_\perp\,dy\,d\phi}{2\,(2\pi)^3} . \qquad (A.77)
where the projectiles’ masses have been neglected. Then the total hadronic centre-of-
mass energy squared can be expressed as
S = 2P+ P− . (A.79)
where α and β are the plus and minus components of the momentum, respectively.
The rapidity of p is given by
y = \frac{1}{2}\log\frac{E+p_z}{E-p_z} = \frac{1}{2}\log\frac{p_+}{p_-} = \frac{1}{2}\log\frac{\alpha}{\beta} . \qquad (A.81)
In addition,
p^2 = \alpha\beta S - p_\perp^2 , \qquad (A.82)
which, together with p2 = m2 allows α or β to be eliminated through
where it has been assumed that the two incident partons p1,2 move along the positive
and negative z axis respectively, implying that they have zero transverse momentum
and that α2 = β1 = 0. This also allows α1 and β2 to be identified as the light-cone momentum fractions that the partons carry with respect to the incoming hadrons. It is
customary to identify these with the respective Bjorken-x,
x1 ≡ α1 and x2 ≡ β2 . (A.85)
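For a system produced at zero transverse momentum, Eqs. (A.81), (A.82) and (A.85) combine into $x_{1,2} = (m/\sqrt{S})\,e^{\pm y}$ for a system of mass m and rapidity y. A minimal Python sketch (the Z-boson mass and $\sqrt{S}$ = 13 TeV are illustrative inputs, and zero transverse momentum is assumed):

import numpy as np

sqrtS = 13000.0    # hadronic centre-of-mass energy in GeV (illustrative)
m, y  = 91.2, 2.5  # invariant mass and rapidity of the produced system (illustrative)

x1 = m / sqrtS * np.exp(+y)   # alpha_1, cf. Eq. (A.85)
x2 = m / sqrtS * np.exp(-y)   # beta_2,  cf. Eq. (A.85)

print(x1, x2)                    # ~ 0.085 and ~ 5.8e-4
print(x1 * x2 * sqrtS**2, m**2)  # consistency check: alpha*beta*S = m^2 at p_T = 0, Eq. (A.82)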
Appendix B
The Standard Model
The Standard Model (SM) of particle physics is arguably the most successful model
in physics to date, explaining practically all known phenomena on sub-nuclear-length
scales with only 19 parameters, which are being determined at ever-increasing preci-
sion. The construction of the model rests on one paradigm, namely gauge invariance.
The idea is the following: global phase invariance is the invariance of a La-
grangian under phase transformations of its fields. As an example consider the La-
grangian for a simple massive Dirac fermion ψ without interactions,
L = ψ̄ (i∂/ − m) ψ, (B.1)
The Dirac fields above play the role of matter and are therefore usually also called the
matter fields. Their invariance under the transformation actually guarantees that
they have associated, conserved charges.
The gauge principle introduces interactions to this Lagrangian by postulating that
the Lagrangian remains invariant even if the phase depends on space-time xµ , θ =
θ(x), or, in other words, that the Lagrangian is also local phase or gauge-invariant.
Naively, however, this is not the case, since
and the second, additional term must be compensated for. This is achieved by defining
a self-compensating gauge-invariant derivative,
A_\mu(x) \longrightarrow A'_\mu(x) = A_\mu(x) + \frac{1}{e}\,\partial_\mu\theta(x) . \qquad (B.5)
As a consequence
(\slashed{D}\psi)' = \slashed{D}\psi \qquad (B.6)
and similar for the ψ̄. The new field(s) $A_\mu$ are the gauge fields. Through their introduction, enforced by the gauge postulate of local phase invariance, the Lagrangian is modified and reads
{\cal L} = \bar\psi\,(i\slashed{D} - m)\,\psi = \bar\psi\,(i\slashed{\partial} + e\slashed{A} - m)\,\psi . \qquad (B.7)
Mass terms for the gauge fields of the form
{\cal L}_{\rm gm} = \frac{m^2}{2}\,A_\mu A^\mu \qquad (B.9)
violate gauge invariance and are thus forbidden.
Global and local phase transformations then mix the various components; this is
achieved through
\Psi \longrightarrow \Psi' = \exp\left(i\theta_a\tau^a\right)\Psi \qquad (B.11)
or, in component notation,
exhibiting the fact that the generators τa in fact are n × n matrices in the space of
the indices i, as is their exponential. The phases θa may or may not depend on x,
depending on whether the transformations above are local or global.
As a consequence, the gauge-invariant derivative reads
In all cases the gauge fields must be massless in order to guarantee gauge invariance.
The only thing necessary to fix now is the gauge group. In the case of the SM it
is given by a direct product of three groups, namely SU (3)c ⊗ SU (2)L ⊗U (1)Y .
The subscript c of the first group SU (3)c stands for “colour”, and it is the strong
interactions which are susceptible to colour charges. These interactions are enjoyed
by the quarks and mediated by the gluons. In the fundamental representation of this group the individual quarks are arranged in triplets with a colour index
running from 1 to 3, i ∈ [1, 3] such that a quark field q is given by three spinors ψq,i
\Psi_q = \left(\psi_{q,1},\,\psi_{q,2},\,\psi_{q,3}\right)^{T} . \qquad (B.16)
The three “colours” i that the quark fields can carry are often also denoted as “red”,
“blue”, and “green”, a reminiscence of the first days of colour television. The anti-
quarks of course carry anti-colour quantum numbers. The interactions mediated by
the gluons are able to change the colour of a quark. They are related to the eight
Gell-Mann matrices λa so that in the case of SU (3), τ a ≡ λa /2 where the latter
are given by
\lambda^1 = \begin{pmatrix} 0&1&0\\ 1&0&0\\ 0&0&0\end{pmatrix}, \quad \lambda^2 = \begin{pmatrix} 0&-i&0\\ i&0&0\\ 0&0&0\end{pmatrix}, \quad \lambda^3 = \begin{pmatrix} 1&0&0\\ 0&-1&0\\ 0&0&0\end{pmatrix},
\lambda^4 = \begin{pmatrix} 0&0&1\\ 0&0&0\\ 1&0&0\end{pmatrix}, \quad \lambda^5 = \begin{pmatrix} 0&0&-i\\ 0&0&0\\ i&0&0\end{pmatrix}, \quad \lambda^6 = \begin{pmatrix} 0&0&0\\ 0&0&1\\ 0&1&0\end{pmatrix}, \qquad (B.17)
\lambda^7 = \begin{pmatrix} 0&0&0\\ 0&0&-i\\ 0&i&0\end{pmatrix}, \quad \lambda^8 = \frac{1}{\sqrt{3}}\begin{pmatrix} 1&0&0\\ 0&1&0\\ 0&0&-2\end{pmatrix}.
Furthermore, the λ are Hermitian and traceless, a property that usually is shared by
generators of a group. One of the invariants of this group is given by the Casimir
operator,
C_F = \sum_a \tau_a^2 = \frac{1}{4}\sum_{a=1}^{8}\lambda_a^2 = \frac{4}{3} . \qquad (B.19)
This finishes the quick summary of the properties of the generators of SU (3) in its
fundamental representation.
The fabc are the structure constants of SU (3); they are completely anti-symmetric
in the three indices and are given by
f_{123} = 1 , \qquad f_{147} = f_{165} = f_{246} = f_{257} = f_{345} = f_{376} = \frac{1}{2} , \qquad f_{458} = f_{678} = \frac{\sqrt{3}}{2} . \qquad (B.20)
The structure constants form the adjoint representation of the group — in this representation the generators $T^a_{ik}$ are matrices of dimension $8\times 8$, given by
T^a_{ik} = i f_{aik} . \qquad (B.21)
Here, the structure constants are the completely anti-symmetric Levi-Civita symbols.
Table B.1 The matter fermions of the Standard Model, with the corre-
sponding charge assignments. The charges fulfil Q = T3 + YW /2.
Q = T_3 + \frac{Y_W}{2} . \qquad (B.26)
Defining the gauge-invariant derivatives through their action on the various fermion
fields as
D_\mu Q^{(I)}_{L,i,\alpha} = \left[\partial_\mu + ig_3\,\frac{\lambda^a_{ij}}{2}\,G^a_\mu\,\delta_{\alpha\beta} + ig_2\,\frac{\sigma^a_{\alpha\beta}}{2}\,W^a_\mu\,\delta_{ij} + ig_1\,\frac{Y_W}{2}\,B_\mu\,\delta_{ij}\delta_{\alpha\beta}\right] Q^{(I)}_{L,j,\beta}
D_\mu u^{(I)}_{R,i} = \left[\partial_\mu + ig_3\,\frac{\lambda^a_{ij}}{2}\,G^a_\mu + ig_1\,\frac{Y_W}{2}\,B_\mu\,\delta_{ij}\right] u^{(I)}_{R,j}
D_\mu d^{(I)}_{R,i} = \left[\partial_\mu + ig_3\,\frac{\lambda^a_{ij}}{2}\,G^a_\mu + ig_1\,\frac{Y_W}{2}\,B_\mu\,\delta_{ij}\right] d^{(I)}_{R,j}
D_\mu L^{(I)}_{L,\alpha} = \left[\partial_\mu + ig_2\,\frac{\sigma^a_{\alpha\beta}}{2}\,W^a_\mu + ig_1\,\frac{Y_W}{2}\,B_\mu\,\delta_{\alpha\beta}\right] L^{(I)}_{L,\beta}
D_\mu \ell^{(I)}_{R} = \left[\partial_\mu + ig_1\,\frac{Y_W}{2}\,B_\mu\right]\ell^{(I)}_{R} , \qquad (B.27)
where the λ and σ are the Gell–Mann and Pauli matrices, respectively, labelled by
a ∈ [1, 8] and a ∈ [1, 3]. The colour and weak isospin indices i and j and α and β
have been made explicit here. The gauge fields are the eight gluons Gaµ , the three weak
isospin bosons Wµa , and the weak hypercharge field Bµ . The gauge-invariant derivatives
enter the Lagrangian of the SM before electroweak symmetry breaking (EWSB) as
where the summation over the colour or weak isospin labels a is understood and where
the non-Abelian generalization of Eq. (B.8) yields for the field strength tensors
introducing self-interactions of the non-Abelian gauge fields Gaµ and Wµa into the gauge
part of the Lagrangian above.
Note that in principle also gauge-fixing terms ${\cal L}_{\rm g.f.}$ would have to be added, which could further necessitate the introduction of Faddeev–Popov ghosts. These unphysical degrees of freedom manifest themselves as Grassmann scalars, scalars with fermionic behaviour, which carry the gauge quantum numbers of the gauge fields.
B_\mu B^\mu \longrightarrow B'_\mu B'^\mu = B_\mu B^\mu + \frac{2}{g}\,B^\mu\partial_\mu\theta + \frac{1}{g^2}\,(\partial_\mu\theta)(\partial^\mu\theta) \neq B_\mu B^\mu . \qquad (B.30)
At the same time the fermions also cannot have a mass term. The reason is that such
a term for Dirac fermions ψ has the form
{\cal L}_{\rm Dirac,mass} = m\,\bar\psi\psi = m\left(\bar\psi_R\psi_L + \bar\psi_L\psi_R\right) . \qquad (B.31)
As long as left- and right-handed fermions transform in the same way, the respective
phase factors would of course compensate; this is the case in QED where the phase
transformation acts on all components of the spinors in the same way. Clearly, on the other hand, the moment left- and right-handed fermions have different
gauge transformations — as is the case in the SM, manifest for example in Eq. (B.27)
— there is no guarantee that this compensation happens. As a consequence, the mass
term above is explicitly gauge-violating, as it triggers an uncompensated phase factor
stemming from the SU (2)L transformation acting on the left-handed spinors only.
At the same time, masses for the weak gauge bosons - the W± and Z0 bosons - and for all fermions are well established. This means that either the underlying construction
paradigm of the SM, gauge invariance, does not hold true or that the SM in the form
presented so far is not complete and needs to be supplemented with a mechanism that
allows the generation of mass in a gauge-invariant way. As it turns out, the latter
option is realized in nature.
where, again, the α and β label the weak isospin components of the doublet Φ, φ+
and φ0 . The relevant quantum numbers of Φ are T3 = ±1/2 for the upper and lower
components and YW = 1/2.
Including a potential for the doublet, the original Lagrangian of Eq. (B.28) is
supplemented with two new parts, namely
and
{\cal L}_{HF} = -\, f_u^{IJ}\,\bar Q^{(I)}_{L}\,\tilde\Phi\, u^{J}_{R} \;-\; f_d^{IJ}\,\bar Q^{(I)}_{L}\,\Phi\, d^{J}_{R} \;-\; f_e^{IJ}\,\bar L^{(I)}_{L}\,\Phi\, l^{J}_{R} \qquad (B.34)
for its Yukawa interactions with the fermions, which actually contains both left- and
right-handed fermions. Here, µ2 and λ are real numbers, while the f IJ are arbitrary
matrices in generation space. In addition,
\tilde\Phi = i\sigma^2\,\Phi^* , \qquad (B.35)
essentially swapping the position of the φ+ and φ0 components in the Higgs doublet.
The new, complete SM Lagrangian is given by
or
\langle\Phi^\dagger\Phi\rangle_0 = \frac{\mu^2}{2\lambda} = \frac{v^2}{2} \qquad (B.38)
for the expectation value of the fields in the ground state. Identifying the ground state
with the physical vacuum gives rise to the notion of the vacuum expectation value
v of the fields. If µ² and λ are both real positive numbers this leads to an infinite number of equivalent vacua, forming a hypersphere with radius $v/\sqrt{2}$ in the space
spanned by the doublet. Note that choosing µ2 > 0 also means that the Φ doublet
does not have a physical mass term in the Lagrangian, the corresponding term ∝ µ2
just has the wrong sign.
As the quantization of the fields proceeds by expanding around a single vacuum,
for instance by using creation and annihilation operators in canonical quantization,
one of these vacua must be picked.1 Without any loss of generality, it has become
customary to define the vacuum state of the Higgs doublet to be
\langle\Phi\rangle_0 = \begin{pmatrix} 0 \\ \frac{v}{\sqrt{2}} \end{pmatrix} , \qquad (B.39)
1 Once such a vacuum has been picked, the system cannot tunnel out of it as there are infinitely
many equivalent states.
Rotating the three W bosons and the corresponding generators by introducing charged
states
W^{1,2}_\mu \;\longrightarrow\; W^\pm_\mu = \frac{1}{\sqrt{2}}\left(W^1_\mu \mp i\,W^2_\mu\right) \qquad (B.41)
and, correspondingly,
\sigma^\pm = \sigma^1 \pm i\,\sigma^2 = \begin{pmatrix} 0&2\\ 0&0\end{pmatrix},\; \begin{pmatrix} 0&0\\ 2&0\end{pmatrix} , \qquad (B.42)
with χ = (0, 1)T and where the spatial dependence of the four real fields η(x) and
ξi (x) has been made explicit. These four fields of course have no vacuum expectation
value,
\langle\eta\rangle_0 = \langle\xi_i\rangle_0 = 0 . \qquad (B.44)
A convenient way to see how the breaking of electroweak symmetry (EWSB) proceeds is to choose the unitary gauge, fixing the Higgs doublet to have the form
\Phi_0 = \Phi_{\rm unitary} = U(\xi)\,\Phi = \frac{v+\eta(x)}{\sqrt{2}}\,\chi = \begin{pmatrix} 0 \\ \frac{v+\eta(x)}{\sqrt{2}} \end{pmatrix} . \qquad (B.45)
This essentially means that the Higgs doublet has only one visible field remaining, η, while the three phase fields ξi have been rotated away and must be re-introduced in the various parts of the Lagrangian — which is achieved by the set of transformations in Eq. (B.46),
\left[\sum_{i=\pm,3} W^i_\mu\tau^i\right]' = U(\xi)\left[\sum_{i=\pm,3} W^i_\mu\tau^i\right]U^{-1}(\xi) + \frac{i}{g_2}\left[\partial_\mu U(\xi)\right]U^{-1}(\xi)
B'_\mu = B_\mu \qquad (B.46)
\Psi'_L = U(\xi)\,\Psi_L
\Psi'_R = \Psi_R .
Taking a closer look at the gauge transformation for the W fields in the first line, it becomes apparent that the three ξ fields that appear to have vanished as dynamical degrees of freedom from the Higgs sector have resurfaced as parts of the W bosons through the $\partial_\mu U(\xi)$ term. This term in the end results in fields $\partial_\mu\xi$, which together with the derivatives in the kinetic term of the W gauge bosons form a kinetic term for these new fields. It turns out that they indeed are the massless Goldstone
{\cal L}_{H,\rm pot} = -\frac{2\lambda v^2}{2}\,\eta^2 - \lambda v\,\eta^3 - \frac{\lambda}{4}\,\eta^4 + {\rm const.} \qquad (B.48)
with the Higgs field η as the only dynamic member. Also, this field has acquired a mass term with the right sign, leading to a physical mass of
m_H = v\sqrt{2\lambda} . \qquad (B.49)
From the transformations in Eq. (B.46) it is straightforward to see how this works
with the kinetic term of the Higgs doublet as well. From
it is simple to see that again the phases will just cancel out. The (Dµ Φ)† (Dµ Φ) term
expressed through the transformed fields looks like
and depends only on the vacuum expectation value v, and the dynamical degrees of
freedom given by the Higgs field η and the gauge fields Wµi and Bµ , where, for conve-
nience, the primes are omitted. They are given by
{\cal L}_{\eta,\rm kin} = \frac{1}{2}\,(\partial_\mu\eta)(\partial^\mu\eta)
{\cal L}_{M} = \frac{v^2 g_2^2}{4}\, W^-_\mu W^{+,\mu} + \frac{v^2}{8}\begin{pmatrix} B_\mu \\ W^3_\mu\end{pmatrix}^{T}\begin{pmatrix} g_1^2 & -g_1 g_2 \\ -g_1 g_2 & g_2^2\end{pmatrix}\begin{pmatrix} B^\mu \\ W^{3,\mu}\end{pmatrix} = m_W^2\, W^-_\mu W^{+,\mu} + \frac{1}{2}\, m_Z^2\, Z_\mu Z^\mu \qquad (B.52)
{\cal L}_{I} = \frac{m_W^2}{v^2}\left(\eta^2 + 2v\eta\right) W^-_\mu W^{+,\mu} + \frac{m_Z^2}{2v^2}\left(\eta^2 + 2v\eta\right) Z_\mu Z^\mu ,
with the new fields Aµ and Zµ — the photon and the Z boson — emerging from the
diagonalization of the mass matrix in LM as
The photon is massless, mA = 0, and the masses of the charged W ± and the neutral
Z 0 boson are given by
m_W = \frac{v\,g_2}{2} \quad\text{and}\quad m_Z = \frac{v}{2}\sqrt{g_1^2+g_2^2} , \qquad (B.54)
fixing also the weak mixing angle or Weinberg angle as
\tan\theta_W = \frac{g_1}{g_2} \quad\text{or}\quad \cos\theta_W = \frac{m_W}{m_Z} = \frac{g_2}{\sqrt{g_1^2+g_2^2}} . \qquad (B.55)
It is a tedious but straightforward exercise to show that the kinetic term of the gauge
fields as well as their interaction with the fermions is invariant under the gauge trans-
formation of Eq. (B.46).
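A small numerical illustration of Eqs. (B.54) and (B.55) in Python (the inputs are rough, tree-level values quoted only for orientation):

import numpy as np

v  = 246.0   # vacuum expectation value in GeV (rough tree-level input)
g2 = 0.65    # SU(2)_L coupling (illustrative)
g1 = 0.35    # U(1)_Y coupling (illustrative)

mW = v * g2 / 2.0                       # Eq. (B.54)
mZ = v / 2.0 * np.sqrt(g1**2 + g2**2)   # Eq. (B.54)
cos_thetaW = mW / mZ                    # Eq. (B.55)

print(mW, mZ, cos_thetaW)   # ~ 80 GeV, ~ 91 GeV, ~ 0.88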
M_{\rm diag} = S^\dagger M\, T , \qquad (B.58)
where S and T are unitary and Mdiag is diagonal with non-zero eigenvalues. In addition,
every matrix M can be written as a product of a Hermitian and a unitary matrix, H
and U ,
M = HU, (B.59)
and in general M † M by construction is Hermitian and positive. As a consequence,
S^\dagger (M^\dagger M) S = (M^2)_{\rm diag} \;\longrightarrow\; \begin{pmatrix} m_1^2 & 0 & 0\\ 0 & m_2^2 & 0\\ 0 & 0 & m_3^2\end{pmatrix} . \qquad (B.60)
S^\dagger F^\dagger (M^\dagger M) F\, S = (M^2)_{\rm diag} , \qquad (B.61)
where
F = \begin{pmatrix} e^{i\phi_1} & 0 & 0\\ 0 & e^{i\phi_2} & 0\\ 0 & 0 & e^{i\phi_3}\end{pmatrix} . \qquad (B.62)
These phases will come back in the context of CP violation but here this freedom in-
deed guarantees that m2i ≥ 0. The Hermitian part H of the decomposition in Eq. (B.59)
can be identified with
H = SMdiag S † , (B.63)
which fixes U as
U = H −1 M and U † = M † H −1 . (B.64)
The hermiticity of H and the unitarity of U are simple to confirm. Using Eq. (B.63)
this means that
M_{\rm diag} = S^\dagger H S = S^\dagger M U^\dagger S = S^\dagger M\, T \qquad (B.65)
which also defines the matrix T in Eq. (B.58), T = U † S. With this reasoning, the
mass terms are diagonalized through
In other words, the left-handed fields will transform with S, while the right-handed
fermions will transform with T . Looking at the interactions with the gauge bosons
effectively leads to three different structures. First of all, there is the interaction of the
right-handed fermions, which will assume the form
\bar\psi^{I}_{R}\gamma^\mu\psi^{I}_{R} = \bar\psi^{\prime K}_{R}\gamma^\mu\, T^\dagger_{KI} T_{IL}\,\psi^{\prime L}_{R} = \bar\psi^{\prime K}_{R}\gamma^\mu\,\delta_{KL}\,\psi^{\prime L}_{R} = \bar\psi^{\prime K}_{R}\gamma^\mu\psi^{\prime K}_{R} . \qquad (B.67)
This means that the right-handed quarks can be rotated to their mass eigenstate
without any obvious consequence for their dynamics. The same reasoning also applies
for neutral interactions of the left-handed fermions, for their interactions with the
gluons and the B and W 3 or, equivalently, the photon and the Z boson. The only
difference is to replace the unitary matrix T with the unitary matrix S. However, for
their charged interactions with the W ± this is not true anymore. There, the fermion
current becomes
\bar u^{I}_{L}\gamma^\mu d^{I}_{L} = \bar u^{\prime K}_{L}\gamma^\mu\, S^\dagger_{u,KI}\, S_{d,IL}\, d^{\prime L}_{L} = \bar u^{\prime K}_{L}\gamma^\mu\, V^{\rm (CKM)}_{KL}\, d^{\prime L}_{L} , \qquad (B.68)
where the CKM matrix
has been introduced. This matrix mixes the quark generations in interactions, where
a W boson couples to an up-type and a down-type quark, and ultimately allows the
quarks of the second and third generation to decay weakly into the first generation.
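The appearance of the CKM matrix as a product of the two unitary rotations can be illustrated with a short numerical sketch in Python; random complex matrices stand in for the Yukawa couplings, and numpy's singular value decomposition plays the role of the bi-unitary transformation of Eq. (B.58):

import numpy as np

rng = np.random.default_rng(1)

def biunitary(M):
    # M_diag = S^dagger M T with S, T unitary, cf. Eq. (B.58);
    # numpy's SVD returns M = U diag(s) Vh, so S = U and T = Vh^dagger.
    U, s, Vh = np.linalg.svd(M)
    return U, np.conj(Vh).T, s

Mu = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))   # random "up-type" mass matrix
Md = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))   # random "down-type" mass matrix

Su, Tu, mu = biunitary(Mu)
Sd, Td, md = biunitary(Md)

V = np.conj(Su).T @ Sd   # the analogue of V_CKM = S_u^dagger S_d, cf. Eq. (B.68)
print(np.allclose(np.conj(Su).T @ Mu @ Tu, np.diag(mu)))   # True: diagonal with real, non-negative entries
print(np.allclose(np.conj(V).T @ V, np.eye(3)))            # True: the mixing matrix is unitary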
As a product of two unitary matrices, the CKM matrix itself is unitary, in principle with n² = 9 free parameters. Using the arbitrary phases in the matrices S, it becomes
clear that (2n − 1) phases can be removed by redefining the quark states. In total
therefore the CKM matrix has n² − (2n − 1) = (n − 1)² = 4 free parameters, 3 free
angles and 1 phase. This gives rise to the Wolfenstein parameterization, where
the Cabibbo angle λ ≈ 0.22 is the small evolution parameter, and, up to third order
in λ,
V^{\rm (CKM)} = \begin{pmatrix} V_{ud} & V_{us} & V_{ub}\\ V_{cd} & V_{cs} & V_{cb}\\ V_{td} & V_{ts} & V_{tb}\end{pmatrix} = \begin{pmatrix} 1-\frac{\lambda^2}{2} & \lambda & A\lambda^3(\rho - i\eta)\\ -\lambda & 1-\frac{\lambda^2}{2} & A\lambda^2\\ A\lambda^3(1-\rho-i\eta) & -A\lambda^2 & 1 \end{pmatrix} . \qquad (B.70)
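A quick numerical look at Eq. (B.70) in Python (the parameter values are illustrative numbers of the size of the measured ones): the truncated expansion is unitary only up to terms of order λ⁴, as expected.

import numpy as np

lam, A, rho, eta = 0.22, 0.81, 0.14, 0.35   # illustrative Wolfenstein parameters

V = np.array([
    [1 - lam**2 / 2,                    lam,             A * lam**3 * (rho - 1j * eta)],
    [-lam,                              1 - lam**2 / 2,  A * lam**2                   ],
    [A * lam**3 * (1 - rho - 1j * eta), -A * lam**2,     1                            ],
])

# Deviation from unitarity: of order lam^4, as expected for the truncated expansion
print(np.max(np.abs(np.conj(V).T @ V - np.eye(3))))   # ~ 2e-3, i.e. of order lam^4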
In the free (non-interacting) theory this action leads to two-point functions whose in-
verses represent particle propagators. The interaction terms in the Lagrangian combine
particle fields of different types that are represented by vertices.
B.2.1 Propagators
i\,(\slashed{p} - m)^{-1} = \frac{i\,(\slashed{p}+m)}{p^2 - m^2} , \qquad (B.73)
where the inverse is easily obtained.
The bosonic propagators are slightly more complicated to obtain except for the
Higgs boson, for which the scalar propagator is trivial. Consider first the W and
Z propagators that correspond to the gauge terms in the Lagrangian (Lgauge ) of
Eq. (B.28), together with the mass terms generated by the BEH mechanism, LM
of Eq. (B.52). These generate two-point functions, for instance between Z fields, of
the form,
Z_\mu\left[(p^2 - m_Z^2)\,g^{\mu\nu} - p^\mu p^\nu\right] Z_\nu . \qquad (B.74)
The inverse of the factor in square brackets yields a propagator of the form
\frac{1}{p^2 - m_Z^2}\left(g^{\mu\nu} - \frac{p^\mu p^\nu}{m_Z^2}\right) . \qquad (B.75)
In the limit mZ → 0, corresponding to the case of the photon and gluon, the tensor
in Eq. (B.74) is not invertible. This necessitates the introduction of additional gauge-
fixing terms to the Lagrangian. For instance, for the photon one can add the term
{\cal L}_{\rm g.f.} = -\frac{1}{2\xi}\left(\partial^\mu A_\mu\right)^2 , \qquad (B.76)
where ξ is an arbitrary parameter. This leads to the Feynman rules shown. Note that
the choice ξ = 1, called the Feynman gauge, is often the simplest choice since it leads
to fewer terms at intermediate stages. Although this is the end of the story for the
photon, in a non-Abelian theory the gauge-fixing term introduces unphysical degrees
of freedom that must be cancelled by ghost contributions. These contributions are not
discussed further here.
The Feynman rules for all of the propagators of the SM are shown in Fig. B.1.
The interactions of the gauge bosons with the fermions of the SM originate from
Eqs. (B.28) and (B.56). The interactions that do not involve the Higgs boson corre-
spond to the covariant derivatives appearing in Eq. (B.27), after accounting for the
effects of the BEH mechanism and the modifications due to the CKM matrix indi-
cated in Eq. (B.68). The Yukawa interactions of the fermions with the Higgs boson
are manifest already in Eq. (B.56) and can be simplified slightly by identifying the
value of the vacuum expectation value through v = 2mW /gW .
[Figs. B.1 and B.2: the propagators of the SM fields (leptons, quarks, photon, gluon, W/Z and Higgs boson) and the interaction vertices of the photon, gluon, Higgs boson, Z and W bosons with the fermions.]
In order to see an example of this in practice, consider the expression for the
covariant derivative acting on the left-handed lepton doublet, $D_\mu L^{(I)}_{L,\alpha}$ of Eq. (B.27).
Rewriting the Pauli matrices $\sigma^a$ in terms of the generators $\tau^a$ and the $W^1$, $W^2$ fields in terms of $W^\pm$, cf. Eq. (B.41), one obtains
D_\mu L^{(I)}_{L,\alpha} = \left[\partial_\mu + ig_2\left(\frac{1}{\sqrt{2}}\,\tau^+_{\alpha\beta} W^+_\mu + \frac{1}{\sqrt{2}}\,\tau^-_{\alpha\beta} W^-_\mu + \tau^3_{\alpha\beta} W^3_\mu\right) + ig_1\frac{Y_W}{2}\,B_\mu\,\delta_{\alpha\beta}\right] L^{(I)}_{L,\beta} , \qquad (B.77)
where τ^± are defined analogously to σ^±, cf. Eq. (B.42). Expressing the fields W^3_µ and
B_µ in terms of the photon and Z-boson fields, cf. Eq. (B.53), and using the weak-hypercharge
relation in Eq. (B.26) reduces all the fields to physical ones,
\[
\begin{aligned}
D^{(I)}_\mu L_{L,\alpha} = \Big\{&\partial_\mu\,\delta_{\alpha\beta}
 + \frac{i g_2}{\sqrt{2}}\left[\tau^+_{\alpha\beta} W^+_\mu + \tau^-_{\alpha\beta} W^-_\mu\right]
 + i\left[g_2 T_3\sin\theta_W + g_1 (Q-T_3)\cos\theta_W\right] A_\mu\,\delta_{\alpha\beta}\\
&+ i\left[g_2 T_3\cos\theta_W - g_1 (Q-T_3)\sin\theta_W\right] Z_\mu\,\delta_{\alpha\beta}\Big\}\, L_{L,\beta}\,.
\end{aligned}\tag{B.78}
\]
The final simplification is obtained by relating the couplings g1 and g2 through the
weak mixing angle, cf. Eq. (B.55), and identifying the electromagnetic and weak cou-
plings, g2 → gW and e = gW sin θW ,
\[
\begin{aligned}
D^{(I)}_\mu L_{L,\alpha} = \Big\{&\partial_\mu\,\delta_{\alpha\beta}
 + \frac{i g_W}{\sqrt{2}}\left[\tau^+_{\alpha\beta} W^+_\mu + \tau^-_{\alpha\beta} W^-_\mu\right]
 + i e Q\, A_\mu\,\delta_{\alpha\beta}\\
&+ \frac{i g_W}{\cos\theta_W}\left[T_3 - Q\sin^2\theta_W\right] Z_\mu\,\delta_{\alpha\beta}\Big\}\, L_{L,\beta}\,.
\end{aligned}\tag{B.79}
\]
From this expression it is straightforward to read off the Feynman rules that are shown
in Fig. B.2. Similar manipulations of the other covariant derivatives can be performed
to reproduce the remaining Feynman rules shown.
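The algebra leading from Eq. (B.78) to Eq. (B.79) can be cross-checked with a few lines of computer algebra. The sketch below is a sympy check written for this purpose; it assumes e = g_W sin θ_W = g_1 cos θ_W as in Eq. (B.55) and confirms that the photon and Z couplings collapse to eQ and (g_W/cos θ_W)(T_3 − Q sin²θ_W), respectively.

```python
import sympy as sp

gW, thW, Q, T3 = sp.symbols('g_W theta_W Q T_3', real=True)
e  = gW * sp.sin(thW)          # e = g_W sin(theta_W)
g1 = e / sp.cos(thW)           # from e = g_1 cos(theta_W)
g2 = gW

# Photon coupling of Eq. (B.78) must collapse to e*Q
A_coupling = g2*T3*sp.sin(thW) + g1*(Q - T3)*sp.cos(thW)
print(sp.simplify(A_coupling - e*Q))                                      # -> 0

# Z coupling of Eq. (B.78) must collapse to g_W/cos(theta_W) (T_3 - Q sin^2 theta_W)
Z_coupling = g2*T3*sp.cos(thW) - g1*(Q - T3)*sp.sin(thW)
print(sp.simplify(Z_coupling - gW/sp.cos(thW)*(T3 - Q*sp.sin(thW)**2)))   # -> 0
```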
The self-interactions of the gauge bosons of the SM are generated by the non-Abelian
contributions to the term Lgauge of Eq. (B.28). The corresponding Feynman rules can
be derived in a straightforward manner by substitution of the corresponding expres-
sions for the field-strength tensors, cf. Eq. (B.29), accounting for identical-particle
factors of 1/n! where appropriate.
In QCD these terms lead to the three- and four-point vertices shown in Fig. B.3.
Note that the sign of the three-point vertex is sensitive to the direction of flow of the
momentum; the rule in the figure corresponds to all momenta outgoing (signified by
the outward-pointing arrows).
Turning to the electroweak sector, the interactions are again obtained by rewriting
the field-strength tensor in terms of the basis of physical fields, Wµ± , Zµ and Aµ . Thus,
\[
\begin{aligned}
\mathcal{L}_{\rm gauge,EW} &= \frac14\, W^a_{\mu\nu}W^{a,\mu\nu}\\
&= \bigg[\frac14\, W^+_\mu W^+_\nu W^-_\rho W^-_\sigma\, g_W^2
 - \frac12\, W^-_\mu W^+_\nu\Big(Z_\rho Z_\sigma\, g_W^2\cos^2\theta_W + A_\rho A_\sigma\, e^2 + 2 A_\rho Z_\sigma\, e\, g_W\cos\theta_W\Big)\bigg]\\
&\qquad\times\Big[2\, g_{\mu\nu}g_{\rho\sigma} - g_{\mu\rho}g_{\nu\sigma} - g_{\mu\sigma}g_{\nu\rho}\Big]\\
&\quad + i\, W^-_\mu W^+_\nu\big(A_\rho\, e + Z_\rho\, g_W\cos\theta_W\big)
 \Big[(p^{W^-}_\rho - p^{W^+}_\rho)\, g_{\mu\nu} + (p^{W^+}_\mu - p^V_\mu)\, g_{\nu\rho} + (p^V_\nu - p^{W^-}_\nu)\, g_{\mu\rho}\Big]\\
&\quad + \mbox{kinetic terms}\,.
\end{aligned}\tag{B.80}
\]
This leads to the Feynman rules for self-interactions of electroweak bosons that are
shown in Fig. B.4. The universal Lorentz structure of the interactions can be seen
immediately from Eq. (B.80), so that the figure summarizes several interactions at
once using the definitions,
\[
c_\gamma = e\,,\qquad c_Z = g_W\cos\theta_W\,.\tag{B.81}
\]
The cubic and quartic interactions that involve the Higgs and other electroweak bosons
can be read off from the contribution LI shown in Eq. (B.52). The self-interactions
of the Higgs boson are a result of the potential term introduced in Eq. (B.48). The
strength of the interactions can be simplified using the implicit expressions for v and
λ in terms of the electroweak coupling and boson masses given in Eqs. (B.49) and
(B.54).
[Fig. B.4 and the accompanying figure: Feynman rules for the triple and quartic self-interactions of the electroweak gauge bosons, written in terms of the coupling factors c_V of Eq. (B.81), and for the couplings of the Higgs boson to gauge-boson pairs and to itself.]
and
\[
\begin{aligned}
\tilde p_{ij} &= p_i + p_j - \frac{y_{ij,k}}{1-y_{ij,k}}\,p_k\\
\tilde p_k &= \frac{1}{1-y_{ij,k}}\,p_k\,,
\end{aligned}\tag{C.2}
\]
where the recoil parameter y_{ij,k} and the splitting variable z̃_i are given by
\[
\begin{aligned}
y_{ij,k} &= \frac{p_ip_j}{p_ip_j + p_jp_k + p_kp_i}\\
\tilde z_i &= \frac{p_ip_k}{(p_i+p_j)\,p_k} = \frac{p_i\tilde p_k}{\tilde p_{ij}\tilde p_k}
\qquad\mbox{and}\qquad \tilde z_j = 1 - \tilde z_i\,.
\end{aligned}\tag{C.3}
\]
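A minimal numerical sketch of this map, assuming massless partons and the metric diag(1, −1, −1, −1); the helper names (dot, massless, ff_map) are ours. It verifies that the mapped momenta of Eq. (C.2) are on-shell, conserve the total momentum, and that the two expressions for z̃_i in Eq. (C.3) agree.

```python
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])
dot = lambda a, b: a @ g @ b
massless = lambda px, py, pz: np.array([np.sqrt(px*px + py*py + pz*pz), px, py, pz])

def ff_map(pi, pj, pk):
    """Final-final dipole map of Eqs. (C.2)-(C.3) for massless momenta."""
    y  = dot(pi, pj) / (dot(pi, pj) + dot(pj, pk) + dot(pk, pi))
    zi = dot(pi, pk) / (dot(pi, pk) + dot(pj, pk))
    return y, zi, pi + pj - y/(1.0 - y)*pk, pk/(1.0 - y)

pi, pj, pk = massless(3, 4, 0), massless(-1, 2, 2), massless(0, 0, 6)
y, zi, pt_ij, pt_k = ff_map(pi, pj, pk)

print(np.isclose(dot(pt_ij, pt_ij), 0, atol=1e-9), np.isclose(dot(pt_k, pt_k), 0, atol=1e-9))
print(np.allclose(pt_ij + pt_k, pi + pj + pk))              # momentum conservation
print(np.isclose(zi, dot(pi, pt_k) / dot(pt_ij, pt_k)))     # second form of z_i in Eq. (C.3)
```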
With their dependence on these two parameters understood, the splitting kernels
read (CS-5.7–CS-5.9)
\[
\begin{aligned}
\langle s|V_{q_ig_j;k}|s'\rangle &= 8\pi\mu^{2\varepsilon}\alpha_s\,C_F
\left[\frac{2}{1-\tilde z_i(1-y_{ij,k})} - (1+\tilde z_i) - \varepsilon(1-\tilde z_i)\right]\delta_{ss'}\\
\langle\mu|V_{q_i\bar q_j;k}|\nu\rangle &= 8\pi\mu^{2\varepsilon}\alpha_s\,T_R
\left[-g^{\mu\nu} - \frac{2}{p_ip_j}\,(\tilde z_ip_i-\tilde z_jp_j)^\mu(\tilde z_ip_i-\tilde z_jp_j)^\nu\right]\\
\langle\mu|V_{g_ig_j;k}|\nu\rangle &= 16\pi\mu^{2\varepsilon}\alpha_s\,C_A
\left[-g^{\mu\nu}\left(\frac{1}{1-\tilde z_i(1-y_{ij,k})} + \frac{1}{1-\tilde z_j(1-y_{ij,k})} - 2\right)\right.\\
&\qquad\qquad\qquad\left. + \frac{1-\varepsilon}{p_ip_j}\,(\tilde z_ip_i-\tilde z_jp_j)^\mu(\tilde z_ip_i-\tilde z_jp_j)^\nu\right].
\end{aligned}\tag{C.4}
\]
Note that here the flavours of the particles resulting from the splitting are indicated
as subscripts in the kernels V. The flavour of the spectator does not matter, while the
spin states of the splitter do; they are therefore given as arguments in the ⟨· | · | ·⟩ brackets.
The spin-averaged kernels are denoted by ⟨V⟩ and in this case they are given by
\[
\begin{aligned}
\frac{\langle V_{q_ig_j;k}\rangle}{8\pi\alpha_s\mu^{2\varepsilon}} &=
C_F\left[\frac{2}{1-\tilde z_i(1-y_{ij,k})} - (1+\tilde z_i) - \varepsilon(1-\tilde z_i)\right]\\
\frac{\langle V_{q_i\bar q_j;k}\rangle}{8\pi\alpha_s\mu^{2\varepsilon}} &=
T_R\left[1 - \frac{2\tilde z_i(1-\tilde z_i)}{1-\varepsilon}\right]\\
\frac{\langle V_{g_ig_j;k}\rangle}{8\pi\alpha_s\mu^{2\varepsilon}} &=
2C_A\left[\frac{1}{1-\tilde z_i(1-y_{ij,k})} + \frac{1}{1-(1-\tilde z_i)(1-y_{ij,k})} - 2 + \tilde z_i(1-\tilde z_i)\right].
\end{aligned}\tag{C.5}
\]
These are the terms that form the basis of splitting kernels in the construction of
parton showers, see Section 5.3.2.
Integrating the spin-averaged kernels over the phase-space variables y = y_{ij,k} and z = z̃_i
yields
\[
\mathcal{V}_{ij}(\varepsilon) = \int\limits_0^1 dz\,[z(1-z)]^{-\varepsilon}
\int\limits_0^1 dy\,(1-y)^{1-2\varepsilon}\,y^{-1-\varepsilon}\,
\frac{\langle V_{ij,k}(z,y)\rangle}{8\pi\alpha_s\mu^{2\varepsilon}}\,,\tag{C.6}
\]
where the extra factors in z and y stem from the phase-space integral over momenta
rewritten in these variables, combined with the 1/(2p_ip_j) dipole propagator, cf. (CS-5.16–CS-5.21).
The resulting integrated dipoles are given by
\[
\begin{aligned}
\mathcal{V}_{qg}(\varepsilon) &= C_F\left[\frac{1}{\varepsilon^2} + \frac{3}{2\varepsilon} + 5 - \frac{\pi^2}{2}\right] + O(\varepsilon)\\
\mathcal{V}_{q\bar q}(\varepsilon) &= T_R\left[-\frac{2}{3\varepsilon} - \frac{16}{9}\right] + O(\varepsilon)\\
\mathcal{V}_{gg}(\varepsilon) &= 2C_A\left[\frac{1}{\varepsilon^2} + \frac{11}{6\varepsilon} + \frac{50}{9} - \frac{\pi^2}{2}\right] + O(\varepsilon)\,.
\end{aligned}\tag{C.7}
\]
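The q q̄ entry of Eq. (C.7) can be reproduced symbolically from Eq. (C.6) by writing both integrals as Beta functions and expanding in ε. The sympy sketch below is ours and assumes the phase-space weights exactly as given in Eq. (C.6).

```python
import sympy as sp

eps = sp.symbols('epsilon')
TR  = sp.Symbol('T_R')
B   = lambda a, b: sp.gamma(a)*sp.gamma(b)/sp.gamma(a + b)   # Euler Beta function

# z-integral of Eq. (C.6) with the q qbar kernel of Eq. (C.5):
#   int_0^1 dz [z(1-z)]^{-eps} T_R [1 - 2 z(1-z)/(1-eps)]
z_int = TR * (B(1 - eps, 1 - eps) - 2/(1 - eps)*B(2 - eps, 2 - eps))

# y-integral of Eq. (C.6):  int_0^1 dy (1-y)^{1-2 eps} y^{-1-eps}
y_int = B(-eps, 2 - 2*eps)

# Should reproduce V_{q qbar}(eps) of Eq. (C.7): -2 T_R/(3 eps) - 16 T_R/9 + O(eps)
print(sp.series(z_int * y_int, eps, 0, 1))
```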
where the γ_i are the usual first-order anomalous dimensions of the splitting functions
of Eq. (2.33), and the K_i read
\[
\begin{aligned}
K_q &= C_F\left(\frac{7}{2} - \frac{\pi^2}{6}\right)\\
K_g &= C_A\left(\frac{67}{18} - \frac{\pi^2}{6}\right) - \frac{10}{9}\,T_R\,n_f\,.
\end{aligned}\tag{C.9}
\]
and
\[
\begin{aligned}
\tilde p_{ij} &= p_i + p_j - (1-x_{ij,a})\,p_a\\
\tilde p_a &= x_{ij,a}\,p_a\,,
\end{aligned}\tag{C.11}
\]
where the recoil parameter x_{ij,a} and the splitting variable z̃_i are given by
\[
\begin{aligned}
x_{ij,a} &= \frac{p_ip_a + p_jp_a - p_ip_j}{(p_i+p_j)\,p_a}\\
\tilde z_i &= \frac{p_ip_a}{(p_i+p_j)\,p_a} = \frac{p_i\tilde p_a}{\tilde p_{ij}\tilde p_a}
\qquad\mbox{and}\qquad \tilde z_j = 1-\tilde z_i\,.
\end{aligned}\tag{C.12}
\]
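The analogous numerical check for this map (again a sketch of ours, assuming massless momenta and the metric diag(1, −1, −1, −1)) confirms that p̃_ij in Eq. (C.11) is on-shell and that the crossed momentum balance p̃_a − p̃_ij = p_a − p_i − p_j is preserved.

```python
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])
dot = lambda a, b: a @ g @ b
massless = lambda px, py, pz: np.array([np.sqrt(px*px + py*py + pz*pz), px, py, pz])

pa = massless(0.0, 0.0, 100.0)                                   # initial-state spectator
pi, pj = massless(20.0, 5.0, 35.0), massless(-8.0, 12.0, 25.0)   # final-state emitter pair

x = (dot(pi, pa) + dot(pj, pa) - dot(pi, pj)) / dot(pi + pj, pa)   # Eq. (C.12)
pt_ij = pi + pj - (1.0 - x)*pa                                     # Eq. (C.11)
pt_a  = x*pa

print(np.isclose(dot(pt_ij, pt_ij), 0, atol=1e-7))   # mapped emitter is massless
print(np.allclose(pt_a - pt_ij, pa - pi - pj))       # momentum balance unchanged
```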
With their dependence on these two parameters again understood, the splitting
kernels read (CS-5.39–CS-5.41)
\[
\begin{aligned}
\langle s|V^{a}_{q_ig_j}|s'\rangle &= 8\pi\mu^{2\varepsilon}\alpha_s\,C_F
\left[\frac{2}{1-\tilde z_i+(1-x_{ij,a})} - (1+\tilde z_i) - \varepsilon(1-\tilde z_i)\right]\delta_{ss'}\\
\langle\mu|V^{a}_{q_i\bar q_j}|\nu\rangle &= 8\pi\mu^{2\varepsilon}\alpha_s\,T_R
\left[-g^{\mu\nu} - \frac{2}{p_ip_j}\,(\tilde z_ip_i-\tilde z_jp_j)^\mu(\tilde z_ip_i-\tilde z_jp_j)^\nu\right]\\
\langle\mu|V^{a}_{g_ig_j}|\nu\rangle &= 16\pi\mu^{2\varepsilon}\alpha_s\,C_A
\left[-g^{\mu\nu}\left(\frac{1}{1-\tilde z_i+(1-x_{ij,a})} + \frac{1}{1-\tilde z_j+(1-x_{ij,a})} - 2\right)\right.\\
&\qquad\qquad\qquad\left. + \frac{1-\varepsilon}{p_ip_j}\,(\tilde z_ip_i-\tilde z_jp_j)^\mu(\tilde z_ip_i-\tilde z_jp_j)^\nu\right].
\end{aligned}\tag{C.13}
\]
Apart from the replacement of the denominators, 1 − z̃_{i,j}(1 − y_{ij,k}) → 1 − z̃_{i,j} + (1 − x_{ij,a}),
these kernels are identical to the ones in Eq. (C.4), and also the splitting parameter z̃_i can
be related to the one in the case of a final-state splitter with a final-state spectator, with
the obvious replacement of p_k → p_a. It is therefore not a surprise that the spin-averaged
splitting kernels ⟨V⟩ for the FI case here can be obtained from their FF counterparts in
Eq. (C.5) by the same replacement.
The integration of these terms over the emission phase-space, with the replacements
x = x_{ij,a} and z = z̃_i, yields, cf. (CS-5.50),
\[
\mathcal{V}_{ij}(x,\varepsilon) = \left(\frac{1}{1-x_{ij,a}}\right)^{\!1+\varepsilon}
\Theta(x_{ij,a})\,\Theta(1-x_{ij,a})
\int\limits_0^1 dz\,[z(1-z)]^{-\varepsilon}\,
\frac{\langle V^a_{ij}(z,x)\rangle}{8\pi\alpha_s\mu^{2\varepsilon}}\,,\tag{C.14}
\]
and here the first big difference with respect to the case of both splitter and spectator
in the final state becomes apparent. As the recoil parameter x_{ij,a} is not integrated over,
the prefactor ∝ 1/(1 − x_{ij,a})^{1+ε} in front of the integral diverges as x_{ij,a} → 1, a
singularity which cannot be lifted without taking care of how the two limits, ε → 0 and
x_{ij,a} → 1, are approached together. The usual solution for this kind of problem is to
invoke a "+" function such that
\[
\mathcal{V}_{ij}(x_{ij,a},\varepsilon) = \left[\mathcal{V}_{ij}(x_{ij,a},\varepsilon)\right]_+
 + \delta(1-x_{ij,a})\int\limits_0^1 d\tilde x\;\mathcal{V}_{ij}(\tilde x,\varepsilon)\,,\tag{C.15}
\]
where
\[
\left[\mathcal{V}_{ij}(x_{ij,a},\varepsilon)\right]_+ = \left[\mathcal{V}_{ij}(x_{ij,a},0)\right]_+ + O(\varepsilon)\,.\tag{C.16}
\]
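To make the "+" prescription concrete: a distribution [f(x)]_+ integrates a smooth test function g via ∫dx f(x)[g(x) − g(1)]. The tiny sketch below (the test function g(x) = x² is our own choice) illustrates this for f(x) = 1/(1−x).

```python
from scipy.integrate import quad

# int_0^1 dx [1/(1-x)]_+ g(x) = int_0^1 dx (g(x) - g(1))/(1 - x)
g = lambda x: x**2                                            # smooth test function
val, _ = quad(lambda x: (g(x) - g(1.0)) / (1.0 - x), 0.0, 1.0)
print(val)     # -1.5, i.e. the integral of -(1+x) over [0,1]
```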
Therefore, cf. (CS-5.57–CS-5.59),
\[
\begin{aligned}
\mathcal{V}_{qg}(x,\varepsilon) &= C_F\left\{\left[\frac{2}{1-x}\log\frac{1}{1-x}\right]_+
 - \frac{3}{2}\left[\frac{1}{1-x}\right]_+ + \frac{2}{1-x}\log(2-x)\right\}
 + \delta(1-x)\left[\mathcal{V}_{qg}(\varepsilon) - \frac{3C_F}{2}\right] + O(\varepsilon)\\
\mathcal{V}_{q\bar q}(x,\varepsilon) &= \frac{2}{3}\,T_R\left[\frac{1}{1-x}\right]_+
 + \delta(1-x)\left[\mathcal{V}_{q\bar q}(\varepsilon) + \frac{2T_R}{3}\right] + O(\varepsilon)\\
\mathcal{V}_{gg}(x,\varepsilon) &= 2C_A\left\{\left[\frac{2}{1-x}\log\frac{1}{1-x}\right]_+
 - \frac{11}{6}\left[\frac{1}{1-x}\right]_+ + \frac{2}{1-x}\log(2-x)\right\}
 + \delta(1-x)\left[\mathcal{V}_{gg}(\varepsilon) - \frac{11C_A}{3}\right] + O(\varepsilon)\,,
\end{aligned}\tag{C.17}
\]
and
\[
\begin{aligned}
\tilde p_{ai} &= x_{ij,a}\,p_a\\
\tilde p_k &= p_k + p_i - (1-x_{ij,a})\,p_a\,.
\end{aligned}\tag{C.19}
\]
It is worth stressing that in this configuration the initial-state splitter keeps its
direction along the beam axis, with its momentum reduced by the factor x_{ij,a}, which
now plays the role of a splitting parameter. At the same time, the transverse-momentum
recoil is transferred to the spectator, which consequently changes its direction.
There is, however, another parameter, u_i, which is used to decompose the eikonal
factors into two splitter–spectator dipoles:
\[
u_i = \frac{p_ip_a}{(p_i+p_k)\,p_a}\,.\tag{C.21}
\]
With their dependence on these two parameters again understood, the splitting
kernels read (CS-5.65–CS-5.68)
\[
\begin{aligned}
\langle s|V^{q_ag_i}_k|s'\rangle &= 8\pi\mu^{2\varepsilon}\alpha_s\,C_F
\left[\frac{2}{1-x_{ik,a}+u_i} - (1+x_{ik,a}) - \varepsilon(1-x_{ik,a})\right]\delta_{ss'}\\
\langle s|V^{g_a\bar q_i}_k|s'\rangle &= 8\pi\mu^{2\varepsilon}\alpha_s\,T_R
\left[1 - \varepsilon - 2x_{ik,a}(1-x_{ik,a})\right]\delta_{ss'}\\
\langle\mu|V^{q_aq_i}_k|\nu\rangle &= 8\pi\mu^{2\varepsilon}\alpha_s\,C_F
\left[-g^{\mu\nu}\,x_{ik,a} + \frac{1-x_{ik,a}}{x_{ik,a}}\,\frac{2u_i(1-u_i)}{p_ip_k}\,q^\mu_{ik}q^\nu_{ik}\right]\\
\langle\mu|V^{g_ag_i}_k|\nu\rangle &= 16\pi\mu^{2\varepsilon}\alpha_s\,C_A
\left[-g^{\mu\nu}\left(\frac{1}{1-x_{ik,a}+u_i} - 1 + x_{ik,a}(1-x_{ik,a})\right)\right.\\
&\qquad\qquad\qquad\left. + (1-\varepsilon)\,\frac{1-x_{ik,a}}{x_{ik,a}}\,\frac{2u_i(1-u_i)}{p_ip_k}\,q^\mu_{ik}q^\nu_{ik}\right],
\end{aligned}\tag{C.22}
\]
where
\[
q^\mu_{ik} = \frac{p^\mu_i}{u_i} - \frac{p^\mu_k}{1-u_i}\,.\tag{C.23}
\]
Indicating the number of polarizations of particle a with n_s(a), the spin-averaged
kernels in a suitable normalization read
\[
\begin{aligned}
\frac{n_s(\tilde q)}{n_s(q)}\,\frac{\langle V^{qg}_k\rangle}{8\pi\alpha_s\mu^{2\varepsilon}} &=
C_F\left[\frac{2}{1-x_{ik,a}+u_i} - (1+x_{ik,a}) - \varepsilon(1-x_{ik,a})\right]\\
\frac{n_s(\tilde q)}{n_s(g)}\,\frac{\langle V^{g\bar q}_k\rangle}{8\pi\alpha_s\mu^{2\varepsilon}} &=
T_R\left[1 - \frac{2x_{ik,a}(1-x_{ik,a})}{1-\varepsilon}\right]\\
\frac{n_s(\tilde g)}{n_s(q)}\,\frac{\langle V^{qq}_k\rangle}{8\pi\alpha_s\mu^{2\varepsilon}} &=
C_F\left[(1-\varepsilon)\,x_{ik,a} + 2\,\frac{1-x_{ik,a}}{x_{ik,a}}\right]\\
\frac{n_s(\tilde g)}{n_s(g)}\,\frac{\langle V^{gg}_k\rangle}{8\pi\alpha_s\mu^{2\varepsilon}} &=
2C_A\left[\frac{1}{1-x_{ik,a}+u_i} + \frac{1-x_{ik,a}}{x_{ik,a}} - 1 + x_{ik,a}(1-x_{ik,a})\right],
\end{aligned}\tag{C.24}
\]
cf. (CS-5.77–CS-5.80).
As in the FI case, the integration over the emission phase-space does not include
the parameter x_{ik,a} governing the kinematic map for the initial state. Therefore, this
time the integrated dipoles emerge from an integral over u_i only, cf. (CS-5.74),
\[
\mathcal{V}^{ai,a}(x,\varepsilon) = \left(\frac{1}{1-x}\right)^{\!\varepsilon}
\Theta(x)\,\Theta(1-x)\,\frac{n_s(\widetilde{ai})}{n_s(a)}
\int\limits_0^1 du_i\,[u_i(1-u_i)]^{-\varepsilon}\,
\frac{\langle V^{ai}_k\rangle}{8\pi\alpha_s\mu^{2\varepsilon}}\,.\tag{C.25}
\]
As before, poles emerge which this time are of the type f(x)/ε, with f(x) either
integrable over x or proportional to 1/(1−x). Therefore the integrated dipoles are
written as (CS-5.75)
\[
\mathcal{V}^{ai,a}(x,\varepsilon) = \frac{1}{\varepsilon\,x}
\left\{\Big[\varepsilon\,x\,\mathcal{V}^{ai,a}(x,\varepsilon)\Big]_+
 + \varepsilon\,\delta(1-x)\int\limits_0^1 d\tilde x\;\tilde x\,\mathcal{V}^{ai,a}(\tilde x,\varepsilon)\right\},\tag{C.26}
\]
where the pij are related to the Altarelli–Parisi splitting functions from Eq. (2.33)
through
cf. (CS-5.85–CS-5.88). This yields the integrated dipoles for the IF case,
\[
\begin{aligned}
\mathcal{V}^{qg}(x,\varepsilon) &= -\left[\frac{1}{\varepsilon} + \log(1-x)\right] p^{qg}(x) + C_F\,x + O(\varepsilon)\\
\mathcal{V}^{gq}(x,\varepsilon) &= -\left[\frac{1}{\varepsilon} + \log(1-x)\right] p^{gq}(x) + 2T_R\,x(1-x) + O(\varepsilon)\\
\mathcal{V}^{qq}(x,\varepsilon) &= -\frac{1}{\varepsilon}\,p^{qq}(x)
 + \delta(1-x)\left[\mathcal{V}_{qg}(\varepsilon) + C_F\left(\frac{2\pi^2}{3}-5\right)\right]\\
&\quad + C_F\left\{-\left[\frac{4}{1-x}\log\frac{1}{1-x}\right]_+ - \frac{2}{1-x}\log(2-x)
 + (1-x) - (1+x)\log(1-x)\right\}\\
\mathcal{V}^{gg}(x,\varepsilon) &= -\frac{1}{\varepsilon}\,p^{gg}(x)
 + \delta(1-x)\left[\mathcal{V}_{gg}(\varepsilon) + \frac{n_f}{2}\,\mathcal{V}_{q\bar q}(\varepsilon)
 + C_A\left(\frac{2\pi^2}{3}-\frac{50}{9}\right) + \frac{16}{9}\,n_fT_R\right]\\
&\quad + C_A\left\{-\left[\frac{4}{1-x}\log\frac{1}{1-x}\right]_+ - \frac{2}{1-x}\log(2-x)
 + 2\left[\frac{1-x}{x} - 1 + x(1-x)\right]\log(1-x)\right\}.
\end{aligned}\tag{C.29}
\]
x
The first terms ∝ 1/ε in each dipole are related to the collinear divergence in the
initial-state, while the terms ∝ Vij (ε) in V qq (x, ε) and V gg (x, ε) capture the soft
divergences stemming from the eikonals and are related to the emission of soft gluons.
Here,
\[
\begin{aligned}
\tilde p_{ai} &= x_{i,ab}\,p_a\\
x_{i,ab} &= \frac{p_ap_b - p_i(p_a+p_b)}{p_ap_b}\,.
\end{aligned}\tag{C.32}
\]
As can be seen, the momentum of the spectator parton p_b is not altered in this map;
instead, all final-state momenta are modified to read
where the momenta K and K̃ are the total momenta of the dipole before and after
the mapping onto Born-level kinematics has taken place:
where
\[
q^\mu = p^\mu_i - \frac{p_ip_a}{p_bp_a}\,p^\mu_b\,.\tag{C.36}
\]
With the notation concerning incident spins already used in the IF case, the phase-space
integration, cf. (CS-5.153), yields
\[
\tilde{\mathcal{V}}^{a,\widetilde{ai}}(x,\varepsilon) =
-\frac{1}{\varepsilon}\,\frac{\Gamma^2(1-\varepsilon)}{\Gamma(1-2\varepsilon)}\,
\frac{n_s(\widetilde{ai})}{n_s(a)}\,\Theta(x)\,\Theta(1-x)\,(1-x)^{-2\varepsilon}\,
\frac{\langle V^{ai,b}\rangle}{8\pi\alpha_s\mu^{2\varepsilon}}\,.\tag{C.37}
\]
The regular bits of the splitting functions, P^{(reg)}_{ab}(x), are the remainders after the poles
for x → 1 have been subtracted, namely
\[
P^{(\rm reg)}_{ab}(x) = P_{ab}(x) - \delta^{ab}\left[2\,T_a^2\left[\frac{1}{1-x}\right]_+ + \gamma_a^{(1)}\,\delta(1-x)\right],\tag{C.40}
\]
where both the splitting functions and their anomalous dimensions γ are given by
Eq. (2.33). In particular they read
\[
\begin{aligned}
P^{(\rm reg)}_{ab}(x) &= P_{ab}(x)\qquad\mbox{for }a\neq b\\
P^{(\rm reg)}_{qq}(x) &= -\,C_F\,(1+x)\\
P^{(\rm reg)}_{gg}(x) &= 2C_A\left[\frac{1-x}{x} - 1 + x(1-x)\right].
\end{aligned}\tag{C.41}
\]
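The last two entries of Eq. (C.41) follow from subtracting the 2T_a²/(1−x) pole of Eq. (C.40) from the unregularized x < 1 parts of P_qq and P_gg. The short sympy check below is ours and assumes their standard forms (which is what Eq. (2.33) contains, up to the endpoint distributions).

```python
import sympy as sp

x, CF, CA = sp.symbols('x C_F C_A')

# Assumed unregularized (x < 1) splitting functions
Pqq = CF*(1 + x**2)/(1 - x)
Pgg = 2*CA*(x/(1 - x) + (1 - x)/x + x*(1 - x))

# Subtract the 2 T_a^2/(1-x) pole of Eq. (C.40); the remainders are Eq. (C.41)
print(sp.simplify(Pqq - 2*CF/(1 - x) + CF*(1 + x)))                         # -> 0
print(sp.simplify(Pgg - 2*CA/(1 - x) - 2*CA*((1 - x)/x - 1 + x*(1 - x))))   # -> 0
```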
x
Note that the spin-averaged splitting kernels of course effectively are the Altarelli–
Parisi splitting functions P in D dimensions,
˜ hV ai,b (x)i
ns (ai)
= Pa,ai
˜ (x) (C.42)
ns (a) 8παs µ2ε
\[
{}+ \sum_{b'}\int\limits_0^1 dx\; d\sigma^{\rm (Born)}_{ab'}(p_a, x p_b)\otimes
\left[K^{bb'}(x) + P^{bb'}(x p_b, x;\mu_F^2)\right],
\]
see also Eq. (3.202) and (CS-10.27)–(CS-10.30). The integrated dipole terms are
\[
\begin{aligned}
\mathbf{I}(\varepsilon) = -\frac{\alpha_s}{2\pi}\,\frac{1}{\Gamma(1-\varepsilon)}
\Bigg\{&\sum_i \frac{\mathcal{V}_i(\varepsilon)}{\mathbf{T}_i^2}
\left[\sum_{k\neq i}\mathbf{T}_i\cdot\mathbf{T}_k\left(\frac{4\pi\mu^2}{2p_ip_k}\right)^{\!\varepsilon}
 + \mathbf{T}_i\cdot\mathbf{T}_a\left(\frac{4\pi\mu^2}{2p_ip_a}\right)^{\!\varepsilon}
 + \mathbf{T}_i\cdot\mathbf{T}_b\left(\frac{4\pi\mu^2}{2p_ip_b}\right)^{\!\varepsilon}\right]\\
&+ \frac{\mathcal{V}_a(\varepsilon)}{\mathbf{T}_a^2}
\left[\sum_{k}\mathbf{T}_a\cdot\mathbf{T}_k\left(\frac{4\pi\mu^2}{2p_ap_k}\right)^{\!\varepsilon}
 + \mathbf{T}_a\cdot\mathbf{T}_b\left(\frac{4\pi\mu^2}{2p_ap_b}\right)^{\!\varepsilon}\right]\\
&+ \frac{\mathcal{V}_b(\varepsilon)}{\mathbf{T}_b^2}
\left[\sum_{k}\mathbf{T}_b\cdot\mathbf{T}_k\left(\frac{4\pi\mu^2}{2p_bp_k}\right)^{\!\varepsilon}
 + \mathbf{T}_b\cdot\mathbf{T}_a\left(\frac{4\pi\mu^2}{2p_bp_a}\right)^{\!\varepsilon}\right]\Bigg\},
\end{aligned}\tag{C.44}
\]
where
\[
\begin{aligned}
K^{qq}(x) &= C_F\left\{\left[\frac{1+x^2}{1-x}\log\frac{1-x}{x}\right]_+ + (1-x)
 - \delta(1-x)\left(5-\pi^2\right)\right\}\\
K^{gg}(x) &= 2C_A\left\{\left[\frac{1}{1-x}\log\frac{1-x}{x}\right]_+
 + \left[\frac{1-x}{x} - 1 + x(1-x)\right]\log\frac{1-x}{x}\right\}\\
&\quad - \delta(1-x)\left[C_A\left(\frac{50}{9}-\pi^2\right) - \frac{16}{9}\,T_Rn_f\right]\\
K^{qg}(x) &= P^{(1)}_{qg}(x)\,\log\frac{1-x}{x} + C_F\,x\\
K^{gq}(x) &= P^{(1)}_{gq}(x)\,\log\frac{1-x}{x} + 2T_R\,x(1-x)\,.
\end{aligned}\tag{C.46}
\]
The terms K̃ contain the regular part of the splitting functions P^{(reg)}_{ba} defined in
Eq. (C.40) and listed in Eq. (C.41),
\[
\tilde K^{ab}(x) = P^{(\rm reg)}_{ba}(x)\,\log(1-x)
 + \delta^{ab}\,T_a^2\left\{\left[\frac{2\log(1-x)}{1-x}\right]_+ - \frac{\pi^2}{3}\,\delta(1-x)\right\}.\tag{C.47}
\]
• kinematical parameters:
\[
y_{ij;k} = \frac{p_ip_j}{p_ip_j + p_ip_k + p_jp_k}
\qquad\mbox{and}\qquad
z_i = \frac{p_ip_k}{p_ip_k + p_jp_k} = 1 - z_j\,;\tag{C.48}
\]
• transverse momentum:
\[
k_\perp^2 = z_i(1-z_i)\,y_{ij;k}\,Q^2\,,\tag{C.49}
\]
\[
J^{(FF)} = 1 - y_{ij;k}\,.\tag{C.50}
\]
• splitting kernels:
\[
\begin{aligned}
K^{(FF)}_{qg,k} &= C_F\left[\frac{2}{1-z_i(1-y_{ij;k})} - (1+z_i)\right]\\
K^{(FF)}_{gg,k} &= 2C_A\left[\frac{1}{1-z_i(1-y_{ij;k})} + \frac{1}{1-(1-z_i)(1-y_{ij;k})} - 2 + z_i(1-z_i)\right]\\
K^{(FF)}_{q\bar q,k} &= T_R\left[1 - 2z_i(1-z_i)\right];
\end{aligned}\tag{C.52}
\]
\[
p_k = (1-y_{ij;k})\,\tilde p_k\,.
\]
• kinematical parameters:
\[
x_{ij;a} = \frac{p_ip_a + p_jp_a - p_ip_j}{p_ip_a + p_jp_a}
\qquad\mbox{and}\qquad
z_i = \frac{p_ip_a}{p_ip_a + p_jp_a} = 1 - z_j\,;\tag{C.54}
\]
• transverse momentum:
\[
k_\perp^2 = z_i(1-z_i)\,\frac{1-x_{ij;a}}{x_{ij;a}}\,Q^2\,,\tag{C.55}
\]
where the PDF ratios typical for backward evolution, with η_a the momentum
fraction of a, are made explicit, and where the transformation from an integral
over x_{ij;a} to one over k_⊥² cancels the 1/(1 − x_{ij;a}) term stemming from the
one-particle phase-space integral written as a function of x_{ij;a};
• splitting kernels:
\[
\begin{aligned}
K^{(FI)}_{qg,k} &= C_F\left[\frac{2}{1-z_i+(1-x_{ij;a})} - (1+z_i)\right]\\
K^{(FI)}_{gg,k} &= 2C_A\left[\frac{1}{1-z_i+(1-x_{ij;a})} + \frac{1}{z_i+(1-x_{ij;a})} - 2 + z_i(1-z_i)\right]\\
K^{(FI)}_{q\bar q,k} &= T_R\left[1 - 2z_i(1-z_i)\right];
\end{aligned}\tag{C.57}
\]
\[
\begin{aligned}
p_i &= z_i\,\tilde p_{ij} + \frac{(1-z_i)(1-x_{ij;a})}{x_{ij;a}}\,\tilde p_k + \vec k_\perp\\
p_j &= (1-z_i)\,\tilde p_{ij} + \frac{z_i(1-x_{ij;a})}{x_{ij;a}}\,\tilde p_k - \vec k_\perp\\
p_k &= \frac{1}{x_{ij;a}}\,\tilde p_k\,.
\end{aligned}\tag{C.58}
\]
A short numerical cross-check of this construction is sketched below.
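This is a minimal numerical sketch of the construction in Eq. (C.58), assuming massless momenta, the metric diag(1, −1, −1, −1) and Q² = 2 p̃_ij·p̃_k in Eq. (C.55); the chosen values of z_i and x_{ij;a} are arbitrary. It confirms that the constructed momenta are on-shell and that Eq. (C.54) returns the input variables.

```python
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])
dot = lambda a, b: a @ g @ b

pt_ij = np.array([50.0, 0.0, 0.0,  50.0])     # mapped emitter (massless)
pt_k  = np.array([50.0, 0.0, 0.0, -50.0])     # mapped spectator (massless)
Q2 = 2.0*dot(pt_ij, pt_k)

zi, x = 0.3, 0.7                              # chosen splitting variables
kt = np.sqrt(zi*(1 - zi)*(1 - x)/x*Q2)        # |k_T| from Eq. (C.55)
k_perp = np.array([0.0, kt, 0.0, 0.0])        # orthogonal to pt_ij and pt_k

# Momentum construction of Eq. (C.58)
pi = zi*pt_ij + (1 - zi)*(1 - x)/x*pt_k + k_perp
pj = (1 - zi)*pt_ij + zi*(1 - x)/x*pt_k - k_perp
pk = pt_k/x

print([bool(np.isclose(dot(p, p), 0, atol=1e-7)) for p in (pi, pj, pk)])   # all massless
x_rec  = (dot(pi, pk) + dot(pj, pk) - dot(pi, pj)) / (dot(pi, pk) + dot(pj, pk))
zi_rec = dot(pi, pk) / (dot(pi, pk) + dot(pj, pk))
print(np.isclose(x_rec, x), np.isclose(zi_rec, zi))     # Eq. (C.54) is inverted exactly
```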
• kinematical parameters:
\[
x_{aj;k} = \frac{p_ap_j + p_ap_k - p_jp_k}{p_ap_j + p_ap_k}
\qquad\mbox{and}\qquad
u_a = \frac{p_ap_j}{p_ap_j + p_ap_k} = 1 - u_k\,;\tag{C.59}
\]
• transverse momentum:
\[
k_\perp^2 = u_a(1-u_a)\,\frac{1-x_{aj;k}}{x_{aj;k}}\,Q^2\,,\tag{C.60}
\]
In fact, these two phase-space maps are related to each other by the Lorentz
transformation Λ^µ{}_ν(K),
\[
\begin{aligned}
\Lambda^{\mu}{}_{\nu}(K) = g^{\mu}{}_{\nu}
&+ \frac{x_{aj;k}}{(1-u_a)(1-x_{aj;k})}\,\frac{k_\perp^\mu\, k_{\perp\nu}}{\tilde p_{aj}\tilde p_k}
 + \frac{u_a(1-x_{aj;k})}{x_{aj;k}-u_a}\,\frac{K^\mu K_\nu}{\tilde p_{aj}\tilde p_k}\\
&+ \frac{x_{aj;k}}{x_{aj;k}-u_a}\,\frac{k_\perp^\mu K_\nu - K^\mu k_{\perp\nu}}{\tilde p_{aj}\tilde p_k}\,,
\end{aligned}\tag{C.65}
\]
where
\[
x_{aj;b} = \frac{p_ap_b - p_ap_j - p_bp_j}{p_ap_b}\,.\tag{C.67}
\]
Then, further building blocks are given by
• transverse momentum:
\[
k_\perp^2 = v_j\,\frac{1-x_{aj;b}-v_j}{x_{aj;b}}\,Q^2\,,\tag{C.68}
\]
• splitting kernels:
\[
\begin{aligned}
K^{(II)}_{qg,k} &= C_F\left[\frac{2}{1-x_{aj;b}} - (1+x_{aj;b})\right]\\
K^{(II)}_{qq,k} &= C_F\left[\frac{2(1-x_{aj;b})}{x_{aj;b}} + x_{aj;b}\right]\\
K^{(II)}_{gg,k} &= 2C_A\left[\frac{1}{1-x_{aj;b}} + \frac{1-x_{aj;b}}{x_{aj;b}} - 1 + x_{aj;b}(1-x_{aj;b})\right]\\
K^{(II)}_{q\bar q,k} &= T_R\left[1 - 2x_{aj;b}(1-x_{aj;b})\right];
\end{aligned}\tag{C.71}
\]
and all other final-state momenta k_j are suitably boosted and rotated by a Lorentz
transformation Λ.
[13] Aad, Georges et al. (2012g). Measurement of the production cross section of an
isolated photon associated with jets in proton-proton collisions at √s = 7 TeV with
the ATLAS detector. Phys. Rev., D85, 092014. doi:10.1103/PhysRevD.85.092014.
[14] Aad, Georges et al. (2012h). Measurement √ of the t-channel single top-quark pro-
duction cross section in pp collisions at s = 7 TeV with the ATLAS detector.
Phys. Lett., B717, 330–350. doi:10.1016/j.physletb.2012.09.031.
[15] Aad, Georges et al. (2012i). Observation of a new particle in the search for
the Standard Model Higgs boson with the ATLAS detector at the LHC. Phys.
Lett., B716, 1–29. doi:10.1016/j.physletb.2012.08.020.
[16] Aad, Georges et al. (2012j). Rapidity gap cross sections measured with the
ATLAS detector in pp collisions at √s = 7 TeV. Eur. Phys. J., C72, 1926.
doi:10.1140/epjc/s10052-012-1926-0.
[17] Aad, Georges et al. (2013a).
√ Measurement of hard double-parton interactions in
W (→ lν)+ 2 jet events at s=7 TeV with the ATLAS detector. New J. Phys., 15,
033038. doi:10.1088/1367-2630/15/3/033038.
[18] Aad, Georges et al.√ (2013b). Measurement of isolated-photon pair production
in pp collisions at s = 7 TeV with the ATLAS detector. JHEP , 1301, 086.
doi:10.1007/JHEP01(2013)086.
[19] Aad, Georges et al. (2013c). Measurement of the cross-section
√ for W boson pro-
duction in association with b-jets in pp collisions at s = 7 TeV with the ATLAS
detector. JHEP , 1306, 084. doi:10.1007/JHEP06(2013)084.
[20] Aad, Georges et al. (2013d). Measurement of the √ production cross section of jets
in association with a Z boson in pp collisions at s = 7 TeV with the ATLAS
detector. JHEP , 1307, 032. doi:10.1007/JHEP07(2013)032.
[21] Aad, Georges
√ et al. (2013e). Measurement of W + W − production in pp colli-
sions at s=7 TeV with the ATLAS detector and limits on anomalous WWZ and
WWW couplings. Phys. Rev., D87(11), 112001. doi:10.1103/PhysRevD.87.112001,
10.1103/PhysRevD.88.079906.
[22] Aad, Georges et al. (2013f). Measurement of ZZ production in pp collisions at
√s = 7 TeV and limits on anomalous ZZZ and ZZγ couplings with the ATLAS
detector. JHEP, 1303, 128. doi:10.1007/JHEP03(2013)128.
[23] Aad, Georges et al. (2013g). Search for light top squark pair√production in final
states with leptons and b− jets with the ATLAS detector in s = 7 TeV proton-
proton collisions. Phys. Lett., B720, 13–31. doi:10.1016/j.physletb.2013.01.049.
[24] Aad, Georges et al. (2014a). Comprehensive
√ measurements of t-channel single
top-quark production cross sections at s = 7 TeV with the ATLAS detector.
Phys. Rev., D90(11), 112006. doi:10.1103/PhysRevD.90.112006.
[25] Aad, Georges et al. (2014b). Evidence for electroweak production of W±W±jj in
pp collisions at √s = 8 TeV with the ATLAS detector. Phys. Rev. Lett., 113(14),
141803. doi:10.1103/PhysRevLett.113.141803.
[26] Aad, Georges et al. (2014c). Measurement of differential production cross-sections
for a Z boson in association with b-jets in 7 TeV proton-proton collisions with the
ATLAS detector. JHEP , 1410, 141. doi:10.1007/JHEP10(2014)141.
[27] Aad, Georges et al. (2014d). Measurement of dijet cross sections in pp collisions
at 7 TeV centre-of-mass energy using the ATLAS detector. JHEP , 1405, 059.
doi:10.1007/JHEP05(2014)059.
[28] Aad, Georges et al. (2014e). Measurement of distributions sensitive√ to the under-
lying event in inclusive Z-boson production in pp collisions at s = 7 TeV with
the ATLAS detector. Eur. Phys. J., C74(12), 3195. doi:10.1140/epjc/s10052-014-
3195-6.
[29] Aad, Georges et al. (2014f). Measurement of Higgs boson production in the dipho-
ton decay channel in pp collisions at center-of-mass energies of 7 and 8 TeV with the
ATLAS detector. Phys. Rev., D90(11), 112015. doi:10.1103/PhysRevD.90.112015.
[30] Aad, Georges et al. (2014g). Measurement
√ of the inclusive isolated prompt photons
cross section in pp collisions at s = 7 TeV with the ATLAS detector using 4.6
fb−1 . Phys. Rev., D89(5), 052004. doi:10.1103/PhysRevD.89.052004.
[31] Aad, Georges et al. (2014h).√ Measurement of the inclusive jet cross-section in
proton-proton collisions at s = 7 TeV using 4.5 fb−1 of data with the ATLAS
detector.
[32] Aad, Georges et al. (2014i). Measurement√of the top quark pair production charge
asymmetry in proton-proton collisions at s = 7 TeV using the ATLAS detector.
JHEP , 1402, 107. doi:10.1007/JHEP02(2014)107.
[33] Aad, Georges et al. (2014j). Measurement
√ of the total cross section from elas-
tic scattering in pp collisions at s = 7 TeV with the ATLAS detector. Nucl.
Phys., B889, 486–548. doi:10.1016/j.nuclphysb.2014.10.019.
[34] Aad, Georges et al. (2014k). Measurement √ of the Z/γ ∗ boson transverse mo-
mentum distribution in pp collisions at s = 7 TeV with the ATLAS detector.
JHEP , 1409, 145. doi:10.1007/JHEP09(2014)145.
[35] Aad, Georges et al. (2014l). Measurements of fiducial and differential √ cross sec-
tions for Higgs boson production in the diphoton decay channel at s = 8 TeV
with ATLAS. JHEP , 09, 112. doi:10.1007/JHEP09(2014)112.
[36] Aad, Georges et√ al. (2015a). Combined measurement of the Higgs boson mass in
pp collisions at s = 7 and 8 TeV with the ATLAS and CMS experiments. Phys.
Rev. Lett., 114, 191803. doi:10.1103/PhysRevLett.114.191803.
[37] Aad, Georges et al. (2015b). Constraints on the off-shell Higgs boson signal
strength in the high-mass ZZ and W W final states with the ATLAS detector.
Eur. Phys. J., C75(7), 335. doi:10.1140/epjc/s10052-015-3542-2.
[38] Aad, Georges et al. (2015c). Evidence for the Higgs-boson Yukawa coupling to tau
leptons with the ATLAS detector. JHEP , 04, 117. doi:10.1007/JHEP04(2015)117.
[39] Aad, Georges et al. (2015d). Jet energy
√ measurement and its systematic uncer-
tainty in proton-proton collisions at s = 7 TeV with the ATLAS detector. Eur.
Phys. J., C75, 17. doi:10.1140/epjc/s10052-014-3190-y.
[40] Aad, Georges et al. (2015e). Measurement of the tt production cross-
section as a function of jet multiplicity and jet transverse momentum in 7
TeV proton-proton collisions with the ATLAS detector. JHEP , 01, 020.
doi:10.1007/JHEP01(2015)020.
[41] Aad, Georges et al. (2015f). Measurement of the W W + W Z cross section and
limits on anomalous triple gauge couplings using final states with one lepton,
√ miss-
ing transverse momentum, and two jets with the ATLAS detector at s = 7 TeV.
JHEP , 01, 049. doi:10.1007/JHEP01(2015)049.
[42] Aad, Georges et al. (2015g). Measurements of Higgs boson production and
couplings in the four-lepton channel in pp collisions at center-of-mass ener-
gies of 7 and 8TeV with the ATLAS detector. Phys. Rev., D91(1), 012006.
doi:10.1103/PhysRevD.91.012006.
[43] Aad, Georges et al. (2015h). Measurements of the total and differential Higgs
boson production cross sections combining the H → γγ and H → ZZ* → 4l decay
channels at √s = 8 TeV with the ATLAS detector. Phys. Rev. Lett., 115(9), 091801.
doi:10.1103/PhysRevLett.115.091801.
[44] Aad, Georges et al. (2015i). Measurements of the W production cross sections
in association with jets with the ATLAS detector. Eur. Phys. J., C75(2), 82.
doi:10.1140/epjc/s10052-015-3262-7.
[45] Aad, Georges et al. (2015j). Observation and measurement of Higgs bo-
son decays to W W ∗ with the ATLAS detector. Phys. Rev., D92(1), 012006.
doi:10.1103/PhysRevD.92.012006.
[46] Aad, Georges et al. (2015k). Observation of top-quark pair production in asso-
ciation with √a photon and measurement of the tt̄γ production cross section in pp
collisions at s = 7 TeV using the ATLAS detector. Phys. Rev., D91(7), 072007.
doi:10.1103/PhysRevD.91.072007.
[47] Aad, Georges et al. (2015l). √ Search for s-channel single top-quark production
in proton-proton collisions at s = 8 TeV with the ATLAS detector. Phys.
Lett., B740, 118–136. doi:10.1016/j.physletb.2014.11.042.
[48] Aad, Georges et al. (2015m). Search for the bb̄ decay of the Standard Model Higgs
boson in associated (W/Z)H production with the ATLAS detector. JHEP , 01,
069. doi:10.1007/JHEP01(2015)069.
[49] Aad, Georges et al. (2015n). Study of the spin and parity of the Higgs bo-
son in diboson decays with the ATLAS detector. Eur. Phys. J., C75(10), 476.
[Erratum: Eur. Phys. J.C76,no.3,152(2016)]. doi:10.1140/epjc/s10052-015-3685-1,
10.1140/epjc/s10052-016-3934-y.
[50] Aad, Georges et al. (2016a). Measurement of the charge asymmetry in √ top-quark
pair production in the lepton-plus-jets final state in pp collision data at s = 8 TeV
with the ATLAS detector. Eur. Phys. J., C76(2), 87. doi:10.1140/epjc/s10052-
016-3910-6.
[51] Aad, Georges et al. (2016b). Measurement
√ of the inclusive isolated prompt photon
cross section in pp collisions at s = 8 TeV with the ATLAS detector. JHEP , 08,
005. doi:10.1007/JHEP08(2016)005.
[52] Aad, Georges et al. (2016c). Measurement of the transverse momentum and φ*_η
distributions of Drell–Yan lepton pairs in proton-proton collisions at √s = 8 TeV
with the ATLAS detector. Eur. Phys. J., C76(5), 291. doi:10.1140/epjc/s10052-
016-4070-4.
[66] Aaltonen, T. et al. (2013a). Higgs boson studies at the Tevatron. Phys.
Rev., D88(5), 052014. doi:10.1103/PhysRevD.88.052014.
[67] Aaltonen, T. et al. (2013b). Measurement of the cross section for prompt
isolated diphoton production using the full CDF Run II data sample.
Phys. Rev. Lett., 110(10), 101801. doi:10.1103/PhysRevLett.110.101801.
[68] Aaltonen, T. et al. (2013c). Measurement of the top quark forward-backward
production asymmetry and its dependence on event kinematic properties.
Phys. Rev., D87, 092002. doi:10.1103/PhysRevD.87.092002.
[69] Aaltonen, T. et al. (2013d). Search for resonant top-antitop production in the
lepton plus jets decay mode using the full CDF data set. Phys. Rev. Lett., 110(12),
121802. doi:10.1103/PhysRevLett.110.121802.
[70] Aaltonen, Timo Antero et al. (2014a). Combination of measurements of the top-
quark pair production cross section from the Tevatron Collider. Phys. Rev., D89,
072001. doi:10.1103/PhysRevD.89.072001.
[71] Aaltonen, Timo Antero et al. (2014b). Measurement of the inclusive leptonic
asymmetry in top-quark pairs that decay to two charged leptons at CDF. Phys.
Rev. Lett., 113, 042001. [Erratum: Phys. Rev. Lett.117,no.19,199901(2016)].
doi:10.1103/PhysRevLett.113.042001, 10.1103/PhysRevLett.117.199901.
[72] Aaltonen, Timo Antero et al. (2015a). Study of the energy dependence of the
underlying event in proton-antiproton collisions. Phys. Rev., D92(9), 092009.
doi:10.1103/PhysRevD.92.092009.
[73] Aaltonen, Timo Antero et al. (2015b). Tevatron combination of single-top-
quark cross sections and determination of the magnitude of the Cabibbo-
Kobayashi-Maskawa matrix element Vtb . Phys. Rev. Lett., 115(15), 152003.
doi:10.1103/PhysRevLett.115.152003.
[74] Aaltonen, T. et al. (2009b, Jun). Measurement of particle production and inclusive
differential cross sections in pp̄ collisions at √s = 1.96 TeV. Phys. Rev. D, 79,
112005. doi:10.1103/PhysRevD.79.112005.
[75] Aaltonen, T. et al. (2010, Dec). Erratum: Measurement of particle production and
inclusive differential cross sections in pp̄ collisions at √s = 1.96 TeV [Phys. Rev. D
79, 112005 (2009)]. Phys. Rev. D, 82, 119903. doi:10.1103/PhysRevD.82.119903.
[76] Aaltonen, T. et al. (2015, Jan). Measurement of differential production cross
sections for Z/γ* bosons in association with jets in pp̄ collisions at √s = 1.96 TeV.
Phys. Rev. D, 91, 012002. doi:10.1103/PhysRevD.91.012002.
[77] Aaron, F.D. et al. (2010a). Combined measurement and QCD analysis of
the inclusive e+- p scattering cross sections at HERA. JHEP , 1001, 109.
doi:10.1007/JHEP01(2010)109.
[78] Aaron, F.D. et al. (2010b). Jet production in ep collisions at low Q2 and deter-
mination of αs . Eur. Phys. J., C67, 1–24. doi:10.1140/epjc/s10052-010-1282-x.
[79] Aaron, F. D. et al. (2009). Jet production in ep collisions at high Q2 and determi-
nation of αs . Eur. Phys. J., C65, 363–383. DESY-09-032. doi:10.1140/epjc/s10052-
009-1208-7.
[80] Ababekri, Mamut, Dulat, Sayipjamal, Isaacson, Joshua, Schmidt, Carl, and Yuan,
C. P. (2016). Implication of CMS data on photon PDFs (arXiv:1603.04874).
[81] Abachi, S. et al. (1995). Observation of the top quark. Phys. Rev. Lett., 74,
2632–2637. doi:10.1103/PhysRevLett.74.2632.
[82] Abazov, √ V.M. et al. (2001). The ratio of the isolated photon cross sec-
tions at s = 630 GeV and 1800 GeV. Phys. Rev. Lett., 87, 251805.
doi:10.1103/PhysRevLett.87.251805.
[83] Abazov, V.M. et al. (2008a). First measurement of the forward-backward
charge asymmetry in top quark pair production. Phys. Rev. Lett., 100, 142002.
doi:10.1103/PhysRevLett.100.142002.
[84] Abazov, V.M. et al. (2008b). First study of the radiation-amplitude
√ zero in
W γ production and limits on anomalous W W γ couplings at s = 1.96- TeV.
Phys. Rev. Lett., 100, 241805. doi:10.1103/PhysRevLett.100.241805.
[85] Abazov, V.M. et al. (2008c). Measurement
√ of differential Z/γ ∗ + jet + X
cross sections in pp̄ collisions at s = 1.96-TeV. Phys. Lett., B669, 278–286.
doi:10.1016/j.physletb.2008.09.060.
[86] Abazov, V.M. et al. (2008d). Measurement of the inclusive jet cross-section
in pp̄ collisions at √s = 1.96 TeV. Phys. Rev. Lett., 101, 062001.
doi:10.1103/PhysRevLett.101.062001.
[87] Abazov, V.M. et al. (2009a). Combination of tt̄ cross section measurements and
constraints on the mass of the top quark and its decays into charged Higgs bosons.
Phys. Rev., D80, 071102. doi:10.1103/PhysRevD.80.071102.
∗
[88] Abazov, V.M. et al. (2009b). Measurements of differential √ cross sections of Z/γ
+ jets + X events in proton anti-proton collisions at s = 1.96-TeV. Phys.
Lett., B678, 45–54. doi:10.1016/j.physletb.2009.05.058.
[89] Abazov, V.M. et al. (2010a). √ Double parton interactions in photon+3
jet events in pp̄ collisions s = 1.96 TeV. Phys. Rev., D81, 052012.
doi:10.1103/PhysRevD.81.052012.
[90] Abazov, V. et al. (2010b). Measurement
√ of direct photon pair production
cross sections in pp̄ collisions at s = 1.96 TeV. Phys. Lett., B690, 108–117.
doi:10.1016/j.physletb.2010.05.017.
[91] Abazov, V.M. et al. (2013a). Measurement of the differential
√ cross sections for
isolated direct photon pair production in pp̄ collisions at s = 1.96 TeV. Phys.
Lett., B725, 6–14. doi:10.1016/j.physletb.2013.06.036.
[92] Abazov, V.M. et al. (2013b). √ Measurement of the pp̄ → W + b + X pro-
duction cross section at s = 1.96 TeV. Phys. Lett., B718, 1314–1320.
doi:10.1016/j.physletb.2012.12.044.
[93] Abazov, V. M. et al. (2006). √ Measurement of the isolated photon cross
section in pp̄ collisions at s = 1.96 TeV. Phys. Lett., B639, 151–158.
doi:10.1016/j.physletb.2006.04.048.
[94] Abazov, V. M. et al. (2007). Measurement of the shape of the√boson rapidity
distribution for pp̄ → Z/γ ∗ → e+ e− + X events produced at s of 1.96-TeV.
Phys. Rev., D76, 012003. doi:10.1103/PhysRevD.76.012003.
[95] Abazov, Victor Mukhamedovich et al. (2009c). Measurement of the W boson
mass. Phys. Rev. Lett., 103, 141801. doi:10.1103/PhysRevLett.103.141801.
[112] Abe, F. et al. (1995). Observation of top quark production in p̄p collisions.
Phys. Rev. Lett., 74, 2626–2631. doi:10.1103/PhysRevLett.74.2626.
[113] Abe,
√ F. et al. (1997). Measurement of double parton scattering in p̄p collisions
at s = 1.8 TeV. Phys. Rev. Lett., 79, 584–589.
[114] Abelev, Betty et al. (2013). Measurement of inelastic, single- and double-
diffraction cross sections in proton-proton collisions at the LHC with ALICE. Eur.
Phys. J., C73, 2456. doi:10.1140/epjc/s10052-013-2456-0.
[115] Abramowicz, H. et al. (2015). Combination of measurements of inclusive deep
inelastic e± p scattering cross sections and QCD analysis of HERA data. Eur. Phys.
J., C75(12), 580. doi:10.1140/epjc/s10052-015-3710-4.
[116] Abulencia, A.√et al. (2006a). Measurement of the inclusive jet cross section in pp̄
interactions at s = 1.96-TeV using a cone-based jet algorithm. Phys. Rev., D74,
071103. doi:10.1103/PhysRevD.74.071103.
[117] Abulencia, A. et al. (2006b). Measurement√of the inclusive jet cross section using
the k(t) algorithm in p anti-p collisions at s = 1.96-TeV. Phys. Rev. Lett., 96,
122001. doi:10.1103/PhysRevLett.96.122001.
[118] Abulencia, A. et al. (2006c). √ Measurement of the tt̄ Production Cross
Section in pp̄ Collisions at s = 1.96 TeV. Phys. Rev. Lett., 97, 082004.
doi:10.1103/PhysRevLett.97.082004.
[119] Abulencia, A. et al. (2007). Measurement
√ of the inclusive jet cross section using
the kT algorithm in pp̄ collisions at s = 1.96 TeV with the CDF II detector.
Phys. Rev., D75, 092006. doi:10.1103/PhysRevD.75.092006.
[120] Acosta, D. et al. (2002). Comparison of the isolated direct photon cross sections
in pp̄ collisions at √s = 1.8 TeV and √s = 0.63 TeV. Phys. Rev., D65, 112003.
doi:10.1103/PhysRevD.65.112003.
[121] Acosta, D. et al. (2004). The underlying event in hard interactions at the Teva-
tron p̄p collider. Phys. Rev., D70, 072002. doi:10.1103/PhysRevD.70.072002.
[122] Acosta, D. et al. (2005).√ Study of jet shapes in inclusive jet pro-
duction in pp̄ collisions at s = 1.96 TeV. Phys. Rev., D71, 112002.
doi:10.1103/PhysRevD.71.112002.
[123] Actis, Stefano, Passarino, Giampiero, Sturm, Christian, and Uccirati, Sandro
(2008). NLO electroweak corrections to Higgs boson production at hadron colliders.
Phys. Lett., B670, 12–17. doi:10.1016/j.physletb.2008.10.018.
[124] Adams, D. et al. (2015). Towards an Understanding of the Correlations in Jet
Substructure. Eur. Phys. J., C75(9), 409. doi:10.1140/epjc/s10052-015-3587-2.
[125] Ade, P. A. R. et al. (2014). Planck 2013 results. XVI. Cosmological parameters.
Astron. Astrophys., 571, A16. doi:10.1051/0004-6361/201321591.
[126] Affolder, T. et al. (2001). √ Measurement of the inclusive jet cross sec-
tion in p̄p collisions at s = 1.8 TeV. Phys. Rev., D64, 032001.
doi:10.1103/PhysRevD.65.039903, 10.1103/PhysRevD.64.032001.
[127] Affolder, T. et al. (2002). Charged jet evolution and the underlying event in
proton-antiproton collisions at 1.8 TeV. Phys. Rev., D65, 092002.
[128] Aguilar-Saavedra, J.A., Juste, A., and Rubbo, F. (2012). Boosting the t-tbar
charge asymmetry. Phys. Lett., B707, 92–98. doi:10.1016/j.physletb.2011.12.007.
[129] Ahmed, Taushif, Mandal, M. K., Rana, Narayan, and Ravindran, V. (2014). Ra-
pidity distributions in Drell-Yan and Higgs productions at threshold to third order
in QCD. Phys. Rev. Lett., 113, 212003. doi:10.1103/PhysRevLett.113.212003.
[130] Ahrens, Valentin, Becher, Thomas, Neubert, Matthias, and Yang, Li Lin (2009).
Origin of the large perturbative corrections to Higgs production at hadron colliders.
Phys. Rev., D79, 033013. doi:10.1103/PhysRevD.79.033013.
[131] Aivazis, M. A. G., Collins, John C., Olness, Fredrick I., and Tung, Wu-Ki (1994).
Leptoproduction of heavy quarks. II. A unified QCD formulation of charged and
neutral current processes from fixed-target to collider energies. Phys. Rev., D50,
3102–3118. doi:10.1103/PhysRevD.50.3102.
[132] Akrawy, M.Z. et al. (1990). A Study of coherence of soft gluons in hadron jets.
Phys. Lett., B247, 617–628. doi:10.1016/0370-2693(90)91911-T.
[133] Albrow, Michael G. et al. (2006). Tevatron-for-LHC Report of the QCD Working
Group (hep-ph/0610012).
[134] Alcaraz Maestre, J. et al. (2012). The SM and NLO Multileg and SM MC
Working Groups: Summary Report (arXiv:1203.6803).
[135] Alekhin, Sergey et al. (2011). The PDF4LHC Working Group interim report.
[136] Alekhin, S., Blumlein, J., and Moch, S. (2014). The ABM par-
ton distributions tuned to LHC data. Phys. Rev., D89(5), 054028.
doi:10.1103/PhysRevD.89.054028.
[137] Ali, Ahmed, Pietarinen, E., Kramer, G., and Willrodt, J. (1980). A QCD
analysis of the high-energy e+ e− data from PETRA. Phys. Lett., B93, 155.
doi:10.1016/0370-2693(80)90116-1.
[138] Alioli, S., Badger, S., Bellm, J., Biedermann, B., Boudjema, F. et al. (2014).
Update of the Binoth Les Houches Accord for a standard interface between Monte
Carlo tools and one-loop programs. Comput. Phys. Commun., 185, 560–571.
doi:10.1016/j.cpc.2013.10.020.
[139] Alioli, Simone, Nason, Paolo, Oleari, Carlo, and Re, Emanuele (2009). NLO
Higgs boson production via gluon fusion matched with shower in POWHEG.
JHEP , 04, 002. doi:10.1088/1126-6708/2009/04/002.
[140] Alitti, J. et al. (1991). A Determination of the strong coupling constant αs
from W production at the CERN p anti-p collider. Phys. Lett., B263, 563–572.
doi:10.1016/0370-2693(91)90505-K.
[141] Almeida, Leandro G., Sterman, George F., and Vogelsang, Werner (2008).
Threshold resummation for the top quark charge asymmetry. Phys. Rev., D78,
014008. doi:10.1103/PhysRevD.78.014008.
[142] Altarelli, Guido, Ellis, R. Keith, and Martinelli, G. (1979). Large pertur-
bative corrections to the Drell-Yan process in QCD. Nucl. Phys., B157, 461.
doi:10.1016/0550-3213(79)90116-0.
[143] Altarelli, Guido and Parisi, G. (1977). Asymptotic freedom in parton language.
Nucl. Phys., B126, 298–318.
[144] Altheimer, A., Arora, S., Asquith, L., Brooijmans, G., Butterworth, J. et al.
(2012). Jet substructure at the Tevatron and LHC: New results, new tools, new
benchmarks. J. Phys., G39, 063001. doi:10.1088/0954-3899/39/6/063001.
[145] Alwall, J. et al. (2007a). A standard format for Les Houches Event Files. Comput.
Phys. Commun., 176, 300–304.
[146] Alwall, Johan et al. (2007b). MadGraph/MadEvent v4: The new web generation.
JHEP , 09, 028.
[147] Alwall, J. et al. (2008). Comparative study of various algorithms for the merging
of parton showers and matrix elements in hadronic collisions. Eur. Phys. J., C53,
473–500.
[148] Alwall, J., Frederix, R., Frixione, S., Hirschi, V., Maltoni, F., Mattelaer, O., Shao,
H. S., Stelzer, T., Torrielli, P., and Zaro, M. (2014). The automated computation of
tree-level and next-to-leading order differential cross sections, and their matching
to parton shower simulations. JHEP , 07, 079. doi:10.1007/JHEP07(2014)079.
[149] Alwall, Johan, Frederix, R., Gerard, J. M., Giammanco, A., Herquet, M.,
Kalinin, S., Kou, E., Lemaitre, V., and Maltoni, F. (2007c). Is V(tb ) ' 1? Eur.
Phys. J., C49, 791–801. doi:10.1140/epjc/s10052-006-0137-y.
[150] Alwall, Johan, Herquet, Michel, Maltoni, Fabio, Mattelaer, Olivier, and
Stelzer, Tim (2011). MadGraph 5 : Going Beyond. JHEP , 1106, 128.
doi:10.1007/JHEP06(2011)128.
[151] Amati, D. and Veneziano, G. (1979). Preconfinement as a property of perturba-
tive QCD. Phys. Lett., B83, 87. doi:10.1016/0370-2693(79)90896-7.
[152] Ametller, L., Gava, E., Paver, N., and Treleani, D. (1985). Role of the QCD
induced gluon - gluon coupling to gauge boson pairs in the multi - TeV region.
Phys. Rev., D32, 1699. doi:10.1103/PhysRevD.32.1699.
[153] Ametller, L., Paver, N., and Treleani, D. (1986). Possible signature of mul-
tiple parton interactions in collider four jet events. Phys. Lett., B169, 289.
doi:10.1016/0370-2693(86)90668-4.
[154] Anastasiou, Charalampos, Dixon, Lance J., Melnikov, Kirill, and Petriello,
Frank (2004a). High precision QCD at hadron colliders: Electroweak
gauge boson rapidity distributions at NNLO. Phys. Rev., D69, 094008.
doi:10.1103/PhysRevD.69.094008.
[155] Anastasiou, Charalampos, Duhr, Claude, Dulat, Falko, Furlan, Elisabetta,
Gehrmann, Thomas et al. (2014). Higgs boson gluon fusion production at threshold
in N 3 LO QCD. Phys. Lett., B737, 325–328. doi:10.1016/j.physletb.2014.08.067.
[156] Anastasiou, Charalampos, Duhr, Claude, Dulat, Falko, Furlan, Elisabetta,
Gehrmann, Thomas, Herzog, Franz, Lazopoulos, Achilleas, and Mistlberger, Bern-
hard (2016). High precision determination of the gluon fusion Higgs boson cross-
section at the LHC. JHEP , 05, 058. doi:10.1007/JHEP05(2016)058.
[157] Anastasiou, Charalampos, Duhr, Claude, Dulat, Falko, Herzog, Franz, and Mistl-
berger, Bernhard (2015). Higgs boson gluon-fusion production in QCD at three
loops. Phys. Rev. Lett., 114(21), 212001. doi:10.1103/PhysRevLett.114.212001.
[158] Anastasiou, Charalampos and Melnikov, Kirill (2002). Higgs boson production at
hadron colliders in NNLO QCD. Nucl. Phys., B646, 220–256. doi:10.1016/S0550-
3213(02)00837-4.
[159] Anastasiou, Charalampos, Melnikov, Kirill, and Petriello, Frank (2004b).
A new method for real radiation at NNLO. Phys. Rev., D69, 076010.
doi:10.1103/PhysRevD.69.076010.
[160] Anastasiou, Charalampos, Melnikov, Kirill, and Petriello, Frank (2004c).
Higgs boson production at hadron colliders: Differential cross sections
through next–to-next–to–leading order. Phys. Rev. Lett., 93, 262002.
doi:10.1103/PhysRevLett.93.262002.
[161] Andersen, J. R. et al. (2016). Les Houches 2015: Physics at TeV Colliders
Standard Model Working Group Report. In 9th Les Houches Workshop on Physics
at TeV Colliders (PhysTeV 2015) Les Houches, France, June 1-19, 2015.
[162] Andersson, Bo (1997). The Lund model. Volume 7. Camb. Monogr. Part. Phys.
Nucl. Phys. Cosmol.
[163] Andersson, Bo, Gustafson, G., Ingelman, G., and Sjöstrand, T. (1983). Parton
fragmentation and string dynamics. Phys. Rept., 97, 31–145. doi:10.1016/0370-
1573(83)90080-7.
[164] Andersson, Bo, Gustafson, Gosta, Lönnblad, Leif, and Pettersson, Ulf (1989).
Coherence effects in deep–inelastic scattering. Z. Phys., C43, 625.
[165] Andersson, Bo, Gustafson, G., and Sjöstrand, T. (1985). Baryon production in
jet fragmentation and Upsilon-decay. Phys. Scripta, 32, 574. doi:10.1088/0031-
8949/32/6/003.
[166] Antchev, G. et al. (2013a). Luminosity-independent
√ measurement of the proton-
proton total cross section at s = 8 TeV. Phys. Rev. Lett., 111(1), 012001.
doi:10.1103/PhysRevLett.111.012001.
[167] Antchev, G. et al. (2013b). Luminosity-independent
√ measurements of total,
elastic and inelastic cross-sections at s = 7 TeV. Europhys. Lett., 101, 21004.
doi:10.1209/0295-5075/101/21004.
[168] Appell, David, Sterman, George F., and Mackenzie, Paul B. (1988). Soft gluons
and the normalization of the Drell-Yan cross-section. Nucl. Phys., B309, 259.
doi:10.1016/0550-3213(88)90082-X.
[169] Arneodo, M. et al. (1997a). Accurate measurement of F2(d) / F2(p) and Rd −Rp .
Nucl. Phys., B487, 3–26. doi:10.1016/S0550-3213(96)00673-6.
[170] Arneodo, M. et al. (1997b). Measurement of the proton and deuteron structure
functions, F2(p) and F2(d), and of the ratio sigma-L / sigma-T. Nucl. Phys., B483,
3–43. doi:10.1016/S0550-3213(96)00538-X.
[171] Arnold, K., Bahr, M., Bozzi, Giuseppe, Campanario, F., Englert, C. et al. (2009).
VBFNLO: A parton level Monte Carlo for processes with electroweak bosons. Com-
put. Phys. Commun., 180, 1661–1670. doi:10.1016/j.cpc.2009.03.006.
[172] Artru, X. and Mennessier, G. (1974). String model and multiproduction. Nucl.
Phys., B70, 93–115.
[173] ATLAS
√ (2012). Measurement of the inclusive jet cross section in pp√ collisions
at s = 2.76TeV and comparison to the inclusive jet cross section at s = 7TeV
using the ATLAS detector (ATLAS-CONF-2012-128, ATLAS-COM-CONF-2012-
173).
[174] ATLAS (2013). Measurement of multi-jet cross-section ratios√and determination
of the strong coupling constant in proton-proton collisions at s=7 TeV with the
ATLAS detector (ATLAS-CONF-2013-041).
[175] ATLAS (2015, Jul). √ Measurement of the tt̄W and tt̄Z production cross sections
in pp collisions at s = 8 TeV with the ATLAS detector. Technical Report
ATLAS-CONF-2015-032, CERN, Geneva.
[176] Azimov, Yakov I., Dokshitzer, Yuri L., Khoze, Valery A., and Troian, S.I.
(1985a). The string effect and QCD coherence. Phys. Lett., B165, 147–150.
doi:10.1016/0370-2693(85)90709-9.
[177] Azimov, Yakov I., Dokshitzer, Yuri L., Khoze, Valery A., and Troyan, S.I.
(1985b). Similarity of parton and hadron Spectra in QCD jets. Z. Phys., C27,
65–72. doi:10.1007/BF01642482.
[178] Azimov, Yakov I., Dokshitzer, Yuri L., Khoze, Valery A., and Troyan, S.I.
(1986). Hump–backed QCD plateau in hadron spectra. Z. Phys., C31, 213.
doi:10.1007/BF01479529.
[179] Baak, M., Cúth, J., Haller, J., Hoecker, A., Kogler, R., Mönig, K., Schott, M., and
Stelzer, J. (2014). The global electroweak fit at NNLO and prospects for the LHC
and ILC. Eur. Phys. J., C74, 3046. doi:10.1140/epjc/s10052-014-3046-5.
[180] Badger, Simon, Biedermann, Benedikt, Uwer, Peter, and Yundin, Valery
(2013a). NLO QCD corrections √ to multi-jet production at the LHC with
a centre-of-mass energy of s = 8 TeV. Phys. Lett., B718, 965–978.
doi:10.1016/j.physletb.2012.11.029.
[181] Badger, Simon, Biedermann, Benedikt, Uwer, Peter, and Yundin, Valery (2013b).
Numerical evaluation of virtual corrections to multi-jet production in massless
QCD. Comput. Phys. Commun., 184, 1981–1998. doi:10.1016/j.cpc.2013.03.018.
[182] Badger, Simon, Biedermann, Benedikt, Uwer, Peter, and Yundin, Valery (2014a).
Computation of multi-leg amplitudes with NJet. J. Phys. Conf. Ser., 523, 012057.
doi:10.1088/1742-6596/523/1/012057.
[183] Badger, Simon, Biedermann, Benedikt, Uwer, Peter, and Yundin, Valery (2014b).
Next-to-leading order QCD corrections to five jet production at the LHC.
Phys. Rev., D89, 034019. doi:10.1103/PhysRevD.89.034019.
[184] Badger, S.D. and Glover, E.W. Nigel (2004). Two loop splitting functions in
QCD. JHEP , 0407, 040. doi:10.1088/1126-6708/2004/07/040.
[185] Badger, Simon, Guffanti, Alberto, and Yundin, Valery (2014c). Next-to-leading
order QCD corrections to di-photon production in association with up to three jets
at the Large Hadron Collider. JHEP , 1403, 122. doi:10.1007/JHEP03(2014)122.
[186] Badger, Simon, Sattler, Ralf, and Yundin, Valery (2011). One-loop helic-
ity amplitudes for tt̄ production at hadron colliders. Phys. Rev., D83, 074020.
doi:10.1103/PhysRevD.83.074020.
[187] Baglio, J., Bellm, J., Campanario, F., Feigl, B., Frank, J. et al. (2014). Release
Note - VBFNLO 2.7.0 (arXiv:1404.3940).
[188] Bähr, M. et al. (2008). Herwig++ Physics and Manual. Eur. Phys. J., C58,
639–707. doi:10.1140/epjc/s10052-008-0798-9.
[189] Balitsky, I. I. and Lipatov, L. N. (1978). The Pomeranchuk singularity in quan-
tum chromodynamics. Sov. J. Nucl. Phys., 28, 822–829.
[190] Ball, Richard D. et al. (2009). A determination of parton distri-
butions with faithful uncertainty estimation. Nucl. Phys., B809, 1–63.
doi:10.1016/j.nuclphysb.2008.09.037.
[191] Ball, Richard D. et al. (2012). Unbiased global determination of parton distri-
butions and their uncertainties at NNLO, NLO, and at LO. Nucl. Phys., B855,
153–221. doi:10.1016/j.nuclphysb.2011.09.024.
[192] Ball, Richard D. et al. (2013a). Parton distributions with LHC data. Nucl.
Phys., B867, 244–289. doi:10.1016/j.nuclphysb.2012.10.003.
[193] Ball, Richard D. et al. (2013b). Parton distributions with QED corrections. Nucl.
Phys., B877, 290–320. doi:10.1016/j.nuclphysb.2013.10.010.
[194] Ball, Richard D. et al. (2015). Parton distributions for the LHC Run II.
JHEP , 04, 040. doi:10.1007/JHEP04(2015)040.
[195] Ball, Richard D., Bonvini, Marco, Forte, Stefano, Marzani, Simone, and Ri-
dolfi, Giovanni (2013c). Higgs production in gluon fusion beyond NNLO. Nucl.
Phys., B874, 746–772. doi:10.1016/j.nuclphysb.2013.06.012.
[196] Ball, Richard D., Carrazza, Stefano, Del Debbio, Luigi, Forte, Stefano, Gao, Jun
et al. (2013d). Parton distribution benchmarking with LHC data. JHEP , 1304,
125. doi:10.1007/JHEP04(2013)125.
[197] Ballestrero, Alessandro, Maina, Ezio, and Moretti, Stefano (1994). Heavy quarks
and leptons at e+ e− colliders. Nucl. Phys., B415, 265–292. doi:10.1016/0550-
3213(94)90112-0.
[198] Banfi, Andrea, Caola, Fabrizio, Dreyer, Frédéric A., Monni, Pier F., Salam,
Gavin P., Zanderighi, Giulia, and Dulat, Falko (2016). Jet-vetoed Higgs cross
section in gluon fusion at N³LO+NNLL with small-R resummation. JHEP, 04,
049. doi:10.1007/JHEP04(2016)049.
[199] Banfi, Andrea, Monni, Pier Francesco, Salam, Gavin P., and Zanderighi, Giulia
(2012). Higgs and Z-boson production with a jet veto. Phys. Rev. Lett., 109,
202001. doi:10.1103/PhysRevLett.109.202001.
[200] Banfi, Andrea, Salam, Gavin P., and Zanderighi, Giulia (2007). Accurate QCD
predictions for heavy-quark jets at the Tevatron and LHC. JHEP , 07, 026.
doi:10.1088/1126-6708/2007/07/026.
[201] Bardeen, John, Cooper, L. N., and Schrieffer, J. R. (1957a). Microscopic theory
of superconductivity. Phys. Rev., 106, 162. doi:10.1103/PhysRev.106.162.
[202] Bardeen, John, Cooper, L. N., and Schrieffer, J. R. (1957b). Theory of super-
conductivity. Phys. Rev., 108, 1175–1204. doi:10.1103/PhysRev.108.1175.
[203] Bartel, W. et al. (1983). Particle distribution in three jet events produced by
e+ e- annihilation. Z. Phys., C21, 37. doi:10.1007/BF01648774.
[204] Bartel, W. et al. (1986). Experimental studies on multijet production in e+ e−
annihilation at PETRA energies. Z. Phys., C33, 23. doi:10.1007/BF01410449.
[205] Bauer, Christian W. and Lange, Bjorn O. (2009). Scale setting and resummation
of logarithms in pp → V + jets (arXiv:0905.4739).
[206] Baur, U. (2007). Weak boson emission in hadron collider processes.
Phys. Rev., D75, 013005. doi:10.1103/PhysRevD.75.013005.
[207] Becattini, F. and Passaleva, G. (2002). Statistical hadronization model and
transverse momentum spectra of hadrons in high-energy collisions. Eur. Phys.
J., C23, 551–583. doi:10.1007/s100520100869.
[208] Becher, Thomas, Lorentzen, Christian, and Schwartz, Matthew D. (2012). Pre-
cision direct photon and W-boson spectra at high pT and comparison to LHC data.
Phys. Rev., D86, 054026. doi:10.1103/PhysRevD.86.054026.
[209] Beenakker, W., Dittmaier, S., Kramer, M., Plumper, B., Spira, M. et al. (2003).
NLO QCD corrections to tt̄H production in hadron collisions. Nucl. Phys., B653,
151–203. doi:10.1016/S0550-3213(03)00044-0.
[210] Belyaev, Alexander, Christensen, Neil D., and Pukhov, Alexander (2013).
CalcHEP 3.4 for collider physics within and beyond the Standard Model. Comput.
Phys. Commun., 184, 1729–1769. doi:10.1016/j.cpc.2013.01.014.
[211] Bena, Iosif, Bern, Zvi, and Kosower, David A. (2005). Twistor-space recursive
formulation of gauge-theory amplitudes. Phys. Rev., D71, 045008.
[212] Beneke, M., Buchalla, G., Neubert, M., and Sachrajda, Christopher T. (1999).
QCD factorization for B → ππ decays: Strong phases and CP violation in the heavy
quark limit. Phys. Rev. Lett., 83, 1914–1917. doi:10.1103/PhysRevLett.83.1914.
[213] Beneke, M., Buchalla, G., Neubert, M., and Sachrajda, Christopher T. (2000).
QCD factorization for exclusive, nonleptonic B meson decays: General argu-
ments and the case of heavy light final states. Nucl. Phys., B591, 313–418.
doi:10.1016/S0550-3213(00)00559-9.
[214] Bengtsson, Mats and Sjostrand, Torbjorn (1987). A comparative study of co-
herent and non-coherent parton shower evolution. Nucl. Phys., B289, 810–846.
doi:10.1016/0550-3213(87)90407-X.
[215] Bengtsson, Mats and Sjöstrand, Torbjorn (1987). Coherent parton showers ver-
sus matrix elements: Implications of PETRA - PEP data. Phys. Lett., B185, 435.
doi:10.1016/0370-2693(87)91031-8.
[216] Bengtsson, Mats, Sjöstrand, Torbjörn, and van Zijl, Maria (1986). Ini-
tial state radiation effects on W and jet production. Z. Phys., C32, 67.
doi:10.1007/BF01441353.
[217] Benvenuti, A.C. et al. (1989). A High statistics measurement of the proton
structure functions F2 (x, Q2 and R from deep inelastic muon scattering at High
Q2 . Phys. Lett., B223, 485. doi:10.1016/0370-2693(89)91637-7.
[218] Berends, Frits A., Gaemers, K. J. F., and Gastmans, R. (1973). α3 Contribu-
tion to the angular asymmetry in e+ e− → µ+ µ− . Nucl. Phys., B63, 381–397.
doi:10.1016/0550-3213(73)90153-3.
[219] Berends, Frits A. and Giele, W. (1987). The six gluon process as an exam-
ple of Weyl-Van Der Waerden spinor calculus. Nucl. Phys., B294, 700–732.
doi:10.1016/0550-3213(87)90604-3.
[220] Berends, Frits A. and Giele, W.T. (1988). Recursive calculations for processes
with n gluons. Nucl. Phys., B306, 759. doi:10.1016/0550-3213(88)90442-7.
[221] Berends, Frits A., Giele, W.T., and Kuijf, H. (1989a). Exact expressions for
processes involving a vector boson and up to five partons. Nucl. Phys., B321, 39.
doi:10.1016/0550-3213(89)90242-3.
[222] Berends, Frits A., Giele, W. T., and Kuijf, H. (1989b). On six-jet production at
hadron colliders. Phys. Lett., B232, 266–270. doi:10.1016/0370-2693(89)91699-7.
[223] Berends, Frits A., Giele, W. T., Kuijf, H., Kleiss, R., and Stirling, W. James
(1989c). Multi-jet production in W , Z events at pp̄ colliders. Phys. Lett., B224,
237. doi:10.1016/0370-2693(89)91081-2.
[224] Berends, Frits A., Kuijf, H., Tausk, B., and Giele, W.T. (1991). On the
production of a W and jets at hadron colliders. Nucl. Phys., B357, 32–64.
doi:10.1016/0550-3213(91)90458-A.
[225] Berger, C.F., Bern, Z., Dixon, Lance J., Febres Cordero, Fernando, Forde, D.
et al. (2009). Next-to-leading order QCD predictions for W +3-jet distributions at
hadron colliders. Phys. Rev., D80, 074036. doi:10.1103/PhysRevD.80.074036.
[226] Berger, C. F. et al. (2008). Automated implementation of on-shell methods for
one-loop amplitudes. Phys. Rev., D78, 036003. doi:10.1103/PhysRevD.78.036003.
[227] Berger, C. F. et al. (2010). Next-to-leading order QCD predictions
for Z, γ ∗ +3-Jet distributions at the Tevatron. Phys. Rev., D82, 074002.
doi:10.1103/PhysRevD.82.074002.
[228] Berger, Edmond L. and Campbell, John M. (2004). Higgs boson produc-
tion in weak boson fusion at next-to-leading order. Phys. Rev., D70, 073011.
doi:10.1103/PhysRevD.70.073011.
[229] Beringer, J. et al. (2012). Review of Particle Physics (RPP). Phys. Rev., D86,
010001. doi:10.1103/PhysRevD.86.010001.
[230] Bern, Z., De Freitas, A., and Dixon, Lance J. (2001). Two loop ampli-
tudes for gluon fusion into two photons. JHEP , 09, 037. doi:10.1088/1126-
6708/2001/09/037.
[231] Bern, Z., Diana, G., Dixon, L.J., Febres Cordero, F., Hoeche, S. et al. (2012).
Four-jet production at the Large Hadron Collider at next-to-leading order in QCD.
Phys. Rev. Lett., 109, 042001. doi:10.1103/PhysRevLett.109.042001.
[232] Bern, Z., Dixon, L.J., Febres Cordero, F., Hoeche, S., Ita, H. et al. (2013). Next-
to-leading order W + 5-jet production at the LHC. Phys. Rev., D88, 014025.
doi:10.1103/PhysRevD.88.014025.
[233] Bern, Z., Dixon, L.J., Febres Cordero, F., Hoeche, S., Ita, H. et al. (2014).
Ntuples for NLO Events at hadron colliders. Comput. Phys. Commun., 185, 1443–
1460. doi:10.1016/j.cpc.2014.01.011.
[234] Bern, Zvi, Dixon, Lance J., and Kosower, David A. (1998). One-loop ampli-
tudes for e+ e− → four partons. Nucl. Phys., B513, 3–86. doi:10.1016/S0550-
3213(97)00703-7.
[235] Bern, Zvi, Dixon, Lance J., and Schmidt, Carl (2002). Isolating a light Higgs
boson from the diphoton background at the CERN LHC. Phys. Rev., D66, 074018.
doi:10.1103/PhysRevD.66.074018.
[236] Bernreuther, W., Brandenburg, A., Si, Z.G., and Uwer, P. (2004). Top quark
pair production and decay at hadron colliders. Nucl. Phys., B690, 81–137.
doi:10.1016/j.nuclphysb.2004.04.019.
[237] Bernreuther, Werner and Si, Zong-Guo (2010). Distributions and correlations for
top quark pair production and decay at the Tevatron and LHC. Nucl. Phys., B837,
90–121. doi:10.1016/j.nuclphysb.2010.05.001.
[238] Bertone, Valerio, Carrazza, Stefano, and Rojo, Juan (2014). APFEL: A PDF
evolution library with QED corrections. Comput. Phys. Commun., 185, 1647–1668.
doi:10.1016/j.cpc.2014.03.007.
[239] Bethke, Siegfried (2013). World summary of αs (2012). Nucl.
Phys.Proc.Suppl., 234, 229–234. doi:10.1016/j.nuclphysbps.2012.12.020.
[240] Bethke, Siegfried, Dissertori, Gunther, and Salam, Gavin P. (2016).
World Summary of αs (2015). EPJ Web Conf., 120, 07005.
doi:10.1051/epjconf/201612007005.
[241] Bevilacqua, G., Czakon, M., Garzelli, M.V., van Hameren, A., Kardos,
A. et al. (2013). HELAC-NLO. Comput. Phys. Commun., 184, 986–997.
doi:10.1016/j.cpc.2012.10.033.
[242] Bevilacqua, G., Czakon, M., Papadopoulos, C.G., Pittau, R., and Worek,
M. (2009). Assault on the NLO Wishlist: pp → tt̄bb̄. JHEP , 0909, 109.
doi:10.1088/1126-6708/2009/09/109.
[243] Bevilacqua, G., Czakon, M., Papadopoulos, C.G., and Worek, M. (2010).
Dominant QCD backgrounds in Higgs boson analyses at the LHC: A Study
of pp → tt̄ + 2 jets at next-to-leading order. Phys. Rev. Lett., 104, 162002.
doi:10.1103/PhysRevLett.104.162002.
[244] Bevilacqua, G., Czakon, M., Papadopoulos, C.G., and Worek, M. (2011a).
Hadronic top-quark pair production in association with two jets at next-to-leading
order QCD. Phys. Rev., D84, 114017. doi:10.1103/PhysRevD.84.114017.
[245] Bevilacqua, Giuseppe, Czakon, Michal, van Hameren, Andreas, Papadopoulos,
Costas G., and Worek, Malgorzata (2011b). Complete off-shell effects in top quark
pair hadroproduction with leptonic decay at next-to-leading order. JHEP , 1102,
083. doi:10.1007/JHEP02(2011)083.
[246] Bevilacqua, G. and Worek, M. (2012). Constraining BSM Physics at the LHC:
Four top final states with NLO accuracy in perturbative QCD. JHEP , 07, 111.
doi:10.1007/JHEP07(2012)111.
[247] Bigi, Ikaros I.Y., Shifman, Mikhail A., Uraltsev, N.G., and Vainshtein, Arkady I.
(1993). QCD predictions for lepton spectra in inclusive heavy flavor decays.
Phys. Rev. Lett., 71, 496–499. doi:10.1103/PhysRevLett.71.496.
[248] Bigi, Ikaros I.Y., Uraltsev, N.G., and Vainshtein, A.I. (1992). Non-perturbative
corrections to inclusive beauty and charm decays: QCD versus phenomenological
models. Phys. Lett., B293, 430–436. doi:10.1016/0370-2693(92)90908-M.
[249] Binosi, D., Collins, J., Kaufhold, C., and Theussl, L. (2009). Jaxo-
draw: A graphical user interface for drawing feynman diagrams. version 2.0
release notes. Computer Physics Communications, 180(9), 1709 – 1715.
doi:http://dx.doi.org/10.1016/j.cpc.2009.02.020.
[250] Binoth, T. et al. (2010a). A proposal for a standard interface between Monte
Carlo tools and one-loop programs. Comput. Phys. Commun., 181, 1612–1622.
doi:10.1016/j.cpc.2010.05.016.
[251] Binoth, T et al. (2010b). The SM and NLO multileg working group: Summary
report. Proceedings of the Workshop “Physics at TeV Colliders”, Les Houches,
France, 8-26 June, 2009.
[252] Binoth, T., Gleisberg, T., Karg, S., Kauer, N., and Sanguinetti, G. (2010c). NLO
QCD corrections to ZZ+ jet production at hadron colliders. Phys. Lett., B683,
154–159. doi:10.1016/j.physletb.2009.12.013.
[253] Binoth, T., Greiner, N., Guffanti, A., Reuter, J., Guillet, J.-Ph. et al. (2010d).
Next-to-leading order QCD corrections to pp → bb̄bb̄ + X at the LHC: the quark
induced case. Phys. Lett., B685, 293–296. doi:10.1016/j.physletb.2010.02.010.
[254] Binoth, T., Guillet, J.P., Pilon, E., and Werlen, M. (2000). A Full next-to-
leading order study of direct photon pair production in hadronic collisions. Eur.
Phys. J., C16, 311–330. doi:10.1007/s100520050024.
[255] Binoth, T., Guillet, J. Ph., Pilon, E., and Werlen, M. (2001). Beyond leading
order effects in photon pair production at the Tevatron. Phys. Rev., D63, 114016.
doi:10.1103/PhysRevD.63.114016.
[256] Biswas, Sandip, Melnikov, Kirill, and Schulze, Markus (2010). Next-to-leading
order QCD effects and the top quark mass measurements at the LHC. JHEP , 1008,
048. doi:10.1007/JHEP08(2010)048.
[257] Bloch, F. and Nordsieck, A. (1937). Note on the radiation field of the electron.
Phys. Rev., 52, 54–59. doi:10.1103/PhysRev.52.54.
[258] Blok, B., Dokshitzer, Yu., Frankfurt, L., and Strikman, M. (2012). pQCD physics
of multiparton interactions. Eur. Phys. J., C72, 1963. doi:10.1140/epjc/s10052-
012-1963-8.
[259] Blok, B., Dokshitzer, Yu., Frankfurt, L., and Strikman, M. (2011). The
Four jet production at LHC and Tevatron in QCD. Phys. Rev., D83, 071501.
doi:10.1103/PhysRevD.83.071501.
[260] Blok, B., Dokshitzer, Yu., Frankfurt, L., and Strikman, M. (2014). Pertur-
bative QCD correlations in multi-parton collisions. Eur. Phys. J., C74, 2926.
doi:10.1140/epjc/s10052-014-2926-z.
[261] Blok, B., Koyrakh, L., Shifman, Mikhail A., and Vainshtein, A.I. (1994).
Differential distributions in semileptonic decays of the heavy flavors in
QCD. Phys. Rev., D49, 3356. doi:10.1103/PhysRevD.50.3572, 10.1103/Phys-
RevD.49.3356.
[262] Blumlein, J., Riemersma, S., Botje, M., Pascaud, C., Zomer, F. et al. (1996). A
Detailed comparison of NLO QCD evolution codes.
[263] Bolzoni, Paolo, Maltoni, Fabio, Moch, Sven-Olaf, and Zaro, Marco (2012). Vector
boson fusion at NNLO in QCD: SM Higgs and beyond. Phys. Rev., D85, 035002.
doi:10.1103/PhysRevD.85.035002.
[264] Bolzoni, Paolo, Zaro, Marco, Maltoni, Fabio, and Moch, Sven-Olaf (2010). Higgs
production at NNLO in QCD: The VBF channel. Nucl. Phys. Proc. Suppl., 205–
206, 314–319. doi:10.1016/j.nuclphysbps.2010.09.012.
[265] Bonciani, Roberto, Catani, Stefano, Grazzini, Massimiliano, Sargsyan, Hayk,
and Torre, Alessandro (2015). The qT subtraction method for top quark production
at hadron colliders. Eur. Phys. J., C75(12), 581. doi:10.1140/epjc/s10052-015-
3793-y.
[266] Boos, E. et al. (2004). CompHEP 4.4 - automatic computations
from Lagrangians to events. Nucl. Instrum. Meth., A534, 250–259.
doi:10.1016/j.nima.2004.07.096.
[267] Bothmann, Enrico, Ferrarese, Piero, Krauss, Frank, Kuttimalai, Silvan, Schu-
mann, Steffen, and Thompson, Jennifer (2016). Aspects of perturbative
QCD at a 100 TeV future hadron collider. Phys. Rev., D94(3), 034007.
doi:10.1103/PhysRevD.94.034007.
[268] Botje, Michiel et al. (2011). The PDF4LHC Working Group Interim Recom-
mendations (arXiv:1101.0538).
[269] Boughezal, Radja, Campbell, John M., Ellis, R. Keith, Focke, Christfried, Giele,
Walter T., Liu, Xiaohui, and Petriello, Frank (2016a). Z-boson production in
association with a jet at next-to-next-to-leading order in perturbative QCD. Phys.
Rev. Lett., 116(15), 152001. doi:10.1103/PhysRevLett.116.152001.
[270] Boughezal, Radja, Caola, Fabrizio, Melnikov, Kirill, Petriello, Frank, and
Schulze, Markus (2013). Higgs boson production in association with a
jet at next-to-next-to-leading order in perturbative QCD. JHEP , 06, 072.
doi:10.1007/JHEP06(2013)072.
[271] Boughezal, Radja, Caola, Fabrizio, Melnikov, Kirill, Petriello, Frank, and
Schulze, Markus (2015a). Higgs boson production in association with a
jet at next-to-next-to-leading order. Phys. Rev. Lett., 115(8), 082003.
doi:10.1103/PhysRevLett.115.082003.
[272] Boughezal, Radja, Focke, Christfried, Giele, Walter, Liu, Xiaohui, and Petriello,
Frank (2015b). Higgs boson production in association with a jet at NNLO using
jettiness subtraction. Phys. Lett., B748, 5–8. doi:10.1016/j.physletb.2015.06.055.
[273] Boughezal, Radja, Focke, Christfried, and Liu, Xiaohui (2015c). Jet vetoes versus
giant K-factors in the exclusive Z+1-jet cross section. Phys. Rev., D92(9), 094002.
doi:10.1103/PhysRevD.92.094002.
[274] Boughezal, Radja, Focke, Christfried, Liu, Xiaohui, and Petriello, Frank
(2015d). W -boson production in association with a jet at next-to-next-
to-leading order in perturbative QCD. Phys. Rev. Lett., 115(6), 062002.
doi:10.1103/PhysRevLett.115.062002.
[275] Boughezal, Radja, Li, Ye, and Petriello, Frank (2014). Disentangling radiative
corrections using high-mass Drell-Yan at the LHC. Phys. Rev., D89, 034030.
doi:10.1103/PhysRevD.89.034030.
[276] Boughezal, Radja, Liu, Xiaohui, and Petriello, Frank (2016b). A comparison of
NNLO QCD predictions with 7 TeV ATLAS and CMS data for V +jet processes.
Phys. Lett., B760, 6–13. doi:10.1016/j.physletb.2016.06.032.
[277] Boughezal, Radja, Liu, Xiaohui, and Petriello, Frank (2016c). Phenomenol-
ogy of the Z-boson plus jet process at NNLO. Phys. Rev., D94(7), 074015.
doi:10.1103/PhysRevD.94.074015.
[278] Boughezal, Radja, Liu, Xiaohui, and Petriello, Frank (2016d). W-boson plus
jet differential distributions at NNLO in QCD. Phys. Rev., D94(11), 113009.
doi:10.1103/PhysRevD.94.113009.
[279] Bourhis, L., Fontannaz, M., Guillet, J.P., and Werlen, M. (2001). Next-to-
leading order determination of fragmentation functions. Eur. Phys. J., C19, 89–98.
doi:10.1007/s100520100579.
[280] Bourilkov, D, Group, R C, and Whalley, M R (2006). LHAPDF: PDF use from
the Tevatron to the LHC. hep-ph/0605240.
[281] Bowler, M. G. (1981). e+ e− production of heavy quarks in the string model. Z.
Phys., C11, 169. doi:10.1007/BF01574001.
[282] Bozzi, Giuseppe, Catani, Stefano, de Florian, Daniel, and Grazzini, Massimiliano
(2006). Transverse-momentum resummation and the spectrum of the Higgs boson
at the LHC. Nucl. Phys., B737, 73–120. doi:10.1016/j.nuclphysb.2005.12.022.
[283] Braunschweig, W. et al. (1990). Global jet properties at 14-GeV to 44-
GeV center-of-mass energy in e+ e− annihilation. Z. Phys., C47, 187–198.
doi:10.1007/BF01552339.
[284] Bredenstein, A., Denner, A., Dittmaier, S., and Pozzorini, S. (2009). NLO
QCD corrections to pp → tt̄bb̄ + X at the LHC. Phys. Rev. Lett., 103, 012002.
doi:10.1103/PhysRevLett.103.012002.
[285] Bredenstein, A., Denner, A., Dittmaier, S., and Pozzorini, S. (2010). NLO QCD
Corrections to tt̄bb̄ Production at the LHC: 2. full hadronic results. JHEP , 1003,
021. doi:10.1007/JHEP03(2010)021.
[286] Britto, Ruth, Cachazo, Freddy, and Feng, Bo (2005a). New recur-
sion relations for tree amplitudes of gluons. Nucl. Phys., B715, 499–522.
doi:10.1016/j.nuclphysb.2005.02.030.
[287] Britto, Ruth, Cachazo, Freddy, Feng, Bo, and Witten, Edward (2005b). Direct
proof of the tree-level scattering amplitude recursion relation in Yang-Mills theory.
Phys. Rev. Lett., 94, 181602.
[288] Brucherseifer, Mathias, Caola, Fabrizio, and Melnikov, Kirill (2014). On the
NNLO QCD corrections to single-top production at the LHC. Phys. Lett., B736,
58–63. doi:10.1016/j.physletb.2014.06.075.
[289] Buchalla, Gerhard, Buras, Andrzej J., and Lautenbacher, Markus E. (1996).
Weak decays beyond leading logarithms. Rev.Mod.Phys., 68, 1125–1144.
doi:10.1103/RevModPhys.68.1125.
[290] Buckley, Andy et al. (2011). General-purpose event generators for LHC physics.
Phys. Rept., 504, 145–233. doi:10.1016/j.physrep.2011.03.005.
[291] Buckley, Andy, Butterworth, Jonathan, Lonnblad, Leif, Grellscheid,
David, Hoeth, Hendrik, Monk, James, Schulz, Holger, and Siegert, Frank
(2013). Rivet user manual. Comput. Phys. Commun., 184, 2803–2819.
doi:10.1016/j.cpc.2013.05.021.
[292] Budnev, V. M., Ginzburg, I. F., Meledin, G. V., and Serbo, V. G. (1974). The two
photon particle production mechanism. Physical problems. Applications. Equiva-
lent photon approximation. Phys. Rept., 15, 181–281.
[293] Burdman, Gustavo and Donoghue, John F. (1992). Union of chiral and heavy
quark symmetries. Phys. Lett., B280, 287–291. doi:10.1016/0370-2693(92)90068-
F.
[294] Buttar, C., Dittmaier, S., Drollinger, V., Frixione, S., Nikitenko, A. et al. (2006).
Les Houches Physics at TeV colliders 2005, Standard Model and Higgs working
group: Summary report (hep-ph/0604120).
[295] Butterworth, Jon et al. (2016). PDF4LHC recommendations for LHC Run II.
J. Phys., G43, 023001. doi:10.1088/0954-3899/43/2/023001.
[296] Butterworth, J., Dissertori, G., Dittmaier, S., de Florian, D., Glover, N. et al.
(2014). Les Houches 2013: Physics at TeV Colliders: Standard Model Working
Group Report (arXiv:1405.1067).
[297] Butterworth, Jonathan M., Davison, Adam R., Rubin, Mathieu, and Salam,
Gavin P. (2008). Jet substructure as a new Higgs search channel at the LHC.
Phys. Rev. Lett., 100, 242001. doi:10.1103/PhysRevLett.100.242001.
[298] Butterworth, John M., Forshaw, Jeffrey R., and Seymour, Mike H. (1996). Mul-
tiparton interactions in photoproduction at HERA. Z. Phys., C72, 637–646.
[299] Byckling, E. and Kajantie, K. (1969). N-particle phase space in terms of invariant
momentum transfers. Nucl. Phys., B9, 568–576. doi:10.1016/0550-3213(69)90271-
5.
[300] C. Balazs, G. Ladinsky, C.-P. Yuan (Fortran); P. Nadolsky (C++);
CTEQ Collaboration (CTEQ libraries). Resummation program resbos
(http://hep.pa.msu.edu/www/legacy/).
[301] Cacciari, Matteo, Dreyer, Frédéric A., Karlberg, Alexander, Salam, Gavin P., and
Zanderighi, Giulia (2015). Fully differential VBF Higgs production at NNLO. Phys.
Rev. Lett., 115, 082002. doi:10.1103/PhysRevLett.115.082002.
[302] Cacciari, Matteo, Greco, Mario, and Nason, Paolo (1998). The P(T) spec-
trum in heavy flavor hadroproduction. JHEP , 9805, 007. doi:10.1088/1126-
6708/1998/05/007.
[303] Cacciari, Matteo and Salam, Gavin P. (2008). Pileup subtraction using jet areas.
Phys. Lett., B659, 119–126. doi:10.1016/j.physletb.2007.09.077.
[304] Cacciari, Matteo, Salam, Gavin P., and Soyez, Gregory (2008). The Anti-kT jet
clustering algorithm. JHEP , 0804, 063. doi:10.1088/1126-6708/2008/04/063.
[305] Cacciari, Matteo, Salam, Gavin P., and Soyez, Gregory (2012). FastJet user
manual. Eur. Phys. J., C72, 1896. doi:10.1140/epjc/s10052-012-1896-2.
[306] Cachazo, Freddy and Svrček, Peter (2005). Lectures on twistor strings and
perturbative Yang-Mills theory. PoS , RTN2005, 004.
[307] Cachazo, Freddy, Svrcek, Peter, and Witten, Edward (2004). MHV ver-
tices and tree amplitudes in gauge theory. JHEP , 09, 006. doi:10.1088/1126-
6708/2004/09/006.
[308] Cafarella, Alessandro, Papadopoulos, Costas G., and Worek, Malgorzata (2009).
HELAC-PHEGAS: A generator for all parton level processes. Comput. Phys. Com-
mun., 180, 1941–1955. doi:10.1016/j.cpc.2009.04.023.
[309] Campbell, John and Ellis, R. Keith. MCFM – Monte Carlo for FeMtobarn
processes (mcfm.fnal.gov).
[310] Campbell, J.M., Hatakeyama, K., Huston, J., Petriello, F., Andersen, Jeppe R.
et al. (2013). Working Group Report: Quantum Chromodynamics.
[311] Campbell, John M. and Ellis, R. Keith (1999). Update on vector
boson pair production at hadron colliders. Phys. Rev., D60, 113006.
doi:10.1103/PhysRevD.60.113006.
[326] Caravaglios, F., Mangano, Michelangelo L., Moretti, M., and Pittau, R. (1999).
A New approach to multijet calculations in hadron collisions. Nucl. Phys., B539,
215–232. doi:10.1016/S0550-3213(98)00739-1.
[327] Caravaglios, Francesco and Moretti, Mauro (1995). An algorithm to compute
Born scattering amplitudes without Feynman graphs. Phys. Lett., B358, 332–338.
doi:10.1016/0370-2693(95)00971-M.
[328] Caravaglios, Francesco and Moretti, M. (1997). e+ e- into four fermions +
gamma with ALPHA. Z. Phys., C74, 291–296. doi:10.1007/s002880050390.
[329] Carli, Tancredi et al. (2010a). A posteriori inclusion of parton density functions
in NLO QCD final-state calculations at hadron colliders: The APPLGRID Project.
Eur. Phys. J., C, 503–524. doi:10.1140/epjc/s10052-010-1255-0.
[330] Carli, Tancredi, Gehrmann, Thomas, and Höche, Stefan (2010b). Hadronic
final states in deep-inelastic scattering with SHERPA. Eur. Phys. J., C67, 73.
doi:10.1140/epjc/s10052-010-1261-2.
[331] Carrazza, Stefano, Ferrara, Alfio, Palazzo, Daniele, and Rojo, Juan (2015a).
APFEL Web. J. Phys., G42(5), 057001. doi:10.1088/0954-3899/42/5/057001.
[332] Carrazza, Stefano, Forte, Stefano, Kassabov, Zahari, Latorre, Jose Ignacio, and
Rojo, Juan (2015b). An unbiased Hessian representation for Monte Carlo PDFs.
Eur. Phys. J., C75(8), 369. doi:10.1140/epjc/s10052-015-3590-7.
[333] Carrazza, Stefano, Forte, Stefano, Kassabov, Zahari, and Rojo, Juan (2016).
Specialized minimal PDFs for optimized LHC calculations. Eur. Phys. J., C76(4),
205. doi:10.1140/epjc/s10052-016-4042-8.
[334] Carrazza, Stefano, Latorre, José I., Rojo, Juan, and Watt, Graeme (2015c). A
compression algorithm for the combination of PDF sets. Eur. Phys. J., C75, 474.
doi:10.1140/epjc/s10052-015-3703-3.
[335] Carrazza, Stefano and Pires, Joao (2014). Perturbative QCD description
of jet data from LHC Run-I and Tevatron Run-II. JHEP , 1410, 145.
doi:10.1007/JHEP10(2014)145.
[336] Cascioli, F., Gehrmann, T., Grazzini, M., Kallweit, S., Maierhoefer, P., von
Manteuffel, A., Pozzorini, S., Rathlev, D., Tancredi, L., and Weihs, E. (2014a).
ZZ production at hadron colliders in NNLO QCD. Phys. Lett., B735, 311–313.
doi:10.1016/j.physletb.2014.06.056.
[337] Cascioli, F., Hoeche, S., Krauss, F., Maierhoefer, P., Pozzorini, S., and Siegert,
F. (2014b). Precise Higgs-background predictions: merging NLO QCD and squared
quark-loop corrections to four-lepton + 0,1 jet production. JHEP , 01, 046.
doi:10.1007/JHEP01(2014)046.
[338] Cascioli, Fabio, Maierhofer, Philipp, and Pozzorini, Stefano (2012). Scat-
tering amplitudes with open loops. Phys. Rev. Lett., 108, 111601.
doi:10.1103/PhysRevLett.108.111601.
[339] Catani, S. and Ciafaloni, M. (1984). Many-gluon correlations and the quark
form factor in QCD. Nucl. Phys., B236, 61. doi:10.1016/0550-3213(84)90525-X.
[340] Catani, Stefano, Cieri, Leandro, de Florian, Daniel, Ferrera, Giancarlo, and
Grazzini, Massimiliano (2012a). Diphoton production at hadron colliders: a
[400] CMS (2009). Particle-flow event reconstruction in CMS and performance for
jets, taus, and MET (CMS-PAS-PFT-09-001).
[401] CMS (2013a). Projected performance of an upgraded CMS detector at the LHC
and HL-LHC: Contribution to the Snowmass Process (arXiv:1307.7135).
[402] CMS (2013b). Properties of the observed Higgs-like resonance using the diphoton
channel. Technical Report CMS-PAS-HIG-13-016, CERN, Geneva.
[403] CMS (2014a). Measurement of the inclusive top-quark pair + photon production
cross section in the muon + jets channel in pp collisions at 8 TeV. Technical Report
CMS-PAS-TOP-13-011, CERN, Geneva.
[404] CMS (2014b). Study of topological distributions of inclusive three- and four-jet
events at the LHC. Technical Report CMS-PAS-QCD-11-006, CERN, Geneva.
[405] CMS (2015a). Measurement of the double-differential inclusive jet cross section
at √s = 8 TeV. Technical Report CMS-PAS-SMP-14-001, CERN, Geneva.
[406] CMS (2015b). Measurement of top quark pair production in association with a
W or Z boson using event reconstruction techniques. Technical Report CMS-PAS-
TOP-14-021, CERN, Geneva.
[407] Colangelo, G. and Nason, P. (1992). A theoretical study of the c and b
fragmentation function from e+ e− annihilation. Phys. Lett., B285, 167–171.
doi:10.1016/0370-2693(92)91317-3.
[408] LHCb collaboration (2014, Aug). Measurement of the forward W boson cross-
section in pp collisions at √s = 7 TeV. Technical Report arXiv:1408.4354. LHCB-
PAPER-2014-033. CERN-PH-EP-2014-175, CERN, Geneva.
[409] Collins, John C., Soper, Davison E., and Sterman, George F. (1985a). Fac-
torization for short-distance hadron–hadron scattering. Nucl. Phys., B261, 104.
doi:10.1016/0550-3213(85)90565-6.
[410] Collins, John C., Soper, Davison E., and Sterman, George F. (1985b). Transverse
momentum distribution in Drell-Yan pair and W and Z boson production. Nucl.
Phys., B250, 199. doi:10.1016/0550-3213(85)90479-1.
[411] Collins, P.D.B. and Spiller, T.P. (1985). The fragmentation of heavy quarks. J.
Phys., G11, 1289. doi:10.1088/0305-4616/11/12/006.
[412] Cooper, Leon N. (1956). Bound electron pairs in a degenerate Fermi gas. Phys.
Rev., 104, 1189–1190. doi:10.1103/PhysRev.104.1189.
[413] Cooper-Sarkar, A.M. (2011). PDF fits at HERA. PoS , EPS-HEP2011, 320.
[414] Corcella, G. et al. (2002). HERWIG 6.5 Release Note.
[415] Corcella, G., Knowles, I. G., Marchesini, G., Moretti, S., Odagiri, K., Richardson,
P., Seymour, M. H., and Webber, B. R. (2001). HERWIG 6: An Event generator
for hadron emission reactions with interfering gluons (including supersymmetric
processes). JHEP , 01, 010. doi:10.1088/1126-6708/2001/01/010.
[416] Corcella, G. and Seymour, M. H. (1998). Matrix element corrections to par-
ton shower simulations of heavy quark decay. Phys. Lett., B442, 417–426.
doi:10.1016/S0370-2693(98)01251-9.
[417] Corcella, Gennaro and Seymour, Michael H. (2000). Initial state radiation in
simulations of vector boson production at hadron colliders. Nucl. Phys., B565,
227–244. doi:10.1016/S0550-3213(99)00672-0.
[418] Corke, Richard and Sjöstrand, Torbjörn (2009). Multiparton interactions and
rescattering. JHEP , 01, 035. doi:10.1007/JHEP01(2010)035.
[419] Corke, Richard and Sjostrand, Torbjorn (2011). Multiparton Interactions with
an x-dependent Proton Size. JHEP , 1105, 009. doi:10.1007/JHEP05(2011)009.
[420] Cullen, G., van Deurzen, H., Greiner, N., Luisoni, G., Mastrolia, P., Mirabella,
E., Ossola, G., Peraro, T., and Tramontano, F. (2013). Next-to-Leading-Order
QCD corrections to Higgs boson production plus three jets in gluon fusion. Phys.
Rev. Lett., 111(13), 131801. doi:10.1103/PhysRevLett.111.131801.
[421] Curci, G., Furmanski, W., and Petronzio, R. (1980). Evolution of parton
densities beyond leading order: The non-singlet case. Nucl. Phys., B175, 27.
doi:10.1016/0550-3213(80)90003-6.
[422] Currie, J, Glover, E. W. N., and Pires, J (2017). NNLO QCD predictions for
single jet inclusive production at the LHC. Phys. Rev. Lett., 118(7), 072002.
doi:10.1103/PhysRevLett.118.072002.
[423] Curtin, David, Jaiswal, Prerit, and Meade, Patrick (2013). Charginos hiding in
plain sight. Phys. Rev., D87(3), 031701. doi:10.1103/PhysRevD.87.031701.
[424] Curtin, David, Meade, Patrick, and Tien, Pin-Ju (2014). Natural SUSY in plain
sight. Phys. Rev., D90(11), 115012. doi:10.1103/PhysRevD.90.115012.
[425] Czakon, M. (2011). Double-real radiation in hadronic top quark pair pro-
duction as a proof of a certain concept. Nucl. Phys., B849, 250–295.
doi:10.1016/j.nuclphysb.2011.03.020.
[426] Czakon, Michal, Fiedler, Paul, Heymes, David, and Mitov, Alexander (2016a).
NNLO QCD predictions for fully-differential top-quark pair production at the
Tevatron. JHEP , 05, 034. doi:10.1007/JHEP05(2016)034.
[427] Czakon, Michal, Fiedler, Paul, and Mitov, Alexander (2013). The total
top quark pair production cross-section at hadron colliders through O(α_S^4).
Phys. Rev. Lett., 110, 252004. doi:10.1103/PhysRevLett.110.252004.
[428] Czakon, Michal, Fiedler, Paul, and Mitov, Alexander (2015). Resolving
the Tevatron top quark forward-backward asymmetry puzzle: Fully differen-
tial next-to-next-to-leading-order calculation. Phys. Rev. Lett., 115(5), 052001.
doi:10.1103/PhysRevLett.115.052001.
[429] Czakon, Michal, Heymes, David, and Mitov, Alexander (2016b). Dynamical
scales for multi-TeV top-pair production at the LHC (arXiv:1606.03350).
[430] Daleo, A., Gehrmann, T., and Maitre, D. (2007). Antenna subtraction with
hadronic initial states. JHEP , 04, 016. doi:10.1088/1126-6708/2007/04/016.
[431] Dasgupta, Mrinal, Fregoso, Alessandro, Marzani, Simone, and Salam, Gavin P.
(2013). Towards an understanding of jet substructure. JHEP , 1309, 029.
doi:10.1007/JHEP09(2013)029.
[432] Dasgupta, Mrinal, Magnea, Lorenzo, and Salam, Gavin P. (2008). Non-
perturbative QCD effects in jets at hadron colliders. JHEP , 02, 055.
doi:10.1088/1126-6708/2008/02/055.
[433] Dashen, Roger F. (1969). Chiral SU(3) ⊗ SU(3) as a symmetry of the strong
interactions. Phys. Rev., 183, 1245–1260. doi:10.1103/PhysRev.183.1245.
[450] Del Duca, Vittorio (1993). Parke-Taylor amplitudes in the multi-Regge kine-
matics. Phys. Rev., D48, 5133–5139. doi:10.1103/PhysRevD.48.5133.
[451] Del Duca, Vittorio (1995). An introduction to the perturbative QCD pomeron
and to jet physics at large rapidities (hep-ph/9503226).
[452] Del Duca, Vittorio, Dixon, Lance J., and Maltoni, Fabio (2000). New color
decompositions for gauge amplitudes at tree and loop level. Nucl. Phys., B571,
51–70. doi:10.1016/S0550-3213(99)00809-3.
[453] del Duca, Vittorio, Frizzo, Alberto, and Maltoni, Fabio (2000). Factorization
of tree QCD amplitudes in the high-energy limit and in the collinear limit. Nucl.
Phys., B568, 211–262.
[454] Del Duca, V., Kilgore, W., Oleari, C., Schmidt, C., and Zeppenfeld, D. (2001).
Gluon fusion contributions to H + 2 jet production. Nucl. Phys., B616, 367–399.
doi:10.1016/S0550-3213(01)00446-1.
[455] Del Duca, Vittorio, Kilgore, William B., and Maltoni, Fabio (2000). Multi-
photon amplitudes for next-to-leading order QCD. Nucl. Phys., B566, 252–274.
doi:10.1016/S0550-3213(99)00663-X.
[456] Denner, Ansgar and Dittmaier, S. (2006). Reduction schemes for one-loop tensor
integrals. Nucl. Phys., B734, 62–115. doi:10.1016/j.nuclphysb.2005.11.007.
[457] Denner, Ansgar, Dittmaier, Stefan, and Hofer, Lars (2014). COLLIER - A
fortran-library for one-loop integrals. PoS , LL2014, 071.
[458] Denner, Ansgar, Dittmaier, Stefan, Kallweit, Stefan, and Muck, Alexander
(2012). Electroweak corrections to Higgs-strahlung off W/Z bosons at the Tevatron
and the LHC with HAWK. JHEP , 1203, 075. doi:10.1007/JHEP03(2012)075.
[459] Denner, A., Dittmaier, S., Kallweit, S., and Pozzorini, S. (2011a). NLO QCD
corrections to W+W−bb̄ production at hadron colliders. Phys. Rev. Lett., 106,
052001. doi:10.1103/PhysRevLett.106.052001.
[460] Denner, A., Heinemeyer, S., Puljak, I., Rebuzzi, D., and Spira, M. (2011b). Stan-
dard Model Higgs-Boson branching ratios with uncertainties. Eur. Phys. J., C71,
1753. doi:10.1140/epjc/s10052-011-1753-8.
[461] Denner, Ansgar and Pozzorini, Stefano (2001). One loop leading logarithms
in electroweak radiative corrections. 1. Results. Eur. Phys. J., C18, 461–480.
doi:10.1007/s100520100551.
[462] d’Enterria, David and Rojo, Juan (2012). Quantitative constraints on the gluon
distribution function in the proton from collider isolated-photon data. Nucl.
Phys., B860, 311–338. doi:10.1016/j.nuclphysb.2012.03.003.
[463] Dicus, Duane A. and Willenbrock, Scott S.D. (1988). Photon pair pro-
duction and the intermediate mass Higgs boson. Phys. Rev., D37, 1801.
doi:10.1103/PhysRevD.37.1801.
[464] Diehl, Markus, Ostermeier, Daniel, and Schafer, Andreas (2012). Ele-
ments of a theory for multi-parton interactions in QCD. JHEP , 1203, 089.
doi:10.1007/JHEP03(2012)089.
[465] Diehl, Markus and Schafer, Andreas (2011). Theoretical considera-
tions on multi-parton interactions in QCD. Phys. Lett., B698, 389–402.
doi:10.1016/j.physletb.2011.03.024.
[466] Dinsdale, Michael, Ternick, Marko, and Weinzierl, Stefan (2007). Par-
ton showers from the dipole formalism. Phys. Rev., D76, 094003.
doi:10.1103/PhysRevD.76.094003.
[467] Dissertori, G., Knowles, I.G., and Schmelling, M. (2003). Quantum Chromody-
namics: High energy experiments and theory.
[468] Dittmaier, S. et al. (2011). Handbook of LHC Higgs Cross Sections: 1. Inclusive
Observables (arXiv:1101.0593).
[469] Dittmaier, Stefan, Huss, Alexander, and Speckner, Christian (2012). Weak
radiative corrections to dijet production at hadron colliders. JHEP , 11, 095.
doi:10.1007/JHEP11(2012)095.
[470] Dittmar, M., Forte, S., Glazov, A., Moch, S., Altarelli, G. et al. (2009). Parton
Distributions (arXiv:0901.2504).
[471] Dixon, Lance J., Kunszt, Z., and Signer, A. (1998). Helicity amplitudes for O(α_s)
production of W+W−, W±Z, ZZ, W±γ, or Zγ pairs at hadron colliders. Nucl.
Phys., B531, 3–23. doi:10.1016/S0550-3213(98)00421-0.
[472] Dixon, Lance J., Kunszt, Z., and Signer, A. (1999). Vector boson pair production
in hadronic collisions at O(α_s): Lepton correlations and anomalous couplings. Phys.
Rev., D60, 114037. doi:10.1103/PhysRevD.60.114037.
[473] Dixon, Lance J. and Li, Ye (2013). Bounding the Higgs boson width through in-
terferometry. Phys. Rev. Lett., 111, 111802. doi:10.1103/PhysRevLett.111.111802.
[474] Djouadi, A., Spira, M., and Zerwas, P.M. (1991). Production of Higgs bosons in
proton colliders: QCD corrections. Phys. Lett., B264, 440–446. doi:10.1016/0370-
2693(91)90375-Z.
[475] Dobrescu, Bogdan A. and Lykken, Joseph D. (2013). Coupling spans of the
Higgs-like boson. JHEP , 1302, 073. doi:10.1007/JHEP02(2013)073.
[476] Dokshitzer, Yuri L., Diakonov, Dmitri, and Troian, S.I. (1978). On the trans-
verse momentum distribution of massive lepton pairs. Phys. Lett., B79, 269–272.
doi:10.1016/0370-2693(78)90240-X.
[477] Dokshitzer, Yuri L., Khoze, Valery A., Mueller, Alfred H., and Troyan, S. I.
(1991a). Basics of perturbative QCD. Gif-sur-Yvette, France: Ed. Frontieres.
[478] Dokshitzer, Yuri L., Khoze, Valery A., and Sjöstrand, T. (1992). Rapidity gaps in
Higgs production. Phys. Lett., B274, 116–121. doi:10.1016/0370-2693(92)90312-R.
[479] Dokshitzer, Yuri L., Khoze, Valery A., and Troian, S.I. (1991b). On specific QCD
properties of heavy quark fragmentation (’dead cone’). J. Phys., G17, 1602–1604.
doi:10.1088/0954-3899/17/10/023.
[480] Dokshitzer, Yuri L., Leder, G. D., Moretti, S., and Webber, B. R. (1997). Better
jet clustering algorithms. JHEP , 08, 001. doi:10.1088/1126-6708/1997/08/001.
[481] Dokshitzer, Yuri L., Marchesini, G., and Webber, B. R. (1996). Dispersive ap-
proach to power-behaved contributions in QCD hard processes. Nucl. Phys., B469,
93–142. doi:10.1016/0550-3213(96)00155-1.
[482] Dokshitzer, Yuri L., Troian, S.I., and Khoze, Valery A. (1987). Collective QCD
effects in the structure of final multi–hadron states. (in Russian). Sov. J. Nucl.
Phys., 46, 712–719.
[498] Ellis, John R., Gaillard, Mary K., and Ross, Graham G. (1976). Search for gluons
in e+ e− annihilation. Nucl. Phys., B111, 253. doi:10.1016/0550-3213(76)90542-3.
[499] Ellis, R.Keith and Zanderighi, Giulia (2008). Scalar one-loop integrals for QCD.
JHEP , 0802, 002. doi:10.1088/1126-6708/2008/02/002.
[500] Ellis, R. Keith, Hinchliffe, I., Soldate, M., and van der Bij, J.J. (1988). Higgs
decay to tau+ tau-: A Possible signature of intermediate mass Higgs bosons at the
SSC. Nucl. Phys., B297, 221. doi:10.1016/0550-3213(88)90019-3.
[501] Ellis, R. Keith, Kunszt, Zoltan, Melnikov, Kirill, and Zanderighi, Giulia (2012).
One-loop calculations in quantum field theory: from Feynman diagrams to unitarity
cuts. Phys.Rept., 518, 141–250. doi:10.1016/j.physrep.2012.01.008.
[502] Ellis, R. Keith, Martinelli, G., and Petronzio, R. (1983). Lepton-pair production
at large transverse momentum in second order QCD. Nucl. Phys., B211, 106.
doi:10.1016/0550-3213(83)90188-8.
[503] Ellis, R. Keith and Sexton, J.C. (1986). QCD radiative corrections to parton
parton scattering. Nucl. Phys., B269, 445. doi:10.1016/0550-3213(86)90232-4.
[504] Ellis, R. Keith, Stirling, W. James, and Webber, Bryan R. (1996). QCD and
collider physics (1 edn). Volume 8. Cambridge Monogr. Part. Phys. Nucl. Phys.
Cosmol.
[505] Ellis, R. Keith and Veseli, Sinisa (1998). W and Z transverse momen-
tum distributions: Resummation in qT space. Nucl. Phys., B511, 649–669.
doi:10.1016/S0550-3213(97)00655-X.
[506] Ellis, Stephen. private communication.
[507] Ellis, S.D., Huston, J., and Tonnesmann, M. (2001). On building better cone jet
algorithms. eConf , C010630, P513.
[508] Ellis, S. D., Huston, J., Hatakeyama, K., Loch, P., and Tonnesmann, M.
(2008). Jets in hadron-hadron collisions. Prog. Part. Nucl. Phys., 60, 484–551.
doi:10.1016/j.ppnp.2007.12.002.
[509] Ellis, Stephen D., Kunszt, Zoltan, and Soper, Davison E. (1990). The one-jet
inclusive cross section at order α_s^3, quarks and gluons. Phys. Rev. Lett., 64, 2121.
doi:10.1103/PhysRevLett.64.2121.
[510] Ellis, Stephen D., Kunszt, Zoltan, and Soper, Davison E. (1992). Jets at
hadron colliders at order α_s^3: A look inside. Phys. Rev. Lett., 69, 3615–3618.
doi:10.1103/PhysRevLett.69.3615.
[511] Ellis, Stephen D. and Soper, Davison E. (1993). Successive combina-
tion jet algorithm for hadron collisions. Phys. Rev., D48, 3160–3166.
doi:10.1103/PhysRevD.48.3160.
[512] Ellis, Stephen D., Vermilion, Christopher K., and Walsh, Jonathan R.
(2009). Techniques for improved heavy particle searches with jet substructure.
Phys. Rev., D80, 051501. doi:10.1103/PhysRevD.80.051501.
[513] Ellis, Stephen D., Vermilion, Christopher K., and Walsh, Jonathan R. (2010).
Recombination algorithms and jet substructure: Pruning as a tool for heavy particle
searches. Phys. Rev., D81, 094023. doi:10.1103/PhysRevD.81.094023.
[514] Engel, R. (1995). Photoproduction within the two component dual par-
ton model. 1. Amplitudes and cross-sections. Z. Phys., C66, 203–214.
doi:10.1007/BF01496594.
[515] Englert, C., Freitas, A., Mühlleitner, M. M., Plehn, T., Rauch, M., Spira, M., and
Walz, K. (2014). Precision measurements of Higgs couplings: Implications for new
physics scales. J. Phys., G41, 113001. doi:10.1088/0954-3899/41/11/113001.
[516] Englert, F. and Brout, R. (1964). Broken symmetry and the mass of gauge
vector mesons. Phys. Rev. Lett., 13, 321–323. doi:10.1103/PhysRevLett.13.321.
[517] Fadin, Victor S., Kuraev, E.A., and Lipatov, L.N. (1975). On the Pomer-
anchuk singularity in asymptotically free theories. Phys. Lett., B60, 50–52.
doi:10.1016/0370-2693(75)90524-9.
[518] Fadin, Victor S. and Lipatov, L. N. (1998). BFKL pomeron in the next-to-leading
approximation. Phys. Lett., B429, 127–134. doi:10.1016/S0370-2693(98)00473-0.
[519] Falk, Adam F., Ligeti, Zoltan, Neubert, Matthias, and Nir, Yosef (1994). Heavy
quark expansion for the inclusive decay B̄ → τ ν̄ + X. Phys. Lett., B326, 145–153.
doi:10.1016/0370-2693(94)91206-8.
[520] Febres Cordero, F., Reina, L., and Wackeroth, D. (2006). NLO QCD corrections
to W boson production with a massive b-quark jet pair at the Tevatron pp̄ collider.
Phys. Rev., D74, 034007. doi:10.1103/PhysRevD.74.034007.
[521] Febres Cordero, Fernando, Reina, L., and Wackeroth, D. (2009). W - and Z-
boson production with a massive bottom-quark pair at the Large Hadron Collider.
Phys. Rev., D80, 034015. doi:10.1103/PhysRevD.80.034015.
[522] Feigl, Bastian, Rzehak, Heidi, and Zeppenfeld, Dieter (2012). New Physics
Backgrounds to the H→WW Search at the LHC? Phys. Lett., B717, 390–395.
doi:10.1016/j.physletb.2012.09.033.
[523] Fermi, E. (1924). On the theory of the impact between atoms and electrically
charged particles. Z. Phys., 29, 315–327. doi:10.1007/BF03184853.
[524] Ferrera, Giancarlo, Grazzini, Massimiliano, and Tramontano, Francesco (2011).
Associated W H production at hadron colliders: a fully exclusive QCD calculation
at NNLO. Phys. Rev. Lett., 107, 152003. doi:10.1103/PhysRevLett.107.152003.
[525] Field, Richard D. (1989). Applications of perturbative QCD. Addison-Wesley,
Redwood City, USA. Frontiers in physics, 77.
[526] Field, Rick D. (2001). The Underlying event in hard scattering processes.
eConf , C010630, P501.
[527] Field, R. D. and Feynman, R. P. (1978). A parametrization of the properties of
quark jets. Nucl. Phys., B136, 1. doi:10.1016/0550-3213(78)90015-9.
[528] Field, Richard D. and Wolfram, Stephen (1983). A QCD Model for e+ e− Anni-
hilation. Nucl. Phys., B213, 65. doi:10.1016/0550-3213(83)90175-X.
[529] Finkemeier, Markus and Mirkes, Erwin (1996). The scalar contribution to τ →
Kπντ . Z. Phys., C72, 619–626. doi:10.1007/s002880050284.
[530] Foldy, Leslie L. and Peierls, Ronald F. (1963). Isotopic spin of exchanged sys-
tems. Phys. Rev., 130, 1585–1589. doi:10.1103/PhysRev.130.1585.
[531] Forte, Stefano, Isgrò, Andrea, and Vita, Gherardo (2014). Do we need N3LO Par-
ton Distributions? Phys. Lett., B731, 136–140. doi:10.1016/j.physletb.2014.02.027.
[532] Forte, Stefano, Laenen, Eric, Nason, Paolo, and Rojo, Juan (2010).
Heavy quarks in deep-inelastic scattering. Nucl. Phys., B834, 116–162.
doi:10.1016/j.nuclphysb.2010.03.014.
[533] Fox, Geoffrey C. and Wolfram, Stephen (1980). A model for parton showers in
QCD. Nucl. Phys., B168, 285. doi:10.1016/0550-3213(80)90111-X.
[534] Frederix, Rikkert (2014). Top quark induced backgrounds to Higgs produc-
tion in the WW(∗) → llνν decay channel at next-to-leading-order in QCD.
Phys. Rev. Lett., 112(8), 082002. doi:10.1103/PhysRevLett.112.082002.
[535] Frederix, Rikkert and Frixione, Stefano (2012). Merging meets matching in
MC@NLO. JHEP , 1212, 061. doi:10.1007/JHEP12(2012)061.
[536] Frederix, Rikkert, Frixione, Stefano, Maltoni, Fabio, and Stelzer, Tim (2009).
Automation of next–to–leading order computations in QCD: The FKS subtraction.
JHEP , 0910, 003. doi:10.1088/1126-6708/2009/10/003.
[537] Frederix, Rikkert, Gehrmann, Thomas, and Greiner, Nicolas (2008). Automation
of the dipole subtraction method in MadGraph/MadEvent. JHEP , 0809, 122.
doi:10.1088/1126-6708/2008/09/122.
[538] Frederix, R., Gehrmann, T., and Greiner, N. (2010). Integrated
dipoles with MadDipole in the MadGraph framework. JHEP , 06, 086.
doi:10.1007/JHEP06(2010)086.
[539] Frixione, S. (1997). A general approach to jet cross sections in QCD. Nucl.
Phys., B507, 295–314. doi:10.1016/S0550-3213(97)00574-9.
[540] Frixione, Stefano (1998). Isolated photons in perturbative QCD. Phys.
Lett., B429, 369–374. doi:10.1016/S0370-2693(98)00454-7.
[541] Frixione, S., Hirschi, V., Pagani, D., Shao, H. S., and Zaro, M. (2015). Elec-
troweak and QCD corrections to top-pair hadroproduction in association with
heavy bosons. JHEP , 06, 184. doi:10.1007/JHEP06(2015)184.
[542] Frixione, S., Kunszt, Z., and Signer, A. (1996). Three-jet cross-sections to next–
to–leading order. Nucl. Phys., B467, 399–442. doi:10.1016/0550-3213(96)00110-1.
[543] Frixione, Stefano, Nason, Paolo, and Oleari, Carlo (2007). Matching NLO QCD
computations with parton shower simulations: the POWHEG method. JHEP , 11,
070. doi:10.1088/1126-6708/2007/11/070.
[544] Frixione, Stefano, Nason, Paolo, and Webber, Bryan R. (2003). Matching
NLO QCD and parton showers in heavy flavor production. JHEP , 08, 007.
doi:10.1088/1126-6708/2003/08/007.
[545] Frixione, Stefano and Ridolfi, Giovanni (1997). Jet photoproduction at HERA.
Nucl. Phys., B507, 315–333. doi:10.1016/S0550-3213(97)00575-0.
[546] Frixione, Stefano and Webber, Bryan R. (2002). Matching NLO QCD com-
putations and parton shower simulations. JHEP , 06, 029. doi:10.1088/1126-
6708/2002/06/029.
[547] Froissart, Marcel (1961). Asymptotic behavior and subtractions in the Mandel-
stam representation. Phys. Rev., 123, 1053–1057. doi:10.1103/PhysRev.123.1053.
[548] Froissart, M. and Omnes, R. (1965). Introduction to the theory of strong inter-
actions. pp. 89–186.
[549] Furmanski, W. and Petronzio, R. (1980). Singlet parton densities beyond leading
order. Phys. Lett., B97, 437. doi:10.1016/0370-2693(80)90636-X.
[550] Furmanski, W. and Petronzio, R. (1982). Lepton - Hadron Processes Be-
yond Leading Order in Quantum Chromodynamics. Z. Phys., C11, 293.
doi:10.1007/BF01578280.
[551] Gao, Jun, Guzzi, Marco, Huston, Joey, Lai, Hung-Liang, Li, Zhao et al.
(2014). The CT10 NNLO Global Analysis of QCD. Phys. Rev., D89, 033009.
doi:10.1103/PhysRevD.89.033009.
[552] Gao, Jun and Nadolsky, Pavel (2014). A meta-analysis of parton distribution
functions. JHEP , 07, 035. doi:10.1007/JHEP07(2014)035.
[553] Gasser, J. and Leutwyler, H. (1983). On the low-energy structure of QCD. Phys.
Lett., B125, 321. doi:10.1016/0370-2693(83)91293-5.
[554] Gasser, J. and Leutwyler, H. (1984). Chiral perturbation theory to one loop.
Ann. Phys., 158, 142. doi:10.1016/0003-4916(84)90242-2.
[555] Gaunt, Jonathan, Stahlhofen, Maximilian, Tackmann, Frank J., and Walsh,
Jonathan R. (2015). N-jettiness subtractions for NNLO QCD calculations.
JHEP , 09, 058. doi:10.1007/JHEP09(2015)058.
[556] Gavin, Ryan, Li, Ye, Petriello, Frank, and Quackenbush, Seth (2011). FEWZ
2.0: A code for hadronic Z production at next-to-next-to-leading order. Comput.
Phys. Commun., 182, 2388–2403. doi:10.1016/j.cpc.2011.06.008.
[557] Gehrmann, T., Grazzini, M., Kallweit, S., Maierhöfer, P., von Manteuffel, A. et al.
(2014). W+W− production at hadron colliders in next-to-next-to-leading order
QCD. Phys. Rev. Lett., 113(21), 212001. doi:10.1103/PhysRevLett.113.212001.
[558] Gehrmann, Thomas, Hoche, Stefan, Krauss, Frank, Schonherr, Marek, and
Siegert, Frank (2013). NLO QCD matrix elements + parton showers in e+ e− →
hadrons. JHEP , 01, 144. doi:10.1007/JHEP01(2013)144.
[559] Gehrmann, T. and Remiddi, E. (2000). Differential equations for two loop four
point functions. Nucl. Phys., B580, 485–518. doi:10.1016/S0550-3213(00)00223-6.
[560] Gehrmann, Thomas, von Manteuffel, Andreas, and Tancredi, Lorenzo (2015).
The two-loop helicity amplitudes for qq′ → V1V2 → 4 leptons. JHEP , 09, 128.
doi:10.1007/JHEP09(2015)128.
[561] Gehrmann-De Ridder, A., Gehrmann, T., and Glover, E.W. Nigel (2005a).
Gluon-gluon antenna functions from Higgs boson decay. Phys. Lett., B612, 49–60.
doi:10.1016/j.physletb.2005.03.003.
[562] Gehrmann-De Ridder, A., Gehrmann, T., and Glover, E.W. Nigel (2005b).
Quark-gluon antenna functions from neutralino decay. Phys. Lett., B612, 36–48.
doi:10.1016/j.physletb.2005.02.039.
[563] Gehrmann-De Ridder, A., Gehrmann, T., and Glover, E. W. Nigel (2005c). An-
tenna subtraction at NNLO. JHEP , 09, 056. doi:10.1088/1126-6708/2005/09/056.
[564] Gehrmann-De Ridder, A., Gehrmann, T., Glover, E. W. N., Huss, A., and
Morgan, T. A. (2016a). Precise QCD predictions for the production of a Z
boson in association with a hadronic jet. Phys. Rev. Lett., 117(2), 022001.
doi:10.1103/PhysRevLett.117.022001.
[565] Gehrmann-De Ridder, Aude, Gehrmann, T., Glover, E. W. N., Huss, A., and
Morgan, T. A. (2016b). The NNLO QCD corrections to Z boson production at
large transverse momentum. JHEP , 07, 133. doi:10.1007/JHEP07(2016)133.
[566] Gehrmann-De Ridder, Aude, Gehrmann, Thomas, Glover, E. W. N., and
Pires, Joao (2013). Second order QCD corrections to jet production at
hadron colliders: the all-gluon contribution. Phys. Rev. Lett., 110(16), 162003.
doi:10.1103/PhysRevLett.110.162003.
[567] Georgi, Howard (1990). An effective field theory for heavy quarks at low energies.
Phys. Lett., B240, 447–450. doi:10.1016/0370-2693(90)91128-X.
[568] Georgi, Howard (1991). Comment on heavy baryon weak form-factors. Nucl.
Phys., B348, 293–296. doi:10.1016/0550-3213(91)90519-4.
[569] Georgi, Howard, Grinstein, Benjamin, and Wise, Mark B. (1990). Λb semilep-
tonic decay form-factors for m_c not equal to infinity. Phys. Lett., B252, 456–460.
doi:10.1016/0370-2693(90)90569-R.
[570] Gerwick, Erik, Plehn, Tilman, and Schumann, Steffen (2012a). Understand-
ing jet scaling and jet vetos in Higgs searches. Phys. Rev. Lett., 108, 032003.
doi:10.1103/PhysRevLett.108.032003.
[571] Gerwick, Erik, Plehn, Tilman, Schumann, Steffen, and Schichtel, Peter (2012b).
Scaling patterns for QCD jets. JHEP , 1210, 162. doi:10.1007/JHEP10(2012)162.
[572] Giele, W., Glover, E.W. Nigel, Hinchliffe, I., Huston, J., Laenen, Eric et al.
(2002). The QCD / SM working group: Summary report (hep-ph/0204316). pp.
275–426.
[573] Giele, W.T., Glover, E.W. Nigel, and Kosower, David A. (1993). Higher order
corrections to jet cross-sections in hadron colliders. Nucl. Phys., B403, 633–670.
doi:10.1016/0550-3213(93)90365-V.
[574] Giele, W.T., Glover, E.W. Nigel, and Kosower, David A. (1994). The same
side / opposite side two jet ratio. Phys. Lett., B339, 181–186. doi:10.1016/0370-
2693(94)91152-5.
[575] Giele, W.T. and Zanderighi, G. (2008). On the numerical evaluation of
one–loop amplitudes: The gluonic case. JHEP , 0806, 038. doi:10.1088/1126-
6708/2008/06/038.
[576] Giele, W. T. and Glover, E. W. Nigel (1992). Higher-order corrections
to jet cross sections in e+ e− annihilation. Phys. Rev., D46, 1980–2010.
doi:10.1103/PhysRevD.46.1980.
[577] Giele, Walter T., Kosower, David A., and Skands, Peter Z. (2008).
A simple shower and matching algorithm. Phys. Rev., D78, 014026.
doi:10.1103/PhysRevD.78.014026.
[578] Gieseke, Stefan (2005). Uncertainties of Sudakov form-factors. JHEP , 01, 058.
doi:10.1088/1126-6708/2005/01/058.
[579] Gieseke, Stefan, Stephens, P., and Webber, Bryan (2003). New formalism for
QCD parton showers. JHEP , 12, 045. doi:10.1088/1126-6708/2003/12/045.
[580] Glashow, S.L. (1961). Partial Symmetries of Weak Interactions. Nucl. Phys., 22,
579–588. doi:10.1016/0029-5582(61)90469-2.
[581] Glashow, S.L., Iliopoulos, J., and Maiani, L. (1970). Weak in-
teractions with lepton-hadron symmetry. Phys. Rev., D2, 1285–1292.
doi:10.1103/PhysRevD.2.1285.
[582] Gleisberg, Tanju and Höche, Stefan (2008). Comix, a new matrix element gen-
erator. JHEP , 12, 039. doi:10.1088/1126-6708/2008/12/039.
[583] Gleisberg, Tanju, Hoeche, Stefan, Krauss, Frank, Schalicke, Andreas, Schumann,
Steffen, and Winter, Jan-Christopher (2004). SHERPA 1. alpha: A Proof of concept
version. JHEP , 02, 056. doi:10.1088/1126-6708/2004/02/056.
[584] Gleisberg, Tanju and Krauss, Frank (2008). Automating dipole subtraction for
QCD NLO calculations. Eur. Phys. J., C53, 501–523. doi:10.1140/epjc/s10052-
007-0495-0.
[585] Glover, E.W. Nigel (2003). Progress in NNLO calculations for scattering pro-
cesses. Nucl. Phys. Proc. Suppl., 116, 3–7. doi:10.1016/S0920-5632(03)80133-0.
[586] Glover, E. W. Nigel and Morgan, A. G. (1994). Measuring the photon fragmen-
tation function at LEP. Z. Phys., C62, 311–322. doi:10.1007/BF01560245.
[587] Gluck, M., Pisano, Cristian, and Reya, E. (2002). The Polarized and unpolarized
photon content of the nucleon. Phys. Lett., B540, 75–80. doi:10.1016/S0370-
2693(02)02125-1.
[588] Goebel, C., Halzen, F., and Scott, D.M. (1980). Double Drell-Yan annihilations
in hadron collisions: novel tests of the constituent picture. Phys. Rev., D22, 2789.
doi:10.1103/PhysRevD.22.2789.
[589] Goldstone, J. (1961). Field theories with superconductor solutions. Nuovo
Cim., 19, 154–164. doi:10.1007/BF02812722.
[590] Goldstone, Jeffrey, Salam, Abdus, and Weinberg, Steven (1962). Broken Sym-
metries. Phys. Rev., 127, 965–970. doi:10.1103/PhysRev.127.965.
[591] Golonka, Piotr and Was, Zbigniew (2006). PHOTOS Monte Carlo: A Preci-
sion tool for QED corrections in Z and W decays. Eur. Phys. J., C45, 97–107.
doi:10.1140/epjc/s2005-02396-4.
[592] Good, M.L. and Walker, W.D. (1960). Diffraction dissociation of beam particles.
Phys. Rev., 120, 1857–1860. doi:10.1103/PhysRev.120.1857.
[593] Gottschalk, Thomas D. (1983). A realistic model for e+ e- annihilation including
parton bremsstrahlung effects. Nucl. Phys., B214, 201–222. doi:10.1016/0550-
3213(83)90658-2.
[594] Grazzini, Massimiliano (2008). NNLO predictions for the Higgs boson signal in
the H → WW → ℓνℓν and H → ZZ → 4ℓ decay channels. JHEP , 0802, 043.
doi:10.1088/1126-6708/2008/02/043.
[595] Grazzini, Massimiliano, Kallweit, Stefan, Rathlev, Dirk, and Torre, Alessandro
(2014). Zγ production at hadron colliders in NNLO QCD. Phys. Lett., B731,
204–207. doi:10.1016/j.physletb.2014.02.037.
[596] Grazzini, M. et al. HqT program. http://theory.fi.infn.it/grazzini/codes.html.
[597] Grellscheid, David and Richardson, Peter (2007). Simulation of τ decays in the
Herwig++ event generator (arXiv:0710.1951).
[598] Gribov, V.N. (1968). A Reggeon diagram technique. Sov.Phys.JETP , 26, 414–
422.
[631] Hoeche, Stefan, Krauss, Frank, Schonherr, Marek, and Siegert, Frank (2013b).
QCD matrix elements + parton showers: The NLO case. JHEP , 1304, 027.
doi:10.1007/JHEP04(2013)027.
[632] Hoeche, Stefan, Krauss, Frank, Schumann, Steffen, and Siegert, Frank (2009).
QCD matrix elements and truncated showers. JHEP , 05, 053. doi:10.1088/1126-
6708/2009/05/053.
[633] Hoeche, Stefan, Li, Ye, and Prestel, Stefan (2014b). Higgs-boson production
through gluon fusion at NNLO QCD with parton showers. Phys. Rev., D90(5),
054011. doi:10.1103/PhysRevD.90.054011.
[634] Hoeche, Stefan, Li, Ye, and Prestel, Stefan (2015). Drell-Yan lepton pair
production at NNLO QCD with parton showers. Phys. Rev., D91(7), 074015.
doi:10.1103/PhysRevD.91.074015.
[635] Hou, Wei-Shu (1993). Enhanced charged Higgs boson effects in B − → τ ν̄τ , µν̄µ ,
and b → τ ν̄τ + X. Phys. Rev., D48, 2342–2344. doi:10.1103/PhysRevD.48.2342.
[636] Hoyer, P., Osland, P., Sander, H. G., Walsh, T. F., and Zerwas, P. M.
(1979). Quantum chromodynamics and jets in e+ e− . Nucl. Phys., B161, 349.
doi:10.1016/0550-3213(79)90217-7.
[637] Humpert, B. and Odorico, R. (1985). Multiparton scattering and QCD radi-
ation as sources of four-jet events. Phys. Lett., B154, 211. doi:10.1016/0370-
2693(85)90587-8.
[638] Hunter, J. D. (2007). Matplotlib: A 2D graphics environment. Computing in
Science & Engineering, 9(3), 90–95.
[639] Isgur, Nathan, Scora, Daryl, Grinstein, Benjamin, and Wise, Mark B. (1989).
Semileptonic B and D decays in the quark model. Phys. Rev., D39, 799.
doi:10.1103/PhysRevD.39.799.
[640] Isgur, Nathan and Wise, Mark B. (1989). Weak decays of heavy mesons
in the static quark approximation. Phys. Lett., B232, 113. doi:10.1016/0370-
2693(89)90566-2.
[641] Isgur, Nathan and Wise, Mark B. (1990a). Influence of the B ∗ Resonance on
B̄ → πeν̄e . Phys. Rev., D41, 151. doi:10.1103/PhysRevD.41.151.
[642] Isgur, Nathan and Wise, Mark B. (1990b). Weak transition form factors between
heavy mesons. Phys. Lett., B237, 527. doi:10.1016/0370-2693(90)91219-2.
[643] Isgur, Nathan and Wise, Mark B. (1991). Heavy baryon weak form-factors. Nucl.
Phys., B348, 276–292. doi:10.1016/0550-3213(91)90518-3.
[644] Ita, H., Bern, Z., Dixon, L.J., Febres Cordero, Fernando, Kosower, D.A. et al.
(2012). Precise predictions for Z + 4 jets at hadron colliders. Phys. Rev., D85,
031501. doi:10.1103/PhysRevD.85.031501.
[645] Jackson, John David. Classical electrodynamics (3rd edn). Wiley, New York,
NY.
[646] Jadach, S., Was, Z., Decker, R., and Kuhn, Johann H. (1993). The tau
decay library TAUOLA: Version 2.4. Comput. Phys. Commun., 76, 361–380.
doi:10.1016/0010-4655(93)90061-G.
[647] Jaiswal, Prerit, Kopp, Karoline, and Okui, Takemichi (2013). Higgs
Production Amidst the LHC Detector. Phys. Rev., D87(11), 115017.
doi:10.1103/PhysRevD.87.115017.
[648] James, F. (1980). Monte Carlo theory and practice. Rept. Prog. Phys., 43, 1145.
doi:10.1088/0034-4885/43/9/002.
[649] Jimenez-Delgado, P. and Reya, E. (2009). Dynamical NNLO parton distribu-
tions. Phys. Rev., D79, 074023. doi:10.1103/PhysRevD.79.074023.
[650] Juste, Aurelio, Mantry, Sonny, Mitov, Alexander, Penin, Alexander, Skands,
Peter, Varnes, Erich, Vos, Marcel, and Wimpenny, Stephen (2014). Determination
of the top quark mass circa 2013: methods, subtleties, perspectives. Eur. Phys.
J., C74(10), 3119. doi:10.1140/epjc/s10052-014-3119-5.
[651] Kanaki, Aggeliki and Papadopoulos, Costas G. (2000). HELAC: A Package to
compute electroweak helicity amplitudes. Comput. Phys. Commun., 132, 306–315.
doi:10.1016/S0010-4655(00)00151-X.
[652] Karlberg, Alexander, Re, Emanuele, and Zanderighi, Giulia (2014). NNLOPS
accurate Drell-Yan production. JHEP , 09, 134. doi:10.1007/JHEP09(2014)134.
[653] Kartvelishvili, V.G., Likhoded, A.K., and Petrov, V.A. (1978). On the frag-
mentation functions of heavy quarks into hadrons. Phys. Lett., B78, 615.
doi:10.1016/0370-2693(78)90653-6.
[654] Kauer, Nikolas and Passarino, Giampiero (2012). Inadequacy of zero-
width approximation for a light Higgs boson signal. JHEP , 08, 116.
doi:10.1007/JHEP08(2012)116.
[655] Khachatryan, Vardan et al. (2011). Strange particle production in pp collisions
at √s = 0.9 and 7 TeV. JHEP , 1105, 064. doi:10.1007/JHEP05(2011)064.
[656] Khachatryan, Vardan et al. (2014a). Measurement of prompt J/ψ pair production
in pp collisions at √s = 7 TeV. JHEP , 1409, 094. doi:10.1007/JHEP09(2014)094.
[657] Khachatryan, Vardan et al. (2014b). Measurement of the t-channel single-top-
quark production cross section and of the |Vtb| CKM matrix element in pp
collisions at √s = 8 TeV. JHEP , 06, 090. doi:10.1007/JHEP06(2014)090.
[658] Khachatryan, Vardan et al. (2014c). Observation of the diphoton decay of the
Higgs boson and measurement of its properties. Eur. Phys. J., C74(10), 3076.
doi:10.1140/epjc/s10052-014-3076-z.
[659] Khachatryan, Vardan et al. (2015a). Constraints on the spin-parity and anoma-
lous HVV couplings of the Higgs boson in proton collisions at 7 and 8 TeV. Phys.
Rev., D92(1), 012004. doi:10.1103/PhysRevD.92.012004.
[660] Khachatryan, Vardan et al. (2015b). Differential cross section measurements for
the production of a W boson in association with jets in proton-proton collisions at
√s = 7 TeV. Phys. Lett., B741, 12–37. doi:10.1016/j.physletb.2014.12.003.
[661] Khachatryan, Vardan et al. (2015c). Limits on the Higgs boson lifetime and
width from its decay to four charged leptons. Phys. Rev., D92(7), 072010.
doi:10.1103/PhysRevD.92.072010.
[662] Khachatryan, Vardan et al. (2015d). Measurement of the differential cross sec-
tion for top quark pair production in pp collisions at √s = 8 TeV. Eur. Phys.
J., C75(11), 542. doi:10.1140/epjc/s10052-015-3709-x.
[663] Khachatryan, Vardan et al. (2015e). Measurement of the inclusive 3-jet produc-
tion differential cross section in proton-proton collisions at 7 TeV and determina-
tion of the strong coupling constant in the TeV range. Eur. Phys. J., C75(5), 186.
doi:10.1140/epjc/s10052-015-3376-y.
[664] Khachatryan, Vardan et al. (2015f). Measurement of the pp → ZZ production
cross section and constraints on anomalous triple gauge couplings in four-lepton
final states at √s = 8 TeV. Phys. Lett., B740, 250–272. [Erratum: Phys. Lett.
B757, 569 (2016)]. doi:10.1016/j.physletb.2016.04.010, 10.1016/j.physletb.2014.11.059.
[665] Khachatryan, Vardan et al. (2015g). Measurement of the pp → ZZ production
cross section and constraints on anomalous triple gauge couplings in four-lepton
final states at √s = 8 TeV. Phys. Lett., B740, 250.
doi:10.1016/j.physletb.2014.11.059.
[666] Khachatryan, Vardan et al. (2015h). Measurement of the Z boson differential
cross section in transverse momentum and rapidity in proton-proton collisions at 8
TeV. Phys. Lett., B749, 187–209. doi:10.1016/j.physletb.2015.07.065.
[667] Khachatryan, Vardan et al. (2015i). Measurements of differential and double-
differential Drell-Yan cross sections in proton-proton collisions at 8 TeV. Eur.
Phys. J., C75(4), 147. doi:10.1140/epjc/s10052-015-3364-2.
[668] Khachatryan, Vardan et al. (2015j). Measurements of jet multiplicity and dif-
ferential production cross sections of Z+ jets events in proton-proton collisions at
√s = 7 TeV. Phys. Rev., D91(5), 052008. doi:10.1103/PhysRevD.91.052008.
[669] Khachatryan, Vardan et al. (2015k). Precise determination of the mass of the
Higgs boson and tests of compatibility of its couplings with the standard model
predictions using proton collisions at 7 and 8 TeV. Eur. Phys. J., C75(5), 212.
doi:10.1140/epjc/s10052-015-3351-7.
[670] Khachatryan, Vardan et al. (2016a). Inclusive and differential measurements
of the tt̄ charge asymmetry in pp collisions at √s = 8 TeV. Phys. Lett., B757,
154–179. doi:10.1016/j.physletb.2016.03.060.
[671] Khachatryan, Vardan et al. (2016b). Measurement of differential cross sections
for Higgs boson production in the diphoton decay channel in pp collisions at √s =
8 TeV. Eur. Phys. J., C76(1), 13. doi:10.1140/epjc/s10052-015-3853-3.
[672] Khachatryan, Vardan et al. (2016c). Measurement of the charge asymmetry
in top quark pair production in pp collisions at √s = 8 TeV using a template
method. Phys. Rev., D93(3), 034014. doi:10.1103/PhysRevD.93.034014.
[673] Khachatryan, Vardan et al. (2016d). Measurement of the W+W− cross section
in pp collisions at √s = 8 TeV and limits on anomalous gauge couplings. Eur.
Phys. J., C76(7), 401. doi:10.1140/epjc/s10052-016-4219-1.
[674] Khachatryan, Vardan et al. (2016e). Measurements of tt̄ charge asymmetry using
dilepton final states in pp collisions at √s = 8 TeV. Phys. Lett., B760, 365–386.
doi:10.1016/j.physletb.2016.07.006.
[675] Kibble, T.W.B. (1967). Symmetry breaking in non-Abelian gauge theories.
Phys. Rev., 155, 1554–1561. doi:10.1103/PhysRev.155.1554.
[676] Kidonakis, Nikolaos and Owens, J.F. (2001). Effects of higher order thresh-
old corrections in high E(T) jet production. Phys. Rev., D63, 054019.
doi:10.1103/PhysRevD.63.054019.
[677] Kilian, Wolfgang, Ohl, Thorsten, and Reuter, Jurgen (2011). WHIZARD: sim-
ulating multi–particle processes at LHC and ILC. Eur. Phys. J., C71, 1742.
doi:10.1140/epjc/s10052-011-1742-y.
[678] Kinoshita, T. (1962). Mass singularities of Feynman amplitudes. J. Math.
Phys., 3, 650–677. doi:10.1063/1.1724268.
[679] Kirschner, R. (1979). Generalized Lipatov-Altarelli-Parisi equations and jet cal-
culus rules. Phys. Lett., 84B, 266–270. doi:10.1016/0370-2693(79)90300-9.
[680] Klasen, M. and Kramer, G. (1997). Jet shapes in ep and pp̄ collisions in NLO
QCD. Phys. Rev., D56, 2702–2712. doi:10.1103/PhysRevD.56.2702.
[681] Kleiss, Ronald and Kuijf, Hans (1989). Multi - gluon cross-sections and five jet
production at hadron colliders. Nucl. Phys., B312, 616–644. doi:10.1016/0550-
3213(89)90574-9.
[682] Kleiss, Ronald and Pittau, Roberto (1994). Weight optimization in multichan-
nel Monte Carlo. Comput. Phys. Commun., 83, 141–146. doi:10.1016/0010-
4655(94)90043-4.
[683] Kleiss, R. and Stirling, W.James (1988). Top quark production at hadron col-
liders: some useful formulae. Z. Phys., C40, 419–423. doi:10.1007/BF01548856.
[684] Kleiss, R. and Stirling, W. James (1985). Spinor techniques for calculating
pp̄ → W±/Z0 + jets. Nucl. Phys., B262, 235–262. doi:10.1016/0550-
3213(85)90285-8.
[685] Kleiss, R., Stirling, W. James, and Ellis, S. D. (1986). A New Monte Carlo treat-
ment of multiparticle phase space at high-energies. Comput. Phys. Commun., 40,
359. doi:10.1016/0010-4655(86)90119-0.
[686] Kluge, T., Rabbertz, K., and Wobisch, M. (2006). FastNLO: Fast pQCD calcu-
lations for PDF fits. hep-ph/0609285. pp. 483–486.
[687] Kniehl, Bernd A., Kramer, G., and Potter, B. (2000). Fragmentation functions
for pions, kaons, and protons at next–to–leading order. Nucl. Phys., B582, 514–
536. doi:10.1016/S0550-3213(00)00303-5.
[688] Koba, Z., Nielsen, Holger Bech, and Olesen, P. (1972). Scaling of multiplic-
ity distributions in high-energy hadron collisions. Nucl. Phys., B40, 317–334.
doi:10.1016/0550-3213(72)90551-2.
[689] Kodaira, Jiro and Trentadue, Luca (1982). Summing soft emission in QCD.
Phys. Lett., B112, 66. doi:10.1016/0370-2693(82)90907-8.
[690] Konishi, K., Ukawa, A., and Veneziano, G. (1979). Jet calculus: A Simple al-
gorithm for resolving QCD jets. Nucl. Phys., B157, 45–107. doi:10.1016/0550-
3213(79)90053-1.
[691] Korchemsky, Gregory P. and Sterman, George F. (1995). Non-perturbative
corrections in resummed cross-sections. Nucl. Phys., B437, 415–432.
doi:10.1016/0550-3213(94)00006-Z.
[692] Kosower, David A. (1990). Light cone recurrence relations for QCD amplitudes.
Nucl. Phys., B335, 23–44. doi:10.1016/0550-3213(90)90167-C.
[693] Kovarik, K., Schienbein, I., Olness, F.I., Yu, J.Y., Keppel, C. et al. (2011). Nu-
clear corrections in neutrino-nucleus DIS and their compatibility with global NPDF
analyses. Phys. Rev. Lett., 106, 122301. doi:10.1103/PhysRevLett.106.122301.
[694] Kramer, Michael, Olness, Fredrick I., and Soper, Davison E. (2000). Treat-
ment of heavy quarks in deeply inelastic scattering. Phys. Rev., D62, 096007.
doi:10.1103/PhysRevD.62.096007.
[695] Krauss, F. (2002). Matrix elements and parton showers in hadronic interactions.
JHEP , 08, 015. doi:10.1088/1126-6708/2002/08/015.
[696] Krauss, F., Kuhn, R., and Soff, G. (2002). AMEGIC++ 1.0: A Matrix element
generator in C++. JHEP , 02, 044. doi:10.1088/1126-6708/2002/02/044.
[697] Kretzer, S. (2000). Fragmentation functions from flavor inclu-
sive and flavor tagged e+ e− annihilations. Phys. Rev., D62, 054001.
doi:10.1103/PhysRevD.62.054001.
[698] Kretzer, S., Lai, H.L., Olness, F.I., and Tung, W.K. (2004). CTEQ6 parton distributions with heavy quark mass effects. Phys. Rev., D69, 114005. doi:10.1103/PhysRevD.69.114005.
[699] Krohn, David, Thaler, Jesse, and Wang, Lian-Tao (2010). Jet Trimming.
JHEP , 1002, 084. doi:10.1007/JHEP02(2010)084.
[700] Kuhn, Johann H. and Rodrigo, German (1998). Charge asymme-
try in hadroproduction of heavy quarks. Phys. Rev. Lett., 81, 49–52.
doi:10.1103/PhysRevLett.81.49.
[701] Kuhn, Johann H. and Rodrigo, German (1999). Charge asymme-
try of heavy quarks at hadron colliders. Phys. Rev., D59, 054017.
doi:10.1103/PhysRevD.59.054017.
[702] Kuhn, Johann H. and Rodrigo, German (2012). Charge asymme-
tries of top quarks at hadron colliders revisited. JHEP , 1201, 063.
doi:10.1007/JHEP01(2012)063.
[703] Kuhn, Johann H. and Santamaria, A. (1990). τ decays to pions. Z. Phys., C48,
445–452. doi:10.1007/BF01572024.
[704] Kuhn, Johann H., Scharf, A., and Uwer, P. (2006). Electroweak corrections to
top-quark pair production in quark-antiquark annihilation. Eur. Phys. J., C45,
139–150. doi:10.1140/epjc/s2005-02423-6.
[705] Kuhn, J. H., Scharf, A., and Uwer, P. (2015). Weak interactions in top-quark
pair production at hadron colliders: An Update. Phys. Rev., D91(1), 014020.
doi:10.1103/PhysRevD.91.014020.
[706] Kuhn, Johann H. and Was, Z. (2008). τ decays to five mesons in TAUOLA. Acta Phys. Polon., B39, 147–158.
[707] Kulesza, Anna, Sterman, George F., and Vogelsang, Werner (2004).
Joint resummation for Higgs production. Phys. Rev., D69, 014012.
doi:10.1103/PhysRevD.69.014012.
[708] Kuraev, E. A., Lipatov, L. N., and Fadin, Victor S. (1976). Multi-Reggeon processes in the Yang-Mills theory. Sov. Phys. JETP, 44, 443–450.
[709] Kuraev, E. A., Lipatov, L. N., and Fadin, V. S. (1977). The Pomeranchuk singularity in non-Abelian gauge theories. Sov. Phys. JETP, 45, 199–204.
[710] Kuzmin, V.A., Rubakov, V.A., and Shaposhnikov, M.E. (1985). On the anoma-
lous electroweak baryon number nonconservation in the early universe. Phys.
Lett., B155, 36. doi:10.1016/0370-2693(85)91028-7.
[711] Ladinsky, G.A. and Yuan, C.P. (1994). The non-perturbative regime in QCD resummation for gauge boson production at hadron colliders. Phys. Rev., D50, R4239. doi:10.1103/PhysRevD.50.R4239.
[712] Laenen, Eric, Sterman, George F., and Vogelsang, Werner (2000). Higher order
QCD corrections in prompt photon production. Phys. Rev. Lett., 84, 4296–4299.
doi:10.1103/PhysRevLett.84.4296.
[713] Lai, Hung-liang, Guzzi, Marco, Huston, Joey, Li, Zhao, Nadolsky, Pavel M. et al.
(2010a). New parton distributions for collider physics. Phys. Rev., D82, 074024.
doi:10.1103/PhysRevD.82.074024.
[714] Lai, Hung-Liang, Huston, Joey, Li, Zhao, Nadolsky, Pavel, Pumplin, Jon et al.
(2010b). Uncertainty induced by QCD coupling in the CTEQ global analysis of
parton distributions. Phys. Rev., D82, 054021. doi:10.1103/PhysRevD.82.054021.
[715] Lai, Hung-Liang, Huston, Joey, Mrenna, Stephen, Nadolsky, Pavel, Stump,
Daniel et al. (2010c). Parton distributions for event generators. JHEP , 1004,
035. doi:10.1007/JHEP04(2010)035.
[716] Landry, F., Brock, R., Nadolsky, Pavel M., and Yuan, C.P. (2003). Teva-
tron Run-1 Z boson data and Collins-Soper-Sterman resummation formalism.
Phys. Rev., D67, 073016. doi:10.1103/PhysRevD.67.073016.
[717] Landshoff, P.V. and Polkinghorne, J.C. (1971). The dual quark-parton model
and high energy hadronic processes. Nucl. Phys., B32, 541–556. doi:10.1016/0550-
3213(71)90493-7.
[718] Lane, Kenneth D. (1996). Electroweak and flavor dynamics at hadron colliders.
hep-ph/9605257.
[719] Lange, Bjorn O., Neubert, Matthias, and Paz, Gil (2005). Theory of charm-
less inclusive B decays and the extraction of Vub . Phys. Rev., D72, 073006.
doi:10.1103/PhysRevD.72.073006.
[720] Langenfeld, U., Moch, S., and Uwer, P. (2009). Measuring the running top-quark
mass. Phys. Rev., D80, 054009. doi:10.1103/PhysRevD.80.054009.
[721] Laporta, S. (2000). High precision calculation of multiloop Feynman integrals by difference equations. Int. J. Mod. Phys., A15, 5087–5159. doi:10.1016/S0217-751X(00)00215-7.
[722] Larkoski, Andrew J., Marzani, Simone, Soyez, Gregory, and Thaler, Jesse (2014).
Soft Drop. JHEP , 05, 146. doi:10.1007/JHEP05(2014)146.
[723] Lee, Benjamin W., Quigg, C., and Thacker, H.B. (1977). Weak interactions at
very high-energies: The Role of the Higgs Boson mass. Phys. Rev., D16, 1519.
doi:10.1103/PhysRevD.16.1519.
[724] Lee, T.D. and Nauenberg, M. (1964). Degenerate systems and mass singularities.
Phys. Rev., 133, B1549–B1562. doi:10.1103/PhysRev.133.B1549.
[725] Lepage, G. Peter (1980). VEGAS: an adaptive multi-dimensional integration program. CLNS-80/447.
[744] Mangano, Michelangelo L., Moretti, Mauro, and Pittau, Roberto (2002). Mul-
tijet matrix elements and shower evolution in hadronic collisions: W bb̄ + n jets as
a case study. Nucl. Phys., B632, 343–362. doi:10.1016/S0550-3213(02)00249-3.
[745] Mannel, Thomas, Roberts, Winston, and Ryzak, Zbigniew (1991). Baryons in
the heavy quark effective theory. Nucl. Phys., B355, 38–53. doi:10.1016/0550-
3213(91)90301-D.
[746] Manohar, Aneesh, Nason, Paolo, Salam, Gavin P., and Zanderighi, Giu-
lia (2016). How bright is the proton? A precise determination of the
photon parton distribution function. Phys. Rev. Lett., 117(24), 242002.
doi:10.1103/PhysRevLett.117.242002.
[747] Manohar, Aneesh V. and Trott, Michael (2012). Electroweak Sudakov Cor-
rections and the Top Quark Forward-Backward Asymmetry. Phys. Lett., B711,
313–316. doi:10.1016/j.physletb.2012.04.013.
[748] Manohar, Aneesh V. and Waalewijn, Wouter J. (2012). A QCD Analysis of
Double Parton Scattering: Color Correlations, Interference Effects and Evolution.
Phys. Rev., D85, 114009. doi:10.1103/PhysRevD.85.114009.
[749] Marchesini, G., Trentadue, L., and Veneziano, G. (1981). Space-time descrip-
tion of color screening via Jet calculus techniques. Nucl. Phys., B181, 335.
doi:10.1016/0550-3213(81)90357-6.
[750] Marchesini, G. and Webber, B. R. (1984). Simulation of QCD jets including soft
gluon interference. Nucl. Phys., B238, 1. doi:10.1016/0550-3213(84)90463-2.
[751] Marchesini, G. and Webber, B. R. (1990). Simulation of QCD Coher-
ence in Heavy Quark Production and Decay. Nucl. Phys., B330, 261–283.
doi:10.1016/0550-3213(90)90310-A.
[752] Marquard, Peter, Smirnov, Alexander V., Smirnov, Vladimir A., and Stein-
hauser, Matthias (2015). Quark mass relations to four-loop order in perturbative
QCD. Phys. Rev. Lett., 114(14), 142002. doi:10.1103/PhysRevLett.114.142002.
[753] Martin, A.D., Roberts, R.G., Stirling, W.J., and Thorne, R.S. (2003). Uncertain-
ties of predictions from parton distributions. 1: Experimental errors. Eur. Phys.
J., C28, 455–473. doi:10.1140/epjc/s2003-01196-2.
[754] Martin, A. D., Roberts, R. G., Stirling, W. J., and Thorne, R. S. (2005). Par-
ton distributions incorporating QED contributions. Eur. Phys. J., C39, 155–161.
doi:10.1140/epjc/s2004-02088-7.
[755] Martin, A. D., Stirling, W. J., Thorne, R. S., and Watt, G. (2009). Parton
distributions for the LHC. Eur. Phys. J., C63, 189–295. doi:10.1140/epjc/s10052-
009-1072-5.
[756] Mastrolia, P., Ossola, G., Reiter, T., and Tramontano, F. (2010). Scatter-
ing amplitudes from unitarity-based reduction algorithm at the integrand-level.
JHEP , 1008, 080. doi:10.1007/JHEP08(2010)080.
[757] Maître, Daniel and Sapeta, Sebastian (2013). Simulated NNLO for high-pT observables in vector boson + jets production at the LHC. Eur. Phys. J., C73(12), 2663. doi:10.1140/epjc/s10052-013-2663-8.
[758] Meissner, W. and Ochsenfeld, R. (1933). Ein neuer Effekt bei Eintritt der Supraleitfähigkeit [A new effect at the onset of superconductivity]. Die Naturwissenschaften, 21(44), 787–788. doi:10.1007/BF01504252.
[759] Mekhfi, M. (1985). Multiparton processes: an application to double Drell-Yan.
Phys. Rev., D32, 2371. doi:10.1103/PhysRevD.32.2371.
[760] Melia, Tom, Nason, Paolo, Rontsch, Raoul, and Zanderighi, Giulia (2011). W+W−, WZ and ZZ production in the POWHEG BOX. JHEP, 1111, 078. doi:10.1007/JHEP11(2011)078.
[761] Melnikov, Kirill and Petriello, Frank (2006). Electroweak gauge boson production at hadron colliders through O(αs²). Phys. Rev., D74, 114017. doi:10.1103/PhysRevD.74.114017.
[762] Melnikov, Kirill and Schulze, Markus (2009). NLO QCD corrections to top quark
pair production and decay at hadron colliders. JHEP , 0908, 049. doi:10.1088/1126-
6708/2009/08/049.
[763] Melnikov, Kirill and Schulze, Markus (2010). NLO QCD corrections to top
quark pair production in association with one hard jet at hadron colliders. Nucl.
Phys., B840, 129–159. doi:10.1016/j.nuclphysb.2010.07.003.
[764] Mikaelian, K.O., Samuel, M.A., and Sahdev, D. (1979). The magnetic mo-
ment of weak bosons produced in pp and pp̄ collisions. Phys. Rev. Lett., 43, 746.
doi:10.1103/PhysRevLett.43.746.
[765] Mishra, Kalanand, Becher, Thomas, Barze, Luca, Chiesa, Mauro, Dittmaier,
Stefan et al. (2013). Electroweak corrections at high energies. arXiv:1308.1430.
[766] Miu, Gabriela and Sjöstrand, Torbjörn (1999). W production in an im-
proved parton-shower approach. Phys. Lett., B449, 313–320. doi:10.1016/S0370-
2693(99)00068-4.
[767] Moch, S., Vermaseren, J. A. M., and Vogt, A. (2004). The three-loop split-
ting functions in QCD: The non-singlet case. Nucl. Phys., B688, 101–134.
doi:10.1016/j.nuclphysb.2004.03.030.
[768] Monni, Pier Francesco and Zanderighi, Giulia (2015). On the excess in the inclusive W+W− → l+l−νν cross section. JHEP, 05, 013. doi:10.1007/JHEP05(2015)013.
[769] Moreno, G., Brown, C.N., Cooper, W.E., Finley, D., Hsiung, Y.B. et al. (1991). Dimuon production in proton-copper collisions at √s = 38.8 GeV. Phys. Rev., D43, 2815–2836. doi:10.1103/PhysRevD.43.2815.
[770] Moretti, Mauro, Ohl, Thorsten, and Reuter, Jurgen (2001). O’Mega: An opti-
mizing matrix element generator. hep-ph/0102195.
[771] Mueller, Alfred H. and Navelet, H. (1987). An inclusive minijet cross-section
and the bare pomeron in QCD. Nucl. Phys., B282, 727. doi:10.1016/0550-
3213(87)90705-X.
[772] Mukherjee, A. and Pisano, Cristian (2003). Manifestly covariant analysis of the QED Compton process in ep → eγp and ep → eγX. Eur. Phys. J., C30, 477–486. doi:10.1140/epjc/s2003-01308-0.
[773] Murayama, H., Watanabe, I., and Hagiwara, Kaoru (1992). HELAS: HELicity
amplitude subroutines for Feynman diagram evaluations. KEK-91-11.
[774] Nadolsky, Pavel M. et al. (2008). Implications of CTEQ global analysis for
collider observables. Phys. Rev., D78, 013004. doi:10.1103/PhysRevD.78.013004.
[775] Nadolsky, Pavel M., Balazs, C., Berger, Edmond L., and Yuan, C.-P. (2007).
Gluon-gluon contributions to the production of continuum diphoton pairs at
hadron colliders. Phys. Rev., D76, 013008. doi:10.1103/PhysRevD.76.013008.
[776] Nadolsky, Pavel M. and Sullivan, Z. (2001). PDF uncertainties in WH production
at Tevatron. eConf , C010630, P510.
[777] Nagy, Zoltan (2003). Next-to-leading order calculation of three jet
observables in hadron hadron collision. Phys. Rev., D68, 094002.
doi:10.1103/PhysRevD.68.094002.
[778] Nagy, Zoltan and Soper, Davison E. (2006). A New parton shower algorithm:
Shower evolution, matching at leading and next-to-leading order level. In Proceed-
ings, Ringberg Workshop on New Trends in HERA Physics 2005: Ringberg Castle,
Tegernsee, Germany, October 2-7, 2005, pp. 101–123.
[779] Nagy, Zoltan and Soper, Davison E. (2008). Parton showers with quan-
tum interference: leading color, with spin. JHEP , 07, 025. doi:10.1088/1126-
6708/2008/07/025.
[780] Nagy, Zoltan and Soper, Davison E. (2010). On the transverse momentum
in Z-boson production in a virtuality ordered parton shower. JHEP , 03, 097.
doi:10.1007/JHEP03(2010)097.
[781] Nagy, Zoltan and Trocsanyi, Zoltan (1999). Next-to-leading order calculation of four jet observables in electron positron annihilation. Phys. Rev., D59, 014020. doi:10.1103/PhysRevD.59.014020 [erratum: 10.1103/PhysRevD.62.099902].
[782] Nason, Paolo (2004). A new method for combining NLO QCD with shower
Monte Carlo algorithms. JHEP , 11, 040. doi:10.1088/1126-6708/2004/11/040.
[783] Nason, P. and Webber, B.R. (1994). Scaling violation in e+ e− fragmentation
functions: QCD evolution, hadronization and heavy quark mass effects. Nucl.
Phys., B421, 473–517. doi:10.1016/0550-3213(94)90513-4.
[784] Neubert, Matthias (1994). Heavy quark symmetry. Phys. Rept., 245, 259–396.
doi:10.1016/0370-1573(94)90091-4.
[785] Nielsen, Holger Bech and Olesen, P. (1973). Vortex line models for dual strings.
Nucl. Phys., B61, 45–61. doi:10.1016/0550-3213(73)90350-7.
[786] Norrbin, E. and Sjöstrand, T. (2001). QCD radiation off heavy particles. Nucl. Phys., B603, 297–342. doi:10.1016/S0550-3213(01)00099-2.
[787] Nussinov, S. (1975). Colored quark version of some hadronic puzzles.
Phys. Rev. Lett., 34, 1286–1289. doi:10.1103/PhysRevLett.34.1286.
[788] Odorico, R. (1980). Exclusive calculations for QCD jets in a Monte Carlo ap-
proach. Nucl. Phys., B172, 157. doi:10.1016/0550-3213(80)90165-0.
[789] Ohl, Thorsten (1999). Vegas revisited: Adaptive Monte Carlo integration be-
yond factorization. Comput. Phys. Commun., 120, 13–19. doi:10.1016/S0010-
4655(99)00209-X.
[790] Okun, L. B. and Pomeranchuk, I. I. (1956). Sov. Phys. JETP, 3, 307.
[791] Olive, K. A. et al. (2014). Review of Particle Physics. Chin. Phys., C38, 090001.
doi:10.1088/1674-1137/38/9/090001.
[792] Ossola, Giovanni, Papadopoulos, Costas G., and Pittau, Roberto (2007). Reducing full one-loop amplitudes to scalar integrals at the integrand level. Nucl. Phys., B763, 147–169.
[810] Plehn, T., Rainwater, David L., and Zeppenfeld, D. (2000). A method for identifying H → τ+τ− → e±µ∓ + missing pT at the CERN LHC. Phys. Rev., D61, 093005. doi:10.1103/PhysRevD.61.093005.
[811] Plehn, Tilman, Rainwater, David L., and Zeppenfeld, Dieter (2002). Determin-
ing the structure of Higgs couplings at the LHC. Phys. Rev. Lett., 88, 051801.
doi:10.1103/PhysRevLett.88.051801.
[812] Plothow-Besch, H. (1995). The Parton distribution function library. Int. J. Mod. Phys., A10, 2901–2920. doi:10.1142/S0217751X9500139X.
[813] Politzer, H. David and Wise, Mark B. (1988). Effective field theory approach
to processes involving both light and heavy fields. Phys. Lett., B208, 504.
doi:10.1016/0370-2693(88)90656-9.
[814] Pomeranchuk, I. I. (1956). Sov. Phys. JETP, 3, 306.
[815] Pomeranchuk, I. I. (1958). Equality of the nucleon and antinucleon total inter-
action cross section at high energies. Sov. Phys. JETP , 7, 499.
[816] Pumplin, Jon (2009). Data set diagonalization in a global fit. Phys. Rev., D80,
034002. doi:10.1103/PhysRevD.80.034002.
[817] Pumplin, J., Stump, D., Brock, R., Casey, D., Huston, J. et al. (2001). Uncer-
tainties of predictions from parton distribution functions. 2. The Hessian method.
Phys. Rev., D65, 014013. doi:10.1103/PhysRevD.65.014013.
[818] Pumplin, J., Stump, D.R., Huston, J., Lai, H.L., Nadolsky, Pavel M. et al. (2002).
New generation of parton distributions with uncertainties from global QCD anal-
ysis. JHEP , 0207, 012. doi:10.1088/1126-6708/2002/07/012.
[819] Radescu, Voica (2010). Combination and QCD analysis of the HERA inclusive
cross sections. PoS , ICHEP2010, 168.
[820] Rainwater, David L. and Zeppenfeld, D. (1997). Searching for H → γγ in weak
boson fusion at the LHC. JHEP , 12, 005. doi:10.1088/1126-6708/1997/12/005.
[821] Rainwater, David L., Zeppenfeld, D., and Hagiwara, Kaoru (1998). Searching for H → τ+τ− in weak boson fusion at the CERN LHC. Phys. Rev., D59, 014037. doi:10.1103/PhysRevD.59.014037.
[822] Ramond, Pierre (1981). Field theory. A modern primer. Front. Phys., 51, 1–397.
[823] Regge, T. (1959). Introduction to complex orbital momenta. Nuovo Cim., 14,
951. doi:10.1007/BF02728177.
[824] Regge, T. (1960). Bound states, shadow states and Mandelstam representation.
Nuovo Cim., 18, 947–956. doi:10.1007/BF02733035.
[825] Richardson, Peter (2001). Spin correlations in Monte Carlo simulations.
JHEP , 11, 029. doi:10.1088/1126-6708/2001/11/029.
[826] Ritzmann, M., Kosower, D.A., and Skands, P. (2013). Antenna
showers with hadronic initial states. Phys. Lett., B718, 1345–1350.
doi:10.1016/j.physletb.2012.12.003.
[827] Rojo, Juan, Accardi, Alberto, Ball, Richard D., Cooper-Sarkar, Amanda, de Roeck, Albert et al. (2015). The PDF4LHC report on PDFs and LHC data: Results from Run I and preparation for Run II. arXiv:1507.00556.
[828] Rolbiecki, Krzysztof and Sakurai, Kazuki (2013). Light stops emerging in WW
cross section measurements? JHEP , 1309, 004. doi:10.1007/JHEP09(2013)004.
[829] Rontsch, Raoul and Schulze, Markus (2014). Constraining couplings of top
quarks to the Z boson in tt + Z production at the LHC. JHEP , 07, 091.
doi:10.1007/JHEP07(2014)091.
[830] Rubin, Mathieu, Salam, Gavin P., and Sapeta, Sebastian (2010). Giant QCD
K-factors beyond NLO. JHEP , 09, 084. doi:10.1007/JHEP09(2010)084.
[831] Rubin, V. C., Thonnard, N., and Ford, Jr., W. K. (1980). Rotational properties of 21 Sc galaxies with a large range of luminosities and radii, from NGC 4605 (R = 4 kpc) to UGC 2885 (R = 122 kpc). Astrophys. J., 238, 471. doi:10.1086/158003.
[832] Ryskin, M.G. and Snigirev, A.M. (2011). A fresh look at double parton scatter-
ing. Phys. Rev., D83, 114047. doi:10.1103/PhysRevD.83.114047.
[833] Ryskin, M.G. and Snigirev, A.M. (2012). Double parton scattering in dou-
ble logarithm approximation of perturbative QCD. Phys. Rev., D86, 014018.
doi:10.1103/PhysRevD.86.014018.
[834] Sakharov, A.D. (1967). Violation of CP invariance, C asymmetry, and baryon asymmetry of the universe. Pisma Zh. Eksp. Teor. Fiz., 5, 32–35. doi:10.1070/PU1991v034n05ABEH002497.
[835] Salam, Abdus (1968). Weak and electromagnetic interactions. Proceedings of the eighth Nobel symposium, Elementary particle physics: relativistic groups and analyticity, N. Svartholm, ed., Almqvist & Wiksell.
[836] Salam, Gavin P. and Soyez, Gregory (2007). A practical seedless infrared-safe
cone jet algorithm. JHEP , 0705, 086. doi:10.1088/1126-6708/2007/05/086.
[837] Schmidt, Carl, Pumplin, Jon, Stump, Daniel, and Yuan, C. P. (2016). CT14QED
parton distribution functions from isolated photon production in deep inelastic
scattering. Phys. Rev., D93(11), 114015. doi:10.1103/PhysRevD.93.114015.
[838] Schönherr, M. and Krauss, F. (2008). Soft photon radiation in particle decays
in SHERPA. JHEP , 12, 018. doi:10.1088/1126-6708/2008/12/018.
[839] Schumann, Steffen and Krauss, Frank (2008). A Parton shower algorithm
based on Catani-Seymour dipole factorisation. JHEP , 03, 038. doi:10.1088/1126-
6708/2008/03/038.
[840] Schwinger, Julian (1951a). On the Green's functions of quantized fields. I. Proceedings of the National Academy of Sciences, 37(7), 452–455. doi:10.1073/pnas.37.7.452.
[841] Schwinger, Julian S. (1951b). On gauge invariance and vacuum polarization.
Phys. Rev., 82, 664–679. doi:10.1103/PhysRev.82.664.
[842] Seymour, Michael H. (1994). Searches for new particles using cone and
cluster jet algorithms: A comparative study. Z. Phys., C62, 127–138.
doi:10.1007/BF01559532.
[843] Seymour, Michael H. (1995). Matrix element corrections to parton shower algo-
rithms. Comput. Phys. Commun., 90, 95–101. doi:10.1016/0010-4655(95)00064-M.
[844] Shekhovtsova, O., Przedzinski, T., Roig, P., and Was, Z. (2012). Resonance
chiral Lagrangian currents and τ decay Monte Carlo. Phys. Rev., D86, 113008.
doi:10.1103/PhysRevD.86.113008.
[845] Shelest, V.P., Snigirev, A.M., and Zinovev, G.M. (1982). The Multiparton
distribution equations in QCD. Phys. Lett., B113, 325. doi:10.1016/0370-
2693(82)90049-1.
[846] Sherstnev, A. and Thorne, R.S. (2008). Parton distributions for LO generators.
Eur. Phys. J., C55, 553–575. doi:10.1140/epjc/s10052-008-0610-x.
[847] Shifman, Mikhail A. and Voloshin, M.B. (1987). On annihilation of mesons built
from heavy and light quark and B̄0 ↔ B0 oscillations. Sov. J. Nucl. Phys., 45,
292.
[848] Shifman, Mikhail A. and Voloshin, M.B. (1988). On production of D and D∗
mesons in B meson decays. Sov. J. Nucl. Phys., 47, 511.
[849] Sjöstrand, Torbjörn (1984). Jet fragmentation of nearby partons. Nucl.
Phys., B248, 469. doi:10.1016/0550-3213(84)90607-2.
[850] Sjöstrand, Torbjörn (1984). The merging of jets. Phys. Lett., B142, 420–424. doi:10.1016/0370-2693(84)91354-6.
[851] Sjöstrand, Torbjörn (1985). A Model for initial state parton showers. Phys. Lett., 157B, 321–325. doi:10.1016/0370-2693(85)90674-4.
[852] Sjöstrand, Torbjörn, Edén, Patrik, Friberg, Christer, Lönnblad, Leif, Miu, Gabriela et al. (2001). High-energy physics event generation with PYTHIA 6.1. Comput. Phys. Commun., 135, 238–259. doi:10.1016/S0010-4655(00)00236-8.
[853] Sjöstrand, Torbjörn, Mrenna, Stephen, and Skands, Peter Z. (2006). PYTHIA 6.4 Physics and Manual. JHEP, 05, 026. doi:10.1088/1126-6708/2006/05/026.
[854] Sjöstrand, T. and Skands, Peter Z. (2003). Baryon number violation and string topologies. Nucl. Phys., B659, 243. doi:10.1016/S0550-3213(03)00193-7.
[855] Sjöstrand, T. and Skands, Peter Z. (2004). Multiple interactions and the structure of beam remnants. JHEP, 03, 053. doi:10.1088/1126-6708/2004/03/053.
[856] Sjöstrand, T. and Skands, Peter Z. (2005). Transverse-momentum-ordered showers and interleaved multiple interactions. Eur. Phys. J., C39, 129–154. doi:10.1140/epjc/s2004-02084-y.
[857] Sjöstrand, Torbjörn and van Zijl, Maria (1987). A Multiple interaction model for the event structure in hadron collisions. Phys. Rev., D36, 2019. doi:10.1103/PhysRevD.36.2019.
[858] Skands, Peter, Plehn, Tilman, and Rainwater, David (2005). QCD radiation in the production of high-ŝ final states. eConf, C0508141, ALCPG0417.
[859] Skands, Peter, Webber, Bryan, and Winter, Jan (2012). QCD coherence and the
top quark asymmetry. JHEP , 1207, 151. doi:10.1007/JHEP07(2012)151.
[860] Skands, Peter Z. and Wicke, Daniel (2007). Non-perturbative QCD ef-
fects and the top mass at the Tevatron. Eur. Phys. J., C52, 133–140.
doi:10.1140/epjc/s10052-007-0352-1.
[861] Snigirev, A.M. (2003). Double parton distributions in the leading log-
arithm approximation of perturbative QCD. Phys. Rev., D68, 114012.
doi:10.1103/PhysRevD.68.114012.
[862] Spergel, D.N. et al. (2003). First year Wilkinson Microwave Anisotropy Probe (WMAP) observations: Determination of cosmological parameters. Astrophys. J. Suppl., 148, 175–194. doi:10.1086/377226.
[879] van Hameren, A. (2011). OneLOop: For the evaluation of one-loop scalar func-
tions. Comput. Phys. Commun., 182, 2427–2438. doi:10.1016/j.cpc.2011.06.011.
[880] van Hameren, Andre and Papadopoulos, Costas G. (2002). A hierarchical
phase space generator for QCD antenna structures. Eur. Phys. J., C25, 563–574.
doi:10.1007/s10052-002-1000-4.
[881] van Oldenborgh, G.J. (1991). FF: A Package to evaluate one loop Feynman
diagrams. Comput. Phys. Commun., 66, 1–15. doi:10.1016/0010-4655(91)90002-3.
[882] Virkus, T. et al. (2008). Direct measurement of the Chudakov effect. Phys. Rev.
Lett., 100, 164802. doi:10.1103/PhysRevLett.100.164802.
[883] Vogt, A., Moch, S., and Vermaseren, J. A. M. (2004). The three–loop
splitting functions in QCD: The singlet case. Nucl. Phys., B691, 129–181.
doi:10.1016/j.nuclphysb.2004.04.024.
[884] von Weizsacker, C.F. (1934). Radiation emitted in collisions of very fast elec-
trons. Z. Phys., 88, 612–625. doi:10.1007/BF01333110.
[885] Watt, G. and Thorne, R. S. (2012). Study of Monte Carlo approach to ex-
perimental uncertainty propagation with MSTW 2008 PDFs. JHEP , 08, 052.
doi:10.1007/JHEP08(2012)052.
[886] Webber, Bryan R. (1984). A QCD Model for jet fragmentation including soft
gluon interference. Nucl. Phys., B238, 492–528. doi:10.1016/0550-3213(84)90333-
X.
[887] Webber, Bryan R. (1994). Estimation of power corrections to hadronic event
shapes. Phys. Lett., B339, 148–150. doi:10.1016/0370-2693(94)91147-9.
[888] Weinberg, Steven (1967). A Model of leptons. Phys. Rev. Lett., 19, 1264–1266.
doi:10.1103/PhysRevLett.19.1264.
[889] Werner, Klaus, Liu, Fu-Ming, and Pierog, Tanguy (2006). Parton ladder splitting
and the rapidity dependence of transverse momentum spectra in deuteron-gold
collisions at RHIC. Phys. Rev., C74, 044902. doi:10.1103/PhysRevC.74.044902.
[890] Weyl, Hermann (1931). The theory of groups and quantum mechanics. Dover,
New York, USA.
[891] Whalley, M. R., Bourilkov, D., and Group, R. C. (2005). The Les Houches accord
PDFs (LHAPDF) and LHAGLUE. In HERA and the LHC: A Workshop on the
implications of HERA for LHC physics. Proceedings, Part B, pp. 575–581.
[892] Williams, E.J. (1934). Nature of the high-energy particles of penetrating radi-
ation and status of ionization and radiation formulae. Phys. Rev., 45, 729–730.
doi:10.1103/PhysRev.45.729.
[893] Wilson, K.G. and Zimmermann, W. (1972). Operator product expansions and
composite field operators in the general framework of quantum field theory. Com-
mun.Math.Phys., 24, 87–106. doi:10.1007/BF01878448.
[894] Wilson, Kenneth G. (1974). Confinement of quarks. Phys. Rev., D10, 2445–
2459. doi:10.1103/PhysRevD.10.2445.
[895] Winter, Jan-Christopher and Krauss, Frank (2008). Initial-state showering
based on colour dipoles connected to incoming parton lines. JHEP , 07, 040.
doi:10.1088/1126-6708/2008/07/040.
[896] Winter, Jan-Christopher, Krauss, Frank, and Soff, Gerhard (2004). A Modified
cluster hadronization model. Eur. Phys. J., C36, 381–395. doi:10.1140/epjc/s2004-
01960-8.
[897] Witten, Edward (2004). Perturbative gauge theory as a string theory in twistor
space. Commun. Math. Phys., 252, 189–258. doi:10.1007/s00220-004-1187-3.
[898] Wobisch, M. and Wengler, T. (1998). Hadronization corrections to jet cross-
sections in deep inelastic scattering. hep-ph/9907280.
[899] Wolfram, Stephen (1980). Parton and hadron Production in e+ e− Annihilation.
In Elementary constituents and hadronic structure. Proceedings, 15th Rencontres
de Moriond, Les Arcs, France, March 9-21, 1980. vol. 1, pp. 549–588.
[900] Yang, Un-Ki et al. (2001). Measurements of F2 and xF3ν − xF3ν̄ from CCFR νµ-Fe and ν̄µ-Fe data in a physics model independent way. Phys. Rev. Lett., 86, 2742–2745. doi:10.1103/PhysRevLett.86.2742.
[901] Yennie, D. R., Frautschi, Steven C., and Suura, H. (1961). The infrared di-
vergence phenomena and high-energy processes. Annals Phys., 13, 379–452.
doi:10.1016/0003-4916(61)90151-8.
[902] Zeppenfeld, D., Kinnunen, R., Nikitenko, A., and Richter-Was, E. (2000). Mea-
suring Higgs boson couplings at the CERN LHC. Phys. Rev., D62, 013009.
doi:10.1103/PhysRevD.62.013009.
[903] Zinovev, G.M., Snigirev, A.M., and Shelest, V.P. (1982). Equations for many-parton distributions in Quantum Chromodynamics. Theor. Math. Phys., 51, 523–528. doi:10.1007/BF01017270.
Index