The Phenomenological Approach To Modeling The Dark Energy
Martin Kunz
Département de Physique Théorique and Center for Astroparticle Physics, Université de
Genève, Quai E. Ansermet 24, CH-1211 Genève 4, Switzerland
E-mail: martin.kunz@unige.ch
Abstract. In this mini-review we discuss first why we should investigate cosmological models
beyond ΛCDM. We then show how to describe dark energy or modified gravity models in a
fluid language with the help of one background and two perturbation quantities. We review
a range of dark energy models and study how they fit into the phenomenological framework,
including generalizations like phantom crossing, sound speeds different from c and non-zero
anisotropic stress, and how these effective quantities are linked to the underlying physical
models. We also discuss the limits of what can be measured with cosmological data, and
some challenges for the framework.
Contents

1 Introduction
2 Initial questions
  2.1 Reliability of the data
  2.2 Why go beyond Λ?
  2.3 Other possible explanations
3 A phenomenological description
  3.1 Overview
  3.2 Background evolution
  3.3 Perturbation equations
  3.4 Different descriptions
  3.5 Schematic Measurements
1 Introduction
2011 has been an exciting year for dark energy: In the same week in early October, the Nobel
prize in physics was given to Saul Perlmutter, Brian P. Schmidt and Adam G. Riess “for
the discovery of the accelerating expansion of the Universe through observations of distant
supernovae” and the Euclid satellite mission¹ [1] was selected by ESA for the second Cosmic
Vision launch slot (expected for 2019 – but some delays are likely between now and then).
Dark energy has arrived in the mainstream of physics, not only cosmology, and it is seen
as one of the big challenges facing the discipline at the beginning of the 21st century. And at
least observationally it really is a 21st century problem. While theoretically the cosmological
constant has been around since the development of General Relativity [2, 3] – as the unique
additional term that can self-consistently be added to the Einstein equations, and because
it would have permitted the construction of a static universe – the two observational pub-
lications for which the Nobel prize was given date from 1998 and 1999 respectively [4, 5].
¹ Continuously updated information on Euclid is available on http://www.euclid-ec.org.
Even though there were observational hints before then, it was really the evidence presen-
ted in these two papers that led, in a way reminiscent of a phase transition, to the general
acceptance amongst cosmologists that some kind of Dark Energy was necessary.
There are several excellent reviews discussing dark energy in great detail, for example
[6–12] just to name a few, and there is also a new book by Luca Amendola and Shinji
Tsujikawa on the topic [13]. In this mini-review, loosely based on an overview talk given at
PONT 2011 in Avignon, I do not plan to compete with these weighty tomes and to discuss in
depth the dark energy in all its guises and with all its consequences. Instead, I will present
a rather personal view of the dark energy, focused on its phenomenological aspects, and I
will be trying to add a few ideas that in my opinion are interesting but a bit off the trodden
path. Many more aspects of the accelerating universe are discussed in the companion papers
to this one: Observational Evidence of the Accelerated Expansion of the Universe by Pierre
Astier and Reynald Pain [14], Everything You always Wanted to Know about the Cosmological
Constant (but Were Afraid to Ask) by Jérôme Martin [15], Establishing Homogeneity of the
Universe in the Shadow of Dark Energy by Chris Clarkson [16] and Galileons in the Sky by
Claudia de Rham [17].
I will start in section 2 with the question whether the data is really reliable and needs
to be taken seriously, why it makes sense to go beyond a cosmological constant, and what
other explanations beyond Λ we can think of.
In section 3 we will then introduce the phenomenological framework that describes the
most general fluid in terms of the variables present in the energy-momentum tensor. These
variables allow for general departures from ΛCDM and in addition they allow us to understand,
in a precise, quantitative sense, which degrees of freedom can be probed by cosmological
measurements.
We review actual dark energy models in section 4, illustrating how they fit into the
framework and how different types can be distinguished through the values taken by their fluid
parameters. We also discuss how the phenomenological variables allow for straightforward
generalizations of the models (for example to different sound speeds) and how this is then
reflected in “model space”.
In the penultimate section 5 we discuss some of the challenges and limits of the phenomenological
description, including the dark degeneracy, which illustrates a fundamental limitation of the
framework that reflects the limitations of cosmological measurements when trying to reconstruct
the physics of the different constituents of the Universe. We conclude this mini-review with a
summary and an outlook on future developments
in section 6.
2 Initial questions
In any metric theory of gravity, the distances are essentially unique, thanks to a theorem
proved in 1933 by Etherington that shows that for this class of theories the relation between
luminosity distance and angular diameter distance,
d_L(z) / d_A(z) = (1 + z)²    (2.1)
(where z is the redshift) always holds [18]. This means that we can test the luminosity
distance data with the help of angular diameter distance data, and vice versa [19–21]. Since
the two types of data agree well even for today’s precise measurements [22] this limits a
wide range of systematic effects that affect only one of the two distances. Examples on the
luminosity distance side include explanations for the observed dimming of supernovae like
replenishing grey dust or ultralight axions [23]. More generally speaking, it is very difficult
to explain the supernova data by doing unpleasant things to the photons, since this would
generically break the distance duality. This leaves effectively only the option to change the
behavior of the metric, which is of course exactly what dark energy models do (but see also
[24] for interesting conclusions on the metric with the help of distance duality). For this
reason it is very unlikely that the distance data is “just” wrong.
where N = ln(a) is the number of e-foldings relative to today, and ΩΛ ≈ 0.73. Let us call
the transition the period from ΩΛ (a) = 0.05 to ΩΛ (a) = 0.95. We find that this corresponds
to N ∈ [−1.31, 0.65], i.e. in total about two e-foldings, or an expansion of the Universe
by a factor of 7.4. If we were more conservative and defined the transition to go from
ΩΛ (a) = 0.01 to ΩΛ (a) = 0.99 then the transition takes about three e-foldings, corresponding
to an expansion by a factor of 20. I leave it to the reader to judge whether we should consider
this very fine-tuned compared to seven e-foldings (factor of 1100) from the emission of the
cosmic microwave background radiation (CMB) to today, or the roughly 74 e-foldings between
the Planck time and today.
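As a quick numerical cross-check of these numbers, one can use a few lines of Python; the script assumes a flat universe containing only matter and Λ, so that ΩΛ(a) = ΩΛ/[ΩΛ + (1 − ΩΛ)a⁻³] (this explicit form is our assumption for the example, not quoted from the text above).

import numpy as np

Omega_L = 0.73  # dark energy density parameter today

def a_at(x):
    # invert Omega_L(a) = x for a flat matter + Lambda universe:
    # a^3 = x * Omega_m / (Omega_L * (1 - x))
    Om = 1.0 - Omega_L
    return (x * Om / (Omega_L * (1.0 - x)))**(1.0 / 3.0)

for lo, hi in [(0.05, 0.95), (0.01, 0.99)]:
    a1, a2 = a_at(lo), a_at(hi)
    N1, N2 = np.log(a1), np.log(a2)
    print(f"Omega_L from {lo} to {hi}: N in [{N1:.2f}, {N2:.2f}], "
          f"{N2 - N1:.2f} e-foldings, expansion factor {a2 / a1:.1f}")

# for comparison: seven e-foldings from the CMB to today give a factor exp(7)
print(np.exp(7))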
These problems notwithstanding, it is still the case that the cosmological constant is in
agreement with all the data, and in addition it remains the best-motivated model. Is it really
worth investigating other possibilities? Apart from the basic truth that in order to test a
model, we need to go beyond it, there is another reason why I think that the study of dark
energy is worthwhile: according to the cosmological standard model, the current period of
accelerated expansion is not the first one. The standard model postulates that in the very
early Universe, it was also accelerated expansion, called inflation, that created the seeds of
the structure that we see around us. Was inflation due to a cosmological constant? It cannot
have been a “true” cosmological constant, since inflation ended. But can we test whether it
was a kind of pseudo-static vacuum energy, or something dynamical?
Inflationary dynamics is usually parameterized in terms of slow-roll parameters, with
the first two (in the Hamilton–Jacobi formalism, see e.g. [27]) defined as
ε_H = 2 M_Pl² (H′/H)² ,   η_H = 2 M_Pl² H″/H .    (2.3)
Here a prime denotes a derivative with respect to the scalar field φ and we used the reduced
Planck mass M_Pl² = 1/(8πG). With H′ = Ḣ/φ̇ and with the help of the Friedmann equation (3.4)
and the conservation equation (3.5) as well as the expressions for energy density and pressure
of a canonical scalar field (4.2) we find φ̇ = −2 M_Pl² H′ and
1 + w = (2/3) ε_H .    (2.4)
The equation of state during inflation is therefore directly given by the first slow-roll
parameter. To lowest order in slow-roll this is also related to the tensor-to-scalar ratio by r = 16 ε_H.
Without any further work we can deduce that, since primordial gravitational waves have not
been observed (yet?), there is no observational requirement for a deviation from w = −1
during inflation. The upper limit on r from the seven-year Wilkinson Microwave Anisotropy
Probe (WMAP) data for a flat ΛCDM model without running is about 0.36 [28], corres-
ponding to a maximum deviation from w = −1 of 0.015, and adding additional data would
tighten this limit further.
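The number quoted here follows from the relations above in one line:

1 + w = (2/3) ε_H = (2/3)(r/16) = r/24 ≲ 0.36/24 = 0.015 .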
So far it looks like inflation also supports an effective cosmological constant, with w = −1
at the percent level. However, we can do better by looking at the spectrum of scalar
perturbations: the scalar spectral index n_s is given to lowest order in slow roll by

n_s − 1 = 2 η_H − 4 ε_H .    (2.5)

A deviation of the scalar spectral index from the Harrison-Zel’dovich (HZ) case (n_s = 1) can
be caused by the second slow-roll parameter η_H. Thus even if at a given time ε_H ≈ 0, it is
still possible to obtain n_s ≠ 1 through a non-zero η_H. But the rate of change of w is also
linked to the first two slow-roll parameters,
d ln(1 + w)/dN = d ln ε_H/dN = −2(η_H − ε_H) ,    (2.6)
where N = ln a is again the number of e-foldings. Hence if Planck confirms the deviation
from HZ that WMAP sees at the 2.5σ level, then ε_H and η_H cannot both be zero. And if
not both are zero, then either w ≠ −1 or w is changing with time during inflation, or both.
In either case, the result is not compatible with an effective cosmological constant for the
time period when the observable scales left the horizon, and we will be able to conclude that
inflation was driven by a kind of early dynamic dark energy. The Planck measurement of ns
may therefore lead to the first direct experimental evidence of the existence of dark energy
dynamics [29].
1. The Universe contains the standard model particles, a dark, cold, matter-like component and a cosmological constant.
2. The large-scale evolution of the Universe is described well by General Relativity (GR).
3. The metric of the Universe is, on large scales, close to the homogeneous and isotropic FLRW form.
Let us start with the last assumption. Some theoretical justification is given by the
Cosmological (or Copernican) Principle that demands that there are no preferred observers in
our Universe, implying that it should be close to homogeneous and isotropic. Observationally,
the CMB looks very uniform and supports the Cosmological Principle. If the Universe was
exactly homogeneous and isotropic then GR implies that the metric is of the FLRW form.
A crucial question therefore is whether the cosmological evolution of a state that is initially
close to FLRW can induce large changes from the expected evolution of the averaged metric.
This scenario is often called backreaction [30–33], and since it is discussed elsewhere [16] I
will not consider it further². The only question I want to raise here is why in such a scenario
the apparent evolution of the Universe should be close to ΛCDM for all existing probes. It
is not obvious why the backreaction scenario should evolve towards a de-Sitter like state. So
² I include here models that explicitly violate the Copernican Principle through the use of a fundamentally
inhomogeneous metric like LTB [34–39] – in my personal opinion, these models are more likely to represent
effective descriptions of the backreaction scenario [40] rather than fundamental models themselves, although
large voids could potentially have been formed during inflation [41].
although backreaction models can potentially avoid some of the fine-tuning issues inherent
in ΛCDM, they seem subject to others that are not of a theoretical but of an observational
nature.
The other two assumptions are more closely related to the topic of this review, namely
that GR may be modified on large scales, and that there may be more in the Universe than
the particles that we know about (plus the dark matter). Here I will concentrate on the first
assumption, not least motivated by the discussion in the last section, namely that during
inflation we seem to observe a behavior that already requires extra ingredients beyond those
found in ΛCDM. But as we will see, the formalism that we will put in place in the next
section is equally able to deal with modifications of GR, at least in an indirect way.
3 A phenomenological description
3.1 Overview
Given the range of different explanations, it is useful to put in place a general and unified
description of possible observations. In this way, we can analyze the data without the need
to assume a specific model, and thus without imposing in advance limitations on what can
be tested.
The best way to build such a framework is by reconstructing the metric perturbations.
Here we will restrict ourselves to scalar perturbations on a flat background for simplicity, but
the approach can be generalized (and in general will need to be generalized once we consider
non-linear perturbations as then the scalar-vector-tensor split is no longer conserved). The
metric perturbations are good variables since they describe all observations that rely on
gravity alone, specifically all cosmological probes³. Once the metric is given (as a function
of space and time), we know that the observations are now also fixed as the metric describes
the motion of both light and of the test particles (excluding astrophysical effects on small
scales that we will neglect as we concentrate on observations on large scales k ≲ 0.1/Mpc –
see sections 5.2 and 5.3 for short remarks concerning non-linear effects on smaller scales).
The basic outline is then as follows: the line element that we will consider is

ds² = a(η)² [ −(1 + 2ψ) dη² + (1 − 2φ) dx² ] .    (3.1)
Here η is conformal time, and we will denote the physical time as t, with dt = adη. We
therefore have one scale factor a(t) and two metric potentials φ(x, t) and ψ(x, t) that we need
to know in order to fix the cosmological evolution of the Universe.
Through the Einstein equations,

G_µν = 8πG T_µν ,    (3.2)
we can link the quantities in the metric to a physical description of the constituents of the
Universe. We can do this not only for a “typical” dark energy model where an extra compon-
ent included in the total energy-momentum tensor on the “right hand side” of the Einstein
equations is responsible for the accelerated expansion, but also for models where the Einstein
equations themselves are modified. Even in that case we can use the observational fact that
we are 3+1 dimensional beings to always project the equations down to 3+1 dimensions,
³ I exclude here observations of non-gravitational physics like those of spectral lines in quasars that probe
the variation of fundamental constants, even though such observations would provide a powerful way to probe
the presence of light fields [42]. See e.g. [43] for a review on varying constants and cosmology.
where we can use the (fundamental or induced) metric to construct the Einstein tensor Gµν
and then build an effective dark energy-momentum tensor by subtracting the Tµν of the
known components (baryons, radiation and neutrinos):
X_µν ≡ M_Pl² G_µν − T_µν .    (3.3)
The new effective total energy-momentum tensor on the right hand side is conserved (meaning
∇_µ X^µ_ν = 0) just because the Einstein tensor obeys the Bianchi identity ∇_µ G^µ_ν = 0 and
because the EMT of the known particles satisfies ∇_µ T^µ_ν = 0.
How should we interpret this framework? In general, we need to interpret observations
within a theory. Without any theory, we do not know at all how to connect the observed
distribution and motion of e.g. galaxies with physics. We expect that the correct theory is
similar to GR (and quite possibly is GR). So we just start by assuming this to be the case, and
derive the consequences that arise from this assumption. If the resulting energy-momentum
tensor looks suspicious, then we will have to consider the possibility that GR may have to be
modified. We will discuss in section 4.6 what such a suspicious signature of modified gravity
could be.
If we therefore found that p = ρ/3 ⇔ w = 1/3 then we could conclude that the Universe is
filled with a gas of radiation.
Model          Equation of state                                  Dark sector parameters   χ²_min
Constant w     w = w0                                             1                        391.3
Linear (CPL)   w = w0 + (1 − a) wa                                2                        312.1
Quadratic      w = w0 + w1(1 − a) + w2(1 − a)²                    3                        309.8
Cubic          w = w0 + w1(1 − a) + w2(1 − a)² + w3(1 − a)³       4                        309.6
ΛCDM           w = −(1 − Ωm)/[Ωm a⁻³ + (1 − Ωm)]                  1                        311.9
Table 1. Number of parameters of the total dark sector equation of state (i.e. dark matter and dark
energy) and best-fit chi-squared for various parameterizations of w, for background-only data (SN-Ia,
CMB peak position, BAO and H0 , see discussion in section 3.2). The constant equation of state is
ruled out while all others, including the ΛCDM equation of state, fit this data similarly well.
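As an aside, the ΛCDM row of the table follows in one line from treating the dark sector as pressureless matter (ρ_m = ρ_m,0 a⁻³, p_m = 0) plus a cosmological constant (p_Λ = −ρ_Λ):

w = (p_m + p_Λ)/(ρ_m + ρ_Λ) = −ρ_Λ/(ρ_m,0 a⁻³ + ρ_Λ) = −(1 − Ωm)/[Ωm a⁻³ + (1 − Ωm)] .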
What constraints on w do we get from cosmological data? The basic approach is simple:
we just select a function w(a) and compute H(a) with the help of the Friedmann equation
(3.4) and the conservation equation (3.5). Based on H(a) we can then compute predictions
for cosmological measurements that involve only the background geometry like luminosity
and angular diameter distances. We can then use for example a Markov-Chain Monte Carlo
(MCMC) method to evaluate the goodness of fit of the predictions with a likelihood function
that is based on observations of these distances (see section 4.5 for an explanation of the
MCMC method). In principle this is straightforward, but there are some technical questions.
One important question is how to choose the function w(a). There are many different ways
to parameterize the equation of state parameter, for example as a function of redshift z [44–
46] or as a function of scale factor a [47–50]. Although in principle both descriptions are
equivalent, one has to be careful to keep the parameterization well behaved to z ≳ 1000 when
using high-redshift data, e.g. from the CMB [51]. This is less of a problem with functions
of the scale factor, since it varies only in the interval a ∈ [0, 1], see [52] for a discussion
of different parameterizations and potential problems. An elegant approach is given by the
Principal Component Analysis (PCA), pioneered in a cosmological context by [53]. In PCA
the function is expanded in a set of basis functions (for example in bins in a, but any functional
basis will do) and the covariance matrix of the coefficients is diagonalized. The eigenfunctions
then provide uncorrelated parameterizations of w, with the eigenvalues giving the precision
with which those functions of w can be measured. The drawback of PCA is that it depends
on the data set used. Yet another possibility that has been investigated more recently is to
use a Gaussian process for modeling w(z) [54]. For now we simply use a Taylor expansion of
w in a and increase the order until the goodness of fit no longer increases significantly.4
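To make the machinery concrete, here is a minimal sketch (in Python, not the code used in [58]) of the background quantities that enter such a fit: a quadratic w(a), the corresponding energy density from the conservation equation, H(a) from the Friedmann equation, and a luminosity distance. The split into matter plus a single dark component carrying this w(a), and all parameter values, are illustrative assumptions.

import numpy as np
from scipy.integrate import quad

H0 = 70.0          # km/s/Mpc, illustrative value
Om = 0.27          # matter density parameter, illustrative value
c  = 299792.458    # speed of light in km/s

def w_of_a(a, w0=-0.9, w1=0.1, w2=0.0):
    # quadratic expansion of the equation of state in (1 - a)
    return w0 + w1 * (1 - a) + w2 * (1 - a)**2

def rho_de_ratio(a):
    # rho_DE(a)/rho_DE(today) from the conservation equation:
    # rho(a) = rho_0 * exp( 3 * int_a^1 [1 + w(a')] / a' da' )
    integral, _ = quad(lambda ap: (1 + w_of_a(ap)) / ap, a, 1.0)
    return np.exp(3.0 * integral)

def H(a):
    # Friedmann equation for a flat universe with matter + dark component
    return H0 * np.sqrt(Om * a**-3 + (1 - Om) * rho_de_ratio(a))

def luminosity_distance(z):
    # d_L = (1 + z) * c * int_0^z dz'/H(z') for a flat universe, in Mpc
    integral, _ = quad(lambda zp: 1.0 / H(1.0 / (1.0 + zp)), 0.0, z)
    return (1 + z) * c * integral

print(luminosity_distance(1.0))

In an MCMC analysis one would compare such distances (and the corresponding angular diameter distances, BAO and CMB scales) with the data for many sampled values of the parameters (w0, w1, w2, Ωm).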
Another question is what data to use. Here one needs to be careful to avoid data sets
that require the calculation of perturbations, the necessary formalism for that is discussed
in the following section. In [58] we used type-Ia supernova data (SN-Ia, we used the Union
sample [59]), the peak position in the CMB [60] and Baryonic Acoustic Oscillations (BAO,
[61]), as well as the SHOES measurement of H0 [62]. We then found the best-fit χ2 values
listed in Table 1 (setting the spatial curvature to zero). For comparison we also list ΛCDM
which uses the single parameter Ωm . For this data, a quadratic expansion of w(a) appears
sufficient; we plot in Fig. 1 a random sample of such quadratic curves to give an idea of
⁴ An alternative approach to modeling the evolution of w parameterizes instead the expansion rate H or
the dark energy density ρDE [55–57].
the range of allowed values of w. Although ΛCDM provides a slightly worse fit than the
quadratic model, it has fewer parameters and so passes this test. Only a constant total
equation of state is really ruled out – we remind the reader that we consider here the total
dark sector equation of state (i.e. dark energy and dark matter together) since the Einstein
equations relate the geometry to the total energy-momentum tensor. We show in Fig. 1
a random sample of the best fitting quadratic w(a) curves. At a ≲ 10⁻⁴ the Universe is
radiation dominated which implies w = 1/3, but the curves in the figure start later, during
matter domination, and we can see that indeed p ≈ 0 as expected for a Universe filled with
pressureless dust. But already at relatively high redshift w begins to decrease, and at a ≈ 0.5
the expansion starts to accelerate as w drops below −1/3. Today we have that w ≈ −0.8,
but with a large spread, mainly for two reasons: there is little data at very low redshifts
(where anyway the local dynamics starts to be important), and w affects the distances only
through a double integration which smoothes its impact strongly.
[Figure 1: w plotted against the scale factor a.]
Figure 1. An illustration of the form of the best fitting quadratic w(a) curves. A sparse sampling of
400 chain elements, colour coded by likelihood with the red (lighter) shading the highest, is shown.
(Figure from [58])
An additional subtlety here is that we are usually not interested in the total pressure
p, but instead in the pressure of the different constituents of the Universe. If for example
the Universe was composed of matter and a cosmological constant, then we would find a
total w somewhere between 0 and −1; see the last row of Table 1 for the form of w in
ΛCDM. Instead, we would like to find two components, one with w = 0 and a second one
with w = −1. As we will discuss in more detail in section 5.1 this is in general not possible
as the Einstein equations only link the geometry with the total energy momentum tensor
(EMT). For the time being we will assume that the dark matter itself has been observed so
that we can separate the total EMT into several components,
T_µν^(tot) = T_µν^(radiation) + T_µν^(baryons) + T_µν^(DM) + T_µν^(DE) .    (3.7)
We put both electromagnetic radiation and neutrinos into the “radiation” EMT as we are not
strongly interested in these components here. DM denotes the non-baryonic dark matter,
and DE whatever it is that is responsible for the observed accelerated expansion of the
Universe (including modifications of gravity). We are then interested in the equation of
state parameter wDE that characterizes the pressure of the DE component. We will discuss
constraints on wDE (a) in section 4.5 in the context of a canonical scalar field model for which
the two contributions can be separated.
But even when we have measured wDE (a) we will in general still not know what exactly
the dark energy is, except maybe if the result matches one of the simple cases given in
Eq. (3.6). In general there are many different possibilities that can lead to the same w.
For example, a time-evolving w could be due to a scalar field as well as a modification of
gravity. Even if, as is indeed the case, observations are consistent with w = −1, we are still in
trouble: we would like to interpret this result as a sign that the dark energy is a cosmological
constant, but for the reasons discussed earlier (and elsewhere in this volume), we think that
the cosmological constant is a problematic explanation for the dark energy.
Hence we would like to learn more about the nature of the dark energy. This is possible
by exploiting the extra information encoded in the evolution of the perturbations in the
Universe as we will discuss next. In addition, whenever we use data that requires calculating
the evolution of perturbations (like e.g. the CMB, the full galaxy distribution or weak lensing)
then we must provide a model that fixes the perturbation evolution. There is no “model
independent way” to just use w in that case. We either need to pick an explicit physical
model or, as we will see in a moment, we need to choose precisely two extra functions to fix
the evolution of the scalar perturbations to linear order. Whenever we think that we can get
away by using only w, or maybe w and the growth rate of matter perturbations, then we fix
one or two other quantities to some value (and in the case of multiple data sets possibly to
several different, incompatible values).
where δ = δρ/ρ is the density contrast, and we use V = i k_j T^j_0 /ρ as the scalar velocity
perturbation (the reason for this choice will become clear in section 4.3). A prime designates
a derivative with respect to the scale factor a. The right-hand side of the conservation
equations also contains the pressure perturbation δp (∼ the perturbation of the diagonal
part of the space-space part of the EMT) and the anisotropic stress σ (∼ the perturbation
of the off-diagonal part of the space-space part of the EMT). These four variables (δρ, V , δp
and σ) fully parameterize the scalar first-order perturbations of a general energy momentum
tensor. The conservation equations need to be complemented by the Einstein equations.
One of them gives the φ potential in terms of the fluid content in a form reminiscent of the
Newtonian Poisson equation,
k²φ = −4πG_N a² Σ_α ρ_α [δ_α + (3aH/k²) V_α] = −4πG_N a² Σ_α ρ_α ∆_α .    (3.10)
The sum over α on the right hand side runs over all fluids, and ∆ is the comoving density
contrast (while δ is the density contrast in the longitudinal gauge). The second potential is
then related to the first through the anisotropic stresses present in the fluids,
k²(φ − ψ) = 12πG_N a² Σ_α (1 + w_α) ρ_α σ_α .    (3.11)
We notice that we have (at the perturbation level, i.e. without counting the background
quantities) two functions that determine the (scalar part of the) metric, φ and ψ, and four
functions that describe the behavior of a fluid and that define the (scalar part of the) energy-
momentum tensor, δ, δp, V and σ. Hence in total we have six functions and four equations,
so that two of the functions cannot be determined. Given the structure of the perturbation
equations where the two conservation equations describe the evolution of δ and V , and the
two Einstein equations determine φ and ψ, it makes sense to consider δp and σ as the two free
functions that describe the type of fluid present in the Universe (although this is in principle
a choice to be made). This is very similar to the background case where the evolution of
ρ was determined by the conservation equation and H was given by the Einstein equation,
while we considered p (or w) as a free function describing the nature of the fluid.
Is it possible to construct a quantity like w that is of order unity and easier to interpret
than p, to describe δp? For this purpose one often defines a rest-frame sound speed c2s ,
defined through δp = c2s δρ for the comoving pressure and density perturbation. In terms of
the quantities in the conformal Newtonian gauge used here, this relation becomes
δp = c_s² ρδ + [3aH(c_s² − c_a²)/k²] ρV ,    (3.12)
where c2s is still the rest-frame sound speed, and where c2a ≡ ṗ/ρ̇ is called the adiabatic sound
speed of the fluid. A fluid that has no internal degrees of freedom would have c2a = c2s ,
but in general such fluids are not viable dark energy models unless they mimic ΛCDM very
closely due to the perturbation evolution [68, 69]. Models with internal degrees of freedom
like scalar field models (where both the field and its time derivative can be seen as different
degrees of freedom since the field obeys a second order equation of motion) have in general
c_s² ≠ c_a². We will see in section 4.4, where we discuss the evolution of the perturbations
in a generalized Quintessence model, that the role of the rest-frame sound speed is really
to describe pressure support, i.e. it defines the existence of a sound horizon below which
the density perturbations do not grow. The sound speed here is not necessarily related to
the actual propagation velocity of perturbations, although the physics of pressure support is
usually related to the speed with which perturbations can adjust and thus the sound speed
coincides with the propagation velocity for many models, most notably for the Quintessence
and K-essence models that we will discuss below. The sound speed also does not always
provide a simpler description than δp; for an explicit example where this is not the case, see
the Quintom model in section 4.3 below.
Finally, as already discussed in general terms in the introduction, the energy momentum
tensor considered here may be wholly or partially an effective, fictitious energy-momentum
tensor. In this case the fluid quantities δp and σ will also be effective quantities. (In these
cases as well we should not expect a simple c2s .) Nonetheless, we still can use the same formal-
ism as these functions provide a general and complete description of first order perturbation
theory around a FLRW metric. And as we will discuss in section 4.6 especially the presence
of a non-zero anisotropic stress at late times (where contributions from relativistic particles
to the total EMT are small) is a good indicator that we are dealing with a modified gravity
model.
Here we also used the total “geometric” equation of state parameter
w_G = −1 − (2/3) Ḣ/H² .    (3.14)
If we use this pressure perturbation together with the anisotropic stress inferred from the
difference of the potentials and the background evolution given by wG then we find again the
same evolution of the metric potentials. The equivalence thus holds in both directions.
It is also possible to parameterize the geometric degrees of freedom through dimension-
less variables that link two perturbative quantities, similarly to the way wG links p and ρ.
One example [71] is given by the pair {Q, η}, defined through

k²φ = −4πG_N a² Q ρ_m ∆_m ,    (3.15)
ψ = (1 + η) φ .    (3.16)
In this way, Q parameterizes both a deviation from the Poisson equation due to a modification
of GR and any extra contributions from perturbations beyond those of matter5 . The latter
is the case for example if we have a Quintessence-like model (see section 4.1) for which the
dark energy perturbations are non-zero and will lead to Q 6= 1. The variable Q can also
model a variation of the gravitational constant GN . Of course, in order to be general, both
Q and η need to be functions of scale and time. This leads to an additional complication,
namely that a multiplication in Fourier space corresponds to a convolution in real space, and
vice versa. The conventional choice when limiting ourselves to first order perturbations is to
define Q as a multiplicative factor in k space where the equations are easier to deal with as
the modes decouple.
The choice of Q and η is not unique at all. The common theme is that we need to choose
two non-degenerate functions to parameterize the extra perturbations. One often defines a
parameter Σ that is relevant for lensing through Σ = Q(1 + η/2) so that the lensing potential
is given by
k²Φ = k²(φ + ψ) = −8πG a² Σ ρ_m ∆_m .    (3.17)
Another often used parameter is µ = (1 + η)Q which quantifies the impact of the dark energy
perturbations on a Poisson equation in ψ in the same way as Q for φ. A compilation of
different conventions can for example be found in [72], see also [73–75] for other discussions.
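As a small illustration, the mapping between the {Q, η} and {µ, Σ} conventions quoted above can be written as two short helper functions (the function names are ours, and only the relations Σ = Q(1 + η/2) and µ = (1 + η)Q are used):

def sigma_mu_from_Q_eta(Q, eta):
    # lensing combination Sigma and "psi-Poisson" combination mu from {Q, eta}
    Sigma = Q * (1.0 + eta / 2.0)
    mu = (1.0 + eta) * Q
    return Sigma, mu

def Q_eta_from_sigma_mu(Sigma, mu):
    # inverse mapping: eliminating eta gives Q = 2*Sigma - mu
    Q = 2.0 * Sigma - mu
    eta = mu / Q - 1.0
    return Q, eta

print(sigma_mu_from_Q_eta(1.0, 0.0))   # GR with no extra perturbations: (1.0, 1.0)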
For use in data analysis, some combinations may be better suited than others because
they may be more or less correlated. For example [76, 77] found that {µ, Σ} is a good choice
for weak lensing, CMB and large scale structure data. We have also left out the question of
how to parameterize the time and scale dependence of these functions. An early example is
the Parameterized Post Friedmann framework [78] (with an even earlier example, pre-dating
dark energy, in [79]), while the papers listed above provide further possibilities. The most
general and flexible approach is probably the principal component analysis (PCA) approach
already mentioned in the context of parameterizing w. In PCA the two functions are specified
in a number of bins in k and a, and the resulting covariance matrix between the bins is then
diagonalized to provide uncorrelated measurements of the resulting eigenfunctions.
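A minimal numerical sketch of the PCA step, with a made-up matrix standing in for the real Fisher or covariance matrix of the binned function values: diagonalizing it yields the uncorrelated eigenmodes and the precision with which each can be measured.

import numpy as np

nbins = 10                       # bins of the free function (e.g. in a or k)
rng = np.random.default_rng(1)

# stand-in Fisher matrix for the binned values (symmetric, positive definite);
# in a real analysis this would come from a forecast or the data likelihood
A = rng.normal(size=(nbins, nbins))
F = A @ A.T + nbins * np.eye(nbins)

# diagonalize: eigenvectors are the principal components (uncorrelated combinations),
# eigenvalues lambda_i give the error 1/sqrt(lambda_i) on each component
eigval, eigvec = np.linalg.eigh(F)
order = np.argsort(eigval)[::-1]          # best-measured modes first
for i in order[:3]:
    print("error on mode:", 1.0 / np.sqrt(eigval[i]))
    print("mode shape   :", np.round(eigvec[:, i], 2))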
⁵ Here we mean dark energy perturbations, i.e. we implicitly assume that the parameterization is used
only at late times. At early times the radiation perturbations may be important, in which case they should
probably be included explicitly.
3.5 Schematic Measurements
Having discussed the freedom available, we also have to worry whether the free functions
can be measured at all in cosmology. The aim here is not to discuss in detail the different
observations. The goal of this short section is merely to check whether it is possible to meas-
ure the gravitational potentials in principle. Even this apparently straightforward question
contains many different pitfalls. For example, dark energy might couple directly to baryons
and light, and possibly do so in different ways. In this case we are going to reconstruct a
metric that is not actually the metric we wanted to reconstruct. On the other hand, we could
have transformed the action so that at least e.g. baryons are not coupled, and in general we
assume that we did this. We will also assume that the constraints from direct tests for fifth
forces affecting photons and baryons are applicable also on the scales of interest and limit
such couplings to a level where they are not relevant.
If we are able to use the propagation of light and of baryons, then a possible high-level
scheme can proceed as follows: the propagation of light is governed by the perpendicular
derivative of the lensing potential φ + ψ, so that with the help of weak lensing we are able to
constrain this combination of the potentials (this is also the reason why the Σ parameter is a
good choice when using weak lensing data). Non-relativistic particles on the other hand are
accelerated by ∇ψ (the contribution from φ is suppressed by the particle velocity). Observing
the peculiar motion of objects on large scales, where they are not affected by gas-physics, it
is thus possible to constrain ψ. One possibility is to use redshift space distortions on large
scales for this purpose. In this way we can at least in principle obtain both φ and ψ from the
observations. Once we have measured both potentials independently, then we can e.g. read
off whether the anisotropic stress is zero or not, since σ is directly related to the gravitational
slip Π = φ − ψ through Eq. (3.11).
We have not studied the measurements in any detail, but based on the above discussion
we can conclude that a combined large-scale structure and weak-lensing survey can in prin-
ciple constrain separately the two perturbation variables that characterize the dark sector.
This means that if we include these variables in the likelihood of the observations, then we
expect to obtain some constraints. How strong the constraints will be can be studied for
example with the help of the PCA approach mentioned in the last section.
Here we have not yet used galaxy clustering. The reason is that in modified gravity
models it is non-trivial to determine the bias between the clustering of galaxies and the
clustering of the dark matter (e.g. [70]). However, as we have seen, we do not need this
measurement, at least in principle.
4.1 Quintessence
Let us consider how the basic canonical scalar field model looks in the phenomenological
description discussed here. The action of the canonical scalar field (often called Quintessence
or cosmon in a dark energy context [80–82]) ϕ is given by
S = ∫ d⁴x √−g [ −(1/2) ∂_µϕ ∂^µϕ − V(ϕ) ] .    (4.1)
From this action we can compute the evolution equation of ϕ through variation with respect
to the field, and the energy momentum tensor through variation with respect to the metric
(if we also added the Einstein-Hilbert action then the variation with respect to the metric would
give the Einstein equation). We usually split the field into a homogeneous background field (a
kind of condensate) and perturbations, ϕ(k, t) = ϕ̄(t) + δϕ(k, t), and linearize the equations
in the latter.
For the background quantities, one obtains
ϕ̄̈ + 3H ϕ̄̇ + dV/dϕ = 0 ,   ρ = ϕ̄̇²/2 + V(ϕ̄) ,   p = ϕ̄̇²/2 − V(ϕ̄) .    (4.2)
We see that the energy density and pressure depend on the potential and on the evolution of the
field, so a Quintessence field can dynamically change w. But we also find that ρ + p = (1 + w)ρ =
ϕ̄̇² ≥ 0, implying that w ≥ −1 as long as ρ > 0.
It is furthermore easy to show that the evolution equation for ϕ̄ together with the
expressions for ρ and p is exactly the conservation equation ρ̇+3H(ρ+p) = 0. The analogous
computation can be done with the equations for the perturbations δϕ, and one finds that
they correspond exactly to the fluid perturbation equations in section 3.3 for a pressure
perturbation defined through (3.12) with c2s = 1 and no anisotropic stress, σ = 0 [66, 83]. To
first order in perturbation theory, a canonical scalar field is therefore behaving just like a fluid
with speed of sound equal to the speed of light, and instead of evolving the Klein-Gordon
equation of the field, we can just use the fluid perturbation equations.
There are however many aspects that are not captured by the fluid description of Quint-
essence. For example, the formalism gives no indication what equation of state w to use.
We would expect in general the physics to specify a scalar field potential, which then allows us
to integrate the evolution equation, which in turn tells us what w is. Of course this can be
reversed: given a set of data, we can use the fluid description to constrain w, which in turn
places constraints on the allowed shape of V [84–86]. Another important point that is missing
from the fluid description is the existence of fixed points and attractors in the evolution of
ϕ, especially in the presence of other fluids. Scalar field models with exponential potentials
are for example able to adjust their equation of state to follow the background evolution at
a fixed fraction of the energy density. This allows for a partial solution to the coincidence
problem: The scalar field starts out with a fairly arbitrary energy density early on, which
then either decays rapidly or freezes until the scalar field energy density roughly matches
the radiation energy density. The scalar field then “follows” radiation and later matter. In
this way it is natural that the scalar field today has an energy density comparable to the
one in matter. Unfortunately it has proven very difficult to engineer a natural exit from this
scaling phase; in general one needs to place a feature in the potential just at the right point.
A very nice discussion of the scalar field evolution described as a dynamical system with a
discussion of scaling properties for different potentials can be found for example in [7].
where K(X, ϕ) is a function of the scalar field ϕ and of the standard kinetic term X, and we recover
standard Quintessence through K(X, ϕ) = X − V(ϕ). As for Quintessence, there is a vast literature on
K-essence models, here I only summarize some standard results (see e.g. [13]). As usual
through the variation of the action we can find the equation of motion and the energy mo-
mentum tensor. One finds that at the background level the energy density and pressure are
given by
p = K ,   ρ = 2X K_,X − K ,    (4.4)
where we dropped the dependence of K on X and ϕ and where we used the shortened
notation K,X = dK/dX. The equation of state is therefore
w = p/ρ = K/(2X K_,X − K) ,    (4.5)
which for Quintessence (where K_,X = 1) coincides with the equation of state derived
from Eq. (4.2). When studying the perturbations, one finds that also K-essence models are
described by a sound speed given by
c_s² = p_,X/ρ_,X = K_,X/(2X K_,XX + K_,X) ,    (4.6)
and vanishing anisotropic stress, σ = 0. K-essence models therefore generalize the Quint-
essence models to {w, c2s , σ = 0}, where now w and c2s are free. The sound speed in K-essence
models is no longer equal to the speed of light. It can be larger or smaller than 1 and also
vary over time. We note that for the Quintessence case we recover again the expected result
c_s² = 1, while for the class of models with K(X, ϕ) = X^α − V(ϕ) we have c_s² = 1/(2α − 1), i.e.
we can choose a constant sound speed through the choice of α.
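The last statement is a two-line consequence of Eq. (4.6):

K_,X = αX^(α−1) ,   K_,XX = α(α − 1)X^(α−2)
⇒  c_s² = αX^(α−1) / [2X α(α − 1)X^(α−2) + αX^(α−1)] = 1/(2α − 1) .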
An important point (and a main motivation for introducing K-essence) that is not
apparent in a fluid formulation, is that in K-essence models the evolution of the homogeneous
“background” field can be made to depend on the overall expansion rate of the Universe
so that the onset of matter dominated expansion triggers a transition to a later accelerated
expansion stage dominated by the K-essence field. That is, K-essence models can (for suitable
choices of the kinetic function) overcome the coincidence problem of the cosmological constant
by linking the onset of dark energy domination to the earlier onset of matter domination.
As matter perturbations do not grow during radiation domination, structure and therefore
intelligent lifeforms similar to us can also form only after the radiation-matter transition and
so a coincidence between the emergence of species observing the cosmos and the transition to
dark energy dominated expansion is expected in such models. However, a drawback of these
models is that the speed of sound necessarily becomes larger than unity in all models that can
solve the coincidence problem [89]. Whether such a superluminal propagation of information
is problematic is a subject of debate [90, 91]: in general if information can be transmitted
faster than light for all observers, then it is possible to construct closed time-like curves and
so to transmit information into the past. This leads to obvious problems with causality.
On the other hand, if information travels faster than c only with respect to some observers, for
example with respect to the rest-frame of the K-essence fluid on which the perturbations
propagate, then causality problems can be avoided, but Lorentz invariance is broken and we
have introduced an effective aether through the presence of the K-essence condensate.
4.3 Phantom crossing
As seen in section 4.1, a canonical scalar field model cannot cross the “phantom divide”
w = −1. This does not mean however that such a crossing is impossible: we will briefly
discuss a two-fluid example here, and another possibility is mentioned in section 5.1, namely
that coupled dark matter - dark energy models can lead to apparent phantom crossing. In
addition, theories where GR is modified can in general cross as well, since there is no “real”
dark energy field present. When analyzing data, and especially since the data indicates that
w ≈ −1, it is important to not exclude the possibility w < −1 by construction. In this
section we will review at a purely classical level the behavior of the perturbations close to
w = −1 and how to avoid problems when w crosses −1 (at the quantum level phantom
fields would allow for spontaneous vacuum decay, but potentially viable models could still be
constructed, see e.g. [92, 93]). An additional motivation for this section is that it illustrates
explicitly the limitations inherent in parametrizing the pressure perturbations purely in terms
of a rest-frame sound speed: although this is often a good choice that significantly simplifies
the description of the situation, this is not always the case.
When looking at the standard perturbation equations, as e.g. found in [65] where the
velocity perturbations θ are defined through T^i_{0,i} ∝ (ρ + p)θ, one finds terms like
θ̇ = −[ẇ/(1 + w)] θ + …    (4.7)
The division by 1+w looks like a problem, as it would lead to a divergence in θ when trying to
cross w = −1. But since ρ + p was factored out in the definition of θ, the energy-momentum
tensor can stay finite even if θ diverges. Indeed, if the divergence of T^i_0 does not go to zero
at crossing, then θ will necessarily diverge. But this is only an apparent problem and easily
cured by changing slightly the definition of the velocity perturbations, and it is the reason
why we use here V defined through T^i_{0,i} ∝ ρV.
A more severe problem appears when we look at the pressure perturbation parameterized
through a sound speed in the rest-frame. The gauge transformation from the rest-frame to
another frame involves the adiabatic sound speed c2a = ṗ/ρ̇ which can also be written as
c_a² = w − ẇ/[3Ha(1 + w)] .    (4.8)
The adiabatic sound speed necessarily diverges at phantom crossing, except when crossing
with ẇ = 0⁶. Since δp appears directly in the EMT, its divergence implies a potential
singularity in the metric and is not acceptable on physical grounds. The reason for this
divergence is that the rest-frame of the dark energy is badly defined when w = −1, so we
chose to fix the pressure perturbation to a finite value in the one gauge that we should not
have used. Seen in this way, the problem becomes easy to remedy: we just have to keep δp
finite in another gauge, for example in the longitudinal gauge. Then nothing at all happens
at phantom crossing. In practice, this is what we will do later on when using supernova and
CMB data to put constraints on w, allowing also for w < −1. More precisely, we regularize
the adiabatic sound speed by setting
ẇ(1 + w)
c̃2a = w − (4.9)
3H[(1 + w)2 + λ]
⁶ A more detailed analysis shows that for c_s² = 0 the crossing is possible even if ẇ ≠ 0 [83] – interestingly,
this is the same condition as the one found in [94] with the help of effective field theory, and an explicit
example can be found in [95].
where λ is a tunable parameter which determines how close to w = −1 the regularisation
kicks in. A value of λ ≈ 1/1000 appears to work reasonably well [83].
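In practice the regularisation of Eq. (4.9) is a one-line function (a sketch; the function name and argument conventions are ours, with wdot the physical-time derivative of w):

def ca2_regularized(w, wdot, H, lam=1e-3):
    # Eq. (4.9): adiabatic sound speed with the (1+w)^2 + lambda regularisation,
    # so that nothing diverges when w crosses -1; lam ~ 1/1000 as suggested in [83]
    return w - wdot * (1.0 + w) / (3.0 * H * ((1.0 + w)**2 + lam))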
Finally, it is instructive to have a quick look at the behavior of the perturbations in
the “Quintom” model of dark energy [96, 97]. This model consists of two scalar fields, one
with a constant equation of state parameter w1 > −1 and another one with w2 < −1 (which
necessitates changing the sign of the kinetic term, introducing a ghost degree of freedom,
but here we are only concerned with the classical behavior of the perturbations at the linear
level). As the ratio of the energy densities of the two fields scales as ρ2 /ρ1 = a−3(w2 −w1 ) and
since w2 − w1 < 0 we see that the second field becomes more important over time, and so
the total equation of state parameter will evolve from w ≈ w1 at early times towards w ≈ w2
at late times, and will cross w = −1 somewhere in between. Yet since we are just dealing
with two scalar fields with constant w, we also know that nothing strange will happen to the
perturbations. In this system, we can compute in detail all the perturbations and study their
evolution [83]. The total pressure perturbation can be written in terms of the total effective
quantities in the form
Veff
δpeff = ĉ2s,eff δρeff + δprel + δpnad + 3H ĉ2s,eff − c2a ρ̄eff 2
(4.10)
k
where ĉ2s,eff = 1 since this is the sound speed of both fields, δprel is the contribution from the
relative density perturbation of the two fields (corresponding to a gauge invariant relative
entropy perturbation) and δpnad a non-adiabatic contribution from the relative motion of the
fields. We plot in the left panel of Fig. 2 the evolution of these terms as the total equation of
state crosses w = −1. Each contribution individually diverges, but their sum remains finite
and well behaved. If one now tries to extract a total effective sound speed through the usual
formula
δp_eff = c_x² δρ_eff + 3H (c_x² − c_a²) ρ̄_eff V_eff/k² ,    (4.11)
then all the terms get mixed up, and the resulting pseudo-sound speed c2x is plotted in the
right-hand panel of Fig. 2. This pseudo-sound speed that is not connected to any physical
quantity has acquired a scale-dependence and additionally diverges at w = −1.
There are two main points to take away from this section, apart from the phantom-
crossing discussion. Firstly, although we have argued that the different parameterizations
of the degrees of freedom available in the metric and in the energy momentum tensor are
in principle all equivalent, we have been provided here with an explicit example that some
parameterizations are nonetheless better than others, and that the answer to the question of
which one is best depends on the situation. Usually, the rest-frame sound speed c2s is a good
parameter. It manages to capture the main characteristic of a canonical scalar field in a single
number (c2s = 1) while a description in terms of δp would look much more complicated, and
in addition the parameter itself describes a physical object (the propagation velocity of the
scalar field perturbations) of direct relevance to cosmology (as the propagation speed gives
rise to a sound horizon). However, in a situation where the direct physical correspondence is
lost, for example in the two-field Quintom model, the total “rest-frame” sound speed takes a
very complicated form, with scale dependence and divergences. Naively, one would probably
not have allowed such strange behavior, thinking it unphysical! For this reason, it appears
important to use several parameterizations and to test with forecasts whether we expect to
see interesting effects for the models where a given parameterization is simple.
But on the other hand, the phantom crossing example also shows how in some cases
the phenomenological approach allows a straightforward generalization of the behavior of
Figure 2. Left: This figure shows the different divergent contributions to the pressure perturbation,
Eq. (4.10), multiplied by 109 . The relative pressure perturbation is shown as red dotted line, the
non-adiabatic pressure perturbation as green dash-dotted line and the contribution from the gauge
transformation to the conformal Newtonian frame as blue dashed line. Each of the contributions
diverges at the phantom crossing, but their sum (shown as black solid line), and so δp, stays finite.
Right: We plot the apparent sound speed c2x defined by Eq. (4.11) for three different wave vectors,
k = 1/H0 (black solid line), k = 10/H0 (red dashed line) and k = 100/H0 (blue dotted line). Although
the real sound speed is just c2s = 1, the apparent sound speed is scale dependent, diverges at w = −1
and can even become negative. (Figure from [83])
the original model. A canonical scalar field model would never cross w = −1. There is no
choice of the potential that can be made for which w < −1 (as long as the energy density
is kept positive). If we just restricted ourselves to reconstructing Quintessence potentials,
we would never even have realized that the possibility w < −1 exists. This is an example
where the phenomenological approach allows us to probe the available degrees of freedom in a
more model-independent way: let us assume that the true model indeed has w < −1. Then,
if all we knew were scalar fields, we would presumably keep finding that ΛCDM is the best
fit, even though there are other models, that we did not think of, that would be better and
could rule out ΛCDM.
become

δ′ = −(V/(Ha²)) [1 + 9a²H²(c_s² − w)/k²] − (3/a)(c_s² − w) δ + 3(1 + w) φ′
V′ = −(V/a)(1 − 3c_s²) + (k²c_s²/(Ha²)) δ + (k²/(Ha²)) (1 + w) φ    (4.12)

and we only have a single potential given by

k²φ = −4πG a² Σ_j ρ_j [δ_j + (3aH/k²) V_j] ,    (4.13)
since ψ = φ due to the vanishing anisotropic stress. Following [98] we also assume matter
domination so that the background evolution is given by

H² = H_0² Ω_m a⁻³ .    (4.14)
For the matter dominated era, the (growing) solution of the matter perturbations (w = δp =
0) is well known,

δ_m = δ_0 (a + 3H_0²Ω_m/k²) = δ_0 a (1 + 3H²a²/k²)    (4.15)
V_m = −δ_0 H_0 √Ω_m a^(1/2)    (4.16)
k²φ = −(3/2) δ_0 H_0² Ω_m ,    (4.17)
which can be checked easily by inserting the solution into the perturbation equations. Since
we are working with linear perturbation equations, the overall scale is a free parameter, here
called δ0 . It is in general a function of k and is set by initial conditions that should include
the perturbation generation in the early Universe (inflation) and the subsequent evolution
during radiation domination. We also observe that the gravitational potential is constant in
time.
We can now study the scalar field perturbations without the need to look at the actual
field evolution, since these equations are equivalent to the fluid equation with the appropriate
value of the fluid parameters. We first look at perturbations larger than the sound horizon,
k ≪ aH/c_s. In this case, we neglect all terms containing the sound speed in Eq. (4.12),
effectively setting c2s = 0. The solution for the velocity perturbation is (neglecting a decaying
solution ∝ 1/a)

V = −δ_0 (1 + w) H_0 √Ω_m a^(1/2) .    (4.18)
Up to the prefactor (1 + w) this is the same as for the matter velocity perturbations. We
find that this expression is valid on scales larger than the sound horizon even if the sound
speed is non-zero.
It is now straightforward to insert this solution for the dark energy velocity perturbation
into Eq. (4.12). Again setting c2s = 0 we find the solution
δ = δ_0 (1 + w) [a/(1 − 3w) + 3H_0²Ω_m/k²] ,    (4.19)
where we neglected a term proportional to a^(3w), which decays as long as w is negative. Not
surprisingly, this solution also becomes equal to the one for matter perturbations for w → 0.
Relative to the matter perturbations the dark energy perturbations are suppressed by the
factor (1 + w). This factor is necessarily always there, as the gravitational potential terms
contain it. It can be thought of as modulating the strength of the coupling of the dark energy
perturbations to the perturbations in the metric. For w = −1 the dark energy perturbations
are completely decoupled (in the sense that they do not feel metric perturbations – but they
can still produce them if the dark energy perturbations are not zero).
The most important feature of the scalar field perturbations compared to the matter
perturbations is the existence of a sound horizon. Inside the light horizon, the dark matter
perturbations grow linearly with a (until the perturbations become non-linear). The dark
energy perturbations on the other hand will eventually encounter their sound horizon if
c2s > 0. Once inside the sound horizon, they will stop growing. This means that the dark
energy perturbation spectrum is cut off on small scales.
To get a solution on small scales, k ≫ aH/c_s, we start again with the equation for the
velocity perturbation. However, we expect the two terms with k 2 to cancel to a high degree
to avoid large velocity perturbations, or in other words
δ = −(1 + w)φ/c_s² = (3/2)(1 + w) [H_0²Ω_m/(c_s²k²)] δ_0 .    (4.20)
As expected the dark energy perturbations stop growing and become constant inside the
sound horizon. The velocity perturbations are now given simply by using Eq. (4.12) and
inserting Eq. (4.20):
V = −3Ha(c_s² − w) δ = −(9/2)(1 + w)(c_s² − w) [H_0³Ω_m^(3/2)/(c_s²k²)] a^(−1/2) δ_0 .    (4.21)
The extra term in brackets in Eq. (4.12) is not important for the scales of interest here.
As the horizons grow over time, a fixed wave number k will correspond to a scale that
is larger than the light horizon, k < aH, at early times, and eventually it will enter the light
horizon and later the sound horizon. This makes it possible to illustrate the behavior of the
perturbations in the different regimes in a single figure: In the left panel of Fig. 3 we plot
the numerical solution for the dark energy density contrast for k = 200H0 as well as the
expressions (4.19) and (4.20). It is easy to see how the perturbations start to grow inside the
causal horizon but how the growth stops when the sound horizon is encountered and pressure
support counteracts the gravitational collapse.
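The behavior described here can be reproduced by integrating the fluid equations (4.12) numerically. Below is a minimal sketch (not the code behind Fig. 3): a purely matter-dominated background, the constant potential of Eq. (4.17), initial conditions from the super-sound-horizon solutions (4.18) and (4.19), and the illustrative parameter values quoted for the figure.

import numpy as np
from scipy.integrate import solve_ivp

H0, Om = 1.0, 1.0              # units H0 = 1; pure matter-dominated background
w, cs2 = -0.8, 0.01            # dark energy parameters, as quoted for Fig. 3
k = 200.0 * H0                 # wave number of the mode
delta0 = 1.0                   # normalization of the matter perturbation

H = lambda a: H0 * np.sqrt(Om) * a**-1.5                 # Eq. (4.14)
phi = -1.5 * delta0 * H0**2 * Om / k**2                  # constant potential, Eq. (4.17)

def rhs(a, y):
    # dark energy fluid equations (4.12); y = (delta, V), primes are d/da, phi' = 0
    d, V = y
    Ha2 = H(a) * a**2
    dd = (-V / Ha2 * (1.0 + 9.0 * a**2 * H(a)**2 * (cs2 - w) / k**2)
          - 3.0 / a * (cs2 - w) * d)
    dV = (-V / a * (1.0 - 3.0 * cs2)
          + k**2 * cs2 / Ha2 * d
          + k**2 / Ha2 * (1.0 + w) * phi)
    return [dd, dV]

# start deep in matter domination, outside the sound horizon: Eqs. (4.18), (4.19)
a0 = 1e-6
d0 = delta0 * (1 + w) * (a0 / (1 - 3 * w) + 3 * H0**2 * Om / k**2)
V0 = -delta0 * (1 + w) * H0 * np.sqrt(Om) * np.sqrt(a0)
sol = solve_ivp(rhs, (a0, 1.0), [d0, V0], rtol=1e-8, atol=1e-12, dense_output=True)

a_test = 0.1                                              # well inside the sound horizon
print("numerical delta_DE :", sol.sol(a_test)[0])
print("Eq. (4.20) estimate:", 1.5 * (1 + w) * H0**2 * Om / (cs2 * k**2) * delta0)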
In the right-hand panel of Fig. 3 we show the size of δDE relative to δm . There are two
effects: on large scales the dark energy perturbations are suppressed by a factor proportional
to (1 + w) relative to the matter perturbations (and an additional factor ∼ 4 inside the light
horizon), and as there is no sound horizon for perfectly cold dark matter, δm continues to
grow on small scales so that δDE /δm ∝ 1/a inside the dark energy sound horizon. If we want
to express the impact of the dark energy perturbations on the gravitational potentials with
the help of the Q variable, as defined in (3.15), we should in principle consider the rest-frame
density perturbations; this is a negligible change on small scales, while on large scales (and for
w ≈ −1) it just gives $\Delta_{DE} \approx \Delta_m (1+w)/4$. Additionally we need to take into account the
relative mean density, $\rho_{DE}/\rho_m = (\Omega_{DE}/\Omega_m)\, a^{-3w}$ for constant w. The combination of these
effects can then be expressed by the interpolating, approximate formula
$Q - 1 = \frac{1-\Omega_m}{\Omega_m}\,(1+w)\,\frac{a^{-3w}}{1 - 3w + \frac{3}{2}\nu(a)^2}$ (4.22)
[Figure 3: left panel, δDE as a function of a; right panel, δDE/δDM as a function of a; see caption below.]
Figure 3. Left: The figure shows the behavior of the variable δDE (in a universe without radiation).
The black dot-dashed line is the numerical solution with $c_s^2 = 0.01$ and w = −0.8 for the mode
k = 200H0 . The red solid line is the approximation on scales above the sound horizon, Eq. (4.19)
and the blue dashed line is the approximation to the scales below the sound horizon, Eq. (4.20). The
two vertical lines give the scale factor at which the mode enters the Hubble horizon (left line) and the
sound horizon (right line). The numerical solution shows how the perturbations decay at late times
when matter domination ends, but radiation was omitted from the numerical calculation to allow for
a longer dynamic range in a to illustrate the different regimes. Right: The ratio of dark energy to
dark matter perturbations. For scales inside the dark energy sound horizon, the relative amplitude
decreases linearly with a. We can also see that the ratio of the perturbations is described well by the
fitting formula even when dark energy domination starts to affect the perturbation evolution.
where we introduced $\nu(a)^2 \equiv k^2 c_s^2 a/(H_0^2\Omega_m)$ (the amount by which a mode is inside the sound
horizon) and also assumed flatness so that $\Omega_{DE} = 1 - \Omega_m$. This expression is quite good on
scales much larger and smaller than the sound horizon, while close to the sound horizon it
misses some transient effects. From this formula, we can see that today (a = 1) the impact
of the dark energy perturbation on large scales (ν ≪ 1) is of the order of 3(1 + w)/4, while
on small scales (ν ≫ 1) it behaves roughly like 4(1 + w)/ν². On scales above the sound
horizon the perturbations can therefore contribute up to 10% to φ given today’s limits on w,
but inside the sound horizon the dark energy impact is strongly suppressed. Additionally, as
we go back into the past, the dark energy contribution to φ scales like $a^{-3w} \sim a^3$, so at a
redshift of z = 1 we get an additional suppression by a factor of 6 to 8.
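For readers who want to play with the formula, the following short Python sketch evaluates Eq. (4.22) for a few wave numbers; the values of w, $c_s^2$ and Ωm are illustrative assumptions only:

# Sketch evaluating the interpolating formula (4.22); the parameter values
# below are illustrative assumptions, not fits to data.  Units with H0 = 1.
def Q_minus_one(a, k, w=-0.9, cs2=1e-4, Om=0.3):
    nu2 = k**2 * cs2 * a / Om                     # nu(a)^2 as defined above
    return (1.0 - Om) / Om * (1.0 + w) * a**(-3.0 * w) / (1.0 - 3.0 * w + 1.5 * nu2)

for k in [1.0, 30.0, 1000.0]:                     # wave numbers in units of H0
    print(k, Q_minus_one(1.0, k), Q_minus_one(0.5, k))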
The direct detection of the perturbations is important because there is much more
information on models encoded in the perturbation variables than in the evolution of w.
Measuring the behavior of perturbations is akin to finding a fingerprint at a crime scene.
By matching the perturbation fingerprint against model predictions we may be able to
understand what the physics behind the accelerated expansion is. Unfortunately the smallness
of the perturbations in Quintessence and K-essence type models makes it very difficult to
directly measure the perturbations. In [99] we found that even large future weak lensing and
galaxy clustering observations can only hope to measure the sound speed (which we use as a
proxy to decide whether it is possible to detect the perturbations) if $c_s^2 < 10^{-4}$, a conclusion
also supported by other studies [100, 101].
Another point to take away from this section is how the use of the fluid equations
and fluid variables (rather than, say, the potential V(φ) of a Quintessence model or the
exponent α of a K-essence model with $K(X) = X^\alpha$) allowed us to study the behavior of the
perturbations in a way that is more abstract from the fundamental model point of view, but
that emphasizes the physical evolution of the perturbations and allows us to derive relatively
simple yet quite accurate formulae in order to study the expected observationally relevant
effects. The main effect for these models is the existence of a sound horizon within which
the perturbation growth is suppressed by pressure support. Because of this sound horizon,
only perturbations in sufficiently “cold” dark energy with $c_s^2 \ll 1$ can be detected.
Traditional (frequentist) statistics works directly with the likelihood, while Bayesian statistics
is interested in the probability of the parameters given the data, P (θ|D, M). With the help
of Bayes' theorem the two can be linked,
$P(\theta|D, M) = \frac{P(D|\theta, M)\, P(\theta|M)}{P(D|M)}\,.$
The second term in the numerator, P (θ|M), is called the prior since it corresponds to the prior
knowledge of how the parameter values are distributed, before the data is taken into account.
The quantity in the denominator, P(D|M), is independent of the parameters and thus just
an irrelevant proportionality constant when trying to constrain the parameters. It is however
an important quantity for model-comparison purposes in the Bayesian framework (e.g. [113–
116]). How to choose the prior is not an entirely simple question. For a parameter like the
mean which can be at an arbitrary location, a natural choice is to choose P (µ|M) constant.
On the other hand, if we wanted to estimate the variance σ², which sets a scale rather than a
location, a prior of the form P(σ|M) ∝ 1/σ is the usual choice. A detailed discussion of priors
(and of Bayesian statistics in general) can be found in the book by Jaynes [117]. One of the
standard works on classical statistics are the two volumes by Feller [118, 119].
Once we have chosen a likelihood we need to compute confidence intervals, i.e. find
regions in parameter space that encompass a certain percentage (e.g. 95%) of the probability.
This effectively requires computing an integral in a potentially high-dimensional parameter
space. High-dimensional integrals are generally very difficult to compute numerically; for
about a decade now the preferred solution in cosmology has been to use a Markov-Chain
Monte Carlo (MCMC) method with Metropolis-Hastings acceptance criterion [120]. The
algorithm is very simple:
1. Pick an initial point in parameter space, θ0 , and evaluate the likelihood at that point,
L0 = L(θ0 ).
2. Choose a new point θ1 so that the probability of picking θ1 when at θ0 is the same as
the probability of picking θ0 when at θ1 . (This condition can be relaxed by allowing
for a slightly more complicated acceptance criterion in step 4 below.)
3. Evaluate the likelihood at the new point, L1 = L(θ1 ).
4. Accept the step with probability P = min(1, L1 /L0 ). If the new point is accepted, set
θ0 = θ1 and L0 = L1 .
5. Record θ0 as a new element in the chain (even if the step was not accepted and θ0 has
not changed!).
The usual way to choose a parameter vector θ1 in step 2 above is by picking a random
vector ∆θ from a Gaussian pdf with zero mean and a given, fixed covariance matrix, and
setting θ1 = θ0 + ∆θ. A good choice for the covariance matrix of the step-distribution is an
approximation to the parameter covariance matrix (formally, when allowing for an infinite
number of MCMC steps, it does not matter what step-distribution is chosen as long as it
is symmetric and can reach all of parameter space, but in practice it is very important for
the efficiency of the algorithm). One also needs to remove an initial non-stationary period of
the MCMC evolution, the burn-in, and one needs to ensure that all the parameter space has
been probed sufficiently, i.e. that the chain has converged. See e.g. [121] for more details on
MCMC methods in cosmology.
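A minimal Python sketch of the Metropolis-Hastings steps listed above, applied to a toy two-parameter Gaussian likelihood, looks as follows (this is purely illustrative and not a substitute for a full pipeline like CosmoMC):

import numpy as np

# Minimal Metropolis-Hastings sketch following the steps listed above, for a
# toy two-parameter Gaussian likelihood.  Purely illustrative, not CosmoMC.
def log_like(theta):
    return -0.5 * np.sum((theta - np.array([0.3, -1.0]))**2 / np.array([0.02, 0.1])**2)

rng = np.random.default_rng(1)
step_cov = np.diag([0.02, 0.1])**2               # covariance of the symmetric Gaussian proposal
theta = np.array([0.5, -0.5])                    # step 1: initial point ...
logL = log_like(theta)                           # ... and its log-likelihood
chain = []
for _ in range(20000):
    prop = rng.multivariate_normal(theta, step_cov)   # step 2: symmetric proposal
    logL_prop = log_like(prop)                        # step 3: likelihood at the new point
    if np.log(rng.uniform()) < logL_prop - logL:      # step 4: accept with prob. min(1, L1/L0)
        theta, logL = prop, logL_prop
    chain.append(theta.copy())                        # step 5: record the current point either way
chain = np.array(chain)[2000:]                   # remove the burn-in
print(chain.mean(axis=0), chain.std(axis=0))     # marginalized means and standard deviations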
The output of the MCMC method is a so-called chain of parameter values that provide
a random sample from the posterior distribution of the parameters θ. Since the density
of points is proportional to the value of the likelihood, we can marginalize (integrate out)
parameters by just ignoring them. To get the marginalized pdf of a given parameter θi ,
we can just look at the histogram of the values of that parameter in the chain, while for
two-dimensional confidence contours one needs to determine an area that contains e.g. 95%
of the probability. This is typically done by discretizing the relevant 2D parameter sub-space
onto a grid and associating to each grid-square the number of points in the chain that lie
there. From the discretized 2D-histogram it is then straightforward to derive the desired
contour. A popular package for cosmological MCMC applications, with likelihoods for many
data sets and tools for post-processing chains, is CosmoMC7 [121, 122]. A good discussion of
MCMC methods and extensions (and generally a lot of statistics) can be found in the book by
MacKay [123]8 . There are also other numerical methods to infer parameter constraints and
compute model probabilities. One approach that has become quite popular in cosmology
over the last few years is nested sampling [52, 124–126]; another is population Monte
Carlo [127].
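The post-processing described here can be sketched in a few lines as well; the functions below assume a chain stored as an array of shape (number of samples, number of parameters), for example the one produced by the sketch above:

import numpy as np

# Sketch of the chain post-processing described above.  'chain' is assumed to
# be an array of shape (n_samples, n_params), e.g. from the sketch above.
def marginal_pdf(chain, i, bins=40):
    # 1D marginalized pdf of parameter i: simply histogram that column
    return np.histogram(chain[:, i], bins=bins, density=True)

def contour_level(chain, i, j, frac=0.95, bins=40):
    # 2D histogram level such that cells at or above it contain 'frac' of the samples
    H, xedges, yedges = np.histogram2d(chain[:, i], chain[:, j], bins=bins)
    h = np.sort(H.ravel())[::-1]
    cum = np.cumsum(h) / h.sum()
    return h[np.searchsorted(cum, frac)], xedges, yedges, H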
Figure 4. Constraints on two parameterizations of w(a) for a Quintessence model {$c_s^2 = 1$, σ =
0} from WMAP CMB data and the Union type-Ia supernova data. The blue (inner) shaded area
corresponds to the location of 95% of the accepted w(a)’s, while any model in the red (outer) shaded
area has an effective χ² that is at least 4 worse than that of the best model. The left panel uses
a kink parameterization of w [110] while the right panel uses the same second-order polynomial
parameterization as Fig. 1. Phantom crossing was modeled using the prescription of Eq. (4.9). The
constraints are best at a ≈ 0.8 where we find that w ≈ −1 ± 0.1. At later times the constraints are
weaker due to the integrated nature of the data. At very early times, a ≲ 0.4, there is no strong lower
bound on w since very negative values just make the dark energy vanish more rapidly in the past.
For the constraints shown in Fig. 4 we use a likelihood that combines the Union 2
supernova compilation [128] and the WMAP 7-year data [129]. As models we choose two
7 Available from http://cosmologist.info/cosmomc/
8 Also available on David MacKay’s website, http://www.inference.phy.cam.ac.uk/itprnn/book.html
different parametrizations of the equation of state w(a), the kink parameterization of [110]
and a second-order polynomial form as used for Fig. 1. We set $c_s^2 = 1$ and σ = 0, which
corresponds to a canonical scalar field model of dark energy (Quintessence). To these
parameters we have to add the usual cosmological parameters {H0, Ωm, Ωb h², As, ns, τ} as well as
possibly nuisance parameters required by the likelihood (none in our case since we
marginalize analytically over the absolute supernova luminosity). Here the parameters H0 and Ωm
describe, together with w(a), the background evolution (we take the universe to be spatially
flat), while the baryon density Ωb, the reionisation optical depth τ as well as the amplitude
of the primordial fluctuations As and the scalar spectral index ns are necessary for the CMB
predictions.
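The analytic marginalization over the absolute supernova luminosity mentioned above is usually done by minimizing (or, with a flat prior, marginalizing) the χ² over an additive offset in the distance modulus; a sketch of this standard trick (not necessarily the exact implementation used here) is:

import numpy as np

# Sketch of the usual analytic marginalization over the absolute supernova
# magnitude (an additive offset in the distance modulus).  This is the standard
# textbook trick, not necessarily the exact implementation used for Fig. 4.
def chi2_sn(mu_model, mu_obs, sigma):
    d = mu_obs - mu_model                 # residuals before fitting the offset
    A = np.sum((d / sigma)**2)
    B = np.sum(d / sigma**2)
    C = np.sum(1.0 / sigma**2)
    # minimizing (or, with a flat prior, marginalizing up to a constant) over the offset
    return A - B**2 / C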
After performing the MCMC we end up with a chain consisting of a large number (in
our case about $10^5$) of accepted parameter values. Each of these values encodes an evolution
of w(a). The figure then shows for each value of a the location of the central 95% of w(a) as
the blue shaded region.
Instead of showing where most curves lie, we can also show the region in the (w, a)
plane where the dark energy evolution histories provide a “bad fit” to the data, relative to
the best-fitting w(a). Let us assume that at a given a the probability distribution for w(a)
looks roughly like the Gaussian likelihood of Eq. (4.24), with µ being w(a). The best fit (the
maximum likelihood value) of (4.24) can easily be shown to lie at the arithmetic mean x̄ with
an uncertainty given by $\bar\sigma^2 = \sigma^2/n$ (see e.g. chapter 24 of [123]), so that the distribution of the
mean µ is again a normal distribution around x̄ with variance $\bar\sigma^2$. The integral from $\bar{x} - 2\bar\sigma$ to
$\bar{x} + 2\bar\sigma$ (the so-called two-sigma interval) encompasses about 95.4% of the probability for the
location of µ (and the one sigma interval, i.e. the integral from $\bar{x} - \bar\sigma$ to $\bar{x} + \bar\sigma$, about 68.3%).
The quantity $\chi^2(\mu) = (\mu - \bar{x})^2/\bar\sigma^2$ increases by 4 when we move µ two sigma away from the
best fit x̄. We use this relationship and define an effective χ² through $\chi^2_{\rm eff} = -2\ln\mathcal{L}$, which
again for the simple case (4.24) agrees with χ² up to an irrelevant normalization constant.
We then plot in Fig. 4 also a red-shaded region showing the location of w(a) for which χ2eff
is larger by at least 4 than the smallest χ2eff . This is then expected to delineate roughly the
2σ̄ boundary for w at a given a and thus should approximately coincide with the boundary
of the blue region, but shows the region where the badly fitting w(a) lie.
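The correspondence between an increase of $\chi^2_{\rm eff}$ by 4 and the $2\bar\sigma$ boundary is easily checked numerically for the Gaussian case:

import numpy as np

# Numerical check of the statement above: for a Gaussian posterior, the region
# with chi^2_eff below its minimum plus 4 contains about 95.4% of the probability.
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=200000)     # samples of a parameter with sigma = 1
chi2 = x**2                               # chi^2 relative to the best fit
print((chi2 < 4.0).mean())                # approximately 0.954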
We notice that the best constraints occur at a ≈ 0.8 where we find w ≈ −1.0 ± 0.1
(1σ errors). The constraints are compatible with w = −1 at all redshifts, the prediction
for a cosmological constant. At higher redshift (smaller a) there is no lower bound for w
since the dark energy is subdominant in the past, and for very low values of w it merely
disappears faster as a → 0. We also notice that while the red “exclusion” region is similar
for the two parameterizations, the blue region for the kink model is narrower. This is an
example of the prior imposed by a parameterization or model for w on the results. This is
not necessarily undesirable: the distance data is linked to w through a double integration, so
that the resulting constraints on the equation of state are strongly smoothed. If we were, for
example, to use higher and higher order polynomials, we would find weaker and weaker constraints
as the resulting w(a) would oscillate more and more around −1. For this reason it is necessary
to impose some constraint on the allowed form of w. For an example on how to do this in a
Bayesian way with a maximum entropy prior, see [130].
σ ≠ 0 in their effective energy momentum tensor, so that Π = φ − ψ ≠ 0 even when the
contribution from relativistic particles is negligible? Indeed, there is: Let us look at a quite
general scalar-tensor action, now including gravity and matter,
$S = \int d^4x \sqrt{-g}\left[\frac{1}{2}\bigl(1 + f(\varphi)\bigr)R + K(X) - V(\varphi) + \mathcal{L}_m\right]$ . (4.26)
Here we have chosen a frame in which the matter is minimally coupled, so that it follows the
geodesics of gµν, since this is the frame that we would generally reconstruct from observations
of weak lensing and the motion of galaxies [131]. We can again compute the Einstein equations
by varying the action with respect to the metric, and arrange them in the Einstein form (3.2).
In this case we find a total, effective dark energy EMT with $\Pi \propto f'(\varphi)$, which does in general
not vanish. This action therefore gives an explicit example of a model which has all the
possible degrees of freedom that can be recovered from cosmological measurements9 .
The scalar-tensor model is not the only example. Let us go through several typical
“modified gravity” models (see e.g. [133] for a recent review). During this exercise we will
also consider the question whether it is possible to revert to the case Π = 0. For the scalar-
tensor model it is easy to see that this requires f (ϕ) to be constant. In other words, requiring
the absence of anisotropic stress forces us to go to the GR limit of the theory. A class of
models closely related to the scalar-tensor type is the f(R) kind of models, with the
gravitational part of the action given by
$S_g \sim \int d^4x \sqrt{-g}\, f(R)$ (4.27)
for an arbitrary function f. In this case, the effective anisotropic stress is found to be
$\Pi \propto f''(R)$. A vanishing anisotropic stress then is only possible if f(R) = R + Λ for a
constant Λ, again just the GR action10 . The problem can also be seen from a different
angle by noticing that f(R) models contain an effective scalar degree of freedom with a mass
linked to the anisotropic stress (in the quasistatic limit) through $\Pi \propto 1/m^2$. Turning off the
anisotropic stress therefore requires making this “scalaron” very massive and so effectively
suppressing it, forcing the theory to revert to GR.
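This is easy to verify symbolically; the following small sketch (the quadratic term is an arbitrary example function) confirms that f''(R) vanishes identically only for a linear function of R:

import sympy as sp

# Symbolic sketch of the statement above: the effective anisotropic stress of
# f(R) gravity is proportional to f''(R), which vanishes identically only for
# f(R) = R + Lambda.  The quadratic term is an arbitrary example.
R, alpha, Lam = sp.symbols('R alpha Lambda')
for f in [R + Lam, R + alpha * R**2]:
    print(f, "  f''(R) =", sp.diff(f, R, 2))   # prints 0, then 2*alpha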
Another widely studied modified-gravity model is the so-called DGP (Dvali-Gabadadze-
Porrati [134]) model which is based on a 4D brane embedded in a 5D bulk. The gravitational
action is taken to be the 5D Einstein-Hilbert action together with an induced 4D Einstein-
Hilbert action confined to the brane. The relative strength of the two contributions is given
by the crossover scale $r_c = M_4^2/M_5^3$. The effective anisotropic stress of the DGP model is
proportional to 1/β where $\beta = 1 + 2r_c H w_{DE}$ (see (4.34) below). Setting the anisotropic stress
to zero requires sending β to infinity, which in turn only happens for rc → ∞, or M5 → 0.
But in this limit we have turned off the 5-dimensional nature of DGP gravity and are left
with only the usual 4D Einstein-Hilbert action, and thus standard GR.
With two extra degrees of freedom on the other hand, it should be possible to balance
them against each other and so turn off the anisotropic stress [135]. An example is afforded
9 See also [132] where an action with a conformal coupling to the matter field is proposed as a generic way
to parameterize modified-gravity models.
10 In principle the anisotropic stress also vanishes if δR = 0, but this is a complicated condition on the
evolution of the gravitational potentials that appears unnatural since it would require a very peculiar
matter contribution to be compatible with the Einstein equations, and is at any rate not in agreement with
observations.
by the f (R, G) type models [136] with action
$S_g \sim \int d^4x \sqrt{-g}\, f(R, G)$ (4.28)
where $G = R^2 - 4R_{\mu\nu}R^{\mu\nu} + R_{\mu\nu\alpha\beta}R^{\mu\nu\alpha\beta}$ is the Gauss-Bonnet term, a topological invariant in
4D (i.e. it only contributes a boundary term in the action integral)11 . Indeed it is now possible
to build models that have no anisotropic stress, but in general the condition Π = 0 depends
on the background, i.e. a model has no anisotropic stress during e.g. matter domination
but Π ≠ 0 when accelerated expansion sets in. It should in principle be possible to construct
functions that both lead to the right sequence of evolutionary stages (radiation domination,
then matter domination and finally accelerated expansion) and that retain Π = 0, but it
would be a quite complicated and fine-tuned endeavor, and it does not appear as if there were
an easy way to link the anisotropic stress to the evolution, rather the contrary.
As an example, we fix the background evolution to be de Sitter. In that case we find
that the condition Π = 0 is equivalent to
today, which reveals the required fine-tuning in this model. Without going into a great deal
more detail (see e.g. [139, 140]), one finds for the perturbations that
$k^2\phi = -4\pi G_N a^2\left(1 - \frac{1}{3\beta}\right)\rho_m\Delta_m$ (4.32)
$k^2\psi = -4\pi G_N a^2\left(1 + \frac{1}{3\beta}\right)\rho_m\Delta_m$ (4.33)
for $\beta \equiv 1 - 2r_cH\left[1 + \dot{H}/(3H^2)\right] = 1 + 2r_cHw_{DE}$, and the matter perturbation evolution
proceeds as usual (i.e. as given by Eqs. (3.8) and (3.9) with w = 0, δp = 0 and σ = 0). In
terms of the relative parameterization of Eq. (3.16) we find therefore that
$\eta = \frac{2}{3\beta - 1}$ . (4.34)
Numerically, η is small at high redshifts but tends towards η ≈ −0.44 today for a flat
Universe with Ωm = 0.3. In other words, the gravitational slip Π (or equivalently the
effective anisotropic stress) in DGP is of a size comparable to the gravitational potentials
themselves. We should note here that the extra factors of 1 ± 1/(3β) in Eqs. (4.32) and
(4.33) affect the growth of the perturbations as well, so that the value of the gravitational
potentials is somewhat different than in a Quintessence model with the same equation of
state parameter – but of course this is good since it implies that it is possible to measure the
extra perturbation level parameters, at least if they are of the size found in DGP.
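These numbers are straightforward to reproduce; the sketch below assumes the self-accelerating DGP background in a flat universe with Ωm = 0.3 (units with H0 = 1) and evaluates η = 2/(3β − 1) from the definition of β given above:

import numpy as np

# Sketch reproducing eta = 2/(3*beta - 1) for the self-accelerating DGP branch,
# assuming a flat universe with Omega_m = 0.3 and units with H0 = 1.
Om = 0.3
rc = 1.0 / (1.0 - Om)                    # flatness today fixes the crossover scale

def hubble(a):
    # H/H0 from H^2 = H/rc + Om/a^3 (self-accelerating branch)
    return 0.5 / rc + np.sqrt(0.25 / rc**2 + Om * a**-3)

def eta(a):
    h = hubble(a)
    dHdt = -3.0 * h * Om * a**-3 / (2.0 * h - 1.0 / rc)   # from differentiating the Friedmann eq.
    beta = 1.0 - 2.0 * rc * h * (1.0 + dHdt / (3.0 * h**2))
    return 2.0 / (3.0 * beta - 1.0)

print(eta(1.0))     # approximately -0.44 today
print(eta(0.1))     # small at high redshift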
We have seen that it is very difficult to avoid generating a late-time effective
anisotropic stress for the models that we would call “modified gravity models”, and in addition
the corresponding gravitational slip is generically very large, of the order of the gravitational
potentials themselves. For this reason Π is a key diagnostic in the phenomenological
framework: If we find a deviation from ΛCDM, then constraints on the anisotropic stress can help
to identify a likely explanation: If Π ≠ 0 then Quintessence and K-essence type models are
ruled out and a modification of GR looks likely, while in the opposite case modified gravity
models are disfavored and it appears more sensible to look for a minimally coupled field.
The ghost issues in DGP can be cured by adding further higher-order terms (taking care
to ensure that the equations of motion stay second order). The most general such theory
was worked out in [141], but only recently have such theories become better known, like the
Galileon [142] and similar models like Kinetic Gravity Braiding [93], massive gravity [143]
and others [144, 145], described in much more detail elsewhere in this volume [17].
Equations (4.32) and (4.33) have an interesting property: we see that the extra terms
cancel for the lensing potential φ + ψ. In other words, light is lensed by matter perturba-
tions exactly as in GR, without any additional lensing. This offers another way to see why
suppressing the anisotropic stress forces DGP to revert back to GR: if both φ − ψ and φ + ψ
are just given by the usual GR expressions without any dark energy contribution, then the
(effective) dark energy does not contain any fluctuations. But this in general only possible if
the dark energy is a cosmological constant: due to the Bianchi identity and the conservation
of the rest of the EMT, also the effective dark energy EMT is conserved. If w 6= −1 the
conservation equations (3.8) and (3.9) couple the perturbations to the gravitational poten-
tials, and with only one remaining function to choose (δp) it is in general not possible to
find a solution. So not only does σ = 0 suppress all effective dark energy perturbations for
such models, in general it is not even possible to achieve this self-consistently except for a
– 29 –
ΛCDM-like behavior. Therefore if we want to have a non-trivial modified-gravity model with
φ = ψ, then this model needs in general to change the lensing potential.
It is not only DGP that has this property where lensing is unaffected. On the other
hand, the full Galileon case should change lensing [146]. I suspect that in this case it may
be possible to construct an explicit example where the anisotropic stress vanishes without
rendering the model unviable – but this still needs to be demonstrated. But also for Galileons
I expect that in general the anisotropic stress is non-zero. The only way around the need to
fine-tune σ to vanish is to include the property already at the level of the action. I do not
know what that condition translates into for a fully general model, but as mentioned above,
for the action (4.26) the condition is that the coupling to R vanishes [131]. At least for this
class (and for the reason outlined above) “anisotropic stress” and “modification of gravity”
are synonyms.
On the observational front, current data is consistent with no additional anisotropic
stress (and indeed no detection of any dark energy perturbations), but with errors of order
unity on the additional perturbation parameters η (3.16) and Q (3.15) or their equivalents.
Future observations over the next one to two decades will provide much stronger constraints,
reaching about a 10% accuracy on the perturbation parameters in several redshift and scale
bins. For more details, see for example [71, 73, 75, 77, 78, 147–156]
An immediate consequence is that cosmological data will never be able to tell us that we
have more than one general dark energy component. The only way to reach such a conclusion
would be as a probabilistic statement based on model predictions and Occam's razor.
But the situation is actually worse: if we restrict ourselves to distance data (or more
generally, data that constrains only the background evolution) and try to reconstruct directly
an equation of state parameter that can fit the data in a model with dark matter and dark
energy (a fairly common approach), we find that
$w(z) = \frac{\frac{2}{3}(1+z)\,\frac{d\ln H}{dz} - 1}{1 - \frac{H_0^2}{H(z)^2}\,\Omega_m (1+z)^3}\,.$
Using a dark energy with this w(z), we will always fit the (background) data which is here
given as H(z). But in this expression Ωm is a free parameter that we can choose as we want
[157]. We will therefore find a possible dark energy for any amount of dark matter. Indeed,
we cannot even be sure that there is dark matter at all, maybe we are dealing with a single
dark fluid. We can thus generate possible families of dark energy evolutions, parameterized
by Ωm . Again, only theoretical prejudice or non-gravitational tests can break this degeneracy.
It is for this reason that we plotted only the total w in Fig. 1. This is all that we can learn
from background data.
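To make the degeneracy explicit, the following sketch reconstructs w(z) from the expression above for a ΛCDM expansion history with true Ωm = 0.3 but different assumed matter densities; the resulting curves coincide with the wλ(z) family shown later in the left panel of Fig. 5, with λ = (Ωm,assumed − 0.3)/0.7 (a relation derived here, not quoted from the text):

import numpy as np

# Sketch of the dark degeneracy: reconstruct w(z) from the formula above for a
# LambdaCDM expansion history (true Omega_m = 0.3) while assuming a different
# matter density.  The curves agree with w_lambda(z) = -1/(1 - lambda*(1+z)^3)
# of Fig. 5, with lambda = (Om_assumed - 0.3)/0.7 (derived here, not quoted).
Om_true = 0.3
def E2(z):                                    # (H/H0)^2 of the true LambdaCDM model
    return Om_true * (1 + z)**3 + (1 - Om_true)

def w_rec(z, Om_assumed, dz=1e-4):
    dlnH_dz = 0.5 * (np.log(E2(z + dz)) - np.log(E2(z - dz))) / (2 * dz)
    return ((2.0 / 3.0) * (1 + z) * dlnH_dz - 1.0) / (1.0 - Om_assumed * (1 + z)**3 / E2(z))

z = 0.5
for Om_assumed in [0.25, 0.3, 0.35]:
    lam = (Om_assumed - Om_true) / (1.0 - Om_true)
    print(Om_assumed, w_rec(z, Om_assumed), -1.0 / (1.0 - lam * (1 + z)**3))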
The reason is that we tried to measure more degrees of freedom than are present in the
energy momentum tensor. We can only measure one pressure, but instead we try to measure
a general pressure and the amount of dark matter. Of course the degeneracy is broken when
we put constraints on the dark energy pressure, e.g. if we demand a constant w. Then we
can measure both w and Ωm . But then again, the true dark energy may not be characterized
by a constant w, so the extra information that we input into the analysis may be wrong.
The existence of the degeneracy is also a good test for codes that try to reconstruct a fully
general w(z): using only background data there should then be no constraints on Ωm .
What happens at the level of perturbations? There are two extra functions, $c_s^2$ and σ,
that can be constrained. This is the reason why we could constrain in section 4.5 the equation
of state w(z) of Quintessence and simultaneously measure Ωm: dark matter has $c_s^2 = 0$ and
Quintessence $c_s^2 = 1$. If we tried to do the same with cold dark energy, characterized by a
free w(z) and {$c_s^2 = 0$, σ = 0}, then we would recreate the degeneracy between dark energy and
dark matter, and in that case it is again impossible to measure Ωm, see Fig. 5. The same is
true if we leave $c_s^2$ and σ free: In that case we are reconstructing the most general dark fluid
(within first order perturbation theory and only considering scalar perturbations), and so
any further freedom cannot be constrained [158, 159]. This is just due to the way Einstein’s
equations work, and the phenomenological framework at least allows us to diagnose the issue.
Figure 5. Left: Different $w_\lambda(z) = -1/[1 - \lambda(1+z)^3]$ with λ = 0 (horizontal black line), λ > 0 (red
line starting at w < −1) and λ < 0 (blue line starting at w > −1). All these w’s lead to the same
ΛCDM expansion history if Ωm is adjusted accordingly. Right: One and two sigma contours for a
dark energy model given wλ(z), σ = 0 and $c_s^2 = 0$ (large yellow/red filled contours) and $c_s^2 = 1$ (black
open contours). For $c_s^2 = 0$ we cannot distinguish between dark matter and dark energy and so cannot
measure Ωm, while the choice $c_s^2 = 1$ breaks the degeneracy. This example taken from [157] uses the
first-year SNLS supernova data [160] and the three-year WMAP data [161].
We also add more freedom when we introduce couplings between dark matter and dark
energy. Again, since we can only measure the total dark energy momentum tensor, we can
always separate it into an uncoupled sum of a dark matter and a general dark energy EMT,
even if the true model is actually a coupled model. We can always compute a “valid”
dark energy EMT through
$T_{\mu\nu}^{(DE)} = T_{\mu\nu}^{(\mathrm{total\ coupled})} - T_{\mu\nu}^{(DM)}$ (5.3)
for any choice of dark matter EMT. This means that we can always find an uncoupled dark
energy - dark matter model that gives the same observations as a coupled model. Our only
hope is that the coupled model is sufficiently well motivated on theoretical grounds, as well
as sufficiently simpler, to allow us to accept it as the correct description. However, we can also do the
inverse: we can always find a coupling function that will allow us to model the dark energy
as a cosmological constant, as long as there is no additional anisotropic stress. To show this
explicitly at the background level, let us assume that we have baryons and a coupled dark
matter - dark energy fluid, and let us write the conservation equations for dark matter and
dark energy as
With this, and by assuming that the dark energy is a cosmological constant and computing
Ḣ from the Friedmann equation, we find easily that [131]
This then is the (averaged) coupling that allows us to recover a cosmological constant as the
dark energy. Again the problem is that the combination of coupling parameters and a general
DE energy-momentum tensor encompasses more degrees of freedom than can be measured.
Some of the families of w(a) that arise from the dark degeneracy, whether from changing
Ωm or from couplings, contain phantom crossing and even divergences at low redshift, as can be seen
in the left panel of Fig. 5. This typically happens if we need to “overcompensate” the dark
energy. However, by construction the expansion rate H(a) and therefore all observations
are unaffected. This illustrates that strange behavior in w and in other parameters (like
the form of the effective sound speed of the Quintom model in figure 2), even divergences,
are not necessarily problematic as long as the metric stays well behaved. Similar effects
can apparently happen for f (R) models [162]. Also coupled models can lead to (apparent)
phantom crossing when they are analyzed without taking the coupling into account [163, 164];
in this case this is directly the consequence of the dark degeneracy [157].
The dark degeneracy also shows that the dark energy does not need to be smooth at
all. Current data is perfectly compatible with a clustering dark energy with c2s = 0. It could
contain some, or even all, of the clustering that is usually attributed to the dark matter.
On the other hand, the current data is indeed compatible also with a smooth dark energy
like a cosmological constant or a scalar field model with high sound speed. For this reason
it is tempting to define the split between dark matter and dark energy by assigning the
clustering part of the dark sector to the dark matter and the rest to the dark energy. This
works for simple dark energy models like generalized quintessence since once we have defined
the split in this way, the dark degeneracy guarantees that we can find a w(z) for the dark
energy that allows matching the observed H(z). Although the split would not reflect physical
reality if indeed the dark energy has a low sound speed and does cluster, we are barred by
the dark degeneracy from seeing this with cosmological measurements alone. But if dark matter
experiments one day indicate that the amount of dark matter is different from the amount
observed in a cosmological setting, then we must not forget that the split of the dark sector
into dark matter and dark energy is not unique, and that a different dark matter density can
be accommodated by modifying the dark energy [159].
There is also a cosmological reason why it is a bad idea to use the clustering
property directly to define the dark energy part of the total dark EMT and so break the dark
degeneracy: modified gravity models tend to have Q ≠ 1 in addition to η ≠ 0. The non-zero
anisotropic stress in these models contributes to the clustering through the last term in
Eq. (3.9). By defining the dark energy to be smooth, we would exclude some of the models
that we want to test with the help of the data.
A closely related degeneracy exists between curvature and the dark energy [165, 166].
Since the global curvature contributes only to the background expansion rate, it is easily
confused with a smooth dark energy component. In order to break the curvature degeneracy,
one needs to use a geometrical test for the presence of non-zero curvature, for example by
comparing longitudinal and transverse distances [167]. So at least in principle it is possible
to break the curvature degeneracy with cosmological observations (while the dark degeneracy
is in principle perfect and unbreakable), but until this has been done one needs to be very
careful not to mistake a small curvature contribution for a smooth, evolving dark energy.
5.3 Dependence on the environment
A further issue relevant for the phenomenological description concerns a possible dependence
on local physics. For example, a screening mechanism like the Vainshtein [169] or Chameleon
[170–172] mechanism, based on local quantities like the density, is an integral part of all viable
MG models. Without such a screening these models all fall foul of local tests of gravity that
place very strict limits on fifth forces in the solar system. But such a mechanism is difficult
to model with the phenomenological description, as it is not purely a function of scale k [173],
and a description at the level of the action that incorporates the screening effect may be
preferable [132]. On the other hand, power spectra are also written as a function of scale
and so face the same problem. Environmental dependence is therefore a general concern
for data analysis as a whole.
The precise way in which a given physical modified gravity model makes the transition
to the small-scale regime where the modifications are suppressed is actually an important
discriminant: The action given in (4.26) already contains the possible degrees of freedom to
first order in perturbation theory. Measuring the evolution of the Universe on large scales
will allow us to constrain the functions f(ϕ) (through the anisotropic stress), V(ϕ) (mostly
through the background expansion) and K(X) (mostly through the sound speed). But this
is certainly not the only action with these degrees of freedom that can be written down,
and there is a degeneracy between different actions that can at least partially be broken by
considering the additional information from the way the model reverts to GR on small scales.
In this review we have discussed a general framework that allows us to parameterize all possible
outcomes of cosmological observations.13 We found that this can be done in different ways
that are all directly related but may be better suited to one situation or another. The first approach
is to use an effective dark energy momentum tensor, where the relevant degrees of freedom
can be taken to be the average (background) pressure p(t), and the pressure perturbation
δp(t, k) as well as the anisotropic stress σ(t, k). Usually the background pressure is given
in terms of an equation of state parameter w = p/ρ, and often the pressure perturbation is
parameterized by a rest-frame sound speed c2s .
The Einstein equations relate these quantities to the geometry, where the equivalent
degrees of freedom are the average expansion rate H and (in the conformal Newtonian gauge)
the gravitational potentials φ and ψ. There are many ways to introduce other parameterizations;
a common approach is to use w and, in addition, parameters that quantify the deviation
from a hypothetical universe where the gravitational potentials are given by the perturbations
in the matter (and radiation) alone. The main point here is that one always needs one
background parameter and two perturbation-level parameters for all measurements that
involve perturbations, like the CMB, galaxy clustering, weak lensing, redshift space distortions,
etc. Conversely, these 1+2 functions also represent the information that can be extracted
from measurements.
In this way we have built a framework that allows us to combine all possible cosmological
observations, by parameterizing exactly the possible degrees of freedom. With this, we are
13 Small print: At least to first order and for scalar perturbations. The latter is straightforward to rectify
by including vector and tensor perturbations, while extensions to non-linear perturbations present a more
formidable challenge, at least concerning the interpretation of the results.
also sure that we can probe the consistency of observations with the still favored ΛCDM
model in all possible ways.
We have further seen that the phenomenological parameters can be mapped to functions
in an action that describes gravity and the fields present in the Universe. Through this we can
reconstruct actions from the observations, but again the limited information available will
mean that there are degeneracies that cannot be broken. Nonetheless the phenomenological
parameters will give an indication of which classes of models we should investigate in more
detail. As an example, the presence of a strong late-time anisotropic stress appears closely
related to a modification of gravity.
Nonetheless, the framework rests on the validity of the cosmological principle, i.e.
statistical isotropy and homogeneity. If the Universe has a fundamentally different structure, e.g.
if we lived in the center of a gigantic void, then further degrees of freedom become possible.
As an example, in such a case there are two different functions that parameterize the
background evolution, which can be taken to be a transverse and a longitudinal expansion rate.
In a FLRW background they are equal, but in a Lemaître-Tolman-Bondi (LTB) background
they are in general different. It has been shown that in this case they can be distinguished
by measurements of H(z) and distances [167, 174]. The full perturbation theory for the LTB
background is still being worked out, and there are of course further generalizations possible.
Hence when using the phenomenological approach one should always be aware of the FLRW
assumption that has been made, and one should test this assumption as far as possible.
Today the phenomenological parameters are not well constrained, with deviations of
order unity still possible. However, future very large surveys like Euclid [1] will significantly
tighten the bounds and will make it possible to constrain w at the 1% level and the perturbative
quantities at the 10% level [71, 73, 75, 77, 78, 147–156]. Reaching this precision will still take
a lot of work to control the systematic errors in the observations, but also to ensure that all
important theoretical effects are taken into account, for example the non-linear behavior of the
perturbations, relativistic effects [175–178], and the inclusion of vector and tensor
modes.
Acknowledgments
I would like to thank Philippe Brax and Céline Boehm for inviting me to write this mini-
review. It is a pleasure to thank all my dark energy collaborators over the years, and I am
especially grateful to Bruce Bassett, Philippe Brax, Ruth Durrer, Lukas Hollenstein, Dragan
Huterer and Ignacy Sawicki for helpful discussions and comments about this manuscript.
This work is supported by the Swiss National Science Foundation. I gratefully acknowledge
hospitality by UNFPA Thailand during part of the writing of this review.
References
[1] Euclid Consortium Collaboration, R. Laureijs, J. Amiaux, S. Arduini, J.-L. Augueres,
et. al., Euclid Definition Study Report, arXiv:1110.3193.
[2] A. Einstein, The Field Equations of Gravitation, Sitzungsber.Preuss.Akad.Wiss.Berlin
(Math.Phys.) 1915 (1915) 844–847.
[3] A. Einstein, Cosmological Considerations in the General Theory of Relativity,
Sitzungsber.Preuss.Akad.Wiss.Berlin (Math.Phys.) 1917 (1917) 142–152.
[4] Supernova Search Team Collaboration, A. G. Riess et. al., Observational evidence from
supernovae for an accelerating universe and a cosmological constant, Astron.J. 116 (1998)
1009–1038, [astro-ph/9805201].
[5] Supernova Cosmology Project Collaboration, S. Perlmutter et. al., Measurements of
Omega and Lambda from 42 high redshift supernovae, Astrophys.J. 517 (1999) 565–586,
[astro-ph/9812133].
[6] S. M. Carroll, The Cosmological constant, Living Rev.Rel. 4 (2001) 1, [astro-ph/0004075].
[7] E. J. Copeland, M. Sami, and S. Tsujikawa, Dynamics of dark energy, Int.J.Mod.Phys. D15
(2006) 1753–1936, [hep-th/0603057].
[8] R. Durrer and R. Maartens, Dark Energy and Modified Gravity, arXiv:0811.4132.
[9] P. Brax, Gif Lectures on Cosmic Acceleration, arXiv:0912.3610.
[10] B. Jain and J. Khoury, Cosmological Tests of Gravity, Annals Phys. 325 (2010) 1479–1516,
[arXiv:1004.3294].
[11] D. Sapone, Dark Energy in Practice, Int.J.Mod.Phys. A25 (2010) 5253–5331,
[arXiv:1006.5694].
[12] T. Clifton, P. G. Ferreira, A. Padilla, and C. Skordis, Modified Gravity and Cosmology,
arXiv:1106.2476.
[13] L. Amendola and S. Tsujikawa, Dark Energy: Theory and Observations. 2010.
[14] P. Astier and R. Pain, Observational Evidence of the Accelerated Expansion of the Universe,
arXiv:1204.xxxx.
[15] J. Martin, Everything You always Wanted to Know about the Cosmological Constant (but
Were Afraid to Ask), arXiv:1204.xxxx.
[16] C. Clarkson, Establishing Homogeneity of the Universe in the Shadow of Dark Energy,
arXiv:1204.xxxx.
[17] C. de Rham, Galileons in the Sky, arXiv:1204.xxxx.
[18] I. M. H. Etherington, On the Definition of Distance in General Relativity, Philosophical
Magazine 15 (1933) 761.
[19] B. A. Bassett and M. Kunz, Cosmic distance-duality as a probe of exotic physics and
acceleration, Phys.Rev. D69 (2004) 101305, [astro-ph/0312443].
[20] B. A. Bassett and M. Kunz, Cosmic Acceleration versus Axion-Photon Mixing, Astrophys. J.
607 (June, 2004) 661–664, [astro-ph/0311495].
[21] J.-P. Uzan, N. Aghanim, and Y. Mellier, The Distance duality relation from x-ray and SZ
observations of clusters, Phys.Rev. D70 (2004) 083533, [astro-ph/0405620].
[22] A. Avgoustidis, C. Burrage, J. Redondo, L. Verde, and R. Jimenez, Constraints on cosmic
opacity and beyond the standard model physics from cosmological distance measurements,
JCAP 1010 (2010) 024, [arXiv:1004.2053].
[23] M. Kunz and B. A. Bassett, A tale of two distances, astro-ph/0406013.
[24] C. Clarkson, G. Ellis, A. Faltenbacher, R. Maartens, O. Umeh, et. al., (Mis-)Interpreting
supernovae observations in a lumpy universe, arXiv:1109.2484.
[25] M. Maggiore, L. Hollenstein, M. Jaccard, and E. Mitsou, Early dark energy from zero-point
quantum fluctuations, Phys.Lett. B704 (2011) 102–107, [arXiv:1104.3797].
[26] L. Hollenstein, M. Jaccard, M. Maggiore, and E. Mitsou, Zero-point quantum fluctuations in
cosmology, arXiv:1111.5575.
[27] A. R. Liddle and D. Lyth, Cosmological inflation and large scale structure.
[28] WMAP Collaboration Collaboration, E. Komatsu et. al., Seven-Year Wilkinson
Microwave Anisotropy Probe (WMAP) Observations: Cosmological Interpretation,
Astrophys.J.Suppl. 192 (2011) 18, [arXiv:1001.4538].
[29] S. Ilic, M. Kunz, A. R. Liddle, and J. A. Frieman, A dark energy view of inflation, Phys.Rev.
D81 (2010) 103502, [arXiv:1002.4196].
[30] T. Buchert, On average properties of inhomogeneous fluids in general relativity. 1. Dust
cosmologies, Gen.Rel.Grav. 32 (2000) 105–125, [gr-qc/9906015].
[31] S. Rasanen, Dark energy from backreaction, JCAP 0402 (2004) 003, [astro-ph/0311257].
[32] S. Rasanen, Accelerated expansion from structure formation, JCAP 0611 (2006) 003,
[astro-ph/0607626].
[33] T. Buchert, Dark Energy from Structure: A Status Report, Gen.Rel.Grav. 40 (2008) 467–527,
[arXiv:0707.2153].
[34] J. Moffat and D. Tatarski, Redshift and structure formation in a spatially flat inhomogeneous
universe, Phys.Rev. D45 (1992) 3512–3522.
[35] K. Tomita, A local void and the accelerating universe, Mon.Not.Roy.Astron.Soc. 326 (2001)
287, [astro-ph/0011484].
[36] M.-N. Celerier, Do we really see a cosmological constant in the supernovae data?,
Astron.Astrophys. 353 (2000) 63–71, [astro-ph/9907206].
[37] T. Biswas, R. Mansouri, and A. Notari, Nonlinear Structure Formation and Apparent
Acceleration: An Investigation, JCAP 0712 (2007) 017, [astro-ph/0606703].
[38] J. Garcia-Bellido and T. Haugboelle, Confronting Lemaitre-Tolman-Bondi models with
Observational Cosmology, JCAP 0804 (2008) 003, [arXiv:0802.1523].
[39] C. Clarkson and R. Maartens, Inhomogeneity and the foundations of concordance cosmology,
Class.Quant.Grav. 27 (2010) 124008, [arXiv:1005.2165].
[40] J. Larena, J.-M. Alimi, T. Buchert, M. Kunz, and P.-S. Corasaniti, Testing backreaction
effects with observations, Phys.Rev. D79 (2009) 083011, [arXiv:0808.1161].
[41] A. D. Linde, D. A. Linde, and A. Mezhlumian, Do we live in the center of the world?,
Phys.Lett. B345 (1995) 203–210, [hep-th/9411111].
[42] D. Parkinson, B. A. Bassett, and J. D. Barrow, Mapping the dark energy with varying alpha,
Phys.Lett. B578 (2004) 235–240, [astro-ph/0307227].
[43] J.-P. Uzan, Varying Constants, Gravitation and Cosmology, Living Rev.Rel. 14 (2011) 2,
[arXiv:1009.5514].
[44] D. Huterer and M. S. Turner, Probing the dark energy: Methods and strategies, Phys.Rev.
D64 (2001) 123527, [astro-ph/0012510].
[45] I. Maor, R. Brustein, and P. J. Steinhardt, Limitations in using luminosity distance to
determine the equation of state of the universe, Phys.Rev.Lett. 86 (2001) 6,
[astro-ph/0007297].
[46] J. Weller and A. Albrecht, Opportunities for future supernova studies of cosmic acceleration,
Phys.Rev.Lett. 86 (2001) 1939–1942, [astro-ph/0008314].
[47] M. Chevallier and D. Polarski, Accelerating universes with scaling dark matter,
Int.J.Mod.Phys. D10 (2001) 213–224, [gr-qc/0009008].
[48] E. V. Linder, Exploring the expansion history of the universe, Phys.Rev.Lett. 90 (2003)
091301, [astro-ph/0208512].
[49] P. S. Corasaniti and E. Copeland, A Model independent approach to the dark energy equation
of state, Phys.Rev. D67 (2003) 063521, [astro-ph/0205544].
[50] M. Douspis, Y. Zolnierowski, A. Blanchard, and A. Riazuelo, What can we learn about dark
energy evolution?, Astron.Astrophys. (2006) [astro-ph/0602491].
[51] B. A. Bassett, M. Kunz, J. Silk, and C. Ungarelli, A Late time transition in the cosmic dark
energy?, Mon.Not.Roy.Astron.Soc. 336 (2002) 1217–1222, [astro-ph/0203383].
[52] B. A. Bassett, P. S. Corasaniti, and M. Kunz, The Essence of quintessence and the cost of
compression, Astrophys.J. 617 (2004) L1–L4, [astro-ph/0407364].
[53] D. Huterer and G. Starkman, Parameterization of dark-energy properties: A
Principal-component approach, Phys.Rev.Lett. 90 (2003) 031301, [astro-ph/0207517].
[54] T. Holsclaw, U. Alam, B. Sanso, H. Lee, K. Heitmann, et. al., Nonparametric Dark Energy
Reconstruction from Supernova Data, Phys.Rev.Lett. 105 (2010) 241302, [arXiv:1011.3079].
[55] Y. Wang and P. M. Garnavich, Measuring time dependence of dark energy density from type
Ia supernova data, Astrophys.J. 552 (2001) 445, [astro-ph/0101040].
[56] M. Tegmark, Measuring the metric: A Parametrized postFriedmanian approach to the cosmic
dark energy problem, Phys.Rev. D66 (2002) 103507, [astro-ph/0101354].
[57] R. A. Daly and S. Djorgovski, A model-independent determination of the expansion and
acceleration rates of the universe as a function of redshift and constraints on dark energy,
Astrophys.J. 597 (2003) 9–20, [astro-ph/0305197].
[58] M. Kunz, A. R. Liddle, D. Parkinson, and C. Gao, Constraining the dark fluid, Phys.Rev.
D80 (2009) 083533, [arXiv:0908.3197].
[59] Supernova Cosmology Project Collaboration, M. Kowalski et. al., Improved Cosmological
Constraints from New, Old and Combined Supernova Datasets, Astrophys.J. 686 (2008)
749–778, [arXiv:0804.4142].
[60] Y. Wang and P. Mukherjee, Observational Constraints on Dark Energy and Cosmic
Curvature, Phys.Rev. D76 (2007) 103533, [astro-ph/0703780].
[61] SDSS Collaboration Collaboration, B. A. Reid et. al., Baryon Acoustic Oscillations in the
Sloan Digital Sky Survey Data Release 7 Galaxy Sample, Mon.Not.Roy.Astron.Soc. 401
(2010) 2148–2168, [arXiv:0907.1660].
[62] A. G. Riess, L. Macri, S. Casertano, M. Sosey, H. Lampeitl, et. al., A Redetermination of the
Hubble Constant with the Hubble Space Telescope from a Differential Distance Ladder,
Astrophys.J. 699 (2009) 539–563, [arXiv:0905.0695].
[63] H. Kodama and M. Sasaki, Cosmological Perturbation Theory, Prog.Theor.Phys.Suppl. 78
(1984) 1–166.
[64] R. Durrer, Gauge invariant cosmological perturbation theory: A General study and its
application to the texture scenario of structure formation, Fund.Cosmic Phys. 15 (1994) 209,
[astro-ph/9311041].
[65] C.-P. Ma and E. Bertschinger, Cosmological perturbation theory in the synchronous and
conformal Newtonian gauges, Astrophys.J. 455 (1995) 7–25, [astro-ph/9506072].
[66] W. Hu, Covariant linear perturbation formalism, astro-ph/0402060.
[67] K. A. Malik and D. Wands, Cosmological perturbations, Phys.Rept. 475 (2009) 1–51,
[arXiv:0809.4944].
[68] H. Sandvik, M. Tegmark, M. Zaldarriaga, and I. Waga, The end of unified dark matter?,
Phys.Rev. D69 (2004) 123524, [astro-ph/0212114].
[69] R. Bean and O. Dore, Are Chaplygin gases serious contenders to the dark energy throne?,
Phys.Rev. D68 (2003) 023515, [astro-ph/0301308].
[70] G. Ballesteros, L. Hollenstein, R. K. Jain, and M. Kunz, Nonlinear cosmological consistency
relations and effective matter stresses, arXiv:1112.4837.
[71] L. Amendola, M. Kunz, and D. Sapone, Measuring the dark side (with weak lensing), JCAP
0804 (2008) 013, [arXiv:0704.2421].
[72] S. F. Daniel, E. V. Linder, T. L. Smith, R. R. Caldwell, A. Cooray, et. al., Testing General
Relativity with Current Cosmological Data, Phys.Rev. D81 (2010) 123508,
[arXiv:1002.1962].
[73] R. Bean and M. Tangmatitham, Current constraints on the cosmic growth history, Phys.Rev.
D81 (2010) 083534, [arXiv:1002.4197].
[74] P. G. Ferreira and C. Skordis, The linear growth rate of structure in Parametrized Post
Friedmannian Universes, Phys.Rev. D81 (2010) 104020, [arXiv:1003.4231].
[75] L. Pogosian, A. Silvestri, K. Koyama, and G.-B. Zhao, How to optimally parametrize
deviations from General Relativity in the evolution of cosmological perturbations?, Phys.Rev.
D81 (2010) 104023, [arXiv:1002.2382].
[76] G.-B. Zhao, T. Giannantonio, L. Pogosian, A. Silvestri, D. J. Bacon, et. al., Probing
modifications of General Relativity using current cosmological observations, Phys.Rev. D81
(2010) 103510, [arXiv:1003.0001].
[77] S. F. Daniel and E. V. Linder, Confronting General Relativity with Further Cosmological
Data, Phys.Rev. D82 (2010) 103523, [arXiv:1008.0397].
[78] W. Hu and I. Sawicki, A Parameterized Post-Friedmann Framework for Modified Gravity,
Phys.Rev. D76 (2007) 104043, [arXiv:0708.1190].
[79] W. Hu, Structure formation with generalized dark matter, Astrophys.J. 506 (1998) 485–494,
[astro-ph/9801234].
[80] R. Caldwell, R. Dave, and P. J. Steinhardt, Cosmological imprint of an energy component
with general equation of state, Phys.Rev.Lett. 80 (1998) 1582–1585, [astro-ph/9708069].
[81] C. Wetterich, Cosmology and the Fate of Dilatation Symmetry, Nucl.Phys. B302 (1988) 668.
[82] B. Ratra and P. Peebles, Cosmological Consequences of a Rolling Homogeneous Scalar Field,
Phys.Rev. D37 (1988) 3406.
[83] M. Kunz and D. Sapone, Crossing the Phantom Divide, Phys.Rev. D74 (2006) 123503,
[astro-ph/0609040].
[84] D. Huterer and M. S. Turner, Prospects for probing the dark energy via supernova distance
measurements, Phys.Rev. D60 (1999) 081301, [astro-ph/9808133].
[85] T. D. Saini, S. Raychaudhury, V. Sahni, and A. A. Starobinsky, Reconstructing the cosmic
equation of state from supernova distances, Phys.Rev.Lett. 85 (2000) 1162–1165,
[astro-ph/9910231].
[86] M. Sahlen, A. R. Liddle, and D. Parkinson, Direct reconstruction of the quintessence potential,
Phys.Rev. D72 (2005) 083511, [astro-ph/0506696].
[87] T. Chiba, T. Okabe, and M. Yamaguchi, Kinetically driven quintessence, Phys.Rev. D62
(2000) 023511, [astro-ph/9912463].
[88] C. Armendariz-Picon, V. F. Mukhanov, and P. J. Steinhardt, A Dynamical solution to the
problem of a small cosmological constant and late time cosmic acceleration, Phys.Rev.Lett. 85
(2000) 4438–4441, [astro-ph/0004134].
[89] C. Bonvin, C. Caprini, and R. Durrer, A no-go theorem for k-essence dark energy,
Phys.Rev.Lett. 97 (2006) 081303, [astro-ph/0606584].
[90] C. Bonvin, C. Caprini, and R. Durrer, Superluminal motion and closed signal curves,
arXiv:0706.1538.
[91] E. Babichev, V. Mukhanov, and A. Vikman, k-Essence, superluminal propagation, causality
and emergent geometry, JHEP 0802 (2008) 101, [arXiv:0708.0561].
[92] S. M. Carroll, M. Hoffman, and M. Trodden, Can the dark energy equation of state parameter
w be less than -1?, Phys.Rev. D68 (2003) 023509, [astro-ph/0301273].
[93] C. Deffayet, O. Pujolas, I. Sawicki, and A. Vikman, Imperfect Dark Energy from Kinetic
Gravity Braiding, JCAP 1010 (2010) 026, [arXiv:1008.0048].
[94] P. Creminelli, G. D’Amico, J. Norena, and F. Vernizzi, The Effective Theory of Quintessence:
the w < −1 Side Unveiled, JCAP 0902 (2009) 018, [arXiv:0811.0827].
[95] E. A. Lim, I. Sawicki, and A. Vikman, Dust of Dark Energy, JCAP 1005 (2010) 012,
[arXiv:1003.5751].
[96] B. Feng, X.-L. Wang, and X.-M. Zhang, Dark energy constraints from the cosmic age and
supernova, Phys.Lett. B607 (2005) 35–41, [astro-ph/0404224].
[97] W. Hu, Crossing the phantom divide: Dark energy internal degrees of freedom, Phys.Rev. D71
(2005) 047301, [astro-ph/0410680].
[98] D. Sapone, M. Kunz, and M. Kunz, Fingerprinting Dark Energy, Phys.Rev. D80 (2009)
083519, [arXiv:0909.0007].
[99] D. Sapone, M. Kunz, and L. Amendola, Fingerprinting Dark Energy II: weak lensing and
galaxy clustering tests, Phys.Rev. D82 (2010) 103535, [arXiv:1007.2188].
[100] R. de Putter, D. Huterer, and E. V. Linder, Measuring the Speed of Dark: Detecting Dark
Energy Perturbations, Phys.Rev. D81 (2010) 103513, [arXiv:1002.1311].
[101] G. Ballesteros and J. Lesgourgues, Dark energy with non-adiabatic sound speed: initial
conditions and detectability, JCAP 1010 (2010) 014, [arXiv:1004.5509].
[102] S. Perlmutter, M. S. Turner, and M. J. White, Constraining dark energy with SNe Ia and
large scale structure, Phys.Rev.Lett. 83 (1999) 670–673, [astro-ph/9901052].
[103] P. S. Corasaniti and E. J. Copeland, Constraining the quintessence equation of state with SnIa
data and CMB peaks, Phys.Rev. D65 (2002) 043004, [astro-ph/0107378].
[104] J. Weller and A. Lewis, Large scale cosmic microwave background anisotropies and dark
energy, Mon.Not.Roy.Astron.Soc. 346 (2003) 987–993, [astro-ph/0307104].
[105] C. Baccigalupi, A. Balbi, S. Matarrese, F. Perrotta, and N. Vittorio, Constraints on flat
cosmologies with tracking quintessence from cosmic microwave background observations,
Phys.Rev. D65 (2002) 063520, [astro-ph/0109097].
[106] R. Bean and A. Melchiorri, Current constraints on the dark energy equation of state,
Phys.Rev. D65 (2002) 041302, [astro-ph/0110472].
[107] B. A. Bassett, M. Kunz, D. Parkinson, and C. Ungarelli, Condensate cosmology - Dark energy
from dark matter, Phys.Rev. D68 (2003) 043504, [astro-ph/0211303].
[108] R. Bean and O. Dore, Probing dark energy perturbations: The Dark energy equation of state
and speed of sound as measured by WMAP, Phys.Rev. D69 (2004) 083503,
[astro-ph/0307100].
[109] M. Kunz, P.-S. Corasaniti, D. Parkinson, and E. J. Copeland, Model-independent dark energy
test with sigma(8) using results from the Wilkinson microwave anisotropy probe, Phys.Rev.
D70 (2004) 041301, [astro-ph/0307346]. Reviewed in Nature 431:519,2004.
[110] P. S. Corasaniti, M. Kunz, D. Parkinson, E. Copeland, and B. Bassett, The Foundations of
observing dark energy dynamics with the Wilkinson Microwave Anisotropy Probe, Phys.Rev.
D70 (2004) 083006, [astro-ph/0406608].
[111] U. Seljak and M. Zaldarriaga, A Line of sight integration approach to cosmic microwave
background anisotropies, Astrophys.J. 469 (1996) 437–444, [astro-ph/9603033].
[112] A. Lewis, A. Challinor, and A. Lasenby, Efficient computation of CMB anisotropies in closed
FRW models, Astrophys.J. 538 (2000) 473–476, [astro-ph/9911177].
[113] A. R. Liddle, How many cosmological parameters?, Mon.Not.Roy.Astron.Soc. 351 (2004)
L49–L53, [astro-ph/0401198].
[114] T. D. Saini, J. Weller, and S. Bridle, Revealing the nature of dark energy using Bayesian
evidence, Mon.Not.Roy.Astron.Soc. 348 (2004) 603, [astro-ph/0305526].
[115] R. Trotta, Applications of Bayesian model selection to cosmological parameters,
Mon.Not.Roy.Astron.Soc. 378 (2007) 72–82, [astro-ph/0504022].
[116] M. Kunz, R. Trotta, and D. Parkinson, Measuring the effective complexity of cosmological
models, Phys.Rev. D74 (2006) 023503, [astro-ph/0602378].
[117] E. T. Jaynes and G. L. Bretthorst, Probability Theory. Cambridge University Press, June,
2003.
[118] W. Feller, An introduction to probability theory and its applications. Vol. I. Third edition.
John Wiley & Sons Inc., New York, 1968.
[119] W. Feller, An introduction to probability theory and its applications. Vol. II. Second edition.
John Wiley & Sons Inc., New York, 1971.
[120] N. Metropolis, A. Rosenbluth, M. Rosenbluth, A. Teller, and E. Teller, Equation of state
calculations by fast computing machines, J.Chem.Phys. 21 (1953) 1087–1092.
[121] A. Lewis and S. Bridle, Cosmological parameters from CMB and other data: A Monte Carlo
approach, Phys.Rev. D66 (2002) 103511, [astro-ph/0205436].
[122] A. Lewis and S. Bridle, “CosmoMC Notes.” http://cosmologist.info/notes/CosmoMC.pdf.
[123] D. MacKay, Information theory, inference, and learning algorithms. Cambridge University
Press, 2003.
[124] J. Skilling, Nested Sampling, in American Institute of Physics Conference Series (R. Fischer,
R. Preuss, and U. V. Toussaint, eds.), pp. 395–405, Nov. 2004.
[125] P. Mukherjee, D. Parkinson, and A. R. Liddle, A nested sampling algorithm for cosmological
model selection, Astrophys.J. 638 (2006) L51–L54, [astro-ph/0508461].
[126] F. Feroz and M. Hobson, Multimodal nested sampling: an efficient and robust alternative to
MCMC methods for astronomical data analysis, Mon.Not.Roy.Astron.Soc. 384 (2008) 449,
[arXiv:0704.3704].
[127] D. Wraith, M. Kilbinger, K. Benabed, O. Cappe, J.-F. Cardoso, et al., Estimation of
cosmological parameters using adaptive importance sampling, Phys.Rev. D80 (2009) 023507,
[arXiv:0903.0837].
[128] R. Amanullah, C. Lidman, D. Rubin, G. Aldering, P. Astier, et al., Spectra and Light Curves
of Six Type Ia Supernovae at 0.511 < z < 1.12 and the Union2 Compilation, Astrophys.J. 716
(2010) 712–738, [arXiv:1004.1711].
[129] D. Larson, J. Dunkley, G. Hinshaw, E. Komatsu, M. Nolta, et al., Seven-Year Wilkinson
Microwave Anisotropy Probe (WMAP) Observations: Power Spectra and WMAP-Derived
Parameters, Astrophys.J.Suppl. 192 (2011) 16, [arXiv:1001.4635].
[130] C. Zunckel and R. Trotta, Reconstructing the history of dark energy using maximum entropy,
Mon.Not.Roy.Astron.Soc. 380 (2007) 865, [astro-ph/0702695].
[131] L. Amendola, M. Kunz, I. Saltas, and I. Sawicki, All you can know about Dark Energy, in
preparation.
[132] P. Brax, A.-C. Davis, and B. Li, Modified Gravity Tomography, arXiv:1111.6613.
[133] S. Tsujikawa, Modified gravity models of dark energy, Lect.Notes Phys. 800 (2010) 99–145,
[arXiv:1101.0191].
[134] G. Dvali, G. Gabadadze, and M. Porrati, 4-D gravity on a brane in 5-D Minkowski space,
Phys.Lett. B485 (2000) 208–214, [hep-th/0005016].
[135] I. D. Saltas and M. Kunz, Anisotropic stress and stability in modified gravity models,
Phys.Rev. D83 (2011) 064042, [arXiv:1012.3171].
[136] A. De Felice and T. Suyama, Vacuum structure for scalar cosmological perturbations in
Modified Gravity Models, JCAP 0906 (2009) 034, [arXiv:0904.2092].
[137] R. P. Woodard, Avoiding dark energy with 1/r modifications of gravity, Lect.Notes Phys. 720
(2007) 403–433, [astro-ph/0601672].
[138] R. Maartens and E. Majerotto, Observational constraints on self-accelerating cosmology,
Phys.Rev. D74 (2006) 023004, [astro-ph/0603353].
[139] A. Lue, R. Scoccimarro, and G. D. Starkman, Probing Newton’s constant on vast scales: DGP
gravity, cosmic acceleration and large scale structure, Phys.Rev. D69 (2004) 124015,
[astro-ph/0401515].
[140] K. Koyama and R. Maartens, Structure formation in the DGP cosmological model, JCAP 0601
(2006) 016, [astro-ph/0511634].
[141] G. W. Horndeski, Second-order scalar-tensor field equations in a four-dimensional space,
International Journal of Theoretical Physics 10 (1974) 363–384, doi:10.1007/BF01807638.
[142] A. Nicolis, R. Rattazzi, and E. Trincherini, The Galileon as a local modification of gravity,
Phys.Rev. D79 (2009) 064036, [arXiv:0811.2197].
[143] C. de Rham and G. Gabadadze, Generalization of the Fierz-Pauli Action, Phys.Rev. D82
(2010) 044020, [arXiv:1007.0443].
[144] C. Charmousis, E. J. Copeland, A. Padilla, and P. M. Saffin, General second order
scalar-tensor theory, self tuning, and the Fab Four, arXiv:1106.2000.
[145] A. De Felice and S. Tsujikawa, Conditions for the cosmological viability of the most general
scalar-tensor theories and their applications to extended Galileon dark energy models,
arXiv:1110.3878.
[146] M. Wyman, Galilean-invariant scalar fields can strengthen gravitational lensing,
Phys.Rev.Lett. 106 (2011) 201102, [arXiv:1101.1295].
[147] R. Caldwell, A. Cooray, and A. Melchiorri, Constraints on a New Post-General Relativity
Cosmological Parameter, Phys.Rev. D76 (2007) 023507, [astro-ph/0703375].
[148] E. Bertschinger and P. Zukin, Distinguishing Modified Gravity from Dark Energy, Phys.Rev.
D78 (2008) 024015, [arXiv:0801.2431].
[149] Y.-S. Song and K. Koyama, Consistency test of general relativity from large scale structure of
the Universe, JCAP 0901 (2009) 048, [arXiv:0802.3897].
[150] Y.-S. Song, L. Hollenstein, G. Caldera-Cabral, and K. Koyama, Theoretical Priors On
Modified Growth Parametrisations, JCAP 1004 (2010) 018, [arXiv:1001.0969].
[151] Y.-S. Song, G.-B. Zhao, D. Bacon, K. Koyama, R. C. Nichol, et al., Complementarity of
Weak Lensing and Peculiar Velocity Measurements in Testing General Relativity, Phys.Rev.
D84 (2011) 083523, [arXiv:1011.2106].
[152] A. Hojjati, L. Pogosian, and G.-B. Zhao, Testing gravity with CAMB and CosmoMC, JCAP
1108 (2011) 005, [arXiv:1106.4543].
[153] T. Baker, P. G. Ferreira, C. Skordis, and J. Zuntz, Towards a fully consistent
parameterization of modified gravity, Phys.Rev. D84 (2011) 124018, [arXiv:1107.0491].
[154] G.-B. Zhao, H. Li, E. V. Linder, K. Koyama, D. J. Bacon, et al., Testing Einstein Gravity
with Cosmic Growth and Expansion, arXiv:1109.1846.
[155] J. Zuntz, T. Baker, P. Ferreira, and C. Skordis, Ambiguous Tests of General Relativity on
Cosmological Scales, arXiv:1110.3830.
[156] Euclid Consortium Collaboration, L. Amendola, S. Appleby, D. Bacon, T. Baker, et al.,
Cosmology and fundamental physics with the Euclid satellite: Review Document of the Euclid
Theory Working Group, in preparation.
[157] M. Kunz, The dark degeneracy: On the number and nature of dark components, Phys.Rev.
D80 (2009) 123001, [astro-ph/0702615].
[158] W. Hu and D. J. Eisenstein, The Structure of structure formation theories, Phys.Rev. D59
(1999) 083509, [astro-ph/9809368].
[159] M. Kunz, Why we need to see the dark matter to understand the dark energy,
J.Phys.Conf.Ser. 110 (2008) 062014, [arXiv:0710.5712].
[160] SNLS Collaboration, P. Astier et al., The Supernova Legacy Survey: Measurement of ΩM, ΩΛ
and w from the first year data set, Astron.Astrophys. 447 (2006) 31–48, [astro-ph/0510447].
[161] WMAP Collaboration, D. Spergel et al., Wilkinson Microwave Anisotropy
Probe (WMAP) three year results: implications for cosmology, Astrophys.J.Suppl. 170 (2007)
377, [astro-ph/0603449].
[162] L. Amendola and S. Tsujikawa, Phantom crossing, equation-of-state singularities, and local
gravity constraints in f(R) models, Phys.Lett. B660 (2008) 125–132, [arXiv:0705.0396].
[163] G. Huey and B. D. Wandelt, Interacting quintessence, the coincidence problem and cosmic
acceleration, Phys.Rev. D74 (2006) 023519, [astro-ph/0407196].
[164] S. Das, P. S. Corasaniti, and J. Khoury, Super-acceleration as signature of dark sector
interaction, Phys.Rev. D73 (2006) 083509, [astro-ph/0510628].
[165] R. Hlozek, M. Cortes, C. Clarkson, and B. Bassett, Non-parametric Dark Energy
Degeneracies, arXiv:0801.3847.
[166] C. Clarkson, M. Cortes, and B. A. Bassett, Dynamical Dark Energy or Simply Cosmic
Curvature?, JCAP 0708 (2007) 011, [astro-ph/0702670].
[167] C. Clarkson, B. Bassett, and T. H.-C. Lu, A general test of the Copernican Principle,
Phys.Rev.Lett. 101 (2008) 011301, [arXiv:0712.3457].
[168] R. Maartens, T. Gebbie, and G. F. Ellis, Covariant cosmic microwave background
anisotropies. 2. Nonlinear dynamics, Phys.Rev. D59 (1999) 083506, [astro-ph/9808163].
[169] A. Vainshtein, To the problem of nonvanishing gravitation mass, Phys.Lett. B39 (1972)
393–394.
[170] J. Khoury and A. Weltman, Chameleon fields: Awaiting surprises for tests of gravity in space,
Phys.Rev.Lett. 93 (2004) 171104, [astro-ph/0309300].
[171] P. Brax, C. van de Bruck, A.-C. Davis, J. Khoury, and A. Weltman, Detecting dark energy in
orbit - The Cosmological chameleon, Phys.Rev. D70 (2004) 123518, [astro-ph/0408415].
[172] W. Hu and I. Sawicki, Models of f(R) Cosmic Acceleration that Evade Solar-System Tests,
Phys.Rev. D76 (2007) 064004, [arXiv:0705.1158].
[173] G.-B. Zhao, B. Li, and K. Koyama, Testing General Relativity using the Environmental
Dependence of Dark Matter Halos, Phys.Rev.Lett. 107 (2011) 071303, [arXiv:1105.0922].
[174] J.-P. Uzan, C. Clarkson, and G. F. Ellis, Time drift of cosmological redshifts as a test of the
Copernican principle, Phys.Rev.Lett. 100 (2008) 191303, [arXiv:0801.0068].
[175] J. Yoo, General Relativistic Description of the Observed Galaxy Power Spectrum: Do We
Understand What We Measure?, Phys.Rev. D82 (2010) 083508, [arXiv:1009.3021].
[176] C. Bonvin and R. Durrer, What galaxy surveys really measure, Phys.Rev. D84 (2011) 063505,
[arXiv:1105.5280].
[177] A. Challinor and A. Lewis, The linear power spectrum of observed source number counts,
Phys.Rev. D84 (2011) 043516, [arXiv:1105.5292].
[178] D. Jeong, F. Schmidt, and C. M. Hirata, Large-scale clustering of galaxies in general
relativity, arXiv:1107.5427.