
Beyond Monte Carlo: Harnessing Diffusion Models to Simulate Financial Market Dynamics

Andrew Lesniewski and Giulio Trigila
Department of Mathematics

Baruch College
One Bernard Baruch Way
New York, NY 10010
USA

December 3, 2024

Abstract
We propose a highly efficient and accurate methodology for generating synthetic financial
market data using a diffusion model approach. The synthetic data produced by our methodology
align closely with observed market data in several key aspects: (i) they pass the two-sample
Cramér-von Mises test for portfolios of assets, and (ii) Q-Q plots demonstrate consistency
across quantiles, including in the tails, between observed and generated market data. Moreover,
the covariance matrices derived from a large set of synthetic market data exhibit significantly
lower condition numbers compared to the estimated covariance matrices of the observed data.
This property makes them suitable for use as regularized versions of the latter. For model training,
we develop an efficient and fast algorithm based on numerical integration rather than Monte Carlo
simulations. The methodology is tested on a large set of equity data.

1 Introduction
In this paper, we present an efficient methodology for generating synthetic financial market data,
based on the diffusion model approach. Diffusion models [19], [20], [6], [21], [22], a class of deep
generative models, are mathematical models designed to generate synthetic data by Monte Carlo sim-
ulating a reverse-time stochastic process, which is specified as an Ito stochastic differential equation
(diffusion process). The diffusion model approach to synthetic data generation is a two-stage process,
consisting of encoding and decoding, and it employs linear stochastic differential equations.
These models have demonstrated impressive results across various applications, including com-
puter vision, natural language processing, time series modeling, multimodal learning, waveform sig-
nal processing, robust learning, molecular graph modeling, materials design, and inverse problem
solving [25]. Despite their successes, certain aspects of diffusion models, particularly those related
to the learning mechanism, require further refinement and development. Ongoing research efforts
focus on addressing these performance-related challenges and enhancing the overall capabilities of
diffusion modeling methodologies.
The significance of synthetic data across various domains, including addressing challenges related
to data privacy, regulatory compliance, and fraud detection, is highlighted in [10] and the references

therein. In the context of the finance industry, the ability to generate high-quality synthetic market
data plays a crucial role in applications such as portfolio allocation, portfolio and enterprise risk
quantification, the design and back-testing of trading and market-making strategies, what-if analysis,
and more.
In financial practice, accurately estimating various statistics of asset returns is critical. A key
example is the covariance matrix, which plays a central role in quantitative metrics such as portfolio
asset allocation and value at risk. These methodologies often assume, either explicitly or implicitly,
that returns are drawn from a multidimensional Gaussian distribution or, at a minimum, an ellipti-
cal distribution. Various estimation techniques have been proposed, including: (i) standard sample
(or maximum likelihood) estimators, (ii) shrinkage estimators [15], (iii) factor-based estimators [4],
and others [3]. However, standard sample estimators often produce poorly conditioned covariance
matrices with unstable inverses. This issue is commonly attributed to two primary factors:

(i) First, the dimensionality of the problem poses challenges: a covariance matrix of dimension d
has d(d+1)/2 parameters, which are estimated from a time series of n observations. For typical
portfolio sizes, this leads to a severely underdetermined problem. The alternative methods listed
above address this issue by regularizing the sample estimator using exogenous mechanisms.

(ii) Second, it is often believed that the estimation process itself is somehow prone to “error maxi-
mization”, rendering it unreliable for practical use.

A natural approach to regularizing the estimated covariance matrix is to reliably generate a large
number of samples from the unknown, non-parametric probability distribution underlying the asset
returns. This would allow for the estimation of statistics, including the covariance matrix, with im-
proved conditioning properties.
Specifically, we consider a portfolio of d assets A^1, . . . , A^d (e.g. equities), whose prices are
observed at discrete times. By X^1, . . . , X^d we denote the returns on these assets. While the frequency
of price observations is irrelevant to our analysis, for clarity, we assume daily observations. Denote
the d-dimensional vectors of their returns¹ at each time t_i, i = 1, . . . , n, where t_1 < t_2 < . . . < t_n,
by X = {x_i, i = 1, . . . , n}. Notably, we impose no specific assumptions on the distribution of these
returns. The proposed method of generating synthetic data consistent with X consists of the following
steps:

1. We view the market data X as a sample drawn from an unspecified probability distribution
P0 . To each data point in X random noise is added incrementally over a time interval until the
data is completely deformed into white (Gaussian) noise. This random noise is modeled as a
suitable diffusion process.

2. Specifically, we generate random noise by a multi-dimensional stochastic differential equation


(SDE) of dimension d, whose initial value at time t = 0 is an element of X . This SDE is
referred to as the denoising SDE (DSDE). The diffusion process is carefully chosen to smoothly
transform the distribution P_0 into d-dimensional white noise as the time variable approaches
the terminal time t = 1.² The DSDE is parameterized by coefficient functions that control how
quickly the process transitions to white noise.

¹ Defined, as usual, as x_i = (price_{t_i} − price_{t_{i−1}})/price_{t_{i−1}}.
² This choice of terminal time is conventional and does not result in any loss of generality.

3. Next, we estimate the coefficient functions using the score matching method, which estimates
the gradient of the log probability of the data. This method teaches the model to move
along the score in the direction that most rapidly increases the probability of the data. Score
matching involves minimizing a nonlinear functional expressed as a d-dimensional integral.
The standard approach for this optimization problem used in the literature (Monte Carlo simu-
lation) is inefficient due to the high dimensionality of the problem. Instead, we develop a rapid,
efficient method by reducing the dimensionality of the problem to a manageable size, where
numerical quadrature can be carried out. This completes the encoding part of the algorithm.
4. Once we determine the score function of the DSDE, we set up the corresponding reverse-time
SDE. This reverse-time SDE generates the same sample paths as the original (forward-time)
SDE starting at the terminal time.
5. Since the terminal probability is white noise, we easily generate samples from that distribution.
Using these samples as terminal t = 1 values, we generate samples from the reverse-time SDE
using the Euler-Maruyama numerical scheme. The t = 0 values of these samples represent
the desired synthetic market data. Importantly, there’s no limit to the number of synthetic data
points we can generate. This completes the decoding part of the algorithm.
6. To assess whether the market data of size n and the generated synthetic sample of size m come
from the same (unknown) probability distribution P_0, we perform a two-sample Cramér-von
Mises test at the desired significance level.
Earlier work on this topic includes several notable publications. In [14], a market generator
based on a restricted Boltzmann machine (RBM) was introduced. However, tests using observed
market data revealed significant deviations in the tails of the RBM-generated distributions compared
to the actual data. Subsequently, a generative adversarial network (GAN) methodology for generating
market scenarios was developed in [24], demonstrating improved performance in the tails, aligning
more closely with market data. The diffusion model generative methodology proposed in this work
further enhances tail consistency, producing synthetic market scenarios that align well with observed
market data.
From a mathematical perspective, the style of this paper is informal, avoiding formally stated
theorems and proofs. Standard notations are used throughout, with the following conventions for
vector operations:
1. We employ Dirac’s notation to express various linear algebra operations, such as the inner
product of two vectors ⟨x|y⟩, the projection operator |x⟩⟨x| on a vector, etc.
2. For two vectors x, y ∈ R^d, x = (x^1, . . . , x^d), y = (y^1, . . . , y^d), their product xy is understood
as the Hadamard (componentwise) product, i.e. (xy)^i = x^i y^i, for i = 1, . . . , d. Additionally,
for a vector x ∈ R^d and a function f : R → R, f(x) represents the vector whose components
result from applying f componentwise, i.e. f(x)^i = f(x^i). Similarly, inequalities like
f(x) > 0 indicate that all components of f(x) are positive.
The paper is organized as follows. Section 2 reviews the methodology for model learning through
score matching. For completeness, Section 3 summarizes key results from the theory of stochastic
differential equations, which are applied later in the paper. Section 4 introduces the framework of
denoising stochastic differential equations. Section 5 constitutes the core technical contribution, pre-
senting an efficient algorithm for model learning via score matching. This method eliminates the need
for time-consuming Monte Carlo simulations, replacing them with precise and efficient analytical ap-
proximations. In Section 6, we define the concept of synthetic market scenarios generated through
diffusion modeling. Finally, Section 7 showcases a series of experiments using equity market data,
validating the effectiveness of the proposed methodology.

2 Model learning via score matching


We begin by reviewing the score matching method and its refinement to denoising score matching.
These methods are the foundations of the diffusion model approach.

2.1 Explicit score matching


The score matching methodology for model learning was introduced by Hyvärinen in [7] as a practical
alternative to the maximum likelihood estimation (MLE) of the parameters of a probability distribu-
tion. The advantage of score matching over MLE is that it is directly applicable even if the probability
density function (PDF) is known only up to a normalization factor. The method consists in the fol-
lowing.
Let p(x) be the PDF of a continuous probability distribution defined for x ∈ R^d. Its (Stein) score
s(x) is defined as

s(x) = ∇ log p(x).    (1)
Notice that if p(x) is defined only up to a multiplicative normalization constant (which may be hard
to compute), p(x) ∼ q(x), its score is given by s(x) = ∇ log q(x). In particular, the score of a
d-dimensional Gaussian distribution with mean µ and covariance C is given by

s(x) = −C^{−1}(x − µ),    (2)

an expression that does not involve the determinant of C. The definition of the score can easily be
extended to a family of probability distributions p(t, x) parameterized by time t.
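As a quick numerical illustration (ours, not part of the paper), the closed form (2) can be checked against a finite-difference gradient of the unnormalized log-density; the normalization constant drops out, as claimed:

```python
import numpy as np

# Check: score of N(mu, C) equals -C^{-1}(x - mu), computed without the
# normalization constant. All parameter values below are arbitrary examples.
d = 3
rng = np.random.default_rng(0)
mu = rng.normal(size=d)
A = rng.normal(size=(d, d))
C = A @ A.T + np.eye(d)                     # positive-definite covariance
x = rng.normal(size=d)

score = -np.linalg.solve(C, x - mu)         # closed form, eq. (2)

def log_q(x):
    # Unnormalized log-density: the term involving det(C) is omitted.
    r = x - mu
    return -0.5 * r @ np.linalg.solve(C, r)

eps = 1e-6
fd = np.array([(log_q(x + eps * np.eye(d)[i]) - log_q(x - eps * np.eye(d)[i])) / (2 * eps)
               for i in range(d)])
assert np.allclose(score, fd, atol=1e-5)    # the two gradients agree
```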
The probability distribution p(x) typically depends on a set of parameters θ, i.e. p(x) = p(x; θ).
The objective of model training is to learn these parameters from the available data. In the context
of the Gaussian distribution, this dependence is established by specifying the functional relationship
between the mean and covariance in terms of the parameters, i.e. µ = µ(θ), C = C(θ).
Model learning via score matching consists in minimizing the objective function

L(θ) = (1/2) E ∥s(· ; θ) − s(·)∥²
     = (1/2) ∫ ∥s(x; θ) − s(x)∥² p(x) dx,    (3)

where p(x) represents here the continuum of data, and s(x; θ) is the score of the parameterized PDF³,

s(x; θ) = ∇_x log p(x; θ).    (4)

Integrating by parts, one can rewrite this expression in the form [7]

L(θ) = ∫ ( ∇ᵀ s(x; θ) + (1/2) ∥s(x; θ)∥² ) p(x) dx + const,    (5)

³ Note that the score in the sense used here is different from the Fisher score defined as ∇_θ log p(x; θ).

where const is independent of θ.


In practice, we use the finite sample version of this objective function. If X = {x_i, i = 1, . . . , n},
x_i ∈ R^d, is a data sample, then the corresponding empirical objective function is given by

L_X(θ) = (1/n) Σ_{i=1}^n ( ∇ᵀ s(x_i; θ) + (1/2) ∥s(x_i; θ)∥² )
       = ∫ ( ∇ᵀ s(x; θ) + (1/2) ∥s(x; θ)∥² ) p_X(x) dx,    (6)

where

p_X(x) = (1/n) Σ_{i=1}^n δ(x − x_i)    (7)
is the empirical probability distribution function corresponding to the observations X . Here and in
the following, δ(x) denotes the d-dimensional Dirac delta function. It was shown in [7] that, in the case
of the Gaussian distribution with θ chosen as the set of all independent parameters of µ and C, the
minimizers of this objective function are the usual MLE values for the estimates of µ and C.

2.2 Denoising score matching


A practically important variant of score matching, replacing the empirical distribution (7) with a
smooth probability distribution, was developed by Vincent [23]. The method consists in regularizing
the empirical distribution pX (x) by its Parzen-Rosenblatt estimate,
p_{X,h}(x) = (1/n) Σ_{i=1}^n p_h(x|x_i),    (8)

where p_h(x|x_i) is the Gaussian kernel of bandwidth h > 0,

p_h(x|x_i) = (2πh²)^{−d/2} exp( −∥x − x_i∥²/(2h²) ).    (9)

This leads to the objective function

L_{X,h}(θ) = (1/2) ∫ ∥s(x; θ) − ∇ log p_{X,h}(x)∥² p_{X,h}(x) dx,    (10)

which is equivalent to (6), as h → 0.
Interpreting p_h(x|y) as a conditional probability distribution, we can express the Parzen-Rosenblatt
estimate (8) as the marginal

p_{X,h}(x) = ∫ p_h(x, y) dy    (11)

of the joint distribution

p_h(x, y) = p_h(x|y) p_X(y).    (12)

Using this, an explicit calculation [23] shows that this objective function is equivalent to the denoising
score matching (DSM) objective function given by

L^{DSM}_{X,h}(θ) = (1/2) ∫ ∥s(x; θ) − ∇_x log p_h(x|y)∥² p_h(x, y) dx dy.    (13)
This form of the score matching principle is particularly beneficial when used with Monte Carlo
simulations. We will see below that this approach is equally applicable to numerical quadrature
methods. The arguments x can be viewed as “corruptions” of the observations in the data set X by
perturbing the data by noise drawn from a suitable probability distribution. The problem of optimizing
(13) can be viewed as the problem of denoising these corrupted observations.
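For the Gaussian kernel (9), the conditional score appearing in (13) is available in closed form, ∇_x log p_h(x|y) = −(x − y)/h². The following minimal sketch (our illustration; the paper replaces this kind of sampling with the quadrature of Section 5) estimates the DSM objective by Monte Carlo:

```python
import numpy as np

def dsm_loss(score_fn, X, h, n_samples=1000, rng=None):
    """Monte Carlo estimate of the DSM objective (13) with Gaussian kernel (9).
    The target score is grad_x log p_h(x|y) = -(x - y)/h**2.
    score_fn: maps an (N, d) array of points to an (N, d) array of scores.
    X: (n, d) array of observations. Illustrative sketch only."""
    rng = rng or np.random.default_rng()
    n, d = X.shape
    y = X[rng.integers(0, n, size=n_samples)]            # y ~ p_X
    x = y + h * rng.standard_normal((n_samples, d))      # x ~ p_h(.|y)
    target = -(x - y) / h**2
    diff = score_fn(x) - target
    return 0.5 * np.mean(np.sum(diff**2, axis=1))
```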

3 Diffusion models
The incorporation of noise into the system is achieved by generating sample paths of a suitable
diffusion (Ito) process, which we refer to as a denoising stochastic differential equation. This
approach facilitates the simulation of noise effects within the model.

3.1 Diffusion processes


3.1.1 Stochastic differential equations
A diffusion process X_t ∈ R^d over the time interval t ∈ [0, 1] is the solution to a stochastic differential
equation (SDE) [17], [8]

dX_t = α(t, X_t) dt + σ(t, X_t) dW_t,    X_0 = x_0,    (14)

where W_t ∈ R^p is the standard p-dimensional Brownian motion. The drift vector α(t, X_t) ∈ R^d
and diffusion matrix σ(t, X_t) ∈ Mat_{d,p}(R) are assumed to satisfy suitable technical conditions guar-
anteeing the existence and uniqueness of the solution. We let p(t, x) denote the probability density
function of X_t.
Note that the initial point x_0 is either known a priori, or more generally, selected from an initial
distribution p_0(x) = p(0, x). The choice of this distribution is part of the problem specification.
Within our scope, the initial distribution is assumed to be the empirical distribution (7), p_0(x) =
p_X(x). All subsequent points X_t, for t > 0, follow the distribution determined by equation (14).
Let 0 < u < t, and x, y ∈ R^d. The transition probability p(t, x|u, y) is defined as the conditional
probability

p(t, x|u, y) = P(X_t = x|X_u = y),    (15)

where P is the probability measure associated with the process (14). It satisfies two fundamental
partial differential equations (see the mathematics literature cited above, and also [18], [11] for the
physics perspective):

1. The forward Kolmogorov equation (also known as the Fokker-Planck equation) with respect to t
and x:

∂/∂t p(t, x|u, y) = −∇_xᵀ( α(t, x) p(t, x|u, y) ) + (1/2) ∇_xᵀ∇_x( D(t, x) p(t, x|u, y) ),
p(u, x|u, y) = δ(x − y).    (16)

2. The backward Kolmogorov equation with respect to u and y:

−∂/∂u p(t, x|u, y) = α(u, y)ᵀ ∇_y p(t, x|u, y) + (1/2) D(u, y) ∇_yᵀ∇_y p(t, x|u, y),
p(t, x|t, y) = δ(x − y),    (17)

where
D(t, x) = σ(t, x) σ(t, x)ᵀ    (18)
is the diffusion matrix.
Note that the probability density function p(t, x) of the process X_t is the solution to the forward
Kolmogorov equation:

∂/∂t p(t, x) = −∇_xᵀ( α(t, x) p(t, x) ) + (1/2) ∇_xᵀ∇_x( D(t, x) p(t, x) ),
p(0, x) = δ(x − x_0).    (19)

In the context of diffusion models, when linear SDEs are employed, the probability density function
p(t, x) indeed exists and is, in fact, Gaussian.

3.1.2 Linear SDEs


Specifically, a d-dimensional linear SDE driven by a d-dimensional Brownian motion (i.e. p = d in
the notation of (14)) is a diffusion process of the form

dX_t = α(t, X_t) dt + σ(t) dW_t,    X_0 = x_0,    (20)

with a linear drift

α(t, X_t) = α_1(t) X_t + α_0(t).    (21)

Here

(i) the coefficients α_1(t), α_0(t) ∈ R^d are vectors of deterministic functions of time, and

(ii) σ(t) ∈ R^d is a vector of positive, deterministic functions σ_i(t).


The general solution to (20) is explicitly given by

X_t = x_0 e^{∫_0^t α_1(s) ds} + ∫_0^t α_0(v) e^{∫_v^t α_1(s) ds} dv + ∫_0^t σ(v) e^{∫_v^t α_1(s) ds} dW_v.    (22)

This is a Gaussian diffusion process whose mean is given by

µ(t, x_0) = E(X_t)
          = x_0 e^{∫_0^t α_1(s) ds} + ∫_0^t α_0(u) e^{∫_u^t α_1(s) ds} du,    (23)

and whose covariance matrix C(t) is diagonal with

C(t)_{ij} = Cov(X_t)_{ij}
          = δ_{ij} ∫_0^t σ_i(v)² e^{2 ∫_v^t α_1^i(s) ds} dv,    (24)

where δ_{ij} denotes Kronecker's delta. Hence the probability law of X_t is Gaussian, with density

p(t, x) = n(x; µ(t, x_0), C(t)),    (25)



where n(x; µ, C) is the PDF of the d-dimensional Gaussian distribution N (µ, C).
More generally, it follows from (22) that, when conditioned on X_u = y (where u ≤ t), we have

X_t = y e^{∫_u^t α_1(s) ds} + ∫_u^t α_0(v) e^{∫_v^t α_1(s) ds} dv + ∫_u^t σ(v) e^{∫_v^t α_1(s) ds} dW_v.    (26)

As a consequence, the transition probability is given by the kernel

p(t, x|u, y) = n(x; µ(t|u, y), C(t|u)),    (27)

with mean

µ(t|u, y) = E(X_t | X_u = y)
          = y e^{∫_u^t α_1(s) ds} + ∫_u^t α_0(v) e^{∫_v^t α_1(s) ds} dv,    (28)

and covariance

C(t|u)_{ij} = Cov(X_t | X_u)_{ij}
            = δ_{ij} ∫_u^t σ_i(v)² e^{2 ∫_v^t α_1^i(s) ds} dv.    (29)

The multivariate SDE above is, in fact, a system of univariate SDEs. If the initial condition were
drawn from a product of independent univariate distributions, studying its solution would involve
solving d independent one-dimensional SDEs. However, in our case, the initial condition X0 is
drawn from an unknown, possibly complex distribution. As a result, we consider the entire system
(22) as an aggregate rather than a collection of individual univariate problems.
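The closed-form moments (23)-(24) are straightforward to verify by simulation. The following sketch (with assumed constant coefficients α_1(t) = −a, α_0(t) = c, σ(t) = σ; all values illustrative) compares them with an Euler-Maruyama estimate for a scalar linear SDE:

```python
import numpy as np

# Scalar linear SDE dX = (-a X + c) dt + sigma dW on [0, T].
# For these coefficients, (23)-(24) give
#   mu(T) = x0 exp(-a T) + (c/a)(1 - exp(-a T)),
#   C(T)  = sigma^2 (1 - exp(-2 a T)) / (2 a).
a, c, sigma, x0, T = 1.5, 0.3, 0.8, 2.0, 1.0
K, n_paths = 500, 100_000
dt = T / K
rng = np.random.default_rng(1)

X = np.full(n_paths, x0)
for _ in range(K):
    X += (-a * X + c) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)

mu_exact = x0 * np.exp(-a * T) + (c / a) * (1 - np.exp(-a * T))
var_exact = sigma**2 * (1 - np.exp(-2 * a * T)) / (2 * a)
print(X.mean(), mu_exact)   # agree up to Monte Carlo and discretization error
print(X.var(), var_exact)
```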

3.2 Reverse-time diffusion process


The key input to a diffusion model is the reverse-time diffusion process, which corresponds to the
(forward-time) diffusion process defined above. This process constitutes the decoding phase of the
model, and the generated sample paths represent the synthetic scenarios. The reverse-time diffusion
process corresponding to a forward-time diffusion (14) is an SDE which runs in reverse time and
which generates the same sample paths as the original process. The general concept of reverse-time
diffusion processes was introduced and studied by Anderson in [1], who extended earlier work on
reverse-time linear diffusions.
The theory developed in [1] can be summarized as follows. Consider the diffusion process defined
by (14) with α(t, x) and σ(t, x) sufficiently regular, so that the probability density function p(t, x), i.e.
the solution to the Kolmogorov equation (19), exists and is unique.⁴ In the case of linear diffusions,
these assumptions are satisfied. We define the following p-dimensional process W̄_t:

dW̄_t = dW_t + (1/p(t, X_t)) ∇( σ(t, X_t) p(t, X_t) ) dt,    W̄_0 = 0.    (30)

Furthermore, X_t is independent of the increments W̄_t − W̄_s, for all t ≥ s ≥ 0, and is the solution to
the following SDE:

dX_t = ᾱ(t, X_t) dt + σ(t, X_t) dW̄_t,    (31)

⁴ A general set of explicit sufficient conditions on α(t, x) and σ(t, x) guaranteeing this was formulated in [5].

where

ᾱ(t, X_t) = α(t, X_t) − (1/p(t, X_t)) ∇( D(t, X_t) p(t, X_t) ).    (32)
Equation (31) is the reverse-time SDE corresponding to (14). This reverse-time diffusion process
mirrors the original forward-time process while running in reverse time.
It can be verified that the process W̄_t exhibits the natural properties of a Brownian motion running
in reverse time, and is referred to as the reverse-time Brownian motion. The presence of drift in W̄_t
is, of course, related to the fact that it is associated with a different measure than the one governing
W_t. Specifically, it is associated with the measure obtained from the original measure via Girsanov's
theorem after the shift (32) of the drift term in (14).
In particular, if σ(t, x) = σ(t) is independent of the state variable, then the formulas stated above
can be expressed in terms of the score function s(t, x) of the probability distribution p(t, x) in a more
straightforward manner. Namely,

dW̄_t = dW_t + σ(t) s(t, X_t) dt,    (33)

is the reverse-time Brownian motion, and

ᾱ(t, X_t) = α(t, X_t) − σ(t)² s(t, X_t)    (34)

is the reverse-time drift. The corresponding reverse-time diffusion process reads:

dX_t = ( α(t, X_t) − σ(t)² s(t, X_t) ) dt + σ(t) dW̄_t.    (35)

In the case of a linear SDE of the form (20), the corresponding reverse-time diffusion process
remains linear. This is a consequence of the explicit expression (2) for the true score function, ac-
cording to which s(t, X_t) is linear in X_t. However, it should be emphasized that the reverse-time
diffusion process used for scenario generation involves the estimated score function s(t, X_t; θ), where
θ denotes a collection of trainable parameters. This function is trained (e.g., using neural networks)
on the data X , and it is only approximately linear.

4 Denoising SDEs and their time reversals


A denoising SDE (DSDE) is a linear SDE with suitable mean-reversion properties, designed to model
noise within the context of a diffusion model. The following three examples are commonly used as
standard realizations of DSDEs [22]:

1. Variance preserving (VP) SDE.

2. Sub-variance preserving (sub-VP) SDE.

3. Variance exploding (VE) SDE.

In the following sections, we will focus on the VP SDE. However, for completeness, we also provide
brief discussions of the sub-VP and VE SDEs.

4.1 Variance preserving SDE


4.1.1 Forward time VP SDE
Let β(t) ∈ R^d be a vector of strictly positive, deterministic functions β_i(t), for 0 < t < 1 and
i = 1, . . . , d, such that ∫_0^t β_i(s) ds → ∞, as t → 1. A convenient choice of β_i(t) is the power
function
β_i(t) = b_i (1 − t)^{−(1+a)},    (36)
where a ≥ 0, and b_i > 0, for i = 1, . . . , d, are fixed hyperparameters of the problem. These functions
have the property that they grow power-like to infinity as t ↑ 1.
We consider the d-dimensional linear SDE

dX_t = −(1/2) β(t) X_t dt + √β(t) dW_t.    (37)
According to (22), its solution reads

X_t = x_0 exp( −(1/2) ∫_0^t β(s) ds ) + ∫_0^t √β(v) exp( −(1/2) ∫_v^t β(s) ds ) dW_v.    (38)

Thanks to the specific form of the drift and diffusion coefficients, we can express this equation more
conveniently. Specifically, we introduce a new time variable:

τ(t) = ∫_0^t β(s) ds.    (39)

Next, we define the time-changed process:

Z_u = X_{τ^{−1}(u)}.    (40)

In other words, Z_t represents the pullback of X_t under the time change defined by (39). Leveraging
the change of time formula for Brownian motion [17],

√(τ′(t)) dW_t = dW_{τ(t)},    (41)

and noting that τ(0) = 0, we can express (38) as follows:

Z_τ = z_0 e^{−τ/2} + ∫_0^τ e^{−(τ−s)/2} dW_s,    (42)

where τ = τ(t) and z_0 = x_0.


Remarkably, the time-changed process corresponds to the multi-dimensional Ornstein-Uhlenbeck
process with long-term mean zero:

dZ_τ = −(1/2) Z_τ dτ + dW_τ.    (43)

The mean of Z_τ is given by

E(Z_τ) = x_0 e^{−τ/2},    (44)

and its variance is

Var(Z_τ) = 1 − e^{−τ}.    (45)

The PDF of the VP process is explicitly given by

p(t, x) = n(x; x_0 e^{−τ(t)/2}, 1 − e^{−τ(t)}),    (46)

which asymptotically tends to the standard normal distribution, as t → 1, i.e. X_t becomes white
noise in this limit.
Explicitly, with the choice (36), the new time variables are given by

τ_i(t) = −b_i log(1 − t),                 if a = 0,
τ_i(t) = (b_i/a) ( (1 − t)^{−a} − 1 ),    if a > 0.    (47)

The parameter a determines the rate at which τ_i(t) approaches infinity as t ↑ 1, with a = 0 corre-
sponding to the slowest rate.
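A small helper implementing (47) and the resulting VP mean and variance (44)-(45); a sketch with assumed array conventions:

```python
import numpy as np

def tau(t, b, a=0.0):
    """Time change (47) for beta_i(t) = b_i (1 - t)^{-(1+a)}, 0 <= t < 1.
    b is the vector of hyperparameters b_i; broadcasts over t."""
    t = np.asarray(t, dtype=float)
    b = np.asarray(b, dtype=float)
    if a == 0.0:
        return -b * np.log1p(-t)
    return (b / a) * ((1.0 - t) ** (-a) - 1.0)

def vp_moments(t, x0, b, a=0.0):
    """Mean (44) and variance (45) of the VP process at time t given X_0 = x0."""
    tt = tau(t, b, a)
    return x0 * np.exp(-tt / 2.0), 1.0 - np.exp(-tt)
```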

4.1.2 Reverse-time VP SDE


According to (35), the reverse-time SDE for X_t reads

dX_t = −β(t) ( (1/2) X_t + s(t, X_t) ) dt + √β(t) dW̄_t,    (48)

where s(t, X_t) is the score function of the forward-time process. As explained in Section 3.2, this
score function is replaced by the estimated score function.
After the time change (39) this equation takes the form

dZ_τ = −( (1/2) Z_τ + s(t^{−1}(τ), Z_τ) ) dτ + dW̄_τ,    (49)

which is used for scenario generation.

4.2 Sub-variance preserving SDE


4.2.1 Forward-time sub-VP SDE
The sub-variance preserving (sub-VP) SDE, introduced by [22], modifies the VP SDE as follows:

dX_t = −(1/2) β(t) X_t dt + √( β(t) (1 − e^{−2 ∫_0^t β(s) ds}) ) dW_t.    (50)

Rather than explicitly solving this cumbersome equation, we can directly invoke the time change
method.
Using the notation introduced earlier, and applying the change of time transformation (41), we
express this SDE in the form:

dZ_τ = −(1/2) Z_τ dτ + √(1 − e^{−2τ}) dW_τ.    (51)

The solution for Z_τ is given by:

Z_τ = z_0 e^{−τ/2} + ∫_0^τ √(1 − e^{−2s}) e^{−(τ−s)/2} dW_s.    (52)

While the mean of Z_τ remains given by (44), its variance becomes:

Var(Z_τ) = (1 − e^{−τ})².    (53)

This expression is dominated by (45).

The PDF of the sub-VP process is explicitly given by

p(t, x) = n(x; x_0 e^{−τ(t)/2}, (1 − e^{−τ(t)})²).    (54)

Consequently, X_t tends to pure white noise, as t → 1, albeit with a lower variance compared to that
of the VP SDE.

4.2.2 Reverse-time sub-VP SDE


According to (31), the reverse-time SDE for X_t reads

dX_t = −β(t) ( (1/2) X_t + (1 − e^{−2 ∫_0^t β(s) ds}) s(t, X_t) ) dt + √( β(t) (1 − e^{−2 ∫_0^t β(s) ds}) ) dW̄_t,    (55)

where s(t, X_t) is the (estimated) score function. After the time change (39) this equation takes the form:

dZ_τ = −( (1/2) Z_τ + (1 − e^{−2τ}) s(t^{−1}(τ), Z_τ) ) dτ + dW̄_τ.    (56)

4.3 Variance exploding SDE


4.3.1 Forward-time VE SDE
Let v(t) ∈ R^d be a vector of positive, strictly increasing differentiable functions. A convenient choice
is
v_i(t) = b_i a^t,    (57)
with constant hyperparameters a > 1 and b_i > 0, for i = 1, . . . , d.
We consider the following linear, driftless SDE:

dX_t = √( dv(t)/dt ) dW_t.    (58)

Its solution is given by

X_t = X_0 + ∫_0^t √( dv(s)/ds ) dW_s,    (59)

and so

E(X_t) = x_0,    (60)

and

Cov(X_t) = v(t).    (61)

This solution can also be understood in terms of the change of time formula (41) with

τ(t) = v(t).    (62)



Explicitly,

dZ_τ = dW_τ,    (63)

with Z_0 = z_0 (= x_0), and

τ_i(t) = b_i a^t.    (64)

Clearly, the probability density function of (58) is explicitly given by

p(t, x) = n(x; x_0, τ(t)),    (65)

which explodes to infinity, as t → ∞.

4.3.2 Reverse-time VE SDE


According to (31), the reverse-time SDE for X_t reads

dX_t = −s(t, X_t) (dv(t)/dt) dt + √( dv(t)/dt ) dW̄_t.    (66)

After the change of time this equation takes the form

dZ_τ = −s(t^{−1}(τ), Z_τ) dτ + dW̄_τ.    (67)

5 Model training via score matching


The parameters θ are estimated from the objective function [22]

L^{SM}(θ) = (1/2) ∫_0^1 ∫_{R^d} λ(t) ∥s(t, x; θ) − ∇ log p(t, x)∥² p(t, x) dx dt,    (68)

where λ(t) > 0 is a suitably chosen weight function. Its purpose is to regularize the behavior of
∥s(t, x; θ) − ∇ log p(t, x)∥² at small values of t. As explained in Section 2.2, this objective function is
equivalent to the DSM objective function [23]:

L^{DSM}(θ) = (1/2) ∫_0^1 ∫_{R^d} ∫_{R^d} λ(t) ∥s(t, x; θ) − ∇ log p(t, x|0, x_0)∥² p(t, x|0, x_0) p_0(x_0) dx_0 dx dt,    (69)

where p(t, x|0, x_0) = p(t, x) is the transition probability density (46), and p_0(x_0) is the (empirical)
distribution (7) of the data set X.

5.1 Parameterization of the DSM objective function


According to (2), the model score function s(t, x) is given by

s(t, x) = −(1/C(t)) ( x − µ(t, x_0) ),    (70)

where we have taken advantage of the fact that our covariance matrix is diagonal. We parameterize
the fitted score s(t, x; θ) as a fully connected neural net with a single hidden layer, namely

s(t, x; θ) = (1/C(t)) K(x; θ).    (71)

Here K(x; θ) is explicitly given by

K^i(x; θ) = Σ_{j=1}^h c^{ij} a( Σ_{k=1}^d w_{jk} x^k + b_j ) + d^i,    (72)

where i = 1, . . . , d, and where a(u) is a smooth, positive activation function, such as the softplus
function,

a(u) = log(1 + e^u).    (73)

The neuronal weights and biases θ = (w, b, c, d) have the following properties:

(i) w ∈ Mat_{h,d}(R), where h is the number of neurons in the hidden layer,

(ii) b ∈ R^h,

(iii) c ∈ Mat_{d,h}(R), and

(iv) d ∈ R^d.

Notice that we use upper and lower indices in accordance with our convention to label the components
of a single x with upper indices, while examples of x are labeled with lower indices. We refer to this
network as the noise conditional score network (NCSN).
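In code, the NCSN (71)-(73) amounts to a few lines; the sketch below (shape conventions are our assumption) evaluates K(x; θ) and the fitted score for a diagonal C(t):

```python
import numpy as np

def softplus(u):
    # Numerically stable softplus a(u) = log(1 + e^u), eq. (73).
    return np.maximum(u, 0.0) + np.log1p(np.exp(-np.abs(u)))

def ncsn_K(x, w, b, c, d):
    """Single-hidden-layer network (72): K(x) = c @ a(w @ x + b) + d.
    Shapes: x (d_in,), w (h, d_in), b (h,), c (d_in, h), d (d_in,)."""
    return c @ softplus(w @ x + b) + d

def fitted_score(t, x, theta, C_of_t):
    """Fitted score (71): s(t, x; theta) = K(x; theta) / C(t), C(t) diagonal.
    C_of_t maps t to the (d_in,) vector of diagonal entries of C(t)."""
    w, b, c, d = theta
    return ncsn_K(x, w, b, c, d) / C_of_t(t)
```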
Using (27) and (70) to compute the transition probability, we can rewrite this expression in the
form

L^{DSM}(θ) = (1/2) ∫_0^1 ∫_{R^d} ∫_{R^d} λ(t) ∥ s(t, x; θ) + (x − µ(t, x_0))/C(t) ∥² p(t, x|0, x_0) p_0(x_0) dx_0 dx dt

           = (1/(2n)) Σ_{i=1}^n ∫_0^1 ∫_{R^d} λ(t) ∥ ( K(x; θ) + x_i − µ(t, x_i) )/C(t) ∥² f_t^i(x) dx dt,    (74)

where we have introduced the notation

f_t^i(x) = n(x; µ(t, x_i), C(t)).    (75)
We will now choose λ(t) to have the form:

λ(t) = λ_0(t) C(t)²,    (76)

where λ_0(t) is continuous for all t ≥ 0, and so

L^{DSM}(θ) = (1/(2n)) Σ_{i=1}^n ∫_0^1 ∫_{R^d} λ_0(t) ∥K(x; θ) + x_i − µ(t, x_i)∥² f_t^i(x) dx dt.    (77)
Explicitly, in the case of the VP and sub-VP denoising diffusions, this expression reads

L^{DSM}(θ) = (1/(2n)) Σ_{i=1}^n ∫_0^1 ∫_{R^d} λ_0(t) ∥K(x; θ) + x_i − x_i e^{−τ(t)/2}∥² f_t^i(x) dx dt,    (78)

with the appropriate covariances defined in Sections 4.1 and 4.2, respectively. The analogous expres-
sion for the VE denoising SDE reads

L^{DSM}(θ) = (1/(2n)) Σ_{i=1}^n ∫_0^1 ∫_{R^d} λ_0(t) ∥K(x; θ) + x_i∥² f_t^i(x) dx dt,    (79)

with the covariance defined in Section 4.3.

5.2 Evaluation of the DSM objective function


The high-dimensional integrals in (77)-(79) require an efficient numerical evaluation method. Integration
over t is carried out accurately using Simpson's rule. In fact, a small number of subdivisions of the
interval [0, 1] (such as 10 or even fewer) is generally sufficient to get an accurate value of the
integral. Integration over x can, in principle, be carried out via Monte Carlo simulations. This method
requires very significant computing resources even if the dimensionality d of the problem is moderate,
and is impractical for large values of d (such as d > 100). However, owing to the specific form of
the objective function, the d-dimensional integration can be reduced to one- and two-dimensional
Gaussian integrals which can be efficiently computed using Gauss-Hermite quadrature.
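For the t-integration, a handful of Simpson nodes suffices; a sketch (the integrand shown is a stand-in for the bracketed t-dependent terms of (80) below):

```python
import numpy as np
from scipy.integrate import simpson

integrand = lambda t: np.exp(-t)            # placeholder for a term of (80)
t_grid = np.linspace(0.0, 1.0, 9)           # 8 subintervals, as in Section 7
values = np.array([integrand(t) for t in t_grid])
total = simpson(values, x=t_grid)           # here close to 1 - exp(-1)
```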
We start by expressing L^{DSM}(θ) as a sum:

L^{DSM}(θ) = (1/(2n)) Σ_{i=1}^n ∫_0^1 λ_0(t) ∫_{R^d} ∥K(x; θ) + x_i − µ(t, x_i)∥² f_t^i(x) dx dt

  = (1/(2n)) Σ_{i=1}^n Σ_{k=1}^d ∫_0^1 λ_0(t) (x_i^k − µ(t, x_i)^k + d^k)² dt

  + (1/n) Σ_{i=1}^n Σ_{j=1}^h Σ_{k=1}^d c^{jk} ∫_0^1 λ_0(t) (x_i^k − µ(t, x_i)^k + d^k) ∫_{R^d} a(⟨w_j|x⟩ + b_j) f_t^i(x) dx dt

  + (1/(2n)) Σ_{i=1}^n Σ_{j,k=1}^h c^{ij} c^{ik} ∫_0^1 λ_0(t) ∫_{R^d} a(⟨w_j|x⟩ + b_j) a(⟨w_k|x⟩ + b_k) f_t^i(x) dx dt,

where f_t^i(x) is given by (75), and where w_j, w_k ∈ R^d denote the j-th and k-th rows of the matrix w,
respectively. Notice that, with probability one, w_j and w_k are linearly independent for j ≠ k. In
order to organize the computation, we rewrite this expression as follows:
L^{DSM}(θ) = (1/(2n)) Σ_{i=1}^n Σ_{k=1}^d ∫_0^1 λ_0(t) (x_i^k − µ(t, x_i)^k + d^k)² dt

  + (1/n) Σ_{i=1}^n Σ_{j=1}^h Σ_{k=1}^d c^{jk} ∫_0^1 λ_0(t) (x_i^k − µ(t, x_i)^k + d^k) I_1^i(w_j, b_j, t) dt    (80)

  + (1/(2n)) Σ_{i=1}^n Σ_{j,k=1}^h c^{ij} c^{ik} ∫_0^1 λ_0(t) I_2^i(w_j, w_k, b_j, b_k, t) dt,

where

I_1^i(w, b, t) = ∫_{R^d} a( ⟨w|x⟩ + b ) f_t^i(x) dx,    (81)

and

I_2^i(w_1, w_2, b_1, b_2, t) = ∫_{R^d} a( ⟨w_1|x⟩ + b_1 ) a( ⟨w_2|x⟩ + b_2 ) f_t^i(x) dx,    (82)

for linearly independent w_1 and w_2. Changing variables, we express I_1 and I_2 as integrals with
respect to the standard d-dimensional Gaussian measure:

I_1^i(w, b, t) = (2π)^{−d/2} ∫_{R^d} a( ⟨w|√C(t) x + µ(t, x_i)⟩ + b ) e^{−⟨x|x⟩/2} dx,    (83)

and

I_2^i(w_1, w_2, b_1, b_2, t) = (2π)^{−d/2} ∫_{R^d} a( ⟨w_1|√C(t) x + µ(t, x_i)⟩ + b_1 )
    × a( ⟨w_2|√C(t) x + µ(t, x_i)⟩ + b_2 ) e^{−⟨x|x⟩/2} dx,    (84)

respectively.
The d-dimensional integrals I_1 and I_2 are actually highly numerically tractable and can be reduced
to one- and two-dimensional Gaussian integrals, respectively. In order to see this, we set

E = w/∥w∥,    (85)

and rewrite I_1^i(w, b, t) as

I_1^i(w, b, t) = (2π)^{−d/2} ∫_{R^d} a( ∥w∥ √C(t) ⟨E|x⟩ + ⟨w|µ(t, x_i)⟩ + b ) e^{−⟨x|x⟩/2} dx.    (86)

Let y = ⟨E|x⟩ and Γ = I − |E⟩⟨E|. Then

I_1^i(w, b, t) = (2π)^{−d/2} ∫_{R^d} a( ∥w∥ √C(t) y + ⟨w|µ(t, x_i)⟩ + b ) e^{−y²/2 − ⟨x|Γx⟩/2} dy d^{d−1}x
             = (2π)^{−1/2} ∫_R a( ∥w∥ √C(t) y + ⟨w|µ(t, x_i)⟩ + b ) e^{−y²/2} dy,    (87)

as det(Γ) = 1. This integral can be efficiently evaluated using the Gauss-Hermite quadrature algo-
rithm,

I_1^i(w, b, t) ≈ Σ_{p=1}^D w_p a( ∥w∥ √(2C(t)) z_p + ⟨w|µ(t, x_i)⟩ + b ).    (88)

Here z_p, p = 1, . . . , D, are the zeros of the Hermite polynomial of degree D, and w_p are the appropri-
ate weights. Even low values of D, such as D = 4, are sufficient to calculate the integral accurately.
Note the presence of √2 above, arising from the weight function in the integral being e^{−y²/2} rather
than e^{−y²}.
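A direct implementation of (88) (a sketch; it assumes, as in the experiments of Section 7 where all b_i are equal, that C(t) is a scalar multiple of the identity):

```python
import numpy as np

def I1(w_vec, b, mu_i, C_t, D=4):
    """Gauss-Hermite approximation (88) of the one-dimensional integral (87).
    w_vec: a row w_j of the weight matrix, shape (d,); mu_i = mu(t, x_i);
    C_t: scalar diagonal entry of C(t). Softplus activation (73)."""
    z, wts = np.polynomial.hermite.hermgauss(D)   # nodes/weights for e^{-z^2}
    arg = np.linalg.norm(w_vec) * np.sqrt(2.0 * C_t) * z + w_vec @ mu_i + b
    a = np.maximum(arg, 0.0) + np.log1p(np.exp(-np.abs(arg)))  # softplus
    # hermgauss weights integrate against e^{-z^2}; dividing by sqrt(pi)
    # converts to the standard Gaussian measure appearing in (87)
    return (wts @ a) / np.sqrt(np.pi)
```

The two-dimensional integrals I_2^i below are evaluated analogously with a tensor product of the same nodes.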
Similarly, let us define the normalized vectors

E_i = w_i/∥w_i∥,    (89)

for i = 1, 2. We use the Gram-Schmidt algorithm to orthonormalize these vectors, which yields

E_1′ = E_1,
E_2′ = ( E_2 − ⟨E_2|E_1⟩ E_1 ) / (1 − ⟨E_2|E_1⟩²)^{1/2}.    (90)

Let y_i = ⟨E_i′|x⟩, for i = 1, 2, and Γ = I − |E_1′⟩⟨E_1′| − |E_2′⟩⟨E_2′|. We also let

c(w_1, w_2) = ⟨w_1|w_2⟩ / ( ∥w_1∥ ∥w_2∥ ),    (91)

and

s(w_1, w_2) = √( 1 − ⟨w_1|w_2⟩² / ( ∥w_1∥² ∥w_2∥² ) )    (92)

denote the cosine and (the absolute value of the) sine of the angle between w_1 and w_2, respectively.
Then

I_2^i(w_1, w_2, b_1, b_2, t)
  = (2π)^{−d/2} ∫_{R^d} a( ∥w_1∥ √C(t) y_1 + ⟨w_1|µ(t, x_i)⟩ + b_1 )
      × a( ∥w_2∥ √C(t) (s(w_1, w_2) y_2 + c(w_1, w_2) y_1) + ⟨w_2|µ(t, x_i)⟩ + b_2 ) e^{−(y_1² + y_2²)/2 − ⟨x|Γx⟩/2} d²y d^{d−2}x
  = (2π)^{−1} ∫_{R²} a( ∥w_1∥ √C(t) y_1 + ⟨w_1|µ(t, x_i)⟩ + b_1 )
      × a( ∥w_2∥ √C(t) (s(w_1, w_2) y_2 + c(w_1, w_2) y_1) + ⟨w_2|µ(t, x_i)⟩ + b_2 ) e^{−(y_1² + y_2²)/2} d²y,

as det(Γ) = 1. This two-dimensional Gaussian integral can be efficiently evaluated using the Gauss-
Hermite quadrature algorithm,

I_2^i(w_1, w_2, b_1, b_2, t) ≈ Σ_{1≤p,q≤D} w_p w_q a( ∥w_1∥ √(2C(t)) z_p + ⟨w_1|µ(t, x_i)⟩ + b_1 )
      × a( ∥w_2∥ √(2C(t)) (s(w_1, w_2) z_q + c(w_1, w_2) z_p) + ⟨w_2|µ(t, x_i)⟩ + b_2 ),

with a low value of D.
Finally, as explained at the beginning of this section, we perform time integration using Simp-
son’s method. As a result of these calculations, we obtain an expression for (80) that is suitable for
implementation in TensorFlow, enabling fast and accurate computations.

6 Synthetic markets
The process of generating synthetic markets involves several tasks:
(i) Training of the fitted score function, as discussed in Section 5.
(ii) Encoding and decoding of the training (historically observed) data.
(iii) Testing whether the generated data follow the same probability distribution as the training data.
This step is crucial, as financial data are often difficult to visualize.
In this section we focus on tasks (ii) and (iii).

6.1 Euler-Maruyama scheme for the forward and reverse-time DSDE


The process of generating a synthetic market scenario involves two runs of the denoising SDE: a for-
ward run and a reverse-time run. Both runs effectively draw a sample from the probability distribution
associated with a DSDE. This is achieved using the Euler-Maruyama numerical scheme [11], which
simulates approximate sample paths of the SDE.
Forward path generation. To generate a forward path for the linear SDE (20) starting at X_init, we
divide the interval [0, 1] into K equal subintervals of length δ = 1/K using the points t_j = j/K,
where j = 0, . . . , K, and denote by X̂_{t_j} the (approximate) value of X_t at t = t_j.

1. We start at the initial time t = t_0 = 0, and set

X̂_{t_0} = X_init.    (93)

2. Next, for j = 0, 1, . . . , K − 1, simulate increments of a d-dimensional Brownian motion
dŴ_j ∼ N(0, δI_d), and set

X̂_{t_{j+1}} = X̂_{t_j} + α(t_j, X̂_{t_j}) δ + σ(t_j) dŴ_j.    (94)

Reverse-time path generation. To simulate sample paths of the reverse-time SDE (35) starting at
X_term, an appropriate score function s(t, x) is required. In practice, we utilize the fitted score function
(71), defining ŝ(t, x) = s(t, x; θ).
As with forward path generation, we divide the interval [0, 1] into K equal subintervals of
length δ = 1/K, denoting the division points as t_j = j/K, where j = 0, . . . , K. Let X̂_{t_j} represent
the (approximate) value of X_t at t = t_j. The procedure is as follows:

1. We start at the terminal time t = t_K = 1, and set

X̂_{t_K} = X_term.    (95)

2. Next, for j = K, K − 1, . . . , 1, simulate increments of a d-dimensional Brownian motion
dŴ_j ∼ N(0, δI_d). Then, update using the following equations:

dW̄̂_j = dŴ_j + σ(t_j) ŝ(t_j, X̂_{t_j}) δ,
X̂_{t_{j−1}} = X̂_{t_j} + ( α(t_j, X̂_{t_j}) − σ(t_j)² ŝ(t_j, X̂_{t_j}) ) δ + σ(t_j) dW̄̂_j.    (96)

Note: The Brownian motion used for reverse-time path generation should be freshly simulated and
must not reuse the Brownian motion from forward path generation. Reusing it would merely recover
the initial point, without the advantage of introducing noise.
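A compact sketch of both runs for a DSDE with state-independent σ(t). The reverse-time update below is written in the form standard in the score-based generative modeling literature, which corresponds to (96) once the backward time step is accounted for; coefficients and the score are passed in as callables (a sketch, not the authors' exact implementation):

```python
import numpy as np

def forward_path(x_init, alpha, sigma, K=100, rng=None):
    """Forward Euler-Maruyama run (93)-(94) of the DSDE (20) on [0, 1]."""
    rng = rng or np.random.default_rng()
    delta = 1.0 / K
    x = np.array(x_init, dtype=float)
    for j in range(K):
        t = j * delta
        dW = np.sqrt(delta) * rng.standard_normal(x.shape)
        x = x + alpha(t, x) * delta + sigma(t) * dW
    return x

def reverse_path(x_term, alpha, sigma, score, K=100, rng=None):
    """Reverse-time Euler-Maruyama run of (35) from t = 1 down to t = 0,
    driven by freshly simulated Brownian increments, per the note above."""
    rng = rng or np.random.default_rng()
    delta = 1.0 / K
    x = np.array(x_term, dtype=float)
    for j in range(K, 0, -1):
        t = j * delta
        dW = np.sqrt(delta) * rng.standard_normal(x.shape)   # fresh noise
        x = x - (alpha(t, x) - sigma(t)**2 * score(t, x)) * delta + sigma(t) * dW
    return x
```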

6.2 Generating synthetic market scenarios


Given a training set of market data X representing n observations of returns on d assets, we aim at
generating m synthetic returns on these assets, where the choice of m is unrelated to n. We proceed
as follows:

A. Train the fitted score function (71) on the dataset X , as explained in Section 5.

B. Generate synthetic samples. For k = 1, . . . , m perform the following steps (a code sketch follows
the list):

(i) Randomly select an index i_k ∈ {1, . . . , n}.

(ii) Set X_init^k = x_{i_k}, and generate a forward path starting at X_init^k. Let X̂_1^k denote the terminal
value of this path.

(iii) Set X_term^k = X̂_1^k, and generate a reverse-time path starting at X_term^k. Let x̂_k denote the
initial (t = 0) value of this path.

C. The set X̂ = {x̂_1, . . . , x̂_m} is the desired set of m synthetic market scenarios.
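A minimal end-to-end sketch of steps B(i)-(iii), reusing forward_path and reverse_path from the sketch in Section 6.1:

```python
import numpy as np

def generate_scenarios(X, alpha, sigma, score, m, K=100, rng=None):
    """Generate m synthetic scenarios from the (n, d) data matrix X.
    score is the trained fitted score function s(t, x; theta)."""
    rng = rng or np.random.default_rng()
    n = X.shape[0]
    scenarios = []
    for _ in range(m):
        x_init = X[rng.integers(n)]                              # (i) random index
        x_term = forward_path(x_init, alpha, sigma, K, rng)      # (ii) encode
        scenarios.append(reverse_path(x_term, alpha, sigma, score, K, rng))  # (iii) decode
    return np.stack(scenarios)
```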

It remains to verify that X̂ is drawn from the same distribution as the original market data X .
Multivariate tests for assessing the goodness of fit between two empirical distributions - see, for
example, [12] - are notoriously challenging and difficult to implement. In a forthcoming paper [16],
we propose a practical methodology that we believe is well-suited for situations like the one addressed
in this study.
In the meantime, we adapt the univariate two-sample Cramér-von Mises (CvM) criterion [2] to
our context. To this end, we consider a specified portfolio of assets A^1, . . . , A^d by choosing their
weights g = (g_1, . . . , g_d), where g_i ≥ 0, for all i, and Σ_{i=1}^d g_i = 1. Let

p_i = ⟨g|x_i⟩, for i = 1, . . . , n,
q̂_k = ⟨g|x̂_k⟩, for k = 1, . . . , m,    (97)

denote the returns on this portfolio under the historical and synthetic scenarios, respectively. Let
P = {p_1, . . . , p_n} and Q̂ = {q̂_1, . . . , q̂_m} denote the two datasets of portfolio returns.
The two-sample CvM criterion tests the hypothesis that the two samples P and Q̂ are drawn from
the same distribution. This is done using the test statistic T_CvM [2], whose value depends, of course,
on the choice of weights g. The null hypothesis, which posits that the two samples come from the
same distribution (i.e. T_CvM = 0), can be rejected at a desired level of confidence.
While no choice of the portfolio weights g is a priori preferred, numerical simulations [16] sug-
gest that an equally weighted portfolio g = (1/d, . . . , 1/d) often lies near the maximizer (i.e. the
worst case scenario) of T_CvM. We use the corresponding p-value of the CvM test, denoted in the
following by p_CvM, as a provisional metric for the significance level of the multivariate test assessing
whether the datasets X and X̂ are drawn from the same distribution.
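In practice the test reduces to a few lines; a sketch using SciPy's two-sample CvM implementation (available as scipy.stats.cramervonmises_2samp in SciPy 1.7 and later):

```python
import numpy as np
from scipy.stats import cramervonmises_2samp

def portfolio_cvm_pvalue(X_hist, X_synth, g=None):
    """p-value of the two-sample CvM test on the portfolio returns (97).
    X_hist: (n, d) historical returns; X_synth: (m, d) synthetic returns;
    g defaults to the equally weighted portfolio (1/d, ..., 1/d)."""
    d = X_hist.shape[1]
    g = np.full(d, 1.0 / d) if g is None else np.asarray(g)
    p = X_hist @ g                      # historical portfolio returns
    q_hat = X_synth @ g                 # synthetic portfolio returns
    return cramervonmises_2samp(p, q_hat).pvalue
```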

7 Experiments
For our experiments, we downloaded daily closing prices from Bloomberg⁵ for 33 large U.S. eq-
uities over the five-year period from November 13, 2019, to November 13, 2024.⁶ This period
notably encompasses the economic distress caused by the COVID-19 pandemic, which introduced
significant volatility to equity markets. Testing the model's ability to accurately simulate under these
non-stationary conditions serves as a critical assessment of its validity.
We selected various data windows, each n = 256 days in length - approximately equivalent
to the number of trading days in a year. For each window, we trained the diffusion model on the
corresponding data and generated m = 1,024 synthetic market scenarios.
The model was configured as follows: the VP DSDE was selected with instantaneous volatility
functions defined by (36), using parameters a = 0 and bi = 0.1 for all i. Gauss-Hermite integration
was performed with an order of D = 4, and Simpson integration was performed using 8 subintervals.
The function λ0 (t) in (76) was set to λ0 (t) = 1, and the number of hidden neurons was configured as
h = 16. Training was conducted using the Adam optimizer.
To evaluate the model’s performance for each run, we recorded the following metrics:

(i) The p-value p_CvM of the CvM statistic of the equally weighted portfolio of all selected assets.
⁵ The Bloomberg closing prices are adjusted for stock splits but not for dividends. We believe this does not impact our
simulation methodology.
⁶ AAPL, AXP, BA, CAT, COST, CSCO, CVX, DD, DIS, GE, GS, HD, IBM, INTC, JNJ, JPM, KO, LLY, MCD, MMM,
MRK, MSFT, NKE, NVDA, PFE, PG, TRV, TSLA, UNH, V, VZ, WMT, XOM.

(ii) The condition number κ_synth of the covariance matrix of the synthetic market data compared to
that of the historical data, κ_hist.

(iii) The Q-Q plot comparing the synthetic data to the historical market data for visual assessment.

Experiment 1. We use days 1 through 256 of the data set. Historically, this period covered the Covid
pandemic, highly elevated market volatility, and aggressive rate cuts by the Federal Reserve.

p_CvM = 1.00,
κ_hist = 109.64,
κ_synth = 53.46.    (98)

Figure 1: Histograms of historical (left) and synthetic (right) returns in Experiment 1.

Figure 2: Q-Q plot of the historical versus synthetic returns in Experiment 1.

Experiment 2. We use days 250 through 506 of the data set. This period was marked by historically
low interest rates and low volatility of equities.

p_CvM = 0.64,
κ_hist = 130.37,
κ_synth = 70.63.    (99)

Figure 3: Histograms of historical (left) and synthetic (right) returns in Experiment 2.

Figure 4: Q-Q plot of the historical versus synthetic returns in Experiment 2.

Experiment 3. We use days 500 through 756 of the data set. This period was marked by rising
inflation, elevated market volatility, and interest rate hikes implemented by the Federal Reserve.

p_CvM = 1.00,
κ_hist = 208.17,
κ_synth = 124.40.    (100)

Figure 5: Histograms of historical (left) and synthetic (right) returns in Experiment 3.

Figure 6: Q-Q plot of the historical versus synthetic returns in Experiment 3.

Experiment 4. We use days 750 through 1006 of the data set. During this period, the Federal Reserve
concluded its rate-hiking program.

p_CvM = 0.83,
κ_hist = 95.82,
κ_synth = 51.05.    (101)

Figure 7: Histograms of historical (left) and synthetic (right) returns in Experiment 4.

Figure 8: Q-Q plot of the historical versus synthetic returns in Experiment 4.

Experiment 5. We use days 1000 through 1256 of the data set. This period was characterized by
occasional spikes in market volatility driven by the presidential election.

p_CvM = 0.98,
κ_hist = 438.51,
κ_synth = 282.80.    (102)

Figure 9: Histograms of historical (left) and synthetic (right) returns in Experiment 5.

Figure 10: Q-Q plot of the historical versus synthetic returns in Experiment 5.

Experiment 6. In our final experiment we use days 1 through 1248 of the data set, which covers the
entire five-year period.⁷ In order to accommodate a larger data set, for this experiment we increased the
number of hidden neurons to h = 32.

p_CvM = 0.99,
κ_hist = 130.56,
κ_synth = 85.03.    (103)

⁷ We use 1248 (rather than 1258) for convenience, as 1248 is divisible by 32, our batch size parameter.

Figure 11: Histograms of historical (left) and synthetic (right) returns in Experiment 6.

Figure 12: Q-Q plot of the historical versus synthetic returns in Experiment 6.

It is worth noting that the period used for our experiments was characterized by rapidly shifting
economic regimes, including market volatility from the pandemic, falling interest rates, subsequent
inflation, rising rates, and political uncertainty surrounding the presidential election. Despite these
challenges, the model consistently demonstrated strong performance across the different time win-
dows.
The model can generate an arbitrary number of synthetic data points without compromising their
tail behavior or the CvM p-value. Moreover, the condition number of the estimated covariance matrix
decreases as the sample size grows. For instance, in Experiment 1, where 16,384 synthetic scenarios
were generated, the covariance matrix condition number was κ_synth = 47.33, compared to κ_synth =
53.46 for 1,024 synthetic scenarios.
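The condition numbers reported above come directly from the sample covariance matrices; a sketch (with random placeholder data, purely to make the snippet runnable):

```python
import numpy as np

rng = np.random.default_rng(0)
X_hist = rng.standard_normal((256, 33))       # stand-in for historical returns
X_synth = rng.standard_normal((16_384, 33))   # stand-in for synthetic returns

kappa_hist = np.linalg.cond(np.cov(X_hist, rowvar=False))
kappa_synth = np.linalg.cond(np.cov(X_synth, rowvar=False))
print(kappa_hist, kappa_synth)   # larger samples typically yield smaller kappa
```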

References
[1] Anderson, B. D. O.: Reverse-time diffusion equation models, Stochastic Processes and their
Applications, 12, 313 - 326 (1982).

[2] Anderson, T. W.: On the distribution of the two-sample Cramér-von Mises criterion, The Annals
of Mathematical Statistics, 33, 1148 - 1159 (1962).

[3] Johansson, K., Ogut, N. G., Pelger, M., Schmelzer, T., and Boyd, S.: A Simple Method
for Predicting Covariance Matrices of Financial Returns, Foundations and Trends in Economet-
rics, 12, 324 - 407 (2023).

[4] Fan, J., Li, R., Zhang, C.-H., and Zou, H.: Statistical Foundations of Data Science, CRC Press
(2020).

[5] Haussmann, U. G., and Pardoux, E.: Time Reversal of Diffusions, The Annals of Probability,
14, 1188 - 1205 (1986).

[6] Ho, J., Jain, A., and Abbeel, P.: Denoising diffusion probabilistic models, Advances in Neural
Information Processing Systems, 33, 6840 - 6851 (2020).

[7] Hyvärinen, A.: Estimation of non-normalized statistical models by score matching, Journal of
Machine Learning Research, 6, 695 - 709 (2005).

[8] Ikeda, N., and Watanabe, S.: Stochastic Differential Equations and Diffusion Processes, North-
Holland (1989).

[9] Jolicoeur-Martineau, A., Li, K., Piche-Taillefer, R., Kachman, T., and Mitliagkas, I.: Gotta Go
Fast When Generating Data with Score-Based Models, arXiv:2105.14080v1 (2021).

[10] Jordon, J., Houssiau, F., Cherubin, G., Cohen, S. N., Szpruch, L., Bottarelli, M., Maple, C., and
Weller, A.: Synthetic Data - what, why and how?, arXiv:2205.03257 (2022).

[11] van Kampen, N. G.: Stochastic Processes in Physics and Chemistry, North Holland (1992).

[12] Kim, I., Balakrishnan, S., and Wasserman, L.: Robust Multivariate Nonparametric Tests via
Projection-Averaging, Annals of Statistics, 48, 3417 - 3441 (2020).

[13] Kloeden, P. E., and Platen, E.: Numerical Solution of Stochastic Differential Equations, Springer
(1992).

[14] Kondratyev, A. and Schwarz, C.: The market generator, SSRN 3384948 (2020).

[15] Ledoit, O., and Wolf, M.: A well-conditioned estimator for large-dimensional covariance matri-
ces, Journal of Multivariate Analysis, 88, 365 – 411 (2004).

[16] Lesniewski, A., and Trigila, G.: Projected Cramer - von Mises test, in preparation.

[17] Oksendal, B.: Stochastic Differential Equations: An Introduction with Applications, Springer
(2014).

[18] Risken, H.: The Fokker-Planck Equation, Springer (1989).



[19] Sohl-Dickstein, J., Weiss, E. A., Maheswaranathan, N., and Ganguli, S.: Deep Unsupervised
Learning using Nonequilibrium Thermodynamics (2015).

[20] Song, Y., and Ermon, S.: Generative Modeling by Estimating Gradients of the Data Distribution,
NeurIPS Proceedings, 32 (2019).

[21] Song, Y., and Ermon, S.: Improved Techniques for Training Score-Based Generative Models,
NeurIPS Proceedings, 33 (2020).

[22] Song, Y., Sohl-Dickstein, J., Kingma, D. P., Kumar, A., Ermon, S., and Poole, B.: Score-Based
Generative Modeling through Stochastic Differential Equations (2021).

[23] Vincent, P.: A connection between score matching and denoising autoencoders, Neural compu-
tation, 23, 1661 – 1674 (2011).

[24] Wiese, M., Knobloch, R., Korn, R., and Kretschmer, P.: Quant GANs: Deep Generation of
Financial Time Series, Quantitative Finance, 20, 1419 - 1440 (2020).

[25] Yang, L., Zhang, Z., Song, Y., Hong, S., Xu, R., Zhao, Y., Zhang, W., Cui, B., and
Yang, M.-H.: Diffusion Models: A Comprehensive Survey of Methods and Applications,
arXiv:2209.00796v11 (2023).
