Generative Adversarial Networks (GANs)
GANs – General Idea
Discriminative Algorithms
• Discriminative algorithms try to classify input data; that is, given the
features of a data instance, they predict a label or category to which
that data belongs.
• Example: Given all the words in an email, a discriminative algorithm
could predict whether the message is spam or not_spam.
• Spam is one of the labels, and the bag of words gathered from the email is the feature set that constitutes the input data.
• Mathematically, for a label y and features x, the formulation p(y|x) means “the probability of y given x”, which in this case translates to “the probability that an email is spam given the words it contains.”
Generative Algorithms
• Discriminative algorithms map features to labels.
• Generative algorithms do the opposite.
• Instead of predicting a label given certain features, they attempt to
predict features given a certain label.
• Example: a generative model would try to answer questions like: assuming this email is spam, how likely are these features?
• While discriminative models care about the relation between y and x,
generative models care about “how you get x.”
• They allow us to capture p(x|y), the probability of x given y, i.e. the probability of the features given a class.
Generative vs. Discriminative Algorithms
• Discriminative models learn the boundary between classes.
• Generative models model the distribution of the individual classes (a small sketch contrasting the two follows below).
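To make the contrast concrete, here is a minimal sketch using scikit-learn: logistic regression is a discriminative classifier that estimates p(y|x) directly, while Gaussian naive Bayes is a generative classifier that fits p(x|y) for each class and applies Bayes' rule. The toy dataset and all settings are illustrative assumptions.

# Illustrative sketch: discriminative vs. generative classification on toy data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression  # discriminative: models p(y|x) directly
from sklearn.naive_bayes import GaussianNB            # generative: models p(x|y) and p(y)

X, y = make_classification(n_samples=500, n_features=5, random_state=0)

# Discriminative model: learns the decision boundary between the classes.
disc = LogisticRegression().fit(X, y)

# Generative model: fits a Gaussian to the features of each class and then
# classifies via Bayes' rule, p(y|x) proportional to p(x|y) * p(y).
gen = GaussianNB().fit(X, y)

print(disc.predict_proba(X[:1]))  # p(y|x), estimated directly
print(gen.predict_proba(X[:1]))   # p(y|x), derived from p(x|y) and p(y)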
GANs - Applications
• Generative models allow a computer to create data — like photos,
movies or music — by itself
• Generative adversarial networks (GANs) consist of a clever combination of two deep neural networks that compete with each other (hence the “adversarial”).
• GANs were introduced by Ian Goodfellow et al. in 2014.
• GANs can learn to mimic any distribution of data.
• GANs can be taught to create worlds eerily similar to our own in any
domain: images, music, speech, prose.
https://skymind.ai/wiki/generative-adversarial-network-gan
GANs
• Images generated using GANs
Face Aging
GANs - Applications
• Work of Art
In a surreal turn, Christie’s sold a portrait for $432,000 that had been generated by a GAN based on open-source code.
GANs - Applications
How GANs Work
• We create two deep neural networks.
• Then we make them fight against each other, endlessly attempting to
out-do one another.
• In the process, they both become stronger.
• One neural network, called the generator, generates new data
instances, while the other, the discriminator, evaluates them for
authenticity; i.e. the discriminator decides whether each instance of
data it reviews belongs to the actual training dataset or not.
How GANs Work
• The generator takes in random numbers and returns an image.
• This generated image is fed into the discriminator alongside a stream
of images taken from the actual dataset.
• The discriminator takes in both real and fake images and returns
probabilities, a number between 0 and 1, with 1 representing a
prediction of authenticity and 0 representing fake.
• The discriminator is in a feedback loop with the ground truth of the
images, which we know.
• The generator is in a feedback loop with the discriminator.
How GANs Work
Discriminator Model
Generator Model
• The generator is a brand new counterfeiter who is just learning how
to create fake money.
• For this second neural network, we’ll reverse the layers in a normal
ConvNet so that everything runs backwards.
• So instead of taking in a picture and outputting a value, it takes in a
list of values and outputs a picture.
How GANs Work
• Now we have a police officer (the Discriminator) looking for fake
money and a counterfeiter (the Generator) that’s printing fake
money. Let’s make them battle!
• In the first round, the Generator will create pathetic forgeries that
barely resemble money at all because it knows absolutely nothing
about what money is supposed to look like:
How GANs Work
How GANs Work
• Now we start Round 2. We tell the Generator that its money images are suddenly getting rejected as fake, so it needs to step up its game. We also tell it that the Discriminator is now looking for faces, so the best way to confuse the Discriminator is to put a face on the bill:
GANs - Training
• The Discriminator’s weights are updated so as to maximize the probability that any real data input x is classified as belonging to the real dataset, while minimizing the probability that any fake image is classified as belonging to the real dataset.
• Furthermore, the Generator is trained to fool the Discriminator by generating data as realistic as possible, which means that the Generator’s weights are optimized to maximize the probability that any fake image is classified as belonging to the real dataset (this objective is formalized below).
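Formally, this two-player game is the minimax objective from Goodfellow et al. (2014):

min_G max_D V(D, G) = E_{x ~ p_data(x)}[ log D(x) ] + E_{z ~ p_z(z)}[ log(1 - D(G(z))) ]

In practice the Generator is usually trained with the equivalent non-saturating variant, maximizing log D(G(z)), which is exactly the “label the fakes as real” idea described above.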
Implementation
• Discriminator DCGAN
Implementation
• Generator DCGAN
Implementation
• The adversarial model is simply the generator with its output connected to the input of the discriminator.
• In the training process, the Generator labels its fake image output with 1.0, trying to fool the Discriminator.
Implementation
• The Discriminator model is trained to distinguish real from fake handwritten images.
Loading Data
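A minimal sketch of this step, assuming the walkthrough uses the MNIST handwritten digits shipped with Keras; scaling to [0, 1] is an assumption that matches the sigmoid output used in the generator sketch below.

# Illustrative sketch: load MNIST and prepare it for the DCGAN walkthrough.
import numpy as np
from tensorflow.keras.datasets import mnist

(X_train, _), (_, _) = mnist.load_data()   # labels are not needed for a GAN

# Scale pixels to [0, 1] and add a channel axis: shape becomes (60000, 28, 28, 1).
X_train = X_train.astype("float32") / 255.0
X_train = np.expand_dims(X_train, axis=-1)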
Building Generator
• Typically when we build a CNN, we start with an image that is very tall and wide and use convolutional layers to get a tensor that’s very deep but less tall and wide.
• Here we will do the opposite.
• We’ll use a dense layer and a reshape to start with a 7 x 7 x
128 tensor and then, after doubling it twice, we’ll be left with a 28 x
28 tensor.
• Since we need a grayscale image, we can use a convolutional layer with a single filter to get a 28 x 28 x 1 output (sketched below).
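A sketch of such a generator in Keras, following the sizes above; the latent dimension, filter counts, activations, and use of UpSampling2D are assumptions rather than details given here.

# Illustrative sketch: generator that grows a noise vector into a 28 x 28 x 1 image.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Reshape, UpSampling2D, Conv2D

latent_dim = 100  # size of the random input vector z (assumed)

generator = Sequential([
    # Dense + reshape: turn the flat noise vector into a 7 x 7 x 128 tensor.
    Dense(7 * 7 * 128, activation="relu", input_dim=latent_dim),
    Reshape((7, 7, 128)),

    # Double the spatial size twice: 7x7 -> 14x14 -> 28x28.
    UpSampling2D(),
    Conv2D(64, kernel_size=5, padding="same", activation="relu"),
    UpSampling2D(),
    Conv2D(32, kernel_size=5, padding="same", activation="relu"),

    # A single-filter convolution produces the 28 x 28 x 1 grayscale image.
    Conv2D(1, kernel_size=5, padding="same", activation="sigmoid"),
])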
Building Discriminator
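A hedged sketch of a typical DCGAN-style discriminator for 28 x 28 x 1 images; the filter counts, dropout rate, and optimizer settings are assumptions.

# Illustrative sketch: discriminator that maps an image to a real/fake probability.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, LeakyReLU, Dropout, Flatten, Dense

discriminator = Sequential([
    # Strided convolutions shrink the image while increasing depth: 28x28 -> 14x14 -> 7x7.
    Conv2D(64, kernel_size=5, strides=2, padding="same", input_shape=(28, 28, 1)),
    LeakyReLU(0.2),
    Dropout(0.3),
    Conv2D(128, kernel_size=5, strides=2, padding="same"),
    LeakyReLU(0.2),
    Dropout(0.3),

    # A single sigmoid unit: the probability that the input image is real.
    Flatten(),
    Dense(1, activation="sigmoid"),
])
discriminator.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])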
Stacking Models
For this model we will not update the weights of the discriminator during backpropagation. We will freeze those weights and only update the generator weights through the stack; the discriminator will be trained separately (a sketch follows).
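A minimal sketch of the stack, assuming the generator and discriminator from the earlier sketches; in Keras, setting trainable = False before compiling the combined model keeps the discriminator’s weights fixed when backpropagating through the stack, while the separately compiled discriminator remains trainable on its own.

# Illustrative sketch: stacked (adversarial) model = generator -> discriminator.
from tensorflow.keras.models import Sequential

discriminator.trainable = False              # frozen only inside the stacked model
gan = Sequential([generator, discriminator])
gan.compile(optimizer="adam", loss="binary_crossentropy")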
Training Loop
This step generates a matrix of noise vectors called z and sends it to the generator, which returns a set of generated images that we call fake images.
We will use these to train the discriminator, so the labels we want to use are 0s, indicating that these are in fact generated images (see the sketch below).
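A sketch of that step, assuming the generator and latent_dim from the earlier sketches and an assumed batch size.

# Illustrative sketch: generate the fake half of a training batch.
import numpy as np

batch_size = 32                                             # assumed batch size
z = np.random.normal(0, 1, size=(batch_size, latent_dim))   # matrix of noise vectors

fake_images = generator.predict(z)        # generated ("fake") images
fake_labels = np.zeros((batch_size, 1))   # 0 = generated, used to train the discriminator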
Updating the Discriminator
• To get our real images, we generate a random set of indices across X_train and use that slice of X_train as our real images.
• We use the discriminator’s train_on_batch() method. The train_on_batch() method does exactly one round of forward and backward propagation; every time we call it, it updates the model once from the model’s previous state.
• It is common practice to update for the real images and the fake images separately (see the sketch below).
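A sketch of the discriminator update, assuming X_train, discriminator, fake_images, fake_labels, and batch_size from the earlier sketches.

# Illustrative sketch: one discriminator update on real and fake images, done separately.
import numpy as np

# Random indices across X_train; that slice becomes the "real" half of the batch.
idx = np.random.randint(0, X_train.shape[0], size=batch_size)
real_images = X_train[idx]
real_labels = np.ones((batch_size, 1))

# train_on_batch() runs exactly one round of forward and backward propagation,
# updating the model once from its previous state.
d_loss_real = discriminator.train_on_batch(real_images, real_labels)
d_loss_fake = discriminator.train_on_batch(fake_images, fake_labels)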
Updating the Generator
• The Generator is updated indirectly by updating the combined stack (the adversarial network), as sketched below.
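A sketch of that step, assuming the stacked gan model, latent_dim, and batch_size from the earlier sketches; the noise batch is labeled 1 ("real") so that the frozen discriminator’s feedback pushes the generator toward images the discriminator accepts as real.

# Illustrative sketch: update the generator indirectly through the stacked model.
import numpy as np

z = np.random.normal(0, 1, size=(batch_size, latent_dim))
g_loss = gan.train_on_batch(z, np.ones((batch_size, 1)))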