Unit 4 Generative AI
Timeline of Generative AI
Generative AI has improved steadily over the years. Today, generative AI can perform many impressive tasks, such as writing text, generating images, and creating other new content. This capability is the result of a long process of research and refinement, and its results are now visible in widely used tools.
2015: Diffusion models were introduced, representing a novel approach to generative modelling. In the same year, Google open-sourced TensorFlow, a machine learning framework.
2021: OpenAI launched DALL-E, an AI system designed to generate images from textual descriptions.
2022: Two notable AI image-generating tools, the open-source Stable Diffusion and Midjourney, were introduced, and OpenAI released ChatGPT.
2023: OpenAI released GPT-4, an advanced version of its Generative Pre-trained Transformer series. Microsoft Copilot (previously Bing Chat), Google Gemini (previously Google Bard), Adobe Firefly, and Meta Llama were also introduced.
Answer the following questions:
Q1. What do you understand about Generative Artificial Intelligence? Give any two
examples.
Ans: Generative artificial intelligence (AI) refers to algorithms that generate new data resembling human-generated content, such as audio, code, images, text, simulations, and videos. This technology is trained on existing data and content, creating the potential for applications such as natural language processing, computer vision, and speech synthesis. Two examples are ChatGPT, which generates text, and DALL-E, which generates images from textual descriptions.
a) Generator Network: It creates new data samples that resemble the training data and tries to fool the Discriminator.
b) Discriminator Network: It analyses the data and provides feedback, i.e., it takes real data and the data generated by the Generator as input and attempts to distinguish between the two (a minimal code sketch follows the examples below).
Some of the examples of GANs are as follows:
It can create portraits of non-existent people.
It can convert images from day to night.
It can generate images based on textual description.
It can generate realistic video.
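To make the Generator/Discriminator interplay concrete, here is a minimal training-loop sketch. It assumes the PyTorch library is available; the toy Gaussian data, network sizes, and training settings are illustrative choices, not part of any standard recipe.

import torch
import torch.nn as nn

# Toy "real" data: samples from a 1-D Gaussian (mean 4.0, std 1.25).
def real_batch(n=64):
    return torch.randn(n, 1) * 1.25 + 4.0

# Generator: turns random noise into fake samples.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: outputs the probability that a sample is real.
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

bce = nn.BCELoss()
opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)

for step in range(2000):
    # Train the Discriminator: real samples -> 1, generated samples -> 0.
    real = real_batch()
    fake = G(torch.randn(64, 8)).detach()  # detach so G is not updated here
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_D.zero_grad()
    d_loss.backward()
    opt_D.step()

    # Train the Generator: try to make D classify its output as real.
    fake = G(torch.randn(64, 8))
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_G.zero_grad()
    g_loss.backward()
    opt_G.step()

# After training, generated samples should cluster near the real mean (4.0).
print("mean of generated samples:", G(torch.randn(1000, 8)).mean().item())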
3. Recurrent Neural Networks (RNNs): RNNs are a special class of neural networks that excel at handling sequential data, like music or text. They are well suited to tasks where the order of the data points matters, because they can remember previous inputs and use that information to influence the current output.
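To illustrate how "remembering previous inputs" works in practice, here is a minimal sketch, again assuming PyTorch: an RNN learns to predict the next point of a sine wave from the ten points before it. The window length, hidden size, and training settings are illustrative assumptions.

import torch
import torch.nn as nn

torch.manual_seed(0)
t = torch.linspace(0, 20, 200)
wave = torch.sin(t)

# Input: sliding windows of 10 points; target: the point that follows.
X = torch.stack([wave[i:i+10] for i in range(180)]).unsqueeze(-1)  # (180, 10, 1)
y = torch.stack([wave[i+10] for i in range(180)]).unsqueeze(-1)    # (180, 1)

class NextStepRNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.RNN(input_size=1, hidden_size=32, batch_first=True)
        self.head = nn.Linear(32, 1)
    def forward(self, x):
        out, _ = self.rnn(x)          # the hidden state carries past information
        return self.head(out[:, -1])  # predict from the last time step

model = NextStepRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for epoch in range(200):
    loss = nn.functional.mse_loss(model(X), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
print("final MSE:", loss.item())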
4. Autoencoders (AEs): These are neural networks trained to learn a compressed representation of data. They work by compressing the data into a lower-dimensional form (encoding) and then decompressing it back to its original form (decoding). This process helps the network learn the most important features of the data (see the sketch after the examples below).
Some of the examples of AEs are as follows:
It can help in cleaning up noisy images to produce clear and highly realistic samples.
It can help in compressing high-resolution images for efficient storage and transmission.
It can create artistic images based on learned features from famous paintings.
It can help in drug discovery by learning and generating molecular structures that have
desirable properties.
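Here is a minimal sketch of the encode/decode cycle, again assuming PyTorch. Random 64-dimensional vectors stand in for real data, an encoder compresses them to an 8-dimensional code, and a decoder reconstructs the input; all sizes and the data itself are illustrative assumptions.

import torch
import torch.nn as nn

torch.manual_seed(0)
data = torch.randn(256, 64)  # stand-in for real samples, e.g. image patches

# Encoder: 64 -> 8 dimensions (the compressed code).
encoder = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 8))
# Decoder: 8 -> 64 dimensions (reconstruction of the input).
decoder = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 64))

opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()),
                       lr=1e-3)
for epoch in range(500):
    code = encoder(data)                        # lower-dimensional representation
    recon = decoder(code)                       # decompress back to input space
    loss = nn.functional.mse_loss(recon, data)  # reconstruction loss
    opt.zero_grad()
    loss.backward()
    opt.step()
print("reconstruction MSE:", loss.item())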
Q5. How are Autoencoders and Variational Autoencoders similar to each other?
Ans: The similarities between Autoencoders and Variational Autoencoders are as follows:
i) Both AE and VAE are neural network architectures that are used for unsupervised
learning.
ii) Both AE and VAE consist of an encoder and a decoder network. The encoder maps the input data to a latent representation, and the decoder maps the latent representation back to the original data.
iii) Both AE and VAE can be used for tasks such as dimensionality reduction, data
generation, and anomaly detection.
The differences between AE and VAE are as follows:
i) Basic Function: An AE is a neural network model that learns to encode input data into a compressed representation and then decode it back to the original data; a VAE is similar but incorporates probabilistic elements to learn a latent-space representation of the input data.
ii) Reconstruction Loss: An AE minimizes the difference between the input data and its reconstructed output; a VAE uses the same reconstruction loss but also includes a regularizer (a KL-divergence term).
iii) Handling Overfitting: An AE can suffer from overfitting due to its fixed encoding structure; a VAE is less prone to overfitting due to its probabilistic nature.
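To make the loss-function difference concrete, here is a minimal VAE sketch, once more assuming PyTorch; the class name VAE, the helper vae_loss, the layer sizes, and the data are all illustrative assumptions. The loss is the AE-style reconstruction term plus the KL-divergence regularizer mentioned above.

import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, dim=64, latent=8):
        super().__init__()
        self.enc = nn.Linear(dim, 32)
        self.mu = nn.Linear(32, latent)      # mean of the latent distribution
        self.logvar = nn.Linear(32, latent)  # log-variance of the latent distribution
        self.dec = nn.Sequential(nn.Linear(latent, 32), nn.ReLU(),
                                 nn.Linear(32, dim))

    def forward(self, x):
        h = torch.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample z while keeping gradients flowing.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z), mu, logvar

def vae_loss(x, recon, mu, logvar):
    rec = nn.functional.mse_loss(recon, x, reduction="sum")       # reconstruction term (as in an AE)
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())  # KL regularizer (the extra VAE term)
    return rec + kl

x = torch.randn(256, 64)  # stand-in data
model = VAE()
recon, mu, logvar = model(x)
print("loss:", vae_loss(x, recon, mu, logvar).item())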